AI and Research Administration

By SRAI News posted 10-11-2023 10:40 AM

At the intersection of technological innovation and artificial intelligence (AI) lies a fascinating frontier that promises to redefine the landscape of research and its administration. We recently interviewed Dr. Milos Manic, Professor of Computer Science and director of Virginia Commonwealth University's (VCU) Cybersecurity Center, and an expert in cybersecurity and critical infrastructure protection. Our discussion explored AI and its application in the research landscape and in research administration.

Milos Manic, PhD
Professor of Computer Science
Virginia Commonwealth University

Dr. Manic is frequently invited to speak about the impact of artificial intelligence. At a VCU panel discussion on artificial intelligence, Manic said, "We are a few inches away from the shore of a really deep ocean in terms of what AI is going to do. Artificial Intelligence is going to change every aspect of the university." Manic also works to implement an AI accelerator for real-time fraud detection and prevention as part of his role as Director of the Commonwealth Center for Advanced Computing.

A major concern with AI tools is text and content generation, which can lead to problems such as plagiarism and misattribution. Researchers' use of AI tools also bears on the accountability and responsibility of research administrators, who will find it very challenging to determine whether a proposal or research product was generated entirely or partially by AI, or how much human input was involved. This matters because many research administrators are responsible for reviewing submissions to ensure compliance with funding agency and university integrity policies, and for attesting to the authenticity and trustworthiness of researchers' proposals.

Detecting AI-generated content, or determining whether AI was used in research at all, is another problem research administrators will face. This is a complex problem because AI algorithms evolve rapidly, and traditional plagiarism detection tools may not be effective at identifying AI-generated content.

Beyond this, the ethical implications of using AI tools seem problematic for research administrators. There is no doubt about the importance of protecting an institution's reputation, ensuring compliance, and maintaining ethical standards.

Despite the problems mentioned above, it is essential to recognize that integrating AI tools into research processes is inevitable and will support researchers in a variety of tasks and capacities. Integration will improve efficiency and accuracy and open new possibilities, but it also raises questions about boundaries, limitations, and potential risks.

Manic emphasizes the phrase "responsible use of AI," drawing attention to how users interact with AI agents such as ChatGPT. It is essential to recognize that there are not yet mechanisms that protect the privacy of the information users provide to chat agents. AI systems also learn from the data and behavior users supply. So, while AI brings remarkable capability, it also demands a great deal of responsibility from everyone who interacts with it.

One way to counter these problems is to commit to transparency by incorporating disclaimers or disclosure statements about the use of AI tools in the research process. Such statements can protect researchers from potential risks and help research administrators manage those risks.


Authored by Fikria El Kaouakibi, Assistant Director of Research
Virginia Commonwealth University in Qatar
