Hot Topics in Research Law | The Legal Labyrinth: AI in Research and its Unforeseen Challenges


This month the Spotlight continues exploring emerging legal issues that arise when artificial intelligence (AI) is used in the research arena. To illustrate the dilemma, an AI-generated article is presented first, followed by an author-written piece offering observations on the challenges and risks the technology brings.

FULL DISCLOSURE: THE FOLLOWING ARTICLE WAS GENERATED BY GEMINI, GOOGLE'S CURRENT AI ENGINE, IN A FEW SECONDS.

Artificial intelligence (AI) is rapidly transforming the research landscape. From accelerating drug discovery to uncovering hidden patterns in vast datasets, AI offers immense potential for scientific progress. However, this exciting new frontier comes with a tangle of legal issues that researchers and institutions must navigate. Let's delve into some of the key legal challenges arising from AI in research.

  1. Ownership and Copyright of AI-Generated Inventions
    One of the most pressing questions concerns ownership. When AI plays a significant role in a research project, who owns the resulting invention or discovery? Is it the researcher who formulated the research question, the programmer who built the AI model, or the AI itself (if legal personhood is ever extended to AI)? Current intellectual property laws struggle with this ambiguity. Patents typically reward human inventors, leaving AI's contribution unclear. Copyright law might be applicable for AI-generated creative outputs, but again, the lack of legal recognition for AI as a creator poses problems.
  2. Data Privacy and Security
    AI thrives on data, and research often involves sensitive information. This raises concerns about data privacy and security. How can researchers ensure informed consent when using personal data for training AI models? Data breaches and unauthorized access can have severe consequences, especially in fields like healthcare research. Researchers need to comply with data protection regulations like GDPR (General Data Protection Regulation) in the EU and develop robust security protocols to safeguard sensitive information.
  3. Algorithmic Bias and Discrimination
    AI algorithms are only as good as the data they are trained on. Unfortunately, data can be biased, reflecting societal prejudices. This bias can seep into AI models, leading to discriminatory outcomes in research findings. For example, an AI used for medical diagnosis might show bias against certain demographics due to historical data skews. Researchers must be vigilant in identifying and mitigating bias in their data and algorithms to ensure fair and objective research outcomes.
  4. Liability for Errors and Misconduct
    Who is accountable if AI-powered research leads to errors or even harm? If an AI-driven drug discovery process results in an unsafe medication, who is liable - the researcher, the programmer, or the AI itself? Legal frameworks for AI liability are still under development. Clear guidelines are needed to ensure accountability and prevent research misconduct involving AI.
  5. Transparency and Explainability of AI Models
    The "black box" nature of some AI models raises concerns about transparency and explainability. Researchers and regulators often struggle to understand how AI arrives at its conclusions. This lack of transparency can hinder scientific scrutiny and make it difficult to identify and address potential biases within the model.

The Road Ahead
Addressing these legal challenges requires a collaborative approach. Researchers, policymakers, and legal experts must work together to develop clear and adaptable legal frameworks for AI in research. Here are some potential solutions:

  • Developing new IP frameworks: New legal categories might be needed to recognize and protect AI-generated inventions while ensuring fair attribution to human collaborators.
  • Enhancing data governance: Strengthening data protection regulations and promoting responsible data collection and storage practices are crucial.
  • Promoting algorithmic fairness: Researchers should actively identify and mitigate biases in their data and algorithms. Explainable AI tools can enhance transparency and trust in AI models.
  • Establishing clear liability guidelines: Legal frameworks should define liability for errors and misconduct in AI-powered research, ensuring accountability while fostering innovation.

By proactively tackling these legal issues, we can harness the full potential of AI in research while ensuring ethical and responsible scientific progress. The future of AI-driven research is bright, but navigating the legal labyrinth requires careful consideration and collaboration between researchers, policymakers, and the legal community.

The article above was generated by Google’s current artificial intelligence (AI) engine, Gemini, in response to the query “Write an article of approximately 600 words on legal issues arising from artificial intelligence in research.” The immediate result shows the capability of the AI tools available today: the tool is adept at summarizing the use of AI in research and at identifying the issues that use raises. Clearly, AI applications can provide at least competent overviews of current topics.

AI now stands at a crossroads, not unlike other technological advances of the past. When the automobile was first introduced, Henry Ford and others had to resolve such questions as what to do when a Model T encountered a horse-drawn vehicle along the route. An entire infrastructure had to be developed to support the automobile, from highways to standardized road signs.

At least two similarly revolutionary technologies have appeared in our lifetime. When first introduced, the internet was compared in the law to a copy machine(1) as well as to news media, a distributor, and/or a common carrier. The iPhone turned the phone into much more than a device for making calls. It spawned many new uses and industries capitalizing on its capabilities, but it also destroyed the landline and appears to be threatening the use of cash in day-to-day payments.

AI is seeing its own set of challenges. For example, Cordilia James, writing in the Wall Street Journal, observes: “Day by day, there is growing pressure at the office. Do you respond to all those clients—or let AI do it? Do you attend that meeting—or do you send a bot?”(2) The Pew Research Center reports that OpenAI’s ChatGPT had been used by about 20% of employed adults as of February 2024, up eight percentage points from a year earlier. According to James (citing an Adobe study), the most popular uses for AI at work are research and brainstorming, writing first-draft emails, and creating visuals and presentations. Estimates of the resulting productivity gains run into the trillions of dollars. The pressure to use AI for productivity gains will be strong in the everyday workplace, and it will translate into pressure to use AI in research as well. The New York Times, for example, cites a research paper published by Profluent reporting that AI technology can generate blueprints for microscopic biological mechanisms, creating new gene editors that can then be used to edit DNA.(3)

Using AI brings challenges and risks, however. Should you disclose that a research article was written or assisted by AI, or is that any different from using other tools or research assistants to conduct or summarize your research? Do you need to verify the results yourself? From a legal perspective, if your AI tool culled hundreds or thousands of articles to produce its output, was that fair use or a copyright violation? If you intend to patent or copyright parts of the research, is there enough human creative contribution to make the work patentable or eligible for copyright? Do you know what sources the AI tool used to generate your results? Was confidential company or private personal data among them?

As with any new technology, and especially with potentially revolutionary technology, legal issues are arising by the minute. We will explore these issues in additional articles and presentations for SRAI (in the Catalyst and elsewhere) in coming months.


Citations:

  1. Religious Technology Ctr. v. Netcom On-Line Comm. Servs., 907 F. Supp. 1361 (N.D. Cal. 1995).
  2. James, C. (2024, April 18). The Smartest Way to Use AI at Work. The Wall Street Journal.
  3. Metz, C. (2024, April 22). Generative A.I. Arrives in the Gene Editing World of CRISPR. The New York Times.


Authored by

David D. King, Retired, Senior Associate University Counsel
University of Louisville
SRAI Distinguished Faculty

J. Michael Slocum, JD, President
Slocum & Boddie, PC
SRAI Distinguished Faculty


