Hot Topics in Research Law | Legal Issues in the Age of AI-Generated Media: Deepfakes and Other AI Chicanery

By SRAI News posted 07-09-2024 05:30 PM


The Spotlight concludes its series on the emerging legal issues that arise when artificial intelligence (AI) platforms are incorporated into the research administration discipline. This month, we examine the potential problems that sophisticated AI deepfake techniques pose for research integrity.

The ability of artificial intelligence (AI) to create near-perfect replicas of images, voices, and even video has ushered in a new era of creative expression. However, this rapidly advancing capacity to mimic reality beyond our ability to spot the fake presents a legal minefield and creates numerous veracity issues for research administration and research integrity. Legal issues surrounding AI-generated media used to create realistic counterfeits of our likenesses, voices, and even actions arise in many areas of the law. These challenges include name, image, and likeness (NIL); free speech for satire and parody versus copyright and moral rights; and civil and criminal liability for fraud, defamation, and other legal wrongdoing.

Copyright Conundrums

Many concerns relate to the infringement of existing copyrights. AI algorithms can be trained on massive datasets containing copyrighted material, including photographs and video recordings of individuals. Among several other questions is whether the resulting AI-generated content constitutes a derivative work; if so, the AI work infringes the rights of the original copyright holder(s).

Many of those images and videos may well have been obtained from social media sites on which individuals have posted pictures and videos of themselves; one can assume that the subject of an AI-generated image could sue to stop the use of a personal image, voice, or other counterfeit aspects of the person’s outward appearance, voice, or actions. The argument can be made that the AI or the AI operator must necessarily have used images that the subject of the AI replication owned and posted to the sites used to train the AI.

One of the most egregious uses of deepfakes is to embarrass or discredit someone, yet even when such an image is otherwise an infringing derivative work, a common defense to infringement is satire or parody. Even vicious satire has been allowed under the First Amendment, and it is quite conceivable that, for example, a deepfake of a famous politician with counterfeit pornographic content would be protected under existing Supreme Court precedent. (1)

Numerous claims and class action complaints are currently active in several courts, with the courts in California the forum of choice. (2) Successful plaintiffs typically must prove that the copyrighted or private data was actually used in training the AI tool. (3)

Right of Publicity

The Right of Publicity protects an individual's right to control the commercial use of their name, image, voice, or other identifying features. “The Right of Publicity is a state-based property right in the United States. Each state determines the parameters of recognition. A statute is not a prerequisite for the Right of Publicity to be enforceable. Many states arrive at the same outcome via common law. Louisiana, Alabama, Arkansas, New York and South Dakota are among the most recent to pass a Right of Publicity legislation.” (4) In the rest of the world, these kinds of rights may be protected under various legal doctrines. For instance, “The right to one’s own image for a famous person is protected as the right to privacy in accordance with Article 8 of the Convention for the Protection of Human Rights and Fundamental Freedoms (1950)” (5) and in the UK “… the right of publicity may be protected by using already existing torts, like malicious falsehood, false endorsement, infringement of IP rights, defamation, libel, etc. (Helling, 2005, p. 32). The latest British court practice shows that the infringement of the right of publicity is now protected by framing the case in the torts of breach of confidence and passing off.” (6)

Liability for Misuse of Images, Voice and Other Identifiers

AI-generated deepfakes can create incredibly realistic portrayals (still and moving, silent and with voice) of individuals, raising concerns about unauthorized use. (7) For example, a celebrity's voice could be used without consent, potentially damaging reputation or causing financial harm, particularly in this era of NIL. (8) Current laws may not be adequately equipped to handle the complexities of AI-generated likenesses for celebrities. Even more troubling are the issues concerning the protection of NIL for non-celebrities. These include not only misuse of images for nefarious purposes such as deepfake pornography, but also basic issues such as biometric fakery of voice, facial images, and even retinal images and fingerprints. (9) The issues arising from these misuses of AI image and voice generation will lead to massive legal activity (legislative, executive, and judicial) concerning liability for fraud, publication of fake pornography and counterfeit speeches, illegal or immoral actions, and other harmful hoaxes and alterations of reality.

First Amendment Considerations

Freedom of speech is a cornerstone of many legal systems. Critics argue that overly restrictive regulations on AI-generated media could stifle satire and artistic expression. Finding the right balance between protecting individuals' rights and safeguarding free speech will be a major challenge for policymakers. As noted above, even salacious or disgusting depictions of persons have been protected. One can imagine a situation in which a very realistic depiction of a person doing something he or she would clearly NEVER do would be more protected than one that looks more obviously fake but depicts something less obviously absurd, since the absurdity of the realistic fake makes it more arguably satire.

Misrepresentation and the Difficulty of Detection

These deepfakes can appear strikingly authentic – so much so that they could result in business loss or personal accusations of misdeeds. In the realm of research integrity, deepfakes will certainly complicate the inquiry and investigation phases of a research integrity claim. The costs associated with conducting such matters will increase significantly as the human eye becomes increasingly unable to detect the fake. Current detection may involve generative adversarial networks, in which two networks face off and, in doing so, get better at generating and detecting fakes. (10) Thus, those charged with research integrity in an institution will need to partner with evolving AI detectors in order to confirm what was produced by the research and what was fake. “Only vigilant human-machine teamwork has a chance of piecing digital truth from well-disguised lies.” (11) “The landscape on deepfake detection is presently similar to the adversarial chase in cybersecurity, whereby advances in cyber hygiene and detection offer varying levels of risk management or reduction, but continuing evolution in cyberattack techniques imposes a temporality to any remedial technique.” (12)
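To make the adversarial detection idea above more concrete, here is a minimal illustrative sketch, in Python with PyTorch, of the detector side of such a face-off: a small classifier is trained to label images as real or fake, and in a full generative adversarial setup a second network would keep supplying ever-harder fakes. This is not drawn from the cited sources; the network shapes, names, and random stand-in data are purely hypothetical.

```python
# Minimal sketch of the adversarial detection idea in citation (10):
# a "detector" network learns to label images real (1) or fake (0),
# while a generator keeps improving its fakes -- each side trains the other.
# Shapes, names, and the random stand-in batches are illustrative only.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),  # single logit: real vs. fake
        )

    def forward(self, x):
        return self.net(x)

detector = DeepfakeDetector()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=2e-4)

# Stand-in batches; in practice these come from genuine footage and generator output.
real = torch.rand(8, 3, 64, 64)
fake = torch.rand(8, 3, 64, 64)

opt.zero_grad()
loss = loss_fn(detector(real), torch.ones(8, 1)) + \
       loss_fn(detector(fake), torch.zeros(8, 1))
loss.backward()
opt.step()  # one training step for the detector side of the face-off
```

As the quoted commentary suggests, such detectors must be continually retrained: each advance in generation erodes the value of yesterday's detector.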

Government Responses

Several legislative efforts are underway to address these concerns. The proposed No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act) (13) in the United States aims to establish a framework for protecting individuals' voices and likenesses from unauthorized use through AI. This act acknowledges First Amendment concerns, ensuring protection without unduly restricting free speech. Similarly, the draft Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act of 2023) (14) strives to balance these same interests. The Biden Administration has issued Executive Order 14110, titled Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (sometimes referred to as Executive Order on Artificial Intelligence), and the executive departments have begun to respond to the dictates of that order. (15)

The Path Forward

The legal issues surrounding AI-generated media are complex and constantly evolving, and a multi-pronged approach is likely necessary. Some of those issues border on the philosophical. Just what rights do people have to their outer appearance, to the sound of their voice, and to the indicia of their unique being? When can others use any of those, and what needs to be done to protect individuals’ rights from dilution and misuse? These issues will soon be exacerbated by the use of AI-generated avatars to act for us. For example, just how far will an AIvatar™ (16) acting as a personal assistant go in interacting with others, making decisions for its principal (its HUMAN), and binding that principal to contracts and agreements?

Technological Solutions 

Government agencies and private companies are implementing mechanisms for identifying and labeling AI-generated content, which could help mitigate the spread of misinformation. (17) Additionally, cybersecurity firms in particular are rapidly deploying their own AI support tools and strategies to combat fraud and other wrongdoing based on AI-assisted malfeasance. (18)
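As a purely illustrative sketch of what such a labeling mechanism involves (and not a description of any particular agency's or vendor's system), one widely discussed approach embeds signed provenance metadata, such as a C2PA "content credentials" manifest, directly in the media file. The Python fragment below is only a crude heuristic that looks for the manifest's marker bytes; genuine verification requires parsing the manifest and validating its cryptographic signatures, and the file name is hypothetical.

```python
# Crude, illustrative check for embedded provenance metadata ("content
# credentials"), which some tools attach to AI-generated media. C2PA
# manifest stores travel in JUMBF boxes labeled "c2pa", so the marker
# bytes hint at (but do not prove) a provenance label. Real verification
# parses the manifest and validates its cryptographic signatures.
from pathlib import Path

def may_carry_content_credentials(path: str) -> bool:
    """Return True if the file appears to contain a C2PA manifest marker."""
    data = Path(path).read_bytes()
    return b"c2pa" in data

if __name__ == "__main__":
    # "example.jpg" is a hypothetical stand-in for a file under review.
    print(may_carry_content_credentials("example.jpg"))
```

A positive hit would tell a reviewer only that a provenance label claims to be present; the label's authenticity, like everything else in this space, still has to be verified.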

Legislative Action 

As referenced above, laws need to be updated to address the specific challenges of AI, considering individual rights, free speech, and the allocation of the risks arising from the misuse of AI to commit civil and criminal wrongdoing.

Industry Standards 

Much of the response to AI issues generally, and to the issues of counterfeit and altered images, voices, and other personal identifiers in particular, will necessarily come from the development and deployment of industry-wide standards for the responsible use of AI-generated media. Such standards could help mitigate potential harm and allocate liability for the harm caused by bad actors, negligent businesses and consumers, and the inevitable mistakes and misuse of the technology. These standards will be driven by both case law and legislation, but even more by the dictates of commerce. The industry needs to develop a whole new understanding of the reality of, and the sometimes-faulty perception of, just what makes a real person.

“How many goodly creatures are there here! How beauteous mankind is! O brave new world, that has such people in ’t!” (19)

“Alice laughed. ‘There’s no use trying,’ she said. ‘One can’t believe impossible things.’ ‘I daresay you haven’t had much practice,’ said the Queen. ‘When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.’” (20)

Citations: 

  1. Hustler Magazine, Inc. v. Falwell, 485 U.S. 46, 108 S. Ct. 876 (1988); Justia, U.S. Supreme Court.
  2. Rushing, L. and Lowry, S. (2024, Spring).  Generative AI Litigation: A One-Year Check-In. The SciTech Lawyer, Vol. 20, No. 3, p. 25.
  3. Rushing, L. and Lowry, S. (2024, Spring). Generative AI Litigation: A One-Year Check-In. The SciTech Lawyer, Vol. 20, No. 3, p. 25.
  4. Faber, J. (2024). Right of Publicity Statutes and Interactive Map. Right of Publicity.
  5. Moskalenko, K. (2015, December 8). The right of publicity in the USA, the EU, and Ukraine. International Comparative Jurisprudence, Vol. 1, Issue 2, pp.113-120.
  6. Moskalenko, K. (2015, December 8). The right of publicity in the USA, the EU, and Ukraine. International Comparative Jurisprudence, Vol. 1, Issue 2, pp. 113-120.
  7. Mickle, T. (2024, May 20). Scarlett Johansson Said No, but OpenAI's Virtual Assistant Sounds Just Like Her. The New York Times.
  8. NCAA Name, Image, Likeness Rule. (2024). NCSA Sports.
  9. Voice Deepfakes Are Coming for Your Bank Balance. (2023, August 30). The New York Times.
  10. de’Medici, B. (2024, Spring). Deepfakes and Malpractice Risk: Lawyers Beware. The SciTech Lawyer, Vol. 20, No. 3, p. 13.
  11. de’Medici, B. (2024, Spring). Deepfakes and Malpractice Risk: Lawyers Beware. The SciTech Lawyer, Vol. 20, No. 3, p. 14.
  12. de’Medici, B. (2024, Spring). Deepfakes and Malpractice Risk: Lawyers Beware. The SciTech Lawyer, Vol. 20, No. 3, p. 14.
  13. Salazar, M. (2024, January 10). No AI FRAUD Act. H.R. 6943.
  14. Coons, C. (2023, October 11). NO FAKES Act of 2023. EHF23968 GFW.
  15. U.S. Department of the Treasury. (2024, March 27). Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.
  16. While the author cannot assert a patent for the idea of an AI-generated agent, the brand name is now taken.
  17. National Security Agency. (2023, September 12). NSA, U.S. Federal Agencies Advise on Deepfake Threats.
  18. Cook, V. (2023, July). The dangers of deepfakes. Bank of America Institute.
  19. Shakespeare, W. (1611). The Tempest.
  20. Carroll, L. (1865). Alice's Adventures in Wonderland. 


Authored by

David D. King, Retired, Senior Associate University Counsel
University of Louisville
SRAI Distinguished Faculty

J. Michael Slocum, JD, President
Slocum & Boddie, PC
SRAI Distinguished Faculty


#Catalyst
#July2024
#Spotlight
#researchlaw
#AI