Recently, The New York Times published two articles about deepfakes and the harm they have caused in school settings. In Baltimore, the athletic director of a high school was arrested for creating a deepfake that impersonated the principal, depicting him making racist and antisemitic comments. (It’s believed the athletic director did so in retaliation for being investigated over allegedly suspicious uses of school funds.) In Seattle, a group of high school girls was shocked to discover that a male classmate had been using images taken at a school dance to create deepfake nude images of them and distribute them to his friends.
As disturbing as these school incidents are, they raise the broader issue of the legality of deepfakes and other content created with artificial intelligence (AI). While revenge porn has already been brought before the courts, unscrupulous arguments could be made that no crime was committed when the images and recordings in question aren’t real. Sure, we have concepts of defamation, slander, and libel that apply, but AI is presenting new issues that our legal systems may not be fully equipped to handle. Morally and ethically, what the perpetrators did in the two examples above was wrong. From a purely legal standpoint, though, could someone be arrested and charged over images or audio that never really existed?
Speaking with US News & World Report about deepfakes targeting celebrities, Judith Germano, an adjunct professor at the NYU School of Law, said, “Part of the problem is that the technology has become less expensive, more accessible, and the products of the technology more believable, while the laws and the protections have not evolved as quickly.” According to a related USA Today article, only 10 states are known to have laws addressing deepfake videos and images, and these laws are often limited to pornographic content; the oldest was enacted in Virginia in 2019. This leaves gaps in our legal system for protecting those whose image or voice has been manipulated for nefarious ends.
The big issue is that there isn’t a clear direction to move in…
US privacy laws vary from state to state, but some (like California’s Right of Publicity Law) protect individuals from unauthorized use of their name, voice, signature, photograph, or likeness. However, that law appears designed for problems like falsely claiming a celebrity’s endorsement of a product or event without the person’s consent. Other US privacy laws may not fully cover non-commercial uses of deepfakes because the person’s likeness isn’t being exploited for compensation. And while many of these laws treat use without consent as what makes it “illegal,” enforcing consent requirements for deepfakes, especially ones created anonymously, can be challenging.
This means there are plenty of gaps in protection for situations where deepfakes aren’t used in a pornographic context, for commercial purposes, or for already illegal ends like extortion or blackmail. According to Onfido, a digital identity verification provider, “Fraudsters are increasingly using deepfakes as a way to attempt to dupe identity verification systems,” and the company has seen “a 3,000% increase in deepfakes as part of fraudulent account onboarding attempts.” Politically driven crimes are another area in need of protection. Texas, in addition to its anti-revenge porn law, has a deepfake law specifically designed to protect voters from election interference. This may have been a response to robocalls such as the deepfake of President Biden supposedly calling New Hampshire residents to discourage them from voting in their primary election. Unfortunately, no other states have added such protections against AI abuse.
Keypoint Intelligence Opinion
While deepfakes were originally created for entertainment, the technology has moved far beyond that initial purpose. That said, there are international laws we can look to as models when drafting protections for US citizens. The UK’s Online Safety Act of 2023 made it illegal to share explicit images or videos that have been digitally manipulated when doing so intentionally or recklessly causes distress to an individual. The EU AI Act (arguably the most comprehensive AI law) doesn’t outlaw the creation of deepfakes outright, but attempts to regulate them through transparency obligations placed on their creators.
Until we have something drafted here, there are small ways we can protect ourselves. Social media and photo- and video-sharing apps have complicated matters, since a person’s image or voice can be obtained through various means, providing a glut of data for bad actors to manipulate as they see fit. We need to be more careful about how much of ourselves we put out into the ether and make better use of privacy controls (especially for users under 18 years old). When it comes to robocalls, the Federal Communications Commission (FCC) recommends several steps, including not answering calls from unknown numbers and, if you do answer, hanging up rather than responding to questions or prompts.
On a more personal note, while researching this topic, I found that most articles on deepfake laws used the deepfake pornography targeting Taylor Swift as the lens for their coverage. While she certainly deserves protection from having her images manipulated, we should be equally concerned about our own safety and how our likeness and data are being used. As a billionaire recording artist with countless fans, she has the resources to bounce back from any attack on her character.
Can you say the same for yourself?
Browse through our Industry Reports Page (latest reports only). Log in to the InfoCenter to view research, reports, and studies on AI, government legislation, and more through our Artificial Intelligence and Workplace CompleteView Advisory Services. If you’re not a subscriber, contact us for more info by clicking here.
Keep Reading
Leading the Way in AI Governance
Deepfakes Are Making It Harder to Discover Deception in Media