How Bad Should the Deepfake Have to Get Before the Creator Is Punished?

This image was generated by ChatGPT with the following prompt: “Generate a representative image for a deepfake.”

Photographs and videos have long been considered reliable sources of information because they require their subjects’ physical presence. However, that may no longer be the case.

The advancement of artificial intelligence has brought the rise of deepfakes. A deepfake is “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” This means that with deepfakes, people can create realistic photographs and videos without the presence of real subjects.

The problem is that people generally cannot discern whether pictures or videos are deepfakes, despite their belief that they can. This leaves society more susceptible to defamation. The American Bar Association took note of the potential problems that deepfakes pose to elections if such videos are left unchecked. For example, people can easily create videos that are altered to defame political candidates. Once created, social media would enable the video to reach a large audience, because social media is not significantly regulated to handle this development in technology. Imagine someone posted a deepfake video of a presidential candidate engaging in a bar fight. People would inevitably be drawn to watch it. Since social media would enable widespread viewership, the candidate’s political career would likely be in peril.

Recognizing this problem, California enacted AB 2839, which restricts “any false or materially deceptive content that is ‘reasonably likely’ to harm the ‘reputation or electoral prospects of a candidate’” (more than just defamation), according to the U.S. District Court for the Eastern District of California in Kohls v. Bonta. But on October 2, 2024, the Kohls court ruled that the law “does not pass constitutional scrutiny because the law does not use the least restrictive means available for advancing the State’s interest.” Simply put, the court ruled AB 2839 unconstitutional because even “deliberate lies (said with ‘actual malice’) about the government are constitutionally protected” from prosecution, though not from civil liability.

The government should act to determine the “least restrictive means” to stop the harm caused by deepfake defamation.

But the Kohls court understands that “California has a valid interest in protecting the integrity and reliability of the electoral process.” The court implies that AB 2839 could have been constitutional if the law had “the narrow tailoring and least restrictive alternative that a content based law requires under strict scrutiny.” Exactly what constitutes the “narrow tailoring and least restrictive alternative” remains unknown. Whether it would be proper to narrow the law’s applicability to just defamation is also unknown. But one thing is clear: overcoming strict scrutiny is extremely difficult.

The Kohls court highlights the importance of freedom of speech while recognizing the need for the law to protect “the integrity and reliability of the electoral process.” But many questions remain unanswered. Is civil liability enough to safeguard the public and political candidates from defamation? Shouldn’t there be more protection? How defamatory must a deepfake be for a law restricting it to meet the “narrow tailoring” requirement? Clearly, posting a mere lie on social media is different from posting a realistic fake video. The aforementioned hypothetical presidential candidate would be harmed far more by a deepfake video of them engaging in a bar fight than by a mere claim, or a parody video, that they did so. Accordingly, voters will be much more susceptible to these deepfakes, and the politician will be left with severe reputational damage.

This is another challenge that advances in technology have presented. Exactly how society can resolve it is unknown. But putting resources into resolving this problem is necessary to protect not just political candidates, but the entire public, from defamation.

Sunghyun Shin

Sunghyun Shin is a 2L at UNC School of Law and a staff member of the Journal of Law and Technology.