Sora 2 and the Deepfake Dilemma: Free Speech in the Age of Generative AI

5:11 PM, Nov. 6, 2025

On October 16, 2025, OpenAI and the Estate of Martin Luther King, Jr., Inc. released a joint statement announcing that, in response to disrespectful videos generated by artificial intelligence (“AI”), OpenAI was pausing the use of Dr. Martin Luther King, Jr.’s likeness on Sora 2.

Sora 2 is the latest tool released by OpenAI. Using text-to-video AI, the platform allows users to create “hyper realistic videos” and easily upload them to social media. Sora 2 has gone viral in the United States, reaching one million downloads in “less than 5 days.”

Despite this quick success, the platform has drawn criticism for controversial deepfake videos of famous historical figures like Dr. King, Robin Williams, and Stephen Hawking that depict them making disrespectful and sometimes racist statements.

In response to these controversial videos, OpenAI decided to pause the use of Dr. King’s likeness on Sora 2 while the company “strengthen[ed] [the] guardrails for historical figures.” However, the likenesses of other famous historical figures remain available. In its statement, OpenAI claimed that, while it believes the families of public figures should have control over their loved ones’ likenesses, there are also “strong free speech interests in depicting historical figures.”

The First Amendment governs state action, so OpenAI is likely free to regulate how users engage with its platform without violating the free speech clause. However, this situation raises a larger question: How does traditional free speech analysis apply to modern deepfakes? It is not clear whether deepfake videos should be viewed as classic non-commercial, protected speech or whether disrespectful deepfakes fall into one of the recognized categories of speech that courts allow to be restricted.

The free speech clause of the First Amendment prohibits the government from “abridging the freedom of speech.” In cases implicating this fundamental right, the Supreme Court considers the category of the speech at issue to determine the level of scrutiny to apply. In general, “content-based free speech restrictions . . . receive strict scrutiny regardless of the topic at issue.” However, content-based restrictions may be permissible if the content falls within a recognized category of unprotected speech, such as obscenity, defamation, or fraud.
While not novel, the conversation surrounding free speech rights and deepfake videos has become increasingly relevant with the rapid rise of sophisticated deepfake technology. In 2012, the Supreme Court held in United States v. Alvarez that falsity alone is not enough to strip false speech of its First Amendment protection. Alvarez dealt with false spoken statements, but scholars have considered how its framework applies in a digital world where generative AI content is becoming more common.

In his 2020 article, Professor Marc Blitz asserted, “Deepfake videos are First Amendment expression. . . . As such[,] they are presumptively protected by the First Amendment shield for false claims one finds in Alvarez.” However, he also argues that deepfakes are not free from restriction and that the Alvarez framework likely permits regulation of some of them.

As of September 2025, 46 states have enacted laws regulating the creation of deepfake materials that depict certain sexually explicit content. Because obscenity is one of the recognized categories of unprotected speech, regulation in this area has been largely successful. Other content-based restrictions on borderline issues have proven much more difficult to sustain, however, and a federal judge recently struck down a California law that sought to regulate deepfake political content during election cycles.

Sora 2 and similar technologies have significantly lowered the skill barrier needed to create deepfakes and will continue to push the boundaries of deepfake regulation. To many, including AI ethicist Olivia Gambelin, disrespectful content was always a foreseeable consequence of that lowered barrier. In her view, instead of taking a “trial and error by firehose approach,” OpenAI should have implemented restrictions from the start. While this is a fair critique, it is not clear what kinds of restrictions OpenAI should have imposed during the rollout. These hyperrealistic videos blur the line between real and fake more convincingly than any previous technology, and this new reality warrants a revised regulatory perspective.

In the long run, the rapid adoption of Sora 2 by millions of users may bring clarity to this area of law. Will the surge in deepfake content give courts new contexts in which to analyze free speech? Judicial interpretation here could pave the way for Congress and the states to regulate deepfakes and provide companies like OpenAI with better guidance for future releases.

Tej Munshi

Tej Munshi is a dual-degree J.D./M.P.P. candidate at the University of North Carolina School of Law and the Duke Sanford School of Public Policy. Tej is originally from Alpharetta, Georgia, and graduated from Davidson College with a B.S. in Psychology.