As the technology landscape moves further into a world of artificial intelligence (AI) and large language models (LLMs), there are questions about how well the law will be able to keep up.
Sponsored by Sens. Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-N.C.), the NO FAKES Act of 2025 (short for “Nurture Originals, Foster Art, and Keep Entertainment Safe”) attempts to be forward-looking about the potential threat of so-called AI “deepfakes” by granting individuals a federal right in their likenesses that they can use to prevent unauthorized “digital replicas.”
Under the terms of the bill, an “online service” could be held liable for “[t]he public display, distribution, transmission, or communication of, or the act of otherwise making available to the public, a digital replica without authorization by the applicable right holder.” The bill creates a safe harbor for online services that distribute such replicas if they establish a notice-and-removal system and terminate account holders who promulgate such replicas.
A U.S. House version has also been introduced by Reps. Maria Salazar (R-Fla.), Madeleine Dean (D-Pa.), Nathaniel Moran (R-Texas), Becca Balint (D-Vt.), and Joe Morelle (D-N.Y.).
In this post, I will consider the NO FAKES Act using the same criteria I previously employed to examine the TAKE IT DOWN Act: whether the proposed legislation solves a genuine problem, what its potential for collateral censorship might be, and whether it might conflict with the First Amendment.
Problems to Solve
The first question is what problem the NO FAKES Act attempts to solve. As it stands, 25 states recognize a right of publicity that protects the commercial interests of persons in their name, image, and likeness. While originally considered one of the four privacy torts under the common law, it has evolved into more of a property-based right. This is reflected in the federal right established by the NO FAKES Act, which explicitly calls it a “property right.”
One argument for why it is necessary to create a new federal property right over name, image, and likeness is that not every state has a right-of-publicity tort, nor are all state laws coextensive in their protections (although they are largely similar).
A better argument is that the rise of AI, and especially of generative-AI services built on LLMs, is likely to enable the creation of digital replicas at extremely low cost. While most states’ right-of-publicity laws would likely reach this conduct, a single forward-looking federal standard may be a better way forward.
The best part of this approach is that it focuses on LLM outputs, rather than their inputs. Because the bill gives persons the right to control their own likenesses, liability attaches only once a model actually generates a “digital replica” of a person without consent. As we’ve written previously, an approach that targets infringing or harmful AI outputs makes much more economic sense than one that focuses on AI training inputs, in part because it is much easier to establish the level of harm arising from outputs.
Potential for Collateral Censorship
The NO FAKES Act’s central thrust is the creation of a federal right of action against unauthorized digital replicas, which can be used to target products or services designed to create such digital replicas. But it also creates liability for online services used to make those digital replicas available to the public—at least, in those cases where the platform receives proper notice of the replicas’ presence.
It’s essential that any such law balance accountability for distributing illegal content against the risk of collateral censorship of legal content. On the one hand, the notice-and-takedown system does offer a safe harbor to online services that remove (and keep down) offending content in response to valid notices. There are also penalties for false or deceptive notices. Moreover, a notice sender’s failure to perform a good-faith review to determine whether a piece of content qualifies as an unauthorized digital replica counts as a material misrepresentation.
Despite these protections, the notice-and-takedown system does potentially encourage collateral censorship. The penalties for failing to take down unauthorized digital replicas are high: $25,000 per work where the online service has made a good-faith effort to comply and, absent such an effort, the greater of $750,000 per work or actual damages. A service that failed to act in good faith on just 10 noticed works could thus face $7.5 million in statutory damages.
Moreover, while the First Amendment protections built into the right-of-publicity tort are listed as “additional exclusions” from the bill’s scope, the NO FAKES Act places the burden on online services to determine whether those exclusions apply. For instance, Facebook would need to determine whether a digital replica for which it received notice is “consistent with the public interest in bona fide commentary, criticism, scholarship, satire, or parody.” But it would do so with potential liability attached if it guesses wrong and leaves the content up. Platforms will thus likely take down nearly all noticed content: there is no penalty for erroneously removing content, while there is a steep one for erroneously leaving it up.
First Amendment Issues
While the NO FAKES Act attempts to exclude protected expression from the bill’s reach, its exclusions arguably aren’t sufficiently calibrated to the First Amendment.
The First Amendment is one of the primary defenses to the right-of-publicity tort. Normally, the right of publicity is applied in the commercial context, where First Amendment protection for commercial speech is more limited. For speech that provides news or public-interest information, or artistic or creative expression, the First Amendment provides greater protection. Caselaw has often dealt with the situation where an identity is used in a way that combines elements of newsworthy information or artistic expression with a commercial purpose.
First Amendment defenses have been successful in cases involving:
- Media reporting on newsworthy events or public-interest matters;
- Artistic works where significant transformative creative components have been added to the use of the identity. This includes the retelling of real-life stories in books, movies, and plays;
- Parody used as part of noncommercial speech; and
- Statistical web content for entertainment purposes, such as a fantasy baseball website’s extensive use of players’ names, statistics, and other identifying information.
Under the NO FAKES Act, the following uses are excluded from the definition of illegal activity:
(5) ADDITIONAL EXCLUSIONS.—
(A) IN GENERAL.—An activity shall not be considered to be an activity described in paragraph (2) if—
(i) the applicable digital replica is produced or used in a bona fide news, public affairs, or sports broadcast or account, provided that the digital replica is the subject of, or is materially relevant to, the subject of that broadcast or account;
(ii) the applicable digital replica is a representation of the applicable individual as the individual in a documentary or in a historical or biographical manner, including some degree of fictionalization, unless—
(I) the production or use of that digital replica creates the false impression that the work is an authentic sound recording, image, transmission, or audiovisual work in which the individual participated; or
(II) the digital replica is embodied in a musical sound recording that is synchronized to accompany a motion picture or other audiovisual work, except to the extent that the use of that digital replica is protected by the First Amendment to the Constitution of the United States;
(iii) the applicable digital replica is produced or used consistent with the public interest in bona fide commentary, criticism, scholarship, satire, or parody;
(iv) the use of the applicable digital replica is fleeting or negligible; or
(v) the applicable digital replica is used in an advertisement or commercial announcement for a purpose described in any of clauses (i) through (iv) and the applicable digital replica is relevant to the subject of the work so advertised or announced.
These exclusions do largely track the First Amendment’s protections. But the problem is the one identified above: the onus is placed on online services to adjudicate whether noticed content constitutes an illegal digital replica or protected expression, with a thumb on the scale favoring takedown. In difficult cases, there is no reason to expect an online service to keep content up. Deputizing private platforms to adjudicate speech in this way is vulnerable to First Amendment challenge.
One option that could save the NO FAKES Act is to limit the notice-and-takedown system to court orders adjudicating content to be illegal. While this would add time and cost to removing illegal content for those who have been harmed, it would be much more speech-protective. It would allow First Amendment defenses to play out before a neutral arbiter, rather than before an online service facing liability for failing to remove content.
Conclusion
The NO FAKES Act attempts to draw lessons from states’ right-of-publicity laws, applying them at the federal level in order to protect persons from perceived AI threats. But despite its attempts at balance, there remains a risk that the bill’s notice-and-takedown system, as currently devised, could generate enough collateral censorship to run afoul of the First Amendment.
Before pressing forward, Congress should reconsider how existing law may already protect people from the unauthorized commercial exploitation of their name, image, and likeness. A targeted notice-and-takedown system based on court orders for already-adjudicated illegal content, for example, might be a better approach.