Large language models (LLMs) have become a major focus of legislative attention at the state and local level in recent years. LLMs’ ability to create images, videos, music, writing, and other artistic works of varying social value has sparked a rush across the states to introduce legislation and regulations to limit the harms that might ensue.
More recently, that rush has also extended to the federal level. In May, President Donald Trump signed S. 146, the TAKE IT DOWN Act (formally, the “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act”). Sponsored by Senate Commerce Committee Chair Ted Cruz (R-Texas), the law focuses on harms arising from so-called “revenge porn,” in which either genuine photos and videos or “digital forgeries” intended to look like real persons are shared online without the subject’s consent.
Sen. Chris Coons (D-Del.) has also introduced S. 1367, the NO FAKES Act (the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act”), which seeks to address potential harms to content creators, musicians, actors, athletes, and others in a world where LLMs and other AI generators of content are trained on their works.
Each measure creates a new cause of action: a new federal crime in the case of the TAKE IT DOWN Act, and a new federal tort modeled on the right of publicity in the case of the NO FAKES Act. Each also imposes new duties on online intermediaries, requiring a notice-and-takedown system modeled after the Digital Millennium Copyright Act (DMCA), albeit with some important differences.
In this post and a follow-up, I will consider the pros and cons of each bill from a law & economics perspective. When considering intermediary liability for speech platforms, it is important to balance the need for accountability against the threat of collateral censorship. At a fundamental level, both the TAKE IT DOWN Act and the NO FAKES Act rely on online intermediaries’ ability to monitor and control their users by finding illegal content for takedown. If those intermediaries are the lowest-cost avoiders of the harm, then placing that burden on them is economically rational.
But even presuming that this is the case, the threat of collateral censorship remains. These bills could, therefore, impose a large social cost. Important First Amendment values are at stake, both in the causes of action the legislation would create and in the proposed notice-and-takedown system.
I intend to flesh out some of these tradeoffs and evaluate the proposals from both a policy and a constitutional perspective. In particular, I will examine what problem each bill is attempting to solve, the extent to which the proposed solution is likely to engender collateral censorship, and the First Amendment implications of the new causes of action and the notice-and-takedown provisions. Each post will also compare the notice-and-takedown provisions to the DMCA’s, and will consider Section 230 immunity and how each bill changes the liability protections for online intermediaries.
In this post, I consider the TAKE IT DOWN Act, which was signed into law May 19. The act creates a federal cause of action for “intentional disclosure of nonconsensual intimate visual depictions,” which are defined to include both “authentic intimate visual depictions” and “digital forgeries.” It also creates a notice-and-removal process for such depictions, which will apply to covered platforms.
Problems to Solve
One could question whether the TAKE IT DOWN Act’s new federal cause of action is needed. Laws criminalizing “revenge porn” are already on the books in 48 states and the District of Columbia. There is even a federal civil cause of action under the Violence Against Women Act (VAWA) for the disclosure of intimate images. A critic might say that there is no legal gap that this law would fill.
On the other hand, a federal criminal law may be important for reasons beyond holding perpetrators of “revenge porn” accountable. Section 230 immunity has been read to extend to civil claims based on violations of state criminal law. This could mean that online intermediaries are shielded from liability for failing to act on “revenge porn” of which they are made aware on their sites, so long as they don’t contribute to its creation. As one court put it:
…the plain language of the statute contemplates application of immunity from civil suit under section 230 for interactive computer service providers even when the posted content is illegal, obscene, or otherwise may form the basis of a criminal prosecution.
Given the new federal cause of action, both federal and state law enforcement would be on solid ground to enforce laws against intermediaries when applicable. Section 230(e)(1) already makes clear that Section 230 immunity has no effect on the enforcement of “any… Federal criminal statute.” And Section 230(e)(3) ensures that states can enforce state criminal laws so long as they don’t conflict with Section 230.
The TAKE IT DOWN Act also makes clear that AI-generated “revenge porn” is actionable. Not all existing state laws clearly apply to such situations, which are now expected to become more prevalent as LLMs are able to more easily generate offending videos and images.
The TAKE IT DOWN Act does not, however, create a private right of action against intermediaries for failing to remove illegal content. Instead, it places the responsibility to bring actions with the Federal Trade Commission (FTC), using its consumer-protection authority to enforce the act.
Potential for Collateral Censorship
The TAKE IT DOWN Act will require online services that host user-generated content to create (within one year of the law’s enactment) a notice-and-removal process to take down real or fake intimate visual depictions. The law applies to any online platform that hosts user-generated content and that may “publish, curate, host, or make available” nonconsensual intimate imagery as part of its business.
The notice-and-removal process must allow for victims and their authorized representatives to request removal of nonconsensual intimate images. The covered platform then has 48 hours to remove the requested content and “make reasonable efforts to identify and remove any known identical copies of such depiction.” As noted above, failure to comply would subject the platform to possible enforcement actions by the FTC.
Uncertainty about the scope of the law’s application could lead to some collateral censorship, in the form of reducing users’ ability to contribute content. For instance, the law explicitly excludes services where user-submitted content is “incidental” to the service’s primary function. This carveout appears designed to protect sites with comment sections or that permit consumer reviews or the like. What qualifies as “incidental” is sufficiently ambiguous that it could marginally reduce the availability of user-generated content from online platforms concerned about liability.
The notice-and-removal process of the TAKE IT DOWN Act looks vaguely like the notice-and-takedown provision of the DMCA, but notably lacks DMCA § 512(c)(3)(A)(vi)’s requirement that a notice include “[a] statement that the information in the notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.”
This could have important implications for collateral censorship, as invalid notices could be filed not just by putative victims, but by third parties purporting to act on their behalf. In combination with platforms’ protection for good-faith removals, the law appears to place a thumb on the scale in favor of removing any content for which a removal notice is received. Perhaps this is an acceptable cost, given the harms associated with revenge porn and the low social value of pornographic images. But it is a potential social cost nonetheless, insofar as it may lead to the removal of lawful images or videos.
First Amendment Issues
The TAKE IT DOWN Act’s drafters appear to have drawn lessons from the states in how to craft the criminal cause of action. State courts reviewing “revenge porn” laws have mostly upheld them, but central to those rulings were findings that the laws were narrowly tailored through their definitions, a specific-intent mens rea, and harm requirements in the actus reus.
This is especially important in light of the U.S. Supreme Court’s 2023 opinion in Counterman v. Colorado, which held that, even for categories of unprotected speech such as true threats, a mens rea requirement is needed to avoid “chilling” protected speech.
For instance, in State v. VanBuren, the Vermont Supreme Court found both that the First Amendment applied to the “revenge porn” statute and that strict scrutiny applied, since the statute was content-based. The statute nonetheless survived strict scrutiny. Important to the court’s finding of narrow tailoring was that:
[The statute] defines unlawful nonconsensual pornography narrowly, including limiting it to a confined class of content, a rigorous intent element that encompasses the nonconsent requirement, an objective requirement that the disclosure would cause a reasonable person harm, an express exclusion of images warranting greater constitutional protection, and a limitation to only those images that support the State’s compelling interest because their disclosure would violate a reasonable expectation of privacy.
For its part, the TAKE IT DOWN Act contains both a limited mens rea and actus reus, as well as an exclusion for matters of public concern:
(i) the intimate visual depiction was obtained or created under circumstances in which the person knew or reasonably should have known the identifiable individual had a reasonable expectation of privacy;
(ii) what is depicted was not voluntarily exposed by the identifiable individual in a public or commercial setting;
(iii) what is depicted is not a matter of public concern; and
(iv) publication of the intimate visual depiction–
(I) is intended to cause harm; or
(II) causes harm, including psychological, financial, or reputational harm, to the identifiable individual.
While a First Amendment challenge to the TAKE IT DOWN Act is likely inevitable at some point, it seems likely that the law’s careful drafting will allow it to survive scrutiny.
It’s also possible that a First Amendment challenge will be brought on grounds that the notice-and-removal process is not narrowly tailored. The outcome of that sort of challenge is less clear. As discussed above, the thumb appears to be on the scale in favor of removal. The fact that only the FTC can enforce the law does, however, make overbroad removal less likely than if there were a private cause of action. Nonetheless, if the process in practice leads to the removal of a significant quantity of protected speech, it could invite a strong First Amendment challenge.
Conclusion
The TAKE IT DOWN Act is now law of the land. There are certainly positive aspects to the law, including the degree to which it will ensure that purveyors of revenge porn—either real or AI-generated—are brought to justice. There is also value in making sure intermediaries that host user-generated content have greater incentive to remove offending material. But there is also potential for mischief in the notice-and-takedown system, which (depending on the details) could ultimately lead to a successful First Amendment challenge.