There has been an evolving battle in recent years over how to protect minors on online platforms in a way that would be consistent with the principles of free speech. Given the strictures of both the First Amendment and Section 230 of the Communications Decency Act, the government has very limited ability to restrict either the presentation of or access to online speech. For instance, age-verification laws for protected speech have routinely been permanently enjoined as unconstitutional, or temporarily enjoined as likely to be so.
Nonetheless, states and other plaintiffs have gotten creative, using tort law and consumer-protection law to argue that they aren’t actually trying to regulate speech, but only conduct. The Commonwealth of Massachusetts continued this trend with a complaint against Meta Platforms targeting various harms alleged to arise from minors’ use of the Instagram platform.
In its complaint, the commonwealth alleges that Meta engaged in unfair trade practices by employing addictive product-design features on Instagram (count one of the complaint) and failing to sufficiently verify ages in order to exclude users who were under 13 (count three). An October 2024 trial court decision denying Meta’s motion to dismiss declined to engage in any substantial First Amendment analysis, finding the commonwealth’s claims were “principally based on conduct and product design, not expressive content.”
In an amicus brief that Geoffrey Manne, Kristian Stout, and I filed today with the Massachusetts Supreme Judicial Court, we argue that the trial court’s opinion should be vacated and that count one and count three should be dismissed under the First Amendment. In this post, I will offer some detail on the law & economics of protecting minors on speech platforms, and then preview the legal arguments we made in our brief.
The Law & Economics of Protecting Minors on Speech Platforms
The First Amendment protects “the marketplace of ideas.” On the supply side, this includes online speech platforms like those offered by Meta, while minors (as both potential speakers and listeners) may be present on the demand side. To attract and maintain users, online speech platforms must curate and present speech in an engaging way. Moreover, as we argue in the brief, to be successful in this marketplace, online speech platforms like Meta:
…must create a welcoming place for users, including minors, or they risk losing them to competitors and other means of entertainment. For instance, to the extent that harassment and bullying make users less likely to stay online, Meta has a strong reason to moderate such abuses. And since most advertisers don’t want to be associated with a platform that hosts CSAM, bullying, harassment, or fat-shaming, Meta also has a strong incentive to moderate such content. This is particularly true given the very limited monetary benefits that can be derived from targeting advertising to children or teens, who generally lack either the bank accounts or payment cards for online transactions. Thus, it is unsurprising that Meta offers features designed to protect the mental health of minors who use its platforms.
Nonetheless, there may be various harms associated with using online speech platforms, including some that affect minors. In a 2023 issue brief, I applied the Coase Theorem’s logic to online age-verification and parental-consent laws. My economic argument was that:
- Transaction costs associated with obtaining age verification and verifiable consent from parents and/or teens are sufficiently large that they will often prevent a bargain from being struck;
- The least-cost avoiders are parents and teens working together, using practical and technological means to make marginal decisions about minors’ social-media use; and
- Placing the transaction costs on social-media companies to obtain age verification and verifiable consent from parents and/or teens would actually reduce those parties’ ability to make marginal decisions about minors’ social-media use, as the platforms will respond by investing more in excluding minors from access than in creating safe and vibrant spaces for interaction.
More generally, whenever the marginal costs of hosting minors on online speech platforms exceed the marginal benefits those platforms expect to receive in terms of revenue, we should expect that they will exclude minors from access. The First Amendment is designed to avoid such collateral censorship when it results from government regulation.
First Amendment case law protects the right of online speech platforms to curate and present speech as they see fit while participating in the marketplace of ideas. Moreover, least-restrictive-means analysis effectively places the burden of avoiding harms associated with those speech platforms primarily on parents and minors themselves, as they are the least-cost avoiders.
Our Amicus in Massachusetts v. Meta
The two main arguments we make in our amicus brief are that the trial court failed to apply appropriate First Amendment scrutiny to the product-design claim of count one and to the age-verification requirement of count three. The court not only found each claim was “principally based on conduct and product design, not expressive content” but argued in a footnote that:
Even if these features carry some expressive element, the claim may very well be permitted under the intermediate scrutiny test… because the Complaint plausibly alleges that such elements are commercial in nature.
To the contrary, we argue that not only is expressive content implicated by both counts, but the commonwealth’s claims should be subject to strict scrutiny under applicable precedents. On the product-design claims, we note that, under Moody v. NetChoice:
When it comes to things like Instagram Feed, “[d]eciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own.” Id. at 731 (emphasis added). The expressive product is the result of Meta’s “choices about whether—and, if so, how—to convey posts.” Id. at 738 (emphasis added). The First Amendment protects not only what content Meta can choose to show on its Instagram Feed, but how to present or convey it.
Therefore, while Massachusetts alleges that notifications, alerts, infinite scroll, autoplay, and ephemeral content are mere “conduct,” this simply can’t be the case. Notifications and alerts are speech, as they are intended to let users know when new content is available that may interest them. Infinite scroll, autoplay, and ephemeral content are all choices about how to present speech, akin to how newspapers or magazines choose how to display speech in their publications.
On the age-verification requirement, we cite a long string of cases where courts have found statutes requiring age verification before accessing protected speech to violate the First Amendment. We argue that:
…it makes no sense that [Meta] would have such a pre-existing duty to verify age subjecting them to Massachusetts consumer protection law… when the First Amendment precludes states from imposing such requirements.
Under strict scrutiny, the government would not be able to prove the remedies sought under either count are the least-restrictive means to achieve a compelling government interest. Indeed, we argue the commonwealth fails to establish a compelling government interest, because there is no causal link between the way Meta has chosen to present content and the mental-health harms to minors alleged in the complaint. As the National Academies of Sciences, Engineering, and Medicine concluded in its report, the data does “not support the conclusion that social media causes changes in adolescent health at the population level.”
Furthermore, even assuming a compelling government interest exists, neither count is narrowly tailored, as the complaint fails to show that widely available technological and practical means are inadequate as effective alternatives. Indeed, the commonwealth’s filings do not even attempt to do so.
Conclusion
The First Amendment protects the marketplace of ideas, which includes both Meta and the minors who participate in social media. Protecting consumers, including minors, is obviously important, but it cannot be accomplished by restricting protected speech.