Australia’s online-safety amendment took effect Dec. 10, banning social-media use nationwide by individuals under age 16. Under terms of the law, social-media platforms that fail to take “reasonable steps” to prevent minors from registering and maintaining accounts may face fines up to A$49.5 million.
It now appears that the European Commission, France, Denmark, Greece, Romania, Indonesia, Malaysia, and New Zealand are all considering following Australia’s lead. Ex-Chicago Mayor and White House Chief of Staff Rahm Emanuel has called for the United States to follow suit, declaring that “[w]hen it comes to our adolescents, it’s either going to be adults or the algorithms that raise our kids,” and that “[p]arents cannot fight Big Tech alone.”
This language is very similar to what Australian Prime Minister Anthony Albanese has said in trying to sell the regulation to his own constituents:
We’re doing this for those parents – and for every parent. Because this law is about making it easier for you to have a conversation with your child about the risks and harms of engaging online. It’s also about helping parents push back against peer pressure. You don’t have to worry that by stopping your child using social media, you’re somehow making them the odd one out. Now, instead of trying to set a “family rule”, you can point to a national ban.
Emanuel and Albanese’s declarations bring to mind a great South Park line, delivered in an episode in which the town’s parents demand that an admittedly vulgar TV show be censored: “I think that parents only get so offended by television because they rely on it as a babysitter and the sole educator of their kids.”
As a parent of two little girls—one of whom will soon be a smartphone user—I share concerns about the potential adverse effects of social media. There are many things I would prefer my daughters to avoid or use only under supervision. But I don’t need a national ban to do that.
In this post, I argue that “outsourcing” our sons and daughters’ online safety to the state would be not only ineffective, but harmful to society in an often-ignored way: it would weaken social norms and erode the ability of communities and families to regulate the behavior of adolescents and children. Paradoxically, the law would hinder what should be an essential part of the solution: a community-driven approach that equips parents and teens with the tools to navigate the digital world safely.
A Double-Edged Feed
To be sure, there are risks associated with the use of social media by children and adolescents. Heavy social media use has been linked to a range of mental-health issues—most notably, higher rates of depression and anxiety, and even heightened feelings of loneliness or social isolation. It can also make teens feel worse about their appearance and increase their exposure to cyberbullying, which is consistently associated with elevated depression risk in youth. Research further shows that excessive social-media use is often associated with disruptions in sleep and may contribute to attention difficulties in adolescents.
But as my colleague Ben Sperry has explained, that is only half the story; there are also benefits. For children and adolescents, social media can offer essential opportunities for positive social connection, support, and personal growth, helping them to maintain friendships and find communities that provide companionship and reduce their loneliness—especially for those who are marginalized.
Online engagement can also help in fostering identity development and creative expression. According to the U.S. Surgeon General’s 2023 “Advisory on Social Media and Youth Mental Health,” most adolescents report that social media “helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%).”
Given these benefits, a blanket prohibition is unwarranted. Legislation like the new Australian ban has several costs and unintended consequences. It may:
- Impede or discourage industry efforts to manage harmful content effectively;
- Push minors to circumvent restrictions by migrating to less-regulated platforms, or through the use of virtual private networks (VPNs);
- Eliminate or reduce the incentives to employ curation and parental-control tools that accompany youth accounts, leaving children exposed to the very content the law seeks to avoid, while stripping parents of useful safeguards; and
- Create risks to privacy, free expression, and security by forcing platforms to collect broad identity information in order to verify users’ age, chilling anonymous speech and creating new vulnerabilities for users of all ages, not just minors.
I would urge readers, again, to see this piece by Ben Sperry, or this David Inserra post for a detailed criticism of the Australian social-media ban. But there is another critical cost of such bans that is often overlooked: the gradual displacement of community- or family-based governance by centralized state rules.
When the State Crowds Out the Village
In his book “The Third Pillar,” former International Monetary Fund Chief Economist Raghuram Rajan describes society as supported by three pillars: the state, markets, and communities (the social fabric of norms, families, civic groups). Rajan notes that, when any one pillar becomes too dominant, the system has a tendency to fall out of balance. He argues that the modern world has often let markets and states overpower local communities, leading to social breakdown. While I probably disagree with Rajan about the appropriate magnitude of the state pillar, I agree with the general thrust of his argument.
The debate over social-media harms is a case in point. Faced with new technologies that challenge traditional parenting, many have leapt to call for state intervention (bans, mandates), while neglecting the role of families, schools, and social norms—the community institutions that have historically guided youth behavior.
The South Park quote above encapsulates this neglect. Parents who rely on television (or Snapchat, or YouTube) as de facto babysitters often prefer to outsource the responsibility of monitoring their children’s use of such technologies and platforms, and blame the providers for any potential harm. Politicians, in turn, seize the opportunity to gain political credit for “doing something” about it and to appease the parents’ guilt.
A healthier approach would be to reengage the community pillar by empowering and trusting parents, educators, and local groups to manage children’s media use actively. That doesn’t have to mean leaving everything to individual parents without support. But it does require recognizing that norms and education can often address social ills more flexibly and efficiently than laws and regulations.
Decades ago, before the concept of cyberspace was such a presence in our collective lives, social scientists like Robert Ellickson and Elinor Ostrom documented how communities solve problems through informal norms and mutual monitoring. Ellickson’s famous study “Order Without Law” demonstrated how ranchers and farmers in Shasta County, California, employed neighborly norms to resolve cattle-trespass disputes more effectively than any statute or lawsuit could. Ostrom studied the commons—from fisheries to irrigation systems—and likewise found that communities could craft sophisticated rules for themselves that outperformed top-down regulations.
The key insight is local knowledge and adaptability: people on the ground often understand the nuances of various issues in ways that allow them to tailor solutions better than a distant legislature could. As a college instructor, for instance, I favor banning cell phones in the classroom. But that’s me, and what I find appropriate for the twentysomethings who attend my Law & Economics class at a law school. A teacher of a psychology, communications, or computer-science class might determine otherwise. The right approach is to let professors and schools decide; a blanket ban would hinder professors’ ability to promote supervised and beneficial use.
Apply this same standard to teens’ online safety. Who is best positioned to gauge whether a 15-year-old is sufficiently mature to use Instagram responsibly? Not a D.C. or Canberra lawmaker, but that child’s parent, teacher, or caregiver. As the American Psychological Association (APA) has found, “every young person is different, and… parents and guardians are best to determine the best use for their child,” while considering factors like the teen’s maturity, self-regulation skills, and home environment. Any blanket ban inherently ignores this variation; it treats a straight-A 15-year-old who wants to join a coding forum the same as a vulnerable 12-year-old being cyberbullied on Snapchat.
In contrast, parental involvement and community norms can calibrate rules to individual needs. A parent might allow limited Instagram use, but not late at night. A school might enforce phone-free classrooms, while encouraging supervised use of educational technology for projects. These calibrated controls are effectively outlawed or made irrelevant by bans that forestall parents’ efforts to guide teens’ social-media use. The state is effectively saying: “We don’t trust any parents to get this right, so we’ll just take over.”
Training Wheels, Not Handcuffs
Of course, not all parents do get it right. Many feel overwhelmed by the pace of digital change, and some may be neglectful. But rather than render parents irrelevant, the best solution is to support and educate them to fulfill their role more effectively.
This is where policy could focus on nurturing the “third pillar,” rather than bypassing it. For example, public campaigns could educate parents about device parental controls, content filters, and strategies to discuss online risks with their children. Schools and libraries could host digital-literacy workshops for families. The APA and other child-development experts have consistently called for more extensive media-literacy training and open parent-child communication about social media as key protective factors.
Social norms among young users also matter. Teen culture may develop healthier norms around the use of social media and other technologies if they are guided, rather than simply forbidden. We’ve seen schools successfully implement programs where students pledge not to use phones during certain hours, and teens themselves have taken the lead in calling out online bullying.
Bans don’t foster positive norms; they simply impose an external rule that many young people will view as illegitimate or out of touch, thus encouraging evasion. By contrast, when communities (schools, youth groups, peer networks) set expectations—e.g., that it’s uncool to be glued to your phone at lunch or that group chats won’t tolerate hate speech—those can have a much more powerful effect on behavior. Such decentralized governance is also more adaptable. If a particular platform is causing an issue (say, a toxic local Snapchat group), the community can target that issue specifically, rather than prohibiting all social-media use.
Rajan’s thesis offers a prescriptive insight here. Rather than the state weakening the community by taking over its functions, the goal should be to strengthen the community pillar to meet new challenges. In the context of social media, this means empowering parents, schools, and civil society to collaborate in managing online safety for young people.
The Australian ban arguably sends the opposite signal: that parents can stand down, because “Big Brother” will handle it. That’s a dangerous message. As Rajan notes, when one pillar (the state) subsumes another (the community), it can breed backlash and dysfunction in the long run. Here, we risk further eroding trust and responsibility at the family and community level by implying that only sweeping government mandates can safeguard kids.
Conclusion
The Australian social-media ban is a classic example of heavy-handed regulation that, in practice, is unlikely to achieve its intended policy objective. It may have noble intent, but its method is miscalibrated and fraught with unintended costs.
It treats teens’ social-media use as an absolute harm, when the reality is more nuanced. And it opts for a state rule, where family and community governance could be more adaptive. Indeed, it may undermine the very goals it seeks to achieve: driving young people toward riskier online spaces, denying them positive opportunities, and dulling the incentive for all of us to develop smarter, more tailored solutions.
The principles of law & economics counsel skepticism toward sweeping bans that ignore tradeoffs and incentive effects. This case is no different. Rather than a blunt prohibition, a mix of targeted interventions—empowering parents, educating kids, improving platform safety, and focusing enforcement on actual harms—would likely yield better outcomes at lower cost. Rahm Emanuel’s framing of the issue, that “it’s either going to be adults or the algorithms that raise our kids,” actually offers an easy choice: it should be the adults. It should be us.