The digital economy has made consumer data a central consideration in all kinds of transactions. The digital economy “runs on data,” so to speak, although claims that data is “the new oil” fall short of the mark.
Various digital services employ data to improve ad targeting, search, and artificial intelligence. Some of these services (e.g., search or social-media platforms) are “free” (i.e., consumers pay no explicit monetary price), and it is often said that consumers instead “pay with their data.” This phenomenon, in turn, has brought privacy to the center of various public-policy discussions.
Critics of so-called “big tech” argue that the global platforms’ data-collection practices go beyond traditional market exchange, and amount to a form of consumer exploitation. In “The Age of Surveillance Capitalism,” for instance, Shoshana Zuboff contends that the dominant tech firms are imposing “economic oppression” through the unilateral extraction and commercialization of personal data.
In response to both real privacy risks and hyperbolic criticism, jurisdictions around the world have embraced comprehensive privacy regulations, often inspired by the European Union’s General Data Protection Regulation (GDPR). These rules aim to protect personal data by imposing far-reaching obligations on firms that collect or process it.
While well-intentioned, these rules impose significant costs on firms and harm both competition and consumers. This approach, which we may call “privacy absolutism,” fails to recognize an important economic dynamic: privacy is not just a right to be protected, but also a vector of competition.
Are Companies Forcing Consumers to Share Their Data?
As Judge Richard Posner explained in his treatise on law & economics:
When a transaction is between a large corporation and an ordinary individual, it is tempting to invoke the analogy of duress and compare the individual to the helpless fellow forced to sign a promissory note with a knife at his throat—especially if his contract with the corporation is a standard contract or the consumer is a poor person—and conclude that the terms of the deal are coercive. Many contracts (insurance contracts are a good example) are offered on a take-it-or-leave-it basis. The seller hands the purchaser a standard printed contract that sets forth, sometimes in numbing detail, the respective obligations of the parties. The purchaser can sign it or not as he pleases, but there is no negotiation over terms. It is an easy step from the observation that there is no negotiation to the conclusion that the purchaser lacked a free choice and therefore should not be bound. (Economic Analysis of Law, 2014, at 148).
Just as happened decades ago with standardized contracts, some data-protection and competition authorities now treat the fact that certain digital platforms condition the provision of a service on consent to the processing of personal data as equivalent to a lack of “freely given consent.”
Take, for instance, the opinion of the European Data Protection Board (EDPB) regarding “valid consent” in the context of consent-or-pay models. According to the opinion, in analyzing if consent was freely given, data-protection agencies should consider (among other things):
Whether any fee imposed is such as to inhibit data subjects from making a genuine choice or nudge them towards providing their consent. In respect of the imposition of any fee to access the ‘equivalent alternative’ version of the service, controllers should assess, on a case-by-case basis, both whether a fee is appropriate at all and what amount is appropriate in the given circumstances, bearing in mind the need of preventing the fundamental right to data protection from being transformed into a premium feature reserved for the wealthy.
This line of reasoning opens the door for data-protection authorities to conclude that, whenever a firm offers a service on a “consent-or-pay” basis, there is no meaningful consent—that is, consumers have no real choice. But such a conclusion only holds if the price attached to the alternative is so high as to render the choice effectively coercive. And as my colleague Mikolaj Barczentewicz has observed, determining when a price crosses that threshold is far from straightforward.
More fundamentally, this concern presumes a lack of market alternatives. If the firm in question operates in a competitive environment, consumers will retain meaningful choice: they can accept the terms, decline them, or turn to a rival offering a different balance between privacy and price. Once competition is taken into account, the argument that “consent-or-pay” undermines consent is far less compelling. As Posner explained, “what is important is not whether there is haggling in every transaction but whether competition forces sellers to incorporate in their form contracts terms that protect the purchasers.” (at 149).
Even in applying their own digital-competition rules, however, enforcement agencies often appear to give little consideration to the actual competition that a data collector faces. Consider the European Commission’s recent decision against Meta. Article 5(2) of the Digital Markets Act (DMA) prohibits gatekeepers from “cross-us[ing] personal data from the relevant core platform service in other services provided separately by the gatekeeper” (a practice Meta employs for personalized advertising) “unless the end user has been presented with the specific choice and has given consent within the meaning of Article 4, point (11), and Article 7 of Regulation (EU) 2016/679.” That is, consent on the terms of the GDPR.
In analyzing if that consent was granted, the Commission found that Meta’s “consent-or-pay” model “does not ensure that end users give valid consent (in particular, they are not enabled to ‘freely’ give their consent),” because “a clear imbalance of power exists between Meta, as a data controller… and the end users of its Non-Ads Services forming part of the Facebook and Instagram environments” (paragraph 180). The Commission goes on to mention Facebook and Instagram’s large user base and “strong lock in and network effects” (paragraphs 188-190).
After a cursory analysis, the Commission disregarded Meta’s claim that “end users have multiple alternative options available when they do not consent, or withdraw consent, to the combination of their personal data” (paragraph 214), because the alternatives “do not fulfil the same purposes and do not have the same functionalities as Meta’s Non-Ads Services. As such, they cannot be considered as sufficiently similar by end users” (paragraph 252).
As Dan Gilman has pointed out, however, an analysis of substitution should look to actual substitution (for which there is evidence, for instance, from the 2021 Facebook outage and the 2025 TikTok ban), not to specific product features that are both subjective and part of the differentiation through which competition operates.
All in all, the Commission’s decision rests on a “single-firm” analysis of consent that seems incomplete, to say the least. To be sure, one of the DMA’s objectives in employing the “gatekeeper” designation is precisely to dispense with the need to perform relevant-market analysis and assess monopoly power. But even within a regulatory regime that departs from classical antitrust principles, enforcement must still adhere to the standards of reasonableness and proportionality. In this context, can the existence of meaningful competition really be disregarded when assessing whether consent is “freely given”?
In the Meta case, the Commission should have taken into account that consumers offered a consent-or-pay model do have alternatives. Facebook operates in a competitive environment with other platforms—such as Snapchat, TikTok, and X—that offer comparable social-networking functions while collecting less user data. Even if some competition authorities attempt to construct narrow market definitions to suggest otherwise, these substitutes remain accessible and relevant.
Moreover, even as it mentioned network effects and “lock in,” the Commission should have given serious consideration to factors present in digital markets that counteract such effects: saturation, market growth, and multi-homing, among others (see, e.g., Christopher Yoo’s “Network Effects in Action”).
Ultimately, users also retain the option to forgo the service altogether—a meaningful choice, particularly when dealing with non-essential digital services. Here the Commission points out that Meta’s services “are not a public utility or ‘essential facility’ such as, for example, certain government agencies” but “are nevertheless an integral part of the daily lives of many Union citizens” and that “many people do not conceive a ‘life without social media’” (paragraph 245).
Once again, competition is (or should be) the answer. Even if a given good or service is very important and part of citizens’ daily lives, there has to be a solid case of market failure to justify intervening in the freedom of contract and property rights of the firms that provide it. That is hardly the case for social media.
It is worth noting that the Commission’s Meta decision is, as Barczentewicz has pointed out, more restrictive than the criteria outlined by the Court of Justice of the European Union (CJEU) in Meta Platforms v Bundeskartellamt, which acknowledged the possibility of valid consent—even in the case of a “dominant” provider—and allowed for a fee-based alternative.
Do Companies Compete on Privacy?
Yes, they do—although perhaps not as prominently as other aspects of competition. This may be because consumers often claim to value privacy more than their actual behavior reflects. Moreover, privacy preferences vary widely among individuals, and in competitive markets, firms can respond to this heterogeneity by offering different levels of data protection.
There are many examples of firms competing on privacy. One of the most visible examples is Apple, which has branded itself as a privacy-first tech company. In 2021, Apple launched its App Tracking Transparency feature, which requires apps to obtain explicit permission before tracking users across other companies’ apps and websites. This move was framed as a privacy enhancement, but also served as a competitive differentiator against rivals like Facebook.
Another example is DuckDuckGo, a search engine that emphasizes anonymous browsing and minimal data collection. Although its market share is small compared to Google, DuckDuckGo has grown steadily by appealing to a privacy-conscious user base.
In the messaging-app space, Signal and Telegram have differentiated themselves from Meta’s WhatsApp and Facebook Messenger by emphasizing end-to-end encryption and noncommercial business models. A notable episode illustrating that privacy is a relevant margin of competition occurred in early 2021, when WhatsApp announced changes to its terms and conditions that many users interpreted as a reduction in message encryption. While this interpretation was later clarified (WhatsApp maintained full encryption), public concern triggered a mass exodus, with Signal alone reportedly gaining 7.5 million new downloads in just a few days.
Now that WhatsApp will show ads from businesses in its “Status” feature, we will see whether some privacy-minded users switch to another service (although WhatsApp apparently will use only limited information, such as country and language, for targeting).
Are Laws to Protect Privacy Even Needed?
Privacy is important, even essential. But precisely because it is so important, laws to protect it should be effective and efficient. They should allow consumers access to different levels of privacy, balanced against the other goods they also desire.
An ideal regulatory framework for privacy would recognize the heterogeneity of consumer preferences and the dynamic nature of competition in digital markets. Rather than imposing rigid, one-size-fits-all mandates, regulators should adopt a light-touch approach that empowers consumers to choose among privacy options, and that provides incentives for firms to differentiate themselves accordingly. Such a model would preserve room for innovation, lower compliance costs, and enable the discovery of privacy features through market experimentation, while centering enforcement on clearly defined harms, such as deception, coercion, or egregious misuse of personal data.
As Dan Gilman and Liad Wagman note, privacy should not be treated as a uniform good, but as a complex domain with significant tradeoffs that vary across industries and individuals. Blanket regulation often fails to capture this nuance, producing unintended costs and discouraging entry and investment—especially among startups and small firms that lack the resources to navigate compliance-heavy regimes like the GDPR. By contrast, focusing enforcement on practices that cause substantial harm would avoid unnecessary litigation.
Under this approach, private parties should be able to file complaints against the unauthorized use of personal information, and an administrative agency should be able to prosecute cases where there is evidence of significant harm or where the cost of pursuing a claim is prohibitive for private parties. Enforcement should prioritize transparency and disclosures, allowing consumers to make informed choices. At the same time, “soft law” tools, such as guidance documents and industry codes of conduct, can complement formal enforcement to shape norms and reduce information asymmetries without imposing excessive regulatory burdens.
Conclusion
While privacy is important, data-privacy regulations—particularly those modeled on the GDPR—come with substantial costs, both in terms of compliance burdens and their unintended effects on innovation and competition. These rules can entrench incumbents, deter entry by startups, and divert resources away from product development toward legal formalities.
The assumption that only heavy-handed, top-down regulation can address privacy risks ignores both the complexity of consumer preferences and the potential for market forces to deliver differentiated solutions. A smarter approach would take competition seriously.
Privacy can and should be a dimension along which firms compete, giving consumers (in most markets) the ability to choose the level of data protection that suits them. In this light, regulators should adopt a light-touch framework—one that allows the market to discover and respond to diverse privacy preferences, while reserving enforcement for cases of genuine harm, deception, or coercion. This strategy promises better outcomes at lower social cost, ensuring that privacy is not only protected but also subject to innovation and improvement through the process of competition.