Every major technological leap in human history—whether it be the printing press or the automobile or the internet—has been greeted by an uneasy blend of optimism and trepidation. Optimism for the opportunities each new technology offers—for change, evolution, empowerment, and growth. And trepidation that the new equilibrium might upend the established order, destroy jobs, devalue human dignity, or tear at the social fabric.
The same was always going to be true for artificial intelligence. Since its explosion into the mainstream around 2022, AI has inspired speculation, hope, fear, and equal measures of utopianism and Luddism. In competition-policy circles, AI was almost immediately flagged as a threat in the United States, United Kingdom, and European Union, despite the nascent sector being demonstrably dynamic, competitive, and innovative. This underscores that irrational fears, rather than evidence, have often driven public-policy reactions.
Against this backdrop, the Competition Commission of India’s (CCI) recent “Market Study on Artificial Intelligence and Competition” arrives at a crucial moment. Its choice to study AI, rather than immediately move to endorse its regulation, marks a prudent and welcome departure from the premature, heavy-handed approach adopted in other jurisdictions. It may yet prove to be India’s biggest competitive advantage.
A Measured Response to a Dynamic Market
The CCI’s study provides an in-depth look at the technological and regulatory contours of India’s AI landscape. It identifies the procompetitive potential of AI across sectors, while flagging such emerging concerns as algorithmic collusion, unilateral conduct, pricing practices, barriers to entry, and the high cost of cloud computing. Importantly, the report correctly acknowledges that concrete evidence of such risks remains limited.
Indeed, contrary to what we have seen in other parts of the world, the report does not jump on the bandwagon of doomsday prophecies about AI as a “first-class accelerant” of anticompetitive behavior. Rather, it offers a much more nuanced account of how technological change is reshaping competition dynamics. It recognizes that AI, while posing novel regulatory questions, can enhance efficiency, reduce information asymmetries, and empower smaller enterprises to compete effectively. The commission’s choice to observe and understand these changes before considering the possibility of prescribing any specific rules demonstrates a level of maturity that, unfortunately, has been found lacking in other jurisdictions.
This measured posture shouldn’t be entirely surprising, however. Ever since the economic turn of the early 2000s following the “Raghavan Report,” regulatory intervention in India has generally been guided by market realities, rather than political expediency. India’s competition regime has repeatedly demonstrated that it can engage effectively with complex digital issues within that framework. Indeed, the CCI has evaluated diverse conduct across digital sectors—from Google’s Play Store to Meta’s WhatsApp—demonstrating its ability to apply economic analysis and context-specific reasoning rather than relying on rigid presumptions.
Lessons from Europe’s Regulatory Overreach
India’s contrast with Europe’s approach is stark. The European Union’s Digital Markets Act (DMA) and its recently enacted AI Act impose broad obligations on large technology firms that often equate mere size with market failure. By negating or diluting the benefits of scale, size, and network effects in markets where such characteristics are crucial in driving value (see here), Europe may well have shot itself in the foot.
For instance, a range of products and features have been delayed or not rolled out in the EU because of companies’ concerns that they may run afoul of one or several of the EU’s web of “landmark” regulations (DMA, AI Act, GDPR, etc.). Needless to say, every delay is a missed opportunity, potentially posing irreversible competitive disadvantages in a market where victories are increasingly won and lost at the margins.
For example, Google’s Gemini AI app was delayed for months in both the EU and UK because of compliance uncertainties related to the DMA and the UK’s Digital Markets, Competition and Consumers Act (DMCC). For similar reasons, Google’s AI Overviews launched in only eight EU member states after a nine-month delay. Other examples of delayed rollouts include Meta’s multimodal AI capabilities, which interpret video, audio, images, and text (not launched in the EU); Apple Intelligence (delayed six months and then launched with only part of its functionality); iPhone Mirroring (not available in the EU); Apple AirPods Live Translation (withheld in Europe); and Meta’s Threads (delayed in the EU for several months).
But is this really surprising? Ex-ante regimes rely on rigid presumptions about what conduct constitutes harm or which technologies are “risky,” with no need to prove it and no opportunity to disprove it. That is because, at their heart, ex-ante regimes like the DMA seek to address perceived moral failures, not market failures. Their goal is to level down large companies, benefit successful rent-seekers (mostly, large complementors), and redistribute rents in a way the regulator considers “fair”—an intractable notion that leaves room for infinite discretion (here and here).
This distorted conception of “competition” is wildly at odds with what competition law is actually about. Rather than promoting competition and efficient conduct, static rules untethered from analysis of economic effects will tend to conflate harm to competition with “harm” to competitors. In the process, they punish efficiency and discourage experimentation. The consequences of stifling progress would be concerning even in a static market—like, say, cement production or groceries—but would be catastrophic in a fast-evolving economy like India’s, especially in the burgeoning field of AI.
India may yet choose a different path than the EU. Its existing competition framework—anchored in economic analysis and institutional flexibility—remains better suited to the realities of the digital age. From what we have seen to date, AI markets appear to be fluid and self-correcting; their boundaries shift faster than regulation can adapt. By the time a regulation is approved, it is already obsolete. The DMA, for instance, did not anticipate the rollout of AI, and regulators are now struggling to shoehorn it into the law’s rigid framework. Put differently: a law that prejudges certain conduct as inherently anticompetitive or “bad” would risk freezing innovation and misallocating enforcement resources.
The CCI’s choice to rely on established principles of competition law, such as effects-based analysis and proportional remedies, helps to avoid that pitfall.
Fostering Capacity, Not Complexity
Far from advising passivity, the recommendations in the CCI’s study offer a thoughtful roadmap for strengthening both compliance and institutional readiness. They include encouraging companies to conduct self-audits of their AI systems to identify potential risks early; developing voluntary frameworks to enhance transparency about the use and purpose of AI, without requiring disclosure of proprietary algorithms; and investing in the CCI’s own capacity through conferences, workshops, and skill-building.
Indeed, many of the supposed ills that ex-ante rules typically seek to address can likely be better and more efficiently addressed by building institutional capacity, streamlining procedures, and ensuring timely enforcement. Recent reforms establishing a deal-value threshold for merger scrutiny, creating a voluntary settlement and commitment mechanism, and setting significantly higher penalties for infringements are all steps in that direction.
But the big picture, which the report seems to grasp, is that regulation is not a prerequisite for a functioning market. Indeed, regulation is often the last stop in a string of deeper, long-term cumulative problems that defy simplistic slogans and monolithic explanations like “big is bad” or “data is the new oil.” In that sense, regulation can sometimes be less a proactive design choice than a belated corrective—a recognition that other institutions, incentives, or policy instruments failed to adapt in time. Regulation is, in this view, not a triumph of governance but a sign that governance arrived late.
The CCI report recognizes this complexity and demonstrates awareness of the range of broader legal, institutional, and social factors that are at play in fostering AI development. Accordingly, it urges action to lower structural entry barriers by expanding affordable access to cloud infrastructure, promoting open-source frameworks, improving data availability, and building talent pipelines.
Finally, it calls for greater international cooperation through partnerships with peer regulators and participation in global forums such as the OECD, the International Competition Network (ICN), and the United Nations Conference on Trade and Development (UNCTAD). This would ensure that India remains in the global AI conversation, while tailoring its approach to domestic realities.
India’s Confidence in the Age of AI
Just as the printing press once unsettled scribes and cars unsettled horse-drawn carriage drivers, AI has stirred awe and anxiety in equal measure. Britain’s infamous Red Flag Act forced early motorists to crawl behind a man walking ahead waving a red flag, a reminder of how fear can literally slow progress.
India’s approach to AI and competition—curious, deliberate, and grounded in evidence—captures the opposite spirit. With India’s AI market projected to grow from around US$6 billion in 2024 to approximately US$32 billion by 2031, the country stands at the cusp of a dramatic transformation.
By choosing to understand before intervening, the CCI signals that progress does not need to be micromanaged to be meaningful. In a world quick to panic at the pace of change, India’s calm in the face of global hype may well prove to be its greatest competitive advantage.