Debates about artificial intelligence often focus on the technology’s intrinsic qualities, such as its speed, scale, and uncanny ability to generate text or code. But the lesson of every major technological transition since World War II is that economic outcomes are not determined by technology alone. The institutional framework surrounding those technologies also matters.
Each major wave of transition—from the birth of semiconductors, to the rise of personal computing, to the launch of the commercial internet, to various breakthroughs in biotechnology—has produced new firms and entire new categories of work because the legal environment encouraged entrepreneurial experimentation. Clear property rights, low transaction costs for entry, open architectures, and liability rules that reward responsible deployment have been the general conditions that allow broad-based innovation to flourish.
Artificial intelligence now operates as a general-purpose technology with broad downstream spillovers, at least in its most widely deployed forms. We are seeing the earliest outlines of a new entrepreneurial wave in AI. Proprietary systems are rapidly expanding the capabilities available to firms of all sizes, especially in code generation, business-process automation, and professional services, while open-source models are producing striking early results by driving down the cost of experimentation for individuals, startups, and small teams.
Both sides of the ecosystem matter. Proprietary systems offer state-of-the-art performance and reliability that many commercial users require; open-source models create a parallel channel for rapid tinkering, customization, and domain-specific innovation. Together, they have enabled small firms to build specialized models, language tools, civic-tech platforms, and automation services at a pace reminiscent of the early personal-computing and internet eras. These developments are happening because multiple viable pathways—commercial, open, and hybrid—lower capital requirements, reduce transaction costs, and allow experimentation at the edges of the economy.
Policy choices made now may determine whether AI’s future more resembles the early semiconductor era—when a healthy licensing regime and competitive market structure generated thousands of firms—or more constrained sectors where innovation clustered within a few large incumbents. The question is whether the United States strengthens the conditions that make entrepreneurial dynamism possible, or adopts regulatory structures that inadvertently suppress the very activity that has historically turned technological revolutions into broad economic gains.
How New Technologies Spawn New Work and New Firms
A consistent theme across the major technological transitions of the past 80 years is that entrepreneurship surges where institutions lower the cost of experimentation. The economic dynamism that followed the development of semiconductors, personal computing, the commercial internet, and biotechnology can be attributed to institutional choices that reduced transaction costs, clarified property rights, and opened the door for entrepreneurs to build on foundational technologies. Where those conditions were present, new firms and entirely new categories of work appeared at remarkable speed. Where they were absent, innovation stalled.
AT&T and the Semiconductor Revolution
The semiconductor revolution provides the clearest early example of how reducing transaction costs can unlock entrepreneurial entry. In 1956, AT&T signed a consent decree with the U.S. Justice Department (DOJ) barring the company from entering non-telecommunications businesses like the burgeoning computer industry and forcing it to license its patents to any third party that paid reasonable royalties. The agreement resolved a monopolization suit the DOJ had brought in 1949, which originally sought to force AT&T to divest its Western Electric manufacturing arm. That suit targeted a highly unusual corporate structure: AT&T’s Bell Labs unit, which invented the transistor in 1947, never intended to commercialize most of its inventions.
That consent decree, which is often cited for its compulsory-licensing provisions, should not be taken as a general policy blueprint. Compulsory licensing itself entails many undesirable tradeoffs. But the agreement did illustrate a broader principle: when the institutional environment lowers the cost of accessing foundational technologies—whether through market licensing, organizational design, or research spillovers—entrepreneurs can build extraordinary new firms.
Combined with defense-procurement policies that guaranteed early demand for integrated circuits, the result was a dense cluster of new entrants, from Fairchild Semiconductor to the many “Fairchildren” it spawned. Engineers could move across firms, start new ventures, and commercialize designs without navigating prohibitive contracting frictions. Entire occupations—from circuit-layout designers to microelectronics engineers—emerged because the surrounding institutions made experimentation feasible.
Two Approaches to Personal Computing
The personal-computing era followed a similar pattern, though through different mechanisms. IBM’s choice to publish the specifications for the IBM PC and to rely on third-party components made the PC a de facto open platform. That decision, combined with contractual arrangements that allowed Microsoft to license MS-DOS broadly, set off an entrepreneurial explosion. Software developers, clone manufacturers, and peripheral-device makers operated with minimal permission costs. The PC ecosystem that emerged included thousands of companies building hardware and software compatible with the IBM architecture. This was the direct result of a platform structure designed for maximum contestability.
Apple took the opposite approach from IBM, building a tightly integrated hardware–software stack. While this restricted some forms of entry, it also lowered coordination costs for developers and users by offering stable interfaces, predictable performance, and strong technical guidance. The result was a different—but still vibrant—ecosystem of software firms, designers, and peripheral makers. Both models illustrate the same principle: entrepreneurial activity thrives when the surrounding institutions keep the practical barriers to innovation low, whether through open standards or well-structured integration.
Section 230 and the Commercial Internet
The commercial-internet era amplified this dynamic. The shift from a publicly funded backbone to a commercial network occurred alongside the adoption of Section 230 of the Communications Decency Act of 1996, which clarified that platforms would not bear publisher liability for user-generated content. This single rule change dramatically reduced the transaction costs of building internet services that relied on user contributions.
This model expanded to cover everything from online marketplaces to social networks to early video-hosting platforms. Startups could experiment without the risk of crushing litigation over user behavior, and a broad range of new roles (web developers, digital marketers, cybersecurity analysts, cloud-infrastructure specialists) emerged as a result. The legal regime made this entrepreneurial diversity possible.
Bayh-Dole and Biotechnology
Biotechnology provides yet another example through a different institutional lever. Before the Bayh-Dole Act of 1980, federally funded inventions were typically locked inside government agencies, unavailable for commercialization. Bayh-Dole formalized and accelerated the university-startup pipeline that had begun emerging in the late 1970s. Pioneering biotech firms such as Genentech (1976), Biogen (1978), and Amgen (1980) were founded by university scientists before the act was passed, demonstrating the commercial potential of academic research.
The Bayh-Dole Act then institutionalized this model: between 1996 and 2020, the technology-transfer system it created led to more than 17,000 university startups and more than 200 new drugs and vaccines. Venture financing flowed, clusters formed around research institutions, and new scientific occupations appeared. Again, the mechanism was the same: institutions that lowered the cost of turning research into commercial activity generated entrepreneurial abundance.
The Telephone Network and Innovation Suppressed
If the preceding examples show how good institutional design encourages entrepreneurial entry, the U.S. telephone network through the mid-20th century demonstrates the opposite.
For decades, AT&T’s vertically integrated monopoly, which was reinforced by public-utility regulation, functionally prohibited third-party innovation. The Federal Communications Commission’s pre-1968 rules barred customers from attaching non-AT&T devices to the network. The 1968 Carterfone decision, which simply allowed consumers to connect “any lawful device” so long as it did not harm the system, is remembered precisely because it broke with this entrenched model. For decades before Carterfone, entrepreneurs who might have built answering machines, early modems, or networked devices had no legal path to do so.
The stagnation of telephone-network innovation was not a technological inevitability; it was the predictable consequence of a legal regime that eliminated contestability. Only after Carterfone and the eventual breakup of AT&T did downstream experimentation flourish. The early modems that enabled computer networking, the proliferation of consumer telephone equipment, and eventually the commercial internet all emerged once institutional barriers were lifted. The telephone network’s history thus underscores the same lesson revealed by the success cases: Innovation accelerates when outsiders are permitted to experiment, and it stalls when institutions prevent them from doing so.
The New AI Entrepreneurial Wave: What We’re Already Seeing
Artificial intelligence exhibits the features of a general-purpose technology in the sense used by Timothy F. Bresnahan and Manuel Trajtenberg: broad applicability across sectors, steep learning curves, and the ability to catalyze downstream innovation in unpredictable ways. Historically, technologies of this kind produce the widest entrepreneurial surface areas because they make entirely new categories of work economically viable.
The labor-market data reinforce this point. As David Autor and his co-authors have shown, nearly 60% of today’s jobs did not exist in 1940. The signature economic effect of transformative technologies, in other words, is not mass displacement but the creation of new work that emerges only after the technology diffuses.
AI appears to be following this familiar pattern. The rapid improvement of generative models has lowered the cost of experimentation for established firms and startups alike, while also enabling specialized tools, niche services, and entirely new modes of production. What’s striking is not just the pace of technical improvement but the diversity of entrepreneurial entry points already forming around both proprietary and open-source models.
Proprietary AI models are already transforming productivity across coding, analytics, and professional services. For example, JPMorgan Chase recently found that adopting a coding-assistant tool produced a 10–20% efficiency gain for software engineers, enabling them to shift to higher-value projects. By one widely cited estimate, companies that scale AI across their workflows could unlock roughly $4.4 trillion in added productivity, a figure indicative of AI’s capacity to transform how work gets done.
These gains illustrate how firms are reorganizing work: proprietary models become embedded in specialized workflows, creating new service models and new work streams (e.g., AI-ops, model governance, fine-tuning specialists) while reducing the transaction costs of providing advanced analytics and decision support. Rather than merely replacing existing jobs, these models expand what professional services can offer, shifting time from rote tasks to strategic and creative work.
Beyond firms’ internal transformations, open-source ecosystems offer unusually visible evidence of early entrepreneurial activity because they let individuals and small teams experiment at very low cost, much as the internet facilitated worldwide competition for small and medium-sized enterprises (SMEs). Open-source models have become a common starting point for startups and SMEs; such models can be fine-tuned, extended, and deployed without specialized infrastructure, lowering the transaction costs to build domain-specific tools. This has already created opportunities for fine-tuning firms, model-ops specialists, and sector-specific developers—precisely the pattern seen in earlier technological transitions when foundational layers became inexpensive and flexible.
Early AI accelerators show how quickly entrepreneurship can emerge when experimentation becomes cheap. Meta’s accelerator program has supported hundreds of prototypes in e-commerce, logistics, finance, and agriculture. These types of programs have enabled teams to build language tools, health diagnostics, and back-office automation for local markets. AI entrepreneurialism has therefore moved beyond the theoretical: as in the semiconductor and PC eras, the first waves of innovation are coming from small groups working with flexible tools.
The rapid adoption of open-source AI models is functioning much like IBM’s open PC architecture: a predictable, well-documented substrate that supports plugins, model-ops services, consulting shops, and lightweight software-as-a-service (SaaS) products. The cost of building a viable prototype—once measured in tens or hundreds of thousands of dollars, if not more—has collapsed to what a small team can accomplish over a weekend. This complements, rather than diminishes, the role of proprietary models. The ecosystem is evolving as a dual structure: proprietary systems push performance forward, while open models broaden the range of experimentation. Both depend on institutions that keep the barriers to innovation low.
The Legal and Policy Levers that Shape Whether Entrepreneurship Flourishes
If AI follows the economic logic of earlier general-purpose technologies, then its entrepreneurial potential will depend on choices made now about property rights, competition policy, liability, and workforce development.
The historical record—from semiconductors to PCs, the internet, and biotechnology—shows that new firms emerge when the institutional environment keeps entry feasible, allows experimentation at the edges, and avoids regulatory structures that harden incumbents’ advantages. Those same levers will shape whether AI produces a broad entrepreneurial boom or consolidates into a small number of tightly controlled platforms.
Entrepreneurial formation depends on whether firms can legally use, adapt, and build on foundational technologies without prohibitive permission costs. The historical diffusion mechanisms that fueled earlier waves functioned primarily by lowering the cost of experimentation for outsiders. Open-source AI plays a comparable role today: it allows startups and SMEs to fine-tune and deploy models without frontier-scale capital. Proprietary, vertically integrated systems remain crucial for providing high-trust, high-reliability experiences that help drive adoption. A healthy AI economy needs both channels: proprietary systems that push performance and trust, and accessible models that allow new firms to build novel tools in thousands of niche markets.
AI has large fixed costs in compute and energy, which naturally lead to concentration at the model-training frontier. But high concentration at one layer does not justify heavy-handed structural interventions that would freeze the ecosystem around today’s incumbents. A more entrepreneurship-friendly approach is to target exclusionary conduct where it occurs, not to mandate interoperability, dictate architectures, or compel access to proprietary assets. Open-source models help keep the ecosystem contestable by letting entrants build on smaller, efficient architectures. But the larger principle is institutional: policy should preserve the possibility of entry, not script the market’s structure in advance.
Liability frameworks determine which ventures can form. Overly broad, ex ante regulatory regimes—such as the EU’s model-level compliance requirements—impose upfront costs that smaller firms cannot bear. A risk-based, downstream-use liability model avoids that trap. Developers should be responsible for defects and failure to warn; deployers should be responsible for how and where systems are used. This maintains incentives for responsible innovation, while avoiding a world in which only incumbents can afford to participate.
Technological transitions create new jobs at scale, but they also impose transitional costs on specific workers. For entrepreneurialism, the key question is whether workers can move into new roles—including founding startups, joining early-stage firms, or developing sector-specific tools. The evidence is clear: Most long-run job creation emerges from occupations that did not exist at the start of the prior technological wave. Workforce policy should therefore focus on mobility, retraining, and skills acquisition, not on slowing the adoption of AI. Preparing workers to become participants in, and creators of, new AI-enabled firms is the surest way to ensure that the technology produces broad-based economic opportunity.
Conclusion
The history is unambiguous: early institutional choices can determine whether new general-purpose technologies become foundations for broad entrepreneurial dynamism or stagnate inside ossified, overly regulated sectors.
As the telephone network showed, early policy mistakes can freeze market structure for decades. By contrast, the successes that powered the semiconductor boom, the PC revolution, the commercial internet, and modern biotechnology collectively generated trillions in economic value and millions of new jobs because they kept experimentation cheap and entry feasible.
AI stands at a similar juncture. The entrepreneurial wave is already visible, from proprietary tools reshaping high-skilled work to open-source models enabling thousands of small teams to build niche products. The central policy question is not how to slow or contain AI, but how to avoid erecting institutional barriers that choke off entry, local innovation, and the creation of new firms.
If policymakers preserve contestability, maintain clear and workable liability rules, and avoid overly broad ex ante restrictions, AI can deliver the same pattern of entrepreneurial flourishing that marked earlier technological revolutions. If they do not, the United States risks smothering the next great surge of economic and occupational creation before it begins.

