A multi-billion-dollar AI partnership might be on the brink of reinvention.
Microsoft and OpenAI are in deep negotiations to redefine their relationship. And the trigger for it all? Artificial general intelligence, or AGI.
The agreement between the two companies hinges on the development of AGI.
Right now, Microsoft has rights to OpenAI’s models through 2030 or until OpenAI decides it has achieved AGI. That vague clause might have felt theoretical when the ink was fresh. But AGI is no longer science fiction. And if OpenAI declares it’s reached AGI under current terms, Microsoft could be cut off.
So, that elusive, poorly defined milestone just became a high-stakes legal boundary with massive implications for both companies.
If they don’t get this new deal right, Microsoft could lose access to the very technology it’s betting the future of its business on.
On Episode 160 of The Artificial Intelligence Show, Marketing AI Institute founder and CEO Paul Roetzer broke down for me what’s worth paying attention to in these negotiations.
The Fine Print That Could Break a $13.75B Deal
The stakes couldn’t be higher. If Microsoft loses access to OpenAI’s technology, it would have an immediate effect on some of its core products. Its cloud platform (Azure), software suite (Office), development tools (GitHub), and generative AI ambitions (Copilot) all hinge on access to OpenAI’s models.
That’s why the two companies are now trying to hammer out a new deal. According to reporting from Bloomberg and TechCrunch, Microsoft wants to retain access even post-AGI, and may acquire a bigger equity stake in OpenAI (rumored in the low- to mid-30% range).
OpenAI, meanwhile, wants more freedom: fewer restrictions on where and how it sells its models, stricter controls on Microsoft’s deployments, and more revenue.
Oh, and OpenAI is still trying to finalize its transition to a fully commercial structure. Right now, it’s a nonprofit that governs a capped-profit company, a structure designed to limit investor returns. But with IPO rumors swirling and massive infrastructure needs looming, that structure is increasingly untenable.
So, the talks haven’t exactly been friction-free.
Microsoft has even reportedly blocked some of OpenAI’s attempted acquisitions. That’s because Microsoft has its own AI ambitions. With Mustafa Suleyman (co-founder of DeepMind and Inflection AI) now leading Microsoft AI, it’s possible the company is preparing to hedge against OpenAI, or even compete with it.
That’s especially likely as OpenAI seeks compute resources beyond what Microsoft can offer, eyeing deals with Google and Oracle. Each side is signaling it won’t be boxed in.
“It’s not an easy deal to get done,” says Roetzer. “There’s lots of variables here.”
So…What Counts as AGI, Anyway?
The whole renegotiation hinges on whether OpenAI reaches AGI. But there’s no clear definition. And the only definition that matters for the OpenAI-Microsoft relationship is the one both companies can agree on.
One popular benchmark, cited in the talks, describes AGI as AI that can outperform humans at most economically valuable work.
But who decides that? And how?
One proposed model gaining traction in the AI industry at large is the Economic Turing Test, a concept promoted by AI safety researcher and Anthropic co-founder Ben Mann.
On a recent episode of Lenny’s Podcast, Mann proposed this test as a more concrete yardstick: Put an AI agent in a job for one to three months. If, at the end, the employer would keep it on over a human without knowing it’s a machine, the agent passes.
Take it further: If AI passes that test for 50% of “money-weighted” jobs (jobs weighted by the wages they pay, not just their headcount), we’ve entered the era of transformative AI, where GDP growth could hit 10% per year. Mann puts that moment at 2027 or 2028.
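To make “money-weighted” concrete, here’s a rough sketch of how such a threshold might be calculated. This is a hypothetical illustration, not a formula from Mann or the episode, and the job titles, wage figures, and pass/fail flags are invented: each job counts in proportion to the total wages it represents, and the transformative-AI line is crossed when jobs where AI passes the test account for at least half of all wages.

```python
# Hypothetical sketch of a "money-weighted" pass rate for the Economic Turing Test.
# Job names, wage totals, and pass/fail flags are made up for illustration.

jobs = [
    # (job title, total annual wages paid across the economy, does AI pass the test?)
    ("customer support rep", 120_000_000_000, True),
    ("software engineer",    400_000_000_000, False),
    ("paralegal",             60_000_000_000, True),
    ("truck driver",         250_000_000_000, False),
]

total_wages = sum(wages for _, wages, _ in jobs)
passed_wages = sum(wages for _, wages, passed in jobs if passed)

money_weighted_pass_rate = passed_wages / total_wages
print(f"Money-weighted pass rate: {money_weighted_pass_rate:.0%}")

# Under Mann's framing, crossing 50% would mark the start of "transformative AI."
if money_weighted_pass_rate >= 0.5:
    print("Threshold crossed: transformative AI era (by this definition).")
```

The point of weighting by wages rather than counting job titles is that automating a handful of high-wage, high-volume occupations moves the economy far more than automating many small ones.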
It’s just one idea for how to define AGI. But it’s a much-needed one, says Roetzer.
“This is what’s missing from the AGI conversation basically.”
Anthropic is also considered conservative in its AGI timelines, he says. So if it’s forecasting economic upheaval that soon, real AGI might be closer than we think.
Real-World AGI May Be Closer Than the Contracts Are Ready For
All of the negotiations, clauses, and IP rights mask a deeper reality: AGI-level performance might already be leaking into the economy.
Just consider this: People are already getting jobs they’re not qualified for by using AI to complete assessments or cheat hiring workflows. And once they’re in? Some are offloading the actual work to AI agents behind the scenes.
That’s not hypothetical. That’s very likely happening today, says Roetzer.
“I can almost guarantee you that’s happening a ton,” he says. “And people are making a lot of money, having agents doing most of the work.”
It’s a strange twist on the Economic Turing Test. In some cases, employers may already be hiring AI without knowing it.