Google’s New A.I. Chip Is Shaking Nvidia’s Dominance: What to Know

The Sphere in Las Vegas displays an advertisement for Google Gemini on Nov. 18, 2025. Aaron M. Sprecher/Getty Images

Last week, The Information reported that Meta is in talks to buy billions of dollars’ worth of Google’s A.I. chips starting in 2027. The report sent Nvidia’s stock sliding as investors worried the company’s decade-long dominance in A.I. computing hardware now faces a serious challenger.

Google officially launched its Ironwood TPU in early November. A TPU, or tensor processing unit, is an application-specific integrated circuit (ASIC) optimized for the kinds of math deep-learning models use. Unlike CPUs that handle everyday computing tasks or GPUs that process graphics and now power machine learning, TPUs are purpose-built to run A.I. systems efficiently.

Ironwood’s debut reflects a broader industry shift: A.I. workloads are moving from massive, capital-intensive training runs to cost-sensitive, high-volume inference tasks that underpin everything from chatbots to agentic systems. That transition is reshaping the economics of A.I., favoring hardware like Ironwood that’s designed for responsiveness and efficiency rather than brute-force training.

The TPU ecosystem is gaining momentum, although real-world adoption remains limited. Korean semiconductor giants Samsung and SK Hynix are reportedly expanding their roles as component manufacturers and packaging partners for Google’s chips. In October, Anthropic announced plans to access up to one million TPUs through Google Cloud in 2026 (renting capacity rather than buying the chips) to train and run future generations of its Claude models. The company will deploy them as part of its diversified compute strategy alongside Amazon’s Trainium custom ASICs and Nvidia GPUs.

Analysts describe this moment as Google’s “A.I. comeback.” “Nvidia is unable to satisfy the A.I. demand, and alternatives from hyperscalers like Google and semiconductor companies like AMD are viable in terms of cloud services or local A.I. infrastructure. It is simply customers finding ways to achieve their A.I. ambitions and avoiding vendor lock-in,” Alvin Nguyen, a senior Forrester analyst specializing in semiconductor research, told Observer.

These shifts illustrate a broader push across Big Tech to reduce reliance on Nvidia, whose GPU prices and limited availability have strained cloud providers and A.I. labs. Nvidia still supplies Google with Blackwell Ultra GPUs—such as the GB300—for its cloud and data center workloads, but Ironwood now offers one of the first credible paths to greater independence.

Google began developing TPUs in 2013 to handle growing A.I. workloads inside data centers more efficiently than GPUs. The first chips went live internally in 2015 for inference tasks before expanding to training with TPU v2 in 2017.

Ironwood now powers Google’s Gemini 3 model, which sits at the top of benchmark leaderboards in multimodal reasoning, text generation and image editing. On X, Salesforce CEO Marc Benioff called Gemini 3’s leap “insane,” while OpenAI CEO Sam Altman said it “looks like a great model.” Nvidia also praised Google’s progress, noting it was “delighted by Google’s success” and would continue supplying chips to the company, though it added that its own GPUs still offer “greater performance, versatility and fungibility than ASICs” like those made by Google.

Nvidia’s dominance under pressure

Nvidia still controls more than 90 percent of the A.I. chip market, but the pressure is mounting. Nguyen said Nvidia will likely lead the next phase of competition in the near term, but that long-term leadership will be more distributed.

“Nvidia has ‘golden handcuffs’: they are the face of A.I., but they are being forced to keep pushing state-of-the-art in terms of performance,” he said. “Semiconductor processes need to keep improving, software advances need to keep happening, etc. This keeps them delivering high-margin products, and they will be pressured to abandon less profitable products/markets. This will give competitors the ability to grow their shares in the abandoned spaces.”

Meanwhile, AMD continues to gain ground. The company is already well positioned for inference workloads, updates its hardware on the same annual cadence as Nvidia, and delivers performance that is on par with or slightly superior to equivalent Nvidia products. Google’s newest A.I. chips also claim performance and scale advantages over Nvidia’s current hardware, though slower release cycles could shift the balance over time.

Google may not dethrone Nvidia anytime soon, but it has forced the industry to imagine a more pluralistic future—one where a vertically integrated TPU–Gemini stack competes head-to-head with the GPU-driven ecosystem that has defined the past decade.
