Friday, February 13, 2026

Anthropic CEO Dario Amodei Calls Overspending By AI Rivals Risky

Dario Amodei, the outspoken CEO of AI startup Anthropic, praised the rapid advance of new technology but said that current overspending, mostly by rivals, poses risks.

“I think there are some players who are not managing that risk well or taking unwise risk,” Amodei said during a Q&A at the New York Times’ DealBook Summit.

“On the technological side of it, I feel really solid. I think I’m one of the most bullish people around. On the economic side, I have my concerns where, even if the technology is really powerful and fulfills all its promises, I think there may be players in the ecosystem who, if they just make a timing error, they just get it off by a little bit, bad things can happen.”

He declined to name names. But Sam Altman-led OpenAI and rival Google are investing hundreds of billions in computing power. The company behind the Claude chatbot works more in the safer realm of enterprise, or business-to-business, applications, he said, while rivals are competing in the more perilous consumer market.

Markets have been jittery for months about massive spending on AI data centers, worried that revenue won't keep pace with the outlay, creating a bubble.

Anthropic's revenue has grown tenfold each year for the past three years, he said, and will hit $8 billion to $10 billion by the end of this year. He has told investors the company will break even in 2028. The Financial Times reported that the company is considering a massive IPO in a race with ChatGPT parent OpenAI.

He described the core dilemma as uncertainty over "how quickly the economic value [of AI] is going to grow, and the lag time in building the data centers that drive it … We as a company try to manage as responsibly as we can. I think there are some players who are YOLO and I'm very concerned." The industry has started using YOLO, for You Only Live Once, as shorthand for reckless AI spending.

"I have to decide now how much compute I need to buy to serve the models in 2027 … And there's two dangers. One is that if I don't buy enough compute, I won't be able to serve all the customers I want, which will turn them away and then send them to my competitors.

“If I buy too much compute I might not get enough revenue to pay for that compute. And in the extreme case, there’s kind of the risk of going bankrupt … let’s say you’re a person who just kind of like constitutionally, just wants to YOLO things, or just likes big numbers, then you may turn that dial very far. So, I think there’s a real underlying risk.”

He said corporate America and governments must look hard at, and make provision for, workforce retraining as AI gets "better and better at every task under the sun" and takes jobs. It is getting better at coding, at science, at biomedicine, at law, at finance, at materials and manufacturing. "Models are routinely winning high school Math Olympiads and moving on to college Math Olympiads and starting to do new mathematics. And the drumbeat is just going to continue."

He has raised the alarm about rival authoritarian governments getting the edge. "I have always felt that we need to have the advantage here. This is a national security issue. I think we're building a growing and singular capability that has singular national security implications. And democracies need to get there first. It is absolutely an imperative."

So, he does not think Nvidia should sell chips to China, as President Trump has allowed. “That just makes it more likely they will get there first. It’s common sense.”

He acknowledged “we should [also] worry about concentration of power in democracies. Not as much as we worry about it in authoritarian states. But we need to make sure that the technology is governed in a way that allows people to participate, that gives people basic rights.

“And so the formulation I’ve always given when I think about how to apply these models for national security is, I think we should aggressively use them in every possible way, except in the ways that would make us more like our adversaries. We need to beat them, but we need to not do the things that would cause us to become like them. That is the one constraint we should observe.”

