Wednesday, October 29, 2025

AI Godfathers, Steve Bannon, and Prince Harry Agree on One Thing: Stop Superintelligence

A new open letter is urging a global prohibition on the development of superintelligence, and it’s signed by one of the most unusual coalitions imaginable.

The statement, organized by the Future of Life Institute (FLI), has gathered support from AI “Godfathers” Geoffrey Hinton and Yoshua Bengio, Apple co-founder Steve Wozniak, and Nobel laureates. But it also includes signatures from political figures like Steve Bannon and Susan Rice, and celebrities like Prince Harry and Meghan Markle.

Their message is short and blunt: “We call for a prohibition on the development of superintelligence,” not to be lifted until there is “broad scientific consensus that it will be done safely and controllably” and with “strong public buy-in.”

The organizers warn that time is running out, claiming that superintelligence, or AI capable of outperforming humans on all cognitive tasks, could arrive in as little as one or two years.

But is this call for a ban realistic, or even helpful? To unpack the letter and its implications, I spoke with SmarterX and Marketing AI Institute founder and CEO Paul Roetzer on Episode 176 of The Artificial Intelligence Show.

A “Counterproductive and Silly” Proposal?

Roetzer’s initial take is that the letter is primarily an exercise in building public awareness. But he also points to a powerful counter-argument, from Dean Ball at the Foundation for American Innovation, that exposes fundamental flaws in the proposal.

Ball’s core objection to the proposed prohibition is simple: How can you prove superintelligence is safe without first building it?

This logic suggests that the only way to enforce such a ban would be to create a “sanctioned venue and institution” for superintelligence development. In other words, a global governance body tasked with building the very technology the letter seeks to prohibit.

This scenario would centralize development and give a consortium of governments (the same entities, Ball notes, with “militaries, police, and a monopoly on legitimate violence”) unilateral control over the most powerful technology ever conceived.

“This sounds to me like the worst possible way to build superintelligence,” Ball concludes.

While Ball agrees that the current high-speed, competitive race between a few private labs isn’t ideal, he argues that a centralized government monopoly could be even more dangerous.

“Do we need regulation? Absolutely. It doesn’t feel like the way we’re doing this right now is the safest way,” says Roetzer.

“But I don’t feel like the superpowers of the world are currently in a place where we’re going to be able to negotiate that. There’s some other stuff that we’re trying to work out together that isn’t going so smoothly.”

A New Framework to Define AGI

This public statement coincides with a new academic paper from many of the same figures, including FLI’s Max Tegmark and Yoshua Bengio, attempting to create a concrete, quantifiable definition of AGI.

The paper defines AGI as “matching the cognitive versatility and proficiency of a well-educated adult” and breaks intelligence down into ten core cognitive domains, like reasoning, math, and memory.

The framework reveals a “jagged” profile for current models: high scores in knowledge and math, but critical deficits in areas like memory and speed.

Crucially, the paper scores GPT-4 at just 27% on this AGI scale, while estimating GPT-5 at 57%, a massive leap that highlights the rapid progress toward the goal.

The Bottom Line

So why is this all happening now? Roetzer believes it’s because the risks are no longer theoretical, and the key players know it.

Labs like OpenAI and Meta are now openly branding themselves as “superintelligence labs.” The entire tech economy is being driven by capital spending on AI infrastructure. And regulators are finally starting to act.

“Everyone’s sort of simultaneously realizing, like, oh my gosh, this is a huge deal and we have no idea how to handle any of it in education, in business, in the economy,” says Roetzer.

While the open letter itself may be more symbolic than practical, it is one clear sign that parts of society are beginning to grapple, each in their own way, with a rapidly accelerating future.



