Saturday, July 26, 2025

Palantir’s AI Justified Israel’s Attack on Iran — Will Tech Take Us Back to Irrational Beliefs?

“If you don’t have AI, there’s nothing going on.” Those are Peter Thiel’s words in an interview with Ross Douthat for the podcast Interesting Times. To put it into context, Thiel is referring to something he calls the “stagnation hypothesis.”

Thiel’s view is not unique—others, such as Tyler Cowen and Robert Gordon, have proposed something similar. The idea is that there has been no significant advancement in any field (the sort that might change our understanding of it) since the 1970s—or, at least, nothing in the past 50 years that clearly signals progress.

According to Thiel, for a significant breakthrough to happen, risk must be taken in any field—be it medicine, transportation, or science. He argues this isn’t happening due to a risk-averse culture, excessive regulation, and an over-financialized economy yielding low returns, among other reasons.

The argument is that without a breakthrough to propel a new wave of economic growth, the social fabric as we know it—in the West—will begin to collapse. To avoid that, we need to seek progress through various means, including exploring new forms of government, ventures into radical biology—such as transhumanism—or missions to Mars.

Thiel’s view differs from that of accelerationism. Though both share the stagnation hypothesis and the need for radical transformation, Thiel seeks to reinvigorate capitalism—a new wave of unbridled, unregulated capitalism—while accelerationists seek to overcome capitalism entirely, aiming for a yet-undefined system.

There’s another point of agreement: the belief that artificial intelligence might be the source of this desired transformation. Though Thiel is more cautious and less enthusiastic, he is adamant that it be deregulated.

Perhaps that’s why Trump’s flagship tax bill included a provision barring states from regulating artificial intelligence. According to Bloomberg, “Trump allies in Silicon Valley, including venture capitalist Marc Andreessen, defense tech firm Anduril Industries Inc. founder Palmer Luckey, and Palantir Technologies Inc. co-founder Joe Lonsdale all advocated for including the restriction.”

Matt Stoller goes into detail about how and why the Senate voted to strip the AI provision from Trump’s tax bill. But, as he says, “Don’t be fooled by the lopsided vote—this AI regulation ban was much closer to being enacted into law than it appears. The attempt to eliminate regulation of automated decision-making and AI systems will return. Big business is going to have an open checkbook going forward—amounts of money that are unfathomable—to enact their agenda.”

Since Trump—whom Thiel supported early on and now advises—returned to office in a second term, he has dismantled federal regulation meant to oversee the development of AI and has promoted its integration with the government. This is part of what Musk’s DOGE achieved: complete access to fiscal, health, and other sensitive, confidential data to feed into an algorithm.

But the story of AI’s entanglement with the government predates Trump and Musk’s affair. Thiel’s company, Palantir, was developed with CIA funding to later sell software to the U.S. Department of Defense. According to some reports, this began around 2003–2005. In fact, in a Reddit AMA from 2014, when asked if Palantir was a front for the CIA, Thiel responded: “The CIA is a front for Palantir.”

Obviously, that statement seems far-fetched, but it does reflect how blurry the boundaries have become. And it might even end up being true. If that sounds exaggerated, consider how Palantir’s Mosaic—the AI model developed for the International Atomic Energy Agency (IAEA)—was used to justify Israel’s and the U.S.’s attack on Iran, even over the CIA’s own intelligence.

Palantir’s AI platform, Mosaic, has been integrated into the IAEA’s monitoring systems since 2015. It processes massive datasets—satellite imagery, surveillance footage, social media, and even Mossad intelligence—to detect anomalies in Iran’s nuclear program. The report that prompted the IAEA to declare Iran in breach of its non-proliferation obligations was likely based on Mosaic’s predictions.

Of course, there are reports claiming that the IAEA has a bias toward Israel, that Mosaic used Mossad’s data in its predictions, and that Palantir is also deeply involved in Israel’s operations in Gaza through a similar algorithm called Lavender. But the relevant point here is that this explains why Trump had the audacity to dismiss his own head of intelligence, Tulsi Gabbard, when she said there was no intelligence to confirm that Iran was pursuing nuclear weapons.

Trump’s “intelligence” was probably based on Israeli sources, which likely relied on the IAEA’s predictions—which were, in turn, based on Mosaic’s algorithm. This would also explain why Rafael Grossi later denied that the IAEA had any verifiable intelligence regarding Iranian nuclear weapons.

Whether ignoring CIA intelligence in favor of Palantir’s was a deliberate attempt to create a casus belli against Iran, or whether White House decision-makers genuinely preferred Mosaic’s prediction, the result is the same: the “intelligence” of an AI model was prioritized over that of an actual human intelligence agency.

This signals a major turning point. The CIA works—or should work—with verifiable information. Even when that information is fake or distorted to fit a narrative, it should still be technically possible to review sources and assess their credibility.

With AI, it’s different. Researchers increasingly admit they don’t fully understand how these systems function. Given that models draw on hundreds of millions of data points and can refine their own reasoning, at some point their conclusions become unverifiable to the human mind. We simply can’t follow the logic behind them, which means we can’t verify those conclusions—we have to trust them. Blindly. Or, put differently, we’re being asked to have faith in them.

Paradoxically, this was Henry Kissinger’s fear, according to his biographer Niall Ferguson in an interview with Noema: “The insight that he had, long before anyone had heard of ChatGPT, was that we had created technologies that were doing things and delivering outcomes that we could not explain.” Kissinger traced the development of AI back to the application of the scientific method developed during the Enlightenment.

However, the unregulated pursuit of AI may have the opposite effect of what the Enlightenment intended. Back then, the goal was to explain everything through reason—what could not be explained was considered speculation at best. Now, we are making critical decisions based on processes we do not understand.

 
