I wrote this essay for Level magazine, published January 2024.

Summary:
- AI fear-mongering spreads misinformation and slows progress.
- Some extremists push for bans or control over all AI development.
- Over-regulation risks blocking innovation and resilience.
- A balanced, decentralized approach keeps AI power in check.
—
In an era rapidly shaped by artificial intelligence, a growing discourse has emerged, often clouded by fear and misconception. This narrative, fueled by AI fear-mongering groups, not only threatens technological advancement but also risks destabilizing our societal and economic structures.
Moreover, these AI fear-mongering groups often hold a deterministic view, believing that AI will inevitably decide to eliminate humanity once, in their view, we are no longer needed. They demand theoretical guarantees of superintelligent AI’s safety, yet overlook the extensive research into AI’s beneficial applications and the strides made in AI safety. Despite the release of models like GPT-3 and GPT-4, and the wealth of safety studies these have enabled, they persist in the belief that there has been no progress in AI safety.
These groups focus solely on AI’s risks, painting doomsday scenarios that overshadow rational discussion. Such narratives lack a balanced perspective and ignore AI’s potential to manage the complexities of modern challenges. If developed with the right values, AI can enhance transparency, reduce corruption, and handle vast data sets effectively, making it a crucial tool for progress.
Acknowledging AI’s risks is essential, but they should be addressed through informed, rational discourse aimed at engineering safe and beneficial AI systems. The focus should be on developing AI correctly, instilling values beneficial to humanity. Concentrating only on the potential downsides misdirects effort and resources away from solving the right problems.
Some members of these groups advocate extreme measures, such as a complete ban on AI development, rolling back technologies like GPT-4 from production, or even bombing AI data centers that defy their rules. They propose that AI development be restricted to a select few institutions, conveniently often including themselves in these governing bodies. This stance reflects not only a fear of the unknown, born of a lack of hands-on engagement with AI technology, but also a preference for theory over practical experience.

The potential consequences of over-regulation are not trivial. Even Nick Bostrom, a leading figure in existential-risk research, has critiqued the extreme stances of AI doomsayers. The risk is that AI doom groups sway public and political opinion, leading to laws that excessively restrict AI development. That could close the door to the advancements AI promises, weakening our collective resilience and preparedness for future challenges.
Furthermore, it’s worth recognizing that for some, these doom-laden narratives serve less as genuine concern than as a vehicle for personal gain, be it public recognition, status, or ego. This undermines the potential for a robust, resilient civilization that allows expansive experimentation and progress without compromising foundational stability.
The recent film “The Creator” depicts a future where AI and robots are outlawed due to fear and misunderstanding, leading to a scenario where unethical warfare is sanctioned under the guise of combating AI.
In the film “Transcendence”, Johnny Depp plays a character who, to escape a terminal illness, uploads his mind into a computer. As he evolves into a superintelligent entity and starts building beneficial AI-driven technologies, he is betrayed by his closest allies: a friend, his wife, and an AI doomer group. Their actions, driven not by his deeds but by their own fears, underscore a critical message: fear can lead even well-intentioned people to commit harmful acts.
The rhetoric of AI doomer groups often mirrors that of cults, with a strong emphasis on spreading their ideology through political institutions and policy-making. Their strategies include detailed manuals for lobbying and influencing decisions, echoing the tactics of regulatory capture. This is particularly concerning given that some members may have vested interests in leading AI companies, aiming to monopolize the field under the guise of regulation.
The recent EU AI Act is surprisingly moderate in its approach, avoiding extreme measures that could impede innovation. It does not require licensing for foundation models like the ones behind ChatGPT, nor does it ban AI training outright above a certain computational threshold. Instead, it classifies models trained with more than 10^25 FLOPs as posing systemic risk, subjecting them to additional obligations; this applies to both commercial and open-source models. The law’s ambiguity leaves much to interpretation, allowing it to adapt as AI evolves.
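To get a sense of where that threshold sits, here is a minimal back-of-the-envelope sketch, assuming the common approximation that dense-transformer training compute is roughly 6 × parameters × tokens. The model sizes and token counts are hypothetical illustrations I chose for scale, not official figures for any real system.

```python
# Back-of-the-envelope training-compute estimate using the common
# approximation C ~ 6 * N * D (N = parameters, D = training tokens).
# All model sizes and token counts below are hypothetical illustrations.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs threshold named in the EU AI Act

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

examples = [
    ("hypothetical 7B model, 2T tokens", 7e9, 2e12),
    ("hypothetical 70B model, 15T tokens", 70e9, 15e12),
    ("hypothetical 500B model, 20T tokens", 500e9, 20e12),
]

for name, params, tokens in examples:
    c = training_flops(params, tokens)
    print(f"{name}: ~{c:.1e} FLOPs, systemic risk: {c > EU_SYSTEMIC_RISK_THRESHOLD}")
```

Under this rough heuristic, only the largest frontier-scale training runs cross the line, which is consistent with the Act’s focus on the most capable general-purpose models.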
My view on AI regulation centers on the principle of “balance of power”: no single entity should hold more power than all other entities combined. If some AI system’s intelligence exceeded, say, 30% of the cumulative intelligence of all humans and other AI systems, an imbalance of power would emerge, leading to instability. Balance of power can be achieved either via regulation, by preventing monopolization and excessive concentration of AI power, or by promoting decentralization and making powerful AI accessible to a broad population, so that power is spread across as many people as possible and concentration is hard. Such a future can be stable and conducive to rapid advancement.
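Here is a toy sketch of that criterion. Collapsing an entity’s “power” into a single number, and the 30% cutoff itself, are simplifying assumptions for illustration, not a real metric.

```python
# Toy check of the "balance of power" criterion: no single entity should
# hold more than a given fraction (here 30%) of the total power.
# Reducing "power" to one scalar per entity is a simplifying assumption.

def is_balanced(powers: list[float], threshold: float = 0.30) -> bool:
    total = sum(powers)
    return all(p / total <= threshold for p in powers)

print(is_balanced([10, 9, 8, 7]))   # True: no entity exceeds 30% of the total
print(is_balanced([40, 9, 8, 7]))   # False: one entity holds ~62% of the total
```

Decentralization, in this framing, is simply a way of keeping every entry in that list small relative to the sum.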

Share your thoughts in the comments below; I read them all, and they’re one of my favorite parts of writing these posts.
Subscribe and get an email whenever I publish a new post!