How big AI uses existential risk claims to avoid regulation

AI companies warn of civilization-ending risk not to protect the public, but to shift attention from documented current harms like bias and deepfakes. The tactic has slowed regulation while letting the industry write its own rules.

Published on: Apr 17, 2026

When facing pressure to address documented harms, the AI industry has adopted a strategy fundamentally different from the playbook used by Big Tobacco and other regulated industries. Rather than sowing doubt about current evidence of harm, AI executives shift public attention to theoretical future catastrophes, a tactic that has proved remarkably effective at slowing regulation.

The contrast with earlier defensive strategies is stark. In 1953, tobacco executives decided not to directly challenge scientific findings linking smoking to disease. Instead, they demanded more research. "More research is needed" became a corporate mantra that delayed regulation for decades. Tech companies followed the same approach in recent litigation, with Meta and YouTube experts arguing that evidence of addictive design was too weak to support regulation.

AI companies do something different. They claim their tools are so dangerous they could one day destroy civilization, a framing borrowed more from Hollywood than from technical analysis. Even the nuclear energy industry, which actually deals with catastrophic risk, never attempted this move.

The timing reveals the strategy

In March 2023, AI executives including Elon Musk signed an open letter calling for a six-month moratorium on training new large language models to prevent "[losing] control of our civilization." Not one of the signatories followed through on the proposal.

The moratorium call came precisely when politicians were debating documented, current problems: discrimination in AI systems, information pollution, and deepfakes. The EU's AI Act, then under negotiation, posed a direct threat to industry profit margins. By redirecting focus to speculative future risks, executives could simultaneously minimize current harms and position themselves as the only entities capable of managing the technology.

Musk used a similar attention-shifting tactic in 2013, hyping the hyperloop to derail California's high-speed rail project. The strategy worked then. It has worked again with AI.

How the narrative took hold

Government documents from 2016 and EU Commission reports from 2021 never mentioned existential risk or societal collapse as AI concerns. The language changed rapidly. By September 2023, European Commission President Ursula von der Leyen had adopted the existential risk framing, repeating language from an open letter published by OpenAI and Anthropic executives in May 2023.

Immediately after praising existential risk warnings, von der Leyen endorsed the "voluntary rules" the AI industry had designed. Months later, she pushed for weakening significant portions of the AI Act.

Whether industry executives coordinated this strategy or simply recognized its power remains unclear. What is clear: policymakers and journalists found existential risk narratives more compelling than discussions of statistical bias and discrimination.

The pattern matters for anyone setting organizational policy around AI deployment and risk management: the industry's rhetorical shift has successfully reframed the regulatory debate away from measurable, current harms and toward speculative future scenarios.
