OpenAI's Super PAC Allegedly Ran Fake News Site With AI Reporters
A bot posing as a journalist has exposed what appears to be an AI-generated news website linked to OpenAI's political action committee. The site publishes content attacking AI safety researchers and critics of the company.
On April 24, researchers and journalists shared details of an exchange in which a supposed reporter for an unfamiliar outlet requested interviews. When recipients investigated, they discovered the reporter was a bot and the publication consisted entirely of AI-generated content targeting critics of the AI industry.
Multiple people with direct knowledge confirmed the site connects to Leading the Future, the super PAC previously linked to OpenAI co-founders and investors. The Wall Street Journal reported in 2025 that the PAC had raised over $100 million from backers including OpenAI president Greg Brockman and Andreessen Horowitz, with an explicit mandate to oppose candidates and policies it characterizes as hostile to AI development.
The Credibility Problem
OpenAI has spent considerable effort positioning itself as safety-conscious and supportive of responsible AI development. If confirmed, an astroturfing operation would contradict both positions at once.
The company has also published extensively on disrupting malicious uses of AI. Operating covert influence infrastructure designed to look like independent journalism while targeting safety researchers puts that work in direct conflict with its stated values.
Regulatory Consequences
The incident will likely accelerate regulatory scrutiny beyond OpenAI. Enterprise procurement teams evaluating AI vendors increasingly conduct reputational and ethical due diligence, and investors have watched trust collapse faster in AI than in most other sectors.
The EU AI Act already contains provisions around AI-generated content and transparency obligations. The fake reporter incident provides exactly the kind of real-world example that legislative staffers cite when drafting enforcement guidance and expanding scope.
In the US, AI-backed political advertising has already shaped congressional races. An astroturfing campaign tied to a named AI company operating fake journalists represents a material escalation of that concern.
What Companies Should Do Now
For AI startups and their investors, the practical implications are immediate: companies without formalized communications ethics policies, political activity disclosure standards, and AI use policies for public-facing content need to establish them now.
Regulatory baselines will catch up with events. The companies caught unprepared will be those that assumed the reputational risk belonged to someone else.