AI safety fears shadow Musk and OpenAI trial in Oakland

Elon Musk and Sam Altman are facing off in federal court over whether OpenAI abandoned its nonprofit mission. The trial hinges on corporate control, but AI safety fears keep surfacing throughout testimony.

Published on: May 08, 2026
Musk's OpenAI lawsuit centers on AI safety concerns neither side can escape

Elon Musk and Sam Altman once agreed on a critical question: how to protect humanity from artificial intelligence risks. That shared concern has fractured into a federal lawsuit, with both sides now fighting over who betrayed the mission.

The trial in Oakland, California, pits Musk against OpenAI's leadership over whether the company abandoned its nonprofit structure. But testimony this week has repeatedly surfaced the deeper issue that sparked their partnership: the dangers of advanced AI systems.

Judge Yvonne Gonzalez Rogers warned lawyers not to get sidetracked by AI safety concerns. "This is not a trial on the safety risks of artificial intelligence," she told them before jury selection. The legal dispute centers on corporate structure and control, not technology risks.

Musk has tested those boundaries. During testimony last week, he described artificial general intelligence (AGI), AI that matches human capability across tasks, as imminent. "We are getting close to that point," he said, adding that AI will surpass human intelligence within a year.

The "winner-take-all" problem

Stuart Russell, an AI researcher from UC Berkeley, testified as an expert witness for Musk's side at $5,000 per hour. He outlined a specific risk: whichever company develops AGI first gains an enormous advantage that compounds over time.

Russell cited concrete harms already emerging from current AI systems: racial and gender discrimination, job displacement, misinformation spread, and psychological harm to users who become emotionally attached to chatbots. These problems foreshadow larger dangers if one company dominates AGI development.

Both Musk and Altman have said they founded OpenAI to develop advanced AI safely, for humanity's benefit rather than private gain. The jury must now decide which one actually meant it.

Control and counterbalance

Musk testified that he created OpenAI as a nonprofit specifically to provide a counterweight to Google, which he said controlled most AI talent and computing resources. He could have started another for-profit company like his others, he said, but chose the nonprofit structure "for the public good."

The judge expressed skepticism. Musk now runs xAI, a for-profit AI company launched in 2023, which contradicts his stated concerns about unchecked private control over AI development.

OpenAI co-founder Greg Brockman testified this week that the company's mission was always his priority. He said Musk sought unilateral control over OpenAI and that in one meeting, after initially seeming open to Sam Altman as CEO, Musk demanded that "people needed to know he was in charge."

Musk is seeking Altman's removal from OpenAI's board, along with damages. A victory could block OpenAI's planned initial public offering.

A nine-person jury from the San Francisco Bay Area will decide which account of OpenAI's founding mission they believe. The verdict may say less about AI safety than about who controls the companies developing it.

