Musk's lawsuit against OpenAI reveals internal tensions over AI control and nonprofit status
Elon Musk is suing OpenAI over its shift from nonprofit to for-profit structure. The trial, underway in federal court in Oakland, California, is exposing private conversations and disagreements about how AI companies should be governed. Five moments have clarified what's at stake.
Greg Brockman questioned on unfulfilled donation pledge
OpenAI President Greg Brockman testified this week about a $100,000 donation he committed to make to the nonprofit arm but never delivered. Musk's lawyers pressed the point as evidence of accountability failures.
The questioning escalated when a lawyer suggested Brockman should return billions to the charitable side of the organization. Brockman struggled to respond as attorneys probed whether it was appropriate to retain wealth tied to what was originally framed as a nonprofit mission.
AI researcher warns of safety risks, testimony limited
Stuart Russell, an AI researcher at the University of California, Berkeley, testified on Musk's behalf about dangers in advanced AI systems, including cybersecurity vulnerabilities and risks from poorly aligned artificial general intelligence.
Russell argued that the race to develop AGI creates tension between rapid progress and safety. OpenAI's lawyers objected to much of his testimony, and the judge limited what he could say in open court. During cross-examination, OpenAI's legal team noted Russell had not directly assessed the company's internal safety practices.
Alleged settlement text exchange ruled inadmissible
Days before trial began, Musk reportedly contacted Brockman proposing they settle. According to OpenAI's court filing, Brockman suggested both sides drop their lawsuits. Musk's alleged response warned of public backlash if the case continued.
The judge ruled the exchange inadmissible as evidence. Its disclosure fueled speculation about Musk's motivations, with OpenAI arguing the lawsuit is driven by competition and financial interests rather than principle.
Musk acknowledges xAI used AI model 'distillation'
When questioned about whether his AI venture, xAI, used techniques called "distillation" to learn from OpenAI's models, Musk conceded his firm had done so "partly."
Distillation, the practice of using one AI system to train another, is restricted by most leading AI labs because it can erode competitive advantages. OpenAI, Anthropic, and Google have been collaborating to detect and prevent the practice.
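For readers unfamiliar with the technique, the sketch below shows what distillation typically looks like in code. It is a minimal illustration assuming a PyTorch setup with two hypothetical stand-in models named "teacher" and "student"; it is the standard soft-label approach, not a description of how xAI or OpenAI actually train their systems.

```python
# Minimal sketch of knowledge distillation (assumes PyTorch).
# "teacher" and "student" are hypothetical stand-in models for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 4)   # stand-in for a large, already-trained model
student = nn.Linear(16, 4)   # stand-in for a smaller model being trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

T = 2.0                      # temperature softens the teacher's output distribution
x = torch.randn(8, 16)       # a batch of inputs (synthetic here)

with torch.no_grad():
    teacher_logits = teacher(x)   # teacher predictions; no gradients needed

student_logits = student(x)

# KL divergence between softened distributions: the student learns to imitate
# the teacher's full output distribution rather than just its top answer.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)

loss.backward()
optimizer.step()
```

The key point for the dispute is that the "teacher" can be another company's model queried through its API, which is why leading labs restrict the practice in their usage terms.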
Musk describes himself as naive about OpenAI's direction
On the trial's opening day, Musk revisited his early role at OpenAI. He said he contributed $38 million believing the company would remain a nonprofit focused on public benefit, but was misled about its direction.
Musk explained he left OpenAI's board in 2018 due to time constraints at Tesla and SpaceX. He later launched xAI but maintained that his concerns about OpenAI's structure are valid. He also said he declined an equity offer after Microsoft's involvement, calling it inappropriate.
What the trial reveals about AI governance
The case centers on competing visions for how AI should be built and controlled. Musk frames the dispute as a matter of principle: a company abandoning its nonprofit mission. OpenAI argues Musk is motivated by commercial rivalry with a more successful competitor.
For legal professionals, the trial illustrates how AI governance structures affect liability, competitive practices, and regulatory expectations. The admissibility questions around safety testimony and settlement discussions will likely influence how courts handle AI-related disputes going forward.