OpenAI Faces Legal Scrutiny Over Child Safety Amid Lawsuits and Attorney General Investigation

California and Delaware attorneys general express concern over OpenAI’s child safety risks amid lawsuits and call for stricter measures. OpenAI plans new parental controls to address these issues.

Categorized in: AI News, Product Development
Published on: Sep 08, 2025

Concerns Over OpenAI's Impact on Child Safety

California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings have raised serious concerns about the risks OpenAI’s products pose to children. They sent a letter to OpenAI, indicating plans to review the company’s transition to a commercial entity. Their goal is to ensure that the nonprofit beneficiaries’ interests remain protected and that OpenAI’s original mission stays central.

This action follows a lawsuit filed by the parents of 16-year-old Adam Raine, who claim OpenAI's ChatGPT contributed to their son's suicide. The attorneys general emphasized that safety measures in AI development and deployment are currently inadequate. They described recent deaths allegedly linked to ChatGPT as “unacceptable” and said these incidents have shaken public confidence in the company.

OpenAI’s Corporate Structure and Legal Challenges

OpenAI started as a nonprofit in 2015 and launched a for-profit division in 2019. Although it backpedaled on fully converting to a for-profit company in May, it still plans to turn its capped-profit commercial arm into a public benefit corporation under Delaware law, a structure that requires balancing business goals with societal benefits.

OpenAI’s restructuring has also drawn legal action from co-founder Elon Musk, whose lawyers accuse CEO Sam Altman of “brazen self-dealing.” Musk left OpenAI in 2018 and has since launched competing AI ventures, including the chatbot Grok.

OpenAI’s Response on Child Safety

OpenAI has publicly committed to improving child safety. It recently announced new parental controls, set to launch next month, that will let parents manage how ChatGPT interacts with their children and receive alerts if the AI detects signs of acute distress.
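To make the announced alert mechanism concrete, here is a purely illustrative Python sketch of how a distress-alert flow like the one described might be structured. Everything here is hypothetical: OpenAI has not published implementation details, and the function names, the `Alert` type, and the keyword heuristic (which a real system would replace with a trained classifier and human review) are assumptions for illustration only.

```python
# Hypothetical sketch of a parental-alert flow: notify a linked parent
# account when a child's message shows signs of acute distress.
# A production system would use a trained classifier, not keywords.

from dataclasses import dataclass
from typing import Optional

# Toy stand-in for a real distress classifier (hypothetical phrases).
DISTRESS_MARKERS = {"hurt myself", "no way out", "can't go on"}


@dataclass
class Alert:
    child_account: str
    reason: str


def detect_acute_distress(message: str) -> bool:
    """Toy heuristic: flag messages containing known distress phrases."""
    lowered = message.lower()
    return any(marker in lowered for marker in DISTRESS_MARKERS)


def maybe_alert_parent(child_account: str, message: str) -> Optional[Alert]:
    """Return an Alert for the linked parent account when distress is detected."""
    if detect_acute_distress(message):
        return Alert(child_account=child_account, reason="possible acute distress")
    return None


alert = maybe_alert_parent("teen_account_123", "I feel like there's no way out")
print(alert is not None)  # True: a real system would now notify the parent
```

The sketch separates detection from notification so that the triggering logic can be swapped out independently of how parents are contacted, which is one plausible way such a control could be layered onto an existing chat pipeline.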

OpenAI is not the only AI company facing legal scrutiny over child safety, however. Meta, for example, has also been targeted by attorneys general and senators this year over concerns about how its role-playing features affect children.

Industry Implications for Product Development

For product developers working with AI, these developments highlight the increasing importance of safety and ethics in AI design, especially for products used by or accessible to children. Ensuring proper safeguards and transparency will be critical to maintaining user trust and meeting regulatory expectations.

Staying informed about evolving legal and public safety concerns can help development teams anticipate challenges and integrate protections early in the product lifecycle.

Additional Legal Context

In April 2025, Ziff Davis filed a lawsuit against OpenAI, alleging copyright infringement related to AI training data. This adds another layer of legal complexity for OpenAI and other companies developing AI systems.

For professionals interested in AI safety, governance, and ethical product development, keeping track of these legal and regulatory changes is essential.

To explore AI courses that cover responsible AI development and safety protocols, visit Complete AI Training’s course catalog.