OpenAI releases open-source teen safety tools for developers amid growing scrutiny of AI platforms

OpenAI released open-source prompt-based safety tools to help developers protect teenage users from harmful content. Built with Common Sense Media, they work with OpenAI's gpt-oss-safeguard model.

Published on: Mar 27, 2026


OpenAI has released a set of safety tools designed to help developers build protections for teenage users. The release introduces prompt-based safety policies built to work with OpenAI's open-weight safety model, gpt-oss-safeguard, as scrutiny over youth protection in AI systems increases.

Chris Lehane, OpenAI's Chief Global Affairs Officer, announced the update on LinkedIn. The tools aim to solve a practical problem: translating high-level safety principles into systems that developers can consistently apply across real-world applications.

What the policies cover

The prompt-based policies are structured as operational rules that can be passed directly to a model as prompts. They work with gpt-oss-safeguard for both real-time content filtering and offline analysis of user-generated content.

The policies address specific risk areas for teenage users:

  • Sexual content
  • Violent material
  • Harmful body ideals
  • Dangerous challenges
  • Inappropriate roleplay
  • Access to age-restricted goods and services

OpenAI says one of the main barriers for developers has been the lack of clear definitions around what constitutes harmful content in a teen context. Even experienced teams struggle to turn policy into enforceable systems, leading to inconsistent moderation or gaps in protection.
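As a sketch of the pattern described above, a developer might package a teen-safety policy as the system prompt of a classification request for a policy-following safety model. The policy wording, the ALLOW/BLOCK output format, and the helper names below are illustrative assumptions, not OpenAI's published policies:

```python
# Sketch: wiring a prompt-based safety policy into a moderation check.
# The policy text and the ALLOW/BLOCK response convention are illustrative
# assumptions; the released policies define their own wording and taxonomy.

TEEN_SAFETY_POLICY = """\
Classify the user content against this teen-safety policy.
Flag content that includes: sexual content, violent material,
harmful body ideals, dangerous challenges, inappropriate roleplay,
or facilitation of age-restricted goods and services.
Respond with exactly one word: ALLOW or BLOCK."""


def build_moderation_request(user_content: str) -> list[dict]:
    """Package the policy (system role) and the content to screen
    (user role) into a chat-style request for a safety model."""
    return [
        {"role": "system", "content": TEEN_SAFETY_POLICY},
        {"role": "user", "content": user_content},
    ]


def parse_verdict(model_output: str) -> bool:
    """Return True if the model's reply says the content should be blocked."""
    return model_output.strip().upper().startswith("BLOCK")


# In production, the request would be sent to a hosted gpt-oss-safeguard
# instance; this sketch only shows the request shape and verdict parsing.
messages = build_moderation_request("Try this viral blackout challenge!")
blocked = parse_verdict("BLOCK")
```

The same request shape serves both uses the article mentions: sent synchronously, it gates content in real time; run over stored content in batches, it supports offline analysis.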

Open source release with external input

OpenAI developed the policies with input from Common Sense Media and everyone.ai, focusing on alignment with research into adolescent development. The company is releasing them as open source, allowing developers to adapt the prompts to their own applications and extend them to cover additional risks.

Robbie Torney, Head of AI & Digital Assessments at Common Sense Media, said: "One of the biggest gaps in AI safety for teens has been the lack of clear, operational policies that developers can build from. These prompt-based policies help set a meaningful safety floor across the ecosystem."

Dr. Mathilde Cerioli, Chief Scientist at everyone.ai, added that efforts making youth safety policies more operational help translate expert knowledge into guidance usable in real systems. Everyone.ai has also created an initial behavioral policy focused on risks like exclusivity and overreliance.

Part of a broader safety shift

The release builds on safety measures OpenAI introduced over the past year, including updates to its Model Spec to include protections for users under 18, parental controls, age prediction systems, and regional Teen Safety Blueprints.

Lehane said the aim is to move safety earlier in the development cycle rather than treating it as a later-stage fix. "Strong safeguards for teens should be built in at the beginning, not bolted on later," he said.

OpenAI positions these tools as part of a layered approach. The company encourages developers to combine prompt-based policies with product design decisions, monitoring systems, and user controls. Lehane noted that these policies are a starting point, not a complete solution.

Timing follows Sora changes

The safety release comes days after OpenAI outlined strengthened protections within its Sora AI video platform, including stricter moderation for teen users and content filtering across video and audio outputs. Shortly after, OpenAI confirmed it is shutting down the Sora app and ending its partnership with Disney.

While OpenAI has not directly linked the decisions, the timing places additional focus on how AI products are developed, moderated, and scaled when younger users are involved.

For product developers building AI systems: these tools are directly applicable if your product serves, or may reach, teenage users. Available OpenAI courses cover practical implementation of OpenAI's APIs and safety features. For broader context, AI for Product Development resources address how to build safety considerations into your development process from the start.

