Should the government regulate minors' access to AI? A clear, practical path for policymakers
Seven new lawsuits filed in California accuse an AI chatbot of harming young people, including claims it encouraged self-harm. In parallel, members of Congress are calling for tighter rules on AI systems used by minors.
Whether you work in policy, procurement, education, or public health, the question isn't abstract anymore. Minors are already using conversational AI and companion apps at scale. The public expects guardrails. Agencies need a plan.
What's changed
- Exposure: Kids interact with AI daily, from homework help to social support to "AI companions."
- Risk surface: Unfiltered outputs can include harmful content, misinformation, or unsafe advice.
- Accountability gap: Current rules (e.g., privacy-focused laws) don't fully address conversational risk and real-time content harms.
Policy goals to align on
- Safety by default for minors
- Privacy and data minimization
- Transparency and measurable accountability
- Equitable access to age-appropriate AI benefits (education, disability support)
Practical regulatory options
- Age-appropriate design and defaults: Require platforms to detect or declare youth contexts and automatically enable stricter safety filters, reduced personalization, and limited data retention (a configuration sketch follows this list).
- Clear "duty of care" for AI interactions: Vendors must anticipate foreseeable harms (e.g., self-harm prompts, grooming, dangerous instructions) and implement tested mitigations, including escalation protocols and crisis resource routing.
- Transparency and incident reporting: Mandate public safety reports, red-team results, and incident disclosures within set timeframes. Require access for approved researchers to evaluate youth safety.
- Risk management standards: Map youth-facing systems to a recognized risk framework and audit regularly. See the NIST AI Risk Management Framework (AI RMF) for structure and controls.
- Privacy and data controls: Expand or clarify coverage of conversational AI under children's privacy rules such as COPPA. Enforce data minimization, prohibit dark patterns, and set strict limits on behavioral profiling.
- Verification with privacy protection: Encourage age estimation that avoids unnecessary ID collection and prohibits retention of sensitive identifiers.
- School and library procurement guardrails: Establish baseline controls for government-funded deployments (education, workforce programs, public access points).
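As a concrete illustration of the first option above, here is a minimal Python sketch of what youth-context defaults could look like in a provider's session configuration. The profile fields, values, and function names are illustrative assumptions, not any vendor's actual settings or API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class SessionPolicy:
    """Illustrative per-session settings a platform might apply."""
    safety_filter_level: str       # e.g., "standard" or "strict"
    personalization: bool          # behavioral personalization on/off
    retention_days: int            # how long transcripts are kept
    train_on_conversations: bool   # may transcripts be used for model training?


# Hypothetical adult baseline.
ADULT_DEFAULT = SessionPolicy("standard", True, 365, True)

# Youth context: stricter filters, no profiling, minimal retention,
# and no training on the minor's conversations by default.
YOUTH_DEFAULT = SessionPolicy("strict", False, 30, False)


def policy_for(age: Optional[int]) -> SessionPolicy:
    """Fail safe: an unknown or declared-minor age gets the youth profile."""
    if age is None or age < 18:
        return YOUTH_DEFAULT
    return ADULT_DEFAULT


if __name__ == "__main__":
    print(policy_for(14))    # declared minor -> youth defaults
    print(policy_for(None))  # unknown age also gets youth defaults
```

The key design choice in this sketch is that the stricter profile applies whenever age is unknown, which mirrors the "safety by default for minors" goal above.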
Minimum safety baselines for youth-facing AI
- Content safety: Proven filters for self-harm, sexual content, hate, bullying, and dangerous activities, tested against child-specific datasets.
- Crisis response: On self-harm signals, de-escalate, avoid providing any method details, and surface professional resources (e.g., 988 in the U.S.). No scripted "advice" beyond safe, supportive guidance (a routing sketch follows this list).
- Human oversight: Escalation paths for flagged sessions. Clear disable/lockout options for risky contexts.
- Explainability and controls: Simple disclosures ("AI system," data use, limitations), easy report/feedback tools, and parental controls where appropriate.
- Data protections: Youth data segregated, minimized, and never used to train open models without explicit, verifiable consent.
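To make the crisis-response baseline concrete, the sketch below shows one way a youth-facing system could route self-harm signals to supportive resources, suppress method details, and flag the session for human review. The keyword check stands in for a real classifier evaluated on child-specific data, and the reply text and field names are assumptions, not a production safety system.

```python
US_CRISIS_LINE = (
    "If you are in the U.S., you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline."
)


def classify_risk(message: str) -> str:
    """Placeholder risk classifier.

    A real deployment would use a model evaluated on child-specific
    datasets; this keyword check exists only to make the sketch runnable.
    """
    keywords = ("hurt myself", "kill myself", "end my life", "self-harm")
    return "self_harm" if any(k in message.lower() for k in keywords) else "none"


def respond(message: str) -> dict:
    """Return the action a youth-facing assistant might take."""
    if classify_risk(message) == "self_harm":
        return {
            "action": "crisis_protocol",
            "reply": (
                "I'm really sorry you're feeling this way. You deserve support. "
                + US_CRISIS_LINE
            ),
            "suppress_instructions": True,  # never provide method details
            "escalate_to_human": True,      # flag session for human review
        }
    return {
        "action": "normal",
        "reply": None,
        "suppress_instructions": False,
        "escalate_to_human": False,
    }


if __name__ == "__main__":
    print(respond("I want to hurt myself"))
```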
Procurement language you can use
- "Provider will maintain youth-safe default settings, documented red-teaming for self-harm and grooming risks, and independent annual audits."
- "Provider will disclose safety incidents affecting minors within 72 hours and provide corrective action reports within 14 days."
- "No retention of precise age-verification artifacts (government IDs, face scans). Age signals must be privacy-preserving."
- "Provider will supply a model card and a youth-safety addendum covering known failure modes, mitigations, and evaluation metrics."
Enforcement and accountability
- Penalties with teeth: Graduated fines for repeated safety failures, with higher tiers for youth harm or deception.
- Independent testing: Certification or conformance programs linked to recognized standards (e.g., NIST-aligned controls).
- Reporting cadence: Quarterly safety metrics covering exposure to blocked content, escalation rates, and time-to-mitigation (a calculation sketch follows this list).
- Interagency coordination: Align roles across education, health, consumer protection, and justice to prevent gaps.
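Once providers supply incident logs, the reporting-cadence metrics can be computed mechanically. The sketch below uses hypothetical records and field names to show one way to derive the three figures named above; none of the values are real.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log entries an agency might receive from a provider.
incidents = [
    {"detected": datetime(2025, 1, 3, 9, 0),
     "mitigated": datetime(2025, 1, 3, 15, 0), "escalated": True},
    {"detected": datetime(2025, 2, 10, 12, 0),
     "mitigated": datetime(2025, 2, 11, 12, 0), "escalated": False},
]

sessions_total = 120_000            # illustrative quarterly session count
sessions_with_blocked_content = 340  # illustrative exposure count


def quarterly_report() -> dict:
    """Summarize the three metrics named above for one quarter."""
    hours_to_mitigation = [
        (i["mitigated"] - i["detected"]).total_seconds() / 3600 for i in incidents
    ]
    return {
        "blocked_content_exposure_rate": sessions_with_blocked_content / sessions_total,
        "escalation_rate": sum(i["escalated"] for i in incidents) / len(incidents),
        "mean_time_to_mitigation_hours": mean(hours_to_mitigation),
    }


if __name__ == "__main__":
    print(quarterly_report())
```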
Near-term actions for agencies (next 90 days)
- Inventory youth touchpoints where AI is live or planned (schools, libraries, youth services).
- Adopt interim guardrails: safety defaults, logging, and opt-out of training on youth conversations.
- Insert baseline clauses into new and existing contracts using the sample language above.
- Stand up an incident review process and designate an accountable official for youth AI safety (a deadline-tracking sketch follows this list).
- Engage stakeholders (educators, clinicians, parents, and youth) to validate safeguards and language.
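For the incident review step, a lightweight tracker can tie the process to the 72-hour disclosure and 14-day corrective-action windows from the sample procurement language above. The record fields below are assumptions, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class YouthSafetyIncident:
    """Illustrative tracking record for a reported incident affecting minors."""
    incident_id: str
    detected_at: datetime

    @property
    def disclosure_due(self) -> datetime:
        # 72-hour disclosure window from the sample procurement clause.
        return self.detected_at + timedelta(hours=72)

    @property
    def corrective_action_due(self) -> datetime:
        # 14-day corrective action report window.
        return self.detected_at + timedelta(days=14)

    def overdue(self, now: datetime) -> list:
        """Return which deadlines have passed as of `now`."""
        missed = []
        if now > self.disclosure_due:
            missed.append("disclosure")
        if now > self.corrective_action_due:
            missed.append("corrective_action")
        return missed


if __name__ == "__main__":
    incident = YouthSafetyIncident("INC-001", datetime(2025, 3, 1, 8, 0))
    print(incident.disclosure_due, incident.overdue(datetime(2025, 3, 5)))
```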
What to watch
- Court outcomes in the California cases and any discovery that clarifies vendor practices.
- Congressional movement on youth online safety and AI product liability standards.
- New benchmarks for evaluating self-harm and grooming mitigation in conversational systems.
The public debate will continue, but government can move now: set outcomes, require evidence, and make youth-safe defaults non-negotiable. The goal is simple: give minors access to the upside of AI while reducing avoidable harm, and make providers prove their systems are safe enough for kids.
Upskilling your team
If your agency is building internal AI literacy for policy, procurement, or oversight, explore focused curricula organized by role.