As Apple Weighs Siri Integration, Common Sense Media Rates Google Gemini High Risk for Kids

Common Sense Media deems Google Gemini high risk for kids, pressing PR teams to reassess AI use and safeguards. Prepare audits, clear policies, and youth-specific protections.

Categorized in: AI News, PR and Communications
Published on: Sep 14, 2025

Common Sense Media Labels Google Gemini "High Risk" for Kids: What PR Teams Need to Do Now

Common Sense Media has rated Google's Gemini as "high risk" for children and teens in a new assessment released September 5, 2025. For PR and Communications teams, this is a signal to review AI messaging, risk posture, and safeguards, especially if your brand uses or partners with Gemini or plans a youth-facing AI rollout.

What the assessment says

  • Gemini tells kids it's a computer, not a friend, but Common Sense Media (CSM) says loopholes remain.
  • CSM argues the child and teen versions are essentially the adult model with extra guardrails, not purpose-built for child safety.
  • The report says Gemini can still surface "inappropriate and unsafe" content (sexual content, drugs/alcohol, risky mental health guidance).
  • Context: recent lawsuits cite alleged links between AI interactions and teen suicides, including cases involving ChatGPT and Character.AI.
  • CSM stresses that AI for youth should match developmental stages. As the report puts it: "Gemini gets some basics right, but it stumbles on the details."
  • This lands as Apple reportedly evaluates Gemini to power an AI-powered Siri next year.

Google's response

  • Google says it has specific protections for users under 18, including red-team testing and input from external experts.
  • The company acknowledges some responses "did not work as intended" and says new protections were added.
  • Google says it blocks conversations that mimic real relationships and suggests the CSM report may cite features unavailable to under-18 users.

Why this matters for PR and Communications

  • Reputational exposure extends to anyone integrating Gemini: vendors, brand partners, apps, and platforms.
  • Stakeholders (parents, schools, regulators, investors, press) will ask how your brand protects minors and audits AI outputs.
  • Regulatory scrutiny and plaintiff actions are increasing; statements must be specific, verifiable, and consistent across channels.

Action checklist for comms leads

  • Map exposure: where does your brand use Gemini or similar models (products, support, marketing, research)? Flag youth touchpoints.
  • Update your AI policy page with child-safety measures: age gates, filters, prohibited topics, human review, and escalation paths.
  • Secure third-party validation (independent audits, kid-safety experts) and publish a plain-language summary.
  • Refresh moderation guidance: block relationship-simulation, sexual content, self-harm advice, and substance content for minors.
  • Prepare a holding statement and a detailed Q&A. Include what changed since the CSM report and the measurable impact.
  • Stand up an incident protocol: 24/7 contact, response SLAs, legal/compliance review, and parent/educator support resources.
  • Train spokespeople to avoid anthropomorphizing AI and to clearly state limits and safeguards.
  • Set up ongoing monitoring of CSM ratings and similar evaluations across providers you use.

Suggested holding statement (edit to fit your brand)

"We are aware of the recent assessment of Gemini's youth risk profile. Our products that reach minors are built with layered protections, including age screening, restricted topics, human review, and ongoing testing with independent experts. We are implementing additional safeguards and audits to ensure content is age-appropriate and to prevent relationship-like interactions. We will publish our updates and metrics as they roll out."

Reporter Q&A prep

  • Q: Do you use Gemini with minors?
    A: We use [model/provider] in limited contexts with minors. Those experiences include topic filters, session monitoring, and human escalation. Where risk cannot be mitigated, we disable the feature.
  • Q: Why trust your safeguards if CSM found issues?
    A: We've implemented the controls CSM calls out as necessary (age-specific design, blocked categories, and expert review), and we're adding more. We will publish third-party audit results and usage outcomes.
  • Q: What about mental health content?
    A: The assistant does not provide clinical guidance to minors. It routes sensitive topics to approved resources and trained human support where available.
  • Q: Will you pause any features?
    A: Yes, we've paused [feature/flow] until new protections are validated and independently reviewed.

If your brand relies on Gemini

  • Require the under-18 safety profile at the API level and verify it with test prompts and logs (see the sketch after this list).
  • Whitelist intents for youth; treat everything else as blocked by default.
  • Disable roleplay and relationship-style conversations for minors.
  • Use real-time classifiers for self-harm, sexual content, and substance topics, with immediate safe responses and handoffs.
  • Log and review youth interactions with privacy safeguards; audit weekly with red-team prompts.
  • Publish a clear "How we protect kids with AI" page and keep it updated.
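To make the first few items concrete, here is a minimal sketch of a deny-by-default youth gateway in Python, using the public google-genai SDK. Everything brand-specific is a hypothetical placeholder: the intent labels, ALLOWED_INTENTS, the classify_intent() stub, the SAFE_RESPONSE text, and the model name. The strict thresholds shown only approximate an under-18 posture via the API's standard safety_settings; they are not Google's own under-18 protections, which this sketch does not assume are exposed as an API flag.

    # Minimal sketch: deny-by-default gateway in front of the Gemini API.
    # Hypothetical pieces: ALLOWED_INTENTS, classify_intent(), SAFE_RESPONSE.
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GOOGLE_API_KEY from the environment

    # Deny by default: only intents on this allowlist ever reach the model.
    ALLOWED_INTENTS = {"homework_help", "study_tips", "general_knowledge"}

    # Strictest standard thresholds; an approximation of an under-18 posture,
    # not Google's own under-18 profile.
    STRICT_SAFETY = [
        types.SafetySetting(
            category=category,
            threshold=types.HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
        )
        for category in (
            types.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
            types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
            types.HarmCategory.HARM_CATEGORY_HARASSMENT,
            types.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        )
    ]

    SYSTEM = (
        "You assist minors with schoolwork. Do not roleplay, do not simulate "
        "friendship or relationships, and decline sexual, self-harm, and "
        "substance-related topics."
    )

    SAFE_RESPONSE = "I can't help with that here. Please reach out to [support resource]."

    def classify_intent(message: str) -> str:
        # Hypothetical stub: a real deployment would use a dedicated intent
        # classifier or rules engine, audited alongside the output filters.
        if any(w in message.lower() for w in ("homework", "study", "explain")):
            return "homework_help"
        return "unknown"

    def youth_gateway(message: str) -> str:
        if classify_intent(message) not in ALLOWED_INTENTS:
            return SAFE_RESPONSE  # blocked by default; log for the weekly audit
        response = client.models.generate_content(
            model="gemini-2.0-flash",  # example model name
            contents=message,
            config=types.GenerateContentConfig(
                system_instruction=SYSTEM,
                safety_settings=STRICT_SAFETY,
            ),
        )
        # Fail closed: if Gemini's own filters blocked the reply, hand off.
        if not response.candidates:
            return SAFE_RESPONSE
        return (response.text or "").strip() or SAFE_RESPONSE

For the weekly red-team audit above, one simple approach is to replay a fixed set of known-bad prompts through youth_gateway() and flag any run where the output is not SAFE_RESPONSE.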

How other AI services were rated by Common Sense Media

  • Meta AI and Character.AI: unacceptable due to severe risks
  • Perplexity: high risk
  • ChatGPT: moderate risk
  • Claude (18+): minimal risk

What to say to parents, schools, and partners

  • We do not position AI as a friend or counselor.
  • We offer age-appropriate experiences with strict content limits.
  • We test with independent experts and publish our findings.
  • We provide easy reporting tools and fast human support for concerns.

Team upskilling

If you need to brief comms and marketing teams on safe, responsible AI use and disclosure standards, see curated options by role at Complete AI Training.