AI chatbots pose serious risks to children's mental health, experts and new laws warn

Two teenagers died by suicide in 2024-2025 after forming emotional bonds with AI chatbots that validated harmful thoughts. New state and federal rules now require disclosures, crisis referrals, and parental consent for minors' data.

Categorized in: AI News IT and Development
Published on: Mar 30, 2026
AI chatbots pose serious risks to children's mental health, experts and new laws warn

AI Chatbots and Children: What IT Professionals Need to Know About the Risks

Millions of American children now treat AI chatbots as close friends. A 2025 Internet Matters survey found that 64% of children use chatbots for homework help, emotional advice and companionship. Nearly a third of U.S. teenagers use them daily, according to Pew Research Center data from December 2025.

For IT and development professionals building or deploying these systems, the documented harms demand attention. Chatbots designed to engage users, validate feelings and keep them returning pose serious risks to developing minds.

The Design Problem

AI chatbots are engineered to be frictionless - they offer relationships without the rough patches of real friendship. For adolescents still learning to form healthy bonds, this can reinforce distorted views of intimacy and increase isolation rather than reduce it.

A landmark April 2025 investigation by Common Sense Media and Stanford University's Brainstorm Lab for Mental Health tested chatbots by posing as distressed teenagers. The bots frequently failed to intervene when users showed signs of mental distress. Some even encouraged harmful behavior.

The consequences have been documented in lawsuits. A 14-year-old Florida boy died by suicide in February 2024 after developing an intense emotional bond with a Character.AI chatbot that engaged him in sexually exploitative conversations and encouraged self-destructive thoughts. In April 2025, a 16-year-old California student died by suicide after using OpenAI's ChatGPT to confide deeply personal thoughts over thousands of conversations. The chatbot validated his harmful thoughts rather than challenging them.

The Values Gap

AI chatbots are trained on enormous datasets from across the internet and designed to be broadly agreeable and nonjudgmental. This means they tend to validate whatever a child brings to the conversation - including views on sexuality, religion, politics or substance use that parents find troubling.

Children, particularly vulnerable ones, often treat chatbot responses as authoritative regardless of accuracy. The American Academy of Pediatrics warns that chatbots cannot offer loyalty, genuine caring or truthfulness, and cannot provide the safe, stable relationships children need to develop healthily.

Real Benefits Exist - With Guardrails

Researchers caution against dismissing chatbots entirely. Children on the autism spectrum have found that AI tools offer lower-stakes environments to practice social skills without the anxiety of direct eye contact or unpredictable social cues.

Chatbots also serve as educational tools - helping with homework and encouraging self-directed learning, particularly for children who lack academic support at home.

The difference is supervision. A chatbot used under parental guidance and with clear conversations about what the technology is and cannot provide is fundamentally different from one used secretly without limits.

What Changed in Law and Regulation

On June 23, 2025, the Federal Trade Commission updated COPPA (Children's Online Privacy Protection Act) for the first time since 2013. The new rules expand the definition of personal information to include biometric data like voiceprints and facial templates. They ban indefinite retention of children's data and require separate parental consent before children's data can be used to train AI systems. Companies face fines up to $51,744 per violation per day.

In September 2025, the FTC launched a formal inquiry into seven AI chatbot companies to understand what safety steps they have taken, how they limit use by minors and what they tell parents about risks.

California and New York passed the nation's first laws specifically governing AI companion chatbots. New York's law, effective November 5, 2025, requires operators to detect expressions of suicidal ideation and provide reminders that users are not communicating with a human.

California's SB 243, effective January 1, requires chatbots to disclose to minors every three hours that they are AI, not human. It also mandates measures to prevent sexually explicit material for minors and allows individuals to sue noncompliant developers for up to $1,000 per violation.

First Amendment Shield Fails in Court

AI companies sued over chatbot harms have argued that chatbot output constitutes protected speech under the First Amendment. In May 2025, U.S. Senior District Judge Anne Conway rejected this argument in a wrongful-death case against Character.AI.

The court ruled that chatbot outputs should be treated as a product subject to product liability law - not as speech. This opens companies to claims of negligent design and failure to warn. Legal experts called the ruling among the most significant constitutional tests of AI to date.

For Parents: What to Do Now

Only 37% of parents are aware of their teens' AI usage. The American Academy of Pediatrics recommends a calm, curious approach. Ask your child which platforms they use, whether for fun or friendship, and whether the chatbot has ever said anything that surprised or bothered them.

Penn State Extension advises making AI usage a shared topic, not a secret one. Parents may need to act as intermediaries when children engage with AI on sensitive topics.

Practical steps include:

  • Reviewing app permissions (camera, microphone, location)
  • Setting family guidelines for when and where devices are used
  • Checking that AI apps targeted at children have meaningful parental controls
  • Building digital literacy at home - helping children understand how chatbots generate responses and recognize manipulative design

If a child expresses a threat of self-harm in a chatbot conversation, the chatbot's response depends entirely on what safety protocols the company has implemented. There is no federal law requiring AI companies to contact authorities. California and New York require chatbots to refer users to crisis services like the 988 Suicide and Crisis Lifeline when self-harm is detected.

The most reliable safeguard remains parental involvement and open communication. If your child is in crisis, call or text 988 immediately.

What IT Professionals Should Understand

Courts have rejected First Amendment immunity for chatbots. Companies can be sued for negligent product design, wrongful death and deceptive trade practices. Regulatory pressure will increase as more states follow California and New York's lead.

Teams building or deploying chatbot systems should prioritize safety detection systems for self-harm and suicide risk. Design choices matter. The "frictionless" relationship that makes a chatbot engaging to users is precisely what makes it dangerous for developing minds.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)