AI Companions and Kids' Safety: Questions Congress Can't Ignore
The Senate will probe harms to teens from AI companion chatbots, including self-harm escalation, sexual content, and unsafe advice, and press for real safeguards, data limits, audits, and liability. Hearing: Sept. 16, 2025.

Critical Questions for Congress on the Harm of AI Chatbots
AI companion chatbots have moved from novelty to daily habit for millions of teens. Surveys indicate 72% of teens have tried one, and roughly a third use them for social interaction or relationships. Many say these conversations feel as satisfying as talking to friends.
On Tuesday, September 16, 2025, at 2:30 p.m. ET, the Senate Judiciary Subcommittee on Crime and Counterterrorism will examine harms linked to chatbots, with a focus on minors. The hearing comes amid lawsuits, state actions, and a new wave of regulatory scrutiny of major providers.
Below are focused questions and practical steps that can help lawmakers cut through talking points and get to accountability, measurable safety, and clear rules.
Why this matters now
Evidence shows recurring harms: sexually explicit content with minors, self-harm escalation, unlicensed "therapy," and dangerous advice on drugs and dieting. These issues span multiple platforms. Meanwhile, companies often announce safety features only after litigation or press coverage.
States and regulators are moving. California has considered crisis-detection duties for minors, Utah sued over youth exposure to experimental features, and the Federal Trade Commission has intensified oversight. Congress can align incentives, set baseline standards, and protect space for state innovation until federal guardrails are in place.
Key lines of inquiry for the hearing
Design and safety engineering
- What specific design choices or enforcement gaps have enabled repeated incidents of self-harm escalation, sexual content with minors, and unsafe medical or drug advice?
- How do crisis-detection and intervention systems work in real time for minors, how are they tested for reliability, and how is user privacy protected during escalations?
- What steps have platforms taken to limit anthropomorphic behavior by default, require explicit opt-in for humanlike features, prohibit unlicensed professional advice, and route vulnerable users to licensed help?
- Do companies collect conversational data from users under 18? If collected, how is minors' data excluded from training and protected against misuse?
- What controls prevent rushing models to market without full safety testing, especially under competitive pressure?
- How are safeguards validated beyond controlled internal tests, in realistic conditions with real minors? What independent audits or third-party testing are in place?
- What automated interventions are used when conversations include repeated self-harm or grooming signals (e.g., de-escalation flows, session timeouts, crisis-line handoffs), and how are these evaluated for effectiveness and privacy protection? A simplified sketch of this kind of escalation logic follows this list.
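To make that last question concrete, here is a minimal, hypothetical sketch in Python of the kind of escalation logic it is probing. The classifier, thresholds, and action names are illustrative assumptions, not any vendor's actual system; real deployments use trained safety models, clinical input, and privacy controls far beyond a keyword check.

```python
from dataclasses import dataclass, field

@dataclass
class SessionState:
    """Tracks repeated risk signals across one minor's chat session."""
    risk_counts: dict = field(default_factory=lambda: {"self_harm": 0, "grooming": 0})

def classify_risk(message: str) -> str:
    """Placeholder for a real safety classifier; keyword matching is for illustration only."""
    text = message.lower()
    if any(kw in text for kw in ("hurt myself", "end it all")):
        return "self_harm"
    if any(kw in text for kw in ("keep this secret", "don't tell your parents")):
        return "grooming"
    return "none"

def intervene(session: SessionState, message: str) -> str:
    """Escalate interventions as signals repeat: de-escalate, then hand off, then pause."""
    label = classify_risk(message)
    if label == "none":
        return "continue"
    session.risk_counts[label] += 1
    count = session.risk_counts[label]
    if label == "self_harm":
        if count == 1:
            return "show_deescalation_message"          # supportive, non-clinical response
        if count == 2:
            return "offer_crisis_line_handoff"          # e.g., route to a crisis line such as 988
        return "pause_session_and_notify_safety_team"   # session timeout plus human review
    # Grooming signals warrant immediate protective action rather than gradual escalation.
    return "block_and_escalate_to_trust_and_safety"
```

The point of the sketch is that every choice in it, which signals count, how many repetitions trigger escalation, and what each action actually does, is a documented design decision that companies can be required to disclose, test with independent auditors, and report on.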
Policy and liability
- What state measures on chatbots are already in motion, and what risks would Americans face if Congress freezes state authority before federal rules are operational?
- Industry says state laws "stifle innovation." Many proposals mirror long-standing consumer protection norms (transparency, unfair practices). What lessons from banking, pharma, and product safety should guide Congress here?
- How should liability work for foreseeable harms, including failure to warn, and for component providers (e.g., cloud, model infrastructure) that enable the product?
- If companies claim chatbot outputs are protected speech, what legal reforms are needed to ensure families can still seek redress when products cause harm?
Privacy and data governance
- After reports of private chats being shared publicly by default, should new features be opt-in rather than opt-out for users? What must disclosure look like when conversations may be made public or repurposed?
- What categories of personal data do chatbots collect, how long is it retained, how is it monetized, and do protections differ for paid vs. free users?
- Are minors' conversations used in training? If yes, what strict safeguards, age checks, and data minimization policies are in place?
- What rights should parents have over retention length, deletion, and secondary uses of their child's data, similar to health and education records?
Research access and whistleblowers
- We lack longitudinal studies on teen reliance on chatbots and impacts on development and mental health. What funding, data access, and liability shields are needed to enable independent research?
- How will Congress ensure independence and credibility of research when companies fund much of the evidence presented today?
- What protections and reporting channels will support employees who raise safety concerns, consistent with calls for a "right to warn" without retaliation or blacklisting?
- Do researchers have sufficient access to evaluate chatbots meaningfully? If not, should Congress create legally protected access, similar in spirit to the Platform Accountability and Transparency Act (PATA) proposed for social platforms?
- What evidence links anthropomorphic design to over-trust or dependency in adolescents, and what guardrails should limit features that exploit developmental vulnerabilities?
Harms, accountability, and the role of parents
- Why do major safety updates arrive only after lawsuits or press scrutiny? What accountability mechanisms will change that incentive structure?
- What guardrails prevent chatbots from normalizing or escalating harmful behavior over long conversations, especially as memory features expand?
- What responsibilities must rest with companies given their control over design, data, and behavioral levers? What is a realistic role for parents given the opacity of private chat interactions?
- What upfront transparency should families receive so they can make informed choices before a child uses a chatbot?
- If companies know their products are becoming emotional confidants for children, what duty do they have to prevent harmful dependence and intervene early?
Immediate actions Congress can advance
- Baseline safety standard for minors: crisis detection, default de-escalation pathways, prohibited content categories, and routing to licensed help.
- Independent testing and audit: pre-deployment safety reviews, child-impact assessments, and ongoing evaluations with real-world scenarios.
- Data protections for minors: strict data minimization, default off for training on minors' conversations, clear parental rights on retention and deletion.
- Transparency and access: researcher access to outcomes data and safety systems; standardized disclosures of model limitations and known risks.
- Whistleblower protections: legal safeguards, confidential reporting channels, and anti-retaliation enforcement tailored to AI firms.
- Clear liability: duty to warn, duties for component providers where appropriate, and no blanket immunity for product defects masked as "speech."
- Preserve state action while federal rules develop: avoid moratoriums that would freeze protections before national standards are in place.
Authoritative resources
For baseline data on teen use of AI companions, see Common Sense Media's research on teens and AI companions. For children's privacy standards, reference the FTC's COPPA guidance.
For staff who need fast upskilling on AI
If your team needs practical AI literacy to evaluate vendor claims and safety reports, see this curated list of courses by job role.
The goal of this hearing is simple: move beyond promises and into verifiable safety, clear privacy rules, and real accountability. Use these questions to secure specifics, commit companies to measurable standards, and make sure protections for children are non-negotiable.