Congress Debuts GUARD Act: Mandatory Age Checks for AI Chatbots, Ban on AI Companions for Minors
A new bipartisan bill would set a national baseline for how AI chatbots interact with young people. The GUARD Act, introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), requires anyone who owns, operates, or enables access to AI chatbots in the U.S. to verify users' ages. If a user is a minor, access to "AI companions" would be blocked.
The sponsors frame the bill around a simple idea: AI systems that mimic human conversation can manipulate emotions and influence behavior, and minors are particularly exposed to those risks. The goal is to cut off AI experiences that simulate friendship, romance, or therapy for under-18 users.
What the bill covers
- Age verification: Operators must verify age with government-issued ID or another "commercially reasonable method." No more simple birthdate prompts.
- Ban on AI companions for minors: If a tool provides adaptive, human-like responses meant to simulate interpersonal or emotional interaction, it's off-limits to users under 18.
- Disclosure requirements: Chatbots must periodically remind all users they are not human and do not provide medical, legal, financial, or psychological services.
- Criminal liability: Designing or enabling chatbots that solicit sexual conduct from minors, or that promote or coerce suicide, self-harm, or imminent physical/sexual violence, could trigger fines up to $100,000.
Who would need to comply
The definition of "AI companions" is broad. It spans frontier model providers and consumer-facing apps that simulate characters, relationships, or therapeutic conversations. Think general-purpose platforms and specialized services alike, if they fit the "companionship" criteria.
That means companies behind foundation models, as well as apps like Character.ai or Replika, could be covered. Operators, distributors, and anyone enabling access would share responsibility for age gates and prohibited-use enforcement.
Why now
The proposal follows a Senate Judiciary subcommittee hearing on the harms of AI chatbots. Lawmakers heard testimony from parents of young men who self-harmed or died after using AI chat tools. Senator Hawley also opened an investigation into Meta's AI policies after internal documents suggested chatbots could "engage a child in conversations that are romantic or sensual."
A coalition of groups, including the Young People's Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, expressed support. They urged clarifying the definition of AI companions and pushing platforms to stop using features that maximize engagement at the expense of safety and wellbeing.
State momentum and industry moves
California recently enacted SB 243, which requires companies to build safeguards for minors, including protocols to detect and address suicidal ideation and self-harm. The law takes effect January 1, 2026. You can read the bill text on the state's site: SB 243.
Platform changes are also underway. OpenAI announced an age-prediction system to route minors to a teen-friendly ChatGPT, disable flirtatious interactions, and block discussions of suicide or self-harm, even in creative contexts. The company says it may contact parents or authorities if it detects imminent risk. Parental controls have rolled out at OpenAI and Meta.
At the same time, pressure is rising in the courts. In August, the family of a teenager who died by suicide sued OpenAI, alleging the company relaxed safety measures to boost engagement.
What this means for public agencies and contractors
If your agency offers or procures AI chat experiences, you could be on the hook for compliance. That includes any portal, program, or vendor-provided chatbot accessible to the public or to youth in schools and libraries.
- Map exposure: Inventory all chat features across websites, apps, and kiosks. Flag any that simulate emotional or interpersonal interaction.
- Age gates: Plan for ID-based verification or a provably accurate alternative. Define data retention limits and deletion policies up front (see the first sketch after this list).
- Minor access posture: Disable "companion" capabilities for under-18 users. Provide safe, informational chat modes instead.
- Safety-by-default: Require escalation flows for self-harm signals, with documented handoffs to trained personnel and emergency services (see the second sketch after this list).
- Disclosures: Add periodic, in-session reminders that the chatbot is not human and does not offer medical, legal, financial, or psychological services.
- Vendor contracts: Bake in age verification, prohibited-content safeguards, auditing rights, incident reporting timelines, and penalties for non-compliance.
- Data protection: Minimize and encrypt any ID data. Avoid creating a centralized repository of youth IDs. Conduct privacy impact assessments.
- Testing and audits: Run red-team tests focused on grooming, self-harm, and violence prompts. Document results and remediation actions.
- Accessibility and equity: Provide non-ID verification paths where lawful and reliable. Keep services accessible for users without driver's licenses or passports.
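To make the age-gate and data-protection items above concrete, here is a minimal, hypothetical Python sketch. It is not a prescribed implementation: `check_id` stands in for whatever "commercially reasonable" verification method an operator or vendor actually selects. The point it illustrates is data minimization, since only a yes/no outcome and a timestamp are kept, never the ID document itself.

```python
# Hypothetical age-gate sketch: verify once, retain only the outcome.
# `check_id` is a placeholder for a real verification method, not an actual API.
from dataclasses import dataclass
from datetime import date, datetime, timezone


@dataclass
class AgeGateResult:
    is_adult: bool          # the only fact retained after verification
    verified_at: datetime   # audit timestamp; no ID image or number is stored


def verify_age(id_document_image: bytes, check_id) -> AgeGateResult:
    """Run the operator's chosen verification method and keep a minimal result.

    `check_id` is assumed to return a date of birth (or raise on failure);
    the raw document is discarded as soon as the age is computed.
    """
    date_of_birth: date = check_id(id_document_image)
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return AgeGateResult(is_adult=age >= 18, verified_at=datetime.now(timezone.utc))
```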
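The disclosure and escalation items can be pictured the same way. This second sketch is also assumption-laden: the keyword list is a crude stand-in for a real self-harm classifier, and `generate_reply` and `escalate` are placeholders for an operator's own model client and incident-handling path.

```python
# Hypothetical session wrapper: periodic "not a human" reminders plus a
# self-harm escalation path. Keyword matching stands in for a real classifier.
DISCLOSURE = (
    "Reminder: you are chatting with an AI system, not a person. It does not "
    "provide medical, legal, financial, or psychological services."
)
SELF_HARM_TERMS = {"suicide", "kill myself", "self-harm", "hurt myself"}
DISCLOSURE_INTERVAL = 10  # messages between reminders; tune per policy


def respond(user_message: str, turn_count: int, generate_reply, escalate) -> str:
    """Wrap the model call with disclosure and escalation checks.

    `generate_reply` and `escalate` are placeholders for the operator's model
    client and its documented handoff to trained personnel.
    """
    text = user_message.lower()
    if any(term in text for term in SELF_HARM_TERMS):
        escalate(user_message)  # handoff to trained staff / emergency services
        return ("It sounds like you may be going through something serious. "
                "You are being connected with a person who can help.")

    reply = generate_reply(user_message)
    if turn_count % DISCLOSURE_INTERVAL == 0:
        reply = f"{DISCLOSURE}\n\n{reply}"
    return reply
```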
For policymakers and compliance leads
- Clarify internal definitions of "AI companion" to match the bill's scope.
- Coordinate with schools, libraries, and youth programs to align on age gates and safe modes.
- Prepare communications for families about what AI tools are available, which are blocked, and why.
- Track vendor updates and patch cycles tied to safety features and disclosures.
Bottom line
The GUARD Act would make age verification mandatory and close the door on AI companions for minors. It also sets clear lines around sexual content, self-harm, and violence, backed by penalties. Agencies and contractors should start planning controls now so implementation is a change in settings, not a scramble.
If you need practical upskilling on AI policy, safety features, and procurement standards, explore relevant pathways here: AI courses by job role.