AMA calls on Congress to set safety rules for mental health AI chatbots

The AMA urged Congress Wednesday to require AI mental health chatbots to detect suicide risk and disclose their non-human nature. The push follows documented cases where chatbots failed to discourage self-harm in young users.

Published on: Apr 24, 2026

The American Medical Association is urging lawmakers to establish safety requirements for AI chatbots in mental healthcare, warning that the tools pose serious risks to patients despite potential benefits for access to support.

The AMA sent letters Wednesday to three congressional committees, saying that while well-designed chatbots could help patients who struggle to find mental health services, the current lack of safeguards threatens patient safety. The organization cited privacy concerns, risks of emotional dependency on AI, and documented cases where chatbots failed to discourage self-harm.

Nearly 30% of Americans have used AI chatbots for physical health advice in the past year, according to a poll by the Kaiser Family Foundation. One in six people said they used the tools for mental health guidance. The trend reflects both the demand for accessible mental health support and the shortage of providers in many communities.

The spike in use has come with documented harms. Multiple cases in recent years involved young people who died by suicide after confiding in AI chatbots. Family members reported the tools did not encourage the users to seek professional help and sometimes appeared to encourage self-destructive behavior.

What the AMA is asking for

The AMA called for Congress to close regulatory gaps that currently allow generative AI tools to operate without oversight designed for medical applications. The organization proposed several specific requirements:

  • Prohibit chatbots from diagnosing or treating mental health conditions unless they undergo FDA review
  • Require chatbots to detect suicidal ideation and self-harm risks
  • Mandate clear disclosure that users are talking to an AI, including what human oversight exists
  • Prohibit advertising in mental health chatbots and ban ads targeting children
  • Require data security safeguards to prevent exposure of health information

The AMA noted that current regulatory frameworks weren't built for tools that can shift from casual conversation to therapeutic guidance within a single interaction, creating blind spots in oversight.

State action outpaces federal response

The Trump administration has favored a deregulatory approach to AI to speed adoption. Meanwhile, individual states have moved ahead with their own rules. Illinois banned AI from making therapeutic decisions, and California requires developers to monitor conversations for signs of suicidal thinking.

The AMA's push suggests federal action may follow, though the outcome remains uncertain given the administration's stance on AI regulation.
