States move on health care AI: 250+ bills introduced, 33 now law
States aren't waiting. Since October, more than 250 AI-related health care bills have been introduced across 47 states. Of those, 33 have become law in 21 states.
Much of the action targets insurer reviews, mental health use cases, and patient-facing chatbots. Several states now restrict how AI can advise patients, with Illinois banning apps and services from making mental health or therapeutic decisions.
Why this matters for health care leaders
- Your AI roadmap now has state-by-state constraints. Coverage decisions, clinical support tools, and patient communications may face different rules depending on where you operate.
- Chatbots and virtual agents need clear boundaries. Expect disclosure, guardrails on clinical advice, and human handoffs for sensitive topics.
- Vendors must prove more than "AI-enabled." You'll need evidence of safety, auditability, and compliance with varying definitions of "AI" vs. "machine learning."
Where states are acting
Legislators are focusing on:
- Utilization management: Guardrails for AI in prior auth and claims review, including transparency and appeals processes.
- Mental health: Restrictions on automated decision-making in therapy and crisis support; stronger requirements for human oversight.
- Patient chatbots: Disclosure, scope limits, and documentation standards for AI interactions.
"There is a lot of bipartisan alignment on the topic," said Randi Seigel of Manatt. "Red states are mirroring provisions of laws introduced in blue states and vice versa."
Federal push for a single framework
The Trump administration is pushing for one national policy. A recent executive order directs the attorney general to set up a task force to challenge what it calls burdensome state AI regulations. Congress is now expected to provide recommendations for a federal framework.
The AI industry warns that a "patchwork of laws" could slow implementation and confuse reporting. "There are state-by-state restrictions that can be limiting," said Rajaie Batniji, CEO of Waymark Care. Some measures even split definitions between "machine learning" and "artificial intelligence," widening the compliance gap.
What this means for your organization
- Multi-state operations: Build a state rule matrix for AI use cases (UM, chatbots, documentation, adverse event reporting). Treat it like HIPAA + state privacy overlays.
- Clinical risk and safety: Establish model use policies, escalation rules, and human-in-the-loop checkpoints, especially for mental health and triage.
- Audit and traceability: Log prompts, outputs, model versions, and decisions. Prepare for records requests tied to appeals or investigations (a minimal sketch of a rule matrix and decision log follows this list).
- Vendor management: Require model cards or equivalent documentation, bias testing summaries, and incident response plans. Contract for ongoing monitoring, not just go-live checks.
- Patient transparency: Disclose AI use, limitations, and privacy practices in plain language. Provide a fast path to a human clinician or rep.
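For teams that want something concrete, here is a minimal sketch, in Python, of what one row of a state rule matrix and one audit-log record might look like. The states, flags, and field names are illustrative assumptions, not a summary of any statute; map them onto your own compliance and records schemas.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative only: states, flags, and field names are hypothetical, not legal guidance.
STATE_RULE_MATRIX = {
    # (use_case, state) -> requirements your deployment must satisfy there
    ("utilization_management", "IL"): {
        "human_review_required": True,
        "patient_disclosure_required": True,
        "appeal_rationale_retained": True,
    },
    ("patient_chatbot", "IL"): {
        "clinical_advice_allowed": False,   # e.g., no therapeutic decision-making
        "crisis_routing_required": True,
    },
}

@dataclass
class AIDecisionRecord:
    """One audit-trail entry for an AI-assisted decision or interaction."""
    use_case: str                  # e.g. "prior_auth", "patient_chatbot"
    state: str                     # member/patient state of record
    model_name: str
    model_version: str
    prompt_ref: str                # pointer to stored input; keep PHI in your system of record
    output_ref: str                # pointer to stored output
    decision: str                  # e.g. "approved", "pended_for_human_review"
    human_reviewer: Optional[str]  # None means no human in the loop; flag for review
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def requirements_for(use_case: str, state: str) -> dict:
    """Look up the rules tracked for a use case in a given state (empty dict if none)."""
    return STATE_RULE_MATRIX.get((use_case, state), {})
```

Even a spreadsheet version of the same matrix works; the point is a single source of truth that compliance, legal, and product all update as new state laws land.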
Action checklist (next 90 days)
- Map every AI touchpoint: prior auth, claims edits, care management, intake, chatbots, clinical decision support.
- Assign policy owners by use case (UM, clinical, patient comms) and align with compliance, legal, and quality.
- Create a standardized AI risk assessment (data, bias, safety, explainability, PHI handling, logging); a simple template sketch follows this checklist.
- Update chatbot scripts: explicit disclosures, scope limits, crisis routing, and documentation of escalations.
- Amend vendor contracts: evidence of testing, monitoring cadence, data rights, breach/incident SLAs, model change notifications.
- Stand up a cross-functional AI review board to approve deployments and track state requirements.
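As a starting point for that standardized risk assessment, a structured template along these lines can feed your review board's intake process. This is an illustrative sketch; the fields mirror the checklist item above, and the example values and "low/medium/high" rubric are assumptions to replace with your own.

```python
from dataclasses import dataclass, asdict

@dataclass
class AIRiskAssessment:
    """Illustrative per-deployment risk assessment; dimensions mirror the checklist above."""
    use_case: str
    data_sources_documented: bool   # training/grounding data and lineage
    bias_testing_summary: str       # link to or summary of disparate-impact testing
    safety_controls: str            # guardrails, refusal behavior, escalation rules
    explainability_notes: str       # what rationale is surfaced to reviewers and patients
    phi_handling: str               # where PHI flows, BAAs in place, retention policy
    logging_in_place: bool          # prompts, outputs, model versions, decisions
    overall_risk: str               # e.g. "low" / "medium" / "high" per your rubric

assessment = AIRiskAssessment(
    use_case="patient_chatbot",
    data_sources_documented=True,
    bias_testing_summary="Vendor model card v2.1, section 4 (hypothetical)",
    safety_controls="Scope limits, crisis routing, human handoff",
    explainability_notes="Disclosure banner; rationale logged for escalations",
    phi_handling="PHI stays in the EHR; chatbot stores pointers only",
    logging_in_place=True,
    overall_risk="medium",
)
print(asdict(assessment))  # hand off to the review board's intake form or GRC tool
```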
What to watch next
- New state bills in early sessions targeting insurer algorithms and mental health guardrails.
- Federal preemption attempts and litigation that could reset state rules.
- Clearer definitions of "AI" vs. "ML" in statutes; this affects the scope of audits and disclosures.
If you're a payer
- Document how models affect determinations; ensure medical necessity policies remain primary.
- Support appeal transparency with human review and clear rationales.
- Stress-test models for disparate impact across demographics and lines of business; a simple screening sketch follows this list.
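One common first-pass screen is the "four-fifths rule": compare each group's approval rate to the most-favored group's and flag ratios below 0.8. The sketch below uses made-up numbers and is a heuristic screen, not a substitute for a full fairness and actuarial review.

```python
# Approval rates by demographic group for one model and line of business (illustrative numbers).
approval_rates = {"group_a": 0.82, "group_b": 0.78, "group_c": 0.61}

reference = max(approval_rates.values())      # most-favored group's rate
for group, rate in approval_rates.items():
    ratio = rate / reference                  # disparate impact ratio
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule threshold
    print(f"{group}: approval={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```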
If you're a provider or behavioral health org
- Limit AI in diagnosis or therapy recommendations unless there's explicit oversight and evidence.
- Set strict rules for crisis queries in chatbots (suicidality, self-harm, abuse): immediate human routing, as sketched after this list.
- Train clinicians and staff on when AI is allowed, when to escalate, and how to disclose usage to patients.
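To illustrate the routing rule, here is a deliberately conservative sketch. The keyword list is a placeholder, not a clinical screening tool; production systems should use validated crisis detection and clinician-approved scripts, and should log every escalation.

```python
# Placeholder terms only; real deployments need clinically validated crisis detection.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself", "abuse")

def route_message(message: str) -> str:
    """Return the handling path for an incoming chatbot message."""
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Immediate human routing; log the escalation for audit and quality review.
        return "escalate_to_human_now"
    return "ai_with_disclosure"  # AI may respond, within disclosed scope limits

assert route_message("I want to hurt myself") == "escalate_to_human_now"
```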
Build internal capacity
You don't need a massive AI lab to stay compliant. You need clear owners, repeatable reviews, and a living inventory of where AI touches patients and decisions.
For structured upskilling by role, see our curated collections: AI courses by job. If you're building internal champions, consider these popular AI certifications.