Vermont weighs new guardrails on AI and neurotechnology in health care
Vermont lawmakers are reviewing two bills, H.814 and H.816, aimed at tightening protections around artificial intelligence and neurotechnology in clinical and mental health settings. The focus: patient privacy, clear disclosures, and keeping licensed professionals in charge of care decisions.
"The rapid development of artificial intelligence and neurotechnology raises legitimate questions about privacy, autonomy, transparency and protection from misuse," said Stephanie Winters, deputy executive director of the Vermont Medical Society and Vermont Psychiatric Association, in testimony to the House Committee on Health Care on Feb. 27.
What H.814 would require
- Regulate therapy chatbots that act as stand-ins for mental health professionals.
- Require clear disclosures when generative AI is used in any health care setting.
- Restrict health plans from using AI to deny, delay, or modify care decisions, deferring those decisions to a licensed human provider.
- Protect brain-computer interface (BCI) data: patients must consent to sharing, can revoke consent at any time, and records must be destroyed within 10 days of revocation.
- Direct Vermont's Artificial Intelligence Advisory Council to report to lawmakers on ethical, responsible AI use in health care, human services, and education by next January.
What H.816 would do
- Regulate how mental health professionals use AI, with a prohibition on advertising services that use AI to provide therapeutic judgment, diagnosis, or treatment.
Support and concerns from the field
Lynn Currier, executive director of the National Association of Social Workers, backed H.814 and urged a ban on AI therapy chatbots. "At best, it's unlicensed practice. At worst, anything that's attached to a large language model is potentially very dangerous," she told the committee on Feb. 25.
Dr. Rick Barnett, a licensed psychologist and chair of the Legislative Committee for the Vermont Psychological Association, said the organization supports H.814 but cautioned against over-limiting tools the U.S. Food and Drug Administration has encouraged. "We don't want to get dinged, violating these new laws for using an FDA approved product," he said on Feb. 25. For context, see the FDA's resources on AI/ML-enabled medical devices.
Winters agreed with H.816's intent (AI should not replace licensed professionals) but warned that, as written, it could block useful tools and add red tape. She recommended a supervised-use exception allowing AI-assisted outputs when they are reviewed and approved by a licensed clinician.
What this means for clinicians, practices, and health plans
- Disclosures: Build a simple, standard script for when and where AI is used (front desk chatbots, documentation support, triage, etc.). Train staff on consistent messaging.
- Human-in-the-loop: Require clinician review for any AI-influenced clinical judgment. Note that review in the chart.
- Consent and revocation (BCI and biosignal tools): Update intake forms to cover data use, revocation rights, and a 10-day destruction process. Test the workflow end-to-end.
- Payer interactions: If an AI-only utilization decision appears in an explanation of benefits (EOB) or payer portal, escalate for a licensed provider review. Document all communications.
- Vendor contracts: Add clauses for data minimization, PHI boundaries, audit logs, model versioning, and guaranteed data deletion on revocation within 10 days.
- Risk management: Pilot AI tools in controlled rollouts, monitor for bias and error rates, and define an incident response plan for AI-related safety events.
- Documentation: Capture when AI assisted with notes, coding suggestions, or decision support, including who reviewed it and the final clinical decision.
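To make the revocation workflow above concrete, here is a minimal sketch of deadline tracking for the bill's 10-day destruction requirement. The record fields (`revoked_on`, `destroyed`) and patient IDs are hypothetical illustrations, not part of the legislation; a real system would tie into your EHR and audit logging.

```python
from datetime import date, timedelta

DESTRUCTION_WINDOW_DAYS = 10  # H.814's 10-day destruction window after revocation

def destruction_deadline(revocation_date: date) -> date:
    """Date by which records must be destroyed once consent is revoked."""
    return revocation_date + timedelta(days=DESTRUCTION_WINDOW_DAYS)

def overdue_records(records: list[dict], today: date) -> list[dict]:
    """Flag records whose consent was revoked but that were not destroyed in time."""
    return [
        r for r in records
        if r.get("revoked_on") is not None      # consent was revoked
        and not r.get("destroyed", False)       # data still exists
        and today > destruction_deadline(r["revoked_on"])
    ]

# Hypothetical records for illustration
records = [
    {"id": "pt-001", "revoked_on": date(2026, 3, 1), "destroyed": False},
    {"id": "pt-002", "revoked_on": date(2026, 3, 10), "destroyed": False},
    {"id": "pt-003", "revoked_on": None, "destroyed": False},
]
late = overdue_records(records, today=date(2026, 3, 15))
print([r["id"] for r in late])  # pt-001 is past its March 11 deadline
```

A nightly job running a check like this, with alerts to a compliance owner, is one way to "test the workflow end-to-end" as recommended above.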
Questions to ask your AI vendors now
- Regulatory status: Is it FDA-cleared or registered? What's the intended use and clinical evidence?
- Data practices: What PHI is collected? Where is it stored? Can you certify deletion within 10 days of revocation?
- Safety and oversight: What guardrails prevent diagnostic or therapeutic advice without clinician review? Do you provide audit logs and explainability artifacts?
- Compliance roadmap: How will you align with H.814/H.816 requirements? Will you sign a contract addendum reflecting these obligations?
Timeline and next steps
Lawmakers are continuing to take testimony and have not voted on either bill. If passed, expect operational guidance to follow from state bodies and professional associations. It's smart to prepare your disclosures, consent workflows, and vendor agreements now.