New York advances bill to ban AI chatbots from giving legal or medical advice - and lets users sue

New York moves to ban AI legal and medical advice and to require clear AI disclosures. Users could sue chatbot owners for violations, and a warning label won't erase liability.

Published on: Mar 04, 2026

New York bill would ban AI legal and medical advice - and let users sue

New York lawmakers are advancing a bill that draws a clear line: AI chatbots cannot provide legal or medical advice. Senate Bill S7263 passed out of the Internet and Technology Committee on a 6-0 vote and targets systems that impersonate licensed professionals or provide substantive guidance that would violate licensing rules or constitute the unauthorized practice of law (UPL).

The proposal also requires "clear, conspicuous, and explicit" AI disclosures in the same language as the chatbot and in a readable font size. Crucially, that disclosure is not a shield - the bill states it does not absolve owners from liability.

Enforcement has teeth. The bill creates a private right of action allowing users to sue chatbot owners and seek damages and attorney's fees. Supporters point to the deterrent value of private suits; Maine Attorney General Aaron Frey has described this mechanism as having a "significant deterrent effect."

If enacted, the law would take effect 90 days after the governor's signature. It is part of a broader package that addresses risks to minors in chatbots, imposes notice requirements on generative AI, and sets new rules for biometric data and synthetic content. The package follows high-profile settlements involving Character.AI and Google tied to the suicides of several minors.

State Sen. Kristen Gonzalez, who chairs the technology committee, framed the agenda plainly: innovation should not come at the expense of safety, especially for children. "People deserve real care from real people. They deserve transparency, accountability, and the promise that their data is secure while utilizing technology."

Why this matters for legal teams

If your firm, legal department, or vendor stack includes any chatbot that fields legal queries, this bill lands squarely in your risk surface. Disclaimers alone won't save you; the text explicitly says disclosure does not eliminate liability.

Expect plaintiff-side testing once a private right of action is available. Intake bots, marketing assistants, and client-facing Q&A tools will need tighter guardrails to avoid "substantive" guidance that crosses into UPL.

What the bill would require (practical read)

  • No impersonation of licensed professionals (e.g., "your lawyer," "your doctor").
  • No substantive responses, information, or advice that would violate licensing laws or constitute UPL.
  • Mandatory AI notice that is clear, conspicuous, explicit, in the same language as the chatbot, and in a readable font size.
  • Disclosure is not a safe harbor: owners can still face liability.
  • Private right of action with potential damages and attorney's fees.
  • Effective 90 days after the governor signs.

Immediate actions for law firms and in-house counsel

  • Inventory every bot: public site chat, client portals, intake, matter triage, and internal assistants that could reach clients.
  • Disable or strictly limit legal guidance: gate outputs to general process info, routing, and resource links - not advice.
  • Rewrite prompts and system rules: prohibit advice, opinions, or document drafting without human review and sign-off.
  • Rework UI copy and placement: add clear AI notices near the input box and send button; use plain language and adequate font size.
  • Block professional impersonation: ban titles like "attorney," "paralegal," or "doctor" in bot persona and outputs.
  • Escalation-by-default: any request hinting at law-specific facts should route to a human with a timestamped handoff.
  • Log interactions: retain prompts, outputs, and escalation events for audit and defense; set retention and access controls.
  • Review vendor contracts: add UPL/medical-advice prohibitions, indemnities, logging, and update SLAs for prompt remediation.
  • Test and red-team: run adversarial prompts to confirm the bot refuses advice and respects guardrails; document results.
  • Train staff: teach intake and marketing teams what "substantive advice" looks like and when to stop the bot.
  • Reassess insurance: confirm coverage for AI-related claims, including statutory damages and fee-shifting risks.
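The escalation, impersonation-blocking, and logging steps above can be sketched in code. This is a minimal, hypothetical Python illustration, not a compliance tool: the pattern lists, notice copy, and function names (`guard`, `scrub_persona`) are all assumptions, and real deployments would need far more robust classification than keyword matching.

```python
import re
from datetime import datetime, timezone

# Hypothetical disclosure copy; real notices must meet the bill's
# "clear, conspicuous, and explicit" standard in the chatbot's language.
AI_NOTICE = "Notice: You are chatting with an AI assistant, not an attorney."

# Hypothetical heuristics for fact-specific, advice-seeking queries.
ADVICE_PATTERNS = [
    r"\bshould i\b",
    r"\bcan i sue\b",
    r"\bis it legal\b",
    r"\bmy (case|contract|lawsuit|landlord)\b",
]

# Professional titles the bot must never claim for itself.
BANNED_TITLES = re.compile(r"\b(attorney|lawyer|paralegal|doctor)\b", re.IGNORECASE)

audit_log = []  # in production: durable storage with retention and access controls


def guard(user_message: str) -> dict:
    """Escalate advice-seeking messages to a human; log every interaction."""
    ts = datetime.now(timezone.utc).isoformat()  # timestamped handoff record
    wants_advice = any(
        re.search(p, user_message, re.IGNORECASE) for p in ADVICE_PATTERNS
    )
    action = "escalate" if wants_advice else "answer"
    audit_log.append({"ts": ts, "input": user_message, "action": action})
    if wants_advice:
        reply = ("I can't advise on your specific situation. "
                 "Connecting you with a member of our team.")
    else:
        reply = "Here is general information about our intake process."
    return {"notice": AI_NOTICE, "action": action, "reply": reply}


def scrub_persona(bot_output: str) -> str:
    """Strip banned professional titles from the bot's self-descriptions."""
    return BANNED_TITLES.sub("assistant", bot_output)
```

A quick check of the routing: `guard("Should I sue my landlord?")` escalates to a human with a timestamped log entry, while `guard("What are your office hours?")` stays on the general-information path. The point of the sketch is the default: anything hinting at case-specific facts never reaches the model's answer path.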

Edge cases legal teams will ask about

  • Are disclaimers enough? No. The bill says disclosures do not eliminate liability. Functionality must be constrained.
  • Can a bot offer general legal information? Possibly, if it avoids fact-specific guidance and UPL. Keep it high level and route to humans fast.
  • What about multi-state exposure? Assume similar bills will appear elsewhere. Standardize guardrails to the strictest standard you can operationalize.

Broader package signals

Lawmakers are zeroing in on youth safety, transparency around generative systems, and controls on biometric and synthetic content. Expect more prescriptive notice rules and new private rights of action tied to AI use cases that affect consumers.

If you're leading policy, compliance, or tech adoption in a legal setting, now is the time to pressure-test your AI governance, especially anything client-facing. For ongoing strategies and tools, see AI for Legal.

