New York bill would ban AI chatbots from impersonating lawyers and let duped users sue

New York moves to ban AI chatbots from posing as lawyers or giving legal advice. Users could sue if misled, and simple disclaimers won't shield platforms.

Published on: Mar 06, 2026

New York lawmakers are moving on a first-of-its-kind bill that targets AI chatbots impersonating licensed professionals. The proposal would block chatbots from giving legal advice or any substantive response that, if given by a person, would be the unauthorized practice of law (UPL). It would also allow users to sue when a platform presents itself as a lawyer and they rely on wrong information.

"Today, there is no law that says that a large language model cannot tell you that it is a lawyer … and then give you legal advice," said New York State Senator Kristen Gonzalez, who is sponsoring the bill. Under the proposal, platforms cannot dodge liability with a simple "this is a non-human chatbot" disclaimer.

What the bill would do

  • Prohibit AI chatbots from impersonating licensed professionals, including lawyers, doctors, and mental health providers.
  • Bar chatbots from offering advice or answers that would constitute UPL if provided by a person.
  • Create a private right of action: users who relied on erroneous legal information from a bot that represented itself as a lawyer could sue for damages.
  • Reject "non-human chatbot" disclosures as a shield from liability.

Why it matters for legal teams

Every U.S. state already outlaws UPL. New York's bill adds direct platform liability when bots cross that line and claim to be licensed. If passed, disclaimers alone won't cut it. Product teams, vendors, and firms deploying client-facing tools will need hard controls on how bots present themselves and what they can answer.

The bill advanced out of the Senate's Internet and Technology Committee and sits within a broader package: one measure addresses minors' exposure to unsafe chatbot features; another would require a conspicuous notice that AI outputs may be inaccurate. This push lands as courts and the bar scrutinize AI's role in law: sanctions for hallucinated citations keep mounting, and fresh lawsuits test the boundary between assistance and unlicensed practice.

Recent context you should know

  • Lawsuits have alleged certain chatbots contributed to user suicides; companies have denied wrongdoing, with some cases settled.
  • Nippon Life Insurance Company of America sued OpenAI, accusing ChatGPT of practicing law without a license and aiding a claimant's settlement breach. OpenAI said the case lacks merit.
  • Judges continue to fine lawyers for filings with fabricated citations and AI-generated errors.

How this intersects with existing rules

State UPL statutes already penalize practicing law without a license, and bar ethics rules such as ABA Model Rule 5.5 prohibit lawyers from assisting unauthorized practice. Deceptive-practices law, including the FTC Act and New York's General Business Law § 349, already reaches misleading commercial conduct. The bill would layer a chatbot-specific prohibition and a private right of action on top of that existing framework.

Practical steps to reduce risk now

  • Lock down persona: never allow a bot to claim it is a lawyer or to imply licensure, practice areas, or bar numbers.
  • Scope responses: restrict legal bots to education, forms routing, or intake. Block fact-specific, jurisdiction-specific, or strategy advice.
  • Human in the loop: require explicit attorney review before any legal conclusion is surfaced to clients or the public.
  • Prompt safety: hardcode refusals for "What should I file?" "Is this enforceable?" "What are my rights in [jurisdiction]?" and similar advice-seeking prompts.
  • Disclosures plus design: use clear, plain-English notices, but don't rely on them. Combine with technical guardrails and rate limits.
  • Vendor diligence: audit training data, guardrails, and red-team results. Obtain indemnities addressing UPL and misrepresentation.
  • Logs and traceability: retain prompts, outputs, and escalation paths for audit and defense.
  • Incident playbook: define takedown, notice, remediation, and refund workflows if a bot misleads users.
  • Insurance check: review E&O/cyber coverage for UPL-related claims and AI misrepresentation.
  • Training: brief lawyers and staff on approved use, red flags, and citation verification protocols.

Open questions to watch

  • Where the bill draws the line between "general information" and "substantive legal advice."
  • Whether liability requires intent, negligence, or strict standards for misrepresentation.
  • Damages model: actual reliance, statutory damages, or attorney's fees.
  • Interaction with Section 230 defenses and First Amendment/commercial speech challenges.
  • Extraterritorial reach: how New York would enforce against out-of-state platforms serving New York users.
  • Safe harbors, if any, for enterprise deployments with robust controls.

What to do if you're building or buying legal-facing AI

Assume disclaimers won't shield you. Treat chatbot behavior as advertising plus potential UPL exposure. Put compliance and design in the same room: product, legal, and engineering should agree on intent, refusal patterns, and escalation.

If you're a firm experimenting with intake bots, restrict them to screening, scheduling, and document collection. For corporate legal, require contracts that prohibit vendor bots from implying licensure and mandate technical blocks on legal advice.
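That intake-only scoping amounts to an allowlist: the bot handles a few named intents and escalates everything else to a person. A minimal sketch, assuming the bot classifies each message into an intent first (the intent names and routing strings here are hypothetical):

```python
# Allowlisted intents an intake bot may handle on its own; anything
# outside this set (e.g. "legal_advice") escalates to a human.
ALLOWED_INTENTS = {"screening", "scheduling", "document_collection"}

def route(intent: str) -> str:
    """Route a classified user intent: handle if allowlisted, else escalate."""
    if intent in ALLOWED_INTENTS:
        return f"bot:{intent}"
    return "escalate:human_attorney_review"
```

An allowlist is the safer default here: new or unrecognized intents fail closed to human review rather than failing open to the bot.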

For deeper guidance on safe, compliant adoption of AI in legal work, see AI for Legal.

