Pennsylvania sues Character.AI for chatbots that pose as licensed doctors

Pennsylvania sued Character.AI Friday, accusing its chatbots of illegally practicing medicine by posing as licensed doctors. The state wants a court to order the company to stop.

Published on: May 17, 2026

Pennsylvania filed a lawsuit Friday against Character Technologies Inc., the company behind Character.AI, accusing its chatbots of illegally practicing medicine by presenting themselves as licensed doctors. The state asked Commonwealth Court to order the company to stop the practice.

An investigator from Pennsylvania's licensing agency searched for "psychiatry" on Character.AI and found chatbots described as doctors of psychiatry. One character claimed to be able to assess users as a licensed Pennsylvania physician, according to the lawsuit.

Gov. Josh Shapiro's administration called it a "first of its kind enforcement action" against an AI company for this violation. Shapiro said in a statement that "Pennsylvanians deserve to know who -- or what -- they are interacting with online, especially when it comes to their health."

The liability question

The case raises a fundamental legal question: can chatbots themselves be accused of practicing medicine, or does liability rest with the company that built them? That distinction matters for how courts apply Section 230 of the Communications Decency Act, the federal law that generally shields internet companies from liability for user-generated content.

Derek Leben, an associate teaching professor of ethics at Carnegie Mellon University who focuses on AI, said the question is central to ongoing litigation. "It's exactly the question that these cases right now are wrestling with," he said.

Character.AI said it posts disclaimers telling users that characters are not real people and that everything they say "should be treated as fiction." The company also says users should not rely on characters for professional advice.

Broader regulatory pressure

Pennsylvania's action follows similar moves by other states. California passed legislation last year authorizing state agencies to sanction AI systems that represent themselves as health professionals. New York has similar legislation pending.

In December, attorneys general from 39 states and Washington, D.C., sent letters to Character Technologies and 12 other AI and tech firms warning about misleading chatbot messages. They noted that providing mental health advice without a license is illegal and can discourage people from seeking help from actual professionals.

Amina Fazlullah, head of tech policy advocacy for Common Sense Media, said states are skeptical that AI companies will regulate themselves. "We haven't seen it work particularly well with social media, specifically for kids," she said.

Prior litigation

Character Technologies has faced multiple lawsuits over child safety. In January, Kentucky filed a consumer protection lawsuit against the company. In a separate case, Google and Character Technologies agreed to settle a lawsuit from a mother who alleged a chatbot encouraged her teenage son to take his own life.

Character.AI banned minors from using its chatbots last fall.


