Trump and Kennedy move to strip safety requirements from AI health record tools

The Trump administration wants to drop federal rules requiring that healthcare AI tools be tested on real users and that their logic be visible to clinicians. Critics warn the rollback would leave hospitals with no safeguard against AI errors in patient records.

Published on: May 16, 2026

Trump Administration Proposes Weakening AI Healthcare Safeguards

The Trump administration is moving to eliminate federal requirements that healthcare AI tools be tested on actual users and that their decision-making processes be transparent to doctors and nurses. The proposal removes oversight rules that have governed electronic health records since the Obama administration, drawing concern from hospital systems, physician groups, and patient-safety researchers.

The changes come as AI-powered note-taking software spreads through hospitals nationwide. These tools automatically summarize patient visits, promising to save clinicians significant time on paperwork. A study published in April in the Journal of the American Medical Association found that the doctors who used these products most heavily saved more than 30 minutes a day.

But real-world use reveals persistent problems. Paul Boyer, a psychotherapist at Kaiser Permanente in Oakland, California, said his hospital's AI scribe software, made by Abridge, struggles with clinical nuance and emotional tone - factors critical in mental health care. "It is not super useful," Boyer said. Clinicians "end up correcting the computer-written notes."

Safety Concerns Amid Regulatory Rollback

Raj Ratwani, a researcher specializing in human factors at MedStar Health, said "there is currently no safeguard in place" to vet scribe software at the federal level. He worries that removing transparency requirements will compound the problem.

Poor record design can lead to medication errors. Ratwani described a scenario in which a cluttered medication list showing 30 versions of Tylenol at different doses could lead a physician to select the wrong drug. User-centered design testing was meant to prevent such errors by requiring developers to test products on actual doctors and nurses before deployment.

The proposed rules eliminate that requirement. They also scrap plans for AI "model cards" - standardized disclosures that would let clinicians click through information about how an AI system was trained and tested. The Biden administration introduced model cards in 2024, but few clinicians have used them.

Still, hospitals want them to stay. The American Hospital Association wrote that model cards "provide information on how a predictive or generative AI application was designed, developed, tested, evaluated and should be used. These data are critical to foster trust in AI tools and ensure patient safety."

Industry Divided on Deregulation

The administration argues that removing requirements will spur innovation and competition in a consolidated market. Epic and Oracle Health account for more than 70% of the hospital electronic health records market, according to a 2022 study.

Some healthcare consultants support the rollback. Ryan Howells, a principal at Leavitt Partners, which advises digital health companies, said federal regulations are "the single biggest inhibitor to true clinical innovation."

But even developers are uncertain. Leigh Burchell, vice president for policy at Altera Digital Health, an electronic health records company, said her industry group had "a lot of different perspectives" on the proposal - unusual for a normally aligned trade group. Still, Burchell's organization supports requiring companies to disclose what data AI relies on.

Abridge, the scribe software maker, said it "broadly supports" the government's proposed rules as a "necessary modernization." The company said it monitors clinician edits, ratings, and feedback to evaluate its software at every stage.

Evidence of Effectiveness Remains Thin

A Veterans Health Administration study comparing 11 AI scribes found that the software performed worse than human scribes across five simulated scenarios. "Although ambient AI scribes can generate complete notes, the overall quality remains broadly below that of human-authored documentation," the authors wrote, noting that omitted information poses particular risk to follow-up care.

Boyer said he can mostly ignore his AI scribe for now. But he worries that management will schedule more patients based on expected time savings, forcing him to spend more hours correcting the software's errors. "When I am correcting that note, I feel like this is too much work," he said. "This is definitely making this worse."

A Kaiser Permanente spokesperson said the company does not require clinicians to use AI tools.


