Trump and Kennedy Push to Relax AI Safeguards in Healthcare Records
The Trump administration is moving to strip federal requirements that developers test AI tools with actual doctors and make the tools' decision-making transparent to clinicians. The proposed rules from the Office of the National Coordinator for Health IT would eliminate safeguards built over years to ensure electronic health records remain usable and safe.
The changes come as hospitals nationwide deploy AI scribes: software that automatically summarizes patient visits. While these tools save doctors time, quality problems are already surfacing in practice.
The Real-World Problem
Paul Boyer, a psychotherapist at Kaiser Permanente in Oakland, California, uses Abridge, a widely deployed AI scribe. He finds it "not super useful." The software misses emotional tone and clinical nuance, critical details in mental health, where how something is said matters more than what is said.
Rather than saving time, Boyer and his colleagues spend it correcting the AI's notes. A study published in April in the Journal of the American Medical Association found that doctors using these products most heavily saved more than half an hour daily, yet safety researchers worry clinicians may not catch all errors. Bad information in a patient's record could affect future care decisions.
Abridge says it monitors clinician edits and feedback at every deployment stage. But there is no federal vetting process for scribe software before it enters hospitals.
What the Rules Remove
Beginning in the Obama administration, the Health and Human Services Department required "user-centered design" testing, meaning developers had to test products with actual doctors and nurses. Regulators also pushed for transparency about how AI tools make decisions.
Both requirements are eliminated in the new proposal from HHS Secretary Robert F. Kennedy Jr.'s office.
The rules also remove privacy protections, security standards, and requirements for data format consistency across systems. The administration says fewer rules will spur innovation and competition in a market dominated by two vendors: Epic and Oracle Health control more than 70% of the hospital market.
Hospital and Doctor Concerns
Raj Ratwani, a researcher specializing in how people interact with technology at MedStar Health, warns that unclear record design causes errors. A medication list with 30 different versions of Tylenol at different doses can lead doctors to select the wrong drug.
User testing was meant to simplify these designs. Removing that requirement troubles even some in the industry.
The administration is also scrapping a Biden-era plan for AI "model cards," simple tools that let clinicians see what data trained an AI system and how it works. Few clinicians used them in their first year, regulators say.
But hospitals pushed back. The American Hospital Association said model cards "provide information on how a predictive or generative AI application was designed, developed, tested, evaluated and should be used. These data are critical to foster trust in AI tools and ensure patient safety."
The American College of Physicians warned that removing transparency could "undermine clinician trust, increase liability expense, and erode the patient-physician relationship."
Limited Evidence of Effectiveness
A Veterans Health Administration study comparing 11 AI scribes found the software performed worse than humans across simulated scenarios. "Although ambient AI scribes can generate complete notes, the overall quality remains broadly below that of human-authored documentation," the authors wrote, noting that missing information could affect follow-up care.
Abridge's general counsel said the company "broadly supports" the government's rules as necessary for keeping pace with AI development. Other industry consultants argue existing rules burden providers seeking better systems.
Boyer worries management will schedule more patients based on expected time savings from the AI, forcing him to spend more time correcting software errors. Kaiser Permanente says it does not require clinicians to use the AI scribe.
"When I am correcting that note, I feel like this is too much work," Boyer said. "This is definitely making this worse, and this is taking up time that I need to not be spending on correcting an AI tool."