Advanced AI Tools Threaten Healthcare Cybersecurity Defenses
Powerful new AI models capable of finding software vulnerabilities in minutes could accelerate ransomware attacks against hospitals and medical devices, healthcare security experts warn. The risk is acute because many clinical systems run outdated software that takes months to patch without disrupting patient care.
Anthropic's Claude Mythos model can autonomously identify and exploit zero-day vulnerabilities - flaws unknown to software makers - across decades-old code. The company deemed the tool too dangerous for public release and restricted access to about a dozen major tech firms and 40 software organizations through a program called Project Glasswing.
Ransomware attacks struck healthcare organizations 460 times in 2025, making the sector the most frequently targeted critical infrastructure industry, according to FBI data. Experts fear AI-assisted attacks could compress warning times from days to hours, enabling coordinated outages across multiple hospitals simultaneously.
The Legacy Systems Problem
Medical imaging systems, infusion pumps, and patient monitors typically run outdated operating systems with minimal security controls. These devices are difficult to update without interrupting care and often lack the detection tools hospitals need to spot breaches early.
"CISOs are concerned that Mythos class tools shrink the timeline from months or days down to hours and minutes," said Errol Weiss, chief security officer of the Health Information Sharing and Analysis Center. "That means more ransomware, less warning before attacks and a greater chance of simultaneous, multi-hospital disruptions."
Healthcare defenders already operate with limited resources and visibility into their networks. Advanced AI in adversaries' hands widens that gap further.
Defensive Potential
The same technology could strengthen hospital defenses if deployed responsibly. AI tools can scan large codebases and device configurations faster than human teams, flagging vulnerabilities before attackers find them.
These systems could also prioritize which vulnerabilities pose real operational risk rather than relying solely on industry severity scores. They could stress-test legacy devices in controlled environments and help security teams triage incidents during active outages.
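To make the prioritization idea concrete, here is a minimal, purely illustrative sketch of risk-based triage. It is not any vendor's or hospital's actual method; the field names, weighting factors, and sample data are all hypothetical assumptions chosen only to show how operational context can outrank a raw severity score.

```python
# Illustrative sketch: rank vulnerabilities by operational risk rather than
# raw severity alone. All field names, weights, and data are hypothetical.

def operational_risk(vuln):
    """Combine an industry severity score with clinical context."""
    score = vuln["cvss"]              # industry severity score, 0-10
    if vuln["patient_facing"]:        # device directly involved in care
        score *= 1.5
    if vuln["internet_exposed"]:      # reachable from outside the network
        score *= 1.3
    if vuln["exploit_available"]:     # working exploit known to exist
        score *= 1.4
    return score

vulns = [
    {"id": "V-1", "cvss": 9.8, "patient_facing": False,
     "internet_exposed": False, "exploit_available": False},
    {"id": "V-2", "cvss": 6.5, "patient_facing": True,
     "internet_exposed": True, "exploit_available": True},
]

ranked = sorted(vulns, key=operational_risk, reverse=True)
# V-2, despite the lower raw severity score, ranks first on operational risk.
print([v["id"] for v in ranked])  # -> ['V-2', 'V-1']
```

The point of the sketch is the ordering: an exposed, patient-facing device with a moderate score can pose more operational risk than an isolated system with a critical one.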
"In a sector with limited security resources, using AI to amplify defensive work is not optional - it's essential," Weiss said.
Healthcare Left Out of Access
No major healthcare organizations or health-focused groups appear to have access to Mythos Preview through Project Glasswing. Experts say that's a mistake.
Healthcare operates under unique constraints: patient safety requirements, regulatory complexity, and dependence on interconnected legacy systems. Without healthcare voices shaping how Mythos is tested and deployed, defensive tools may not address clinical environments effectively.
"ISACs - like Health ISAC for the global health sector - are precisely the mechanisms we already use to share threat information safely and at scale," Weiss said. "Healthcare has unique risk considerations that need to be reflected in safeguards and testing methodologies."
Anthropic said its eventual goal is enabling organizations to deploy Mythos-class models at scale for defensive security purposes. Healthcare organizations should push for inclusion in that process now.