AI vs. AI in Healthcare Cybersecurity: What Benoit Desjardins Wants You to Know
"There is an eternal battle of attackers using AI versus the defenders of AI," said cybersecurity expert Benoit Desjardins during the HIMSS AI and Cybersecurity Virtual Forum. His message to healthcare leaders was simple: attackers move fast, and AI is fueling both sides.
Healthcare remains a high-value target because it combines personal data, clinical systems, and high uptime requirements. That combo gives intruders leverage and keeps defenders under pressure.
The speed gap that hurts healthcare
Once inside a network, most attackers can get to sensitive data in less than five hours, according to Desjardins. Meanwhile, the average organization takes 235 days to detect a breach.
That gap is where patient data is exposed and operations get disrupted. Closing it is priority one.
Why traditional defenses are stretched
Two common entry points are still malware and phishing. Signature-based detection is widespread because it's fast and accurate against known threats: it flags malware by matching known patterns. But the moment a new variant appears, those patterns no longer match, and the threat slips through.
Behavior-based tools catch what malware does, not just what it looks like. Useful, but they still struggle with volume, variants, and the time it takes analysts to review endless alerts.
How attackers are using GenAI
Generative models can create convincing fakes: images, voices, texts, entire personas. Generative Adversarial Networks can produce images that are hard to distinguish from real ones. Attackers are using these to scale phishing, spoof executives, and stage social engineering.
One case from February 2024: an employee at the engineering firm Arup joined a video meeting populated by deepfake "executives" who weren't real, and approved a $25 million transfer. She was the only human on the call. That's the new playbook.
If your staff hasn't been trained to verify live video and audio requests, you're exposed. For baseline guidance on phishing, see CISA's overview.
What defenders can do with AI
GenAI helps on defense too. It can surface flaws, summarize noisy data, scan visual evidence, and analyze digital conversations. Discriminative models are already deployed across four layers (data, feature, intelligence, and application) to detect intrusions, malware, and phishing.
Several commercial models report over 99% accuracy, Desjardins noted. Many wins aren't public, but they exist, and they're practical.
Strengths and limits of AI in cybersecurity
- Strengths: simplicity, scale, reusability, speed.
- Limits: big data needs, tedious supervised training, and hallucinations that require human review.
AI won't replace clinicians or cybersecurity experts. Teams that learn to use it will outperform those that don't. AI works 24/7; people provide judgment.
What healthcare leaders should do this quarter
- Close the speed gap: track mean time to detect/respond. Aim to cut both by 50% with alert triage automation and clear on-call coverage.
- Lock down high-value access: enforce phishing-resistant MFA (FIDO2) for admins, VPN, EHR, and email. Remove legacy SMS codes where possible.
- Email and web controls: DMARC/DKIM/SPF at enforcement, modern email security with link isolation, and DNS blocking of known malicious domains.
- Deepfake verification protocol: for any request involving money, PHI, or access changes: call-back on a known number, use a shared code phrase, require dual approval, and add a 24-hour hold over set thresholds.
- AI-assisted SOC: deploy AI models to cluster alerts, rank risk, summarize logs, and auto-generate incident timelines for human review.
- Segment and minimize: restrict east-west traffic, apply least privilege, and reduce where PHI is stored. Fewer doors, fewer headaches.
- Patch what matters: prioritize internet-facing assets and known exploited vulnerabilities. Measure time to remediate.
- Train staff on modern phishing and social engineering: include voice, video, and SMS lures. Test monthly and report improvements.
- Run playbooks: rehearse ransomware and third-party breach scenarios. Validate backups and recovery time against clinical uptime needs.
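The alert-triage idea in the list above (cluster duplicate alerts, rank by risk, surface the worst first for human review) can be sketched in a few lines. This is a minimal illustration, not a vendor product: the field names, severity scale, and risk formula are all hypothetical placeholders you would adapt to your own SIEM's schema.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str          # hypothetical field names; map to your SIEM's schema
    signature: str          # detection rule or signature that fired
    severity: int           # 1 (low) .. 5 (critical)
    asset_criticality: int  # 1 (lab workstation) .. 5 (EHR server)

def triage(alerts):
    """Cluster duplicate alerts, then rank clusters by a simple risk score."""
    clusters = defaultdict(list)
    for a in alerts:
        # Alerts with the same source and signature are treated as one incident.
        clusters[(a.source_ip, a.signature)].append(a)

    ranked = []
    for key, group in clusters.items():
        # Illustrative scoring: worst severity times worst asset criticality,
        # nudged upward by alert volume. Tune weights to your environment.
        risk = (max(a.severity for a in group)
                * max(a.asset_criticality for a in group)
                + len(group))
        ranked.append((risk, key, len(group)))

    ranked.sort(reverse=True)  # highest-risk cluster first, for human review
    return ranked

# Example: repeated phishing hits on an EHR server outrank a one-off port scan.
alerts = [
    Alert("10.0.0.5", "phish-link-click", 2, 5),
    Alert("10.0.0.5", "phish-link-click", 2, 5),
    Alert("10.0.0.9", "port-scan", 1, 1),
]
queue = triage(alerts)
```

Even a crude heuristic like this collapses duplicate noise and keeps analysts pointed at the clusters that touch high-value clinical assets first, which is exactly the speed-gap problem the list targets.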
Key takeaways from Desjardins
- Attackers and defenders both use AI. Treat it as a permanent arms race.
- Time favors attackers: hours to data vs. months to detection. Shrink that gap.
- Traditional tools help, but variants and scale demand AI plus human judgment.
- Upskill your teams. The people who use AI well will lead the field.
For governance and risk framing, see the NIST AI Risk Management Framework.
Want structured, practical training to get your team fluent with AI for security and clinical ops? Explore courses by role at Complete AI Training.
About Benoit Desjardins: Professor of Radiology at the University of Montreal, CMIO at CHUM, and IT consultant for the Quebec government. His session, "AI v. AI - Defending Against AI-Powered Cyber Threats in Healthcare," is expected to be available in a repeat broadcast.