Fortinet warns AI security failures in healthcare put patient safety at risk

Healthcare accounted for 18% of Australia's notifiable data breaches in early 2025. Fortinet warns that compromised AI tools could affect diagnoses and patient triage, making security a clinical safety issue.

Categorized in: AI News Healthcare
Published on: Mar 29, 2026
Fortinet warns AI security failures in healthcare put patient safety at risk

Healthcare AI Security Now a Patient Safety Issue, Fortinet Warns

Healthcare organisations face a new class of security threat that goes beyond data protection into clinical decision-making. Fortinet Australia argues that AI security in healthcare should be treated as a patient safety concern, not just a compliance requirement.

The health sector accounted for 18% of all notifiable data breaches in Australia between January and June 2025 - the highest share of any industry. As AI tools embed deeper into clinical workflows, imaging analysis, scheduling, and administrative systems, the attack surface expands beyond traditional hospital networks and electronic health records.

Three Main Vulnerabilities

AI systems in healthcare face distinct risks that conventional cybersecurity approaches may miss. First, training datasets containing sensitive patient information become targets if compromised or manipulated. Second, systems using natural language interfaces are exposed to prompt injection and input-based attacks. Third, the models themselves can be manipulated to extract patient data or alter outputs.

The consequences differ from standard data breaches. A compromised imaging model could affect diagnostic accuracy. A manipulated triage system could disrupt patient prioritisation. Administrative AI handling sensitive data may expose patient records if security controls fail.

Compliance Is Not Enough

Existing privacy frameworks were built around traditional IT systems, not AI-driven decision environments. Healthcare organisations need governance covering how AI models are trained, validated, monitored, and secured throughout their lifecycle.

Five measures are recommended:

  • Establish AI governance frameworks and standards
  • Secure the data pipeline
  • Strengthen identity-centric security
  • Monitor AI behaviour and outputs
  • Align cybersecurity with clinical resilience

ISO 27090, a developing standard for healthcare information security, is cited as relevant for organisations building AI security controls.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)