Special Report Highlights Cybersecurity Threats of Large Language Models in Radiology
Large language models (LLMs) such as OpenAI’s GPT-4 and Google’s Gemini are increasingly used in healthcare, particularly in radiology. These AI models assist with clinical decision support, patient data analysis, drug discovery, and improving communication by simplifying medical language. Many healthcare providers are exploring their integration into daily workflows.
However, a new special report published in Radiology: Artificial Intelligence, a journal of the Radiological Society of North America (RSNA), warns of the cybersecurity risks that come with using LLMs in healthcare. The report stresses the need for strict security measures to prevent these models from being maliciously exploited within health systems.
Security Challenges Surrounding LLMs
LLMs are vulnerable to several types of cyberattack. Malicious actors can extract sensitive patient information, manipulate data, or alter clinical outcomes. Examples include data poisoning, in which harmful data corrupts the model’s training set, and inference attacks, in which crafted inputs bypass a model’s safeguards to produce restricted or harmful outputs.
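The report describes these attacks conceptually rather than in code, but data poisoning in particular is easy to demonstrate on toy data. The sketch below is a minimal, hypothetical example using scikit-learn, not anything from the report: it flips a fraction of training labels in a synthetic dataset and shows how test accuracy degrades as the poisoned fraction grows.

```python
# Toy illustration of data poisoning: flipping a fraction of training labels
# quietly degrades a simple classifier. Entirely synthetic and hypothetical;
# not taken from the report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a labeled diagnostic dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poisoned_accuracy(flip_fraction: float) -> float:
    """Flip `flip_fraction` of the training labels, retrain, return test accuracy."""
    y_poisoned = y_train.copy()
    flip_idx = rng.choice(len(y_poisoned), size=int(flip_fraction * len(y_poisoned)), replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # the attacker's corruption
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} poisoned -> test accuracy {poisoned_accuracy(frac):.3f}")
```

Real attacks are subtler than random label flips, but the mechanism the report warns about is the same: corrupted training data silently shifts model behavior.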
Beyond vulnerabilities inherent in the models themselves, threats can arise from the surrounding ecosystem, including unauthorized access, data breaches, and the installation of malicious software. In radiology, attackers might manipulate image analysis results or access confidential patient records, posing significant risks to patient safety and privacy.
Protective Measures for Radiologists and Healthcare Institutions
Before deploying LLMs, healthcare providers must conduct thorough cybersecurity risk assessments. Radiologists and IT teams should implement standard security practices like strong passwords, multi-factor authentication, and timely software updates. Given the sensitivity of patient data, heightened security protocols are essential.
Key steps for safe LLM integration include:
- Deploying models in secure, controlled environments
- Using strong encryption to protect data in transit and at rest
- Continuously monitoring model interactions to detect anomalies
- Using only institution-approved tools and anonymizing any sensitive input data (a minimal redaction sketch follows this list)
- Providing regular cybersecurity training for all staff, similar to mandatory radiation safety education
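To make the anonymization step concrete, here is a minimal sketch of pre-submission redaction, assuming free-text input headed to an institution-approved LLM. The regex patterns and the `redact` helper are hypothetical illustrations only; production de-identification should rely on validated tooling and institutional policy.

```python
# Minimal sketch: scrub a few obvious identifiers from free text before it
# leaves the institution. Patterns are illustrative, not a complete or
# validated de-identification scheme.
import re

REDACTION_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),  # medical record numbers
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),          # simple numeric dates
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),     # US-style phone numbers
}

def redact(text: str) -> str:
    """Replace each pattern match with a labeled placeholder such as [MRN]."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient seen 03/14/2024, MRN: 00482913. Callback 555-867-5309."
print(redact(note))  # Patient seen [DATE], [MRN]. Callback [PHONE].
```

Even this simple filter shows where the control point sits: identifiers are stripped before data ever reaches the model, rather than trusting the model or its vendor to handle them safely.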
Patient Awareness and Reassurance
While the adoption of LLMs introduces new cybersecurity challenges, patients should be informed but not alarmed. Awareness of potential risks is important, yet healthcare institutions are actively investing in stronger cybersecurity frameworks and complying with evolving regulations to protect patient data.
Ongoing efforts aim to reduce vulnerabilities and ensure the safe use of AI tools in clinical settings.
About the Report
The special report, titled “Cybersecurity Threats and Mitigation Strategies for Large Language Models in Healthcare,” was authored by a team of experts from multiple medical and research institutions. It appears in Radiology: Artificial Intelligence, edited by Charles E. Kahn Jr., M.D., M.S., and published by the Radiological Society of North America, Inc.
RSNA is a professional association dedicated to advancing radiology through education, research, and technology innovation. The organization supports excellence in patient care and healthcare delivery.
For healthcare professionals interested in enhancing their understanding of AI and cybersecurity in medical settings, resources and courses are available at Complete AI Training.