Healthcare Workers Risk Patient Privacy by Uploading Sensitive Data to GenAI and Cloud Accounts
Recent research from cybersecurity firm Netskope reveals a troubling trend: healthcare workers are frequently exposing sensitive patient information by using generative AI tools like ChatGPT and Google Gemini, as well as by uploading data to personal cloud storage services such as Google Drive and OneDrive. While the healthcare sector has widely adopted AI to improve efficiency, these practices raise serious privacy concerns.
Widespread AI Adoption in Healthcare
Netskope Threat Labs data shows that 88% of healthcare organizations have integrated cloud-based generative AI apps into their workflows. Nearly all (98%) use applications with genAI features, and 96% employ apps that utilize user data for training purposes. Additionally, 43% of these organizations are experimenting with running genAI infrastructure locally.
Although AI tools are more available than ever, the share of healthcare workers relying on personal AI accounts for work has dropped from 87% to 71% over the past year. Even so, personal account use remains common and poses risks whenever sensitive data is involved.
HIPAA Compliance and Patient Trust at Risk
Many generative AI tools are not HIPAA-compliant, and their developers often will not sign business associate agreements (BAAs). Using these tools with protected health information (PHI) violates HIPAA rules and exposes organizations to regulatory penalties. Uploading patient data to genAI platforms or personal cloud accounts without strong safeguards also damages patient trust.
“Beyond financial consequences, breaches erode patient trust and damage organizational credibility with vendors and partners,” states Netskope. This highlights the urgent need for stronger oversight and authorized AI tools to reduce risks linked to unapproved or “shadow AI” usage.
Data Violations and Security Concerns
Mishandling HIPAA-regulated data is the top security concern in healthcare. PHI is the most common type of sensitive data uploaded to personal cloud and genAI apps, as well as other unapproved locations. Netskope reports that 81% of data policy violations involve regulated healthcare data, with the remainder involving source code, secrets, and intellectual property.
Healthcare organizations must carefully balance the advantages of generative AI with strict data governance policies to reduce risks. Enterprise-grade genAI applications with strong security controls are essential to protect sensitive information.
Recommended Security Measures
- Adopt AI applications designed with security features that comply with healthcare regulations.
- Deploy data loss prevention (DLP) tools to monitor and control access to genAI platforms.
- Block high-risk AI apps; for example, DeepAI, Tactiq, and Scite are blocked by 44%, 40%, and 36% of healthcare organizations respectively.
- Inspect all HTTP and HTTPS traffic for phishing attempts and malware.
- Use remote browser isolation for visiting high-risk or newly registered domains.
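To make the DLP recommendation above concrete, here is a minimal sketch of the kind of pattern-based check a DLP tool might apply to text before it leaves for an external genAI or cloud service. The patterns and function names are illustrative assumptions, not part of any real product; production DLP relies on far richer detection (exact-match dictionaries, document fingerprinting, ML classifiers).

```python
import re

# Illustrative patterns only -- real DLP detection is far more sophisticated.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{4}\b", re.IGNORECASE),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of PHI-like patterns found in text bound for an
    external service; an empty list means no match."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def allow_upload(text: str) -> bool:
    """Block the upload if any PHI-like pattern is detected."""
    return not scan_outbound_text(text)
```

For example, `allow_upload("Patient MRN: 12345678")` would return `False`, flagging the text before it reaches a personal genAI account, while a non-sensitive staffing summary would pass.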
Notably, 54% of healthcare organizations now have DLP policies in place, up from 31% last year, showing progress in addressing these risks.
Malware Risks via Cloud Applications
Threat actors increasingly exploit cloud apps like GitHub, OneDrive, Amazon S3, and Google Drive to deploy malware such as information stealers and ransomware. Instead of breaching networks directly, attackers use social engineering to manipulate healthcare employees into introducing malware, which then grants initial access to systems.
To counter this, healthcare organizations should implement comprehensive security monitoring and enforce strict access controls to cloud and AI tools.
Conclusion
Generative AI tools hold potential for improving healthcare efficiency, but they also introduce significant privacy and security challenges. Healthcare providers must remain vigilant, enforce data protection policies, and incorporate AI-related risks into cybersecurity training to safeguard patient data.
For healthcare professionals interested in learning more about AI and security best practices, exploring specialized courses on Complete AI Training can provide valuable insights.