89% Use AI - Most Leave Core Risks Exposed

89% of organisations run or pilot AI, yet the basics lag and 34% report breaches. Fix what's actually failing: patch vulns, lock down identities, encrypt data, and test models. Compliance is just the floor.

Categorized in: AI News Management
Published on: Nov 14, 2025

89% adopt AI workloads, yet most miss the basics on security

According to the State of Cloud and AI Security 2025 report by Tenable and the Cloud Security Alliance, 89% of organisations are either running (55%) or piloting (34%) AI workloads. The attack surface has expanded fast, and so have the incidents. More than one-third (34%) of organisations with AI workloads have already faced an AI-related breach. The problem isn't sci-fi threats; it's fundamentals left undone.

What's actually breaking vs. what leaders fear

  • Top breach causes: exploited software vulnerabilities (21%), AI model flaws (19%), insider threats (18%).
  • Top worries: model manipulation (18%) and unauthorised AI models (15%).

There's a clear gap between perceived threats and the real exposures showing up in incident reports.

Compliance is table stakes, not the finish line

Over half of organisations (51%) lean on frameworks such as the NIST AI Risk Management Framework or the EU AI Act to guide strategy. That's smart governance, but it isn't enough.

Basic controls are still missing: only 22% of organisations both classify and encrypt their AI data, leaving 78% without one or both of those controls. And just 26% run AI-specific security testing, such as red teaming.

"The data shows us that AI breaches are already here and confirms what we've been warning about: most organisations are looking in the wrong direction," said Liat Hayun, VP of Product and Research, Tenable. "The real risks come from familiar exposures - identity, misconfigurations, vulnerabilities - not science-fiction scenarios. Without addressing these fundamentals, AI environments will remain exposed."

What management should do now

  • Treat compliance as the starting point. Use frameworks to set the floor, then push for technical depth.
  • Prioritise fundamentals in AI environments: identity governance, misconfiguration monitoring, workload hardening, and access management.
  • Classify and encrypt AI data (training, validation, prompts, outputs). Close this gap first; a minimal encryption sketch follows this list.
  • Embed AI-specific exposures into a unified risk program across hybrid and multi-cloud. One inventory, one control plane, one dashboard.
  • Test like it matters: AI red-teaming, abuse testing, and model threat scenarios. Make it a quarterly cadence.
  • Own the basics: patch exploitable vulns quickly, enforce least privilege, and rotate secrets used in AI pipelines.
  • Stand up an approved model and tool registry to reduce shadow AI.
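
To make the classify-and-encrypt item concrete, here is a minimal Python sketch that encrypts a local AI dataset at rest using the cryptography package's Fernet API. The file path, classification label, and key handling are illustrative assumptions; in practice you would pull keys from a secrets manager or cloud KMS and use envelope encryption rather than a locally generated key.

```python
# Minimal sketch: classify and encrypt an AI dataset at rest.
# Requires the 'cryptography' package (pip install cryptography).
# Paths, labels, and key handling are illustrative assumptions only.
from pathlib import Path
from cryptography.fernet import Fernet

DATASET = Path("training_data/prompts.jsonl")   # hypothetical dataset file
CLASSIFICATION = "confidential"                  # assumed label from your data policy

def encrypt_dataset(path: Path, key: bytes) -> Path:
    """Encrypt the file contents and write an .enc copy alongside the original."""
    fernet = Fernet(key)
    ciphertext = fernet.encrypt(path.read_bytes())
    out = path.parent / (path.name + ".enc")
    out.write_bytes(ciphertext)
    return out

if __name__ == "__main__":
    key = Fernet.generate_key()   # in production: fetch from a secrets manager / KMS
    encrypted = encrypt_dataset(DATASET, key)
    print(f"{DATASET} classified as '{CLASSIFICATION}', encrypted copy at {encrypted}")
```

The same pattern extends to validation sets, prompt logs, and model outputs: classification and encryption are scriptable controls, not a multi-quarter programme.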

90-day execution plan

  • Days 0-30: Inventory AI workloads, data flows, identities, and third-party services. Classify sensitive AI data (see the inventory sketch after this plan).
  • Days 31-60: Enforce MFA and least privilege for AI services. Encrypt data at rest and in transit. Fix high-risk misconfigurations.
  • Days 61-90: Launch AI red-team exercises. Centralise risk reporting across cloud and AI stacks. Assign clear owners and SLAs.
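
As a sketch of the Days 0-30 inventory and the gaps the Days 31-60 work should close, the snippet below models a minimal AI asset inventory in plain Python and flags high-risk findings such as unencrypted sensitive data or a service identity without MFA. The field names and sample records are assumptions for illustration, not the schema of any particular inventory or CSPM tool.

```python
# Minimal sketch of an AI asset inventory and gap report.
# Field names and sample records are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    kind: str              # "dataset", "model", "service_identity", ...
    sensitive: bool        # holds regulated or confidential data
    encrypted: bool        # encrypted at rest and in transit
    mfa_enforced: bool     # only meaningful for identities

INVENTORY = [
    AIAsset("prompt-logs", "dataset", sensitive=True, encrypted=False, mfa_enforced=True),
    AIAsset("fine-tune-set", "dataset", sensitive=True, encrypted=True, mfa_enforced=True),
    AIAsset("inference-svc", "service_identity", sensitive=False, encrypted=True, mfa_enforced=False),
]

def gaps(assets):
    """Yield human-readable findings for the Days 31-60 remediation list."""
    for a in assets:
        if a.sensitive and not a.encrypted:
            yield f"HIGH: {a.name} holds sensitive data but is not encrypted"
        if a.kind == "service_identity" and not a.mfa_enforced:
            yield f"HIGH: {a.name} has no MFA or strong auth enforced"

if __name__ == "__main__":
    for finding in gaps(INVENTORY):
        print(finding)
```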

Executive metrics to track

  • Patch SLA for exploitable vulnerabilities in AI workloads.
  • % of AI datasets classified and encrypted (target: 100%); a roll-up sketch follows this list.
  • MFA and key rotation coverage for AI-related identities and secrets.
  • Count of critical misconfigurations in AI infrastructure over time.
  • AI testing coverage (red-team, abuse testing) and remediation rate.
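
These metrics can be rolled up automatically from the same inventory. The sketch below computes the classification-and-encryption percentage, MFA coverage, and patch-SLA compliance from simple records; the data shapes and the 30-day SLA are assumptions chosen for illustration, so adjust them to your own targets.

```python
# Minimal sketch: roll up executive metrics from inventory and ticket data.
# Record shapes and the 30-day patch SLA are illustrative assumptions.
from datetime import date

datasets = [
    {"name": "prompt-logs", "classified": True, "encrypted": False},
    {"name": "fine-tune-set", "classified": True, "encrypted": True},
]
identities = [
    {"name": "inference-svc", "mfa": False},
    {"name": "training-pipeline", "mfa": True},
]
patch_tickets = [  # exploitable vulns found in AI workloads
    {"opened": date(2025, 10, 1), "closed": date(2025, 10, 20)},
    {"opened": date(2025, 10, 5), "closed": None},  # still open
]
PATCH_SLA_DAYS = 30

def pct(part, whole):
    return 100.0 * part / whole if whole else 0.0

classified_and_encrypted = pct(
    sum(d["classified"] and d["encrypted"] for d in datasets), len(datasets))
mfa_coverage = pct(sum(i["mfa"] for i in identities), len(identities))
within_sla = pct(
    sum(t["closed"] is not None and (t["closed"] - t["opened"]).days <= PATCH_SLA_DAYS
        for t in patch_tickets),
    len(patch_tickets))

print(f"AI datasets classified AND encrypted: {classified_and_encrypted:.0f}% (target 100%)")
print(f"MFA coverage for AI identities: {mfa_coverage:.0f}%")
print(f"Vulns patched within {PATCH_SLA_DAYS}-day SLA: {within_sla:.0f}%")
```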

Bottom line

AI adoption is high. Breaches are real. Focus your teams on identity, configuration, and vulnerability management inside your AI stack, then layer AI-specific testing and governance. Get the basics right, and most "AI risk" turns into standard, solvable security work.

If your teams need structured upskilling on AI use and governance, explore role-based options here: Complete AI Training - Courses by Job.

