Security gaps and data theft fears slow agentic AI adoption, Ponemon survey finds

Nearly half of IT leaders say their organizations lack the security controls needed to manage autonomous AI systems, according to a survey of 1,878 practitioners. Security gaps, not caution, are the main barrier slowing agentic AI adoption.

Categorized in: AI News, Management
Published on: May 02, 2026

Most Organizations Lack Controls for Agentic AI, Survey Finds

Nearly half of IT leaders say their organizations don't have the security controls needed to manage autonomous AI systems, according to a survey of 1,878 IT and security practitioners worldwide. The finding suggests that deployment of agentic AI (systems that make decisions and take actions without constant human oversight) is being held back by real security gaps, not just caution.

Only 38 percent of respondents said their organizations have fully or partially adopted agentic AI. Among those using the technology, deployment focuses on routine work: coding, email responses, and data queries. But 43 percent of respondents flagged the absence of proper risk and security controls as a barrier to wider adoption.

Five Barriers to Agentic AI Rollout

  • Lack of proper risk and security controls - 43 percent
  • Complex system integration - 38 percent
  • Data quality issues - 33 percent
  • High costs to implement and maintain - 32 percent
  • Unclear business value - 28 percent

Performance Questions Linger

Even organizations using agentic AI express doubts about its capabilities. Just 44 percent of those who deployed the technology said it performs well at automating manual security tasks with minimal human involvement. Only 39 percent said agentic AI effectively removes human error when retrieving threat intelligence.

Data Theft Risk Rises With Malicious Use

More than half of respondents (55 percent) believe agentic AI will increase data theft risk, either significantly (29 percent) or moderately (26 percent). The concern centers on malicious deployment: attackers using autonomous agents to steal corporate data faster and at greater scale than traditional methods allow.

Intrusion Detection Becomes Harder

Two-thirds of respondents using agentic AI said the systems complicate intrusion detection; only 12 percent said agents don't impede detection efforts. Malicious AI agents can execute faster attacks, mimic legitimate behavior more convincingly, and poison training data, making it harder to spot breaches in real time.

C-Suite and Staff See Different Risks

A significant gap exists between how executives and technical staff view AI security. When asked whether their organizations have tools to respond to data leakage and prompt injection attacks, 65 percent of C-level respondents said yes. Only 35 percent of staff-level respondents agreed.

Executives also expressed more confidence in AI systems' future ability to reason and avoid misuse: 56 percent of C-level respondents said this would be possible, compared with 40 percent of technical staff. That misalignment matters, because the staff closest to the technology are the least confident in its safeguards.

AI for Executives & Strategy training addresses this gap directly, helping leaders understand the technical realities that shape organizational risk.

Understanding Generative AI and LLM foundations is essential for anyone managing agentic AI deployments, since these systems build on underlying language model technology.

