AI agents and shadow AI use expand human risk faster than most security teams can respond, Forrester warns

AI-related security incidents are up 44%, and up to 40% of employees have already shared sensitive data with AI tools. Traditional security strategies weren't built for this speed or scale.

Published on: Mar 26, 2026

AI Is Reshaping Your Human Risk Management Strategy

Organizations are seeing a 44% increase in AI-related security incidents. At the same time, AI agents are operating continuously, 24/7, without pause or second-guessing, creating attack windows that traditional security approaches weren't built to handle. For management teams, the message is direct: the line between human risk and technology risk has dissolved.

This shift demands a new approach to how you manage workforce risk. Your employees still introduce risk. Now they're doing it faster, at greater scale, and often through tools you may not know exist.

The Dual Risk: Unintentional and Malicious

AI creates two distinct problems. Employees misuse AI tools with good intentions, trying to work faster and solve problems more efficiently. Threat actors, meanwhile, weaponize AI through deepfakes, voice manipulation, and prompt injection attacks.

The result is an expanded attack surface. Adversaries aren't constrained by human limitations. They don't need sleep or breaks. They can scale attacks instantly.

Shadow AI Is the New Shadow IT

Up to 40% of employees have already shared sensitive information with large language models, often without realizing it. This isn't a technology failure. It's a cultural one.

When security becomes a blocker instead of an enabler, employees find workarounds. They're not trying to cause harm. They're trying to do their jobs. In the age of AI, those workarounds scale exponentially.

The problem compounds when security teams don't have visibility into which AI tools employees are using, what data gets shared, or where that data goes.

AI Agents Need the Same Rigor as Human Employees

Organizations are deploying AI agents without applying the same oversight used for human hires. There's no onboarding, no background checks, no governance framework. Many teams don't even have a basic inventory of where AI agents exist or what data they can access.

If you wouldn't deploy a human employee without oversight, the same standard should apply to AI agents. This means identity controls, continuous monitoring, and clear guardrails around what agents can do and what data they can touch.

Governance Frameworks Are Lagging Behind Deployment

Most organizations are playing catch-up. Formal AI governance structures, policies, and oversight committees are only now emerging. That delay creates real risk exposure.

Effective AI security requires more than a single solution. It requires coordination across governance, risk and compliance; identity and access management; data security; and zero trust principles. This is an organizational problem, not a technology one.

Five Actions for Management Teams

1. Understand That AI Amplifies Human Risk

AI doesn't eliminate human risk; it multiplies it. Employees are still making decisions and introducing risk, just faster and at greater scale. Your risk management strategy needs to account for this acceleration.

2. Treat AI Agents Like Workforce Members

Security, accountability, and guardrails aren't optional. Apply the same onboarding, governance, and monitoring standards to AI agents that you apply to people.

3. Build Visibility First

Create a clear inventory of AI tools in use, agents in development or deployed, and data being shared with AI systems. You can't secure what you can't see.
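As a minimal sketch only (the article doesn't prescribe a schema, so these field names are assumptions), an AI-asset inventory can start as a simple structured record per tool or agent, with unowned or unapproved entries flagged as shadow AI candidates:

```python
import json

# Hypothetical inventory schema; adapt the fields to your own
# governance framework.
inventory = [
    {
        "name": "chat-assistant",
        "type": "ai_agent",              # or "saas_tool"
        "status": "deployed",            # or "in_development"
        "owner": "data-platform-team",
        "data_access": ["crm_records"],  # data classes the system can touch
        "approved": True,
    },
    {
        "name": "free-translation-site",
        "type": "saas_tool",
        "status": "deployed",
        "owner": None,                   # unowned tools are shadow AI candidates
        "data_access": ["unknown"],
        "approved": False,
    },
]

# Flag entries with no owner or no approval for governance review.
shadow_candidates = [
    entry["name"]
    for entry in inventory
    if entry["owner"] is None or not entry["approved"]
]
print(json.dumps(shadow_candidates))
```

Even a flat list like this gives security teams the visibility the article calls for: which tools exist, who owns them, and what data they can reach.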

4. Measure Risk Instead of Assuming It

Move beyond one-size-fits-all training. Behavior-based risk scoring, for both humans and AI agents, enables targeted, real-time interventions based on actual risk.
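As an illustration only (the article names no specific scoring model, so the signals and weights below are assumptions), behavior-based risk scoring can be sketched as a weighted sum of observed events per actor, applied identically to humans and AI agents:

```python
from dataclasses import dataclass, field

# Hypothetical signal weights; a real deployment would calibrate these
# against incident data rather than hard-coding them.
WEIGHTS = {
    "shared_sensitive_data": 0.5,
    "used_unapproved_tool": 0.3,
    "anomalous_access_pattern": 0.2,
}

@dataclass
class Actor:
    name: str
    kind: str  # "human" or "ai_agent"
    signals: dict = field(default_factory=dict)  # signal name -> event count

    def risk_score(self) -> float:
        # Weighted sum of observed events, capped at 1.0 so scores
        # stay comparable across actors.
        raw = sum(WEIGHTS.get(s, 0.0) * n for s, n in self.signals.items())
        return round(min(raw, 1.0), 3)

def needs_intervention(actor: Actor, threshold: float = 0.6) -> bool:
    # Targeted intervention triggers only above the risk threshold,
    # instead of blanket training for everyone.
    return actor.risk_score() >= threshold

analyst = Actor("analyst-42", "human", {"shared_sensitive_data": 1})
agent = Actor("report-bot", "ai_agent",
              {"used_unapproved_tool": 1, "anomalous_access_pattern": 2})

print(analyst.risk_score(), needs_intervention(analyst))  # 0.5 False
print(agent.risk_score(), needs_intervention(agent))      # 0.7 True
```

The point is the shape, not the numbers: the same scoring and thresholding logic covers human employees and AI agents, which is what makes targeted, real-time interventions possible.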

5. Rethink Security Culture

The biggest shift isn't technical. It's cultural. Security culture in a world where humans and AI agents work together looks fundamentally different from what it did before.

The Path Forward

AI is a significant opportunity, but only if you approach it with discipline. You can't ignore it. You can't block it. You can't secure it with yesterday's strategies.

For management teams, the path is clear: embrace AI, govern it rigorously, and keep human risk management at the center of your strategy. Understanding AI for Management and the fundamentals of Generative AI and LLMs is now essential for making informed decisions about how your organization deploys these tools.

