IBM security leaders outline key controls for managing AI agent identities and privileges

IBM says 80% of cyberattacks exploit compromised identities, a risk that grows as organizations deploy AI agents without proper access controls. Among the five key safeguards are least-privilege permissions, unique agent IDs, and real-time monitoring.

Published on: Mar 16, 2026

IBM Security Leaders Identify Five Critical Controls for AI Agent Deployment

IBM's Bob Kalka and Tyler Lynch say organizations deploying AI agents face a fundamental security gap: they manage human identities rigorously but often overlook the non-human identities running automated tasks. About 80% of cyberattacks today exploit compromised identities, they said, making this oversight costly.

AI agents, software workloads written in languages like Python that interact with databases and APIs, introduce new attack surfaces. When granted excessive permissions, a single compromised agent can expose customer data or alter system configurations. The risk compounds because most organizations lack visibility into what these agents actually do.

The Five Security Imperatives

Kalka and Lynch outlined five controls organizations must implement:

  • Accountability. Each AI agent needs a unique identifier so its actions can be traced. Without this, auditing security incidents becomes impossible. Organizations need to know exactly what an agent did, when it did it, and what system or user it affected.
  • Least privilege. Agents should have only the minimum permissions needed for their tasks. Lynch said: "We don't want that privilege to be existent at all times, or to be running in that privileged state." Over-privileged agents become high-value targets if compromised.
  • Last-mile controls. The execution point matters as much as the authorization. Organizations must ensure agent actions are scoped correctly and that privileges are revoked when tasks complete. This applies to sensitive operations like accessing customer data or modifying configurations.
  • Orchestration and governance. A centralized system should manage an agent's entire lifecycle, from provisioning and configuration through monitoring and de-provisioning. This includes managing the secrets and credentials agents use, ensuring they're stored securely and rotated regularly.
  • Observability. Organizations need tools providing real-time visibility into agent behavior. Kalka said: "The ability to see what's happening, how it's happening, and what the risk factors are." This means detecting anomalies, enforcing policies, and responding to threats as they occur.
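To make the first three controls concrete, here is a minimal Python sketch of how an agent identity might combine them: a unique agent ID for accountability, a scoped privilege that exists only while a task runs (least privilege with last-mile revocation), and an audit trail of every event. The `AgentSession` class and its method names are illustrative assumptions, not an IBM API.

```python
import time
import uuid
from contextlib import contextmanager


class AgentSession:
    """Hypothetical wrapper giving an AI agent a traceable identity."""

    def __init__(self, name):
        # Accountability: every agent instance gets a unique identifier
        # so its actions can be traced back to it.
        self.agent_id = f"{name}-{uuid.uuid4()}"
        self.audit_log = []

    @contextmanager
    def privilege(self, scope):
        """Least privilege: grant a permission only for the duration of a
        task, and revoke it when the task completes (last-mile control)."""
        self._log(f"granted {scope}")
        try:
            yield scope
        finally:
            self._log(f"revoked {scope}")

    def _log(self, event):
        # Observability: timestamped record of who did what, and when.
        self.audit_log.append((time.time(), self.agent_id, event))


# Example: an agent reads customer data under a narrowly scoped,
# temporary permission.
session = AgentSession("report-agent")
with session.privilege("db:customers:read") as scope:
    session._log(f"queried customers under {scope}")

for _, agent_id, event in session.audit_log:
    print(agent_id, event)
```

The point of the context manager is that the privileged state cannot outlive the task: even if the body raises an exception, the `finally` clause records the revocation, which matches Lynch's warning against privileges "running in that privileged state" at all times.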

Breaking Down Organizational Silos

Securing AI agents requires collaboration across departments. Kalka said: "The CISOs, the IT teams, and the Dev teams need to be working together on this." Security can't be bolted on after development; it must be integrated from the start.

This cross-functional approach establishes clear policies, implements appropriate controls, and ensures security decisions account for operational realities. Development teams understand what's technically feasible. IT teams know what's operationally sustainable. Security teams define the non-negotiables.

For managers deploying AI agents and automation, the message is direct: treat AI agents as identities requiring the same governance rigor you apply to human users. The technical implementation matters, but so does the organizational structure that oversees it.

More details are available in IBM's full discussion on their YouTube channel.

