Microsoft Adds Identity Controls for AI Agents to Tighten Security
Microsoft launched several tools this week to help organizations manage AI agents more closely, including agent identities in its Entra ID service and guardrails in its Azure AI Foundry platform. The company announced these measures at the RSAC Conference, addressing a critical gap in how enterprises control autonomous AI systems.
The move comes as AI agent adoption has exploded in 2025, creating new security risks that traditional controls don't address. More than half of the companies surveyed by analyst firm Omdia lack confidence that they can secure resources accessed by nonhuman identities.
Agents Need Identities
Microsoft's core strategy treats AI agents the way it treats users and applications: each agent gets its own identity within Entra ID. This lets security teams track what agents do, assign them permissions, and log their behavior.
Herain Oberoi, vice president of data and AI security at Microsoft, said the proliferation of unmanaged AI agents poses the most pressing threat to enterprise security, more urgent than AI sprawl, data leakage, or new regulations. An agent registry will track which agents work on behalf of users and which operate independently.
The company also expanded guardrails - collections of controls assigned to specific models or agents - and added tools to flag when agents receive risky capabilities. Microsoft is using its existing identity infrastructure for users, apps, and devices as the foundation for agent controls.
Using Agents to Defend Against Agents
Microsoft updated its Security Copilot to deploy agents that improve incident response. A new Security Triage Agent summarizes alerts in the background, while a Security Analyst agent conducts multi-step investigations across infrastructure using data from Microsoft Defender and Sentinel.
A posture agent assesses data security and recommends risk remediation. These agents use the new identity registry to operate transparently within the security stack.
The company also added an AI pillar to its Zero Trust Workshop, outlining defense-in-depth strategies for securing autonomous agents. Oberoi said the security industry will need to continue developing controls as agentic systems evolve.
For managers overseeing AI deployments, a working understanding of AI governance and identity controls is now essential. Organizations should also understand how generative AI and large language models power these autonomous systems, so they can make informed decisions about agent deployment and risk management.