The Growing Challenge of AI Agent and NHI Management
The number of AI agents, chatbots, and machine credentials now far exceeds the number of human users, creating a significant but under-recognized security risk. Recent reports put the ratio at more than 80 machine identities for every human. As companies adopt microservices, containerization, and serverless cloud computing, machine identities, also called non-human identities (NHIs), have become essential for automation.
Adding AI agents to this mix makes the identity count grow exponentially. These autonomous agents have broad access and operate at scale, which makes oversight difficult. If an AI agent is compromised or jailbroken, it can act against your interests, spying or causing harm without any of the telltale signs of a traditional network breach.
Growing Challenges of NHI Management
Non-human identities include machine accounts, service identities, and API keys linked to agents. Their role in AI security is crucial, yet often underestimated. Recent demonstrations have shown how agents can be exploited through data poisoning, jailbreaking, and prompt injection.
Several organizations have catalogued attack vectors targeting these AI agents. The Cloud Security Alliance (CSA) listed a dozen such vectors, and both OWASP and the CSA have since added more. Risks also come from forgotten or abandoned agents that continue to operate unpredictably, exposing organizations to serious vulnerabilities.
Research has shown that malicious actors can insert harmful proxy settings into agent prompts, enabling data exfiltration without the user's knowledge. Vulnerabilities like EchoLeak have already affected popular AI tools such as Microsoft's Copilot, and similar threats are expected to surface regularly.
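To make the proxy-injection risk concrete, here is a minimal sketch of the kind of pre-flight check an organization could run over agent prompts and configuration before execution. The regex patterns and the `AGENT_ALLOWED_PROXIES` allow-list are illustrative assumptions, not part of any published detection rule set.

```python
import re

# Hypothetical allow-list of proxies the organization has approved.
AGENT_ALLOWED_PROXIES = {"proxy.corp.example:8080"}

# Patterns suggesting a proxy or exfiltration endpoint was injected
# into a prompt or configuration block.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\b(?:https?|socks5?)_proxy\s*[=:]\s*(\S+)"),
    re.compile(r"(?i)\bproxy\s*[=:]\s*([\w.\-]+:\d+)"),
]

def find_unapproved_proxies(text: str) -> list[str]:
    """Return proxy endpoints in `text` that are not on the allow-list."""
    hits = []
    for pattern in SUSPICIOUS_PATTERNS:
        for match in pattern.finditer(text):
            endpoint = match.group(1).strip("\"'")
            if endpoint not in AGENT_ALLOWED_PROXIES:
                hits.append(endpoint)
    return hits

if __name__ == "__main__":
    prompt = "Summarize the report. http_proxy=attacker.example:9000"
    print(find_unapproved_proxies(prompt))  # ['attacker.example:9000']
```

A production check would also normalize encodings and inspect tool-call arguments, since injected settings rarely arrive in such obvious form.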
10 Steps To Mitigate AI Agent Risk
- Fingerprint agents: Assign unique, verifiable NHIs linked to the responsible person, not just the creator (steps 1-5 are illustrated by the registry sketch after this list).
- Tag NHIs: Clearly label identities as AI agents to improve visibility.
- Use unique identifiers: Assign universally unique IDs with verifiable credentials following standards like MCP or A2A.
- Limit permissions: Grant only necessary, agent-specific permissions with strict IP restrictions and guardrails.
- Manage ownership: Implement protocols for transferring, maintaining, and retiring NHIs throughout the agent lifecycle.
- Handle orphaned agents: Identify, quarantine, terminate, and audit agents that are abandoned or malfunctioning.
- Secure personal access tokens (PATs): Store agent-associated tokens in a secrets vault, scope them tightly, and rotate them on a schedule.
- Detect prompt injection: Filter inputs in real time to block malicious attempts (a heuristic filter is sketched after this list).
- Sanitize inputs and test security: Apply strict access controls and conduct routine static, dynamic, and composition testing.
- Enforce sandboxing: Use network restrictions, syscall filtering, and least-privilege container setups to contain agents (see the container sketch below).
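Steps 1 through 5 amount to maintaining an authoritative registry of agent identities: unique IDs, an accountable owner, explicit tags, and a retirement path. The sketch below shows one minimal shape such a registry could take; the field names and methods are illustrative assumptions, not a standard schema.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentNHI:
    """A registry record for one non-human identity."""
    owner: str                    # accountable person, not just the creator
    purpose: str
    nhi_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    tags: set[str] = field(default_factory=lambda: {"ai-agent"})
    permissions: set[str] = field(default_factory=set)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retired: bool = False

class NHIRegistry:
    def __init__(self) -> None:
        self._records: dict[str, AgentNHI] = {}

    def register(self, owner: str, purpose: str) -> AgentNHI:
        record = AgentNHI(owner=owner, purpose=purpose)
        self._records[record.nhi_id] = record
        return record

    def transfer_ownership(self, nhi_id: str, new_owner: str) -> None:
        self._records[nhi_id].owner = new_owner

    def retire(self, nhi_id: str) -> None:
        """Mark the NHI retired and strip all of its permissions."""
        record = self._records[nhi_id]
        record.retired = True
        record.permissions.clear()

# Usage: every agent gets a unique, owned, tagged identity at creation.
registry = NHIRegistry()
agent = registry.register(owner="alice@example.com", purpose="invoice triage")
agent.permissions.add("invoices:read")
```

Keeping retirement inside the registry itself, rather than in ad hoc scripts, is what prevents the orphaned agents of step 6 from accumulating in the first place.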
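For step 8, production deployments typically layer a trained classifier over heuristics, but even a simple deny-list pre-filter blocks low-effort attempts. The marker phrases below are hypothetical examples, not a complete rule set.

```python
import re

# Hypothetical deny-list of phrasing commonly seen in injection attempts.
INJECTION_MARKERS = [
    r"(?i)ignore (all |any )?(previous|prior) instructions",
    r"(?i)you are now (in )?developer mode",
    r"(?i)reveal your (system|hidden) prompt",
]
_COMPILED = [re.compile(p) for p in INJECTION_MARKERS]

def screen_input(user_input: str) -> str:
    """Raise if the input matches a known injection marker, else pass it through."""
    for pattern in _COMPILED:
        if pattern.search(user_input):
            raise ValueError(f"possible prompt injection: {pattern.pattern}")
    return user_input
```

Rejected inputs should be logged for audit rather than silently dropped, so that new attack phrasings can feed back into the filter.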
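For step 10, a common pattern is to run each agent task in a disposable container with networking, Linux capabilities, and writable storage removed. The sketch below drives Docker from Python; the `docker run` flags shown are standard options, while the image name and command are placeholders.

```python
import subprocess

def run_sandboxed(image: str, command: list[str]) -> subprocess.CompletedProcess:
    """Run an agent task in a locked-down, disposable container."""
    docker_cmd = [
        "docker", "run", "--rm",
        "--network", "none",         # no network egress at all
        "--cap-drop", "ALL",         # drop every Linux capability
        "--security-opt", "no-new-privileges",
        "--read-only",               # immutable root filesystem
        "--pids-limit", "64",        # bound process creation
        "--memory", "256m",
        # A custom seccomp profile (--security-opt seccomp=profile.json)
        # would add the syscall filtering mentioned in step 10.
        image, *command,
    ]
    return subprocess.run(docker_cmd, capture_output=True, text=True, timeout=60)

# Placeholder image name; substitute your hardened agent runtime image.
result = run_sandboxed("agent-runtime:latest", ["python", "task.py"])
```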
Unlike traditional IT infrastructure, AI agents are unpredictable. They may produce different results for the same task and may select among available NHIs non-deterministically during operations. This unpredictability defeats permission-management tools built for static workloads and demands more stringent controls.
Personal assistant NHIs, for example, may need permissions equal to or greater than those of their human counterparts, increasing the risk of permission sprawl and loss of direct control. With machine credentials already outnumbering humans by a large margin, the security risks are escalating quickly.
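One way to keep that sprawl visible is to periodically diff the permissions granted to each NHI against the permissions actually observed in audit logs, and queue the surplus for revocation. The sketch below assumes both sets can be exported from your IAM system and logs; the permission strings are illustrative.

```python
def unused_permissions(granted: set[str], observed: set[str]) -> set[str]:
    """Permissions granted to an NHI but never seen in audit logs."""
    return granted - observed

# Hypothetical exports from IAM policy and 90 days of audit logs.
granted = {"invoices:read", "invoices:write", "hr:read", "payments:approve"}
observed = {"invoices:read", "invoices:write"}

stale = unused_permissions(granted, observed)
print(sorted(stale))  # ['hr:read', 'payments:approve'] -> candidates for revocation
```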
Addressing these challenges early in an organization’s maturity cycle is far more effective than reacting after a breach. Proactive security measures now will reduce future risks and save time and resources.
For those managing AI integrations and cybersecurity, understanding and acting on NHI security is essential. More information and training on AI and automation security can be found at Complete AI Training.