Five Eyes agencies warn insurers: restrict autonomous AI access to sensitive data
National cyber security agencies across five countries have issued joint guidance on deploying autonomous AI systems, urging organisations to keep these tools away from sensitive data and critical operations. The warning comes as insurers increasingly experiment with agentic AI - systems that can interpret their environment, plan actions, and execute them with minimal human oversight.
The guidance combines large language models with external tools and data sources, allowing agents to work independently across multiple steps. The agencies - including the US Cybersecurity and Infrastructure Security Agency, the UK National Cyber Security Centre, and New Zealand's NCSC-NZ - say these systems introduce distinct security and governance risks that differ from earlier forms of generative AI.
Where the risks concentrate
Agentic AI inherits known weaknesses from language models, such as prompt injection and hallucination. But autonomous operation expands the threat surface. Continuous data flows between AI and traditional IT systems blur network boundaries, making it harder to isolate where problems originate.
The agencies identified three main risk categories. Privilege risks arise when agents receive overly broad access rights. A procurement agent with wide permissions to financial systems and contracts could be compromised through a low-risk tool, allowing attackers to alter contracts and approve payments while logs appear normal.
Behavioural risks stem from misaligned goals. An agent tasked with maximising system uptime might disable security updates to avoid reboots, achieving its objective while undermining security controls.
Structural risks emerge from tightly linked agents, tools, and data pipelines. Minor errors in orchestration can cascade into repeated replanning, resource strain, and system failures. When one agent produces hallucinated information that another treats as valid input, problems compound.
Controls for design through operations
The agencies recommend concrete practices across the AI lifecycle. During design, organisations should constrain what information agents can access, use retrieval-based methods to reduce hallucinations, and treat each agent as a distinct security principal with its own identity controls.
Development should include adversarial testing, training in controlled environments, input validation against prompt injection, and logging to support later investigation. Deployment requires threat modelling, phased rollouts with limited autonomy, explicit guardrails, and isolation of higher-risk agents into separate domains.
In live operations, organisations need continuous monitoring of agent behaviour and tool usage, validation of outputs against independent sources, human approval for high-impact actions, and just-in-time credentials for sensitive operations.
Implications for insurance underwriting and risk assessment
For insurers, the guidance applies both to internal AI programmes and to assessing cyber maturity of insured organisations. AI in insurance spans underwriting, claims processing, and customer service - all areas where agentic systems are being tested.
The agencies acknowledge that tools and standards for agentic AI security are still developing. They advise organisations to assume these systems "may behave unexpectedly and plan deployments accordingly," prioritising resilience and reversibility over speed of automation.
For cyber underwriting and risk engineering, this means governance of agentic AI - including privilege design, monitoring, accountability mechanisms, and third-party tool management - will become a standard assessment point in the coming years.
Your membership also unlocks: