AI and identity: Government at the next frontier
AI is changing how government defines and manages identity. As machine actors surge across networks and services, the core question is blunt: who - or what - gets access, and on what terms?
That focus ran through a recent panel with Howard Tweedie (ex-Ministry of Defence), Jonathan Neal (Saviynt) and Ian Norton (advisor to the UK's One Login programme). Their message: identity is now bigger than people. It's decisions, machines, and the guardrails that keep both in check.
From platforms to decisions
Military lessons from Libya, Syria and Ukraine point to a clear shift. Civilian data and networks now support operations. We are moving from a platform-centric view to an information- and decision-centric approach.
The volume of social media data and the use of AI, machine learning and automation have sped that shift since 2021. Capability now depends on how fast we can sense, decide and act - not just on the kit we buy.
Two sides: AI for identity, and identity for AI
Neal drew a line between two agendas. First, AI for identity: better lifecycle management, fewer manual tasks, higher accuracy in access decisions, and compliance at scale.
Second, identity for AI: establishing trust for non-human agents. There is no HR file for an API client, a bot, or a model. Yet these entities touch critical systems and sensitive data. More than 60% of internet traffic is machine-to-machine. In many enterprises, non-human identities outnumber humans by roughly 45:1 - and the ratio is climbing.
Old questions, new tools
Norton cut through the hype: the questions stay the same - who are you, can I trust you, can I grant access? What changes is the proof. Yesterday it was a passport at a counter. Today it's digital evidence, verifiable credentials, and decentralised identifiers.
The real shift is the "identity of things." Systems talk to systems. AI agents call other AI agents. If we don't manage those identities end-to-end, adversaries will exploit the gaps.
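To ground the point, here is a minimal sketch of how a service might answer those three questions for a machine caller, loosely modelled on verifiable credentials and DID-style identifiers. It is illustrative only: the `MachineCredential` class, the issuer list and the HMAC stand-in for a real cryptographic proof are assumptions, not a reference to the W3C specifications or any product.

```python
"""Sketch: "who are you, can I trust you, can I grant access?"
for a machine caller. The HMAC is a demo stand-in for a real proof."""

import hashlib
import hmac
import json
import time
from dataclasses import dataclass

# Issuers we trust, keyed by DID-style identifier. A real deployment would
# resolve DIDs to public keys; a shared secret stands in here for brevity.
TRUSTED_ISSUERS = {"did:example:gov-issuer": b"demo-secret"}


@dataclass
class MachineCredential:
    issuer: str        # DID of the issuing authority ("can I trust you?")
    subject: str       # DID of the machine or agent ("who are you?")
    scopes: tuple      # access the issuer vouches for
    expires_at: float  # epoch seconds; proofs should be short-lived
    signature: str     # hex HMAC over the payload (demo stand-in)

    def payload(self) -> bytes:
        return json.dumps(
            [self.issuer, self.subject, list(self.scopes), self.expires_at]
        ).encode()


def verify(cred: MachineCredential, wanted_scope: str) -> bool:
    """Answer the three questions in order: trust, identity, access."""
    key = TRUSTED_ISSUERS.get(cred.issuer)
    if key is None or time.time() > cred.expires_at:
        return False  # untrusted issuer or stale proof
    expected = hmac.new(key, cred.payload(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred.signature):
        return False  # proof of identity failed
    return wanted_scope in cred.scopes  # grant only what was vouched for
```

A production system would use asymmetric signatures, a DID resolver and revocation checks, but the decision flow is the same: confirm the issuer, check the expiry, verify the proof, then grant only the vouched-for scope.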
Machine actions, human consequences
One example tells the story. A mail-assistant app added a single rogue line in version 16: "open every email." The agent complied. No checks. No pause. That's the risk when machine actors execute instructions without judgement and identity controls are thin.
Tweedie's view: adopt zero trust and broaden the team. You need engineers, lawyers and social scientists in the same room. In defence, that includes clear rules for targeting decisions - where a human must be in, on, or out of the loop.
What government should do now
- Be explicit on use cases and outcomes. Start small. Measure. Iterate.
- Adopt zero trust as a standard, not a project. Map to NIST SP 800-207 and UK guidance.
- Build a single identity fabric for humans and machines. Unify IAM, PAM, secrets management and workload identity.
- Create a machine identity lifecycle: inventory, attest provenance, assign clearance, rotate credentials, enforce least privilege and time-bound access (see the first sketch after this list).
- Use verifiable credentials and decentralised identifiers where they help reduce central points of failure.
- Enforce human-in-the-loop for high-consequence actions. Predefine which actions AI agents may take without approval (see the second sketch after this list).
- Strengthen observability: continuous discovery of identities, full audit trails, anomaly detection, and a kill switch for agents.
- Bake identity into procurement. Require standards (OIDC/OAuth2, mTLS), SBOMs, model and dataset lineage, and third-party attestations.
- Stand up cross-functional governance (security, legal, ethics, operations). Approve policies, run red-team exercises, and review incidents fast.
- Align with national platforms. Ensure services interoperate with One Login for Government and shared API standards.
- Use AI to manage AI at scale. Automate discovery, entitlement reviews, and drift detection - with clear human oversight.
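Two of the bullets above lend themselves to quick illustrations. First, a rough sketch of what a machine identity lifecycle record might carry; the `MachineIdentity` class, its field names and the 90-day rotation period are assumptions for illustration, not any particular product or standard.

```python
"""Sketch of a machine identity lifecycle record: inventory entry,
provenance attestation, clearance, rotation and time-bound access."""

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Assumed rotation policy for illustration, not a mandated value.
ROTATION_PERIOD = timedelta(days=90)


@dataclass
class MachineIdentity:
    identity_id: str                 # inventory entry for the workload or agent
    owner_team: str                  # accountable human owner
    provenance: str                  # attested origin, e.g. a build or SBOM reference
    clearance: str                   # assigned handling level, e.g. "OFFICIAL"
    entitlements: set[str] = field(default_factory=set)  # least privilege
    credential_issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    access_expires_at: datetime | None = None             # time-bound access

    def needs_rotation(self) -> bool:
        """Flag credentials that have outlived the rotation policy."""
        return datetime.now(timezone.utc) - self.credential_issued_at > ROTATION_PERIOD

    def may_use(self, entitlement: str) -> bool:
        """Least-privilege, time-bound check before any grant."""
        if self.access_expires_at and datetime.now(timezone.utc) > self.access_expires_at:
            return False
        return entitlement in self.entitlements
```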
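Second, a minimal sketch of the human-in-the-loop rule combined with a kill switch and a default-deny stance; the action names and the in-memory `approval_queue` are hypothetical stand-ins for a real review workflow.

```python
"""Sketch of a human-in-the-loop gate for agent actions, with a kill
switch and default-deny. Action names are hypothetical."""

# Actions the agent may take autonomously vs. those needing human sign-off.
AUTONOMOUS_ACTIONS = {"read_public_dataset", "draft_reply"}
HIGH_CONSEQUENCE_ACTIONS = {"send_email", "change_entitlement", "delete_record"}

KILL_SWITCH_ENGAGED = False      # operators flip this to halt all agent activity
approval_queue: list[dict] = []  # stands in for a real human review workflow


def authorise(agent_id: str, action: str, target: str) -> str:
    """Return 'allow', 'queued' or 'deny', leaving an audit record each time."""
    if KILL_SWITCH_ENGAGED:
        return "deny"
    if action in AUTONOMOUS_ACTIONS:
        print(f"AUDIT allow {agent_id} {action} {target}")
        return "allow"
    if action in HIGH_CONSEQUENCE_ACTIONS:
        approval_queue.append({"agent": agent_id, "action": action, "target": target})
        print(f"AUDIT queued {agent_id} {action} {target}")
        return "queued"          # executes only after a human approves
    print(f"AUDIT deny {agent_id} {action} {target}")
    return "deny"                # default-deny anything not classified in advance


# The rogue instruction from the mail-assistant example is unclassified,
# so it falls through to default-deny instead of executing silently.
print(authorise("mail-assistant", "open_every_email", "all-mailboxes"))  # -> deny
```

The default-deny stance and the audit line echo the zero trust and observability bullets above: nothing runs on trust alone, and every decision leaves a trace.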
Guardrails before scale
Norton's advice: keep focus, define guardrails (standards, policy, technical controls), and expect exceptions. Neal added that continuous validation is essential; you won't keep up without automation. Small, safe steps beat big bets.
The new identity frontier
There are no borders here. Departments use the same technologies citizens use at home, which means shared risks. Move fast, but keep ethics front and centre.
AI gives speed and flexibility - you can stand up services in days, not months. The trade-off is consequence. Leaders must ask: do we want this outcome, and is it acceptable?
Upskill your team
If you're setting up AI identity controls or zero trust programmes and need structured learning for your team, explore AI courses by job role.