US and allies urge critical infrastructure operators to plan and oversee AI use
Western governments released new guidance to help critical infrastructure operators adopt AI without adding avoidable risk. The document boils the message down to four principles: general risk awareness, need and risk assessment, AI model governance, and operational fail-safes.
It targets operational technology (OT) environments such as water systems, energy, transportation, and manufacturing, where a bad AI decision can cause real-world harm. The intent is simple: move forward, but keep control.
Who published the guidance
The guidance is a joint effort from CISA, the FBI, and the NSA, alongside cybersecurity agencies in Australia, Canada, Germany, the Netherlands, New Zealand, and the U.K. It reflects a shared concern: AI is entering OT fast, often without the guardrails these environments demand.
What the guidance expects from operators
- Understand AI's specific risks in OT and educate staff before deploying anything.
- Define a clear business need and risk tolerance for every AI use case. No vague pilots.
- Set explicit security requirements for vendors and models; document who is accountable.
- Assess integration challenges with existing OT systems early to avoid unsafe shortcuts.
- Create written AI use and accountability procedures, including approval and rollback paths.
- Test thoroughly before production and keep validating compliance with safety and regulatory rules.
- Keep humans in the loop for any action that could impact safety, availability, or compliance.
- Build fail-safe mechanisms so AI can fail gracefully without disrupting critical operations (a minimal sketch follows this list).
- Update incident response plans to include AI-specific failure modes and attack scenarios.
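To make the human-in-the-loop and fail-safe bullets concrete, here is a minimal Python sketch of one way an OT integration layer could gate an AI-proposed setpoint behind operator approval and fall back to a known-safe value when the proposal is missing, out of bounds, or rejected. The values, limits, and names (gate_ai_setpoint, operator_approves, SAFE_SETPOINT) are illustrative assumptions, not part of the guidance.

```python
# Hypothetical sketch: gating an AI-suggested setpoint behind operator
# approval, with a fail-safe fallback. Names and limits are illustrative only.
from dataclasses import dataclass

SAFE_SETPOINT = 55.0              # known-good fallback value (illustrative)
HARD_MIN, HARD_MAX = 40.0, 70.0   # engineering limits the AI may never exceed

@dataclass
class Decision:
    value: float
    source: str                   # "ai-approved" or "fallback"
    reason: str

def operator_approves(proposed: float) -> bool:
    """Placeholder for a human-in-the-loop approval step (HMI prompt, ticket, etc.)."""
    answer = input(f"Apply AI-proposed setpoint {proposed:.1f}? [y/N] ")
    return answer.strip().lower() == "y"

def gate_ai_setpoint(proposed: float | None) -> Decision:
    # Fail safe if the model returned nothing or an out-of-bounds value.
    if proposed is None or not (HARD_MIN <= proposed <= HARD_MAX):
        return Decision(SAFE_SETPOINT, "fallback", "proposal missing or out of bounds")
    # Keep a human in the loop for anything that touches the process.
    if not operator_approves(proposed):
        return Decision(SAFE_SETPOINT, "fallback", "operator rejected proposal")
    return Decision(proposed, "ai-approved", "operator approved")

if __name__ == "__main__":
    decision = gate_ai_setpoint(62.3)
    print(decision)   # log every decision, its source, and the reason
```

The design choice worth noting is that the fallback path never depends on the model: the safe value and hard limits come from engineering documentation, so the AI can fail without taking the process down with it.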
Why this matters for government leaders
Federal guidance over the past year has stressed two truths: AI can improve operations, and it can open new attack paths if rushed. DHS outlined roles across the ecosystem, from developers and cloud providers to operators, and the White House called for expanded sharing of AI-related security warnings with infrastructure providers.
OT already struggles with aging systems, constrained budgets, and thin security staffing. Adding AI without clear controls can create blind spots in monitoring, accountability gaps with vendors, and unsafe autonomy. Government leaders should set policy now so pilots don't outpace safety.
90-day action plan for public-sector operators
- Inventory all current and proposed AI uses in OT. Map each to a defined business need and risk owner.
- Establish decision authority: who approves AI in safety-relevant workflows, and who can hit stop.
- Require human-in-the-loop for any action that can affect safety, reliability, or regulatory obligations.
- Test before trust: red-team models, simulate failures, and run tabletop exercises with operations staff.
- Bake minimum security requirements into contracts (model provenance, update cadence, logging, support SLAs).
- Instrument monitoring and logging specific to AI decisions and model drift; set alert thresholds (a minimal sketch follows this list).
- Add AI scenarios to incident response plans and rehearse manual fallback procedures.
- Train operators and supervisors on safe AI use, escalation paths, and how to disable AI functions.
- Start small in low-impact areas, measure outcomes, and expand only after controls hold in production.
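As a rough illustration of the monitoring item above, the sketch below implements a simple threshold-based drift alarm that compares a recent window of a model input against a commissioning-time baseline. The baseline statistics, threshold, and sample values are assumptions chosen for the example; a real deployment would derive them from its own validation data.

```python
# Hypothetical sketch: a simple drift alarm comparing recent model inputs
# against a recorded baseline. Thresholds and values are illustrative.
import statistics
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-ot-monitor")

BASELINE_MEAN = 21.5      # e.g., commissioning-time average of a sensor feature
BASELINE_STDEV = 1.2
DRIFT_THRESHOLD = 3.0     # alert when the recent mean drifts > 3 baseline stdevs

def check_drift(recent_values: list[float]) -> bool:
    """Return True (and log an alert) if the recent window has drifted."""
    recent_mean = statistics.fmean(recent_values)
    drift_score = abs(recent_mean - BASELINE_MEAN) / BASELINE_STDEV
    log.info("recent_mean=%.2f drift_score=%.2f", recent_mean, drift_score)
    if drift_score > DRIFT_THRESHOLD:
        log.warning("Model input drift detected; route to manual review")
        return True
    return False

if __name__ == "__main__":
    check_drift([21.4, 21.7, 21.3])   # within baseline: no alert
    check_drift([26.1, 25.8, 26.4])   # drifted: triggers an alert
```

The point is not the statistics but the operational habit: every AI decision path gets a logged signal, a documented threshold, and a defined escalation when the threshold trips.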
Context and further reading
Official materials and updates are published by CISA and the partner agencies listed above.
Bottom line
AI in OT should be deliberate, not experimental. Define the need, keep humans in control, test hard, and plan for failure. That's how you get value without trading away safety and resilience.