CSA opens public consultation on securing Agentic AI systems
The Cyber Security Agency of Singapore (CSA) has released "Securing Agentic AI" - an Addendum to its existing Guidelines and Companion Guide on securing AI systems - for public consultation. The Addendum gives practical guidance for system owners to secure Agentic AI in real deployments. It was announced by Josephine Teo at Singapore International Cyber Week (SICW) 2025.
AI is delivering real gains across sectors, but those gains depend on trust and security. CSA's original Guidelines laid out core security principles across the AI lifecycle, and the Companion Guide translated them into actionable measures. This new Addendum focuses on the next wave: Agentic AI that plans, decides, and acts.
Why Agentic AI ups the security bar
Agentic AI can interpret context, form plans, and take independent actions to hit a goal. That autonomy, paired with tool and data access, expands the attack surface. Think prompt injection with consequences, tool misuse, data exfiltration, and unintended actions at machine speed.
What the Addendum covers
- Risk identification by capability: Map agentic workflows end-to-end to spot where threat actors could intervene or escalate access.
- Lifecycle controls: Practical controls spanning design, build, test, deploy, and operate - so risks are managed before and after release.
- Implementation examples: Guidance across different autonomy levels and scenarios, including coding assistants, automated client onboarding, and fraud detection systems.
- Use with existing materials: The Addendum is meant to be read alongside CSA's Guidelines and Companion Guide.
What IT and development teams can do now
- Inventory agentic behaviors: List where your system plans, calls tools, or loops; define clear role and permission boundaries for each action.
- Threat model the workflow: Identify injection points (inputs, tools, connectors, memory), escalation paths, and sensitive operations.
- Least privilege for tools: Scope API keys, databases, and function access. Use allowlists and per-action consent where feasible.
- Human-in-the-loop for high-impact actions: Require review/approval on money movement, data deletion, provisioning, or policy changes.
- Guardrails and constraints: Policy prompts, output filters, tool-use constraints, and semantic checks for intent drift.
- Isolation and staging: Run agents in sandboxed environments; separate dev/test/prod; use ephemeral credentials and scoped tokens.
- Prompt injection defenses: Input sanitization, content provenance checks, instruction hierarchy, and red-teaming against known attack patterns.
- Data safeguards: Mask PII, apply row/column-level security, log and rate-limit data access, and monitor for unusual export behaviors.
- Auditability: Full trace of prompts, tool calls, decisions, and outputs; immutable logs for post-incident analysis.
- Fallbacks and kill switches: Safe defaults on tool error, rollback plans, and the ability to suspend agent actions quickly.
- Testing at autonomy levels: Test behaviors with increasing freedom; verify safety constraints before enabling higher autonomy.
Public consultation timeline
CSA's public consultation on the Addendum runs from Oct 22, 2025 to Dec 31, 2025. If you build, secure, or operate AI systems, your feedback matters - especially on threat models, guardrails, and operational controls for autonomous behaviors.
For official details and documents, visit the Cyber Security Agency of Singapore at csa.gov.sg. The Addendum is intended to be used with the existing Guidelines and Companion Guide.
Helpful resources
- Practical AI certification for developers - sharpen secure AI development habits and implementation patterns.
Your membership also unlocks: