Prompt Injection Emerges as Daily Risk in Government AI Use
State and territorial governments are deploying generative AI tools across daily operations, and a persistent security vulnerability is traveling with them. The Center for Internet Security released a report identifying prompt injection as a critical threat tied to that expansion.
A 2025 survey of 51 state and territorial CIOs found that 82% reported employees using GenAI in daily work, up from 53% the year before. Most organizations have moved beyond pilot testing into widespread deployment. AI ranked as the top policy and technology priority for 2026.
Government teams use these tools for routine tasks: summarizing documents, drafting emails, writing code, and managing schedules.
How the vulnerability works
Language models cannot distinguish between instructions and regular data. A malicious instruction embedded in an email, webpage, or document gets processed the same way as a legitimate request.
This creates two attack vectors. Direct prompt injection happens when someone tries to override a model's safeguards through direct interaction. Indirect prompt injection hides malicious instructions inside external content that AI systems later retrieve and process.
The problem persists despite training efforts. Research shows that targeted training alone does not provide sufficient protection against these attacks.
Real-world attacks in government systems
Several documented cases show how prompt injection spreads through connected systems.
In one example, a GenAI code assistant processed hidden instructions embedded in a documentation page and sent code snippets and AWS API credentials to an external URL. An update to Amazon Q for Visual Studio Code in July 2025 contained a prompt instructing the AI agent to delete files, terminate servers, and remove cloud data. AWS patched the issue two days later.
The Morris II worm demonstrates persistence. A malicious prompt embedded in an email entered a retrieval-augmented generation database through an AI email assistant. The assistant then generated additional emails containing the same malicious prompt, along with sensitive information extracted from the system.
In the GeminiJack case, malicious instructions hidden in enterprise data sources-Google Docs, calendar entries-triggered data theft when retrieved through search. Google separated Vertex AI Search from Gemini Enterprise to address the vulnerability.
What government organizations should do
Control measures focus on limiting what AI systems can access and requiring human oversight.
- Define acceptable use policies for AI tools and train employees on handling sensitive data
- Map which systems and data AI platforms can reach
- Enforce least privilege access
- Require human approval before actions involving sensitive data or code execution
- Review logs regularly for unusual behavior
The Open Web Application Security Project identified prompt injection as the top risk category for GenAI and language model applications. Understanding Generative AI and LLM vulnerabilities and Prompt Engineering best practices can help government teams recognize and mitigate these attacks.
Your membership also unlocks: