Local Government Faces Rising Cyber Threats as AI Adoption Accelerates
Cleveland Municipal Court shut down for a week in early 2025 after the Qilin ransomware group breached its systems. The court couldn't resume normal operations for more than three weeks. Background checks halted. Dozens of trials were rescheduled. The website remained offline for nearly a month.
That attack was one of many targeting local government offices last year. As cities expand their use of AI tools, security experts warn that cyberattacks will intensify in 2026-and that attackers are now using AI themselves.
The Adoption Problem
Many cities are already deploying AI to automate repetitive work. Los Angeles, Austin, and Honolulu use AI to speed up planning and permitting. Several municipalities have launched AI chatbots to improve 3-1-1 resident services.
But adoption is outpacing governance. When official guidance lags, employees fill the gap themselves. A 2025 report from Menlo Security found that 68 percent of government employees use free-tier AI tools like ChatGPT on personal accounts. Fifty-seven percent of those employees input sensitive data into these tools.
The cost is steep. IBM's 2025 Cost of a Data Breach Report found that incidents involving unsanctioned AI use cost organizations an average of $670,000 more than typical breaches. Ninety-seven percent of AI-related security incidents involved systems that lacked proper limits on data access.
Two Emerging Threats in 2026
Prompt Injection
Hidden instructions embedded in emails, PDFs, images, and webpages can hijack AI assistants. These attacks trick AI systems into extracting sensitive data or taking unauthorized actions.
Local governments using AI tools that process external documents are most at risk. Mitigation requires screening incoming data for malicious prompts and limiting what data AI systems can access.
AI-Led Cyberattacks
Attackers are using AI to conduct cyberattacks with minimal human involvement. Anthropic disrupted a campaign in which hackers used Claude to handle 80-90 percent of their operations.
Small and midsized city offices face the highest risk. As AI makes attacks cheaper to execute, more organizations become profitable targets. Mitigation requires strong passwords, multi-factor authentication, phishing awareness, least privilege principles, robust backups, and patch automation. Regular AI-powered penetration testing also helps.
What Local Governments Should Do Now
Start with policy. Establish clear AI governance that documents how adoption is managed, monitored, and where risks are addressed.
Train staff on AI risks. Employees often assume approved tools are inherently safe. They need education on how AI tools fail and what responsibilities come with using them.
Know where AI is being used. Generative AI is embedded in many existing software products. The line between approved and unapproved use can blur quickly. Inventory all AI applications across your organization.
Monitor usage. Set key performance and risk indicators. Track AI use to ensure policy compliance and help employees adopt tools more effectively.
Don't avoid adoption. Understanding AI capabilities and failure modes requires hands-on experience. The risk isn't in using AI-it's in using it without oversight.
Data Handling Considerations
Criminal Justice Information is subject to special handling requirements that may restrict which AI vendors you can use.
Personal accounts with tools like ChatGPT should be considered unsafe for sensitive data. Court-mandated data retention and disclosure rules create additional liability. Corporate accounts with stronger privacy agreements are standard practice.
Interactions with AI tools may become subject to public records requests. Set policies that account for this.
Always validate AI outputs before acting on them. Current AI tools are pattern-matching systems trained to produce plausible-sounding text. They cannot guarantee accuracy.
Be cautious with AI for HR decisions. These tools often cannot explain their recommendations in auditable terms, making it difficult to rule out discrimination. New York and California have specific regulations on AI use in hiring.
The Governance Imperative
The threat landscape is moving faster than most governance structures can adapt. Local governments that treat AI security as a core requirement-not an afterthought-will be best protected and positioned to benefit from AI's capabilities.
For guidance on implementing these practices, explore resources on AI for Government and AI for Cybersecurity Analysts.
Your membership also unlocks: