What Is Adversarial AI, and How Can State and Local Agencies Defend Against It?
Cybercriminal groups are increasingly using artificial intelligence to develop more evasive and effective attacks. This shift toward adversarial AI has changed the cybersecurity threat landscape for state and local agencies. For example, the ransomware group FunkSec specializes in AI-assisted malware creation. Other attackers use AI to craft highly convincing malicious attachments and deceptive social engineering content, powering sophisticated phishing campaigns that slip past traditional email security measures.
For government agencies frequently targeted by ransomware, the first step in defending against this evolving threat is recognizing how adversarial AI is enhancing ransomware tactics.
What Is Adversarial AI?
Traditionally, adversarial AI referred to attacks on AI systems themselves, such as feeding them manipulated inputs or poisoned training data. Now, the term increasingly describes attackers who use AI to carry out malicious actions. Some threat actors, like Scattered Spider, leverage large language models to automate parts of their operations.
AI helps attackers quickly design strategies and malware. For instance, hackers can use AI to plan an attack sequence, such as deploying a remote access Trojan first and following up with ransomware. AI also speeds up malware coding, letting adversaries automate the creation of new threats.
How Ransomware Groups Use Adversarial AI
Ransomware attacks rely heavily on phishing and social engineering to gain initial access. AI excels at generating convincing phishing emails and social engineering content, making it easier for attackers to trick employees into clicking malicious links or downloading harmful files.
Attackers have also become faster. The average time adversaries need to move from initial access to lateral movement inside a network dropped by 14 minutes last year, partly because of AI assistance.
AI also improves ransomware extortion tactics. After stealing data, attackers use AI to quickly identify the most sensitive information, increasing the chances of a higher ransom payment. Groups like FunkSec openly claim to use AI to speed up malware development and overall operations.
The Rise of FunkSec and AI-Driven Threats
FunkSec gained notoriety rapidly, claiming nearly 100 victims in its first month. The group operates as a Ransomware-as-a-Service (RaaS) provider, offering subscription-based access to its ransomware tools. This model lets many attackers launch ransomware campaigns using FunkSec’s AI-powered resources.
For state and local IT leaders, this is a serious warning. The speed at which adversaries can breach and move inside systems is increasing, and agencies must respond with equal urgency to keep up.
How State and Local Agencies Can Defend Against Adversarial AI
Start with the basics: identify what assets need protection and understand external attack surfaces. Knowing where vulnerabilities exist is critical.
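As a concrete illustration, the short Python sketch below probes a handful of common service ports on externally facing hosts to see what is reachable from the internet. The hostnames and port list are placeholders rather than real agency assets, and a production attack surface assessment would rely on dedicated discovery and scanning tools; the point is that even a basic inventory of exposed services can be automated.

```python
# Minimal external attack surface check (illustrative only).
# ASSETS and COMMON_PORTS are placeholders; substitute your own inventory,
# and only scan systems you are authorized to test.
import socket

ASSETS = ["www.example-agency.gov", "vpn.example-agency.gov"]  # hypothetical hosts
COMMON_PORTS = [22, 80, 443, 3389, 8080]  # SSH, HTTP, HTTPS, RDP, alternate HTTP

def open_ports(host, ports, timeout=2.0):
    """Return the subset of ports that accept a TCP connection."""
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass  # closed, filtered, or host not resolvable
    return reachable

if __name__ == "__main__":
    for host in ASSETS:
        print(host, open_ports(host, COMMON_PORTS))
```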
Next, leverage AI-powered defense tools. Deploy AI systems that analyze network traffic in real time, monitor endpoints, and strengthen email security. These defenses can detect and respond to sophisticated AI-driven attacks more effectively than signature-based tools alone.
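To make that less abstract, here is a minimal sketch of the kind of anomaly detection such tools automate, using scikit-learn’s IsolationForest on simulated network flow records. The features and sample values are invented for illustration; commercial AI-driven platforms ingest far richer telemetry, but the underlying idea of flagging traffic that deviates from a learned baseline is the same.

```python
# Sketch: flag anomalous network flows with an Isolation Forest.
# All data below is simulated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" flows: [bytes sent, bytes received, duration (s), destination port]
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 500),    # bytes sent
    rng.normal(20_000, 4_000, 500),   # bytes received
    rng.normal(30, 10, 500),          # duration in seconds
    rng.choice([80, 443], 500),       # typical web ports
])

# A few suspicious flows: large outbound transfers to an unusual port,
# loosely resembling data staging before a ransomware detonation.
suspicious_flows = np.array([
    [900_000, 1_000, 600, 4444],
    [750_000, 2_000, 550, 4444],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)
print(model.predict(suspicious_flows))  # -1 marks an anomaly, 1 marks an inlier
```

In practice, the hard work lies in collecting the telemetry and tuning the models, which is exactly what commercial AI-powered monitoring products and managed service providers handle at scale.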
For many government agencies with limited resources, partnering with managed security service providers is essential. These partners offer threat hunting and monitoring capabilities that can track AI-assisted adversaries and help agencies stay ahead of emerging threats.
By combining foundational security practices with AI-powered defenses and external expertise, state and local agencies can better protect themselves from adversarial AI-driven ransomware attacks.