AI agents conduct full cyberattack campaigns against Latin American government and financial targets

Two AI-driven attack campaigns hit Latin American governments and banks, with agents automating intrusions from initial access to data theft. Strong patching and zero-trust controls stopped the attacks where they were applied.

Categorized in: AI News Government
Published on: May 12, 2026
AI agents conduct full cyberattack campaigns against Latin American government and financial targets

Two AI-Driven Threat Campaigns Target Latin American Governments and Banks

Security researchers have identified two separate threat campaigns using AI agents to conduct intrusions against government entities and financial organizations across Latin America. The campaigns, tracked as SHADOW-AETHER-040 and SHADOW-AETHER-064, represent among the first documented cases where AI agents executed attacks from initial access through data theft.

SHADOW-AETHER-040 targeted six Mexican government entities between late December 2025 and early January 2026. SHADOW-AETHER-064 began targeting Brazilian financial organizations in April 2026. Both groups used similar tactics but operated independently, distinguished primarily by language-Spanish and Portuguese respectively.

How the attacks worked

Both campaigns deployed agentic AI tools that integrated with large language models to automate attack tasks. SHADOW-AETHER-040 used Anthropic's Claude through a command-line interface, sending prompts to the model and executing resulting commands. The AI agents accessed external services like Shodan and VulDB to identify vulnerabilities and attack surfaces.

The attackers did not fully automate operations. Instead, they used AI agents as assistants, supervising behavior and correcting course when the agent deviated. When prompted to attack government targets directly, the AI model often refused. Attackers overcame this by framing operations as authorized red team exercises.

Once inside networks, the AI agents established SOCKS5 tunnels using tools like Chisel and Neo-reGeorg, enabling remote access to internal systems. The agents then conducted lateral movement, credential theft, and data exfiltration through SSH connections routed via ProxyChains.

AI-generated code replaced pre-built tools

Rather than deploying existing hacking tools, both campaigns instructed AI agents to generate commands and scripts on demand. This approach reduced detection risk because dynamically generated code differs with each execution, avoiding signatures that security tools rely on.

SHADOW-AETHER-040 deployed a previously unreported Python backdoor called implante_http. Analysis showed clear indicators of AI generation: explanatory comments typical of AI clarifying code, extensive documentation of iterative changes, emoji icons in messages, and perfect error handling. The backdoor supported command execution, file transfer, SSH tunneling, and interactive shell access.

SHADOW-AETHER-064 developed custom tools including POW (Proxy over Web) and SOCKTZ, a reverse SOCKS5 tunneling utility written in Go. Both tools showed signs of AI-assisted development through comprehensive comments and iterative refinement across multiple versions.

What AI agents accomplished

The AI agents performed specific tasks under human direction. For SHADOW-AETHER-040, these included establishing tunnels, deploying backdoors, maintaining persistence through cron jobs and SSH keys, scanning internal networks, generating exploit scripts, checking for security software, and exfiltrating databases via SQL commands.

SHADOW-AETHER-064 agents performed similar functions: port scanning, SQL injection testing, credential gathering from configuration files, password spraying, account creation, Group Policy modification for privilege escalation, and database exfiltration.

The AI agents excelled at rapid analysis of source code, configuration files, and logs to identify misconfigurations and exposed credentials. Tasks that historically required manual review could now be completed almost immediately.

Defensive implications

AI agents cannot create vulnerabilities from nothing. In cases where researchers observed failed attacks, the targeted systems had strong security configurations that prevented lateral movement. Timely patching, zero-trust access controls, and comprehensive activity monitoring remained effective defenses.

The emergence of AI-augmented campaigns underscores the importance of fundamental security practices. Organizations that maintain strong configurations, keep systems patched, and monitor network activity can resist these attacks even when facing AI-assisted adversaries.

For government professionals managing cybersecurity, understanding these attack patterns matters. Learn more about AI for Government and consider the AI Learning Path for Cybersecurity Analysts to build skills in detecting and responding to AI-driven threats.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)