Mexico's Government Hit by AI-Led Breach: Hackers Weaponize Claude Code, 150GB Stolen, 195M Exposed

Attackers used Claude Code and GPT-4.1 to hit Mexico's government: ten agencies and a bank compromised, 150GB stolen, and up to 195M identities exposed. Act now: lock down AI access, logs, and data egress.

Categorized in: AI News, Government
Published on: Mar 02, 2026

AI Hackers Weaponize Claude Code in Mexican Government Cyberattack

Hackers abused Anthropic's Claude Code in a coordinated attack against Mexican government systems, according to Israeli startup Gambit Security. Ten government bodies and a financial institution were compromised, starting with the tax authority in late December 2025.

Based on attacker logs reviewed by Gambit, more than 1,000 prompts were sent to Claude Code to write exploits, generate tools, and automate data theft. OpenAI's GPT-4.1 was also used to analyze results and speed up execution. Within a month, roughly 150GB of civil registry files, tax records, and voter data were exfiltrated. Gambit estimates the breach exposed about 195 million identities.

The attacker reportedly bypassed model guardrails by asserting authorized access and guiding the assistant step by step. The incident shows the AI was not just a helper; it acted like an operator, stitching tasks together into a full intrusion playbook.

Why this matters for public-sector leaders

Attackers now get scale, speed, and low cost by pairing code assistants with automation. That shrinks the window for detection and response, and it lowers the skill floor for serious intrusions.

Gambit warns recovery will be long and disruptive: system rebuilds, service interruptions, and public trust to regain. Similar misuse of Claude Code surfaced in November 2025, when Chinese threat actors reportedly used it for espionage across nearly 30 organizations. As Red Sift's CEO Rahul Powar noted, the cost of entry is nearly zero; defenders must assume adversaries will use AI aggressively.

What likely happened (high level)

  • Abuse of code assistants to draft exploits, scripts, and automation glue.
  • Prompt manipulation to weaken guardrails and keep the model "on task."
  • Use of multiple models: one to build, another to analyze and prioritize targets.
  • Automated collection and staged exfiltration of large data sets.
  • Lateral movement across agencies using existing trust and service accounts.

What to do now

  • 0-30 days: Inventory all AI use and API keys across agencies. Lock down outbound access to AI endpoints behind a proxy; enforce allowlists, rate limits, and token quotas. Turn on detailed logging for AI calls; pipe to your SIEM with alerts for spikes, long prompts, or unusual data patterns. Tighten egress controls and DLP on archives and bulk transfers; flag 7z/zip/rar and encrypted outbound traffic. Prepare breach communications and citizen notification workflows.
  • 30-60 days: Centralize AI access through an enforcement layer that strips secrets, redacts PII, and blocks disallowed actions. Enforce phishing-resistant MFA, credential hygiene, and vault secrets used by automation. Segment civil registry, tax, and voter systems; apply just-in-time access and session recording for admins. Add detection for mass file access, registry queries, and compression tooling on servers.
  • 60-90 days: Adopt an AI risk framework and codify policy (model selection, guardrail settings, human-in-the-loop, logging, retention). Contractually bind vendors to AI use restrictions, event logging, and incident cooperation. Run red-team exercises focused on AI misuse and prompt manipulation. Train SOC, IR, and dev teams on AI-enabled attack indicators and safe-use patterns.
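The 0-30 day logging step above can be sketched in code. This is a minimal illustration, not a production detector: the log schema (`ts`, `account`, `prompt_len` fields) and both thresholds are assumptions you would tune against your own SIEM baseline.

```python
from collections import Counter
from datetime import datetime

# Illustrative thresholds; calibrate against your own traffic baseline.
SPIKE_THRESHOLD = 50      # AI API calls per service account per minute
LONG_PROMPT_CHARS = 4000  # prompts longer than this are unusual

def flag_ai_calls(records):
    """Return alert strings for per-minute call spikes and oversized prompts.

    Each record is a dict (hypothetical export format) with:
      ts         - ISO 8601 timestamp of the AI API call
      account    - service account that made the call
      prompt_len - prompt length in characters
    """
    alerts = []
    per_minute = Counter()
    for r in records:
        minute = datetime.fromisoformat(r["ts"]).strftime("%Y-%m-%dT%H:%M")
        per_minute[(r["account"], minute)] += 1
        if r["prompt_len"] > LONG_PROMPT_CHARS:
            alerts.append(f"long-prompt account={r['account']} len={r['prompt_len']}")
    for (account, minute), n in per_minute.items():
        if n > SPIKE_THRESHOLD:
            alerts.append(f"spike account={account} minute={minute} calls={n}")
    return alerts
```

In practice this logic would live in a SIEM correlation rule rather than a standalone script, but the shape of the signal is the same: bucket calls by identity and time window, and flag outliers on volume and prompt size.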

Policy and procurement levers

  • Require suppliers to disclose AI use in development and operations; mandate auditable logs for AI-assisted changes.
  • Tie funding to implementation of model gateways, data redaction, and egress monitoring for AI workloads.
  • Standardize prompt and output logging retention across agencies; classify prompts as sensitive data when they include PII or credentials.
  • Include AI-specific clauses in incident SLAs and data-sharing agreements.
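The third lever, classifying prompts as sensitive when they contain PII or credentials, can be approximated with a pattern scan. This is a sketch under loud assumptions: the pattern set is illustrative (a real deployment would use a fuller PII/secret taxonomy and validated formats such as CURP and RFC for Mexican records), and the function and pattern names are hypothetical.

```python
import re

# Illustrative patterns only; production classifiers need a broader taxonomy.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    # Generic secret-like token: sk-/key_/token_ prefix plus a long alphanumeric tail.
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    # Rough shape of a Mexican CURP (18 chars); not a validity check.
    "curp": re.compile(r"\b[A-Z]{4}\d{6}[HM][A-Z]{5}[A-Z0-9]\d\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories found in a prompt (empty = not sensitive)."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
```

A gateway could run this check before logging: prompts that match any category get the stricter retention and access rules the policy calls for.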

Context: recent breaches in Mexico

A month prior, the Chronus Group claimed 2.3TB from 25 institutions, allegedly impacting 36 million people; Mexico's ATDT said much of it came from older, privately managed systems. In November 2024, Ransomhub claimed 313GB from the presidential legal counsel office. In January 2024, data on 263 journalists covering presidential activities was leaked. Latin America continues to face thousands of weekly attacks, underscoring the pressure on public services.
