When Config Becomes Code: Claude Code Bugs Let Attackers Run Shell Commands and Steal API Keys-Now Patched

Check Point found Claude Code flaws where repo configs could trigger RCE and leak API keys. Anthropic patched them and now requires explicit approval before anything runs.

Categorized in: AI News IT and Development
Published on: Mar 04, 2026
When Config Becomes Code: Claude Code Bugs Let Attackers Run Shell Commands and Steal API Keys-Now Patched

Claude Code vulnerabilities expose new risks in AI-assisted development

March 3, 2026 - Check Point Research disclosed critical issues in Anthropic's Claude Code that enabled remote code execution and API key theft through malicious project configurations. The attack paths triggered when developers cloned and opened untrusted repositories. Check Point coordinated with Anthropic, and all reported issues were patched before public disclosure.

What happened

Claude Code lets developers run tasks from the terminal using natural language, including file edits, Git ops, tests, builds, and shell commands. The project-level config, especially .claude/settings.json, became an unexpected execution surface because it could define active behaviors on collaborators' machines.

  • Hooks: Malicious Hooks could run shell commands at workflow stages without explicit, per-command user approval, leading to remote code execution.
  • MCP servers: Early Model Context Protocol settings allowed commands to run without consent, opening another path for arbitrary command execution.
  • Environment variables: ANTHROPIC_BASE_URL could be manipulated to siphon API keys before any user action, exposing Claude Code Workspaces and shared files.

Why this matters for your team

Configuration files used to be passive. In AI tooling, they now control live execution paths. That shift turns pull requests, public repos, and compromised internal codebases into potential delivery vehicles for code execution and credential theft.

Impact can include unauthorized file access, file deletion or poisoning, and abusive API usage with financial and operational fallout. Treat AI tool configs as code that runs-because they do.

What Anthropic fixed

Anthropic shipped stronger trust and consent controls. MCP servers cannot execute without explicit user approval. Network activity-including API calls-is blocked until the user approves the trust dialog. These changes cut off the paths described in the research.

Actionable safeguards for engineering teams

  • Guard high-risk files: Add CODEOWNERS and protected-branch rules for .claude/ and MCP config files. Require reviews for any changes to Hooks or MCP settings.
  • Default to untrusted: Do not auto-run project hooks. Make trust a deliberate action on first open, and re-prompt when config changes.
  • Scope your secrets: Prefer per-project, least-privilege API keys. Rotate keys often. Avoid exporting sensitive env vars in global shells.
  • Sandbox untrusted code: Use ephemeral dev containers or VMs with non-root users and restricted egress. Block network access until trust is granted.
  • Watch the wire: Egress filter unknown domains for MCP or plugin endpoints. Alert on unusual API usage spikes.
  • Harden Git workflows: Require signed commits, enforce PR reviews, and diff-watch config directories in CI for new executable behaviors.
  • Pre-commit and CI checks: Flag risky Hooks, unexpected MCP endpoints, and changes to ANTHROPIC_BASE_URL. Fail builds on policy violations.
  • Developer awareness: Make it standard practice to read trust prompts, scrutinize config diffs, and keep tools updated.

Quick checks you can run today

  • Create a test repo with a benign Hook and MCP server. Verify your environment prompts for consent and blocks network calls until trust is approved.
  • Change ANTHROPIC_BASE_URL in a PR. Confirm your reviews and CI policies catch and flag it.
  • Open an untrusted repo in a sandbox and audit what executes on first run. Nothing should run without an explicit allow.

Signals worth investigating

  • Shell commands executing on project open without a prompt.
  • Unexpected outbound requests to new MCP endpoints or unknown domains.
  • Surprise edits or deletions in workspace files shortly after cloning.
  • Unusual Anthropic API usage from developer machines or CI runners.

If your team uses Claude tools regularly, review safe usage patterns and updates under Claude. For teams integrating external tools or backends, get familiar with secure configurations and consent flows under MCP.

For broader supply chain hardening and pipeline policy design, see SLSA and OWASP Top 10 CI/CD Security Risks. Keep your AI-assisted workflows current, review config diffs like code, and never auto-trust a repo you did not build.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)