Anthropic and tech giants launch Project Glasswing to use AI model in finding software vulnerabilities

Anthropic and 11 tech companies launched Project Glasswing, committing $100M to scan critical software for security flaws using an unreleased AI model. The model found a 27-year-old vulnerability in OpenBSD that no human or tool had caught.

Categorized in: AI News, Government
Published on: Apr 09, 2026

Anthropic and 11 Tech Giants Launch Project Glasswing to Defend Critical Infrastructure

Anthropic, Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks announced Project Glasswing on Tuesday, a coordinated effort to use AI to find and fix security vulnerabilities in the world's most critical software before attackers exploit them.

The initiative centers on Claude Mythos Preview, an unreleased AI model that has demonstrated the ability to identify thousands of zero-day vulnerabilities (flaws unknown to software developers) in every major operating system and web browser. The model found a 27-year-old vulnerability in OpenBSD, a 16-year-old flaw in FFmpeg, and multiple vulnerabilities in the Linux kernel that no human reviewer or automated tool had caught.

For government officials, the stakes are direct. State-sponsored cyberattacks from China, Iran, North Korea, and Russia have already targeted civilian infrastructure and military systems. Global cybercrime costs roughly $500 billion annually. The emergence of AI models that can find and exploit vulnerabilities at speed threatens to accelerate these attacks.

The Capability Gap

Mythos Preview performs at levels comparable to the most skilled human security researchers. On a standard cybersecurity benchmark, it achieved 83.1% accuracy in reproducing vulnerabilities, compared to 66.6% for Anthropic's previous best model.

These capabilities build on the model's advanced coding and reasoning skills. It scored 77.8% on SWE-bench Pro, a software engineering benchmark where the previous model scored 53.4%, and it autonomously identified vulnerabilities and developed exploits without human guidance.

The window for response has narrowed. What once took months for attackers to weaponize a vulnerability now happens in minutes with AI assistance.

How Project Glasswing Works

Anthropic is committing $100 million in model usage credits to the initiative. More than 40 organizations beyond the core partners, including open-source maintainers responsible for critical infrastructure code, will receive access to Mythos Preview to scan their systems.

The model will be available to participants at $25 per million input tokens and $125 per million output tokens after the research phase ends. Access is available through the Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry.
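At those rates, participants can estimate scan costs directly from token counts. A minimal sketch of the arithmetic; the token counts used in the example are illustrative assumptions, not figures from the announcement:

```python
# Published Mythos Preview rates, in USD per million tokens.
INPUT_RATE = 25.0
OUTPUT_RATE = 125.0


def scan_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one scan at the published per-token rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE \
         + (output_tokens / 1_000_000) * OUTPUT_RATE


# Hypothetical example: scanning a 2M-token codebase that yields a
# 100K-token vulnerability report.
cost = scan_cost(2_000_000, 100_000)
print(f"${cost:.2f}")  # 2 * $25 + 0.1 * $125 = $62.50
```

Output costs dominate quickly at a 5:1 price ratio, so long AI-generated reports can outweigh the cost of ingesting the code itself.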

Anthropic also donated $4 million to open-source security organizations: $2.5 million to Alpha-Omega and the Open Source Security Foundation through the Linux Foundation, and $1.5 million to the Apache Software Foundation.

What Government Needs to Know

Anthropic said it has been in ongoing discussions with US government officials about Mythos Preview's offensive and defensive capabilities. The company framed AI leadership as a national security priority for democratic nations.

Within 90 days, Anthropic will publish what it has learned from the project, including specific vulnerabilities fixed and improvements made. The partners plan to develop practical recommendations for how security practices should evolve, covering vulnerability disclosure, software updates, open-source security, secure development practices, and patching automation.

The company is not making Mythos Preview generally available. Instead, it plans to develop safeguards that detect and block the model's most dangerous outputs, with new protections launching in an upcoming Claude Opus model.

The Dual-Use Problem

Project Glasswing acknowledges a fundamental problem: the same AI capabilities that defenders use to find vulnerabilities can be used by attackers to exploit them. Anthropic's position is that moving faster on defense is the only viable response.

CrowdStrike's Chief Technology Officer Elia Zaitsev said the window between discovery and exploitation has collapsed. "That is not a reason to slow down; it's a reason to move together, faster," he said.

The Linux Foundation's Jim Zemlin noted that open-source maintainers, whose code underpins much of the world's critical infrastructure, have historically lacked resources for security work. Project Glasswing offers them access to AI-powered vulnerability detection.

Next Steps

Anthropic invited other AI companies to join the effort and set industry standards. The company suggested that an independent, third-party body bringing together private and public-sector organizations might eventually coordinate this work at scale.

For government cybersecurity professionals, understanding AI-augmented vulnerability detection is becoming essential. The AI Learning Path for Cybersecurity Analysts covers how AI models are being applied to threat detection and security operations-the same principles underlying Project Glasswing's defensive work.

Security teams will need to adapt vulnerability management processes, patching timelines, and incident response procedures as AI-assisted attacks accelerate. The practical recommendations Anthropic plans to publish in 90 days will likely shape how government agencies approach these changes.
