Anthropic refuses to release Mythos AI model publicly, citing hacking risks that experts say are real but overstated

Anthropic built an AI model called Mythos that can find critical flaws in every major OS and browser, then refused to release it publicly over hacking risks. A small group of tech firms got limited access to use it defensively.

Published on: Apr 18, 2026

Anthropic Withholds Mythos AI Model, Citing Hacking Risks

Anthropic announced a new AI model called Mythos on April 7 and immediately refused to release it publicly. The company says the model poses too great a risk to cybersecurity. It is the first time a major AI developer has withheld a system from public use since OpenAI temporarily held back GPT-2 in 2019.

The decision has triggered concern among financial regulators and government officials. German banks consulted authorities about the risks, and the Bank of England said it intensified AI risk testing after learning about Mythos.

What Mythos Can Do

Mythos operates like a senior software engineer. It spots subtle bugs in code, self-corrects its mistakes, and scored 31 percentage points higher than Anthropic's previous model, Opus 4.6, on the 2026 USA Mathematical Olympiad (USAMO).

The same coding ability makes it dangerous. Anthropic says Mythos can identify and exploit software vulnerabilities better than all but the most skilled humans. In tests, it found critical flaws in every widely used operating system and web browser. The company says 99 percent of those vulnerabilities remain unpatched.

The U.K.'s AI Security Institute tested the model independently and found it succeeded in expert-level hacking tasks 73 percent of the time. No AI model could complete those tasks before April 2025.

Controlled Access Instead of Public Release

Rather than releasing Mythos widely, Anthropic created Project Glasswing to give selected organizations defensive access. The initial group includes Microsoft, Google, Apple, Amazon Web Services, JPMorgan Chase, and Nvidia. These companies can use the model to scan their networks and patch vulnerabilities before flaws become public.

Experts Divided on the Threat Level

Cybersecurity researchers disagree about how serious the risk actually is. Peter Swire, a cybersecurity professor at Georgia Tech and former advisor to the Clinton and Obama administrations, says many of his colleagues view Mythos as expected progress rather than a watershed moment. "A large fraction of the cybersecurity professors believe this is pretty much what was expected, and pretty much more of the same," Swire said.

Ciaran Martin, a former CEO of the U.K.'s National Cyber Security Centre, shares that assessment. "It's a big deal, but it's unlikely to prove to be the end of the world," Martin said. "I would not be at the more apocalyptic end of the scale."

Martin points out a limitation in the testing: Mythos faced software defenses that lacked many of the protections present in real-world systems. He compares it to a soccer forward scoring against the world's worst goalkeeper.

Incentives Shape the Narrative

Both experts acknowledge Mythos is a significant advance but suggest institutional interests drive some of the alarm. Swire notes that chief information security officers and cybersecurity vendors have a financial incentive to highlight severe risks, even when their internal estimates assume a lower actual impact.

"One risk after Mythos is that it will be easier to turn a vulnerability, a known flaw, into an exploit, something that somebody actually takes advantage of," Swire said. "Every cybersecurity defender should take Mythos seriously, but the expected harm to defense is likely to be far lower than the worst-case scenarios would suggest."

Martin added that organizations rarely suffer commercially by predicting disaster, which creates bias toward alarming public statements.

What Comes Next

Mythos is the first model trained on next-generation graphics processing units, the advanced chips that power AI training. Its capabilities have already prompted financial institutions to reassess their AI risk strategies.

The divided expert response suggests the field will need to distinguish genuine capability advances from institutionally driven alarm as more powerful models emerge. For cybersecurity professionals, the practical takeaway is clearer: treat Mythos as a real reason to patch known vulnerabilities promptly, but avoid assuming worst-case scenarios will materialize.


