Google finds first evidence of AI used to develop zero-day exploit

Google has confirmed the first known zero-day exploit built by cybercriminals using AI. The affected company has released a patch, and Google warns that many more AI-developed vulnerabilities likely exist undetected.

Categorized in: AI News, IT and Development
Published on: May 12, 2026
Google Finds First Working Zero-Day Built With AI

Google's Threat Intelligence Group has identified a working zero-day vulnerability that cybercriminals developed using artificial intelligence, the first time the company has observed AI being used to create this type of exploit.

The vulnerability affected an unnamed company. Google reported its findings to the affected firm before publishing its analysis, and the company released a patch.

John Hultquist, chief analyst at Google Threat Intelligence Group, said the discovery shows the race to use AI for finding network vulnerabilities "has already begun." He added: "For every zero-day we can trace back to AI, there are probably many more out there."

The Broader Threat

Threat actors are using AI to increase the speed, scale, and sophistication of their attacks, according to Google's report. In recent months, researchers have documented multiple instances of state-backed groups deploying AI in cyberattacks.

In November, Anthropic disclosed that Beijing-backed hackers used AI to fully automate cyberattacks for the first time. Russia-linked groups have used AI models to target Ukrainian networks with malware. North Korea's APT45 hacking group has employed AI to refine and scale its cyber methods.

Google's analysis suggests that Anthropic's Claude Mythos model, which has identified thousands of vulnerabilities across operating systems and web browsers, was not used to develop the zero-day in question.

A Closing Window

Both Anthropic and OpenAI have restricted access to their most advanced AI models to a small group of researchers, tech companies, and government agencies. The companies argue this staged release creates a "defenders' advantage."

Rob Bair, head of cyber policy at Anthropic, said last week that this window is "somewhere in the months timeframe - not years." The concern is that as these models become more capable, criminals and foreign adversaries will find ways to use them to launch attacks at unprecedented scale.

For IT and development professionals, understanding how AI is being weaponized is now a core security concern: cybersecurity analysts must track AI's role in modern threats, and development teams should stay current on its security implications.
