Google Reports First Confirmed Case of AI-Discovered Zero-Day Exploit
Cybercriminals have used artificial intelligence to find and weaponize a previously unknown software vulnerability for the first time, Google revealed Monday. The discovery signals that attackers have moved beyond experimenting with AI as a research tool and are now embedding it directly into active hacking campaigns.
Google's Threat Intelligence Group documented the incident in a report on accelerating adversary use of AI. The attackers, described as prominent cybercriminals, targeted a widely used open-source web administration tool and developed an exploit that bypassed two-factor authentication on the platform.
Signs of AI-Generated Code
The exploit script showed unmistakable markers of AI generation: excessive educational annotations, a hallucinated severity score, and a clean, textbook-style structure typical of large language model output. Google said it had high confidence that an AI model was used both to identify the vulnerability and to build the exploit, though the model was likely not Google's own Gemini.
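Google did not publish the exploit itself, but the stylistic "tells" it describes can be illustrated with a harmless, hypothetical sketch: tutorial-style step-by-step comments, a confidently stated but fabricated severity score, and tidy textbook structure. Every name and value below is invented for illustration.

```python
# Hypothetical illustration of the markers described above -- not the
# actual exploit. Note the over-explained, tutorial-style comments and
# the invented severity score, both common in LLM-generated code.

# Step 1: Import the standard library module for building the request body.
import json

# CVE-XXXX-XXXXX | CVSS Score: 9.8 (Critical)  <-- a model may "hallucinate"
# a severity score that was never actually assigned to the flaw.

def build_login_request(username: str, token: str) -> str:
    """Step 2: Construct the authentication payload.

    This helper assembles the JSON body sent to the login endpoint.
    """
    # Step 3: Create a dictionary holding the credential fields.
    payload = {"user": username, "otp": token}
    # Step 4: Serialize the dictionary to a JSON string and return it.
    return json.dumps(payload)

# Step 5: Demonstrate usage with example values.
print(build_login_request("admin", "000000"))
```

Human-written attack tooling tends to be terse and undocumented; code that narrates every trivial step to no audience, as above, is one of the signals analysts flagged.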
Google worked with the affected vendor to disclose the vulnerability responsibly, and the planned attack was disrupted before deployment.
The Broader Threat
John Hultquist, chief analyst at Google Threat Intelligence Group, said the incident is unlikely to be isolated. "We believe this is the tip of the iceberg. Other AI-developed zero-days are probably out there," he said in a statement Monday.
Hacking groups linked to China, Russia, and North Korea are all integrating AI tools into different phases of their operations, from reconnaissance and phishing to malware development and large-scale vulnerability research.
Chinese state-linked groups have used AI to conduct vulnerability research on embedded devices and router firmware. They've also experimented with specialized vulnerability databases to train models to reason like security experts.
North Korean group APT45 has been sending thousands of automated prompts to recursively analyze known vulnerabilities and validate exploits, building a more robust attack arsenal than would otherwise be feasible.
Malware and Autonomous Attacks
Russia-linked actors have leveraged AI to generate large volumes of decoy code designed to conceal malicious components from detection. Google identified two malware families, CANFAIL and LONGSTREAM, using this technique against Ukrainian targets.
An Android backdoor called PROMPTSPY uses Google's own Gemini API to independently navigate a victim's device interface, interpret what's on screen, and execute commands without human direction. Google said no apps containing PROMPTSPY are currently available on Google Play, and Android devices with Google Play Services are automatically protected through Google Play Protect.
Underground Infrastructure for AI Access
Threat actors have built automated pipelines to register and cycle through accounts at major AI providers, using anti-detection tools and proxy services to evade bans and safety filters. This infrastructure effectively industrializes their access to commercial AI models at scale.
For insurance professionals, these developments underscore the need to reassess cyber risk exposure. Organizations should review their vulnerability management processes and consider how AI-accelerated attack timelines affect incident response planning and coverage adequacy.
Google continues to use AI defensively through tools like Big Sleep, which proactively hunts for vulnerabilities in software, and CodeMender, an experimental tool that uses Gemini to automatically patch critical code flaws.