Threat actors now using AI to scale cyberattacks, Google warns
Hackers working for state-sponsored and criminal groups are moving beyond experimenting with generative AI tools and are now integrating them directly into active cyberattack operations, according to Google's Threat Intelligence Group. The shift from theoretical misuse to operational reality means adversaries can now automate targeting, refine malware, and execute attacks faster and at greater scale.
Google identified threat actors linked to China, North Korea, Iran, and Russia using large language models like Gemini across multiple stages of the attack lifecycle. This includes reconnaissance, vulnerability research, malware development, privilege escalation, and post-compromise activity inside victim networks.
How attackers are using AI today
Threat actors use AI models to accelerate tasks that would previously have required significant manual work. Researchers observed adversaries using generative AI to:
- Conduct open-source intelligence gathering and profile high-value targets
- Generate malicious code and scripts
- Research publicly disclosed vulnerabilities
- Develop phishing content tailored to specific individuals
- Automate post-compromise activities like data extraction
- Bypass authentication systems and endpoint detection tools
For operations teams, the most immediate concern is that AI-generated phishing campaigns are becoming more sophisticated. Google documented cases where attackers used AI to generate detailed organizational hierarchies for large enterprises, focusing on high-value departments like finance and human resources. That data feeds more convincing phishing lures aimed at individuals with administrative privileges.
Zero-day exploits developed with AI assistance
Google disrupted what it described as the first known case of attackers using AI to identify and develop a zero-day exploit before launching a mass exploitation campaign. The exploit targeted an unnamed open-source web administration platform and aimed to bypass two-factor authentication through a hardcoded trust assumption.
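Google did not name the affected platform or publish technical details of the flaw, so the pattern can only be illustrated hypothetically. A minimal Python sketch of what a hardcoded trust assumption can look like in an authentication path: a value baked into the source code that, when matched, skips the second factor entirely. Every name and value below is invented.

```python
import hmac

# Toy credential store; all values are illustrative.
USERS = {"alice": {"password": "hunter2", "otp": "123456"}}

# The hardcoded trust assumption: requests appearing to come from this
# address are presumed to have completed 2FA upstream, so the check is skipped.
TRUSTED_PROXY = "10.0.0.5"

def check_password(user: str, password: str) -> bool:
    rec = USERS.get(user)
    return rec is not None and hmac.compare_digest(rec["password"], password)

def verify_otp(user: str, otp: str) -> bool:
    rec = USERS.get(user)
    return rec is not None and hmac.compare_digest(rec["otp"], otp)

def login(user: str, password: str, remote_addr: str, otp: str | None) -> bool:
    if not check_password(user, password):
        return False
    if remote_addr == TRUSTED_PROXY:  # flawed shortcut: second factor silently bypassed
        return True
    return otp is not None and verify_otp(user, otp)

# Anyone who can make traffic appear to originate from the trusted address
# (spoofing, SSRF, a compromised internal host) logs in without ever
# presenting an OTP:
assert login("alice", "hunter2", "10.0.0.5", otp=None)
```

Flaws of this shape are attractive targets for AI-assisted code review because they are visible in source: a model scanning an open-source codebase can surface the trusted constant and the branch that skips the check.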
John Hultquist, chief analyst at Google Threat Intelligence Group, said the threat is already operational. "For every zero-day we can trace back to AI, there are probably many more out there. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks," Hultquist said in a statement.
As AI coding capabilities advance, threat actors are using these tools to reverse-engineer applications and develop exploits with less expertise than the work once demanded, lowering the barrier for both state-sponsored and criminal groups to produce sophisticated attacks.
Malware becoming more adaptive
Google identified malware samples that use AI-generated code to evade detection. Malware like PROMPTSPY signals a shift toward autonomous attack orchestration, where AI models interpret system states and generate commands to manipulate victim environments without continuous human direction.
Researchers also found that threat actors use AI to develop polymorphic malware (code that changes itself during execution) and obfuscation networks that hide malicious functionality. Russia-linked intrusion activity targeting Ukrainian organizations has deployed AI-enabled malware variants called CANFAIL and LONGSTREAM, which use AI-generated decoy code to obscure their true purpose.
Supply chain attacks targeting AI systems
Adversaries are increasingly targeting AI software dependencies as an initial access vector. Google tracked a threat actor group called TeamPCP (also known as UNC6780) attempting to compromise AI software to pivot into broader networks for ransomware deployment and extortion.
The vulnerability lies not in frontier AI models themselves, which remain resilient to direct compromise, but in the orchestration layers around them. The open-source wrapper libraries, API connectors, and skill configuration files that give AI systems their operational utility also present exploitable weaknesses.
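Google's report describes the weakness rather than prescribing fixes, but one common defensive pattern applies here: pin the orchestration components an agent loads to known-good digests and refuse to start on a mismatch. A minimal sketch, assuming a Python-based agent whose connector code and skill files live on disk; the paths and digests are placeholders.

```python
import hashlib
import sys
from pathlib import Path

# Illustrative pins: digests recorded at review time and stored out of band,
# so a tampered dependency cannot also rewrite its own expected hash.
PINNED = {
    "connectors/api_client.py": "4ab1...",   # placeholder digest
    "skills/admin_tools.json": "9f2c...",    # placeholder digest
}

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(root: Path) -> bool:
    """Return True only if every pinned component matches its recorded digest."""
    ok = True
    for rel, expected in PINNED.items():
        target = root / rel
        if not target.is_file():
            print(f"MISSING: {rel}", file=sys.stderr)
            ok = False
            continue
        actual = sha256(target)
        if actual != expected:
            print(f"TAMPERED: {rel} -> {actual}", file=sys.stderr)
            ok = False
    return ok

if __name__ == "__main__":
    # Refuse to start the agent if any orchestration component was modified.
    if not verify(Path(sys.argv[1] if len(sys.argv) > 1 else ".")):
        sys.exit(1)
```

The same idea extends to package installs (hash-pinned dependency files) so that a poisoned wrapper library is caught before it ever executes inside the agent's process.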
Information operations at scale
AI tools are accelerating disinformation campaigns. Google documented a pro-Russia operation called "Operation Overload" that used suspected AI voice cloning to impersonate journalists in video content. This builds on existing tactics that appropriate media branding to lend credibility to campaign messaging.
Threat actors are also pursuing premium-tier access to AI models through professionalized middleware and automated registration pipelines to bypass usage limits. This infrastructure enables large-scale misuse while subsidizing operations through trial abuse and account cycling.
What operations teams should do
The threat is no longer theoretical. Operations teams need to account for AI-assisted attacks in their incident response and threat modeling. This means understanding that reconnaissance may be more targeted, malware may adapt during execution, and phishing campaigns may reference accurate organizational details.
Consider AI for Cybersecurity Analysts training to build team capability in detecting and responding to AI-enabled threats. Understanding how threat actors use generative AI and LLM tools helps operations teams anticipate attack patterns and improve detection strategies.
Monitor for signs of AI-assisted activity: unusually sophisticated social engineering, rapidly evolving malware variants, and reconnaissance that suggests detailed prior knowledge of your organization's structure. Together, these patterns can indicate that adversaries are using AI to augment their operations.
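Detection specifics vary by environment, but as one illustrative triage heuristic (not drawn from Google's report): unusually high Shannon entropy has long been a signal for packed or obfuscated payloads, the kind that polymorphic, AI-mutated samples tend to produce in quick succession. A stdlib-only sketch, with a threshold that would need tuning per file type:

```python
import math
import sys
from collections import Counter
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; values near 8.0 mean near-random content (packed/encrypted)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

THRESHOLD = 7.2  # tunable; plain executables and scripts usually sit lower

# Usage: python entropy_triage.py sample1.bin sample2.bin ...
for path in map(Path, sys.argv[1:]):
    e = shannon_entropy(path.read_bytes())
    print(f"{'FLAG' if e > THRESHOLD else 'ok  '}\t{e:.2f}\t{path}")
```

A heuristic like this only surfaces candidates for analyst review; it should sit alongside behavioral detection rather than replace it, since plenty of benign files (compressed archives, media) are also high-entropy.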