PROMPTFLUX Shows How AI Malware Can Rewrite Itself, and Google Is Fighting Back

PROMPTFLUX uses LLMs to rewrite itself, evading signatures and shifting payloads. Google sees early samples and urges tighter AI API controls, key hygiene, and script logging.

Published on: Nov 10, 2025

Self-Rewriting Malware Is Here: What PROMPTFLUX Means for Security, IT, and Dev Teams

Google's Threat Intelligence Group (GTIG) has flagged an experimental malware family called PROMPTFLUX that can rewrite its own code using large language models. It can generate malicious scripts on demand, obfuscate itself, and swap functions in real time. That makes static signatures and traditional detections far less useful.

The twist: PROMPTFLUX interacts with Google's Gemini API to learn how to modify itself on the fly. GTIG says the samples look like early-stage work, with incomplete features and limits on API calls. There's no evidence of infections, and Google has disabled assets tied to the activity.

Why this matters

Malware that evolves mid-execution is harder to pin down. With functions created "just in time," indicators become fleeting, and payloads shift faster than blocklists update. GTIG ties this effort to financially motivated actors and points to a growing underground market for illicit AI tools, which lowers the bar for less experienced attackers.

Google also notes that state-backed groups in North Korea, Iran, and China are experimenting with AI to enhance operations. GTIG has introduced a conceptual framework to help secure AI systems, signaling that defenders need new playbooks, fast.

How PROMPTFLUX operates (high level)

Think of a Trojan that phones an AI model, asks for a fresh script or obfuscation layer, and then executes it locally. Instead of hard-coding malicious logic, it outsources "what to do next" to a model. The result is adaptive behavior built on top of legitimate AI infrastructure.

What security, IT, and engineering teams should do now

  • Control model API egress: Create allowlists for approved AI endpoints and service accounts. Alert on endpoints calling model APIs unexpectedly or from non-dev/non-research assets.
  • Lock down secrets: Scan repos, images, and endpoints for exposed API keys (see the scanning sketch after this list). Use short-lived tokens, scoped keys, and brokered access rather than embedding credentials on workstations.
  • Instrument script activity: Detect bursts of temp file creation, repeated obfuscation layers, and processes spawning interpreters (PowerShell, Python, wscript, mshta) in tight loops; a burst-detection sketch follows this list.
  • Constrain automation on endpoints: Restrict who and what can execute AI-assisted scripts. Enforce code signing for internal tools and block unsigned script execution on servers.
  • Network and DNS analytics: Hunt for unusual patterns of small, frequent outbound requests to AI APIs from user machines (see the egress-hunting sketch after this list). Correlate with script interpreter activity on the same host.
  • Email and file controls: Quarantine or detonate attachments that can bootstrap dynamic code (e.g., macros, .lnk, .ps1, .hta). Flag archives that unpack into staged scripts.
  • Developer pipeline checks: Add SAST/secret scanning for model calls, require review for code that requests dynamic script generation, and verify provenance for any AI-generated snippets.
  • Detection engineering: Write rules for "just-in-time" behavior: frequent file rewrites followed by immediate execution, interpreter chains, and processes that fetch text then run it as code.
  • AI usage policy: Define where AI models are allowed, which identities may call them, and what's logged. Treat model APIs like any high-risk SaaS dependency.
  • Tabletop an AI-assisted intrusion: Simulate initial access via a dropper that fetches model prompts, then practice containment when payloads keep changing.
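
The "lock down secrets" item is the easiest place to start with a simple repo sweep. The sketch below is a minimal, illustrative scanner, not a replacement for dedicated tools such as gitleaks or trufflehog; the key patterns and skip list are assumptions about common AI-provider key formats and typical repo layouts, and will need tuning for your environment.

```python
# Minimal sketch: walk a repository tree and flag strings that look like AI API keys.
# The regexes are illustrative assumptions, not a complete ruleset; real scanners
# also use entropy checks and hundreds of provider-specific patterns.
import re
from pathlib import Path

# Hypothetical patterns for common AI-provider key formats.
KEY_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "openai_style_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{24,}['\"]"),
}

SKIP_DIRS = {".git", "node_modules", "venv", "__pycache__"}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line_number, pattern_name) for every suspected key."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or any(part in SKIP_DIRS for part in path.parts):
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in KEY_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for path, lineno, name in scan_repo("."):
        print(f"{path}:{lineno}: possible {name}")
```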
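
For "instrument script activity," one concrete starting point is flagging parent processes that launch script interpreters in rapid succession. The sketch below assumes process-creation events exported as JSON lines with host, timestamp, image, and parent_image fields; those field names are placeholders to map onto your Sysmon or EDR schema, and the five-launches-in-five-minutes threshold is an arbitrary starting value.

```python
# Minimal sketch: flag parent processes that spawn script interpreters in tight
# loops, one of the "just-in-time" behaviors described above. Assumes one JSON
# event per line with host, timestamp, image, and parent_image fields.
import json
from collections import defaultdict
from datetime import datetime, timedelta

INTERPRETERS = {"powershell.exe", "pwsh.exe", "python.exe",
                "wscript.exe", "cscript.exe", "mshta.exe"}
WINDOW = timedelta(minutes=5)
THRESHOLD = 5  # interpreter launches from one parent within the window

def flag_interpreter_bursts(event_file: str) -> list[tuple[str, str, int]]:
    """Return (host, parent_image, count) for suspicious interpreter bursts."""
    launches = defaultdict(list)  # (host, parent_image) -> [timestamps]
    with open(event_file) as fh:
        for line in fh:
            event = json.loads(line)
            image = event["image"].rsplit("\\", 1)[-1].lower()
            if image not in INTERPRETERS:
                continue
            key = (event["host"], event["parent_image"])
            launches[key].append(datetime.fromisoformat(event["timestamp"]))

    alerts = []
    for (host, parent), times in launches.items():
        times.sort()
        for i, start in enumerate(times):
            in_window = [t for t in times[i:] if t - start <= WINDOW]
            if len(in_window) >= THRESHOLD:
                alerts.append((host, parent, len(in_window)))
                break
    return alerts
```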
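
For the network and DNS analytics item, a first pass can be as simple as counting requests to known AI API domains per source host and excluding expected callers. The sketch below assumes a proxy log exported as CSV with src_host and dest_domain columns, a hostname prefix convention for dev machines, and a short list of AI API domains; all three are assumptions to adapt to your environment.

```python
# Minimal sketch: hunt proxy logs for non-dev hosts making frequent outbound
# requests to known AI API endpoints. Domain list, log columns, and the dev-host
# naming convention are assumptions, not a vetted ruleset.
import csv
from collections import Counter

AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}
DEV_HOST_PREFIXES = ("dev-", "build-", "ml-")  # assumed naming convention
MIN_REQUESTS = 20                              # per host per log window

def hunt_ai_egress(proxy_log_csv: str) -> list[tuple[str, str, int]]:
    """Return (host, domain, count) for unexpected AI API callers."""
    counts = Counter()
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):  # expects columns: src_host, dest_domain
            host, domain = row["src_host"], row["dest_domain"]
            if domain in AI_API_DOMAINS and not host.startswith(DEV_HOST_PREFIXES):
                counts[(host, domain)] += 1
    return [(h, d, n) for (h, d), n in counts.items() if n >= MIN_REQUESTS]
```

Hits from this pass are leads, not verdicts; correlate them with the interpreter activity on the same host before escalating.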

AI vs. AI

The same tech that helps attackers can help defenders. Google's "Big Sleep" agent uses AI to hunt for software vulnerabilities, and industry frameworks are emerging to secure model use. If your org builds with or depends on LLMs, bake these controls in now, before adversaries force the issue.

Learn more: Google's Secure AI Framework (SAIF) and OWASP Top 10 for LLM Applications.

Upskill your team

If you're rolling out AI internally and need practical training by role, explore our curated options here: AI courses by job.

