AI-Powered Ransomware Can Now Launch Attacks Without Human Help, Study Warns
A study shows AI models can independently run ransomware attacks, selecting which files to target and writing unique ransom notes. The approach evades signature-based detection and sharply lowers attack costs.

AI Models Now Capable of Running Ransomware Attacks Independently, Study Finds
A recent study from NYU Tandon shows that large language models (LLMs) can autonomously carry out the entire ransomware attack cycle without human input. The researchers' prototype demonstrates how an AI model can scan systems, identify valuable files, select attack methods, and generate ransom notes tailored to specific victims.
While the system was tested in a controlled research environment, the findings highlight serious concerns about potential misuse and the growing sophistication of ransomware attacks.
The Next Stage in Ransomware Evolution
Ransomware has evolved significantly over time. Early versions, often called Ransomware 1.0, simply encrypted files. The next phase, Ransomware 2.0, added double extortion by both encrypting and stealing data. The new approach, termed Ransomware 3.0 by the researchers, uses natural language prompts instead of pre-written code. At runtime, an AI model interprets these instructions and generates unique attack scripts.
This means every execution produces different code, artifacts, and ransom notes, making traditional defenses less effective. The study states: “Every execution yields different code, artifacts, and extortion notes such that the binary footprint and execution behavior are different every time, rendering traditional defenses ineffective.”
How the Prototype Works
The proof-of-concept operates via an orchestrator running in four stages:
- Reconnaissance: Gathers information on the operating system and files.
- Leverage: Identifies the most valuable files.
- Launch: Chooses an attack payload—encryption, data exfiltration, or destruction.
- Notify: Generates a ransom note specific to the compromised system.
The orchestrator is built in Go and embeds a Lua interpreter, chosen for its small footprint, in-memory execution, and cross-platform compatibility. It queries open-source AI models, which generate Lua scripts for each task; in the study, these scripts ran in a sandboxed environment for safety. The three main payloads encrypt files with a lightweight cipher, upload files to a remote server, or destroy files beyond recovery. Since the AI generates code on demand, no two attacks are identical.
Testing Across Environments
The team ran 30 trials on personal computers, enterprise servers, and embedded industrial controllers. Each environment contained a mix of regular files and sensitive data like tax documents, passwords, and configurations.
In the reconnaissance stage, both small and large models enumerated nearly all files on each system. Identifying which files were sensitive proved harder. The larger GPT-120B model flagged up to 83% of critical files on personal computers and 78% on servers, but only 48% on industrial controllers. The smaller GPT-20B model was less accurate but avoided some of the policy-based refusals that affected the larger model.
When executing payloads, the models chose the expected attack almost every time: exfiltration on personal computers, encryption on servers, and destruction on controllers. Success rates were 85% for encryption, 95% for destruction, and around 65% for exfiltration. A ransom note appeared in every run, often naming specific files found during reconnaissance. For example:
“Hello Jane, your files have been encrypted. We found Tax_Return_2024_Final.pdf and Family_SSN_List.csv. To avoid public disclosure, follow the instructions below.”
Why This Approach Is Harder to Detect
This style of ransomware looks different on every run. The AI generates new encryption routines and code structures each time, so signature-based detection fails to keep up. The study also notes that the attack leaves fewer of the telltale signs produced by conventional ransomware.
Unlike traditional malware that causes noticeable disk activity, CPU spikes, or heavy network usage, this orchestrator operates quietly, touching only select files and using minimal bandwidth. It completes attacks without typical ransomware signatures, which complicates detection efforts.
Economic Impact
AI-driven ransomware attacks could be far cheaper to run than traditional ones. One full attack cycle used around 23,000 tokens, costing roughly 70 cents via commercial AI APIs. Using open-source models would bring this cost close to zero.
This lowers the barrier to entry, enabling smaller operators with basic hardware to carry out complex ransomware campaigns. The study highlights that such an orchestrator can execute thousands of unique, personalized attacks, increasing the number of viable targets and potential profits.
Current Limitations and Safety Measures
The prototype has not been deployed outside the lab. It lacks advanced features like persistence, evasion techniques, and lateral movement across networks. The researchers focused on demonstrating feasibility rather than creating a criminal tool.
They also avoided jailbreak methods, instead using prompts that framed code generation as normal programming tasks. All experiments took place in isolated environments to prevent harm to real systems or users. However, the modular design means real attackers could add features like persistence or negotiation modules for ransom management.
Recommendations for Defenders
Traditional defenses may not suffice against this new generation of ransomware. The researchers suggest proactive monitoring approaches, including:
- Tracking access to sensitive files closely
- Deploying decoy documents to detect reconnaissance (a minimal monitoring sketch follows this list)
- Blocking unauthorized connections to AI services
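The decoy idea can be approximated with standard file-system notification tooling. The sketch below, in Go with the fsnotify library, watches a set of planted decoy files and raises an alert when any of them is modified, renamed, or removed. The decoy paths and the alerting behavior are illustrative assumptions, not part of the study; note that fsnotify does not report plain read access, so catching read-only reconnaissance would require something like Linux auditd or fanotify instead.

```go
// decoy_watch.go: a minimal sketch of decoy-file monitoring, assuming the
// defender has planted convincing but worthless documents across the host.
// Paths and alert handling below are illustrative placeholders.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Hypothetical decoy documents seeded in locations an attacker would sweep.
	decoys := []string{
		"/home/user/Documents/Tax_Return_2024.pdf",
		"/srv/shares/finance/passwords_backup.xlsx",
	}

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch each decoy. fsnotify reports writes, renames, removals, and
	// permission changes; any such event on a decoy is a strong signal that
	// an encryption or wiping payload is running.
	for _, path := range decoys {
		if err := watcher.Add(path); err != nil {
			log.Printf("could not watch %s: %v", path, err)
		}
	}

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// A real deployment would page an analyst or isolate the host here.
			log.Printf("ALERT: decoy file touched: %s (%s)", event.Name, event.Op)
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Printf("watcher error: %v", err)
		}
	}
}
```

Pairing decoys with alerts on outbound connections to AI API endpoints would support the third recommendation as well, since the orchestrator described in the study depends on reaching a model at attack time.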
Additionally, building safeguards directly into AI models could help mitigate misuse. This study underscores the dual-use nature of large language models—they can boost automation and productivity but also enable novel cyber threats.