Experimental PromptLock ransomware uses AI to encrypt, steal data
Researchers have identified PromptLock, the first known ransomware to leverage AI to steal and encrypt data across Windows, macOS, and Linux. The malware stands out by dynamically generating its malicious Lua scripts with OpenAI’s gpt-oss:20b model, accessed through the Ollama API. These scripts handle tasks ranging from file enumeration to data exfiltration and encryption.
How PromptLock Works
Developed in Golang, PromptLock connects to a remote server hosting the gpt-oss:20b large language model via a proxy tunnel. It uses hard-coded prompts that instruct the AI model to produce Lua scripts on demand. These scripts perform key ransomware functions such as scanning the local filesystem, inspecting target files, exfiltrating sensitive data, and encrypting files.
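As a rough illustration of the underlying technique only, and not of PromptLock’s actual code or prompts, the Go sketch below shows how a program might ask a gpt-oss:20b model served by Ollama, via its documented /api/generate endpoint, to produce a harmless Lua snippet at runtime. The struct names, prompt text, and error handling are assumptions made for this example.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal sketch: request on-demand Lua code from a locally served model
// through Ollama's public /api/generate endpoint. The prompt here is
// deliberately benign; it does not reflect PromptLock's hard-coded prompts.

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	reqBody, err := json.Marshal(generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a short Lua function that prints each string in a table, one per line.",
		Stream: false,
	})
	if err != nil {
		panic(err)
	}

	// Ollama listens on localhost:11434 by default.
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response) // the generated Lua source, returned as plain text
}
```

In PromptLock’s case the model does not run on the victim’s machine: the hard-coded prompts are sent to an attacker-controlled server through a proxy tunnel, and the returned Lua scripts are then executed locally.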
Interestingly, PromptLock employs the 128-bit SPECK encryption algorithm, a lightweight block cipher designed for constrained hardware such as RFID devices and rarely seen in ransomware. The malware also contains a placeholder for data destruction capabilities, but that feature has not been implemented.
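For context on the cipher itself, here is a minimal, self-contained Speck-128/128 block encryption in Go, following the cipher’s published specification (32 rounds over two 64-bit words, rotation constants 8 and 3). It encrypts a single sample block and has nothing to do with PromptLock’s own implementation.

```go
package main

import "fmt"

// Speck-128/128: 128-bit block as two 64-bit words, 128-bit key, 32 rounds.
const rounds = 32

func ror(x uint64, r uint) uint64 { return (x >> r) | (x << (64 - r)) }
func rol(x uint64, r uint) uint64 { return (x << r) | (x >> (64 - r)) }

// expandKey derives the 32 round keys from a 128-bit key (l0, k0).
func expandKey(l0, k0 uint64) [rounds]uint64 {
	var ks [rounds]uint64
	l, k := l0, k0
	for i := 0; i < rounds; i++ {
		ks[i] = k
		l = (k + ror(l, 8)) ^ uint64(i)
		k = rol(k, 3) ^ l
	}
	return ks
}

// encryptBlock applies the Speck round function to one 128-bit block (x, y).
func encryptBlock(x, y uint64, ks [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x = (ror(x, 8) + y) ^ ks[i]
		y = rol(y, 3) ^ x
	}
	return x, y
}

func main() {
	// Sample key and plaintext block, chosen only to demonstrate the cipher.
	ks := expandKey(0x0f0e0d0c0b0a0908, 0x0706050403020100)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, ks)
	fmt.Printf("ciphertext: %016x %016x\n", x, y)
}
```

The entire cipher reduces to additions, rotations, and XORs on machine words, which is what makes it attractive for constrained hardware and explains why analysts found its appearance in ransomware unusual.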
Current Status: A Proof-of-Concept
PromptLock has not been detected in active attacks; it was instead discovered in samples uploaded to VirusTotal. Security analysts consider it a proof-of-concept or work in progress rather than an operational threat. Several indicators support this assessment: the choice of a relatively weak cipher, a hard-coded Bitcoin address attributed to Satoshi Nakamoto, and the unimplemented data destruction functionality.
Following the publication of these findings, a security researcher claimed ownership of PromptLock, stating it was their project that leaked unintentionally. Regardless, PromptLock highlights the potential for AI to be weaponized in malware, providing cross-platform support, operational flexibility, and evasion capabilities while lowering technical barriers for cybercriminals.
AI in Malware: A Growing Trend
PromptLock is not an isolated example. In July, Ukraine’s CERT reported on LameHug, another malware family powered by a large language model. LameHug queries Alibaba’s Qwen-2.5-Coder-32B model through the Hugging Face API to generate Windows shell commands dynamically. Attributed to the APT28 hacking group, it relies on direct API calls to a public inference service rather than the proxy tunnel PromptLock uses. Both approaches enable on-the-fly code generation, illustrating different ways of integrating AI into malware.
The emergence of AI-driven ransomware and malware tools suggests a shift in how cyber threats might operate, combining automation with adaptability. For IT professionals and developers, staying updated on these developments is essential to strengthen defenses and understand emerging risks.
For those interested in learning more about AI technologies and their practical applications, consider exploring Complete AI Training’s latest courses.