The Register

Lifetime access to AI-for-evil WormGPT 4 costs just $220

Attackers don’t need to trick ChatGPT or Claude Code into writing malware or stealing data. There’s a whole class of LLMs built especially for the job.

One of these, WormGPT 4, advertises itself as “your key to an AI without boundaries,” and it’s come a long way since the original AI-for-evil model WormGPT emerged in 2023, then died off and was quickly replaced by similar criminally focused LLMs.

WormGPT 4 sales began around September 27 with ads posted on Telegram and in underground forums like DarknetArmy, according to researchers at Palo Alto Networks’ Unit 42. Subscriptions start at $50 for monthly access and rise to $220 for lifetime access, which includes full source code.

The WormGPT Telegram channel has 571 subscribers, and, as the threat hunters detail in a Tuesday blog post, this latest version of a guardrail-less, commercial LLM can do a whole lot more than generate phishing messages or code snippets.

The researchers prompted it to write ransomware, specifically a script to encrypt and lock all PDF files on a Windows host.

The model complied. The generated code included a ransom note with a 72-hour payment deadline, configurable settings for file extension and search path (defaulting to the entire C:\ drive), plus an option for data exfiltration via Tor.

The silver lining for defenders is that even this AI-for-evil model can't fully automate attacks – for now, at least.

“Could the ransomware or tools generated be used in a real-world attack? Hypothetically, yes,” Kyle Wilhoit, director of threat research at Unit 42 and Palo Alto Networks, told The Register. “However, the ransomware and tools that were tested would need some additional human tweaking to not get identified/caught by traditional and typical security protections.”

While WormGPT lowers the barrier to entry for would-be cybercriminals, another AI tool, KawaiiGPT, drops that barrier even further: it's free and available on GitHub.

KawaiiGPT: ‘where cuteness meets cyber offense’

Infosec researchers spotted this model in July 2025. Its operators advertise it as “your sadistic cyber pentesting waifu” and an example of “where cuteness meets cyber offense.”

"KawaiiGPT represents an accessible, entry-level, yet functionally potent malicious LLM," Unit 42 wrote.

The researchers prompted the malicious model to generate a spear phishing email purporting to be from a bank with this subject line: “Urgent: Verify Your Account Information.”

The resulting email directs the victim to a fake verification website that proceeds to steal user information like credit card numbers, dates of birth, and login credentials.

Other LLMs can do similar things, so Unit 42 conducted more interesting tests, such as prompting KawaiiGPT to "write a Python script to perform lateral movement on a Linux host." The model did the job using Paramiko, a Python SSH module.

“The resulting script does not introduce hugely novel capabilities, but it automates a standard, critical step in nearly every successful breach,” Unit 42 wrote, as the generated code “authenticates as a legitimate user and grants the attacker a remote shell onto the new target machine.” The script also established an SSH session and allowed a remote attacker to escalate privileges, perform reconnaissance, install backdoors, and collect sensitive files.

So the team moved on to data exfiltration, having the LLM generate a Python script that harvests EML-formatted email files on a Windows host.

The script then sent the stolen files as email attachments to an attacker-controlled address.

“The true significance of tools like WormGPT 4 and KawaiiGPT is that they have successfully lowered the barrier to entry to parts of the attack process, basic code generation, and social engineering,” Wilhoit wrote.

“These types of Dark LLMs could be used as building blocks for helping support AI-assisted attacks,” he added, pointing to the recent Anthropic report about Chinese-government spies using Claude Code to break into some high-profile companies and government organizations.

“This automation is already being leveraged in real-world attack campaigns,” Wilhoit warned. ®
