The viral AI agent Moltbot is a security mess – 5 red flags you shouldn’t ignore (before it’s too late)

Follow ZDNET: Add us as a preferred source on Google.
ZDNET’s key takeaways
- Moltbot, formerly known as Clawdbot, has gone viral as an “AI that actually does things.”
- Security experts have warned against joining the trend and using the AI assistant without caution.
- If you plan on trying out Moltbot for yourself, be aware of these security issues.
Clawdbot, now rebranded as Moltbot following an IP nudge from Anthropic, has been at the center of a viral whirlwind this week — but there are security ramifications of using the AI assistant you need to be aware of.
What is Moltbot?
Moltbot, displayed as a cute crustacean, promotes itself as an “AI that actually does things.” Spawned from the mind of Austrian developer Peter Steinberger, the open-source AI assistant has been designed to manage aspects of your digital life, including handling your email, sending messages, and even performing actions on your behalf, such as checking you in for flights and other services.
Also: 10 ways AI can inflict unprecedented damage in 2026
As previously reported by ZDNET, this agent, stored on individual computers, communicates with its users via chat messaging apps, including iMessage, WhatsApp, and Telegram. There are over 50 integrations, skills, and plugins, persistent memory, and both browser and full system control functionality.
Rather than operating a standalone backend AI model, Moltbot harnesses the power of Anthropic’s Claude (guess why the name change from Clawdbot was requested, or check out the lobster’s lore page) and OpenAI’s ChatGPT.
In only a matter of days, Moltbot has gone viral. On GitHub, it now has hundreds of contributors and around 100,000 stars — making Moltbot one of the fastest-growing AI open source projects on the platform to date.
So, what’s the problem?
1. Viral interest creates opportunities for scammers
Many of us like open source software for its code transparency, the opportunity for anyone to audit software for vulnerabilities and security issues, and, in general, the community that popular projects create.
However, breakneck-speed popularity and changes can also allow malicious developments to slip through the cracks, with reported fake repos and crypto scams already in circulation. Taking advantage of the sudden name change, scammers launched a fake Clawdbot AI token that managed to raise $16 million before it crashed.
So, if you are planning to try it out, ensure you use only trusted repositories.
2. Handing over the keys to your digital kingdom
If you opt to install Moltbot and want to use the AI as a personal, autonomous assistant, you will need to grant it access to your accounts and enable system-level controls.
There’s no perfectly secure setup, as Moltbot’s documentation acknowledges, and Cisco calls Moltbot an “absolute nightmare” from a security perspective. As the bot’s autonomy relies on permissions to run shell commands, read or write files, execute scripts, and perform computational tasks on your behalf, these privileges can expose you and your data to danger if they are misconfigured or if malware infects your machine.
Also: Linux after Linus? The kernel community finally drafts a plan for replacing Torvalds
“Moltbot has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via prompt injection or unsecured endpoints,” Cisco’s security researchers said. “Moltbot’s integration with messaging applications extends the attack surface to those applications, where threat actors can craft malicious prompts that cause unintended behavior.”
3. Exposed credentials
Offensive security researcher and Dvuln founder Jamieson O’Reilly has been monitoring Moltbot and found exposed, misconfigured instances connected to the web without any authentication protection, joining other researchers also exploring this area. Out of hundreds of instances, some had no protections at all, which leaked Anthropic API keys, Telegram bot tokens, Slack OAuth credentials, and signing secrets, as well as conversation histories.
While developers immediately leapt into action and introduced new security measures that may mitigate this issue, if you want to use Moltbot, you must be confident in how you configure it.
4. Prompt injection attacks
Prompt injection attacks are nightmare fuel for cybersecurity experts now involved in AI. Rahul Sood, CEO and co-founder of Irreverent Labs, has listed an array of potential security problems associated with proactive AI agents, saying that Moltbot/Clawdbot’s security model “scares the sh*t out of me.”
Also: The best free AI courses and certificates for upskilling in 2026 – and I’ve tried them all
This attack vector requires an AI assistant to read and execute malicious instructions, which could, for example, be hidden in source web material or URLs. An AI agent may then leak sensitive data, send information to attacker-controlled servers, or execute tasks on your machine — should it have the privileges to do so.
Sood expanded on the topic on X, commenting:
“And wherever you run it… Cloud, home server, Mac Mini in the closet… remember that you’re not just giving access to a bot. You’re giving access to a system that will read content from sources you don’t control. Think of it this way, scammers around the world are rejoicing as they prepare to destroy your life. So please, scope accordingly.”
As Moltbot’s documentation notes, with all AI assistants and agents, the prompt injection attack issue hasn’t been resolved. There are measures you can take to mitigate the threat of becoming a victim, but combining widespread system and account access with malicious prompts sounds like a recipe for disaster.
“Even if only you can message the bot, prompt injection can still happen via any untrusted content the bot reads (web search/fetch results, browser pages, emails, docs, attachments, pasted logs/code),” the documentation reads. “In other words: the sender is not the only threat surface; the content itself can carry adversarial instructions.”
5. Malicious skills and content
Cybersecurity researchers have already uncovered instances of malicious skills suitable for use with Moltbot appearing online. In one such example, on Jan. 27, a new VS Code extension called “ClawdBot Agent” was flagged as malicious. This extension was actually a fully-fledged Trojan that utilizes remote access software likely for the purposes of surveillance and data theft.
Moltbot doesn’t have a VS Code extension, but this case does highlight how the agent’s rising popularity will likely lead to a full crop of malicious extensions and skills that repositories will have to detect and manage. If users accidentally install one, they may be inadvertently providing an open door for their setups and accounts to be compromised.
Also: Claude Cowork automates complex tasks for you now – at your own risk
To highlight this issue, O’Reilly built a safe, but backdoored skill, and released it. It wasn’t long before the skill was downloaded thousands of times.
While I urge caution in adopting AI assistants and agents that have high levels of autonomy and access to your accounts, it’s not to say that these innovative models and tools don’t have value. Moltbot might be the first iteration of how AI agents will weave themselves into our future lives, but we should still exercise extreme caution and avoid choosing convenience over personal security.
READ MORE HERE
