The Register

MIT Sloan quietly shelves AI ransomware study after researcher calls BS

Do 80 percent of ransomware attacks really come from AI? MIT Sloan has now withdrawn a working paper that made that eyebrow-raising claim after criticism from security researcher Kevin Beaumont.

The withdrawn paper [PDF], co-authored by researchers from MIT Sloan and Safe Security, claimed, “Our recent analysis of over 2800 ransomware incidents has revealed an alarming trend: AI plays an increasingly significant role in these attacks. In 2024, 80.83 percent of recorded ransomware events were attributed to threat actors utilizing AI.”

Completed in April, the paper was cited in an MIT Sloan blog post last month titled “80 percent of ransomware attacks now use artificial intelligence.” It has since been echoed elsewhere.

“The paper is absolutely ridiculous,” Beaumont wrote in a Mastodon thread last week. “It describes almost every major ransomware group as using AI – without any evidence (it’s also not true, I monitor many of them). It even talks about Emotet (which hasn’t existed for many years) as being AI driven.”

Marcus Hutchins, another cybersecurity expert of some note, concurred with Beaumont’s assessment in a LinkedIn post. “The paper was so absurd I burst out laughing at the title,” he wrote. “Then when I read their methodology I laughed even harder … It’s very hard to get people to take cybersecurity seriously when we have a bunch of cracked out corporate marketing bozos posting nonsense ‘research’ to scientific journals.”

Even Google’s verify-after-using AI Overview cast doubt on the stat, responding to a query about that figure by emitting the following tokens: “The claim that 80 percent of ransomware attacks use AI is not supported by current data.”

Two days ago, Beaumont noted that, following his online salvo, MIT removed the study, which has been cited in the Financial Times. The title of the associated post now reads, “AI cyberattacks and three pillars for defense.”

MIT Sloan did not respond to a request for comment. Safe Security also did not respond to our inquiry.

Michael Siegel, director of cybersecurity for MIT Sloan and one of the paper’s four co-authors, told The Register in an email, “We did not have time on Friday to post a replacement.”

He said that the paper’s URL now points to a document [PDF] that says:

You have reached the Early Research Papers section of our website. The Working Paper you have requested is being updated based on some recent reviews. We expect the updated version to appear here shortly. 

Thank you for your interest in this work

Siegel explained, “We received some recent comments on the working paper and are working as fast as possible to provide an updated version. The main points of the paper are that the use of AI in ransomware attacks is increasing, we should find a way to measure it, and there are things companies can do now to prepare.”

In a blog post on Monday, Beaumont summarized the affair and went on to pillory MIT Sloan and Safe Security for publishing dubious claims.

“The paper is almost complete nonsense,” he wrote. “It’s jaw droppingly bad. It’s so bad it’s difficult to know where to start.”

The paper, he observed, lists MIT authors Siegel and Sander Zeijlemaker, alongside Safe Security’s Vidit Baxi and Sharavanan Raajah.

Beaumont proposes the term “cyberslop” to describe the work. “Cyberslop is where trusted institutions use baseless claims about cyber threats from generative AI to profit, abusing their perceived expertise,” he wrote.

Beaumont argues that the presence of Siegel and another MIT professor on the board of the company that’s paying MIT Sloan to promote its research is problematic.

“The incentives are… not well managed here, and the industry is very sick,” Beaumont concludes. “Everybody just played along with it, and it results in CISOs being presented the wrong information.” ®
