The Register

Spy turned startup CEO: ‘The WannaCry of AI will happen’

Interview “In my past life, it would take us 360 days to develop an amazing zero day,” Zafran Security CEO Sanaz Yashar said.

She’s talking about the 15 years she spent working as a spy – she prefers “hacking architect” – inside the Israel Defense Forces’ elite cyber group, Unit 8200. 

“Now, the volume and speed is changing so much that for the first time ever, we have a negative time-to-exploit, meaning it takes less than a day to see vulnerabilities being exploited, being weaponized before they were patched,” Yashar told The Register. “That is not something you used to see.”

The reason: AI. The technology isn’t yet letting criminals develop novel or more sophisticated attack chains entirely without humans in the loop, she said. “But AI is helping the threat actors do more, and faster,” according to Yashar – and the more and faster is what worries her.

When Yashar was a teen, her family moved from Tehran to Israel, and the Israeli military intelligence corps recruited her while she was working as a research assistant at Tel Aviv University.

In 2022, Yashar co-founded Zafran, which uses AI to help companies map and manage their cyber-threat exposure. But before heading up her own security startup, she led threat intelligence at Cybereason and worked as a manager at Google’s incident response and threat intel biz, Mandiant.


She’s citing Mandiant’s recent analysis that found the average time-to-exploit (TTE) in 2024 hit -1 day. TTE is how Google and Mandiant measure the average number of days between attackers exploiting a bug and the vendor issuing a patch, and this is the first time the security analysts have seen a negative value: crims are now, on average, exploiting bugs a day before they’re patched.

“And we saw 78 percent of the vulnerabilities being weaponized by LLMs and AI,” Yashar said.

In addition to attackers using AI to improve the speed and efficiency of breaches, organizations’ increasing use of this same technology – in some cases, just stuffing AI into every product and process – expands the attack surface. 

This includes attackers misusing corporate AI systems through techniques like prompt injection, and tricking AI agents into bypassing safety guardrails to develop exploit chains or access data they’re not supposed to reach.

Plus, there are software vulnerabilities within the AI systems and frameworks themselves, and Yashar worries about the “collateral damage” caused by exploiting these bugs, especially if they fall into the hands of “junior” hackers: cybercrime collectives in the mold of Scattered Spider and ShinyHunters, or governments just beginning to develop or buy a cyber-weapons arsenal, or experimenting with agentic AI.

“Sometimes the ones that don’t understand what they are doing are more dangerous than Russia, Iran, Israel, US, China – they understand what can happen if something goes wrong,” she explained. “Even if they do bad things, there is a decision they understand.”

“The new threat actors are going to utilize these vulnerabilities, not understanding that they can shut down half of the world,” Yashar said. “And the collateral damage is going to be something that we cannot expect and we cannot deal with. I do think the WannaCry of AI has not yet happened. It’s going to happen. I don’t know where it’s going to come from, but it’s going to happen. The question is, how are you going to mitigate – because you cannot remediate it – so how you’re going to mitigate your own risk?”

WannaCry, which struck in May 2017, was one of the largest worldwide ransomware attacks, hitting hundreds of thousands of computers and causing damage estimated in the hundreds of millions to billions of dollars.

The answer, according to Yashar, is also AI. Not coincidentally, Zafran has developed a threat-exposure management platform that uses AI to find and remediate exploitable vulnerabilities and perform proactive threat hunting.

“The way we do security is going to completely change,” she said. “Companies that just show you insight wouldn’t be enough. They have to get the job done. And to get the job done, you need to use agents, even with human intel.”

AI agents will investigate and triage threats, and develop an action plan for an organization to mitigate them. “The AI is going to build those packages according to your risk appetite, and there’s going to be a human to make sure that you want to do this action according to your risk appetite,” Yashar said.

Humans, she added, will remain in the loop for the foreseeable future because “human behaviour changes slower than technology,” and when it comes to completely turning over the reins to AI agents, we’re not there yet. ®
