The Register

From AI to analog, cybersecurity tabletop exercises look a little different this year

It’s the most wonderful time of the year … for corporate security bosses to run tabletop exercises, simulating a hypothetical cyberattack or other emergency, running through incident processes, and practicing responses to ensure preparedness when a digital disaster occurs.

“We’re ultimately testing how resilient is the organization,” said Palo Alto Networks Chief Security Intelligence Officer Wendi Whitmore in an interview with The Register. “It’s not if we get attacked, it’s: How quickly do we respond and contain these attacks.” 

And this year, organizations need to account for the speed of AI, both in terms of how attackers use these tools to find and exploit bugs, and how defenders can use AI in their response.

“Threat actors are exploiting CVEs at an increased rate with AI,” Google Cloud’s Office of the CISO Public Sector Advisor Enrique Alvarez told The Register. “Tabletop exercises should consider a scenario where a CVE is published affecting a software system in use by the company with an immediate exploit via a cyber adversary.”

Whitmore said her threat analysts have seen exploit attempts within five minutes of a vulnerability being released.

“On the defender side, like our own SOC: We’re looking at 90 billion attack events coming in per day, which we can synthesize down into 26,000 that are correlated, and then one per day that requires human manual intervention of tier-three analysts to dive in, and run additional queries and analysis,” she added.

Indeed, if 2025 taught us anything, it’s that criminals and state-backed threat actors are increasingly adding AI to their arsenals, while enterprises’ AI use vastly expands their attack surface.

From the attackers’ side, this means more targeted, convincing phishing emails, faster reconnaissance and scanning for vulnerabilities, and troves of sensitive data that can be quickly scanned and stolen. Meanwhile, defenders need to ensure their LLMs aren’t leaky, and AI agents aren’t accessing data that they shouldn’t have access to.

The Reg asked several incident responders to weigh in on best practices for these year-end tabletop exercises as organizations face more AI-generated or AI-assisted attacks, and take measures to secure their internal AI systems and models.


“Tabletop exercises now need to reflect two realities: attackers using AI to move faster, quieter, and at massive scale, and attackers targeting the AI systems we deploy,” Tanmay Ganacharya, VP of Microsoft threat protection research, told The Register.

“The best exercises simulate adaptive, AI-powered phishing and rapid-moving attack chains, while also preparing teams for scenarios targeting AI systems like prompt injection, misconfiguration, and AI-driven data exfiltration,” Ganacharya said. “The goal is to rehearse faster decisions, verify information in low trust environments, and ensure teams understand how AI changes every stage of the kill chain.”

Ultimately, the goal of these exercises is to educate both the C-suite and technical responders about what can happen, and for them to practice their responses to various security scenarios while identifying areas for improvement.

“As much as it is about establishing muscle memory and ensuring that you have a good process and can execute against that process, it’s also just as much about education,” Mark Lance, GuidePoint Security VP of digital forensics and incident response and threat intel, told The Register. “So for instance, a senior leadership team learning about ransomware typically walks away from it saying, ‘I know more about this and the potential risks and threats associated with it.'”

Using AI to fight AI

One way organizations can account for the influx of AI-related threats is to use AI to develop scenarios, said Bill Reid, a security advisor to healthcare and life sciences organizations in Google Cloud’s Office of the CISO. “Want to test AI fakes? Make one and use it in the tabletop exercise,” he told The Register.

In addition to using AI to develop the exercises, companies should also use it to “measure and facilitate exercises and outcomes,” said Taylor Lehman, director of Google Cloud Office of the CISO’s healthcare and life sciences division.  

“Expose information about your environment – like threats, controls, vulnerabilities, assets of all types, key risks, stakeholders, customer personas, etc. – to AI systems who can then help craft very meaningful and very specific, realistic scenarios that will help you hone the scenario and deliver specific types of outcomes you want as part of an exercise,” Lehman told The Register.

Other new-ish AI threats include deepfakes, which especially affect Google Cloud’s financial services clients, Alvarez said. Because of this, many organizations in the financial sector have added – or should add – deepfakes, both audio and video, to their scenarios.

“However, the use of AI-generated attacks is not only deepfakes,” Mandiant Consulting director David Wong said. “AI is also used in each stage of the attack lifecycle and increases the volume and speed of the attacks. Designers of tabletop exercises must adapt to the speed and volume in their scenarios.”


Alvarez also suggests reaching out to a local FBI Field Office and asking the Cyber Assistant Special Agent in Charge (ASAC) if they can provide an agent to participate. “This is a good way to establish a point-of-contact at the field office for future reference and communication,” he said, adding that for full-scale exercises including C-suite, board members, and other internal stakeholders, consider reaching out to CISA to participate, too. 

CISA, the US Cybersecurity and Infrastructure Security Agency, also provides several free resources designed to help companies conduct their own exercises covering a range of threat scenarios.

One Google Cloud Office of the CISO senior consultant, Anton Chuvakin, advocated for analog – and caution – when it comes to “fighting AI with more AI. Instead, focus your tabletop exercises on introducing analog friction to break the adversary’s speed,” he told The Register. “When a deepfake CEO demands a money transfer, the drill shouldn’t be about detection software, but strictly testing a mandatory out-of-band verification via a standard phone call.”

Plus, don’t rely only on online files, he added. “Exercises must practice reverting to minimum viable business operations, utilizing offline golden copies of data and robust approval processes that an algorithm cannot spoof,” Chuvakin said. “If you can’t trust what you see on the screen, your strongest defense is actually process, not technology.”

Who should participate?

All the experts The Reg talked with recommended at least one or two tabletops per year, and tailoring these exercises to specific audiences, separating C-suite leaders and technical responders, for example. 

“From my prior FBI experience, the vast majority of companies have never done a tabletop exercise,” Alvarez said. “A good starting point or goal should be at least twice a year with the second tabletop exercise incorporating the lessons learned from the first.” 

Participation should vary based on the scenario, according to Ganacharya, with the C-suite participating “at least semiannually because AI-powered attacks demand executive level decisions.”

Technical drills, such as trialing new ransomware procedures, may only require the Security Operations Center (SOC) and incident response teams, while high-impact scenarios like insider leaks and reputational risk should also include legal, PR, HR and senior leadership.

Other operational leaders may require more frequent exercises with targeted scenarios, Ganacharya said. This includes deputies, directors, and SOC leaders – the people that executives rely on to carry out day-to-day operations.

And, as always, remember Murphy’s Law. As Ganacharya put it: “Every exercise should include alternates, because real incidents rarely happen when your first choice responder is available.” ® 
