The Register

Davos discussion mulls how to keep AI agents from running wild

AI agents arrived in Davos this week with the question of how to secure them – and prevent agents from becoming the ultimate insider threat – taking center stage during a panel discussion on cyber threats.

“We have enough difficulty getting the humans trained to be effective at preventing cyberattacks. Now I’ve got to do it for humans and agents in combination,” Pearson Chief Technology Officer Dave Treat said.

Pearson’s a global education and training company, and Treat was speaking during the question-and-answer part of the panel as an audience member, not a panelist. Like many companies, Pearson is introducing AI agents into its environments, Treat said. 

This opens up a whole new set of challenges for organizations that don't want to miss out on the efficiency gains AI agents can provide, but also don't want those agents accessing data and systems that should be off limits to them, or performing tasks that could harm the business or individuals.

AI agents, Treat said, “tend to want to please. How are we creating and tuning these agents to be suspicious and not be fooled by the same ploys and tactics that humans are fooled with?”

No one has a good answer to this question – at least not yet. The same is true of other AI- and agent-related security threats, like prompt injection.

For now, implementing zero trust and least-privilege access remains high on the list of best practices. And, we should note, these concerns are also triggering M&A activity among security firms looking to scoop up smaller, AI-focused startups.

“With agents, you need to think about them as an extension of your team, an extension of your employee base,” Cloudflare co-founder and president Michelle Zatlyn said, speaking on the Davos panel. “Organizations are adopting zero trust for their employees. The same thing will happen with agents.”
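In practice, treating agents as an extension of the employee base means deny-by-default, least-privilege permissions per agent. A minimal Python sketch of that idea – the agent names, actions, and resources here are invented for illustration:

```python
# Least-privilege sketch: each agent gets an explicit allowlist of
# (action, resource) pairs; anything not granted is denied by default.
# Agent names and resources are illustrative, not from any real system.

AGENT_POLICIES = {
    "invoice-bot": {("read", "invoices"), ("write", "payment-queue")},
    "support-bot": {("read", "tickets"), ("write", "ticket-replies")},
}

def is_allowed(agent: str, action: str, resource: str) -> bool:
    """Deny by default: only explicitly granted pairs pass."""
    return (action, resource) in AGENT_POLICIES.get(agent, set())
```

The key design choice is the default: an agent missing from the policy table, or requesting anything outside its grants, is refused rather than waved through.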

Hatem Dowidar, group CEO of e&, an Emirati state-owned communications, technology, and investment group, suggested organizations deploy more guardrails, plus guard agents that monitor their AI minions. 

“With human agents, remember many, many years ago we started saying ‘all calls are recorded for quality purposes?’ We need to create that also for AI agents,” Dowidar said. “We need to set up guardrails and have guard agents that are in a separate system that look into how your AI agents are behaving and immediately flagging anything that is going out of the ordinary.”
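Dowidar didn't describe an implementation, but the core of his guard-agent idea – a separate system that watches what agents do and flags anything out of the ordinary – can be sketched as a toy monitor that learns each agent's normal actions and flags anything unseen. This is entirely illustrative:

```python
from collections import defaultdict

class GuardAgent:
    """Toy guard agent: records each monitored agent's actions during a
    baseline period, then flags any action it hasn't seen that agent take.
    A hypothetical sketch, not a production anomaly detector."""

    def __init__(self):
        self.baseline = defaultdict(set)  # agent -> set of known actions
        self.learning = True              # flip off once baseline is built

    def observe(self, agent: str, action: str) -> bool:
        """Return True if the action should be flagged as out of the ordinary."""
        if self.learning:
            self.baseline[agent].add(action)
            return False
        return action not in self.baseline[agent]
```

Real deployments would score behavior statistically rather than on exact matches, but the separation Dowidar calls for is the point: the monitor lives outside the agents it watches.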

Mastercard CEO Michael Miebach said organizations should take a page from the banking industry’s security and threat-intelligence practices, and collect as many signals as possible from relevant data streams and other indicators to determine if activity is safe or malicious. 

He also noted that Mastercard acquired Recorded Future for this type of proactive, threat-hunting purpose. 

Identifying threats, Miebach said, “comes down to many things. It could be identity. It could also be your location data. It’s many, many data sets that come together with a 99 percent probability score. This is a good transaction. Let it happen.”
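Miebach didn't spell out the math, but combining many signals into a single probability-style score can be as simple as a weighted average over per-signal confidences. The signal names, weights, and threshold below are invented for illustration:

```python
# Hypothetical sketch of multi-signal risk scoring: each signal reports a
# confidence in [0, 1]; a weighted average yields one score to act on.

def transaction_score(signals: dict[str, float],
                      weights: dict[str, float]) -> float:
    """Weighted average of per-signal confidences; missing signals score 0."""
    total = sum(weights.values())
    return sum(signals.get(name, 0.0) * w for name, w in weights.items()) / total

# Illustrative signals and weights, not Mastercard's actual model.
weights = {"identity_match": 0.5, "location_match": 0.3, "device_known": 0.2}
signals = {"identity_match": 1.0, "location_match": 1.0, "device_known": 0.9}
score = transaction_score(signals, weights)
approve = score >= 0.95  # high score: good transaction, let it happen
```

Production systems use far richer models, but the shape is the same: many data sets collapse into one probability that gates the decision.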

Analyzing all of these signals to improve security defenses requires companies to have access to their data, and this is where AI and security use cases intersect, according to Miebach.

“You can use the updated data infrastructure and lineage work to also drive the defenses,” Miebach said.

This, according to Dowidar, is also an area where network defenders can use AI agents to boost their own security posture.

“We need more intelligent networks,” he said. “We need to continuously monitor for different behaviors. People are using AI capabilities or agents for hacking or for bad actions, we also have agents that are looking at new behavior or different behavior and isolating it early on to be able to protect the network.” ®