ZDNet | Security

96% of IT pros say AI agents are a security risk, but they’re deploying them anyway


AI agents are being rapidly deployed within organizations even as they sow security fears, according to a new report from data governance firm SailPoint.

Based on a global survey of more than 350 IT professionals, the report found that the widespread embrace of agents — AI systems capable of formulating plans and taking action without human oversight — is taking place within a security vacuum. Of IT pros who responded, 84% said their organizations already use agents internally, but just over half that number (44%) currently have policies in place to control the agents’ behavior. 

Even more strikingly, 96% of respondents said they view agents as a security risk, yet 98% also said their employers plan to expand their use of agents in the coming year.

The rise of agents

Agents are the latest wave in a flood of innovation surrounding generative AI, which began in earnest following OpenAI’s release of ChatGPT in late 2022. Unlike traditional chatbots, which require meticulous human prompting, agents have much more autonomy to perform tasks. 

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) 

Big tech companies have been building and releasing agents in droves in recent months, hoping to cash in on the generative AI gold rush. Smaller businesses, meanwhile, have been quick to embrace these systems, as SailPoint's new report shows, drawn by the promise of greater efficiency and by mounting pressure to be early adopters of what big tech companies commonly describe as a radically disruptive technology.

Heightened security risks

Agents’ ability to take action without human oversight also makes them a heightened cybersecurity risk, according to the report.

“These autonomous agents are transforming how work gets done, but they also introduce a new attack surface,” Chandra Gnanasambandam, SailPoint’s EVP of Product and CTO, said in a statement. “They often operate with broad access to sensitive systems and data, yet have limited oversight.”

To Gnanasambandam’s point, SailPoint’s survey revealed an alarming irony: The vast majority (92%) of respondents feel that adequate governance of AI agents is key to safeguarding their organizations’ cybersecurity, and yet many (80%) also report that agents have already acted in unexpected and potentially risky ways, including accessing unauthorized resources and sharing sensitive data.

Given their privileged and largely unfettered access to sensitive organizational data, Gnanasambandam recommends subjecting AI agents to the same security protocols that apply to human employees.

“As organizations expand their use of AI agents, they must take an identity-first approach to ensure these agents are governed as strictly as human users, with real-time permissions, least privilege, and full visibility into their actions,” he writes.
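In practice, an identity-first approach means giving each agent its own named identity with an explicit, deny-by-default allowlist, and logging every action it attempts. The sketch below illustrates that idea in minimal form; the names (`AgentIdentity`, `PermissionGate`) are hypothetical and do not represent SailPoint's products or any specific vendor API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent treated like a human user: a named identity
    with an explicit allowlist of permitted actions."""
    name: str
    allowed_actions: set[str] = field(default_factory=set)

class PermissionGate:
    """Deny-by-default authorization with a full audit trail."""
    def __init__(self) -> None:
        self.audit_log: list[tuple[str, str, bool]] = []

    def authorize(self, agent: AgentIdentity, action: str) -> bool:
        # Least privilege: deny by default; allow only actions
        # explicitly granted to this agent's identity.
        allowed = action in agent.allowed_actions
        # Full visibility: every attempt is recorded, allowed or not,
        # so unexpected behavior can be detected and investigated.
        self.audit_log.append((agent.name, action, allowed))
        return allowed

gate = PermissionGate()
report_bot = AgentIdentity("report-bot", {"read:sales_db"})
print(gate.authorize(report_bot, "read:sales_db"))    # True: granted
print(gate.authorize(report_bot, "delete:sales_db"))  # False: never granted
```

The key design choice is that permissions are denied unless explicitly listed, mirroring the least-privilege principle Gnanasambandam describes, and that denied attempts are logged just like allowed ones.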
