Microsoft Secure

Securing and governing the rise of autonomous agents​​

In this blog, you will hear directly from Corporate Vice President and Deputy Chief Information Security Officer (CISO) for Identity, Igor Sakhnov, about how to secure and govern autonomous agents. This blog is part of a new ongoing series where our Deputy CISOs share their thoughts on what is most important in their respective domains. In this series you will get practical advice, forward-looking commentary on where the industry is going, things you should stop doing, and more.

By 2026, enterprises may have more autonomous agents than human users. Are we ready to secure and govern them?

2024 was a year defined by learning about generative AI. Organizations were experimenting with it: testing its boundaries and exploring its potential. In 2025, organizations moved into execution. Autonomous agents are no longer theoretical. They’re now being deployed across development, operations, and business workflows.

This shift is being driven by platforms like Microsoft Copilot Studio and Azure AI Foundry and accelerated by patterns like Model Context Protocol (MCP) and Agent-to-Agent (A2A) interactions. These agents are evolving from tools into digital actors—ones capable of reasoning, acting, and collaborating.

That evolution brings real value. But it also introduces a new class of risk—and with it, a new set of responsibilities.

The rise of the agent: What’s here and what’s next

To understand the rise of autonomous agents, it’s worth starting at the beginning. Generative AI first captured the spotlight with models that could produce human-like text, code, and imagery. Meanwhile, researchers were also advancing autonomous systems designed to perceive, decide, and act independently. As these two domains converged, a new class of AI emerged—agents capable not just of generating output, but of taking action towards goals with limited human input. Today, these agents are beginning to surface across each layer of the cloud stack, each designed to tackle different layers of complexity:

  • Software as a service (SaaS)-based agents, often built using low-code or no-code platforms like Copilot Studio, are enabling business users to automate tasks with minimal technical support.
  • Platform as a service (PaaS)-based agents support both low-code and pro-code development, offering flexibility for teams building more sophisticated solutions. Azure AI Foundry is a good example.
  • Infrastructure as a service (IaaS)-based agents are typically deployed in virtual networks (VNETs), virtual private clouds (VPCs), or on-premises environments, often as custom models or services integrated into enterprise infrastructure.

Each of these categories includes both custom-built first-party and third-party individual software vendors (ISVs) agents, all of whom are rapidly multiplying across the enterprise. As organizations embrace this diversity and scale, the number of agents will soon outpace human users—making visibility, oversight, and robust governance not just important, but essential.

The new risk landscape: Why agents are different

While autonomous agents unlock new levels of efficiency, scalability, and continuous operation for organizations, they also introduce a fundamentally different risk profile:

  • Self-initiating: Agents can act without direct human prompts, enabling automation and responsiveness at scale—but this autonomy also means they may take unintended actions or operate outside established guardrails.
  • Persistent: Running continuously with long-lived access allows agents to deliver ongoing value and handle tasks around the clock. However, persistent presence increases the risk of over-permissioning, lifecycle drift, and undetected misuse.
  • Opaque: Their ability to operate as “black boxes” can simplify complex workflows and abstract away technical details, but it also makes them difficult to audit, explain, or troubleshoot—especially when built on large language models (LLMs).
  • Prolific: The ease with which agents can be created, even by non-technical users, accelerates innovation and experimentation—while simultaneously increasing the risk of shadow agents, sprawl, and inconsistent governance.
  • Interconnected: By calling other agents and services, they can orchestrate complex, multi-step processes—but this interconnectedness creates complex dependencies and new attack surfaces that are challenging to secure and monitor.

Given this new risk profile, these autonomous agents aren’t a minor extension of existing identity or application governance—they’re a new workload. Treat them accordingly.

What’s more—as they scale, they will soon outnumber human users in the enterprise.

Common failure points in autonomous agents

Despite their impressive capabilities, AI agents can still make mistakes. These errors tend to arise during long-running tasks, where “task drift” can occur, or when the agent encounters malicious input such as Cross Prompt Injection Attacks (XPIA). In these cases, the agent may veer off course or even be manipulated into acting against its intended purpose.

That’s why it’s useful to approach agent security the same way you would approach working with a junior employee: by setting clear guardrails, monitoring behavior, and establishing strong protections. Microsoft is addressing XPIA with prompt shields and evolving best practices. Robust authentication can help counter deepfakes, and improved prompt engineering through orchestration or employee training can reduce hallucinations and strengthen overall response accuracy.

Understanding Model Context Protocol for agent governance

One of the most powerful enablers of the growth of autonomous agents is the Model Context Protocol (MCP). MCP is an open standard that allows AI agents to securely and effectively connect with external data sources, tools, and services—providing flexibility to fetch real-time data, call external tools, and operate autonomously. This open standard essentially acts as a “USB-C port for AI.”

But with that flexibility comes risk. Poorly governed MCP implementations can expose agents to data exfiltration, prompt injection, or access to unvetted services. Because MCPs are easy to create, they can proliferate quickly, often without proper access controls or oversight. This is where role-based access control (RBAC) becomes critical: MCP’s ability to connect agents to a wide range of resources means that robust, granular access controls are essential to prevent misuse. However, implementing effective role-based access control for MCP-enabled agents is complex: it requires dynamic, context-aware permissions that can adapt to rapidly changing agent behaviors and access needs. Without this rigor, organizations risk over-permissioning agents, losing visibility into who can access what, and ultimately exposing sensitive data or critical services to unauthorized use.

In short, agents don’t sleep, they don’t forget, and they don’t always follow the rules. That’s why governance and thought-through authorization can’t be optional, for both agents and MCP servers.

Securing and governing agents starts with visibility

The first challenge customers raise is simple: “Do I even know which agents I have?” Before any meaningful governance or security can take place, organizations must achieve observability. Without a clear inventory of agents—across SaaS, PaaS, IaaS, and local environments—governance is guesswork. Visibility provides the foundation for everything that follows: it helps organizations to audit agent activity, understand ownership, and assess access patterns. Only with this single, unified view can organizations move from reactive oversight to proactive control.

Once visibility is in place, securing and governing agents requires a layered approach built on seven core capabilities:

Identity management

Agents must have unique, traceable identities. These identities might be identities derived, but distinguishable, from user identities or independent identities like those used by services—but no matter what they are, these identities need to be governed throughout their lifecycle (from creation to deactivation) with clear sponsorship and accountability to prevent sprawl.

Access control

Agents should operate with the minimum permissions required. Whether acting autonomously or on behalf of a user, access must be scoped, time-bound, and revocable in real time.

Data security

Sensitive data must be protected at every step. This requires implementing inline data loss prevention (DLP), sensitivity-aware controls, and adaptive policies to prevent oversharing. These safeguards are especially critical in low-code environments where agents are created quickly and often without sufficient oversight.

Posture management

Security posture must be continuously assessed. Organizations need to continually identify misconfigurations, excessive permissions, and vulnerable components across the agent stack to maintain a strong baseline.

Threat protection

Agents introduce new attack surfaces; therefore, prompt injection, misuse, and anomalous behavior must be detected early. To mitigate this increased surface area for attacks, signals from across the compute, data, and AI layers should feed into existing extended detection and response (XDR) platforms for proactive defense.

Network security

Just like users and devices, agents need secure network access. That includes controlling which agents can access which resources, inspecting traffic, and blocking access to malicious or non-compliant destinations.

Compliance

Agent activities must align with internal policies and external regulations. Organizations should audit interactions, enforce retention policies, and demonstrate compliance across the agent lifecycle.

These are not theoretical requirements; they are essential for building trust in agentic systems at scale.

Building the foundation: Agent identity

To address the need for augmented governance, Microsoft is introducing Entra Agent ID—a new identity designed specifically for AI agents. You can think of them the same way as managed identities (MSIs) with no default permissions. They can act on behalf of users, other agents, or independently, with just-in-time access that’s automatically revoked when no longer needed. They’re secure by default, auditable, and easy for developers to use. As organizations move beyond managing just users and applications, the need to extend these foundational identity principles to AI agents becomes increasingly important.

An emerging strategy to manage AI agents at scale and improve risk management is the concept of an agent registry. While the directory of Microsoft Entra ID is an authoritative source for both human users and application artifacts, there is a need to provide a similar authoritative store for all agent-specific metadata. This is where the concept of an agent registry comes in—serving as a natural extension to the directory, tailored to capture the unique attributes, relationships, and operational context of AI agents as they proliferate across the enterprise. As these registries evolve, they are likely to integrate with core components like MCP servers, reflecting the expanding role of agents within the ecosystem. Together, these tools will allow organizations to achieve observability, manage risk, and scale governance.

Extending Microsoft Security to meet the moment

To meet organizational needs that come with autonomous agents, Microsoft is building on a strong foundation and extending our existing security products to meet the unique demands of the agentic era, grounded in a Zero Trust approach that protects both people and AI agents.

Microsoft’s security stack—including Entra, Purview, Defender, and more—adapts identity management, access control, data protection, secure network access, threat detection, posture management, and compliance to support AI agents across both first-party and third-party ecosystems. We are innovating from this baseline to deliver agent-specific capabilities:

  • Microsoft Entra extends identity management and access control to AI agents, ensuring each agent has a unique, governed identity and operates with just-in-time, least-privilege access.
  • Microsoft Purview brings robust data security and compliance controls to AI agents, helping organizations prevent data oversharing, manage regulatory requirements, and gain visibility into AI-specific risks.
  • Microsoft Defender integrates AI security posture management and runtime threat protection, empowering developers and security teams to proactively mitigate risks and respond to emerging threats in agentic environments.

This isn’t a separate security silo for AI. It’s agent governance becoming a natural extension of the security investments customers already trust—ones that are integrated, consistent, and ready to scale with them.

A call to action

The agentic era is here, and the opportunities are real—but so are the risks.

To move quickly without compromising trust, we need to integrate governance into the core of agent design. This begins with visibility, scales with identity, access, and data controls, and matures with posture, threat, and compliance capabilities that treat agents as first-class workloads.

Let’s build a future where agents are not just powerful—but trustworthy by design.

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

READ MORE HERE