Most security models were built around a simple idea: people log in, systems respond, and access is reviewed over time. That idea held up through the shift to cloud and automation. It still mostly works for services and pipelines.
Agentic AI changes that balance, and the gap it creates isn't theoretical. It's operational, compounding, and increasingly easy to miss until something breaks.
»How access gets away from you
Here's a scenario that's already playing out across enterprise environments. A user delegates a task to an AI coworker agent — say, pulling data from a CRM, cross-referencing a financial system, and generating a report. The agent assumes the user's identity, completes the task, and everything looks fine.
Except the access path it opened doesn't close cleanly. The role it assumed stays warm. The credential it used gets cached. Three months later, during an incident review, someone asks: which agent did that? Under whose authority? What else did it touch? Nobody can answer with confidence, because at the time, the access looked like a user doing their job.
That's the shape of the problem: not a dramatic breach, but a quiet accumulation of access that nobody explicitly approved and nobody thought to revoke.
»The identity model we built doesn't fit the world we're entering
Traditional identity and access management was built around people. Even when we expanded it to applications and services, the underlying assumptions stayed largely intact: identities were provisioned deliberately, permissions were reviewed periodically, and access decisions were made ahead of time.
Agents don't fit that shape. They request access dynamically. They call new tools. They assume roles. In some cases, they generate credentials to complete a task. Over time, those interactions can create access paths no one explicitly approved, reviewed, or even anticipated, leading to privilege that grows not in one obvious jump, but in small steps that are easy to miss individually and hard to see in aggregate.
This isn't just another form of complexity. It's a fundamentally new kind of identity control gap, and it has two distinct flavors.
The first is delegated access, where agents act on behalf of a human, inheriting that user's identity to carry out a task. Copilots and coding assistants work this way. The agent does things the user could do, but the user isn't watching every step.

The second is autonomous access, where agents operate with their own identity, authenticating independently and taking action outside the scope of any individual user's authority. Infrastructure agents and workflow orchestrators work this way.

Both models are legitimate. Both create real governance challenges. And in most environments today, the controls for each are being built separately, inconsistently, or not at all.
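The practical difference between the two models shows up in attribution. As a minimal sketch (the record shape and field names here are illustrative, not from any particular standard or product), a delegated action carries both the agent's identity and the user it acts for, while an autonomous action carries only the agent's own identity:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AgentAction:
    """Minimal attribution record for an agent-initiated action."""
    agent_id: str                # the agent's own verifiable identity
    on_behalf_of: Optional[str]  # user identity for delegated access; None if autonomous
    action: str                  # what was done
    scope: str                   # what access was granted for this action

    @property
    def is_delegated(self) -> bool:
        return self.on_behalf_of is not None

# Delegated: a copilot acting under a user's authority
report = AgentAction("copilot-7", "alice@example.com", "crm.export", "crm:read")

# Autonomous: an infrastructure agent acting under its own identity
rotate = AgentAction("infra-agent-2", None, "secrets.rotate", "vault:admin")

print(report.is_delegated)  # True
print(rotate.is_delegated)  # False
```

Keeping `on_behalf_of` as an explicit, always-present field is the point: when an incident review asks "which agent did that, under whose authority?", the answer is in the record rather than inferred from a user session that looked human.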
»A new attack surface
When agents inherit a user's identity, their actions are indistinguishable from a human's. When something goes wrong, there's no clean way to separate intent from execution or what a person approved versus what an agent decided on its own. When agents operate autonomously, they carry their own credentials, which means every agent is a potential target, a potential source of sprawl, and a potential audit gap.
Either way, machine identities already outnumber human ones in most enterprises. Agents accelerate that imbalance. Every workflow needs credentials. Every tool call needs access. When teams are under pressure to move fast, those credentials tend to stick around longer than they should — long-lived, overly permissive, and sometimes shared — because managing uniqueness at scale feels like overhead no one has time for right now.
That's how secrets end up in code, roles accumulate privileges, and access quietly spreads.
»Why Day 1 isn't the hard part
Most organizations don't fail at securing the initial deployment. They define roles, plug in existing IAM, and move forward. The failure happens later.
Secrets rotate late, or not at all. Certificates expire unexpectedly. IAM roles quietly accumulate permissions. Remediation happens once, then drifts. And agents make this worse in a specific way: an autonomous system that can modify infrastructure and trigger workflows doesn't just inherit existing gaps, it can reintroduce vulnerabilities that were previously fixed, undoing security work between review cycles without anyone noticing.
When incidents happen, the lack of clear attribution becomes the real problem. Teams struggle to answer basic questions like who authorized this action, which agent executed it, what credentials were used, what changed, and when? For regulated industries, that uncertainty isn't just inconvenient; it can stop an audit in its tracks.
»Static controls in a moving system
Most security controls still assume access is something you grant in advance. Once authenticated, a system trusts that identity until something changes.
Agents don't respect that boundary. A delegated agent might legitimately access a system in one moment and, two steps later in its reasoning chain, try to reach something its user never intended to authorize. An autonomous agent might operate across three cloud environments in the span of a single workflow. Context shifts constantly. What was appropriate five minutes ago may not be appropriate now.
This is where existing tools run into their limits. IAM platforms focus on who you are. PAM was built around how humans access systems. Secrets management focuses on storing credentials. None of these were designed for an identity that changes context at machine speed, across environments, with no natural pauses.
Trust can't be a one-time decision in these environments. Identity needs to be verified continuously, not just at login. Access needs to be scoped to the task at hand, not the lifecycle of the agent. Credentials should expire naturally, tied to specific context and purpose, without relying on cleanup processes that run quarterly. And authorization decisions have to happen at the point of action, not when something is provisioned.
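Those last two properties, credentials that expire naturally and authorization at the point of action, can be sketched in a few lines. This is an illustrative toy, not a real secrets engine: `issue_credential` and `authorize` are hypothetical names, and in practice the token would be issued and validated by a secrets platform rather than in-process.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class TaskCredential:
    token: str
    scope: str
    expires_at: float

def issue_credential(scope: str, ttl_seconds: float) -> TaskCredential:
    """Issue a short-lived credential scoped to a single task."""
    return TaskCredential(
        token=secrets.token_urlsafe(16),
        scope=scope,
        expires_at=time.monotonic() + ttl_seconds,
    )

def authorize(cred: TaskCredential, requested_scope: str) -> bool:
    """Decide at the point of action: the scope must match and the
    credential must still be live. Expiry is intrinsic, so no quarterly
    cleanup process is needed to revoke it."""
    return requested_scope == cred.scope and time.monotonic() < cred.expires_at

cred = issue_credential(scope="crm:read", ttl_seconds=0.2)
assert authorize(cred, "crm:read")          # in scope, not expired
assert not authorize(cred, "finance:read")  # out of scope: denied
time.sleep(0.3)
assert not authorize(cred, "crm:read")      # expired naturally
```

The design choice worth noticing is that revocation is the default state: once the task's time window passes, the credential denies itself, with no dependency on a review cycle remembering it exists.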
»Extending zero trust to non-human identities
For teams already working toward zero trust, agentic AI exposes the next gap to close, and it's the gap where existing controls are most likely to fail.
The principles still apply: least privilege, continuous verification, strong identity at the center. What changes is the surface and the speed. Zero trust as most organizations have implemented it was designed for humans authenticating to systems. It assumed a person would log in, establish a session, and do work within that session. Agents don't work in sessions. They work in actions, thousands of them, across environments, triggered by other agents, chained into workflows that no human is watching in real time.
Extending zero trust to agents means every agent has its own verifiable identity, not a shared key or borrowed role. It means access is temporary by default, and when a task ends, permissions should too. It means credentials are short-lived and issued just-in-time, not stored and rotated on a schedule. And it means actions are observable not just as events, but as attributable decisions: which identity authorized this, under what scope, on whose behalf.
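One way to picture "actions, not sessions" is a gate that every tool call passes through: the requested scope is checked against policy and the decision is logged, per call, whether it's allowed or denied. The sketch below is a toy under stated assumptions — `POLICY` and `AUDIT_LOG` are plain in-memory stand-ins for a real policy engine and an append-only audit store, and `gated_call` is a hypothetical name.

```python
# Stand-in policy: agent identity -> set of allowed scopes.
POLICY = {"copilot-7": {"crm:read"}}
# Stand-in audit store: every decision is recorded, allowed or not.
AUDIT_LOG = []

def gated_call(agent_id: str, scope: str, action, *args):
    """Authorize and log at the point of action; no standing session."""
    allowed = scope in POLICY.get(agent_id, set())
    AUDIT_LOG.append({"agent": agent_id, "scope": scope, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{agent_id} lacks scope {scope}")
    return action(*args)

# An in-scope call succeeds and is logged.
result = gated_call("copilot-7", "crm:read", lambda: "customer rows")

# An out-of-scope call is denied and is *also* logged.
try:
    gated_call("copilot-7", "vault:admin", lambda: "secret")
except PermissionError:
    pass

print(len(AUDIT_LOG))  # 2 — both decisions are attributable after the fact
```

Because the log captures denials as well as grants, the incident-review questions ("which identity, what scope, was it allowed?") have answers even for actions that never executed.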
That's not a theoretical posture. It's a concrete set of controls that already exist in human-centric workflows: dynamic secrets, certificate-based identity, policy-enforced access, and comprehensive audit logging. The engineering challenge is extending them to cover agents at the scale and speed they operate.
»Moving forward without losing control
Agentic AI isn't experimental anymore. Teams are adopting it because it works, and the pressure to move fast is real.
The challenge is that speed creates the conditions for the scenario described at the start of this piece: access that accumulates quietly, through behavior rather than design, until the audit question comes and nobody has a clean answer. That's not a failure of tooling so much as a failure of assumptions. Security models that were built for a world where access was provisioned deliberately and reviewed periodically are now applied to systems that provision dynamically and never stop.
The organizations that will handle this well aren't the ones that slow down adoption. They're the ones that connect identity, access, and execution into a coherent picture where every agent has a clear identity, every action is attributable, and the controls are enforced at the moment work happens, not the moment it's reviewed.
That's how autonomy becomes something you can actually rely on. Not because you're watching everything, but because the system itself knows what it should and shouldn't do, and leaves a clear record either way.
To learn more, check out our use case page or watch our explainer video.