»The current state of AI across the enterprise landscape
Organizations worldwide are quickly moving beyond simple chat and code assistants to AI agents that can read data, interact with tools, and act autonomously. Microsoft’s 2025 Work Trend Index Annual Report states that 81% of leaders expect agents to be integrated into their AI strategy within the next 12 to 18 months, and 24% state that they have already deployed AI across their organization.
The underlying infrastructure of AI workloads is already complex, and that complexity compounds as AI adoption accelerates. The 2025 HashiCorp Cloud Complexity Report states that 97% of organizations use multiple tools or services to manage cloud environments, and 73% of respondents said that platform engineering and security are not operating as a unified function. This fragmentation adds another set of challenges for every AI workload built on top of it.
»AI and agentic workflows
Traditional identity and access management (IAM) toolsets and workflows are human-centric: they were built for predictable patterns and behaviors, with access typically assigned through roles that define which resources a user can interact with.
Traditional human-centric workflows also tend to follow defined paths and patterns. Agents do not. They can act autonomously across a wide variety of tools, databases, and APIs, and can even invoke other agents. This autonomy is exactly why agents provide so much value, but from a security standpoint it also introduces risk: the paths and patterns are no longer defined and can change from one agent run to the next. This is where the legacy, static IAM model quickly falls apart.
Agentic AI adoption also presents a challenge at scale. Gartner reports that machine identities now outnumber human identities by a ratio of 45:1. And as most organizations plan to deploy agentic workloads in the near future, it is important to understand that each new agent introduces a new identity and a new set of credential paths. It also expands policy boundaries and increases audit requirements across the environment.
A solid security and operational foundation is required before scaling agentic AI adoption organization-wide. If agentic AI strategies are built on fragmented foundations, AI agents will accomplish the opposite of their intent, amplifying operational complexity and risk across your organization.
»Critical risks within agentic AI
There are four common critical risk areas we are observing within most AI workflows across the industry:
»Overprivilege without visibility
Agents tend to accumulate far more access than they require. Figure 1 shows the common pattern most workflows follow:
1. A human invokes an application
2. That application invokes an AI agent, which may in turn invoke another AI agent, and so on
3. The AI agent accesses a resource or performs a task

Figure 1 – Multi-layered agentic AI workflow
Agents can invoke other agents repeatedly, with different sets of permissions flowing down the chain as it grows. In most environments, nobody has a clear view into the full chain or understands what is actually occurring end-to-end. The result is overprovisioned permissions granted to accommodate every task an agent might perform, which creates a large blast radius if those agents are manipulated or compromised by a bad actor.
»Lack of real-time enforcement
As established above, AI agents will eventually call a tool, query a database, or modify a system. That is the point where policies must be enforced, so agents have guardrails that limit them to the actions they are allowed to perform. Many teams assume these guardrails are already in place or are handled by another team; in the majority of cases, the checks simply do not exist. This is where end-to-end security fails within most organizations’ workflows.
»Impersonation and invisible delegation
Most organizations simply allow agents to perform actions using the identity of the human that invoked them. While this is convenient, it breaks audit trails and hides delegation. Explicit delegation with consent should be used instead: the user authorizes the agent to perform actions, and the system records that delegation. This allows security teams to fully understand which actions were performed by the user and which were performed by the agent(s).
»Zero accountability
Without unique agent identities, runtime policy checks, or detailed logging, security-related questions become hard to answer. Questions such as “who approved this action?,” “which agent executed it?,” and “what authority did the agent use?” are not optional. They are baseline control questions, not just for security teams, but also for auditors and regulators. You need to ensure these questions can be answered to remain compliant as AI agents are rolled out.
»Why immediate action is needed
The IBM 2025 Cost of a Data Breach Report states that the global average breach costs organizations $4.4 million. The report also shows that 97% of organizations that reported an AI-related security incident lacked proper AI-dedicated access controls, and 63% had no AI governance policies to manage AI or prevent shadow AI. These statistics, coupled with the fact that agent compromise is currently the fastest-growing attack vector industry-wide, underscore why it is urgent to establish an agentic runtime security strategy before your organization is impacted.
»Regulatory pressure
Frameworks and regulations such as SOC 2, GDPR, and PCI DSS require organizations to demonstrate clear, unique identities, maintain audit trails, and revoke permissions quickly. And as noted earlier, while most organizations plan to deploy AI agents within the next 12 to 18 months, only 21% say they have a mature model for agent governance. Moving forward with these deployments without proper governance will result in control failures.
»Operational sprawl
As organizations launch dozens or hundreds of agents, sprawl and privilege creep will increase rapidly if teams act in silos and create their own AI policies and deployments. Tool and secret sprawl are already common issues faced by platform, development, and security teams across the industry. Agent sprawl will only compound the problem.
»Agentic AI implementation best practices
There are five implementation imperatives every organization needs to take into account when establishing its agentic AI strategy.

Figure 2 – Five imperatives for AI/Agentic enterprise implementations
»Register every agent
Each agent needs a unique, verifiable, cryptographically bound identity. This means no shared keys or service accounts, and no hiding behind human principals. This identity can be established via methods such as mTLS, SPIFFE, or cloud provider identities.
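As an illustration, the sketch below shows one possible way to give each agent its own Vault identity using the AppRole auth method via the hvac Python client. The role name, policy name, and TTLs are hypothetical placeholders, and it assumes the AppRole auth method is already enabled at its default mount.

```python
import hvac

# Operator session (assumes VAULT_ADDR and VAULT_TOKEN are set in the environment).
operator = hvac.Client()

# Register a dedicated AppRole for one agent: no shared keys, no hiding behind a human.
# "support-chatbot-agent" and "support-agent-readonly" are hypothetical names.
operator.auth.approle.create_or_update_approle(
    role_name="support-chatbot-agent",
    token_policies=["support-agent-readonly"],
    token_ttl="15m",       # short-lived tokens only
    token_max_ttl="30m",
)

# The agent authenticates with its own role_id/secret_id pair and receives a token
# bound to its own, auditable identity.
role_id = operator.auth.approle.read_role_id(role_name="support-chatbot-agent")["data"]["role_id"]
secret_id = operator.auth.approle.generate_secret_id(role_name="support-chatbot-agent")["data"]["secret_id"]

agent = hvac.Client()
agent.auth.approle.login(role_id=role_id, secret_id=secret_id)
```

Kubernetes, cloud IAM, or SPIFFE-based auth methods follow the same principle: one verifiable identity per agent, never a shared credential.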
»Strip standing privileges
Establishing least privilege begins with revoking standing access. A system that issues just-in-time (JIT) dynamic credentials with a time-to-live (TTL) lasting only as long as the task requires, all the way through the execution chain, significantly reduces the blast radius if an agent is compromised.
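To make this concrete, here is a minimal sketch of an agent requesting JIT database credentials from Vault with the hvac client, assuming a database secrets engine is already configured; the role name is hypothetical.

```python
import hvac

agent = hvac.Client()  # authenticated with the agent's own short-lived token

# Request JIT credentials instead of relying on a standing service account.
# "support-readonly" is a hypothetical role on a pre-configured database secrets engine.
creds = agent.secrets.database.generate_credentials(name="support-readonly")

username = creds["data"]["username"]
password = creds["data"]["password"]
lease_id = creds["lease_id"]
ttl = creds["lease_duration"]  # the credentials expire automatically after this TTL

# ... perform the task with the temporary credentials ...

# Revoke the lease as soon as the task completes rather than waiting for expiry.
agent.sys.revoke_lease(lease_id=lease_id)
```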
»Tie actions to intent
When requests involve user-specific data or administrative actions, the system must capture user context, consent, and delegation. Associating actions with intent is what transforms the nebulous and incomplete narrative of “agent X can do this” into the much more precise “agent X can do this for user Y, for purpose Z, during session B.”
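As a rough sketch of what capturing that intent can look like, the function below validates an IdP-issued token with PyJWT and extracts the delegation context. Claim names beyond the standard `sub` and `sid` are illustrative; real identity providers express delegation in their own way.

```python
import jwt  # PyJWT


def extract_delegation_context(id_token: str, public_key, expected_audience: str) -> dict:
    """Validate the IdP-issued token and record who delegated what, to whom, and why."""
    claims = jwt.decode(
        id_token,
        key=public_key,
        algorithms=["RS256"],
        audience=expected_audience,
    )
    return {
        "user": claims["sub"],                      # user Y who granted the delegation
        "agent": claims.get("act", {}).get("sub"),  # agent X acting on the user's behalf (RFC 8693 actor claim)
        "purpose": claims.get("purpose"),           # purpose Z captured at consent time (illustrative claim)
        "session": claims.get("sid"),               # session B, for audit correlation
    }
```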
»Enforce at the point of use
Each API call, query, and tool invocation should be verified against required policies at runtime. If the agent is not allowed to access a target system/resource, the request should be denied. This check needs to happen before the action is executed, not at login or deploy time.
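Here is a minimal sketch of such a runtime check: the agent's own Vault token is used to look up its effective capabilities on a path before anything executes. In practice this belongs in a broker or proxy layer rather than inside the agent process, and the response handling is deliberately defensive because the exact shape can vary slightly across Vault versions.

```python
import hvac


class PolicyViolation(Exception):
    """Raised when an agent attempts an action its policies do not allow."""


def guarded_call(agent: hvac.Client, vault_path: str, action, *args, **kwargs):
    """Verify the agent's capabilities on vault_path at runtime, then run the action."""
    resp = agent.sys.get_capabilities(paths=[vault_path])
    data = resp.get("data", resp)
    capabilities = data.get(vault_path, [])
    if not capabilities or "deny" in capabilities:
        raise PolicyViolation(f"Agent is not permitted to access {vault_path}")
    return action(*args, **kwargs)
```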
»Produce proof of control
Security teams require solid evidence, not assumptions. Audit trails need to answer questions quickly and provide signed proof of control. Teams should be able to detect violations, such as an agent reaching a database it was never meant to access, in near-real time. Clear separation of responsibility is also crucial to preserve accountability. User authentication, single-sign-on (SSO), and consent belong to the identity provider (IdP), while workload identity, credential brokering, policy enforcement, and auditing belong to the secrets management system.
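As one example of producing that evidence, the sketch below enables a Vault file audit device so every agent request and response is logged with sensitive values hashed. The device path and log location are placeholders.

```python
import hvac

operator = hvac.Client()  # assumes a token permitted to manage audit devices

# Enable a file audit device if one is not already present at this path.
audit = operator.sys.list_enabled_audit_devices()
devices = audit.get("data", audit)  # response shape varies slightly across versions
if "agent-audit/" not in devices:
    operator.sys.enable_audit_device(
        device_type="file",
        path="agent-audit",
        options={"file_path": "/var/log/vault/agent-audit.log"},
    )
```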
»Agentic AI use case examples
When it comes to providing agents with an identity, HashiCorp Vault can leverage identity-based controls to protect, inspect, connect, and manage the lifecycle of secrets, machine identities, service identities, and data access credentials.
In terms of policy-based access, Vault’s policies can grant fine-grained access to secrets, identities, PKI, and operations such as encryption/decryption and key signing. For reporting and auditing, Vault provides a centralized location for detailed logs, reporting, and compliance evidence.
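As a hedged illustration of how fine-grained such a policy can be, the sketch below registers a Vault policy through hvac that lets an agent do exactly two things: generate read-only database credentials and decrypt with a single transit key. The path and role names are hypothetical.

```python
import hvac

operator = hvac.Client()

# A deliberately narrow policy for one agent: read-only database credentials and
# one transit decryption key, nothing else. Names are illustrative.
support_agent_policy = """
path "database/creds/support-readonly" {
  capabilities = ["read"]
}

path "transit/decrypt/support-tickets" {
  capabilities = ["update"]
}
"""

operator.sys.create_or_update_policy(
    name="support-agent-readonly",
    policy=support_agent_policy,
)
```

Attaching a policy like this to the agent's own auth role (for example, the AppRole shown earlier) is what binds identity, access, and audit together for that agent.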
These capabilities make Vault the ideal secrets management tool and a central pillar of your agentic AI strategy. Let's go over three agentic AI use cases that demonstrate this.
»Use case #1: Read-only information retrieval agents
In this example, a user (Alice) interacts with a chatbot UI, asking questions such as “How do I reset my password?” or “What are your business hours?”, where the answers are identical for all users. Behind the UI is an AI agent that interacts with Vault to retrieve the dynamic, JIT credentials required to access the downstream data source containing the answers to Alice’s questions.
In this use case, no user context or consent is required. Vault creates the JIT credentials in the data source with an explicit token TTL. Vault can also renew the token automatically before expiration if required.

Figure 3 - Use case #1: Read-only information retrieval agents
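A minimal sketch of this flow with hvac, assuming the agent has already authenticated with its own identity (as in the AppRole example above) and a database secrets engine role named "faq-readonly" exists:

```python
import hvac

agent = hvac.Client()  # token obtained via the agent's own login (e.g. AppRole)

# Request short-lived, read-only credentials for the FAQ data source.
creds = agent.secrets.database.generate_credentials(name="faq-readonly")
lease_id = creds["lease_id"]

# ... query the data source with creds["data"]["username"] / creds["data"]["password"] ...

# If the task outlives the initial TTL, the lease can be renewed before it expires.
agent.sys.renew_lease(lease_id=lease_id, increment=300)
```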
»Use case #2: Personalized information retrieval agents
In this use case, the support chatbot now needs to query customer-specific data, account information, and personalized recommendations for each user. Since user context and consent are now required, an OAuth 2.0 authorization flow with user consent using IBM Verify as an IdP has been introduced. IBM Verify, or any other IdP of your choice, will return a JWT containing the user context, a session ID, and delegation claims.
As in the previous use case, Vault handles the creation of the JIT dynamic credentials in the required data source(s) used to access the user’s data.

Figure 4 - Use case #2: Personalized information retrieval agents
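A simplified sketch of what the Vault side of this exchange could look like: the agent presents the IdP-issued JWT to Vault's JWT auth method and receives a token scoped to this delegated session. The auth role, mount, and database role names are hypothetical.

```python
import hvac

idp_issued_jwt = "<JWT returned by IBM Verify after the OAuth 2.0 consent flow>"

agent = hvac.Client()

# Exchange the IdP JWT (carrying user context, session ID, and delegation claims)
# for a Vault token whose policies govern this specific delegated session.
agent.auth.jwt.jwt_login(
    role="personalized-support-agent",
    jwt=idp_issued_jwt,
)

# The agent can now request JIT credentials only for the data sources its role allows.
creds = agent.secrets.database.generate_credentials(name="customer-profile-readonly")
```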
»Use case #3: Personalized and privileged agents
In this last use case, we’re now granting our agent elevated privileges so it can also perform actions such as banking operations, agentic shopping, document authoring, or HR functions such as onboarding/offboarding employees. In addition to user context and consent, we also require delegation. This can be done via an OAuth 2.0 Client-Initiated Backchannel Authentication (CIBA) authorization flow with user context provided by our IdP, IBM Verify.
The user (Alice) will receive a notification on their mobile device whenever the AI agent attempts to perform an elevated operation on their behalf. This ensures proof of control and full auditability and provides clear separation of responsibility throughout the entire flow of operations.

Figure 5 - Use case #3: Personalized and privileged agents
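For reference, here is a stripped-down sketch of the CIBA leg of this flow: the agent starts a backchannel authentication request, Alice approves it on her device, and the agent polls the token endpoint until consent is granted. Endpoint URLs, client credentials, scopes, and the binding message are placeholders; consult your IdP's documentation for the actual values.

```python
import time
import requests

IDP = "https://idp.example.com/oauth2"                    # placeholder issuer
CLIENT_AUTH = ("agent-client-id", "agent-client-secret")  # placeholder client credentials

# 1. Ask the IdP to push an approval request to Alice's device for the privileged action.
start = requests.post(
    f"{IDP}/bc-authorize",
    auth=CLIENT_AUTH,
    data={
        "scope": "openid payments",                   # hypothetical scope for the elevated action
        "login_hint": "alice@example.com",
        "binding_message": "Approve transfer #4821",  # shown to Alice in the notification
    },
).json()
auth_req_id = start["auth_req_id"]

# 2. Poll the token endpoint until Alice approves (or denies) on her phone.
while True:
    token = requests.post(
        f"{IDP}/token",
        auth=CLIENT_AUTH,
        data={
            "grant_type": "urn:openid:params:grant-type:ciba",
            "auth_req_id": auth_req_id,
        },
    ).json()
    if "access_token" in token:
        break  # Alice approved; the token now carries her explicit delegation
    if token.get("error") != "authorization_pending":
        raise RuntimeError(f"Delegation denied or failed: {token}")
    time.sleep(start.get("interval", 5))
```

The resulting access token can then be exchanged for a Vault session as in use case #2, keeping the same separation between the IdP (authentication and consent) and Vault (credential brokering, policy enforcement, and audit).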
»Conclusion
Establishing consistent runtime security patterns in the early adoption stages of agentic AI is critical for any organization. Without solid foundations and standards, individual teams will implement their own siloed approaches to agent identity, access, and policy enforcement, which results in fragmentation, inconsistent controls, and increased risk.
Defining these patterns upfront provides the standardization that enables teams to build and scale agentic AI workflows in a secure manner, without the need to reinvent security controls for each new use case.
To learn more about how HashiCorp Vault provides the controls essential for safe, scalable agentic AI adoption, check out the Agentic Runtime Security Explained video on the IBM Technology YouTube page, or contact our team for a tailored consultation.










