Secure AI identity with HashiCorp Vault
HashiCorp Vault's dynamic credentials give AI applications traceable, short-lived identities with just-in-time access, replacing risky static credentials. Try our proof-of-concept LangChain application to see how this can work.
Today, most requests in modern systems don’t come from people; they come from non-human identities (NHIs) such as containers, microservices, and CI/CD jobs. With AI usage in software development on the rise, AI agents are increasingly used to automate a variety of processes. Managing access for these identities is already hard, and AI makes it even trickier. These agents act on behalf of other systems, touch sensitive data, and trigger actions across multiple services. Suddenly, figuring out who (or what) should have access to what becomes a whole lot more complex and a lot harder to secure.
» The AI identity problem
Consider an example where you’re using AI to analyze sensitive data and then act on it, like getting reimbursed by your health insurance company for a treatment you underwent.
As a human, you need to follow a complex set of steps, including gathering data about patient health and treatment, understanding coverage, submitting the claim along with any required approvals, and reimbursing the patient. Each step in this process is fraught with dangers, including payment delays or denials, leaks of confidential data, and fraudulent activity. Moreover, this process likely falls under dozens of regulations across various authorities and jurisdictions.
From a company’s perspective, automating this process using AI means building systems that can navigate this complexity on the user’s behalf. This involves getting a wide range of systems and components to interact securely and intelligently, which poses three problems:
- Defining policies that ensure only the right agents and NHIs can perform specific actions, on behalf of specific parties, under a restricted set of circumstances is a highly complex business problem.
- Protecting the process at every stage, as various AI agents gather, analyze, and act, requires careful layers of authentication, authorization, and data encryption, often with massive scale and availability requirements, especially for high-volume transactions like insurance claims.
- Providing visibility and auditability so you can trace a process from start to finish, demonstrate that it happened in accordance with the rules, and, when something goes awry, determine exactly what happened.
When a security incident happens in an AI system, traditional audit methods make investigation nearly impossible: the logs only reveal that a generic service account like svc-genai-myapp accessed data, offering no insight into which user session or prompt initiated the action. With AI agents proliferating across enterprises, this visibility gap becomes a critical security and compliance risk.
» Why static credentials fail for AI
As AI gets woven into more systems and workflows, one of the biggest blind spots is how credentials are managed. Most AI pipelines still rely on static, long-lived secrets that are hard-coded in configuration files or CI/CD pipelines. These secrets rarely get rotated and often have more access than they should, simply because it’s more convenient.
This approach may have worked in simpler environments, but it breaks down in dynamic AI systems where agents are making decisions, accessing sensitive data, and working across services in real time. In these environments, traditional static credentials like API keys and shared secrets introduce four fundamental risks:
- Lack of context: Static credentials are shared among multiple users, applications, and services. As a result, they cannot be tied to specific queries, sessions, or users, making incident investigation impossible. Audit logs might show the credential being used, but not who or what specifically initiated the action. Dynamic credentials, by contrast, are generated for specific, short-lived sessions, making every action traceable back to its origin.
- Overprivileged: In prompt-driven AI systems, a single prompt can trigger access to sensitive data, initiate actions, or cross service boundaries. When AI agents (NHIs) are overprivileged, they can do far more than intended, especially in dynamic learning environments or feedback loops. This creates serious risk: a poorly scoped prompt, a misconfigured permission, or a compromised agent can unintentionally expose data or influence system behavior. To reduce this blast radius, AI identities must be tightly scoped with just-enough access, tied to the specific task and context.
- Difficult to rotate: Updating shared credentials across multiple system instances creates operational complexity. For example, Canva had to divert significant engineering hours every time it wanted to do a large-scale rotation of static secrets.
- Long-lived: Because rotation is difficult, credentials remain valid for months, giving attackers extended access if compromised. By contrast, a short-lived (dynamic) credential is automatically revoked after just minutes or hours, leaving attackers with a tiny window, if any, to exploit stolen credentials. The sketch below shows how such a TTL can be enforced.
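To make the short-TTL idea concrete, here is a minimal, hypothetical sketch using the hvac Python client to define a Vault database role whose credentials expire after five minutes. The mount point, role name, connection name, and grants are illustrative assumptions, not taken from any real deployment:

```python
# Hypothetical sketch using the hvac Python client: define a Vault
# database role whose credentials auto-expire after five minutes.
# The mount point, role name, connection name, and grants are
# illustrative assumptions.
import hvac

client = hvac.Client()  # reads VAULT_ADDR and VAULT_TOKEN from the environment

client.secrets.database.create_role(
    name="ai-agent-readonly",   # hypothetical role for an AI agent
    db_name="claims-postgres",  # connection configured separately in Vault
    creation_statements=[
        "CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' "
        "VALID UNTIL '{{expiration}}';",
        "GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
    ],
    default_ttl="5m",  # every issued credential is revoked after 5 minutes
    max_ttl="15m",     # hard ceiling even if the lease is renewed
    mount_point="database",
)
```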
» Dynamic credentials for AI
HashiCorp Vault provides centralized secrets management and automates the generation, revocation, and monitoring of dynamic credentials. Dynamic credentials (or dynamic secrets) are an important component in solving the challenges listed above. With Vault, each AI application can have unique, traceable identities tied to individual users or sessions (see the sketch after this list). This approach provides:
- Just-enough access for specific tasks
- Just-in-time credentials that expire quickly
- Complete traceability through audit logs
- Automatic rotation without operational overhead
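As a rough illustration of just-in-time access, the following hypothetical Python snippet (again using hvac) requests a fresh credential for a single task and revokes its lease as soon as the work is done. Names carry over from the role sketched earlier:

```python
# Minimal sketch: request a just-in-time Postgres credential for one
# task, then revoke it immediately when the work finishes. Names carry
# over from the hypothetical role sketched earlier.
import hvac

client = hvac.Client()  # VAULT_ADDR / VAULT_TOKEN from the environment

resp = client.secrets.database.generate_credentials(
    name="ai-agent-readonly", mount_point="database"
)
username = resp["data"]["username"]  # unique per request, so actions are traceable
password = resp["data"]["password"]
lease_id = resp["lease_id"]          # appears in Vault's audit log

try:
    pass  # ... run this session's queries with the short-lived credential ...
finally:
    # Don't wait for the TTL: revoke the lease as soon as the task ends.
    client.sys.revoke_lease(lease_id=lease_id)
```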
» Proof of concept: Natural language database queries
We built a proof-of-concept LangChain application to show how Vault can be used in LLM AI workflows. The application lets authenticated employees query PostgreSQL databases using natural language instead of SQL. The implementation demonstrates secure AI identity patterns (sketched in code after this list):
- Zero hard-coded secrets: Database credentials are retrieved from Vault at runtime
- Session-specific access: Each chat session gets unique credentials that expire in 5 minutes
- Platform-native authentication: The application uses Kubernetes Service Account JWTs to authenticate to Vault
- Complete audit trail: Every credential request and renewal is logged with session correlation IDs
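Here is a hedged sketch of what that flow can look like in Python: the pod’s Kubernetes Service Account JWT is exchanged for a Vault token, a per-session database credential is requested, and the lease is logged against a session correlation ID. The Vault address, role name, and log fields are assumptions, not the proof of concept’s actual code:

```python
# Hedged sketch of the authentication flow described above: exchange
# the pod's Kubernetes Service Account JWT for a Vault token, request
# a per-session database credential, and log it against a correlation
# ID. The Vault address, role name, and log fields are assumptions.
import logging
import uuid

import hvac

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("nl-sql-app")

# Standard service account token path inside a Kubernetes pod
with open("/var/run/secrets/kubernetes.io/serviceaccount/token") as f:
    sa_jwt = f.read()

client = hvac.Client(url="http://vault.vault.svc:8200")  # assumed in-cluster address
client.auth.kubernetes.login(role="langchain-app", jwt=sa_jwt)

session_id = str(uuid.uuid4())  # correlates every log line for this chat session
creds = client.secrets.database.generate_credentials(name="ai-agent-readonly")
log.info(
    "session=%s lease=%s db_user=%s ttl=%ss",
    session_id, creds["lease_id"], creds["data"]["username"], creds["lease_duration"],
)
```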
» Getting started
The proof-of-concept application is available on GitHub. While not production-ready, it demonstrates key patterns (a short consumption sketch follows the list) for:
- Platform-native authentication using the Vault Agent Injector for Kubernetes
- Dynamic secret acquisition and renewal using Vault’s database secrets engine
- End-to-end traceability for AI system actions
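With the Vault Agent Injector pattern, the application never calls Vault directly; an injected sidecar authenticates and renders credentials to a shared file. A minimal sketch of the consuming side, assuming pod annotations (not shown) template a ready-to-use Postgres URI to /vault/secrets/db-creds:

```python
# Sketch: consume credentials rendered by the Vault Agent Injector
# sidecar. The file path and its format are controlled by pod
# annotations (not shown); this assumes a ready-to-use Postgres URI
# is templated to /vault/secrets/db-creds.
from pathlib import Path

from langchain_community.utilities import SQLDatabase

def load_database() -> SQLDatabase:
    # Re-read per session so credentials rotated by the agent take effect.
    db_uri = Path("/vault/secrets/db-creds").read_text().strip()
    return SQLDatabase.from_uri(db_uri)
```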
When every AI agent has its own identity, developers can move fast without managing keys, security teams maintain visibility and control, and auditors get the detailed logs they need for compliance.
» Example using AKS with a tutorial
If you use Azure Kubernetes Service (AKS), you can also deploy this example from GitHub by following the tutorial in the Microsoft blog post Automating Secure and Scalable AI Deployments on Azure with HashiCorp.