Platform teams managing Kubernetes often discover a massive security gap: they struggle to reliably and efficiently manage security controls for secrets as their environments scale, without slowing down development. Even a complete enterprise-ready Kubernetes platform like Red Hat OpenShift, which has made significant strides in closing these gaps, still has the bare bones of Kubernetes underneath, so similar issues must be tackled there too.
Kubernetes offers native Kubernetes Secrets, but they are not designed to meet the governance needs of an enterprise.
As environments grow across clusters and clouds, the question shifts from "how do I get a secret into my pod?" to "how do I manage the entire lifecycle of that secret, from generating, to injecting, to rotating, and revoking, without slowing down development?"
Managing sensitive data and identity-based access across hybrid clouds has become a top priority for platform teams, so a robust, scalable, and secure method for delivering secrets to production workloads is table stakes. Organizations must go beyond native Kubernetes Secrets, or at least enhance them, especially since most secrets are also used outside Kubernetes. There is a clear need for a centralized, platform-agnostic secret management solution.
With Vault being the widely adopted enterprise standard for centralized secrets management, even for Kubernetes and OpenShift, teams need a pattern that standardizes delivery and lifecycle automation in these environments.
However, multiple Vault integration patterns currently exist with distinct operational and security tradeoffs, and knowing which to use, or even which is the latest and greatest, can be overwhelming.
So, to answer "What’s the best way to get secrets from Vault into my Kubernetes or OpenShift pods out of all the options?", we will demystify the different methods of integrating Vault with Kubernetes or OpenShift for automated secret lifecycle management. We will go over each method’s tradeoffs and show why the Vault Secrets Operator (VSO) is now the recommended standard for modern delivery in most organizations and use cases, while not changing how you already interact with secrets in your pods.
We will cover:
Vault Secrets Operator (VSO)
VSO protected secrets (VSO with a built-in CSI companion driver)
Secrets Store CSI driver (SSCSI)
Vault sidecar agent injector
Third-party secrets operators
Historically, some teams defaulted to the Vault agent sidecar injector because it was the first robust solution available. But as the partnership between HashiCorp and Red Hat has deepened through IBM, we’ve introduced a modern Kubernetes-native approach: the Vault Secrets Operator (VSO).
To effectively use secrets in Kubernetes or OpenShift with HashiCorp Vault, the recommended approach is:
Standardize on Vault Secrets Operator (VSO) as the default and most modern integration pattern.
Use VSO protected secrets (VSO with a CSI companion driver) for high-regulation environments where policy requires that no secrets be stored in the Kubernetes cluster state within etcd.
Other patterns exist, as covered below, but they are legacy, one-way sync, or trade away lifecycle automation, rollout orchestration, and operational simplicity.
»1. The gold standard: Vault Secrets Operator (VSO) or VSO protected secrets
The Vault Secrets Operator (VSO) is an OpenShift-certified operator that’s available in the OpenShift OperatorHub, and our most advanced Kubernetes-native integration, which is continuously improving.
Rather than requiring your applications to be "Vault-aware" by integrating them with Vault’s APIs, VSO uses custom resource definitions (CRDs) to synchronize secrets stored in or managed by Vault into native Kubernetes Secrets, which Kubernetes then delivers to your workloads as usual.
This means if you’re already using Kubernetes Secrets, VSO won’t change the way your pod accesses them. Instead, it augments them with much better secret lifecycle management by ensuring they’re dynamically pulled from Vault and injected, without manual overhead.
```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: app-db-creds
  namespace: my-app
spec:
  vaultAuthRef: vault-auth
  mount: kv
  type: kv-v2
  path: apps/my-app/database
  refreshAfter: 30s
  destination:
    create: true
    name: app-db-creds
```
This CRD tells VSO what Vault path to read and which Kubernetes secret to keep reconciled.
destination.name is the Kubernetes secret your app mounts or consumes. refreshAfter is a safety net: VSO can also react to Vault-side change notifications, depending on configuration.
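The vaultAuthRef above points to a separate VaultAuth resource that tells VSO how to authenticate to Vault. A minimal sketch using the Kubernetes auth method might look like this (the auth mount path, role, and service account names are assumptions for illustration):

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: vault-auth          # referenced by vaultAuthRef above
  namespace: my-app
spec:
  method: kubernetes        # authenticate with the pod's service account token
  mount: kubernetes         # assumed Vault auth mount path
  kubernetes:
    role: my-app-role       # assumed Vault role bound to this service account
    serviceAccount: my-app-sa
```

VSO uses this identity for every secret CRD in the namespace that references it, so access can be scoped with least privilege per team or application.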
Unlike other methods, it doesn't just move the data one way: it automates the entire credential lifecycle. It handles generation, rotation, and revocation for both static and dynamic secrets, enables auditing throughout, and performs post-rotation actions such as triggering a rolling restart so an application picks up an updated secret. It even supports basic PKI workflows, though for richer certificate lifecycle management we recommend cert-manager, the dedicated certificate-focused operator that specializes in these use cases.
»Two modes of operation:
1) VSO (native Kubernetes secrets): Caches and syncs Vault secrets into native Kubernetes secret objects that are stored in etcd and can be consumed as environment variables or volume mounts. This is suitable for most workloads where developer ease-of-use is the priority.

VSO
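Because VSO writes a plain Kubernetes secret, workloads consume it exactly as they already do. As a sketch, a pod could reference the synced app-db-creds secret as environment variables (the image and key names are assumptions for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:latest  # placeholder image
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: app-db-creds  # secret kept in sync by VSO
                  key: username       # assumed key in the Vault KV entry
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-db-creds
                  key: password
```

Nothing in the Deployment knows about Vault; VSO keeps the referenced secret current behind the scenes.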
2) VSO protected secrets (VSO + CSI companion): Exclusive to Vault Enterprise editions and ideal for highly regulated environments where secrets must not be stored in cluster state. VSO protected secrets uses a CSI companion driver to mount and deliver secrets as ephemeral, in-memory volumes (tmpfs) that exist only for the pod’s lifetime. Secrets are never stored in etcd, since the driver does not cache them as Kubernetes secrets. Note that, at the time of this writing, not all secret types that VSO supports are available with VSO protected secrets, but this is rapidly evolving.

VSO protected secrets
To learn more about both models and the differences, please see this blog.
»Why VSO is the recommended path:
Native Kubernetes integration: Define VaultStaticSecret or VaultDynamicSecret CRDs to map Vault secrets to existing Kubernetes secrets.
Instant updates: VSO can subscribe to Vault events for change notifications and update Kubernetes secrets faster than polling, with refresh as a backup.
Automated drift remediation: If a Kubernetes secret is modified, VSO reverts it to the Vault-sourced system of record, so secrets don’t quietly diverge from the source.
Automated rotation rollouts: VSO can monitor TTLs and trigger rolling restarts of deployments when secrets change to ensure your apps pick up the new credentials with zero downtime and no manual intervention.
Efficiency: VSO runs a single instance per cluster (one per node for CSI), significantly reducing the resource overhead compared to running an agent in every pod.
Can bypass persistent storage: For high-value or regulated data, VSO offers a CSI companion driver to bypass etcd storage entirely, instead fetching secrets on demand and mounting them as ephemeral, in-memory volumes that disappear the moment a pod terminates.
Secure: Centralizes Vault as the secure source of truth with least-privilege identity and policy-based access controls.
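As a sketch of the rotation rollout behavior, a VaultDynamicSecret can name the workloads to restart when credentials rotate (the secrets engine mount, role path, and deployment name are assumptions for illustration):

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: app-db-dynamic
  namespace: my-app
spec:
  vaultAuthRef: vault-auth
  mount: database            # assumed database secrets engine mount
  path: creds/my-app-role    # assumed role issuing short-lived DB credentials
  destination:
    create: true
    name: app-db-dynamic
  rolloutRestartTargets:     # roll these workloads when the lease renews
    - kind: Deployment
      name: my-app
```

When Vault issues new credentials, VSO updates the destination secret and performs a rolling restart of the listed Deployment so pods pick them up without manual intervention.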
»2. Secrets Store CSI (SSCSI) driver
Like VSO protected secrets, SSCSI allows you to mount secrets as a volume and therefore prevents them from being stored in etcd, where they may sit in plaintext. SSCSI is a vendor-neutral driver built by the community and commonly supported; HashiCorp also maintains a provider. Though it can retrieve secrets from multiple providers, using it with Vault forgoes the advanced features you would get with VSO or VSO protected secrets, such as automatic drift remediation and integration with the full suite of secret engines. This makes sense, since VSO and VSO protected secrets were written by HashiCorp and optimized specifically for Vault, with more capabilities on the way.
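For reference, a minimal SSCSI configuration for Vault is expressed as a SecretProviderClass that pods then mount as a CSI volume (the Vault address, role, and paths below are assumptions for illustration):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-db-creds
  namespace: my-app
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.com:8200"  # assumed Vault endpoint
    roleName: "my-app-role"                         # assumed Kubernetes auth role in Vault
    objects: |
      - objectName: "db-password"
        secretPath: "kv/data/apps/my-app/database"  # kv-v2 paths include /data/
        secretKey: "password"
```

Note this is a one-way, mount-time delivery: the file appears in the pod’s volume, but there is no drift remediation or rollout orchestration around it.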
»3. Vault agent sidecar injector
The Vault agent injector is a legacy method for pulling secrets directly from Vault and delivering them into application pods using a mutating admission webhook and typically following a sidecar model. It supports two modes: init mode and sidecar mode. In init mode, the Vault agent runs once at pod startup to render secrets, which means secrets are only delivered at startup and are not refreshed during the pod’s lifecycle, so any secret updates require a pod restart. In sidecar mode, the injector appends a Vault agent container to each pod at runtime. This sidecar continuously authenticates, retrieves, and renders secrets into a shared memory volume, enabling a secret refresh without pod restarts, like VSO.
While both modes are secure and support advanced secret templating, they, and especially the sidecar mode, are resource-intensive. Running a dedicated Vault agent sidecar for every pod can introduce significant resource overhead and bloat, which can become unsustainable when scaling to thousands of microservices.
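For comparison, injection is driven entirely by pod annotations that the mutating webhook acts on. A sketch (the role and secret path are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app-role"  # assumed Vault Kubernetes auth role
    vault.hashicorp.com/agent-inject-secret-db-creds: "kv/data/apps/my-app/database"
    # Renders the secret to /vault/secrets/db-creds inside the pod.
    # Adding vault.hashicorp.com/agent-pre-populate-only: "true" switches to init-only mode.
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:latest  # placeholder image
```

Every pod carrying these annotations gets its own agent container, which is where the per-pod resource overhead comes from.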
»4. Third-party secrets operators
As the Kubernetes community is very active, there are no doubt valuable and loved community-driven and third-party tools out there. So a common temptation we see is using an external open source secrets operator with Vault, but this tends to fall short of the true enterprise-ready lifecycle management you unlock with Vault if you use something built especially with Vault in mind like VSO.
Out of the box, third-party operators often lack lifecycle management features. You may lose automatic rotation and dynamic secrets, or the ability to automate rollout restarts on change, features most users have come to expect. You may also lose the guarantee that secrets stay updated with every change, keeping Vault as the source of truth.
You can try to patch these features together by combining multiple tools, but then you need to be sure to implement this setup consistently across your environments while trying to build guardrails to ensure enforcement. This results in heavy operational sprawl, which is counterproductive and may be time consuming, since the goal should be standardization with a consistently low-overhead, production-ready pattern.
If you’ve chosen Vault as your enterprise secrets platform, VSO is the only Kubernetes-native integration purpose-built for Vault, giving you the security convenience and confidence you need without slowing you down. It keeps Vault as the authoritative system while delivering secrets in the native format developers need, with no need to stitch solutions together, because built-in lifecycle automation is VSO’s default. Lastly, as it is supported by HashiCorp, you can rest assured it will stay maintained, patched, and production-ready, with support from HashiCorp available when needed, unlike some community projects that risk losing contributors.
OpenShift teams should also know that Red Hat now offers a supported External Secrets Operator based on the upstream external-secrets project, but with significant security and stability improvements. That gives OpenShift users a supported, provider-neutral path for synchronizing external secrets into native Kubernetes secrets. For organizations standardizing on Vault, however, VSO of course remains the more purpose-built option as described above, especially to support all Vault secret engines with capabilities such as drift remediation, rollout orchestration, and more.
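For context, external-secrets-style operators express the sync through their own CRDs. A sketch of an ExternalSecret pulling from Vault (the store name and paths are assumptions for illustration):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db-creds
  namespace: my-app
spec:
  refreshInterval: 1h        # polling only; no Vault event subscription
  secretStoreRef:
    name: vault-backend      # assumed SecretStore configured for Vault
    kind: SecretStore
  target:
    name: app-db-creds       # Kubernetes secret to create
  data:
    - secretKey: password
      remoteRef:
        key: apps/my-app/database  # assumed path within the Vault KV store
        property: password
```

This covers one-way sync of static secrets well, but dynamic secrets, drift remediation, and rollout orchestration are where a Vault-purpose-built operator like VSO pulls ahead.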
| | VSO | VSO protected secrets | SSCSI driver | Agent injector | Third party |
|---|---|---|---|---|---|
| Automated rotation | Yes | At pod start | At pod start | Automatic with sidecar, otherwise pod start | No |
| Ephemeral delivery | No | Yes | Yes | Yes | No |
| Storage in etcd | Yes | No | No | No | Yes |
| Dynamic secrets | Yes | Yes | Yes | Yes | No |
| Drift remediation | Yes | Yes | No | No | No |
| Secret templating | Yes | Yes | No | Yes | No |
| Secret caching | Yes | No | No | Yes | No |
| Multi-tenancy support | Yes | Yes | Partial | Partial | No |
| Cluster resource impact | Low | Medium | Low | High | Varies |
| Maintainer | HashiCorp | HashiCorp | HashiCorp and community | HashiCorp | Community |
»Why Vault Enterprise is essential for success at scale
Many organizations begin with the community version of Vault because it’s quick to set up and prove value, but as environments grow across multiple teams, clusters, and even lines of business, the need for Vault Enterprise becomes clear. To maintain business continuity, optimal uptime, and consistent governance, Vault Enterprise features are essential:
Multi-tenancy via namespaces: Only Vault Enterprise offers namespaces, allowing you to create isolated "vaults within Vault." This lets different lines of business manage their own policy, secret, and identity domains, while maintaining a unified governance layer.
Advanced governance: Features like Sentinel (policy as code) ensure that security requirements like "no secrets in Git" or "rotation every 24 hours" are enforced automatically without breaking developer workflows.
High-availability: Optimize uptime and recovery expectations with high-availability, disaster recovery, and performance replication.
IBM Z and LinuxONE support: Only Vault Enterprise is validated for OpenShift on mainframe, ensuring consistent security across all platforms from mainframe to edge.
This is where Vault Enterprise capabilities become central to long-term success and where VSO becomes the standard Kubernetes-based delivery mechanism for a modern and secure operating model.
»Start right with VSO
The goal of modern secret management is to decouple security from the application lifecycle. By moving away from sidecars and toward operators, developers can focus on code while Vault handles the rest.
That is why for most organizations, VSO is the recommended delivery standard for both Kubernetes and OpenShift with the strongest balance of performance, security, automation, and developer experience.
It provides a Kubernetes-native interface while preserving Vault as the system of record, reducing complexity. This improves security posture without requiring application owners to change how they work: developers focus on code while Vault handles the "heavy lifting" of machine identity and credential rotation.
For the most efficient, scalable, and secure path forward, use VSO for most workloads and choose VSO protected secrets for the same premium experience when you must avoid etcd and use ephemeral volume mounts.
Note: These patterns and examples are equally applicable to both OpenShift and any standard Kubernetes distribution. For simplicity, this article uses Kubernetes as shorthand.
Get started with HashiCorp Vault today.