We are pleased to announce the general availability of HashiCorp Vault 1.11. Vault provides secrets management, data encryption, and identity management for any application on any infrastructure.
Vault 1.11 focuses on improving Vault’s core workflows and making key features production-ready. In this release, Vault adds a new Kubernetes secrets engine to dynamically generate credentials, improves the KV (key-value) secrets engine’s usability, adds support for the PKI engine for non-disruptive rotation, enables bring your own key (BYOK) for Transit, and brings many other improvements.
Key features and improvements include:
New /issuers endpoints allow import, generation, and configuration of any number of keys or issuers within a PKI mount, giving operators the ability to rotate certificates in place without affecting existing client configurations. The PKI engine also now supports a CPS URL in custom policy identifiers when generating certificates.
New transit/random endpoints support a user-defined random byte source, such as an HSM.
A new custom_endpoint option allows the Google service endpoints used by the underlying client to be customized to support both public and private services.
Support for service_identities to be set on Consul token creation.
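As a sketch of the new multi-issuer PKI workflow (the mount name, issuer names, and common name below are illustrative; the paths follow the PKI engine's /issuers and root rotation APIs):

```shell
# Enable a PKI mount and generate an initial root issuer
vault secrets enable pki
vault write pki/root/generate/internal \
    common_name="example.com" issuer_name="root-2022"

# List all issuers in the mount; each gets its own ID
vault list pki/issuers

# Rotate in place: generate a new root alongside the old one, so
# existing client configurations keep working during the transition
vault write pki/root/rotate/internal \
    common_name="example.com" issuer_name="root-2023"
```

Because the old issuer remains in the mount, previously issued certificates can still be verified while new certificates are issued from the new root.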
This release also includes additional new features, workflow enhancements, general improvements, and bug fixes. The Vault 1.11 changelog and release notes list all the updates. Please visit the Vault HashiCorp Learn page for step-by-step tutorials demonstrating the new features.
We are happy to announce a new Kubernetes secrets engine for dynamically generating Kubernetes service account tokens, service accounts, role bindings, and roles. After the Kubernetes secrets engine has been configured and a user has authenticated to Vault with sufficient permissions, you can write to the endpoint and Vault will generate a new service account token.
$ vault write kubernetes/creds/my-role \
    kubernetes_namespace=dev-test

Key                          Value
---                          -----
lease_id                     kubernetes/creds/my-role/31d771a6-...
lease_duration               10m0s
lease_renewable              false
service_account_name         dev-test-service-account-with-generated-token
service_account_namespace    dev-test
service_account_token        eyJHbGci0iJSUzI1NiIsImtpZCI6ImlrUEE...
Kubernetes service accounts are normally generated manually and passed to a Kubernetes configuration file (.kubeconfig) or supplied on the command line to a CLI tool such as kubectl to interact with clusters. Used this way, Kubernetes service account credentials contain static secrets that are often long-lived, could be exposed, and so normally require periodic manual rotation.
To address this issue, Vault now supports generating short-lived dynamic service accounts and associating role bindings with specific Kubernetes namespaces. We also made improvements to the Vault Helm chart and the Vault Agent sidecar injector, and the Vault CSI provider is now generally available.
For more information, please see the Kubernetes secrets engine documentation.
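A minimal setup sketch for the new engine (mount name, namespace, service account, and TTL are illustrative; parameter names assume the engine's config and roles endpoints, so consult the documentation for your deployment):

```shell
# Enable the Kubernetes secrets engine
vault secrets enable kubernetes

# Point Vault at the cluster; when Vault runs inside the cluster,
# connection details can often be inferred automatically
vault write kubernetes/config \
    kubernetes_host="https://kubernetes.default.svc"

# Define a role that issues short-lived tokens for an existing
# service account in the dev-test namespace
vault write kubernetes/roles/my-role \
    allowed_kubernetes_namespaces="dev-test" \
    service_account_name="dev-test-service-account-with-generated-token" \
    token_default_ttl="10m"
```

With the role in place, credentials are generated on demand with the vault write kubernetes/creds/my-role command shown above, and Vault revokes them when the lease expires.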
Vault 1.7 introduced integrated storage autopilot, which enables automatic, operator-friendly management of the integrated storage servers (similar to HashiCorp Consul’s autopilot subsystem). Autopilot can monitor cluster node health, prevent disruption to the Raft quorum due to an unstable new node, and periodically check and automatically clean up failed servers.
With Vault 1.11, autopilot is now able to perform seamless automated upgrades and gets support for redundancy zones to improve cluster resiliency when running Vault Enterprise. This feature enables you to automatically upgrade a cluster by joining new Vault nodes, promoting and demoting voter and non-voter nodes, and then initiating a leadership transfer. Redundancy zones provide both scaling and resiliency options by deploying non-voting nodes alongside voting nodes on a per-availability-zone basis. Each redundancy zone will have exactly one voting node and as many additional non-voting nodes as desired. These non-voting nodes not only function as hot standbys, but also increase read scalability for highly demanding workloads.
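A sketch of how redundancy zones and autopilot surface operationally (the autopilot_redundancy_zone storage parameter is a Vault Enterprise feature, and the node IDs and zone names here are illustrative):

```shell
# In each node's raft storage stanza (HCL), assign a redundancy zone:
#
#   storage "raft" {
#     path                      = "/vault/data"
#     node_id                   = "node-a1"
#     autopilot_redundancy_zone = "zone-a"
#   }

# Inspect autopilot's view of the cluster, including voters,
# non-voters, and node health
vault operator raft autopilot state

# Review or tune autopilot settings such as dead-server cleanup
vault operator raft autopilot get-config
```

Autopilot promotes a non-voting node in a zone to voter if that zone's voter fails, which is what makes the non-voters function as hot standbys.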
The KV secrets engine is a generic key-value store used to store arbitrary secrets within the configured physical storage for Vault. The KV version 2 secrets engine in Vault 1.11 includes a new set of command options and updated documentation for easier retrieval of key-value secrets and metadata.
kv commands can now refer to the path of the KV secrets engine using a flag-based syntax such as vault kv get -mount=secret password instead of vault kv get secret/password. The -mount flag syntax was created to mitigate confusion caused by the fact that the full path of a KV version 2 secret actually contains a nested /data/ element (e.g. secret/data/password) that is easily overlooked when using the KV version 1-style syntax secret/password. To avoid this confusion, all KV-specific docs pages now use the -mount flag. The following KV version 2 enhancements were made:
The kv subcommands accept a -mount flag to specify the KV secrets engine mount path explicitly (e.g. vault kv get -mount=secret foo).
A new flag (-output-policy) for any Vault CLI command prints the minimum policy required to execute that command instead of running it.
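The two syntaxes side by side, plus the new policy flag (the secret mount and key names are illustrative):

```shell
# KV v2: these two reads are equivalent; the -mount form avoids the
# easily overlooked /data/ element in the underlying API path
vault kv get secret/password
vault kv get -mount=secret password

# Print the minimum policy required to run the command, rather than
# executing it (works with any Vault CLI command)
vault kv get -output-policy -mount=secret password
```

The -output-policy flag is handy for drafting least-privilege policies: run the command your application will need, then start from the policy it prints.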
Transform is a Vault Enterprise feature that lets Vault use data transformations and tokenization to protect secrets residing in untrusted or semi-trusted systems. This includes protecting compliance-regulated data such as social security numbers and credit card numbers. Oftentimes, data must reside within file systems or databases for performance but must be protected in case the system in which it resides is compromised. Transform is built for these kinds of use cases.
With this release, Transform now supports convergent tokenization and the ability to look up the value of a token given its plaintext.
By default, every tokenization encode operation produces a unique token, making the resulting token fully independent of the original plaintext. However, it can be useful for a given plaintext/expiration pair to consistently tokenize to the same value; this is convergent tokenization. For example, convergent tokenization enables statistical analysis of tokenized data, or tokenizing the same value in two different systems and comparing the results. When convergence is enabled at transformation-creation time, Vault alters the calculation so that encoding a given plaintext and expiration tokenizes to the same value every time, and storage keeps only a single entry for that token.
Some use cases also need to look up the value of a token given its plaintext. The token lookup operation is supported for some configurations of the tokenization transformation: when convergence is enabled, or when the mapping mode is exportable and the storage backend is external.
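A rough sketch of the convergent flow (an assumption-laden illustration: the transformation name, role name, and especially the lookup path are placeholders, so consult the Transform API documentation for the exact endpoints in your version):

```shell
# Create a tokenization transformation with convergence enabled
# (Enterprise; names are illustrative)
vault write transform/transformations/tokenization/ccn \
    allowed_roles="payments" convergent=true

# With convergence on, encoding the same value (and expiration)
# always yields the same token
vault write transform/encode/payments \
    transformation=ccn value="4111-1111-1111-1111"

# Look up the token for a given plaintext; path shown is illustrative
vault write transform/tokens/payments/lookup \
    value="4111-1111-1111-1111"
```

Note that convergence trades away some of the independence between token and plaintext, so enable it only when the use case genuinely requires deterministic tokens.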
Many new features in Vault 1.11 have been developed over the course of the 1.10.x releases. You can learn more about how to use these features in our detailed, hands-on HashiCorp Learn guides. Consult the changelog for full details, but here are a few of the larger changes:
The sys/license/signed endpoints have been removed in favor of autoloaded licenses. For migration details, please see the License Autoloading documentation.
As always, we recommend upgrading and testing new releases in an isolated environment. If you experience any issues, please report them on the Vault GitHub issue tracker or post to the Vault discussion forum. As a reminder, if you believe you have found a security issue in Vault, please responsibly disclose it by emailing firstname.lastname@example.org — do not use the public issue tracker. For more information, please consult our security policy and our PGP key.
We hope you enjoy HashiCorp Vault 1.11.