HashiCorp Vault 0.6.2

Oct 06 2016 Jeff Mitchell

We are proud to announce the release of Vault 0.6.2. Vault is a tool for managing secrets. From API keys and encrypting sensitive data to being a complete internal CA, Vault is meant to be a solution for all secret management needs.

This blog post covers two releases, 0.6.1 and 0.6.2, which together comprise a major feature release plus a large number of additional improvements and bug fixes.


Please see the full Vault 0.6.2 CHANGELOG for more details. Additionally, please be sure to read the upgrade information at the end of this post.

As always, a big thanks to our community for their ideas, bug reports, and pull requests.

Read on to learn more about the major new features in Vault 0.6.1/0.6.2.

AppRole Authentication Backend

The AppRole authentication backend is the successor to (and deprecates) App ID. It provides for the same workflows as App ID while also adding a significant amount of extra functionality and greatly enhanced security.

The App ID backend was intended to provide a way to allow authentication to Vault using two independent identification factors: an app ID and a user ID. An out-of-band trusted process would create these values, push them into Vault, and then push them independently into a client. The exact mechanism depended on the specific setup. As one example, one factor could be baked into a specific AMI; the other factor could be delivered to a specific host via Chef or Puppet. At each stage along the way, one half of the identifying factors might be visible to the transport mechanism, but both together would only be visible to the generating service and the client.

There were a few problems with this approach. One is the need to build a trusted third-party service for coordination; ideally, Vault itself should fulfill the coordination role of mapping one identifier to the other. Another is keeping both factors fully private from all eyes except those of the trusted service and the client; this is often quite challenging.

The AppRole backend was built from the ground up to take advantage of Vault's current feature set. It supports listing, expiration of entries, and issuing periodic tokens. Most importantly, while it retains the ability to use the App ID workflow (known as the "Push" model in AppRole) by manually specifying a role ID and secret ID, it also introduces a "Pull" model.

In the Pull model, rather than a third-party service creating credentials and pushing them into Vault, Vault itself is the source of truth for credentials. Each role has a role ID, with read access to that role ID controlled by ACLs. Each role also has its own set of secret IDs. An authorized user can make a call against the role to generate and return a secret ID, which, when combined with the role ID, authenticates the caller to Vault.
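To make the Pull model concrete, here is a small in-memory sketch of the role-ID/secret-ID mapping. The class, method names, and storage layout are ours for illustration only; Vault's actual implementation is a Go HTTP backend, not this code.

```python
# Hypothetical in-memory sketch of the AppRole "Pull" model.
import secrets

class AppRoleSketch:
    def __init__(self):
        self.roles = {}  # role name -> {"role_id": ..., "secret_ids": set()}

    def create_role(self, name):
        # Vault generates the role ID; reading it is gated by ACLs.
        self.roles[name] = {"role_id": secrets.token_hex(16), "secret_ids": set()}

    def generate_secret_id(self, name):
        # An authorized caller asks Vault to mint a new secret ID for the role.
        sid = secrets.token_hex(16)
        self.roles[name]["secret_ids"].add(sid)
        return sid

    def login(self, role_id, secret_id):
        # Both halves must match for authentication to succeed.
        for role in self.roles.values():
            if role["role_id"] == role_id and secret_id in role["secret_ids"]:
                return True
        return False

backend = AppRoleSketch()
backend.create_role("web")
role_id = backend.roles["web"]["role_id"]
secret_id = backend.generate_secret_id("web")
assert backend.login(role_id, secret_id)
assert not backend.login(role_id, "wrong-secret")
```

The key property the sketch shows is that neither half alone is sufficient: the role ID can be distributed one way and each secret ID minted on demand through another channel.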

Importantly, the calls to fetch both the role ID and the secret ID can utilize response wrapping to protect them in transit and to guarantee that the generated secret was only ever seen by the intended client; even the caller of the generation endpoint would be unable to see it.

Additionally, the mapping is generated internally and its lifetime is controlled by Vault. Vault can clean up the secret IDs once they are past their validity period and they can be listed and revoked using accessors, similarly to token accessors.
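The accessor and expiration behavior can be sketched as follows. This is an illustrative stand-in, not Vault's code: the function names, the `tidy` helper, and the storage shape are all assumptions made for the example.

```python
# Illustrative sketch: each secret ID gets an opaque accessor, so operators
# can list and revoke secret IDs without ever seeing the secret values.
import secrets
import time

secret_ids = {}  # accessor -> {"secret_id": ..., "expires_at": ...}

def issue_secret_id(ttl_seconds):
    accessor = secrets.token_hex(8)
    secret_ids[accessor] = {
        "secret_id": secrets.token_hex(16),
        "expires_at": time.time() + ttl_seconds,
    }
    return accessor

def list_accessors():
    # Listing exposes only accessors, never the secret IDs themselves.
    return sorted(secret_ids)

def revoke_by_accessor(accessor):
    secret_ids.pop(accessor, None)

def tidy():
    # Clean up secret IDs that are past their validity period.
    now = time.time()
    for accessor in [a for a, v in secret_ids.items() if v["expires_at"] <= now]:
        del secret_ids[accessor]

a1 = issue_secret_id(ttl_seconds=3600)
a2 = issue_secret_id(ttl_seconds=-1)  # already expired
tidy()
assert a1 in list_accessors() and a2 not in list_accessors()
revoke_by_accessor(a1)
assert list_accessors() == []
```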

The end result is a more secure, more flexible, and more manageable machine-oriented authentication backend than App ID, and we think it is a very worthy successor.

Convergent Encryption in Transit

The Transit Secret Backend has gained support for convergent encryption.

Normally, using the encryption mode currently offered (AES-GCM), each encryption operation uses a unique nonce, and multiple encryption operations with the same plaintext will produce different ciphertexts. This is the generally-recommended mode of operation for AES-GCM, because if the same nonce is used for more than one encryption operation, the resulting ciphertext can be used to derive information about the key, and all previous and future ciphertext values are now at risk of disclosure.
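The danger of nonce reuse can be demonstrated with a toy example. The snippet below uses a simplified SHA-256-based keystream rather than AES-GCM itself (so the code stays self-contained), but the failure mode is the same one AES-GCM's CTR-mode core has: reusing a (key, nonce) pair reuses the keystream.

```python
# Demonstration of why nonce reuse is dangerous for stream-style ciphers.
# Toy keystream only -- NOT AES-GCM and NOT suitable for real encryption.
import hashlib

def keystream_encrypt(key, nonce, plaintext):
    # Keystream blocks are hash(key || nonce || counter), XORed with plaintext.
    out = bytearray()
    counter = 0
    while len(out) < len(plaintext):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, out))

key = b"k" * 32
nonce = b"n" * 12
p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
c1 = keystream_encrypt(key, nonce, p1)
c2 = keystream_encrypt(key, nonce, p2)

# With a reused nonce, XORing the two ciphertexts cancels the keystream and
# leaks the XOR of the two plaintexts -- no key required.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```

This is why unique nonces are the generally recommended mode of operation, and why convergent encryption has to derive its nonces very carefully rather than simply reusing them.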

However, there are times when it is desirable to have the same ciphertext output when the same plaintext is input: convergent encryption. A real-world example is being able to search encrypted data. Imagine that you have customer credit card transaction data that you want to encrypt, but you still want it to be searchable. In order to keep the number of entities that can access the unencrypted data as low as possible, a possible workflow looks something like this:

  1. A customer credit card transaction arrives containing the credit card number (in addition to other information, such as transaction ID)

  2. The ingress server has permission to encrypt the data with key K and encrypts the credit card number

  3. The ingress server places this information into a search service (e.g. Elasticsearch) which indexes using the encrypted credit card number value

You now have a database filled with credit card transactions containing encrypted credit card information. Now, suppose you want to find all transactions given a credit card number:

  1. An application server receives a request from a customer to find all transactions made with their card

  2. The application server has permission to both encrypt and decrypt with key K and finds the encrypted value of the customer's credit card number

  3. The application server kicks off a search using the encrypted credit card number

  4. The application server returns the results to the client, without the credit card number ever being durably stored in plaintext
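The two workflows above can be mocked end to end. Here an HMAC stands in for convergent transit encryption (all that matters for the example is that the same plaintext always produces the same token); the names and the index structure are illustrative, not part of any real service.

```python
# End-to-end sketch of the encrypted-search workflow, with an HMAC standing
# in for convergent transit encryption (same plaintext -> same token).
import hashlib
import hmac

KEY_K = b"k" * 32

def encrypt_convergent(plaintext):
    # Stand-in for convergent encryption: deterministic per plaintext.
    return hmac.new(KEY_K, plaintext, hashlib.sha256).hexdigest()

# Ingress server: index transactions by the encrypted card number.
index = {}
def ingest(txn_id, card_number):
    index.setdefault(encrypt_convergent(card_number), []).append(txn_id)

ingest("txn-1", b"4111111111111111")
ingest("txn-2", b"5500005555555559")
ingest("txn-3", b"4111111111111111")

# Application server: encrypt the customer's card number, then search on it.
def find_transactions(card_number):
    return index.get(encrypt_convergent(card_number), [])

assert find_transactions(b"4111111111111111") == ["txn-1", "txn-3"]
```

The plaintext card number exists only transiently on the ingress and application servers; the search service only ever sees the deterministic ciphertext.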

Convergent encryption in transit makes this workflow possible while retaining security, thanks to its use of the transit backend's key derivation mode and its derivation of AES-GCM nonces from both the context and the plaintext.
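The core idea can be sketched in a few lines: derive a per-context key, then derive the nonce deterministically from that key, the context, and the plaintext. This mirrors the concept only; the HMAC construction below is our illustration, not Vault's exact implementation.

```python
# Illustrative sketch of deterministic (convergent) nonce derivation.
import hashlib
import hmac

def derive_key(master_key, context):
    # Key derivation: a per-context key derived from the named key.
    return hmac.new(master_key, context, hashlib.sha256).digest()

def convergent_nonce(derived_key, context, plaintext):
    # GCM nonces are 96 bits; truncate the MAC to 12 bytes.
    mac = hmac.new(derived_key, context + plaintext, hashlib.sha256).digest()
    return mac[:12]

master = b"m" * 32
ctx = b"credit-card-index"
n1 = convergent_nonce(derive_key(master, ctx), ctx, b"4111111111111111")
n2 = convergent_nonce(derive_key(master, ctx), ctx, b"4111111111111111")
n3 = convergent_nonce(derive_key(master, ctx), ctx, b"5500005555555559")

assert n1 == n2  # same context + plaintext -> same nonce, same ciphertext
assert n1 != n3  # different plaintext -> different nonce
```

Because the nonce is a deterministic function of the plaintext, identical inputs encrypt identically, while a nonce is only ever "reused" for the exact same message, which yields the exact same ciphertext and leaks nothing new.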

Request Forwarding and Retrying

Since its inception, Vault has redirected clients making requests to standby nodes in a cluster to the active node. While conceptually simple, this had a few disadvantages, most notably when using load balancers in front.

In the best-case scenario, the load balancer may fail requests while it waits for a configuration update pointing at the new active node. With some load balancers, the minimum time for such an update is ten seconds, even if the new active node was ready nine seconds earlier and the other standby nodes were aware of the change in active status the entire time. To help combat this scenario, Vault now includes request retrying in the CLI and Go API. A request encountering a 5xx
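Generically, client-side retrying of 5xx responses looks something like the sketch below. This is an illustration of the pattern only, not Vault's actual CLI or Go API code; the function name and parameters are ours.

```python
# Generic sketch of client-side retry on 5xx responses with backoff.
import time

def request_with_retries(send, max_retries=3, backoff_seconds=0.0):
    # `send` performs one request and returns an HTTP status code.
    for attempt in range(max_retries + 1):
        status = send()
        if status < 500:
            return status  # success, or a non-retryable client error
        if attempt < max_retries:
            time.sleep(backoff_seconds * (2 ** attempt))  # exponential backoff
    return status

# Simulated server: fails twice while a new active node comes up, then recovers.
responses = iter([502, 502, 200])
assert request_with_retries(lambda: next(responses), backoff_seconds=0) == 200
```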
