SE Hangout

Manage secrets, access, and encryption in the public cloud with Vault

Get a full introduction to HashiCorp Vault and a live demo of Vault on a public cloud.

Speaker

Lance Larsen introduces HashiCorp Vault, explains how it helps customers manage secrets, access, and encryption, and gives a demo in which he securely introduces applications to Vault in the public cloud. Lance wraps up by answering questions about Vault.

Transcript

I am one of HashiCorp’s systems engineers here in San Francisco, and I’m excited to talk to you all a little bit about Vault today. To start off, we’re just going to go into a deck and do a quick refresher on the fundamentals around Vault, what it’s good at solving, how we can leverage it, and common deployment patterns today. Then we’re going to go through a live demo and show a little bit about how we would actually implement these use cases in a common microservices pattern.

When we look at Vault, what we’re really trying to do is find a tool that not only is first-class in the way it talks to the public cloud, but also supports these hybrid use cases as organizations transition their workloads from on-prem to the public cloud. Most of these workloads will exist in parallel, and possibly in perpetuity, as these organizations move forward and deploy into the cloud.

The agenda today: We’ll talk a little bit about the problem that Vault solves—and I think it solves it very well—the use cases for Vault, and after that, we’ll talk a little bit about the architecture for our demo and do some live stuff with Vault that hopefully will cement some of these concepts.

I think when you look at the shift from static to dynamic infrastructure, traditionally, when we look at how we used to deploy applications, you had three kinds of people. You had developers, you had operators, and you had security folks. Traditionally, these groups had competing priorities and would have to work in the organization together to push software. And in the static and consolidated world, we had application platforms, some flavor of Linux OS or Windows that our apps would be deployed on, and they’d be inside a security and network perimeter, running on some kind of physical or virtualized core infrastructure, so maybe bare metal or vSphere. As we’ve moved into the public cloud, this world is no longer static and consolidated. We might be running workloads in multiple clouds at the same time in parallel with our on-premises workloads, and in some of these environments, they’re essentially zero-trust.

We might not own all the network pipes, they might be traversing more public areas, and at the end of the day, we really don’t have a hardened and locked-down perimeter where we can have things that are unencrypted in plaintext properties files and things like that floating around, where people can get access. We’ve seen very elaborate attacks mounted from the inside when things like rogue AWS keys fall into the wrong hands.

Really, what we’re trying to do with Vault is find one tool that can bridge all these environments together and allow operators, developers, and security folks to deploy at speed. Boiling that problem statement down to three things we need to do well, it’s really eliminating secret sprawl: finding a tool where we can get secrets out of all the flavors of configuration management, Excel spreadsheets, plaintext passwords, things like that.

Then, once we’ve done that and centralized it all inside Vault, we have a flexible tool that allows us to store any kind of secret. It might be a database credential, it might just be static text, it might be a dynamic secret that allows us to create credentials on the fly. When we have a tool that’s centralized and flexible enough to support all the systems we need to talk to, we can then provide governance over it as we’re spinning machines up and down, creating credentials, and trying to audit and control these workflows. What that alludes to is secrets, access, and encryption. Today, we’ll be primarily focusing on the secrets and encryption pieces, but I think, similar to our other tools, it’s about finding a problem space that’s well defined and solving it very well. We’re going to cover a few examples of what these use cases look like for Vault.

Starting with secrets management, we’re just trying to store, access, and deploy sensitive information within our organization. We do have a back end where you can just store static secrets, so these would be any kind of arbitrary string value. And I think Vault has done a really nice job of creating first-class ways to interface with the public cloud and these very ephemeral spin-up and spin-down environments. With dynamic secrets, essentially you create a root of trust with the target system, and Vault has roles for that system where it can grant least privilege and have an enforceable lease around the credential. It’s possible to generate a database credential for, let’s say, 30 minutes to an hour for a batch job. Once that batch job has ended, Vault, as the broker of that credential, will actually remove it and take the onus of doing that rotation and removal away from a human operator.

An example of this would be a Postgres database, and we’ll show this in our live demo. As you can see here, I’ve created a Postgres role, which I’ve called “production,” and once I’m authenticated (this is a CLI example, but it could be done with the API or the UI), I’m able to get that credential back, and it’s good for, I believe, an hour, based on that lease duration. It’s renewable, so if it’s a long-running app, it can continue to renew that credential. If it doesn’t renew it, the credential gets removed from rotation, and this is all handled centrally through Vault.
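
For reference, the CLI flow being described looks roughly like this, assuming a database secrets engine mounted at database/ with a role named "production"; the output values are illustrative placeholders, not the demo's actual output.

```shell
# Read a short-lived Postgres credential from the "production" role.
vault read database/creds/production

# Key                Value
# ---                -----
# lease_id           database/creds/production/<lease-id>
# lease_duration     1h
# lease_renewable    true
# password           <generated-password>
# username           v-token-production-<random>
```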

We support many dynamic secrets. We have an awesome community that helps us build these, and as these become more popular, we try to merge these into our mainline. Take Oracle, as an example: A lot of the database back ends are already part of our core open-source offering, and if you have systems that you’re looking to interface with that aren’t already available, we have an easy interface for you to build that kind of workflow into those systems.

When we look at encryption as a service, the question is how we give developers, operators, and infosec folks access to encryption services without giving them any access to the underlying keys. What’s nice about Vault is that the application can authenticate in whatever way it’s able to talk: maybe in Kubernetes with the Kubernetes back end, maybe in Nomad, where you get a token injected. Maybe you’re deploying on bare metal and you have an AppRole set up that works with some of your existing deployment tools. But essentially, the keys never leave Vault. The developer gets an easy API to talk to Vault, to encrypt plaintext and get back ciphertext, and then he or she can go and store that inside the database. Here’s an example from the UI, but we’re going to show this live. This is a big part of our demo.
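
As a rough sketch of that API, assuming a transit engine mounted at transit/ with a key named "order" (both names are illustrative, not necessarily what the demo uses):

```shell
# One-time setup: mount transit and create a named encryption key.
vault secrets enable transit
vault write -f transit/keys/order

# The app sends base64-encoded plaintext and gets ciphertext back;
# the key material itself never leaves Vault.
vault write transit/encrypt/order plaintext=$(base64 <<< "Andy")
# => ciphertext    vault:v1:<opaque-ciphertext>

vault write transit/decrypt/order ciphertext="vault:v1:<opaque-ciphertext>"
# => plaintext     QW5keQo=   (base64 of "Andy")
```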

It’s very easy to do this, and we have use cases where you can export these keys, but if you don’t want to or don’t need to, the encryption service can be consumed entirely as a REST API, and the keys never leave Vault. I did want to talk about privileged access management today; we won’t actually show it in our demo, but it’s important as well, especially when we’re creating dynamic instances on EC2 or Azure or GCP. We might need access to that machine for a short period, and then maybe it goes away. There are a few use cases here, but one I think Vault does particularly well is its innate ability to act as a cert authority.

We’ve seen a lot of people build signed-key infrastructures, where you store Vault’s cert authority on a target machine, Vault authenticates a user machine’s public key and returns the signed key, and the target system that’s under Vault’s management honors that certificate. In this case, I’m able to authenticate as John Smith, and I’ve put a command-line example here. Actually, the Vault SSH CLI will handle all this for you transparently, but it’s a very easy call that I can make to get my public key signed, and then the target will honor that certificate. This is all based on OpenSSH; Vault just does a lot of the heavy lifting for you and makes it very easy to use and administer a cert authority.
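
A hedged example of that call, assuming an SSH secrets engine mounted at ssh/ with a signing role named "johnsmith"; the role name and host are illustrative.

```shell
# Submit an existing public key and save the signed certificate next to it.
vault write -field=signed_key ssh/sign/johnsmith \
    public_key=@$HOME/.ssh/id_rsa.pub > ~/.ssh/id_rsa-cert.pub

# Or let the Vault SSH CLI handle signing and connecting in one step.
vault ssh -mode=ca -role=johnsmith johnsmith@10.0.1.20
```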

If you can’t introduce your machines or users to Vault in a secure way, it doesn’t matter how well you solve the use cases. We have a lot of flexibility in how we authenticate users and machines. Whether we’re trying to introduce applications on an orchestrator like Kubernetes, on bare metal with AppRole, or in the cloud with the signed metadata services from AWS or GCP, Vault is very flexible in helping us do this introduction across many different environments in parallel.

That wraps up my deck. At this point, I’d like to jump right into the demo. I’m not sure how familiar you are with Spring, but I used to do a decent amount of Java work, and I think the folks over at Spring have done an excellent job with the Spring Cloud Vault library, and that’s what we’re going to be showing today as our demo application. Walking through my project, it’s actually pretty simple in what it does. The main application is just an ORM-based Java app with Hibernate, we have an order class, and this is backed by Postgres; I’ll show you now what that looks like. We have a table where we store the order ID, the customer name (encrypted), the product name, and the order date.

This is a Spring Boot microservice, and we leverage the Spring Cloud Vault library to do a lot of the heavy lifting and communicating with Vault for us. Most of these classes are just for the Spring implementation. I have a basic order class to model out that table. I have an API controller that can return all the orders I have in my database, post new orders, and also delete all the orders. Most of these classes are just extending the core JPA stuff from the library. But what’s interesting (and this is going to happen dynamically in the environment; it’s just hard-coded for dev) is that we have some interesting stuff in our bootstrap file.

We’re doing a token authentication, and we’re going to run this app inside Nomad. When we schedule it, it’ll get a dynamic token, but what’s cool is that Spring is going to completely handle the secure introduction with Vault. It’s going to take this token, and because lifecycle management is enabled for both our database role and the actual token, it’s going to continue to renew these credentials as the app runs. We really just have one class here that communicates with Vault, but we get this Vault operations class as part of the Spring library, and in a very abstract way, we can do encrypt and decrypt operations.
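
A minimal sketch of what a Spring Cloud Vault bootstrap file along these lines might look like; the property names follow the Spring Cloud Vault documentation, but the application name, role, and address here are assumptions rather than the demo's actual values.

```shell
cat > src/main/resources/bootstrap.yml <<'EOF'
spring:
  application:
    name: order                # Spring Cloud Vault reads secret/order by default
  cloud:
    vault:
      uri: http://127.0.0.1:8200
      token: ${VAULT_TOKEN}    # injected dynamically by Nomad in the cloud environment
      config:
        lifecycle:
          enabled: true        # keep renewing the token and any leased credentials
      database:
        enabled: true
        role: production       # Vault database role that issues the Postgres credentials
EOF
```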

Basically, what this is saying is: every record that we pull out of the database, we want to decrypt for the consuming API client, and for every post to the API, we want to encrypt it into the database. This looks good; I know that this is the functional behavior that I want. So if we look at a few things on the Vault side, we can see that I have HAProxy up, and I’m actually running Vault in the East region. I’ve logged into Vault with the root token, so I can see all the secrets engines involved. This is where I could do things like get AWS credentials in a self-service way, as an example. I configured a role for an S3 bucket, and just to show you how this workflow works, I can go in here, grab a dynamic credential, and validate in real time that it was created by Vault, just so you can see this trusted broker model.
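
That self-service flow maps to a simple CLI call as well; the mount path aws/ and the role name here are assumptions.

```shell
# Ask Vault for a fresh IAM credential scoped to the S3-bucket role.
vault read aws/creds/s3-bucket
# => access_key, secret_key, and a lease_id that Vault tracks so the
#    credential can be revoked or expired later.
```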

This is a credential that I just created, and if I wanted to remove it, I could go into the lease, since I’ve logged in as root, and just revoke it. If I revoke this lease, I can go into the AWS console, as an example, and see that the credential is immediately taken out of rotation. If you have apps or humans that need access to buckets, you can broker all that access through Vault, and with Vault Enterprise, you get a self-service portal for secrets. For the actual application, I created this policy called “order,” which basically just gives us a few capabilities. This is going to be for our application.
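
Revoking that lease from the CLI looks roughly like this; the lease ID is a placeholder for the one returned when the credential was created.

```shell
# Revoke the lease; Vault deletes the IAM user it created in AWS.
vault lease revoke aws/creds/s3-bucket/<lease-id>
```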

It allows the app to go and get dynamic database credentials, to read on the secret application path, which is just a default for Spring Boot, and to do encrypt and decrypt operations. We have Consul running in a global environment: I have a Consul cluster in U.S. East, one in U.S. West, and one in EU West. We’ll just be focusing on U.S. East today. I can get a quick, holistic snapshot of what we’re running. I have Nomad, which we’re going to use for our secure introduction here; the client and server are in this environment. My Spring app is already running, and I have my Vault up as well. We’re also using fabio, a Consul-aware load-balancing router, to do some load balancing into the AWS environment.
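
A hedged reconstruction of what an "order" policy like that might contain; the exact paths depend on how the engines are mounted and named in the demo.

```shell
cat > order.hcl <<'EOF'
# Dynamic Postgres credentials for the application
path "database/creds/production" {
  capabilities = ["read"]
}

# Default static config path that Spring Cloud Vault reads
path "secret/application" {
  capabilities = ["read"]
}

# Encryption as a service, scoped to the app's transit key
path "transit/encrypt/order" {
  capabilities = ["update"]
}
path "transit/decrypt/order" {
  capabilities = ["update"]
}
EOF

vault policy write order order.hcl
```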

If I go into fabio, I can see that, through Consul, it was able to dynamically do some routing for us and help us get to this API. In Nomad, I can verify that these two jobs are running in the East as well. This looks good, and now I can start to walk through some of this code and play with our API. If I go to my console window, I’m already in the EC2 AWS environment where I’m running the Hashi stack, and I can look at all my jobs. Looking at all my jobs, I can see that fabio and Spring are running. If I want to drill into the Spring job, I can see that I have one allocation placed, and looking at this job structure is probably important here.

So just to back up: this is what we’re running inside the AWS environment. I’m targeting this job at the U.S. East region, I built that Java application I was showing you and pushed it to the Docker repository, and I’m using Nomad to dynamically template some of this information out.

As you saw earlier, when I was showing you this project inside of Eclipse, when it’s running in our cloud environment, we’re doing some different things here. For one, Nomad is actually going to generate a new Vault token for every job that’s placed, and we’re going to pull that in through an environment variable, so it’s dynamic. We’re using Consul to help us find some of the Vault services. This is important: when we deploy inside a cloud environment, we don’t have the same ability to hard-code where these services are, or to have long-lived pointers to them, because these environments are very ephemeral. So we use tools like Nomad that not only can do this secure introduction piece and dynamically inject these credentials, because Nomad is a Vault-aware scheduler, but also tools like Consul to help discover these services and create these routes.
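
A trimmed, hypothetical sketch of the kind of Nomad job being described; the image name, service tag, ports, and template contents are illustrative rather than the demo's actual values.

```shell
cat > spring.nomad <<'EOF'
job "spring" {
  region      = "us-east"
  datacenters = ["us-east-1"]

  group "api" {
    task "order-api" {
      driver = "docker"

      config {
        image = "example/spring-order-api:latest"
      }

      # Nomad requests a Vault token scoped to this policy for every placement
      # and exposes it to the task as the VAULT_TOKEN environment variable.
      vault {
        policies = ["order"]
        env      = true
      }

      # Template the Spring bootstrap config, discovering Vault through Consul,
      # and mount it into the container when the job runs.
      template {
        destination = "local/bootstrap.yml"
        data        = <<TPL
spring:
  cloud:
    vault:
      token: {{ env "VAULT_TOKEN" }}
      host: {{ with service "vault" }}{{ with index . 0 }}{{ .Address }}{{ end }}{{ end }}
      port: 8200
TPL
      }

      # Consul service registration with a tag that fabio uses for routing.
      service {
        name = "spring-order-api"
        port = "http"
        tags = ["urlprefix-/orders"]
      }

      resources {
        cpu    = 500
        memory = 512
        network {
          port "http" {
            static = 8080
          }
        }
      }
    }
  }
}
EOF

nomad job run spring.nomad
```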

This all looks good, and you can see the Vault policy I have here ties to the policy that I showed you in Vault. We have some tagging to work with fabio, and we have some basic resource allocations if we wanted to pack this onto a single machine. What we do here is template out this file and then mount it as a volume into the job when it’s running. Then, configuring these Vault roles is pretty easy to do. I created a database role, I created my GRANT and DROP SQL, and then gave some basic parameters around the role: what is the default time-to-live for that credential, and how long can it stay in rotation? This is all very configurable. Once these roles are set up, it’s very easy for me to read from these roles, whether I’m a human being or a machine.
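
A hedged reconstruction of that role setup; the connection URL, database name, and SQL here are illustrative of what was described rather than copied from the demo.

```shell
# Mount the database secrets engine and point it at the Postgres instance.
vault secrets enable database

vault write database/config/orders-postgres \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@postgres.service.consul:5432/orders" \
    allowed_roles="production" \
    username="vaultadmin" \
    password="<bootstrap-password>"

# The role holds the GRANT/DROP SQL plus the default and maximum TTLs.
vault write database/roles/production \
    db_name=orders-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    revocation_statements="DROP ROLE IF EXISTS \"{{name}}\";" \
    default_ttl="1h" \
    max_ttl="24h"
```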

This looks good. This is the behavior that I want, and we can start looking at some of the more interesting stuff inside this environment. The job is running, and now I want to verify that this dynamic introduction happened. If I go back and look at what my Spring Boot app does: when it initializes, the Spring library kicks in and starts to fetch some of these values. It retrieves a Vault token, and because I have the database roles already configured, it’s going to go out and get those credentials. Using Nomad, I can introspect this job. If I do nomad alloc status, I can look at the allocation that was placed. And again, think about Kubernetes or running in some other orchestrator: it’d be a similar workflow, but we’re showing this in Nomad today.

If I look at this, the task was started, and I can see what machine it started on and what port it’s running on. If I want to drill into the file system, we can confirm that this placement and the dynamic bootstrapping happened successfully. This confirms that I got a dynamic Vault token, and that I also got a user and a password. You can see here that this Vault token matches; these two match, so we were able to create a dynamic database user when Spring initialized. And this wasn’t anything we did ahead of time. This was fully managed by Vault.
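
The introspection being described uses the standard Nomad CLI; the allocation ID and task name below are placeholders.

```shell
nomad job status spring                     # find the allocation Nomad placed
nomad alloc status <alloc-id>               # node, ports, and task events
nomad alloc fs <alloc-id> order-api/local/  # rendered files, e.g. the bootstrap config
```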

We can go and look at the running container and confirm this as well. I can see that this Docker container is up for our Spring job, and we can drill into it.

This is that main Docker image that I built, and if we look at the bootstrap file, we can confirm that, again, these are not hard-coded values. Everything that we’re seeing—the database user that we got, the token that we got dynamically—was driven by this Nomad configuration for the job. And this is essentially how that secure intro is able to happen: Nomad has a relationship with Vault, and it’s able to give us a token that’s scoped with least privilege just to these policies and do that bootstrapping activity.

We can try this API now. I have a load balancer set up in front of that Spring job; basically, there’s a load balancer that sits out front and does health checks on fabio, running in a high-availability configuration. If we run a GET here, it should return all the orders that we have today. And it does.

This might look a little prettier in Postman, but what I want to call out here is that today I have these four orders, and I can post new orders to it, which we’ll do in a second. This maps one-to-one with the orders that we have inside our database. But what’s cool here is that, because we’re using Vault’s transit services—and you could combine this with some kind of transparent data encryption at the database level—if you’re already, say, a big shop that was using Java and had this model, it would be very easy to just create a Vault class and bootstrap your Spring configuration. In fact, the only things that bring Vault into this are that bootstrap YAML file, the dependencies on the classpath, and adding this converter class. Like I said earlier, anything that goes into the database gets encrypted, everything that comes out of the database gets decrypted, and at the API level, this happens transparently to the user. But we can just add an order to create a new one.

I’m gonna add another new order. Andy’s gonna join us. Let’s say I wanted to order Nomad, and then Andy wanted to order Vault with an HSM (hardware security module). I can post these two orders to our API. We get that immediate acknowledgment back, but none of this information is encrypted till it gets to the database level. I can confirm that these two new orders were placed. I see my order for Nomad. I have some other ones for Amanda, Andy, and myself, and then I can confirm that these orders are in the database.

But I can’t actually see any of this data. And again, we’re just trying to create a trivial use case here. In this case we’re saying, for the customers who are ordering this, we’d want to protect that data not only at rest, with something like transparent data encryption, but also in flight. And because this ciphertext is protected by Vault, someone would not only have to compromise your TDE layer to get to this; they would also have to compromise Vault for any of this data to be useful. We’ve done some blog posts around how you can combine these to have that relationship.

But this looks good. I’m getting the behavior I want. I’m able to use this Spring API to get the dynamic credentials, talk to the database, and also access the transit services to do all this encryption and decryption on the fly. But let’s say we had a use case where we wanted to come in as a user who has access to this value, and have users and machines proving their identity to Vault, and then acting on the same information. If I go back to my Vault UI, I can go into this transit back end, which we mounted, and I can decrypt any of these values without any access to the underlying keys. If I was curious about this Vault HSM, I could take this encrypted ciphertext, put it back in the UI and decrypt it, and get the name of that customer. Because both the human and the machine user have access to the same policy, even though they’re proving their identity to Vault in different ways, they can consume these same services.
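
The same decryption the UI performs can also be done from the CLI by any operator whose policy allows the decrypt path; the key name and ciphertext here are placeholders.

```shell
vault write -field=plaintext transit/decrypt/order \
    ciphertext="vault:v1:<opaque-ciphertext>" | base64 --decode
# => Andy
```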

I think that’s very powerful. When you talk about it from an organizational perspective, doing a tool rationalization across Vault, both for the actors, whether they be humans or machines, and the use cases—secrets, access, and encryption—where we can do this in one place. So, Amanda, I know we covered a lot of material there. I’d love to open this back up to the audience and dive in deeper anywhere in the demo where there are questions, or just to answer some questions in general around some of the functionality that we saw today.

Amanda: We haven’t gotten any questions yet, Lance, but for anyone that does have questions or wants Lance to go back through anything a little slower, please use the questions box to let us know. There’s one, Lance. Do you see that question?

Lance: The question is around the fact that, as an enterprise feature for Vault, we sell enhanced integrations with HSM: How are the master keys secured, and what additional synergies do we get with those two products?

For those folks who don’t know what an HSM is, it’s a hardware security module. For very compliance-driven orgs, standards such as FIPS (Federal Information Processing Standards) 140-2 are areas where customers need to adhere to compliance around the storage of that data. HSMs give you tamper-proofing, so the private keys never leave the secure appliance, and if people try to tamper with them, they self-delete. And Vault is a piece of software, right? It is not hardware. So, as a piece of software, because of how FIPS is written, the way we can adhere to FIPS is through these integrations: we’ve built integrations with common HSMs via the PKCS#11 (Public-Key Cryptography Standard) interface to do this.

There are two pieces. We have this concept of unsealing, and in open source your option is sharding that master key into N key shares that you give to operators, and they can come together in a key ceremony and do the unsealing. With the HSM integration, the first benefit is that you get auto-unsealing: Vault will just use that PIN-and-slot authentication via PKCS#11 to go get the master key and unseal itself.

There are certain system back ends that are designed to use the HSM in an additional way, and we call this functionality “seal wrapping.” What seal wrapping does is, essentially, round-trip through the HSM before that data is ever stored at rest in, for example, Consul. When you create a mount, you can say whether you want to seal-wrap it or not, and if it’s seal-wrapped, anytime anything is put into Consul, let’s say, it’ll round-trip through the HSM, and there’s an additional layer of encryption applied by that module before it’s at rest. We’ve had Leidos look at that integration, and they evaluated us around the critical security parameters for key transport and key storage and determined that we were FIPS-compliant in terms of how we were handling that cryptography internally.

If you’re looking to tell your regulators a better narrative around FIPS, but you want to keep the benefits of how Vault allows you to deploy at velocity and in a secure way, customers who have HSMs can do this PKCS#11 backing with Vault and get some of those synergies that I talked about. It’s certainly something that’s pretty easy to try: AWS has a CloudHSM service where you can get access to HSMs for $1.60 an hour, so the barrier to entry to try out this integration, if you’re an Enterprise customer, is relatively low.

So, thanks, that’s a good question.

One of the next questions we have is, “Are there any additional container orchestration integrations planned?” This one is specifically asking about Mesosphere, and the answer to that is no. Today we support Nomad with an authentication back end, and we also support Kubernetes with an authentication back end, but there are lots of flavors of orchestration tools out there. With the ease of building something like AppRole into the existing deployment solutions you have in place for orchestration, it’s generally not that much code to do your own bootstrap in a Mesos environment, as an example, versus us trying to go out and build integrations for all of those orchestrators. So today, no, there’s nothing on the immediate roadmap in open source around that.
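
For what that do-it-yourself bootstrap might look like with AppRole, here is a rough sketch; the role name, policy, and TTLs are illustrative.

```shell
# Enable AppRole and define a role tied to the application's policy.
vault auth enable approle
vault write auth/approle/role/order-api \
    token_policies="order" \
    token_ttl=30m \
    token_max_ttl=1h

# The deployment tooling fetches the two halves of the credential...
ROLE_ID=$(vault read -field=role_id auth/approle/role/order-api/role-id)
SECRET_ID=$(vault write -f -field=secret_id auth/approle/role/order-api/secret-id)

# ...delivers them to the workload, which exchanges them for a token.
vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"
```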

The next question is about Oracle credentials. I don’t have an example of generating an Oracle credential, but you can go to our documentation, and there are plenty of examples there of how you would generate an Oracle credential with that back end.

And the next question is, “How is open source Vault different from Vault Enterprise?” That’s a good question. Essentially, open source is very much about those key features of Vault that solve the developer use cases: pretty much everything we talked about for access (SSH one-time passwords, SSH certs), dynamic credentials, the static KV store, all the encryption services, and the secure plug-in interface is in open source. The Pro and Premium flavors of Vault are really about how we operationalize Vault across large teams, and how we make it work in an enterprise multi-data-center environment where governance and HSMs might be required for compliance. Pro gives you DR replication, so essentially a warm standby that can be promoted, with all the underlying secret state replicated, plus leases and tokens. You get the UI, you can manage the cluster and those replication relationships through the UI, and you get unified identity, which is a way to tie someone’s authentication through multiple back ends to a single identity, so you can manage that policy in one place instead of across all three back ends. You can also init and unseal in the UI.

We also have cloud auto-unseal, where you can use, today, GCP and AWS to auto-unseal Vault and get additional seal wrapping at rest from those environments. In Premium, we have the HSM integration I already talked about; performance replication for scaling, where clusters share the underlying secret state and can respond to client requests; mount filters for data sovereignty (how do you restrict certain replication data to adhere to things like GDPR?); and enhanced MFA (multi-factor authentication) that’s essentially managed across multiple providers. And then you also get our Sentinel policy-as-code engine on top of it. Again, open source is very much about the developer, Enterprise Pro is about those large teams, and Enterprise Premium is really about large multi-data-center governance and compliance use cases around Vault. That was a great question, thank you.

The next question: “How can we use the K/V back end and Consul as storage for the secrets we intend to protect through Vault? Is it possible to use both the K/V and Consul together?” Vault, when it’s running, is designed to abstract the storage component of the solution away from the user. And when we talk about things like replication, Consul has no idea that it’s in a replicated environment; all the core replication logic is built into the core Vault code, and it uses Merkle tree indexing and write-ahead logs to stream those replication events and confirm them on the secondary. Vault doesn’t expect its underlying data to be modified out from under it, and that can actually create problems, so you shouldn’t manipulate Consul directly in any way when you’re using Vault.

The next question is, “How do applications talk to Vault without human interaction?” That’s what we showed when we were looking at that config file in the Nomad job: that’s something we were declarative about; we declared what policies the application should get. And when that job runs, Nomad has a periodic Vault token that it uses to create tokens on behalf of the applications that we’re onboarding, so there was absolutely zero human interaction in that flow. And that would be similar in Kubernetes or if you built your own integration with AppRole. For all those secure introduction use cases, there’s no interaction that any humans have with them; the applications and deployments are completely autonomous.

The next question is, “How can you back up a Vault cluster and restore on a different cluster for DR?” We as a company provide enterprise users with out-of-the-box DR, and it’s built very elegantly with Merkle trees and write-ahead logging. It does not involve the manipulation of Consul at all. It may appear to work, but having tools like Consul Replicator manipulate Consul directly is actually dangerous and can result in data loss and many other issues that you wouldn’t want in a production environment. So the proper way to do DR would be to consume it as an enterprise customer; if you tried to roll it yourself in open source, you’d essentially wind up with a fork of our open source code that acts in the same way.

I do advise people that have DR to also use Consul snapshotting as a backup, because that replication is immediate: if someone does something destructive, it’ll be immediately replicated to all those DR secondaries. But with snapshotting alone, the interval at which you’d have to take snapshots to protect production workloads isn’t really feasible. The correct way to do DR is to consume it as an enterprise customer; otherwise you’d wind up with a Vault fork of that application code.
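
Consul's built-in snapshot command is the supported way to take that kind of backup; the filename is arbitrary.

```shell
consul snapshot save vault-backup.snap      # point-in-time backup of Consul's state
consul snapshot restore vault-backup.snap   # restore into a recovery cluster
```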

The next question is, “When would I use the cubbyhole back end and the transit back end?” Anytime you have the need to encrypt data at rest and in flight, the transit back end is a good choice. The cubbyhole back end is much easier to consume now through the response-wrapping API: anytime you want to get a wrapped token that an application will unwrap, it basically protects the value until the final authenticating client unwraps it. If anyone intercepts it along the way, the application gets a 400 error. A lot of people use tools like Elastic and Splunk to report on those events, so it’s a way of passing a token through systems that may not be totally secure in a wrapped fashion. Again, you can alert on that token if it’s intercepted in the process and potentially revoke it from Vault in an autonomous fashion.
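
A small sketch of response wrapping from the CLI; the path and TTL are illustrative.

```shell
# Read a secret, but ask Vault to wrap the response in a single-use token.
vault read -wrap-ttl=120s secret/application
# => returns a wrapping_token instead of the data

# Only the final client unwraps it; a second unwrap (or an interceptor's
# attempt) fails, which is the event you can alert on and revoke.
vault unwrap <wrapping-token>
```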

“What is the best way to secure the PIN on the Vault box to interact with the HSM?” I would say that the correct way to do that is through environment variables, and Vault will obfuscate the PIN if it’s supplied via environment variable. If you have to use configuration files with the PIN and slots hard-coded, you should be very careful about which operators can get to the file where that information is stored.
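
Concretely, Vault Enterprise can read the PKCS#11 PIN from an environment variable, which keeps it out of the on-disk config; the library path, slot, and key label below are illustrative.

```shell
# Supplied by your secrets-injection tooling rather than written to disk.
export VAULT_HSM_PIN="<pin>"

# Fragment of the Vault server configuration; the pin is intentionally
# omitted here and picked up from VAULT_HSM_PIN instead.
cat > vault-seal.hcl <<'EOF'
seal "pkcs11" {
  lib       = "/usr/lib/softhsm/libsofthsm2.so"
  slot      = "0"
  key_label = "vault-hsm-key"
}
EOF
```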

The next question is, “I was intrigued to see the SSH key signing. It’s different from the SSH back end I saw last time I looked at Vault. What’s different with the new SSH back end?” I’m not sure exactly what that is referring to. It might be referring to the dynamic keys back end that we have, but due to some intrinsic issues with how that back end works, we don’t recommend that anyone use it. The new SSH CA back end works like this: Vault, as a cert authority in an OpenSSH setup, becomes that trusted user CA file, and then you get an API, CLI, or UI way to get the public key you already have on your machine signed. The signed key comes back, and if you look at the cert, it has a validity period and a principal. Then the target machine, based on OpenSSH and having Vault’s CA in that file, will authenticate you.
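
The server-side half of that flow, assuming the engine is mounted at ssh/ (adjust for your actual mount path), looks roughly like this:

```shell
# Have Vault generate a CA key pair for signing client keys.
vault secrets enable ssh
vault write ssh/config/ca generate_signing_key=true

# Distribute the CA public key to target machines and trust it in sshd.
vault read -field=public_key ssh/config/ca > /etc/ssh/trusted-user-ca-keys.pem
# In /etc/ssh/sshd_config:
#   TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
```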

That’s not something I had planned for today. If anyone wants to stay for office hours, shortly after this—and I believe Amanda will supply that info—we can do the SSH CA keys demo in office hours. So, please, if you’d like to see that, stay around, and we’ll cover that shortly after this session.

The next question is, “How do I verify from Vault that the connection to its storage back end using Consul is still healthy?” I would use Consul health checking to do that. It’s important to note that you do need a healthy cluster to use Vault, because, as we’re aware with Consul, if you don’t have quorum or you’re in a bad state, you won’t be able to make any changes to the underlying Consul state, because the machines won’t be able to agree on entries.
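
Since Vault registers itself as a service in Consul, Consul's health API is a simple way to check this; the addresses below and the use of jq are assumptions about the environment.

```shell
# Ask Consul which Vault instances are passing their health checks.
curl -s "http://127.0.0.1:8500/v1/health/service/vault?passing" | jq '.[].Checks'

# Vault's own health endpoint is useful alongside it.
curl -s "http://127.0.0.1:8200/v1/sys/health" | jq
```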

The next question is, “Is Vault suitable for securing file storage, for example, documents uploaded by customers?” The answer is no. Well, the answer is less about whether it’s secure or not secure; it’s just that Vault and its storage back end are not designed to be a database. Like the example I showed, if you want to encrypt documents, you should use the transit back end to either export keys or, depending on how big the documents are, send them as plaintext and get the ciphertext back, and then go and store that in a document repository or database that’s designed to hold that material.

The next question is, “How are machines authenticated against Vault, specifically in a dynamic environment where machines come and go on the fly?” That was one of the examples that we showed. We showed an integration with Nomad, where Nomad has a longer-lived periodic token, and that token gives it permission to create lesser-scoped tokens. In the example that we showed, that was an “order” token that gave access to the Postgres database to read and write from that table, and to the transit services to do the encryption and decryption.

Secure introduction is going to differ depending on what environment you’re in, but we have a lot of authentication back ends to help you do it. If you’re in AWS or GCP (and soon Azure), you can authenticate machine identities with the metadata services that the cloud providers give you. In an orchestrator like Kubernetes or Nomad, you can use the Kubernetes back end or the Nomad integration; in Marathon, maybe you write an init container that uses AppRole. You get lots of authentication back ends to help you meet those needs when we’re talking about how to authenticate those machines dynamically.

The next question is, “Is there an out-of-the-box integration with OpenShift?” OpenShift is just basically managed Kubernetes, so there’s no reason that you couldn’t tie into that API server that comes with OpenShift. Just recently, we became a certified partner on the OpenShift platform, so you’re certainly able to use both of those back ends on OpenShift, if that is where you’re deploying those applications.

So, Amanda, unless folks have some more questions, I think we’ve gotten through the majority of the material for today in the Q&A period.

Amanda: Yeah, I think so, unless there are any final questions. But if there are, you can join the office hours that Lance mentioned. If you check the GoToWebinar portal under “chat,” our colleague Brett put the link in there; it’s a video call meeting, and anyone’s welcome to join. There are gonna be a few of our solutions engineers, including Lance, on that call, and that’s really a great opportunity to get your individual questions answered and have some of that one-on-one time with all of them. Please join us for that if you’re interested.

Thank you, Lance, this was a great demo and great questions from the audience. I hope everyone enjoyed today’s webinar and was able to learn a little bit more about Vault with public clouds. Thank you to everyone who joined, and a big thank you to Lance for his time today. I also want to mention that we have training partners who offer training of HashiCorp tools, where you can go even more in depth.

If you like what you heard today and you do want to go a little deeper, you can take a look at our training page at HashiCorp.com/training to learn more. And, again, those office hours, that link is in our chat. And finally, as I mentioned at the beginning, this webinar was recorded, and we will make the recording available on our website after processing. I will send an email to everyone who registered with the recording link. You can also keep your eyes out on Twitter, where I’ll post the recording link a little later today. Have a great day, everyone. Goodbye. Thank you.

Lance: Thanks.
