Learn the different methods for Vault and Kubernetes integration.
Kubernetes is an extroverted orchestration system. Vault is an introverted secret broker. Both thrive in modern, dynamic infrastructure, but how do you make that first introduction?
In this session, HashiCorp Vault engineer Clint Shryock looks at different methods to integrate Vault and Kubernetes, covering secure introduction, Vault Agent, running Vault on Kubernetes, and secret injection.
I'm going to start with a story—I heard that was a good idea. When I got into software development, right out of college, I joined a web consultancy. This was 2004, and it was what I felt was cutting edge. We were using Dreamweaver, and we had things mounted on a shared server. We had passwords—it was cool.
Whenever you needed access to things, you could go to the code. We decided that didn't scale well—going into all these projects, trying to check them out, and so on. We came up with this idea: "We're going to have this file called passwords.txt, and that's where it's all going to be." You know how that scaled—about as well as you can imagine it would.
Eventually, it was time to move on. I told myself, I can't wait to go to an organization where we don't have that—we're going to be cutting edge. I moved on and joined this Rails organization, and I was like, "Man, we use Rackspace here, and we have Git." Not only Git—we had Subversion (SVN) at the last place, and the place before that. Nothing wrong with that. We had PHP too. There's nothing wrong with that. I will discuss that with any of you later if you want.
I go to this new place, and I'm like, "Yeah, Rackspace. We're looking at using Chef to do this stuff. These are cool things I've heard of. I can't wait to be using them." I get there, and it's my first week. I want to do some stuff—how do I get access? They say, "Yeah, we'll show you how to get access. You've got to go to our shared drive. There's a password.txt file."
My name is Clint Shryock. I'm a Vault engineer at HashiCorp—and I'm proud to say I don't use password.txt files anymore. You can find me anytime this conference if you want to talk about how you still use password.txt. It's totally fine. We can help you with that.
Today I'm going to talk about "breaking the ice": secure introduction with Vault and Kubernetes. I chose "breaking the ice" because I was going to a tech conference, and that's something I struggle with at conferences. I see a bunch of random people who, in theory, have similar interests, but I still don't know how to start conversations. That's daunting to me. It's the same if you're using a large platform like Kubernetes—how do you integrate with a new thing? Vault's nontrivial. How do you get across that threshold?
Turns out integration's not that hard, but you have to start somewhere—and starting can be very hard. Today we're going to look at integration things. We're going to cover these things very quickly. I hope I updated this slide correctly throughout the whole presentation.
We're going to talk about that because I don't know how many of you come to this room with Vault experience or knowledge. Who would say they know Kubernetes better than they know Vault? All right. And who knows more Vault than Kubernetes?
Well, I'm going to go with my first read: more of you know Kubernetes than Vault. We're going to talk about that. We're going to talk about integrating, which is a very fancy word for using Kubernetes to authenticate. We'll talk briefly about running Vault as a service inside Kubernetes, which is something newish that we now support. At the end, I'm going to talk about secret injection—which is another vague, nebulous topic—but we'll get to that.
We're talking about Vault. If you know anything about video games, you may have heard of a speedrun. I'm going to give you a speedrun of Vault, assuming you know more Kubernetes.
We're going to show you some quick things about Vault, just to give you an idea of what it is and what it can do. Vault is highly available. It is a distributed application with a leader/follower pattern—very similar to our other products, Nomad and Consul.
Externally, you communicate with Vault over an HTTPS API; you don't need a special SDK or anything—it's all HTTP. Internally, it's very similar to Consul and Nomad in that we use RPC to separate a set of core functionality from what we call extensible backends—core and plugin backends.
Backends generally fall into two categories. There are auth backends or auth methods—kind of use that interchangeably—and secret backends. Auth methods are for proving your identity—secret backends are where we do the fun stuff.
Identity can be humans or machines. For humans, you can authenticate with Vault using your GitHub credentials, or log in with LDAP. For machines, you could use IAM profiles or service account tokens—because that's what we are doing today.
When you authenticate, you log in to Vault. You log in for a specific role, and these roles are tied to the authentication backend roles that you're trying to use. Those roles have policies attached to them. These policies dictate whether you can do certain things. Can you interact with this backend? Can you talk to this? Can you operate Vault itself? Vault is identity and policy-driven.
Some of the secret backends—this is a very, very small list—but Vault itself can act as encryption as a service. You can ask Vault to encrypt a certain thing; it'll return that to you. In that case, we don't store it. We just have the keys that maintain the means of encrypting and decrypting. You can also do things like manage your PKI infrastructure, generate certificates for servers to talk to each other. You could probably use that for creating certificates in Kubernetes.
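The encryption-as-a-service idea mentioned above can be sketched from the CLI using Vault's Transit secrets engine. This is a minimal sketch: the key name `my-app` and the sample plaintext are placeholders, and it assumes a running Vault you are already authenticated to.

```shell
# Enable the Transit secrets engine (encryption as a service)
vault secrets enable transit

# Create a named encryption key; Vault keeps the key, you keep only ciphertext
vault write -f transit/keys/my-app

# Encrypt: Transit expects the plaintext base64-encoded
vault write transit/encrypt/my-app \
    plaintext=$(echo -n "my secret data" | base64)

# Decrypt: pass the ciphertext back; Vault returns base64-encoded plaintext
vault write transit/decrypt/my-app \
    ciphertext="vault:v1:<ciphertext-from-previous-step>"
```

Note that Vault never stores the data itself here—only the key material used to encrypt and decrypt it.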
Then this other thing, which I feel like I spend a lot of time discussing, is this concept of dynamic secrets. Dynamic secrets are like my slides. They're created just in time, right before you need them.
A dynamic secret is like my passwords.txt file—except nothing like that. Dynamically created the moment I need to access a database. I say, “Vault, I need to talk to this database.” Vault says, “Here’s your username, here's your password.”
Like most secrets in Vault, it has what's called a lease. It exists for a certain amount of time, configurable per backend. Let's say it exists for an hour. In an hour, Vault will automatically revoke that database username and password—unless, of course, I try to renew it. I tell it, "Hey, I'm still using it," but I have to check in.
The advantage there is you can give people secrets and access to things—and know that they're not going to end up in a passwords.txt file. Even if they did, it wouldn't matter because Vault already automatically expired them.
Now let's say someone is malicious, and they do put credentials in a password.txt file. They keep renewing them, and you discover it. Because we have this concept of leases, an administrator can go and revoke that lease manually. Dynamic secrets are meant to be short-lived. They can live longer, but they're meant for per-use scenarios. I'll cover that in a little bit.
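That lease lifecycle looks roughly like this from the CLI (the lease ID segment is a placeholder for the ID Vault returns when the secret is created):

```shell
# Renew a lease before it expires ("hey, I'm still using it")
vault lease renew database/creds/awesome-db/<lease-id>

# An administrator can revoke a suspicious lease immediately...
vault lease revoke database/creds/awesome-db/<lease-id>

# ...or revoke every lease under a prefix in one sweep
vault lease revoke -prefix database/creds/awesome-db
```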
In review—Vault is highly available. We have a leader/follower setup. All requests to followers get forwarded to the leader. The followers don't necessarily do anything—they are hot standbys. The exceptions are in our enterprise version of Vault, which has performance standby nodes and some other things like that. Those are enterprise features. I'm focusing mostly on open source today.
Everything you put into Vault is securely stored by default. We use a system of encryption keys—Shamir key sharing, which is another topic in itself—to generate a master key, and everything is then stored in your configured backend, securely and encrypted. We have a lot of different backends, and that will be covered later.
Identity, role, and policy-driven. Your identity is scoped to certain permissions. By default, you have no permissions. You have to explicitly have policies added that grant you things. The idea is to be secure by default, meaning out of the box you can't do anything.
Then we have managed secrets, dynamically generated as they're needed and, if ignored, they get cleaned up later. All right, so really quick: What is Vault, and why should you use it? It's amazing—and it does great things. That's a quick speedrun. Let's go to the next thing.
We already know Kubernetes is great. That's why you're here. You had an idea that Vault was great too—which is also why you're here. We're going to talk about integrating Vault and Kubernetes—getting these two systems to talk. We're going to want a Kubernetes application that gets dynamic secrets for a database.
We'll use that as an example. We have an app, and we can clearly see that this app is written in Go. This application needs a database. Here we have a database—this database is clearly PostgreSQL. I'm not trying to make anyone mad. I don't want to run any of these things myself because I'm not necessarily an infrastructure person, and I don't like babysitting applications and databases.
We're going to put these things in a service scheduler, which is clearly Kubernetes because it says so. I don't want a passwords.txt file involved. We're going to make our application get its information from Vault. That's clearly Vault, it says so.
That dotted line there is meant to represent a problem we have that's commonly referred to as the secure introduction problem. How does Vault know that this app—this machine—should be entitled to a secret? How do we provide that very first step? How do we get the very first secret established so that level of trust is established? To do this as a human, we have a lot of different ways, but we use authentication backends to ask somebody else to verify this identity.
In the example earlier, I said GitHub. We set up GitHub authentication in Vault. Then we get redirected to GitHub. You have to log in there, and GitHub says whether it was successful or not. For machines, you have things like IAM, or you have service account tokens. We end up leveraging the platforms themselves that we're running these things on to establish trust.
This is a fancy image that I got off of our website, and it pretty much reiterates what I've said. You use a securely supplied unique token that an application has access to, and pass that to Vault. Vault then asks the identity provider, "Is this legitimate?" That all maps down to the role the person tried to log in as.
When the platform verifies it, we say, "Okay, I can now issue you a Vault token"—which is a secret itself; I should say a string—and, "Here, you can use this now as your means of authenticating with me." You get this nice Vault token, and for all your requests after that, you say, "Hey, here's my Vault token, give me this secret."
The Vault token itself has those policies bound to it. When you use the token and say, "I am this person," Vault will check the policies on that token and say, "Do you have access to the secrets you're asking for or not?" Integrating between Kubernetes and Vault is actually really simple. We need the service account token from your pod and—super-hand-wavy magic—we're authenticated and configured to give you certain policies and certain secrets.
This is obviously very complex. But it shows that Kubernetes and Vault need to talk. Let's look at the workflow again—really quick.
We're going to use the service account token to authenticate with Vault. We'll receive a token that we can then turn around and use to ask Vault to do the secret things we want to do. There are a couple of hoops to jump through here. But those secret things are—as I mentioned earlier—encryption as a service: we can encrypt things for our users without storing them; we give them the encrypted value, but we hold the keys to decrypt it. You can store—and access—sensitive information; maybe URLs, maybe a static username and password in a key-value store. Or you can do dynamic credentials.
In Kubernetes, we obviously have a lot of these things running. We have many of them. They're in a 12-factor style application where they don't have a lot of information themselves. They have to get information from their environment. In this case, we're going to use the service account token to kick all that off.
To get the actual integration going, we need to set up Kubernetes to have the things that we need to have. That starts by simply creating a service account. This is a service account—and it needs to have special permissions. This is the service account that Vault is going to use itself—not one that your pods are going to use.
Hopefully that's readable—it's on our website. It's very generic.
This is a role-based access control definition that says, "This service account I'm creating has access to the token review API." To configure the Kubernetes authentication backend, we need a token that we can use to communicate with Kubernetes and say, "This other token I've got—is it valid or not? What is its real identity?"
We need a token that has the capabilities to do that. Then we also need this other thing. We're going to use this for our web server or our apps. That's going to be the service account token associated with your pods that you run this application for. That's all you have to do on the Kubernetes side.
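A minimal sketch of that Kubernetes-side setup might look like the following. Names such as `vault-auth` are illustrative; the `system:auth-delegator` ClusterRole is what grants access to the TokenReview API.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth          # the account Vault itself uses
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator   # allows delegated token review
subjects:
  - kind: ServiceAccount
    name: vault-auth
    namespace: default
```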
On the Vault side, I mentioned policies and identities. We have to create a simple policy here. Policies can grant things like listing values, reading values, creating values, deleting values. We have a concept that some people find very jarring at first: dynamic credentials are read, not created. Reading is the way you create them. It was odd to me at first—they don't exist until you've read them. Sounds like creation, but we're good with that.
Everything in Vault is path-based. We see here the path is database/creds/awesome-db. That doesn't exist yet. But in a policy, the path doesn't have to exist—you can write a policy like this. And everyone loves to write policies.
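A hedged sketch of what such a policy file might contain (`awesome-db` is the example name from the talk; "read" is the capability that generates dynamic credentials):

```hcl
# awesome-db.hcl — "read" is what generates the dynamic credentials
path "database/creds/awesome-db" {
  capabilities = ["read"]
}
```

You would store it in Vault with something like `vault policy write awesome-db awesome-db.hcl`.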
We write that HCL file, and we store it in Vault. Sorry—I skipped the part where we use HCL to define our policies. If you're not familiar with HCL, I'm afraid I don't have an introduction for it, but it's like JSON. I have no problem with JSON. Some people are not big fans of YAML, though.
That's the policy. Next, we're going to set up Kubernetes authentication in Vault. We're going to enable the backend. Vault has a lot of stuff built in, but as I mentioned earlier, we try to be secure by default. We have very little turned on to begin with. I think Vault starts with KV and token—and maybe userpass—I don't know. Basically nothing.
We're going to enable Kubernetes. We're going to say, "Hey, we want to talk to Kubernetes." It already ships with this backend. Vault will then mount it to a path. You can provide a path, but it overflowed the screen. I removed the path. If you don't provide a path, it takes the default name. In a more production environment, you probably don't want to use the same name or just Kubernetes. You want to give it a nice name like Demo or Production. It'll save you trouble later.
I also skipped over the Vault auth JWT there. JWT—I pronounce it "jot." We've got one thumb up; that's enough for me. I skipped over how you get that, but you need to use kubectl to get it.
Every service account has a secret name. You can access the secret name by asking kubectl about the service account—and from there, you can get the token. We need to have access to that. This must be set up by an administrator who has those permissions. We need to know the Kubernetes host, and we need access to a ca_cert file—the certificate file Kubernetes is using.
We use that to establish the connection with the Kubernetes API in a secure way. We then store the Vault token we're going to use that for future API access.
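Putting those pieces together, the configuration steps might look roughly like this. The service account name `vault-auth` and the Kubernetes host are illustrative assumptions, and this assumes a cluster where service account secrets are listed on the account (as was typical at the time of this talk).

```shell
# Enable the Kubernetes auth method (mounts at auth/kubernetes by default)
vault auth enable kubernetes

# Pull the reviewer JWT and CA certificate out of the service account's secret
SECRET_NAME=$(kubectl get sa vault-auth -o jsonpath='{.secrets[0].name}')
SA_JWT=$(kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 --decode)
kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt

# Tell Vault how to reach the Kubernetes API and how to verify tokens
vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT" \
    kubernetes_host="https://<kubernetes-host>:6443" \
    kubernetes_ca_cert=@ca.crt
```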
In this screen, we're creating a role. We're saying there exists a role that someone can log into. We are binding it to the web server service account name in the default namespace. Ideally, you should narrow this down as specifically as possible. When someone logs in with a token for the role web, we need to make sure the service account token is the web server's token, and that it is in the default namespace. If all of that is successful, we return a token that has the awesome-db policies.
This token—I am being generous—lives for 25 hours by default. After 25 hours, if you try to use the token, it'll fail. We'll say no. Those tokens, like any token, are renewable—and you can configure that, maybe to not be renewable.
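The role creation described here might look like the following sketch. The service account name `webserver` and the 24-hour TTL are illustrative (the talk mentions roughly a day).

```shell
# Bind the role "web" to a specific service account and namespace,
# and attach the awesome-db policy to tokens issued for it
vault write auth/kubernetes/role/web \
    bound_service_account_names=webserver \
    bound_service_account_namespaces=default \
    policies=awesome-db \
    ttl=24h
```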
We are going to run a web server and associate the service account with it back in Kubernetes. That's what we're doing here. We're going to run this app, and because we love demos that fit on screens, we're not using a deployment file—we're doing it directly.
Then we set the service account, the image is our awesome app image, and then we go in manually—for demonstration purposes. Then we can get the Kubernetes token—the default service account token. Then we can curl out to Vault—because everyone loves curl—and say, "Here's our Kube token."
We want the web server role, and Vault gives us back this. It's a lot of neat information. That lease duration probably doesn't match up with the TTL (time-to-live), but don't worry about that detail. Worry about that client token detail. That is your authentication token—that you can then turn around and ask Vault for secrets—and your requests will be checked against the policies of that token.
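That login call, sketched out from inside the pod (the Vault address is a placeholder; the token path is the well-known location where Kubernetes mounts the service account token):

```shell
# Read the pod's service account token
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Exchange it for a Vault token by logging in to the Kubernetes auth method
curl --request POST \
     --data "{\"jwt\": \"$KUBE_TOKEN\", \"role\": \"web\"}" \
     https://<vault-address>:8200/v1/auth/kubernetes/login
```

The `auth.client_token` field in the JSON response is the Vault token you use for every request after that.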
In this case, the policies are, "I want to talk to the awesome-db, and I want credentials for it." When I ask Vault, I say, "Give me these dynamic credentials." It returns me credentials that did not exist until the moment I asked for them. These credentials are bound by the way we configured the database to expire after a certain amount of time.
You can configure the database—I left that as an exercise for later. You get custom SQL creation things. You could say that this role only has access to certain tables. This is a read-only role. Those are all configured by the database backend. You get credentials that now give you access to this.
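The final step—reading the dynamic credentials with that client token—might look like this sketch (again, the Vault address is a placeholder):

```shell
# The credentials don't exist until this read; Vault creates them on demand
curl --header "X-Vault-Token: $CLIENT_TOKEN" \
     https://<vault-address>:8200/v1/database/creds/awesome-db
```

The response carries a generated username and password, plus a lease_id you can renew or revoke.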
I did go over that fast, but I tried to be general. The workflow is not very complicated, but it is arguably non-trivial. There are hoops you have to jump through. By hoops—let's review: read the service account token, ask Vault to log in, ask Vault again for the actual secrets you need. Then you can finally go about doing what you came to do in the first place.
Or rather, the application finally gets to do these things. It's more steps. You have to add code to your applications. I don't know about you, but I'm the kind of software developer who would rather not maintain code at all. You know, that's the best code. But now, in the name of security and integration, we've asked you to add more code.
That's a simple integration. That's an easy way to start talking to Kubernetes. From there on, it's API calls. But we want to make this easier for you. We look at these things, and we know we're authenticating with the platform; we know these things are there. We need to do some of these things for you. If you're unfamiliar with it, we have a feature in Vault called Vault Agent.
It's Vault itself—the same binary—but you run it in a specific mode. In that mode, it operates like a daemon. You don't configure it like you would configure Vault, with backend storage and so on; you configure it as another Vault client.
You say, "Here's my Vault, here's my authentication method, and here is where I want my token to go." When you run Vault Agent in this context of Kubernetes, it knows where to go for the service account token. By default, it has a location—but you can also configure that token path—and it automatically goes and does some of those things for you. It logs in. It gets a token for the role you want—then it writes that token to a spot on disk that you've specified in the configuration.
Now, I say "on disk"; it writes it to a volume that you've configured. In this case, you would want to configure it to write to a tmpfs drive or something—something that's kept in memory but not written down in persistent state. Granted, the tokens are short-lived, but you still don't need to write things down. Don't let secrets actually get persisted to disk unless you absolutely have to.
The next thing it does—maybe the biggest thing it does for you—is it manages the token lifecycle. I mentioned earlier, tokens are like secrets. They have leases. If you get a token and you don't renew it, all of your access to your secrets will be gone when the token expires.
The idea here is to run Vault Agent in something like a sidecar, automatically authenticate, and write this token to file for you. Then Vault Agent will continuously monitor the token lifecycle and will renew the token as needed.
If it hits the point where it can no longer renew, it'll automatically re-log in and give you a new token. It'll always keep that token viable unless your credentials have been revoked or something.
Here's a sample HCL. This is where you would configure Vault Agent. It's not Vault itself, it's a client mode. Vault still has to be running somewhere else. That's a separate problem that I haven't talked about yet. I say “auto auth, method Kubernetes, mount path Kubernetes.”
This probably had "demo" written in it earlier. I think if the path matches the method, you don't need the mount path. Then in config, I say the role is web server. You can also give it a token path if you've used something like projected service account tokens. That's a somewhat new feature in Kubernetes, I believe. I get confused with Kubernetes because new things come out, but the major platforms don't support them right away.
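A Vault Agent configuration along these lines might look like the following sketch (the role name and sink path are illustrative):

```hcl
# vault-agent.hcl — run with: vault agent -config=vault-agent.hcl
auto_auth {
  method "kubernetes" {
    mount_path = "auth/kubernetes"
    config = {
      role = "web"
    }
  }

  # Where the agent writes (and keeps renewed) the Vault token
  sink "file" {
    config = {
      path = "/home/vault/.vault-token"   # ideally a tmpfs-backed volume
    }
  }
}
```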
Using Vault Agent to automatically authenticate with Kubernetes takes care of a lot of that grunt work for you. You don't have to have code anymore that knows to read the service account token. You don't have to reach out to Vault to log in. You still have to ask Vault for secrets. But, you know, we've cut two of the four steps out here. We're trying to make things easy for you.
To review; that was a very simple, high-level way of integrating with Kubernetes. The entry-level integration if you will. It gets you access to Vault and all of the services it provides. All you need is your service account token to do it. It's scoped to the permissions and secrets you should have available to you.
This part's even quicker. I don't know if you want to call this an integration, but I do.
At HashiCorp, both Vault and Consul had taken the stance that maybe running in Kubernetes isn't a great idea.
Specifically with Vault, we have a security-critical application, and Kubernetes is a very dynamic environment. Although we have external backends for your storage, it was always, "We don't want to say you can't," but we generally didn't recommend it. We generally recommended you run Vault on either dedicated machines or virtual machines. We understand the performance and reliability of those much better than a dynamic environment.
Plus, you can run Vault directly there, as opposed to having to put Vault in a Docker container. Containers introduce some new things—we were not against it; you can do it—but it didn't fit in some areas.
Up until about a year ago, Vault had this concept of coming up sealed. You need to use the keys that were created during its initialization to unseal it. Well, in a dynamic environment, every time a new Vault server comes up, it comes up sealed. Now you can imagine running a Vault cluster in Kubernetes: there's a node failure, or something gets preempted—something happens, and something gets rescheduled.
Well, if all, or some, or maybe just the leader Vault server comes up sealed, your infrastructure's hosed—if you rely on Vault. Up until about a year ago, you needed to get in there and unseal it manually. Then we open-sourced Auto Unseal. That was the last barrier of, "Why can't we do this?"
And again, I mentioned it's a security-sensitive product. In retrospect, security is a spectrum. What we always felt maybe wasn't safe to do might be totally fine to you. It's up to every organization to determine what they feel is safe. "We don't think it's a safe way to do things" is not a great excuse for not supporting it—because that's up to you all to decide.
Then it turns out people were running Vault in Kubernetes anyway. So today we're looking at first-class support for running Vault in Kubernetes, generally with the Auto Unseal feature turned on. We want to enable that using Helm. Helm is an industry standard for templating and running things on Kubernetes. It's widely supported and adopted, so we want to make sure that works.
We have several modes. Dev mode is just for experimenting and trying things out—there's no persistence there. Standalone is a deployment with a persistent volume, so you can write to disk there, and all of that data is encrypted. But it's a single server node, so you still have the problem that if it goes down, your entire Vault cluster is down until it gets rescheduled.
Then we have HA mode. HA mode depends on your backend—what you use to store all your data. Those are all configurable, but not all of the backends support HA mode. At HashiCorp, we generally recommend you use Consul for HA. If you're running Consul, Vault on Kubernetes can actually run really well. We also have a Helm chart for running Consul on Kubernetes. You can do both of these things now.
Some other features of the Helm chart: by default, we can help you set up end-to-end TLS to make sure all of your connections are secure. We have anti-affinity rules in there to make sure you don't have two Vault pods on the same node. You need to leverage a platform's KMS—key management system—to provide Auto Unseal. Our Helm chart will help you get set up with all of those.
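As a rough sketch, a Helm values file for HA mode with Consul storage and a cloud KMS seal might look something like this. The key names follow the hashicorp/vault Helm chart of that era; the GCP project, key ring, and key names are placeholders.

```yaml
# values.yaml — sketch for an HA Vault install via the official Helm chart
global:
  tlsDisable: false          # keep end-to-end TLS on
server:
  ha:
    enabled: true
    replicas: 3
    config: |
      storage "consul" {
        path    = "vault/"
        address = "HOST_IP:8500"
      }
      seal "gcpckms" {
        project    = "<my-project>"
        region     = "global"
        key_ring   = "<vault-keyring>"
        crypto_key = "<vault-key>"
      }
```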
Vault in Kubernetes: yes, you can. An important caveat is that it's only open source right now. The enterprise version will come, but HashiCorp is generally a very practitioner-focused company. We want to nail the user experience of running Vault on Kubernetes first in the open source world—make sure we figure out everything we need to before we start supporting enterprise. But enterprise support is coming. I just can't tell you when—I don't actually know.
If we review here, this is what you had to do at first. You had to read the token; you had to ask Vault, and do these things. Then with Agent, we did a lot of that for you, but we still left more.
Now we want to take on all of that stuff—we want to do all of those things for you. To date we haven't, and a lot of people have figured out how to do it on their own. But we want to have Vault as a sidecar such that all you need to do is read something and get going. We want to reduce the amount of Vault awareness your application has. You should be able to launch your pod and get the secret you need.
Before I show this—this is very alpha. It kind of works—well, it does work, but it works on my machine. We'll phrase it that way. I love that phrase, especially when it helps me.
To set all this up, we are following Consul's lead. Consul has a separate application—a binary called consul-k8s. We're going to introduce a separate binary called vault-k8s. You install it using our Helm chart, and it acts—first and foremost—as a mutating admission webhook, however you want to phrase that. It looks for pod creations that have specific annotations. It will then automatically mutate your pod spec to have a Vault Agent sidecar running—knowing how to authenticate against a certain role—and, based on annotations, it can automatically grab secrets for you and write them to a tmpfs disk space.
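With the injector installed, the annotated pod spec might look roughly like this. The annotation names reflect the vault-k8s project, which was alpha at the time of this talk, so treat every name here as illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  annotations:
    vault.hashicorp.com/agent-inject: "true"      # opt in to sidecar injection
    vault.hashicorp.com/role: "web"               # Vault role to log in as
    vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/awesome-db"
spec:
  serviceAccountName: webserver
  containers:
    - name: app
      image: awesome-app    # placeholder image
```

The injected Agent then writes the rendered secret to a shared in-memory volume (e.g., /vault/secrets/db-creds) that the application container can read.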
I'm going to go through this quickly and try to show some of the things I've got. I'm going to run scripts—they are shortcuts for the things we've already done. There is going to be some hand-wavy magic there, but I'll try to look at the scripts first before I run them. I'm using Minikube, which also makes things interesting because Minikube is a single node, not a real cluster.
We're configuring our Minikube set-up to have the right certificates so Vault and Minikube can talk securely. This is using our Helm chart, but it's a private branch of it at the moment because it has this mutating webhook stuff that's very alpha.
We can see our Vault service is running. We've launched Vault on Kubernetes. We're going to run Vault here. Then we have the injector pod. This is the part where we set up the Kubernetes auth provider.
Well, my time is over, and I have completely failed my demo. The things flash up there. Then I log in to the pod, and I cat the file—or I use less, because an old mentor of mine insists I not use cat—and it shows dynamic secrets were generated to disk.
My colleague Jason worked very hard to push the Helm chart over the finish line. He helped with this demo and did amazing work. I swear it worked 20 minutes before I came on stage, and now I've screwed it up.
Jason, you're going to watch this—I am sorry. If you want to see the demo, you can find me in the hall, where it will work. I'm supposed to wrap up, but I felt witty. Alpha software—see, I didn't even call it beta.
I'm over time. We need your feedback.
Automatic injection is coming. We're working on it. That's probably the next three to four months horizon. Beyond that, we put out a blog post, and we have GitHub issues. We want to know the community's feedback here.
We're considering some syncer process—installed like the injector webhook—that will automatically sync selected secrets from Vault directly into Kubernetes secrets. Not a sidecar: you'd access them the Kubernetes secrets way. That is pretty hand-wavy magic right now. We don't know what that entails or what people want there.
We have two GitHub issues. If you saw the wonderful presentation this morning with Rita and Lachlan on the Container Storage Interface (CSI)—they have a great provider framework for that.
My colleague Mishra has already done a great demo of that. It's in a prototype stage, but again, we want to know what the community wants out of it—how would they expect it to work? How would they interact with it?
We have GitHub issues—I can point you to them directly later—on all of these things. Go there, give us your feedback—and you can find the driver itself at Deislabs.
The future: We’ve talked about the future. There's no recap for the future because that doesn't make any sense at all.
I'm Clint, and I'm very thankful you all came and watched me embarrass myself with a terrible demo. All right—thanks, everybody.