Mother May I? ABN AMRO on Why You Need To Implement the Principle of Least Privilege in Vault
Dec 03, 2020
Maintaining good secrets hygiene in a multi-cloud world has become a daunting task. In this talk, Sarah Polan of ABN AMRO will demonstrate why a secure Vault system does not stop at the implementation, and why knowing the landscape is critical when designing ACL policies.
For a case study going further into how ABN AMRO uses HashiCorp Vault, visit our case study page.
For more on Vault Policies, see the talk and transcript for A Vault Policy Masterclass.
Welcome to HashiConf Digital 2020. My name is Sarah. Today, we're here to talk about Mother May I? That entails more of the policy-centric part of Vault — as opposed to the operational and developmental side, which I feel is something that gets overlooked probably a little more often than it should.
Who am I? Why should you trust me?
I am an expert secret keeper. I started in DevSecOps as a secrets management expert, and have moved on to be the Secrets Management Architect for the Corporate Information Security Office at ABN AMRO — a major bank here in the Netherlands. I'm also a Vault Enterprise trainer. As well as ABN AMRO, I have also trained other organizations on the use of Vault — both financial and non-financial.
Choosing Vault for Secrets Management at ABN AMRO
To give you a little bit of context, within ABN AMRO we recognized a need for secrets management to, particularly, alleviate secrets sprawl. We were finding that there were too many secrets that we weren't aware of, and we wanted to reduce that attack vector.
We also wanted to look into making sure that secrets weren’t seen by human eyes. If they were machine-to-machine workloads, only those machine workloads needed to be able to see the actual secrets themselves. As with many in the industry right now, there's a greater need for containerized and ephemeral workloads — both Kubernetes and Docker related. So we wanted to get ahead of that and make sure that we were properly securing containerized workloads before the bank started adopting them on a large scale.
Another thing — and one of the major reasons we chose Vault — was the community backing. We knew that if, at any point, our developers needed assistance, there was a large development community behind it that could provide that assistance.
Automation — we're a DevOps shop. We need to make sure that we can automate everything. The fact that Vault was API-driven was a large factor in its adoption. Lastly — but not least importantly, I think — the possibility to enable teams and help them improve their security standpoint and their posture without hindering their development — and hopefully speeding up their development — was critical.
As one of my mentors used to say, "Three can keep a secret if two of them are dead," and that has to do with the fact that humans are terrible at keeping secrets — whether because sharing a secret makes life easier, or because they don't want to go through the pain of finding the correct process to deal with it. We find that secrecy is just not going to happen if there's human interference — and those secrets are going to get leaked.
What Is a Secret?
For the sake of this talk, let's discuss what a secret is. Sometimes when we talk about secrets, that can involve personally identifiable information — or other sensitive information. But for these purposes, we're going to say that a secret is a piece of information used for authentication and authorization. That can be within microservices, that can be applications — it can be anything: a username and password, SSH keys, API keys, TLS and SSL certificates, and private keys.
Why Is It Important to Keep Secrets Secret?
I've established that I think it's really important to keep these secrets. But what is the end goal of a secrets management program? It's really to protect your assets. Assets can be tangible, they can be monetary, but they can also be intangible things like data and your company's reputation.
In Vault's case, we're looking to protect personally identifiable information, financial information, medical information. It's important to meet those regulator needs because that can impact your liquidity, of course. But it also has a huge impact on the reputation of your company. If you get a fine from the FCC because you aren't properly implementing secrets management, that's going to become common knowledge — and it's going to hurt you in the long run.
Crypto mining is more common than I would've thought — things get hijacked for crypto mining. There's also preventing lateral movement within your network: if you can segregate your secrets, you're going to protect your internal network better than if you have secrets sprawl or easily accessible secrets.
Your audit logs have a wealth of information about:
- How does your system run?
- What does it consume?
And something we don't often think of, as well: the system configs. System configs are essentially a map for an attacker.
What Is Secrets Management?
We've established why — but then what is secrets management? Secrets management is managing the lifecycle of your secrets in an automated and scripted fashion. To manage this lifecycle, you also need to know what's consuming your secrets.
- Are they microservices?
- Are they containerized workloads?
- How do you store that secret?
- Are you storing it in KeePass, or is it just stored in GitHub?
How is the secret rotated? The way you rotate can be almost more important than the longevity of the secret itself, because if you rotate it improperly, you're essentially spreading the secret again. And then, how often do you rotate that secret? Is it every 90 days? Or are you rotating it every ten years? The longer that secret is in play, the greater your attack surface.
- How are you initiating secret zero? Secret zero, being that first secret. How are you getting it from your source into Vault, for example?
- How are you going to eliminate humans seeing that secret?
Like I've said before, once humans have seen that secret, it can be passed on — and it's no longer secret.
…And to Further Complicate Things
That's a lot to begin with. But then, we went and complicated things even further. We went from this traditional static datacenter; something with a perimeter, and something that we like to consider an M&M, essentially. It's hard on the outside and a little softer on the inside. But since that outside is protected, we can get away with that.
But, now, we're moving into the evolution of cloud. With cloud, that perimeter disappears — and it disappears even more once you start introducing multi-cloud. Then, because there are government regulations — there's GDPR — we have to make sure that certain sensitive information is exactly where we need it to be. So now we're also introducing the hybrid cloud.
How will you get your secrets to flow in a fluid manner from one cloud to another, without hindering any development — or your network — but still only giving just as much information as it needs to function? We get it; it's complicated.
What's the Real Issue and Why Do We Need It?
Well, secrets sprawl, as in our case. We needed to make sure that secrets were more or less contained — and that we knew who was using them and how. No centralized solution: a centralized solution allows you to bring all of those secrets together. Secrets hardcoded in source code — I've unfortunately seen this more than I care to admit. It's a real issue, and we need to get them out.
Secrets committed to GitHub. That could be almost worse, especially if you're committing those into a public repo, and everyone can access them. How many people have seen that secret? Is it 1, 2, 5, 1,000? The longer that secret is in play, the higher probability that more people have seen the secret.
Do you actually know who's using that secret? You assume it's just you and your colleague. But if you revoke a secret, is your colleague down the hall in his cubicle going to start screaming because he's just lost access?
Secrets Management in the News
If you still don't believe me — well, it's been in the news. Android apps have had secret keys leaked. API and crypto keys leak on GitHub — to the point that AWS has told developers to scrub their GitHub repos. You have crypto mining, which costs a lot of money. And one that affects me quite personally — Equifax. They used admin/admin, and all that private information got leaked. We can assume that they probably didn't have a very robust secrets management program.
Luckily, we have tools like Vault. Vault allows us to implement everything as code, and it allows for dynamic secrets and dynamic PKI, which is great. But there's still an issue here: tools are only as good as how you implement them.
Let's Talk Policies
Policies require you to know what your landscape looks like. What applications are speaking to which applications? How are you acquiring those secrets? They require you to know what your regulations are. Are you being held accountable with the regulators — with the FCC? In the case of Europe — with GDPR? How are all of these things being spread?
The Principle of Least Privilege
As somebody in cybersecurity, this is drilled into our brains from the day we set foot in the door: give only the bare minimum of privileges that allows a user to complete their job. Don't give them more, but don't give them less either, because giving them less can inadvertently cause a denial of service.
Let's start with a little scenario here. We have an app, and it needs to be able to read some database credentials, and that's it. Just read them. It doesn't need to write, it doesn't need to delete, it just needs to read them. But, unfortunately, the developers — just to be sure — have decided to give it a few more permissions. As a developer myself, this is something that I am unfortunately guilty of and I have to admit to.
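In Vault's policy language (HCL), that read-only requirement is a one-line capabilities list. A minimal sketch, assuming the credentials live under a KV version 2 mount at a hypothetical path `secret/data/my-app/db`:

```hcl
# All the app actually needs: read one path, nothing else.
path "secret/data/my-app/db" {
  capabilities = ["read"]
}
```

Anything beyond `read` here, such as `create`, `update`, or `delete`, is surface area the app never uses but an attacker can.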
Vault and Kubernetes Vulnerable Policy Demo
But what happens now? To set this up for you a little bit, I have prepared a demo, and this demo runs Vault and a Kubernetes cluster. We're going to set up the Kubernetes auth method within Vault. I'm going to show you what happens if you use a vulnerable policy — essentially a policy that gives way too many permissions for what we need to be able to accomplish.
Unfortunately, this is a policy that's actually making the rounds on GitHub as a suggestion for a workaround when people are having a little bit of a difficult time determining what the path of their secret is.
Initial Vault Setup
I'm going to walk you through the Vault setup to make sure that we're all on the same page. We need to export our variables, determine where we're going to talk to Vault, and enter that token. Then we're going to enable the key-value store and write a static secret. I'm going to place my secret — the database credentials for my Java Spring app.
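The setup just described can be sketched as a handful of CLI commands. The address, token, and credential values are placeholders, and `my-app` is a hypothetical secret path:

```shell
# Point the CLI at Vault and authenticate.
export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="<admin-token>"

# Enable the key-value secrets engine (version 2) at the path "secret".
vault secrets enable -path=secret kv-v2

# Write the static database credentials for the Java Spring app.
vault kv put secret/my-app username="db-user" password="db-password"
```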
Here we can see that we're on version one — this is the first secret that's been placed in there. Here is my vulnerable policy. You can see I've written it to the path secret and then put a wildcard after it, which means anything that comes after secret is in play, and it can read, list, create — everything. I need to create a YAML for my service account, and then I will create my Kubernetes service account.
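The vulnerable policy itself looks roughly like this; the policy name is a placeholder:

```shell
# A wildcard directly under the mount grants every capability on everything.
vault policy write vulnerable-policy - <<EOF
path "secret/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOF
```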
Last thing I need to do is apply that YAML. All of this pertains to Kubernetes. You don't need to know what it's doing — only that it allows Vault to speak with the control plane of Kubernetes. Here, we can see we have our service account up and running, so we should be good to go on that front.
Now, we need to let Vault know certain things about our Kubernetes cluster and the control plane — in general — so it can access the token review. We're going to give it the Kube server. We're going to give it the service account name, the JWT token, which will allow it to leverage and authenticate, and then the certificate. Let's go over to Vault and enable the Kubernetes auth. Once that's done, we're going to tell Vault about all of these different variables that we just got from our Kubernetes cluster. That was easy. Everything's configured — we're good to go.
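Those configuration steps map to commands like the following. The service account secret name and the Vault address are assumptions:

```shell
# Gather what Vault needs to call the Kubernetes token review API.
KUBE_HOST="$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.server}')"
SA_JWT="$(kubectl get secret vault-auth-secret -o jsonpath='{.data.token}' | base64 --decode)"
SA_CA="$(kubectl get secret vault-auth-secret -o jsonpath='{.data.ca\.crt}' | base64 --decode)"

# Enable the Kubernetes auth method and hand Vault those details.
vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="$KUBE_HOST" \
    token_reviewer_jwt="$SA_JWT" \
    kubernetes_ca_cert="$SA_CA"
```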
The last thing we need to do is tell Vault about the role — the Kubernetes role itself. When we do that, we're going to tell it what policies it's allowed to do. In this case, the policy that it's allowed to do is that vulnerable policy we wrote earlier. We're telling it that it can do everything. So, now I'm in — as long as it says that I can do it in the policy, Vault and Kubernetes are not going to have any issue with this at all.
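Creating the role is what ties the Kubernetes service account to the policy. A sketch, with the role, service account, and namespace names as assumptions:

```shell
# Any pod running as this service account can log in and receives this policy.
vault write auth/kubernetes/role/my-app \
    bound_service_account_names=vault-auth \
    bound_service_account_namespaces=default \
    policies=vulnerable-policy \
    ttl=1h
```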
Simulating an Attack
In an attempt to simulate an attack in less than five minutes, I'm going to — for the sake of argument — execute into a Kubernetes pod and do a bit of housekeeping here to make sure that I can do everything that I need to.
To authenticate to the API, I still need to know about the JWT, so I'm going to have to pass that in as one of my arguments. Here you see, I have my JWT. Then — as always — I need to tell my CLI where to actually speak to Vault. So, I'm going to put in the Vault address. Now we have our authentication method, we know where Vault needs to speak to, and the last thing we need to do is make that API call. There we go. You can see I'm passing in the JWT, which is our authentication. I'm also specifying the role — the RBAC — that we specified earlier.
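From inside the pod, that login call looks roughly like this, assuming the standard service account token mount path:

```shell
# Read the pod's service account JWT.
JWT="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
export VAULT_ADDR="https://vault.example.com:8200"

# Trade the JWT for a Vault service token via the Kubernetes auth endpoint.
curl --silent --request POST \
    --data "{\"jwt\": \"$JWT\", \"role\": \"my-app\"}" \
    "$VAULT_ADDR/v1/auth/kubernetes/login"
```

The `auth.client_token` field of the response is the service token used in the calls that follow.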
Here you can see something interesting. Vault is acting exactly as it should, and it's giving me the service token. With the service token, I can authenticate myself, and use Vault as I would if I were an authorized user. Because I have fat fingers, we're going to export the Vault token.
Let's play around; let's see what we can do with this. Well, we know we placed a secret in there earlier, so let's start with trying to retrieve that secret. We're going to make that API call, passing in the Vault token. You can see there's our secret, exactly where it should be — Vault is behaving exactly as it should. The get works — let's see if the post works.
Let's create a secret. We're going to send that in, and you can see that it created version 2. Reasonably, I can expect that my secret is there. But if you don't believe me, let's verify that. There you go — the secret that we just posted is there.
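The read and write are plain API requests; note that KV version 2 inserts `data/` between the mount name and the secret path:

```shell
# Read the existing secret (the GET).
curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    "$VAULT_ADDR/v1/secret/data/my-app"

# Create a new version (the POST), allowed because the policy
# grants far more than read.
curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"data": {"injected": "by-attacker"}}' \
    "$VAULT_ADDR/v1/secret/data/my-app"
```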
Destroying A Secret and Database Credentials
It's all well and good, but now what can I do? Well, maybe I can destroy something. Let's try that. I'm going to try to destroy the secret I just created. It's not going to cause any issues — if anything, creating a secret caused more issues than deleting it does. There you go. There's no data there. I've just destroyed the data.
What happens if I do that with my database credentials — the ones that the application actually needs to connect? I'm going to try the destroy — and hit that API. Seems like it works. Let's double-check, and there you have it. I've just deleted the secret that I need to be able to connect to my database. But I didn't just delete it, right? I destroyed it — which means I can't revert it.
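The destroy goes through KV version 2's `destroy/` endpoint, which permanently removes the named versions:

```shell
# Destroy versions 1 and 2 outright; unlike a delete, this cannot be undone.
curl --silent --header "X-Vault-Token: $VAULT_TOKEN" \
    --request POST \
    --data '{"versions": [1, 2]}' \
    "$VAULT_ADDR/v1/secret/destroy/my-app"
```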
I sense some doubt. Let's flip over to the UI. We created the secret on the secrets path under my app if you recall. Well, version two, it's been destroyed. We know that it's been there. We know that there was a secret created there, but it's not there. Also, if I try to investigate further, it's just not there.
What about version one, which was our database credentials? That's also gone. The issue here is that if you would like to restore a secret, you're going to have to do some sort of restore — which takes a lot of time, and could have been completely mitigated by placing my-app and then a wildcard after secret, as opposed to a wildcard directly after secret. Because what that secret wildcard did was open up the destroy.
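The reason the wildcard opened up destroy is that, under a KV version 2 mount, an ACL path matches API sub-endpoints, not just secret names. A sketch of the tighter policy, with `my-app` as a placeholder:

```hcl
# Under a KV v2 mount, ACL paths cover several sub-endpoints:
#   secret/data/<path>      read and write secret versions
#   secret/delete/<path>    soft-delete versions (recoverable)
#   secret/destroy/<path>   permanently destroy versions
#   secret/metadata/<path>  version history and metadata
#
# "secret/*" matches all of them, so destroy came along for free.
# Scoping to the app's own data path never grants it:
path "secret/data/my-app/*" {
  capabilities = ["read"]
}
```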
The create also causes some issues, because it could have caused a denial of service. But at least there you can roll back to a previous version. If somebody destroys one of your secrets, it's just gone.
How Do You Prevent This From Happening?
What can you do? How can you create a secrets management program that's going to enable you to keep your secrets as secure and refined as possible?
Know Your Requirements
- What are your naming conventions?
- Are you using namespacing, or are you just relying on path?
- Are you going to divide by environment or team name?
These things will help you determine exactly how your services are going to speak to each other. You need to know which secrets each application needs to consume. If you give all applications access to all secrets, you're not doing yourself any favors — you're not reducing your attack surface.
- Where do those secrets need to reside?
- Are they secrets that are going to be implemented directly into an application? Or are they secrets that are being called on runtime when you start scaling up your Kubernetes?
Also, which compliance frameworks are driving your organization? The requirements from NIST aren't the same as the requirements from ISO 27001. When you start adding things like PCI DSS, or SOX on top of both things, it becomes more complicated.
Create Your Policies
After you know all of your requirements, you need to create your policies. These policies — like we said before — need to be created with the principle of least privilege. Make sure that these policies allow only enough access. Don't give them too much. Again, don't give them too little, because that's not doing yourself any favors either. Be explicit about it — know which path each application needs, and make sure you give permission on that path only. If you don't, you end up in a denial of service again, because Vault isn't going to let you connect.
Test using penetration testing, chaos engineering, but also just let your team members do it. Within your development teams, see if you can retrieve a secret and automate the retrieval of that secret. These things are going to help you significantly know if you've done it properly.
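A retrieval test a team could automate might look like this. The paths are hypothetical, and the comments describe the expected outcome under a least-privilege policy rather than guaranteed output:

```shell
# Logged in as the app's role, this should succeed:
vault kv get secret/my-app

# ...and these should be denied if least privilege is in place:
vault kv put secret/other-app foo=bar
vault kv destroy -versions=1 secret/my-app
```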
Go Forth and Adjust Your Policies
We've covered here that secrets management is not as straightforward as it initially appears to be. You can have great tooling — you can implement great tooling. But as long as you haven't implemented the correct policies and the correct ways of retrieval, your tooling is only as good as what you've implemented. Go forth and adjust your policies.
Thanks for joining us.