Case Study

The HashiCorp Vault AWS IAM backend: A deep dive with the author

Joel Thompson describes how to use Vault and AWS IAM to distribute authentication credentials to applications and how Bridgewater uses it as part of the solution to manage $160 billion of pension funds.

HashiCorp Vault’s AWS authentication backend now includes a new authentication type. It allows you to authenticate using AWS IAM credentials, mapping an IAM user or role to a Vault role.

In many cases, AWS already does the hard work of securely providing your compute resources with IAM credentials: EC2 instances in an instance profile, AWS Lambda functions, ECS tasks, and AWS CodeBuild steps all receive credentials automatically. Your applications can simply consume those credentials to authenticate to Vault in order to access their secrets.

Clients sign an AWS API request, sts:GetCallerIdentity, and then send the signed request to Vault. The actual AWS secret key is never sent to Vault, helping you keep your sensitive credentials safe.

In this in-depth talk, the author of Vault’s IAM support shows you how.

Transcript

My name is Joel Thompson, and I work at Bridgewater Associates. To give you a little bit of an introduction about me: I'm a Vault enthusiast and community member, and earlier this year I wrote the IAM auth method in Vault's AWS authentication backend. If you remember when it went from AWS-EC2 to just AWS, I was the person who wrote that code. I've spent the last nine years at Bridgewater in a bunch of different roles spanning development, operations, and security, so I've had my fingers in a lot of the pies.

I've mentioned Bridgewater a few times, so who is Bridgewater? I think it's put really well by Brian Kreider, who is the head of our Client Services and Marketing Department. He said that from a values perspective, we are trying to understand the way the world works. That's what our business is, and so we're really interested in people who have a deep curiosity, people who have the patience to understand deep and complex systems, whether those are biological systems, economic systems, or political systems; it doesn't really matter.

In particular, we try to understand the timeless and universal, fundamental and systematic principles that drive the world's economies. What do I mean by this? When I say timeless, we're not just looking at what's going on in the US economy today, or just in the dot-com boom or bust, the Great Depression or the Great Recession, but across all time. When I say universal, we're looking not just at the US but at what drives economies all over the world.

When I talk about fundamental, what we search for are cause-and-effect relationships; we try to reason about the cause-and-effect relationships that lead to the economic changes we observe. We're not just observing correlations and trading on those.

Then we want to make our understanding systematic. We take logic and encode it in software so that when we have an idea, we can repeatably trade on that same idea over and over again, and if it doesn't make money, we can stop doing the things that aren't making money. Over time, we build a compounded understanding of what drives the world's economies. We take that understanding and manage about 160 billion dollars for institutional investors all over the globe.

When I say institutional investors, that's important. Our clients aren't high-net-worth individuals; they aren't rich people who want to get more money. Our clients are institutions like public school teachers' pension funds. These are the people who depend upon us so that they can remain secure and have income into retirement.

Why is Bridgewater here? 160 billion dollars is a big target that a lot of bad guys are going to want to steal from, and we care very, very deeply about protecting our clients. That 160 billion isn't our money, it's our clients' money, and we have an obligation to do everything we can to protect it. We are also a massively technology-enabled company. When I said we're systematic: we cannot be systematic and compound our understanding of what drives the world's economies without a lot of technology to encode that understanding and make the same sorts of decisions day after day.

Over the last several years, we have been placing a large bet on AWS in order to improve our technology agility and keep up with the evolving security threat landscape across the rest of the industry. As we started our cloud journey, we ran into a pretty fundamental security challenge, which I'll talk about in a second.

The problem, as I call it, is the auto-scaling problem. Auto-scaling groups are awesome: you let Amazon give you more capacity when you need it. But there's a big question: how do these instances get things like their database passwords? The Vault team calls this the secure introduction problem. Vault is awesome, so the answer could very well be to use Vault, but just saying "use Vault" introduces an equivalent problem: how do you get a Vault token onto the instance that's using it? Over time, I became convinced that there weren't any good solutions, only awkward workarounds. As Amazon has expanded its services to include things like ECS and Lambda, this problem has become even more important to solve, so that we can take advantage of the great new services cloud providers are offering, integrate them into a unified secrets management system, and stay secure.

What are some of these awkward workarounds? Just a disclaimer: I'm not actually recommending any of these. The first one is that we can just bake the secret into the AMI, but this requires a unique AMI per security boundary; as a result, it requires a different AMI between QA and Prod. I'm assuming you do want different secrets between QA and Prod, right? It also makes rotating the secret hard.

I'm being told I need to change my mic, one second. Alright much better, sorry about that. Technical difficulties.

As I was saying, if you bake your secret into the AMI, rotating it becomes much harder. You need some sort of AMI pipeline, and I'm assuming you are rotating your secrets, right? That's why you're all using Vault. What you also end up with is sharing secrets across instances, and when you do this, you lose individual accountability. It becomes very hard to track who leaked your secrets when they get leaked. Not if, but when, because this type of compromise is inevitable.

The second awkward workaround is that you could store the secret in S3. Back when we first started working with this, before KMS and before VPC endpoints, that meant we would have to store the secrets in plain text in S3. You could say we could encrypt them on the client, but then the question is where the client gets its decryption key; it's basically turtles all the way down. We also had to have our instances talk out to the internet (this was before the days of VPC endpoints, as I said), and a huge part of the security threat model we worry about is data exfiltration, so instances talking out to the internet gives us a bit of heartburn.

With the introduction of KMS and VPC endpoints, it's a more palatable solution. We can actually rotate the secrets, so we no longer need to bake an entire new AMI; we just update the object in place in S3. But we still share secrets across instances and, even worse, this becomes an incredibly difficult problem to reason about. The things that control access to your secrets if you do this include: the IAM policy and role that your EC2 instance is in, the VPC endpoint policy that your EC2 instances use to talk to S3, the bucket policy on your S3 bucket, the ACL on your S3 bucket, the ACL on your S3 object, your KMS key policy, and your KMS key grants. That's a lot of different things, and with each one of them you could easily screw up and potentially have a catastrophic failure of your entire security system. That is also a very difficult challenge to deal with.
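
As a rough illustration (a hedged sketch, not anything Bridgewater ran; the bucket and key names are placeholders), reading an SSE-KMS-encrypted secret from S3 at boot might look like the snippet below, and every access-control layer listed above has to line up for this single call to succeed:

```python
import boto3

# Hypothetical bucket/key names. The instance role needs s3:GetObject plus
# kms:Decrypt on the key that encrypted the object, and the request also has
# to pass the VPC endpoint policy, bucket policy, bucket/object ACLs, and the
# KMS key policy and grants described above.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-app-secrets", Key="prod/db-password")
db_password = obj["Body"].read().decode("utf-8")  # S3 decrypts SSE-KMS transparently
```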

The third awkward workaround is to use some sort of external orchestrator to position the secret. This one comes in two flavors. The first is that the orchestrator directly spins up instances, SSHes onto them when they come up, and positions the secret; if we do this, we lose the benefits of AWS-managed dynamic scaling. The other flavor is to set up a listener for AWS auto-scaling events, and when Amazon spins up a new instance, SSH on and position the secret; but that gives us increased start-up latency before the service becomes available. Both of these also end up being more complex: more code you have to write and maintain, more systems you have to keep updated and patched. So it's not really a great solution either.

I've talked a little bit about some of the problems and the drawbacks of these workarounds, but really the question is: What do we actually want? What are some of the security properties of a good system? Note that I'm focusing on security properties here; the system also has to be easy to use. I don't intend this list to be exhaustive; it's just illustrative of some of the things we want.

Does that help a little bit? Cool.

The first thing we want is granular auditability. We want to be able to track down to an individual instance or individual client what is going on in the environment; if you share a token, that becomes much, much harder. The second thing we want is no long-lived secrets. The longer a secret lives, the harder it is to know everywhere it goes and the less confidence you have about who possesses it. You no longer understand where your secrets are, which makes you even more afraid to rotate them, and the problem just compounds itself. One of Vault's key differentiators is that it's very strongly opinionated that all secrets should be short-lived and temporary; it essentially forces you to do that with the lease concept.

The third thing we want is protection from replay attacks. When I talk about a replay attack, what I mean is: if I send the same request to Vault twice, am I going to get a valid login token back? TLS provides some great protections here at the transport layer, but we actually want to pull this up a little bit higher, into the application-level protocol, to provide even better protection.

The fourth thing we want is protection from credential theft. For example, you never want to send secrets over the wire; that's just asking for them to be compromised. LDAP auth is probably the worst offender in this space. If you have a dev Vault server and a production Vault server that both talk to the same LDAP backend, and someone compromises your dev Vault server, they could just steal that password and use it to log in to the production Vault server, or to any other system that depends on the same LDAP backend. Sending secrets over the wire is always going to be a recipe for disaster.

The fifth thing we want is for it to be usable from any AWS service. We want it to be forward compatible as new services come along, so that we can take advantage of the new services Amazon offers and securely integrate them into our secrets management process.

How do these awkward workarounds I mentioned earlier stand up to this framework? If we bake the secret into an AMI, obviously we are not being granular, because every instance that gets spun up has the exact same secret. Temporary credentials are possible, but you're going to have to do additional work to make them self-destruct and to have a regular rotation process. You don't get any protection from replay attacks or credential theft because, by definition, all the instances are sending the same data to the Vault server to get a login token, and it only works for EC2; it's not going to work for any other AWS service. If you store the secret in S3, you get essentially all the same properties, except that rotating the secret is much easier and it potentially works for every AWS service.

If you use an external orchestrator, there are a lot of knobs you can tweak to try to gain some of these properties. You can be very granular: for every instance that comes up, your orchestrator could get a wrapped Vault token from Vault and position it onto the node, so you get lots of granularity. You get the temporariness for free, because Vault provides that with token TTLs, and you get replay protection and theft protection because your instances never actually talk directly to the Vault server to log in. But, again, it only works for EC2 instances and doesn't work for other AWS services.
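
To make that wrapped-token handoff concrete, here is a hedged sketch of what the orchestrator and instance sides could look like against Vault's HTTP API; the address, token placeholder, policy name, and TTLs are illustrative assumptions, not values from the talk:

```python
import requests

VAULT_ADDR = "https://vault.example.com:8200"  # placeholder address

# Orchestrator side: ask Vault for a token, but have Vault response-wrap it so
# the orchestrator only ever handles a short-lived, single-use wrapping token.
wrap_resp = requests.post(
    f"{VAULT_ADDR}/v1/auth/token/create",
    headers={
        "X-Vault-Token": "<orchestrator-token>",  # placeholder
        "X-Vault-Wrap-TTL": "60s",
    },
    json={"policies": ["my-app-policy"], "ttl": "1h"},
)
wrapping_token = wrap_resp.json()["wrap_info"]["token"]
# ...position wrapping_token onto the new instance (user data, SSH, etc.)...

# Instance side: unwrap exactly once to get the real client token. A second
# unwrap attempt fails, which is what gives you theft detection.
unwrap_resp = requests.post(
    f"{VAULT_ADDR}/v1/sys/wrapping/unwrap",
    headers={"X-Vault-Token": wrapping_token},
)
client_token = unwrap_resp.json()["auth"]["client_token"]
```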

Enter Lyft's Confidant. I'm only going to talk very briefly about this because this is a talk about Vault, not Lyft and not Confidant. Around the time that we were starting to run into this, Lyft released Confidant, which is also a player in the same space. In fact, Confidant uses KMS in a pretty clever way that actually checks all of the boxes that we want. So why not just abandon Vault for Confidant? A few reasons.

First of all, it only runs on AWS, so we lose flexibility. We're an enterprise that still has an on-premises data center footprint, so we can't just ignore an important part of the business. Secondly, it requires clients to talk directly to KMS, which as of today is only available over the public internet, and we get the same heartburn I mentioned earlier about having compute resources talk directly to the public internet.

The third thing, and in my opinion this is the biggest deal, is that Confidant doesn't have Vault's opinionation that all secrets should be temporary and expiring. It's a secret-sharing solution, not a secret-management solution.

Let's talk a little bit about the EC2 instance identity document. Amazon provides this to all EC2 instances, and their docs show an example of what it looks like: it contains metadata about an EC2 instance. What you don't see is that it's also cryptographically signed by AWS and is only available via the instance metadata service, so you can only get it from the instance itself. Could we use this to authenticate to Vault? What would a naïve approach look like?

The workflow would basically be: the client retrieves the signed instance identity document from the metadata service, the client sends this signed document to Vault, Vault verifies the signature, and then a Vault role maps the instance to policies using attributes of the instance.
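
For reference, the first step of that flow is just a couple of metadata-service reads; this is a minimal sketch and only works from code running on the instance itself:

```python
import requests

METADATA = "http://169.254.169.254/latest/dynamic/instance-identity"

# The identity document is plain JSON metadata about this instance...
doc = requests.get(f"{METADATA}/document").json()
print(doc["instanceId"], doc["accountId"], doc["region"])

# ...and AWS serves detached cryptographic signatures over it alongside.
signature = requests.get(f"{METADATA}/signature").text
```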

How would this naïve approach fit into our framework? This is where we stood before, with the awkward workarounds. If we add the naïve identity-document method, we get good granularity, because you know exactly which EC2 instance is logging in. It's not temporary, though, because the instance identity document doesn't change as long as the instance is running. You also don't get replay protection or theft protection, because any other process that obtained the document could send it to Vault and impersonate that instance. Additionally, it doesn't work for any other AWS service.

There are a few things the Vault engineering team worked on to improve this naïve approach because, as I said, it has a number of drawbacks. I'm not going to go into detail about them, because I'm assuming you all want to get to the closing happy hour tonight. Some of these knobs include role tags: a role tag is a tag you can apply to an EC2 instance that makes a role strictly less permissive. Client nonces are something clients supply at first authentication to prevent somebody else from re-using that identity document to log in to Vault.

Disallow re-authentication is a way of administratively telling Vault that a given instance should never be able to get another login token. Allow instance migration is a feature where, if the hypervisor stops and starts the instance, Vault relaxes some of the requirements around reusing that identity document.

The fact is, these knobs just add complexity. Every knob you add increases complexity in the system. The old adage is true, you can't secure what you don't understand.

This is the original AWS-EC2 authentication backend. I'd like to thank Seth Vargo for this diagram; I shamelessly borrowed it from the Vault documentation. The workflow is basically: the EC2 instance gets its identity document from the metadata service and sends it to Vault; Vault verifies the signature, goes out to the public EC2 APIs to verify that it's a valid instance and to look up other metadata about it, and, if it is a valid instance, compares the metadata about the instance to what's allowed in the role. If all of that passes, the client gets a Vault token back.
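
In practice that workflow boils down to something like the following sketch; the Vault address, mount path, role name, and nonce are assumptions for illustration, and the PKCS7 form of the identity document bundles the document and AWS's signature together:

```python
import requests

VAULT_ADDR = "https://vault.example.com:8200"  # placeholder

# Fetch the PKCS7-signed identity document; only code on the instance can.
pkcs7 = requests.get(
    "http://169.254.169.254/latest/dynamic/instance-identity/pkcs7"
).text.replace("\n", "")

# Log in to the EC2 auth backend; Vault verifies the signature, checks the
# instance against the EC2 API, and compares its metadata to the role's bounds.
login = requests.post(
    f"{VAULT_ADDR}/v1/auth/aws-ec2/login",
    json={
        "role": "my-ec2-role",       # placeholder role name
        "pkcs7": pkcs7,
        "nonce": "my-client-nonce",  # the client nonce mentioned earlier
    },
)
login.raise_for_status()
print(login.json()["auth"]["client_token"])
```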

How does this fit into the framework? I want to preface this discussion by saying that I'm talking about some of the drawbacks of the original AWS-EC2 backend, but please do not hear this as a critique of the Vault engineering team or the work they did. They were very constrained by what was possible in the AWS ecosystem at the time, and I think it's actually a great and innovative solution given what was possible with AWS, but there are still trade-offs. Every system has trade-offs, right?

It gives you pretty good granularity, you don't really get temporary credentials, you get some replay-attack protection and some theft protection, but it doesn't work for every AWS service. It's still EC2-only.

Let me take a bit of a digression into the AWS authentication protocol. Fundamentally, it uses an access key and a secret key; you can think of these like a username and password. The key difference, though, is that the secret key is never actually sent over the wire. What happens is clients create what Amazon calls a canonical request, sign that canonical request using HMAC-SHA256 with a key derived from the secret key, and append the signature to the request sent to Amazon. When Amazon gets the request, it regenerates the canonical request, retrieves the secret key, re-signs the request, and compares the signatures. In addition, the timestamp is included in the signed request, which provides major replay protection: if you're more than 15 minutes off, Amazon just denies the request.
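
To make the "key derived from the secret key" part less magical, here is a rough sketch of the SigV4 signing-key derivation; the values are illustrative only, and, as the speaker notes later, you shouldn't hand-roll SigV4 in real code:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

# Illustrative values only; this is not a real secret key.
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
date, region, service = "20170919", "us-east-1", "sts"

# SigV4 derives a signing key from the secret key, date, region, and service...
k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
k_region = hmac_sha256(k_date, region)
k_service = hmac_sha256(k_region, service)
k_signing = hmac_sha256(k_service, "aws4_request")

# ...and the final signature is an HMAC over a "string to sign" that embeds a
# hash of the canonical request plus the timestamp (hence the replay protection).
string_to_sign = "..."  # built from the canonical request per the SigV4 spec
signature = hmac.new(k_signing, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
```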

Lastly, Amazon has a notion of temporary credentials, credentials that self-destruct after a certain period of time, which is also very appealing to us if we don't want long-lived credentials.

Can we use this for Vault? The reason it's so appealing is that Amazon has already done the hard work of giving credentials to things running on it. For example, EC2 instances get credentials via instance profiles through the instance metadata service, Lambda functions get credentials through environment variables, and ECS tasks get credentials from an ECS-specific metadata service. One of the big points of going to the cloud, beyond the other benefits I talked about earlier, is letting the cloud provider do a lot of the hard work for you.

Another piece of hard work Amazon has already done: their authentication protocol is battle-tested and used across the world. It meets most of our requirements, so it would be great if we could use it. Unfortunately, the use of HMAC-SHA256 with the secret key means that if Vault were to validate the signature itself, it would need to know the actual AWS secret key, and AWS, for good reason, won't share it with Vault. I wouldn't want Amazon sharing my secret key with anyone else either; that's a recipe for disaster.

We're not dead in the water yet, though. Imagine for a second that Amazon had some sort of conceptual WhoAmI method; it would solve this problem. The workflow would look something like this: the client signs this WhoAmI API request and sends the signed request to Vault, Vault just forwards it on to Amazon, Amazon validates the signature and, if it's valid, tells Vault who signed the request. Effectively, this turns Vault into a man in the middle, but we're doing it intentionally, so it's okay.

Furthermore, this would work for any IAM principal type that could call this WhoAmI method. The downside, though, is that we sort of need Amazon to actually implement this WhoAmI method before we can use it. It turns out that in mid-2016, after a couple of years of us asking for it, Amazon finally added it, in the form of sts:GetCallerIdentity. Hooray!
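
For reference, calling it with boto3 is a one-liner, and the response is just the caller's identity; the fields shown are the real response fields, while the example ARN in the comment is a placeholder:

```python
import boto3

sts = boto3.client("sts")
identity = sts.get_caller_identity()
print(identity["Arn"])      # e.g. arn:aws:iam::123456789012:user/joel
print(identity["Account"])  # the AWS account ID
print(identity["UserId"])   # the principal's unique internal ID (more on this later)
```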

What this means is that it's now possible for Vault to use the native AWS authentication protocol. Again, the naïve implementation is that you send what I'm calling a bare signed GetCallerIdentity request to Vault, and then a Vault role binds an IAM principal ARN to a set of policies. You can think of this as similar to how pre-signed S3 URLs work, for those of you who are familiar with that. The one important thing to note is that AWS appears to add no authorization whatsoever around who can call GetCallerIdentity: every IAM principal can call it regardless of policy, and furthermore it has no knowledge of MFA. What this means is that you can't add an additional layer of restrictions around who can log in to Vault through IAM policy. It's not a big deal, but it's something to be aware of, so you don't assume, "Oh, I have a deny unless MFA-authenticated policy, so therefore you must be MFA-authenticated to log in to Vault." I actually tried that; it doesn't work, sorry.

How does the naïve AWS IAM method fit into our framework? This is where we stood before. If we add AWS IAM, it provides reasonable granularity: it gives us granularity on the basis of the IAM principal, which is actually fairly similar to how AWS itself handles access. Temporary credentials are possible because Amazon already includes them as a first-class citizen, so we basically get that for free. We also get replay protection for free. We get some credential-theft protection; for example, the secret key is never sent over the wire, so a compromised dev Vault server could never use what it receives to authenticate to AWS for anything other than this GetCallerIdentity request. However, if you have the same IAM principal bound to roles in both a dev Vault server and a prod Vault server, then a compromised dev Vault server could re-use that signed request to authenticate to the production Vault server and get production Vault credentials. That makes us a little nervous, but we can fix it. And it works for any AWS service to which you have IAM credentials.

How do we protect against this credential theft? Amazon, as I said, gives us a lot for free, but we can do better. The solution is actually quite simple: add a header. The AWS auth protocol provides a mechanism to include arbitrary request headers within the canonical request that gets signed. In particular, Vault knows about a special header, X-Vault-AWS-IAM-Server-ID (which is not case sensitive, by the way). Per mount of the AWS auth backend, you specify a required value; my recommendation is to specify a unique required value per security boundary, for example the Vault cluster address that clients use to talk to it. If you configure that value, Vault does a few things. It validates that the header value included in the request is what was configured administratively on the mount. It also validates that the header is actually among the signed headers in the canonical request, and then forwards the request on to Amazon. Amazon validates the signature for us, making sure nobody has tampered with it, and we get that for free. You should configure it.
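
Configuring that required value is a single write to the mount's client config (the CLI equivalent would be something like vault write auth/aws/config/client iam_server_id_header_value=...). Here is a hedged sketch against the HTTP API; the mount path "aws", the address, and the header value are assumptions for illustration:

```python
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.com:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]

# Require clients to sign the X-Vault-AWS-IAM-Server-ID header with this value.
resp = requests.post(
    f"{VAULT_ADDR}/v1/auth/aws/config/client",
    headers={"X-Vault-Token": VAULT_TOKEN},
    json={"iam_server_id_header_value": "vault.example.com"},
)
resp.raise_for_status()
```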

Now, the question is, how do we send the signed request to Vault? I've sort of been glossing over this, and it is the most complex part of the whole workflow. Basically, Vault needs to recreate the original signed request so that it can forward it on to AWS. In order to do that, it needs four pieces of information, all of which come from the canonical request.

The first is the HTTP request method, and right now POST is the only one Vault supports. You need the HTTP request URL, which Vault expects to be Base64 encoded; the HTTP request headers, which are a Base64-encoded JSON serialization; and the HTTP request body, again Base64 encoded. So what do these look like? The IAM HTTP request method is just POST; pretty simple, it's the only one we support, so you kind of have to put it there. If you look at the IAM request URL, it's a Base64-encoded string which is really just https://sts.amazonaws.com/. As a side note, Amazon canonicalizes the path to just a slash, so you can do either; it doesn't matter. The IAM request body is a bit of Base64 which in reality is just the encoding of Action=GetCallerIdentity&Version=2011-06-15; the request method is POST and these are the POST parameters, so nothing too surprising here. The last one is the IAM request headers, and that's a bit of an intimidating piece of Base64, I will admit.

If you Base64-decode it and pretty-print the JSON, this is what you get. There are a few things to note here. The first is that you see the X-Vault-AWS-IAM-Server-ID header; that's the header that we configure and tell Vault about. If you look at the Authorization header, about midway through the second line you see where it says "SignedHeaders": host, x-amz-date, and x-vault-aws-iam-server-id. That's how Vault verifies that the header is inside the signed request that goes to Amazon. You also see the X-Amz-Date header; that's how Amazon provides replay-attack protection.

This whole Authorization string probably still looks pretty magical to most of you. The question is, how do we generate it? If you actually look at the User-Agent, you'll see a hint as to how I generated this particular request, which was using botocore. One last note: in this request information that's sent to Vault, all the values in this case are strings; Vault accepts both strings and arrays of strings, because different languages produce different ones by default, so we're trying to make it a little friendlier for users.
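
Putting the pieces together, here is a rough Python sketch of generating those four fields with botocore and logging in; the Vault address, mount path, role name, and header value are placeholders, and this mirrors the general approach rather than any official client:

```python
import base64
import json
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.session import Session

VAULT_ADDR = "https://vault.example.com:8200"  # placeholder
ROLE = "myrole"                                # placeholder Vault role

# Build the sts:GetCallerIdentity request, include the server-ID header, and
# sign it with whatever credentials the environment provides (instance
# profile, Lambda environment variables, and so on).
creds = Session().get_credentials()
request = AWSRequest(
    method="POST",
    url="https://sts.amazonaws.com/",
    data="Action=GetCallerIdentity&Version=2011-06-15",
    headers={
        "Content-Type": "application/x-www-form-urlencoded; charset=utf-8",
        "X-Vault-AWS-IAM-Server-ID": "vault.example.com",  # must match the mount config
    },
)
SigV4Auth(creds, "sts", "us-east-1").add_auth(request)

# Base64-encode the four pieces Vault needs to reconstruct and forward the request.
payload = {
    "role": ROLE,
    "iam_http_request_method": request.method,
    "iam_request_url": base64.b64encode(request.url.encode("utf-8")).decode(),
    "iam_request_body": base64.b64encode(request.data.encode("utf-8")).decode(),
    "iam_request_headers": base64.b64encode(
        json.dumps({k: v for k, v in request.headers.items()}).encode("utf-8")
    ).decode(),
}

resp = requests.post(f"{VAULT_ADDR}/v1/auth/aws/login", json=payload)
resp.raise_for_status()
print(resp.json()["auth"]["client_token"])
```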

As I said, generating these headers is the most involved part of the login method. The easiest way is to let the Vault CLI do it for you: vault auth -method=aws role=myrole. Vault creates all the headers for you, sends them to the Vault server, the Vault server takes care of all the authentication, and then gives you a token back. Super easy, right?

A little bit trickier, but the other recommended way, is to ask the AWS SDKs to do it for you; you just have to know the right parameters to pass. If you want a Go implementation, just look at the Vault source code. It's open source; that's the great part about open source, right? In fact, you could even import Vault as a library and use it to do this for you. For other languages, ask on the mailing list; I, for example, have posted Python 2 and Python 3 implementations. I would not recommend trying to implement the AWS SigV4 protocol yourself, because you can spend a few hours on the Amazon documentation and it's still a little opaque.

This is pretty close to the final IAM auth method. We get some granularity, temporary credentials are possible, it gives us some replay protection, we can now get theft protection with the header if you configure it, and it works for any AWS service.

Just comparing the two: AWS-EC2 authenticates EC2 instances, and it evolved to allow binding to attributes of specific EC2 instances; for example, is the instance in a particular VPC or a particular subnet? IAM authenticates IAM principals, so it's ignorant of the bindings that are specific to an EC2 instance, because the principal could be anything. IAM also loses some of its magic for an EC2 instance that's not in an instance profile. This used to be a bigger deal, but now Amazon lets you dynamically change the instance profile of an instance on the fly, so you don't have to worry about that as much. However, customers like the flexibility of binding to EC2-specific attributes, so can we get some of that back?

Another digression, this time into AWS's sts:AssumeRole. The way to think about AssumeRole is as a credential transformer: you take one set of credentials, you sign the sts:AssumeRole request, you send it to Amazon, and Amazon returns a different set of temporary credentials.

There are two parameters that are germane to this discussion: the target role ARN and the role session name. The target role ARN is the thing that has all the policies and permissions you get from assuming it, and it's governed by normal AWS IAM permissions. Then there's the role session name parameter, which is chosen arbitrarily by the client. Importantly, both of these, the role name and the role session name, are visible in the response of GetCallerIdentity. Furthermore, the way to think about an EC2 instance in an instance profile is that EC2 itself calls AssumeRole on your behalf and then presents the returned credentials to the EC2 instance, choosing the instance ID as the role session name. What this means is that EC2 instances in an instance profile have a recognizable pattern.
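
A small sketch of that "credential transformer" idea with boto3; the role ARN and session name below are placeholders:

```python
import boto3

# One set of credentials signs the AssumeRole call; a different, temporary set
# comes back.
sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/my-app-role",
    RoleSessionName="i-0123456789abcdef0",  # chosen arbitrarily by the caller
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# GetCallerIdentity on the returned credentials exposes both the role name and
# the session name in the ARN, e.g.
#   arn:aws:sts::123456789012:assumed-role/my-app-role/i-0123456789abcdef0
assumed = boto3.client(
    "sts",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(assumed.get_caller_identity()["Arn"])
```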

Vault has a feature called inferencing. If you turn it on, it tells Vault to infer that the authenticated principal is also some other type of entity; right now the only supported type is EC2 instance. If you configure it on a role, Vault does a couple of things. It ensures the role session name is a verifiable EC2 instance ID, and if it is, it allows most of the bindings specific to an EC2 instance, such as whether it's in a particular VPC or subnet. You do have to tell Vault how to validate the instance, though; really, the additional thing it needs to know is what region the instance is in. IAM is a global service, but EC2 is region specific, so you have to tell Vault what region to look up that instance ID in.
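
A hedged sketch of what such a role could look like via the HTTP API; the mount path, role name, ARNs, IDs, and policy name are placeholders, and the parameter names follow the AWS auth backend's role API:

```python
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.com:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]

resp = requests.post(
    f"{VAULT_ADDR}/v1/auth/aws/role/my-inferred-role",
    headers={"X-Vault-Token": VAULT_TOKEN},
    json={
        "auth_type": "iam",
        "bound_iam_principal_arn": "arn:aws:iam::123456789012:role/my-app-role",
        "inferred_entity_type": "ec2_instance",   # treat the caller as an EC2 instance
        "inferred_aws_region": "us-east-1",       # where Vault looks up the instance ID
        "bound_vpc_id": "vpc-0123456789abcdef0",  # EC2-style binding enabled by inferencing
        "policies": "my-app-policy",
    },
)
resp.raise_for_status()
```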

One final note: you must be careful about iam:UpdateAssumeRolePolicy. The reason is what I said earlier: the role session name is chosen arbitrarily by whoever calls AssumeRole. If I can call UpdateAssumeRolePolicy on the role that your EC2 instance is in, I can give myself permission to assume that role, call AssumeRole, pass the instance ID as the role session name, and masquerade as your EC2 instance to Vault. Now, it's not dire, because anyone who has UpdateAssumeRolePolicy is a pretty privileged person in your account already, but it's an important part of the security model to know about, because Vault is all about being transparent about the security trade-offs involved. If you're uncomfortable with this, you can either go back to the EC2 auth method or not use inferencing.

The last thing I want to talk about is AWS IAM internal IDs. Every IAM principal, whether that's a user, a group, or a role, gets a unique internal ID that generally isn't exposed in the console. If you delete and recreate a principal of the same name, you get a new ID; so if I create an IAM user named joel, then delete it and create a new user named joel, it's going to have a different internal ID. By default, Vault uses this internal ID: when you create a role or update bound_iam_principal_arn, Vault looks up that unique ID. At client login time, Vault sees the unique ID in the GetCallerIdentity response and compares it with the stored value. So if I configure a bound_iam_principal_arn of the joel IAM user, delete and recreate the user, get new credentials, and try to log in to Vault, it'll fail, because Vault sees that the internal ID has changed.

Again, this is the exact same way that AWS operates internally: if you put an IAM ARN into a policy, AWS translates it to an internal ID, and if you delete that entity and recreate it, you'll see the internal ID show up in your policies. So this is naturally aligned with the way AWS manages a lot of its IAM permissions. However, wildcard binds don't, and can't, use the internal ID. A wildcard bind is where you specify a glob at the end of a bound IAM principal ARN; those just do a plain textual match, because (a) it's not possible to do anything else, and (b) that's sort of what AWS does as well.

This is the final IAM auth method, pretty much as it is today in Vault. Do you get granularity? I think you get pretty good granularity: you get granularity based on the IAM principal ARN, and also based on EC2 attributes if you want them. Temporary credentials are possible because they're first-class citizens. You get pretty good replay protection because Amazon gives you that for free. You can configure the header value to get theft protection, and it works for any AWS service.

More importantly, I think it's going to be future-proof. For any AWS services that come along where you get compute resources, Amazon is probably going to provide them with credentials, and you can just use those to authenticate to Vault. We don't have to worry about building a new login method every time Amazon launches a new service. I think it gives customers a lot of flexibility to meet their security goals: to easily authenticate to Vault and maintain a holistic security environment.

If anyone has questions, hit up the Vault mailing list. I'm pretty active on the Vault mailing list. You can also email me directly. I'd love to collaborate if people want to collaborate.

Lastly, just a few acknowledgements. I'd like to thank Dan Peebles, who's @copumpkin on GitHub. He's a co-worker of mine at Bridgewater and has been an invaluable thought partner as I worked through this. He reviewed all of the code before it ever saw the light of day as a public pull request; in general, all around, a super smart guy and really helpful. I'd like to thank Jeff Mitchell and Vishal Nayak of HashiCorp for all of their patience as we worked through this pull request. In fact, it was actually two pull requests, and at the time they got merged, I was responsible for two of the top five most-commented pull requests in the Vault repository. Tremendous thanks to them for all of their patience and help in making this a really great solution for all the customers.

I'd like to thank our AWS Enterprise Support team as well for their help. There were some assumptions we wanted to make about GetCallerIdentity that weren't necessarily covered in the documentation, and you don't want the core of your security system to depend on unverified assumptions about a third-party product; that's not a good idea. They were super helpful and responsive in clarifying how the behaviors we were relying on actually worked, which allowed us to build a really supportable, stable product. I'd also like to thank you all, the community, for being a great community. It's really a fun group to be a part of.

Lastly, I'd like to thank my pet cat Mio. When I first started writing the code she wanted to help me write the code so here she is, literally helping me write this code. If there's any bugs, blame them on her.

Thank you all.
