Interview

HashiConf 2020 Security Discussion Panel

Join Jeff Mitchell, HashiCorp | Andy Manoske, HashiCorp | Rob Cameron, Roblox | Sarah Polan, ABN AMRO | Gregory Lebovitz, Consultant for a security roundtable.

A panel of two brand-name customers, a 5-year HashiCorp technologist, and a principal engineer, moderated by a security expert, talk about:

- What challenges they had in secrets management
- How Vault solves these challenges
- The coolest new features helping their business from Vault v1.4 & v1.5
- The security problems looming on the horizon and what they’d like HashiCorp to do to solve them

Transcript

Gregory Lebovitz: Welcome to the security discussion panel. My name is Gregory Lebovitz. I am a security executive and moderator of today’s panel. We’re going to be talking about the problem of secrets management, and specifically how we started dealing with this problem. We’re doing that today with the help of some HashiCorp customers and some of the earliest technologists at HashiCorp in this area.

HashiCorp Customer Introductions

Let’s get started. Our customers with us today are Rob Cameron, who is the technical director of infrastructure at Roblox — and also a very long-term and dear friend; we worked together at several previous posts. Then also, we have with us Sarah Polan, who is the secrets management architect at ABN AMRO. Welcome, Sarah and Rob.

Rob Cameron: Thanks for having us, Gregory.

Sarah Polan: Thanks for having us.

Gregory Lebovitz: Let’s dive in; our other two panelists we’ll introduce right as they start to speak in a moment. Rob and Sarah, so glad you could be with us again.

Tell us a little bit about how you personally got into this whole area of secrets management. What was the problem you were addressing when you started going, huh, I think I need some product in this area to help me? And how did you go about getting into it?

Rob Cameron: I started at Roblox in December ’17, and our goal was to look at some more modern architectures around infrastructure. One area we struggled with — surprisingly, as that's what we’re here to talk about — was secret management. We had several different tools that gave us the ability to store secrets — keep them in a specified location.

While they definitely worked for us very well, they didn’t have integrations from an API perspective: the ability to easily pull information out of the secret systems and then inject it into various jobs. That led us to look at several different solutions, and of course, we ultimately ended up with Vault as our secret management product.

Gregory Lebovitz: Cool. Sarah, how about you?

Sarah Polan: I think on our end, it was this need to centralize secrets and make sure that we didn’t have a secret sprawl problem. We had so many development teams and so many people using various types of secrets that we needed to get those into one place, platform-independent.

Especially with the rise of containerization and containerized workloads, we wanted to make sure that everything was as secure as possible. When we were looking at various products — when we did the proof of concept for Vault — we were keen on the secure multitenancy that we get with Enterprise namespacing. We liked the dynamic credentials because they greatly reduce our attack surface. Also, the fact that it’s API-driven, and we can automate it as much as we want to — or as little as we want to. In our case, the less hands-on people are with secrets, the happier we are.

Then definitely a major consideration was that there’s such a huge community around Vault. Even if we’re using the enterprise product, the developer community can still reach out to the open source community and get a lot of the information they need to automate and optimize an integration with Vault.

The Path to Vault Enterprise

Gregory Lebovitz: So that was an interesting comment about open source. Did you start with secrets management using Vault open source, or did you start with the commercial version — the Enterprise version?

Rob Cameron: We started our journey for all of our HashiCorp products on the open source side. We found that right out of the gate, we got an amazing amount of capability from the product itself. But we wanted a little bit more, for things like namespacing, to have deep support for some of the use cases that we wanted — as well as some of the more advanced high availability and replication features. That drove us into Enterprise.

I liked that we could do everything out of the gate with open source, but effectively we grew into Enterprise as those needs became apparent. It was an easy transition for us to start utilizing it and all of the more advanced feature sets.

Sarah Polan: For us, we went straight for the enterprise product because of the government regulations and internal regulations that we have in terms of data security. We were quite limited and needed that secure multitenancy from the beginning.

We could run hundreds of different Vault instances or just opt for the namespacing. That helps us control everything. We know that the policies are being applied correctly to each namespace. If there’s ever any issue with an application or a specific secret, we know that that’s not going to affect our entire ecosystem. It’s limited to that one thing. That was the driving factor for the Enterprise — as opposed to the open source.
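A minimal sketch of that namespacing model, using the Vault CLI with hypothetical names (namespaces require Vault Enterprise):

```shell
# Create an isolated namespace for one team (Vault Enterprise only);
# its policies, auth methods, and secrets engines are scoped to it.
vault namespace create team-payments

# Enable a secrets engine inside that namespace; an issue with one
# application or secret stays contained to this namespace.
VAULT_NAMESPACE=team-payments vault secrets enable -path=kv kv-v2
```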

Gregory Lebovitz: I want to introduce two other people. These folks are HashiCorp employees and have been with the company for quite a while. Jeff Mitchell is the principal engineer at HashiCorp. He was one of the first developers on Vault. Andy Manoske is principal product manager at HashiCorp, and he was the first Vault dedicated product manager. Jeff and Andy, welcome.

Combatting Secret Sprawl

Sarah, I want to go back to something you mentioned when you shared how you got into secrets management. You mentioned this term — secret sprawl.

Can you describe what you mean by secret sprawl and give a practical example? Maybe also give a practical example of when that happened in your environment — or a previous environment you might have worked with Vault in?

Sarah Polan: I think secret sprawl is something that most organizations — especially if they have more than one system running — deal with. That is: where do you keep your secrets — those API keys and your certificates? It’s easy to end up with some in the source code and some in GitHub.

If you’re running a multi-cloud or a hybrid platform, then maybe you end up with some in KMS and some in Azure Key Vault. There’s no unified spot for you to keep your secrets.

Whereas when you’re presented with Vault, it gives you the opportunity to keep all of those secrets in one place. You know exactly who’s accessing them, which workloads are accessing them, how they’re being accessed, and how long they’ve been there.

The challenge with secret sprawl in an organization is that you don’t have any control over those secrets. You don’t know how many people have seen them or how long they’ve been in play. Getting rid of that secret sprawl once and for all is something I think is quite critical to most organizations. It’s not limited to the financial industry or the technical industry; anyone that has any systems needs to address the secret sprawl issue.

Gregory Lebovitz: In addressing it, do you end up getting a record or a trail of everybody or every machine or every process that’s touched a secret? I know there are a ton of compliance issues in the financial industry. Is that something that has to happen for you all, or is that something that you prefer just to have a higher level of security?

Sarah Polan: At any given point, I know particularly with PCI DSS — which is the credit card compliance — you have to be able to track all of these users at any given time and make sure that you know essentially which secret is being leveraged by which user.

Whether that’s a machine or a person, it doesn’t matter — but it’s part of this whole overall compliance. You can argue that it’s part of that regulation, but as you see more and more data attacks and data breaches, it’s something that everyone should be paying attention to. It’s not just limited to the financial industry.
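In Vault, that kind of trail comes from its audit devices, which log every authenticated request and response. A minimal sketch, with a hypothetical log path:

```shell
# Enable a file audit device; from then on, every request (human or
# machine) is logged, with secret values hashed, for compliance review.
vault audit enable file file_path=/var/log/vault_audit.log
```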

Gregory Lebovitz: I got it.

Moving from Static to Dynamic Secrets

I want to clarify one thing. You mentioned certificates. Are you storing the certificate involved, or are you storing something else about the certificate involved?

Sarah Polan: For teams that aren’t quite ready to move on to these dynamic certificates — which we would love, especially within the Kubernetes workload and any containerized solution — they’re storing the certificates themselves.

Usually, they’re third parties that have authorized these certificates, and then they leverage those just as a static secret. Not quite as advantageous as a dynamic certificate would be or using Vault for that PKI. But as long as it’s contained, then we consider that a step in the right direction.

Gregory Lebovitz: Then that’s the actual private key. The public key can be passed anywhere, anytime; we don’t care. It’s the private keys we’re thinking about when we’re talking about the certificates and the chaining?

Sarah Polan: It’s the private keys, but also, as long as we know where that public key is and can trace that public key, we’re happy also to use Vault for that.

Jeff Mitchell: I think there’s a lovely crawl, walk, run story there. We like to say crawl, walk, run at HashiCorp to describe the journey from on-prem to cloud, or the journey from static secrets towards more dynamic secrets. I think that exemplifies how this works.

You have teams that aren’t able to get to that point where they can dynamically generate secrets and consume those. In the meantime, you have a solution in Vault that can tackle that workload that you have. But when you’re ready to take that next step and generate certificates much more dynamically and for shorter lifetimes and more ephemerally, Vault is also ready to do that.

We’ve tried to make sure, over the years, that we’re addressing all aspects of that crawl, walk, run cycle so that you can always continue your journey to better security. But Vault’s always there to help you and be ready for you to do what you need to do at that point — and at that stage.

Andy Manoske: What’s interesting about this is that when you look at how certificate management and public key infrastructure traditionally worked, these were in many ways different products. These were independent solutions that would ultimately be the result of an independent team. There would be an independent way to deploy that solution, attribute identity and access rights afforded to it, etc.

This highlights one of the most exciting parts about Vault. Vault treats secrets management as a holistic problem. It’s not: I need to protect one secret differently than the other. It’s that all secrets are protected with an aggressive level of cryptography. It’s simplified from a development and operator point of view. But at the same time, we respect that there are different workflows associated with different types of secrets.

If you are ready and able to adapt to a certain workflow, Vault can do it — Vault can automate a lot of that workflow. There are a number of usability reasons why this is important. But I think from a security perspective we’re most excited that it mitigates side-channel attacks. A side-channel attack is an attack used by an adversary to circumvent cryptography.

Rather than going through the math associated with making it difficult or computationally intractable to break the encryption, you go around it. You exploit a flaw in the mechanisms used to handle keys, to create or decrypt that ciphertext, and so on.

Secret Engines Within Vault

The PKI engine, I think, highlights this really well. The different secrets engines within Vault — which is just another way of saying the different ways that Vault understands there’s a specific type of workflow and tries to automate as much of that workflow as possible for a specific type of secret — highlight what we’re trying to do when we build Vault: minimize side-channel attacks by taking away the complexity associated with keeping secrets protected within the context of that workflow.

I think it’s interesting, at least because it highlights the fact that it’s not just PKI. It’s a whole host of mechanisms that exist within Vault to uniformly protect secrets to a very high degree of security — but at the same time, to simplify the process such that an adversary couldn’t exploit complexity inherent to the workflow that a secret would be used in.

Gregory Lebovitz: Hearing the history of that and how you and the rest of the development team saw the problem as it was emerging is fascinating to me.

Real-Life Vault Customer Use Case 1: Secret Management

Rob, what are some of the things that today you’re addressing? That you’re right in the middle of working on these particular challenges or these particular problem sets with Vault?

Rob Cameron: Surprisingly — and I always joke about this — it’s secret management. Obviously a big part for us is being able to put static secrets in. We primarily utilize Vault with HashiCorp Nomad as well. That way, when we build out jobs to run in the orchestrator, it can pull these secrets, whether they’re Slack API keys or other standardized elements.

Gregory Lebovitz: Maybe give us 10 seconds on what Nomad does for you.

Rob Cameron: For us, Nomad is our primary orchestrator versus Kubernetes. We utilize it to run somewhere around 50,000 containers today in production, across — I think — 24 different clusters.

It powers, as well, all of our game servers. We don’t necessarily run the games themselves inside Nomad, but we use our custom matchmaking orchestrator and manage its global deployment using Nomad.

Before, we had done this primarily on Windows — again, it worked well for us and was a good solution. But migrating over to Nomad took global deployments, for example, from 45 minutes down to sub 10. Then we can do game updates using the system to do instantaneous updates.

We can spin out any game instance types that we want at any time. Nomad, overall, has been huge for us — and a little bit different than the world of Kubernetes. We still use — and I love — Kubernetes; it’s running here next to me. But unfortunately, they can’t always be together, based on the technical requirements that we had in the organization.

Gregory Lebovitz: Getting back to — you’re running Vault and secrets management with Nomad, and that’s where I interrupted you. Sorry.

Rob Cameron: No worries.

Well, a big part of it too is dynamic PKI certificates. Using Vault’s templating engine, we can specify all the certificates that we’d want to put into a job. From this, we can start up a new instance of whatever service we want — whether it’s Telegraf for ingesting metrics, logging, whatever it may be. We can spin up certificates per job and then have Vault manage the entire life cycle for them.
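A minimal sketch of that pattern with the Vault CLI; the mount path, role, domain, and TTLs are hypothetical, not Roblox's actual configuration:

```shell
# Mount a PKI secrets engine and define a role that constrains what
# certificates a job may request.
vault secrets enable pki
vault write pki/roles/service \
    allowed_domains=service.consul \
    allow_subdomains=true \
    max_ttl=168h

# Each job (for example, via a Nomad template stanza) requests a fresh,
# short-lived certificate at startup; Vault tracks the lease, and the
# template re-renders a new certificate before expiry.
vault write pki/issue/service common_name=web.service.consul ttl=72h
```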

What I love about it is the best secret that you have is the secret you never have to see or expose. They’re automatically generated, injected into the job, consumed, and then updated. And as an interesting side note, we run the HashiStack — Nomad, Consul, and Vault — on top of Nomad itself in four of our POP locations.

The certificates — the TLS for Consul, Nomad, and Vault — are automatically managed through Nomad and through Vault. They rotate — I believe it’s once every few days to once a week — depending on the needs there. At the end of the month, they will have run for almost two years. Surprisingly, we’ve never had to touch those certificates or even know what they are — they just continue to spin up and keep going for us — which is awesome.

Gregory Lebovitz: On that real quick, with the whole dynamic certificates. The opportunity for an attacker to be able to breach, explore, move laterally, identify where those keys might be, or a system that’s using those keys in memory right at the moment — and then grab the key. That timeframe is small — it sounds like?

Rob Cameron: It’s very small. Also, the way we use it is that each POP has its own certificate authority set — so in the event we’re ever concerned that there’s a breach, or anything that could have potentially happened, we can rotate the entire HashiStack for a specific POP, including all the TLS and including the CA. So even if the CAs were breached for any reason, we can spin everything out and just replace everything within half an hour.

Knock on wood, that has not happened. But you always want to have what I like to think of as a collapsible site. If there’s any concern about breach, you’re not necessarily pausing and looking at every machine for 20-30 hours to do forensics. You can wipe the entire site, and we don’t have any issues or concerns about certificates or anything being leaked, which is fantastic for us.

Gregory Lebovitz: Wow. That’s a cool use case. You were about to share another use case?

Real-Life Vault Customer Use Case 2: SSH Key Signing

Rob Cameron: My favorite personally — and it’s silly, but I love it — is the SSH key signing. I set this up when I initially rolled out the game servers around two years ago, so that we could have users securely access the hosts. We just didn’t end up using it and productionizing it. But we got around to it last fall, and we enabled developers and admins to get into the servers dynamically using signed SSH keys.

What I love about it — as opposed to managing keys and rotating them and having all sorts of software do it — is they get the signing, we have the authorization, and the Vault audit logs. Then when they SSH into the host, we have the authorization around which user is signing in, based on what was signed on the certificate.

The user can use their own keys — it doesn’t particularly matter to us. They get the authentication validated through the SSH signing in Vault, and it’s very simple then to access. I even set up a cool SSH-type of proxy solution. That way, if you’re accessing a very far POP that’s 200-300 milliseconds away, you get a little bit of TCP acceleration with this middleman service as well.

What I love about it — again — it’s a few commands to Vault to get your certificate, and then — boom — you’re ready to start administering and going all over the world to do what you need to do. Mostly, the people that use this are non-infrastructure focused — more users or developers on the game system — and they seem to like it.
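What Rob describes matches Vault's SSH secrets engine in certificate-signing mode. A minimal sketch, with hypothetical mount, role, and host names:

```shell
# One-time setup: mount the SSH engine and have Vault generate a CA
# whose public key the target hosts are configured to trust.
vault secrets enable -path=ssh-client-signer ssh
vault write ssh-client-signer/config/ca generate_signing_key=true

# Define who may be signed and for how long.
vault write ssh-client-signer/roles/admin-role -<<'EOF'
{
  "key_type": "ca",
  "allow_user_certificates": true,
  "allowed_users": "*",
  "default_user": "ops",
  "ttl": "30m"
}
EOF

# Per session: the user has Vault sign their own public key...
vault write -field=signed_key ssh-client-signer/sign/admin-role \
    public_key=@$HOME/.ssh/id_rsa.pub > ~/.ssh/id_rsa-cert.pub

# ...and logs in with the short-lived certificate.
ssh -i ~/.ssh/id_rsa-cert.pub -i ~/.ssh/id_rsa ops@pop-host
```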

Gregory Lebovitz: Andy, I don’t know if you caught that, but we got to think about maybe productizing that.

Andy Manoske: We’ll get right on it. Rob’s point here — and Sarah’s too — ties back really well to something that I think is important and has been very fundamental to Vault from the very beginning. In fact, some of these are secrets engines that Jeff himself personally developed.

Protecting Secrets Workflows

When we talk about protecting secrets in flight, not just at rest — holistically protecting secrets workflows — one of the best ways we’ve learned throughout the development of Vault is to use dynamism.

What I mean by use dynamism — to introduce some buzzwordy speak — is that we think like an adversary does. We try to define how we protect secrets within Vault using techniques that we call adversarial modeling.

We build Vault to protect against a very wide range of adversaries. Both in open source and Enterprise, we try to protect against well-resourced adversaries that we assume are able to circumvent the exterior protections of a system, get into your network, and potentially eavesdrop on ciphertexts being transmitted to and from Vault.

Assuming that this is all the case, it’s important that we minimize the transmission of long-lived stateful details, credentials, certificates, etc. The best way to ensure that an adversary couldn’t theoretically breach various layers of security and cryptography and steal a credential is to generate credentials or details that are very ephemeral. They’re short-lived. They expire very quickly. That dynamic credential creation process — or dynamic certificate process — is something that we think is very important, not just within the context of these different types of PKI workloads — or secrets management workloads — but across Vault.

Anytime we are trying to generate a credential or something that we know you’re going to use that is very sensitive, we want to make it as ephemeral as possible and minimize the period that it is usable.

I think the best example of this is certainly the dynamic credential process that we’re talking about. For example, I could have Vault protect your secret key for AWS and make sure that only the right users are given access to it. But there’s a challenge: if one of those users is fraudulent — or somehow the cryptography protecting it is circumvented — that credential could be used to breach your entire system.

So, rather than giving you the keys to the kingdom, we can use the keys to the kingdom to be able to create very short-lived — very scoped-to-a-specific-workflow credentials — that could theoretically be stolen and not give access to an adversary into your entire infrastructure.
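A minimal sketch of that dynamic credential pattern with Vault's AWS secrets engine; the role name and policy are hypothetical:

```shell
# Mount the AWS secrets engine and define a tightly scoped role.
vault secrets enable aws
vault write aws/roles/s3-reader \
    credential_type=iam_user \
    policy_document=-<<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:Get*", "Resource": "*"}
  ]
}
EOF

# Each read mints a fresh, scoped IAM user; Vault revokes the keys
# automatically when the lease expires.
vault read aws/creds/s3-reader
```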

I’m excited to hear Sarah and Rob talk about how this is a valuable proposition. Because this isn’t just a Vault problem; this is a security problem worldwide. We need to get away from taking out the keys to the kingdom every time we try to open the car door. We need to move away from long-lived, very powerful static credentials, and instead keep those very protected while creating dynamic credentials.

This is something that I know Jeff, as well as Mitchell and Armon, were passionate about from the very beginnings of Vault. But it’s something that we continue to focus on today as we learn more about use cases like Sarah’s and Rob’s, where we can minimize the ability of an adversary to steal very sensitive information and cause more data breaches.

Sarah Polan: I think it fits in quite nicely with this kind of mindset that as you move to this multi-cloud — hybrid cloud — you just have to assume breach. We need to try to stop a breach of any kind, but we also need to be looking at what happens when we are breached.

Dynamic credentials, in my opinion, are incredibly powerful and limit how much access somebody would ever be able to gain to any given database or certificate or cluster — just because they’re constantly rotating, and you’re constantly assuming that something has been breached or compromised in some form.

Customer Reception of New Vault Features — Integrated Storage and Splunk Integration

Gregory Lebovitz: Vault ships on the enterprise side roughly one release every four months. In the last eight months or so, versions 1.4 and 1.5 have both come out. Sarah, is there a specific feature or two that you all were waiting for and are glad to have — something from 1.4 or 1.5 you’ve now put into practice, or are about to?

Sarah Polan: Integrated storage for us ended up being a complete game-changer.

Gregory Lebovitz: What does that mean? What is integrated storage, and how is it different than what it was before?

Sarah Polan: Instead of using Consul as a backend, we can just use Vault and its own storage to store secrets and make sure that everything is stored adequately with enough redundancy.

When we were using Consul, as much as we loved Consul, it took more nodes essentially. And because of our limited infrastructure — because of various requirements and regulations — it was quite difficult for us to auto-scale something of that size.

Once we could reduce the Consul nodes and come down to running Vault — and the storage was already built into that — it allowed us to auto-scale and have a more auto-healing setup.
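A minimal sketch of the server configuration Sarah is describing, with integrated (Raft) storage in place of a Consul backend; paths and addresses are hypothetical:

```shell
cat > vault.hcl <<'EOF'
# Integrated storage: Vault persists and replicates its own data via
# Raft, so no separate Consul cluster is needed.
storage "raft" {
  path    = "/opt/vault/data"
  node_id = "vault-node-1"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/tls/vault.crt"
  tls_key_file  = "/opt/vault/tls/vault.key"
}

api_addr     = "https://vault-node-1.internal:8200"
cluster_addr = "https://vault-node-1.internal:8201"
EOF

vault server -config=vault.hcl
```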

Another thing we like with 1.5 is the Splunk integration. We have a great engineering team, but like any engineering team, our resources are a little limited. To have metrics and logging completely available to us at the drop of a hat saved us a lot of time and headache. It means we don’t have to keep up with that infrastructure change and the constant updates that can occur with running your own logging and infrastructure on that side.
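The metrics side of that is driven by Vault's telemetry stanza, which a Splunk monitoring pipeline can consume; a hedged sketch, with a hypothetical statsd address:

```shell
cat >> vault.hcl <<'EOF'
# Emit runtime metrics to a local statsd-compatible collector, from
# which a monitoring stack such as Splunk can ingest them.
telemetry {
  dogstatsd_addr   = "127.0.0.1:8125"
  disable_hostname = true
}
EOF
```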

Gregory Lebovitz: Andy, Jeff — maybe something different than Sarah might have said. Anything else that you’ve seen that customers have found interesting in the last eight months or so?

Andy Manoske: One of the things I’m most excited about in hearing from Sarah is that these are features — integrated storage as well as the Splunk integration — that we originally did not think of. These are the result of feedback from the Vault Enterprise community, the open source community, as well as new members that joined the team.

My colleague, Darshana, on our team spearheaded the initiatives to deploy integrated storage as well as a Splunk integration. Much of that was guided by her research with newer Vault Enterprise users, as well as prospects, the open source community, and her personal experience previously coming from a background in metrics and integration.

I think this highlights the evolution of the team. There was a time when Vault was Jeff, one or two of our software engineers, Armon — who continued to code very aggressively at that time and actually developed a significant portion of replication — various people in the company who would jump in and out of GitHub to throw in code, and myself. That was a period of Vault where we were guiding based off of our previous experiences.

Today, Vault represents hundreds of Vault Enterprise users’ experiences, and an open source community that has tens of thousands of users — and a growing team that has a very diverse set of backgrounds and perspectives on how we ultimately march towards this future of better protection of secrets. It’s exciting to hear that these are valuable features because these are things that were not part of the original plan in some cases.

Vault’s Transform Secrets Engine

In terms of other features in Vault 1.5 that are very interesting, we’ve seen a lot of uptake with something called the Transform Secrets Engine. The Transform Secrets Engine is a secret engine that was part of an initiative that we call the Advanced Data Protection package in Vault Enterprise.

ADP focuses on: how does Vault protect workflows that live outside of the barrier of Vault? What we call the cryptographic barrier is ultimately the exterior protection of Vault. Both Rob and Sarah talked about how Vault serves to protect various sacrosanct secrets that live within the context of a diverse multi-cloud infrastructure. In that world, we have seen Vault, since the beginning, as the bulletproof vest you wear in the tank. It is the last line of defense between an adversary and very sacrosanct information.

It’s easy for Vault to defend itself when it is a bulletproof vest in a tank. But what happens when I need to throw that bulletproof vest on someone who lives outside of the tank — outside of the protections of your infrastructure and outside of what Vault can control? That’s what the ADP module is for. The Transform secrets engine allows you to protect data that lives in a workflow completely exogenous to Vault.

How do I protect data like credit card numbers, social security numbers — in a variety of different forms — in a variety of different workflows that are going to reside outside of Vault’s cryptographic barrier?

Format-Preserving Encryption

1.5 released a number of capabilities that simplify the process, as a developer, of using that workflow to protect secrets. Especially within the context of format-preserving encryption — a very fast-growing area of cryptography — to protect, and automate the protection of, secrets that might reside within a file system, a database, or be splayed across multiple clouds.

It certainly has seen a lot of uptake since its release in 1.4 and 1.5. Because — like every other feature within Vault — it focuses on: how do I adapt Vault to protect what I’ve already got? I already have security infrastructure; I already have tooling. I’ve already got a diverse set of places where I store and use data. How do I make sure that — regardless of where it sits — I can have Vault protect that and automate that data such that I don’t need a Ph.D. in computer science to instrument a deep cryptographic infrastructure?
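A minimal sketch of the Transform secrets engine with format-preserving encryption (part of Vault Enterprise ADP); the role, transformation name, and sample card number are hypothetical:

```shell
# Mount the Transform engine, create a role, and attach a
# format-preserving transformation built on the credit card template.
vault secrets enable transform
vault write transform/role/payments transformations=card-number
vault write transform/transformations/fpe/card-number \
    template="builtin/creditcardnumber" \
    tweak_source=internal \
    allowed_roles=payments

# Encode: the ciphertext still looks like a valid card number, so it
# can live in systems outside Vault's cryptographic barrier.
vault write transform/encode/payments value="4111-1111-1111-1111"

# Decode recovers the original from the encoded value returned above:
# vault write transform/decode/payments value="<encoded-value>"
```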

Gregory Lebovitz: We have a few minutes left. I want to shift to the future. Jeff, Andy, maybe start with Jeff.

We have this open source community. They’re giving us feedback. We have our enterprise customers. What are we seeing on the horizon? What problems are people trying to solve — no roadmap commitments, no, “we’re definitely doing this and this timing”. What are some of the problems in the next 2-3 years that we think are important for us to be paying attention to and to be looking at?

HashiCorp Boundary

Jeff Mitchell: One of the things, obviously, is human-to-machine interaction. This is where Boundary comes in, Boundary being a new product that deals with what’s called software-defined perimeter. That’s one name for it. But really, it’s about gating access to network resources with just-in-time checks that are tied to identity.

It’s leaving behind the firewall and saying: I have knowledge of who this person is, potentially from their IDP — their identity provider. They want to connect to a resource, and I’m deciding at that moment, are they allowed to? And potentially doing things like injecting credentials and other types of checks.

We looked for a long time at how Vault handles the machine-to-machine interaction really well. It’s very API-driven. What about the human-to-machine interaction? In a lot of cases, you still have humans that are accessing Vault directly, but it’s not super friendly for that purpose. Or they’re storing the human-oriented credentials in a password manager and the machine-oriented credentials in Vault. Then you still have a lot of sharing that goes on with that password manager.

That’s a market that we looked at — this is a use case, a workflow, that Vault doesn’t tackle particularly well. What would it look like for us to tackle this? That’s the genesis of Boundary. It’s very early days for the product, obviously. There’s a lot that we have left to do, but we want to get to that point where it’s extremely easy — extremely seamless — for users to get access to services. And it’s still entirely API-oriented, it’s easy to deploy with Terraform, and it’s all these other things that you would expect from a HashiCorp product. That’s what I’m excited about personally.
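A hedged sketch of that workflow using the early Boundary CLI; the auth method and target IDs are hypothetical placeholders:

```shell
# Authenticate as a human user against a password auth method...
boundary authenticate password \
    -auth-method-id=ampw_1234567890 \
    -login-name=jeff

# ...then open a brokered session to a target; Boundary decides at
# that moment, based on identity, whether the connection is allowed.
boundary connect ssh -target-id=ttcp_1234567890
```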

Gregory Lebovitz: Boundary — it’s all about administrators, DevOps, developers, getting access to infrastructure. Those could be dynamic workloads, containers, the infrastructure that’s setting those things up and running and monitoring those things. Or it could be very static things like routers or firewalls, or bare metal servers?

Jeff Mitchell: Correct.

Gregory Lebovitz: It’s all about privileged access and how we control, monitor, audit, track that.

Jeff Mitchell: We’re looking at that problem space very holistically. As I said, it’s early days. But we have a large scope in mind in terms of things you can connect to and how you can connect to them, and where you can source the credentials — such as Vault, and so on. I hope that we can build the same community around it that we did with Vault. But I think it’s going to solve a lot of problems for a lot of people.

Gregory Lebovitz: Andy, what’s one thing on your radar?

Simplifying Difficult Vault Workloads

Andy Manoske: Well, one thing is probably: how does Vault simplify workloads that have inherently been very difficult to simplify?

When I say that, I think the most important one is hardware — looking at things like utilization of system resources such as TPM chips, to be able to retrieve secrets, authenticate systems from a trusted platform module, and better integrate with hardware.

It highlights that there’s still a very large world of enterprise security where things have just inherently not been simplified in any degree, shape, or form. So, introducing ways for Vault to automate the process of integrating with that deep level of security hardware is important. Because one exciting thing about Vault is there is a wide range of use cases. There is a very wide range of adversaries that our users — both in open source and Enterprise — pose Vault against on a day-to-day basis.

For example, users today are integrating Vault with quantum computers via the entropy augmentation features — as well as Vault’s PKCS#11 integration — to draw entropy from a quantum computer. That takes only a few lines of changes to Vault’s config stanzas.
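A hedged sketch of that configuration: entropy augmentation sources randomness through a PKCS#11 seal; the library path, slot, PIN, and key labels are hypothetical placeholders.

```shell
cat >> vault.hcl <<'EOF'
# Seal backed by an external PKCS#11 device (HSM, quantum entropy
# source, etc.).
seal "pkcs11" {
  lib            = "/usr/lib/libCryptoki2_64.so"
  slot           = "0"
  pin            = "AAAA-BBBB-CCCC"
  key_label      = "vault-hsm-key"
  hmac_key_label = "vault-hsm-hmac-key"
}

# Augment Vault's entropy with randomness drawn through that seal.
entropy "seal" {
  mode = "augmentation"
}
EOF
```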

We want to bring that same level of simplicity for very inherently complex workflows to other areas of hardware security — as well as enterprise-grade security that’s typically only seen in the Fortune 10, Fortune 5, and allow everyone to have that level of security with just a few lines of text.

Gregory Lebovitz: Move it down market, make it accessible, make that same strength accessible to everyone.

Andy Manoske: Exactly — complexity breeds insecurity. At the end of the day, when you introduce complexity into any kind of security workflow, you create openings for an adversary. One thing we’ve learned throughout our careers — before coming to Vault as well — and have seen firsthand within Vault, is that if you want to make a workflow secure, you have to make it as simple as possible.

This is not just a Vault thing. This is something that you see really well within Boundary as well, where you simplify the question of: how do I establish a session? And how do I ultimately protect how users and applications communicate with an end target system?

If we can simplify workflows like that — if we can simplify inherently complex workflows — we can dramatically improve the quality of security. This is something that drove me to the company and that I remain very passionate about today, as does the rest of the team. Because there’s so much complexity out there in security — especially in data security — we continue to simplify in order to improve security, not just for Vault users, but for everyone that could be a recipient of Vault’s protection.

Gregory Lebovitz: Sarah, any of that resonate with you? What are you pushing for, for us to do for you?

Sarah Polan: For us, already the integrated storage — but then also having the snapshot capabilities for backup and restore. That simplifies things and makes sure that if we do have a situation where we need to get a Vault back up and running — for whatever reason; maybe somebody’s deleted a secret, or we lose a Vault completely — those backups, restores, and snapshots help us do that in a timely manner and make sure that our workloads don’t experience any downtime.
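With integrated storage, those snapshots are a pair of CLI operations; a minimal sketch with a hypothetical file name:

```shell
# Save a point-in-time snapshot of Vault's integrated (Raft) storage...
vault operator raft snapshot save vault.snap

# ...and restore it when rebuilding a cluster or recovering data.
vault operator raft snapshot restore vault.snap
```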

For me, something that I’d love to see is pushing that envelope a little bit with Vault. As somebody in security, the Internet of Things can be a little worrisome for me because there’s no standard of encryption. There’s no way to make sure that the information that passes between your device and your hardware is going to be secured in any manner. So, with what Andy is saying about leveraging Boundary and leveraging Vault to enable hardware — well, how far can we push that? Can we push that to the edge? Can we push that into the fog? And is that something we can do with immutable code?

For me, it’s quite interesting to hear all of you guys talk about your use cases and what you’re thinking of going down the line because then that allows me to push that out and start thinking about things that I would like to see — or things that I think the industry could use in general.

Gregory Lebovitz: That’s helpful. Well, I want to thank Sarah, Rob, Jeff, Andy so much. It’s been fun sitting with you, hearing about your stories, hearing about the origin stories, hearing about what we’re able to do, and getting some ideas for the future. Thanks very much, everybody.

Sarah Polan: Thank you.

Rob Cameron: Thank you.
