
Keynote - HCP Vault, HCP Consul, and Boundary

Watch the Day 1 Keynote from HashiConf Digital US 2020

Transcript

Dave McJannet: I'd like to welcome you all to the second ever HashiConf digital event. For those of you that have been in our community for quite some time, you'll know that we traditionally run two events. One in the summer in Europe, one in North America in the fall. And we're excited to be able to bring you all back together again, albeit digitally.

Introduction and Overview

In terms of a quick agenda, I'm going to spend a couple of minutes giving a little bit of backdrop on HashiCorp — the company — what we've been doing for the last 12 months. Then I'm going to hand things over to Mitchell and then to Armon to share what you've come here for — some of the cool product news that we've got to share.

Although we prefer to be in person, we've learned over the last six months that these digital events can be an even more personal experience than the in-person events that we have become accustomed to.

I want to thank our events team, who've been working tirelessly to build a platform to allow this to be a unique digital experience and curate a set of content that represents the depth and the breadth of the product portfolio that is HashiCorp — the company.

Welcome to HashiConf Digital

One of the best things about doing this event digitally is the scale that we can achieve. Our European event was a great example. We anticipated having 800 people at the in-person event, and we ended up with over 8,000 people attending digitally. Likewise, today, we have over 12,000 people registered for this digital event. Thank you again for making the effort to be here digitally.

You represent 102 different countries and six different continents, which is amazing because we have never visited or participated in many of these countries. As a software company, it's fascinating to me that modern software gets adopted by users that discover the product on the internet and bring it into their organizations. Fast forward, and we now see applications around the world that are built using our technology in places that we've never engaged with.

Trust me, as a company, we're trying to grow as quickly as we can to better service the growing roster of really large organizations that run our products every day. And we thank you again for being part of our community.

Your Support Enables Our Growth

Some of you who were with us at HashiConf Portland in 2016 can attest to how much we've grown. Since that time, we've added over 1,000 employees to HashiCorp — the company. These are the people that write the documentation, engage with the users, and — most importantly — build the products that have innovated in incredible ways since then.

Think about the product evolution of what Terraform, Vault, Consul, Nomad, Packer, Vagrant were doing four years ago and compare that to the breadth of problems that we solve for people in 2020. It speaks to the continued investment in the company that only your support enables.

We run this really unique model that has a gigantic community, an enormous partner ecosystem, and a growing roster of commercial customers that fund a virtuous cycle to allow us to continue to bring products to market. This is the community that makes everything possible.

HashiCorp #4 in the Forbes Cloud 100

We've been fortunate to receive a bunch of recognition over the last 12 months. One of them was that — for the second year in a row — we have been ranked number four on the Cloud 100 list from Forbes.

While that is certainly humbling for all of us, what got us more excited was looking at the other companies on that list and knowing that the vast majority are built using HashiCorp products somewhere.

That tells us the cloud-native companies that understand how to run cloud infrastructure have adopted a blueprint that largely contains Terraform, Vault, Consul, and — in many cases — Nomad.

This means we know that — as the mainstream of the Global 2,000 starts to adopt cloud — they are looking to the cloud-native ecosystem for the blueprint of the cloud operating model.

HashiCorp Users Share Their Experiences

We've also seen some of that become more and more public. It's really awesome for us to watch the biggest applications in the world — people like Slack — talk about how Terraform and Consul underpin their processes.

We also saw a blog post from Cloudflare about how Nomad and Consul play a critical role in that application that literally supports a double-digit percentage of the world's internet traffic.

That's a role we aspire to play. We like to play this behind the scenes role. We like to play this enabling role for the most important applications on the planet. That's what we're going to keep doing.

For those of you that are gamers, you'll also see Roblox speak here about how they use the HashiCorp stack to bring a gaming experience that has become really important for a huge percentage of the population over the last six months as we've all been stuck at home. The scale of what they've achieved is truly astounding — thank you to them for sharing.

We've seen over the last 12 months, in particular, a growing roster of commercial Global 2,000 companies that have become our partners as they adopt this cloud model for infrastructure. They represent every vertical — every geography — because the cloud is a horizontal thing.

The push for digital transformation — as the consultants like to talk about — really means people need to build net new digital applications. And those digital applications are going to be on the cloud platform.

The implication is that all of these companies are taking their cue from the cloud-native companies as to how to do that in a consistent way. And you're seeing some of the very largest companies on the planet talk about how they've adopted our products to do that. Thank you to them. They help us make this all work.

We now have well over 1,000 commercial customers and a material proportion of the Fortune 500 that we are fortunate to count as commercial customers. We know the role we play for them. We understand how serious the role that we play is — and we're going to keep doing it.

A Strong and Growing Ecosystem

But it’s also fun for us to watch the growth of Terraform Cloud because that speaks to the ubiquity of the products that we bring to market every day. We continue to have over 6,000 users every single month sign up for Terraform Cloud — 6,000 new users every single month. That's hundreds every single day, to whom we bring this collaboration experience around the use of Terraform that is driving so much of the infrastructure spend on the cloud platforms today.

Secondarily, in addition to the commercial side of our business that allows us to fund this, we have a growing roster of technology integrations. We were humbled to win Partner of the Year from Google and Azure last year.

We play a very critical role for Amazon who does not have an equivalent program, which speaks to the truth of what it is we're enabling. We are enabling the cloud providers to bring more workloads under management through the use of our tech.

With our growing roster of technology partners, we know we play this role of enabling consistency, which means we have an integration challenge that we solve for a lot of people. You can use Terraform today to provision almost 400 different kinds of technologies using a consistent workflow.

Lastly, our growing roster of system integrator partners now totals over 400. They are doing the heavy work of bringing their skills to bear inside the Global 2,000, which are starting to move en masse to the cloud.

Thank You to Our Sponsors

I'd be remiss if I didn't give a shout out to our partners Amazon, Azure, and Google, who are our platinum sponsors — sponsoring this event and helping us bring it to you at the scale that you see here.

I'll also call out Cisco separately, because it's interesting to me how our products are increasingly being used in the on-prem datacenter. They're increasingly being used in the networking sphere. People use Terraform to configure networking gear. They use Consul to automate networking gear — in many instances, to extend the automation capabilities of networking.

This is a growing trend you’ll see from us over the next 12-24 months, as our products become increasingly cemented as a common way to interface to infrastructure of all types.

Amazing Community Growth

Finally, I know — and you know — this is all about the user. The practitioner is at the center of everything that we do. We continue to invest deeply in making our users successful.

One example is our user groups around the world, which now number almost 35,000 participants. Given the backdrop of a remote environment, we understand how hard this is for everybody. A special shoutout to all the user group organizers that make all this possible in a digital world. We're all here to help you in whichever way we possibly can.

We've also invested in the new flexible learning paths on learn.hashicorp.com. You can see we've invested deeply in free content for how-tos on how to use all of our products.

Finally, we've introduced certifications because we see a lot of jobs out there in the market that call for HashiCorp skills. We think that by providing a certification mechanism, we can reduce the friction to consumption of cloud infrastructure by providing some modicum of standardization around the skill sets required to interface to the cloud. We have huge numbers of people that have taken our Terraform and Vault certifications. Keep an eye out for the Consul certifications as well.

I want to thank you for letting me share a little bit about what's going on at the company. I'm going to hand things over first to Mitchell and then to Armon to talk a bit about what we're doing in the world of security. But I'd also ask that you stay tuned and participate in our keynote tomorrow morning, where we're going to have some interesting practitioner-based announcements as well — Mitchell, over to you.

Mitchell Hashimoto: Thanks, Dave. As Dave said, today, we're going to be focusing on security. We're going to be talking all things security and what HashiCorp has been doing in that space.

Developing App Workloads Consistently in the Cloud

When HashiCorp talks about its mission, we talk about making it more efficient to take applications into the cloud. When we talk about that, we talk about the four pillars of provisioning, security, networking, and runtime.

When we look at these four pillars, we have our software alongside each of them. These are the important things for organizations to look at as they make the change between on-prem environments and cloud environments — or, more generically, between traditional environments that are more static and modern environments that are more dynamic.

Static to Dynamic

Today, we're going to focus on the security layer of this diagram. Thinking through that transition from static to dynamic for security, these diagrams give you an idea of what we're going to be talking about here.

On the left, we have the more traditional static approach. You can notice that there's a four-wall perimeter approach to this. We'll talk more about that in a second. On the right, you see a more modern approach, which lives across multiple platforms — potentially multiple cloud platforms — and with dozens of applications that need to communicate to each other securely.

Migration to the Cloud

So, diving into more detail on this transition from static to dynamic. On the static and traditional side, we often see security practices focused on perimeter-based or networking-based security.

In this model, you have the idea that on your network you tend to have four physical walls — potentially your datacenter — and all the stuff inside is trusted, while the stuff outside is generally untrusted. In between, you have something like a firewall protecting that. Within your environment, we usually use IP-based security to prevent or allow access between different internal services.

In a more modern approach, this doesn't work super well. In modern environments, you tend to not have a physical structure — such as the datacenter that you own — to provide that security. It's much more of a software-based security model. You deploy things into software-based networks whose physical resources you don't control.

This requires a slightly different way of thinking. We have to think more about dynamic access because creating new services, new servers — bringing online new components of your infrastructure — is just an API call away. We have to get used to that dynamic access part of security that you tend not to have as much in a traditional static world.

The other thing is moving away from perimeter and IP-based security to more of an identity-based security approach. You have to do this because the perimeter is mostly gone. And as you're talking between multiple platforms, the network connectivity and the IP control might not be there. Identity-based security makes the most sense.

As applications have evolved, we're moving away from one-application-per-VM — or one-application-per-IP — type approaches and more into a multiplexed, scheduled workload environment with tools like Nomad and Kubernetes. That pushes it even further towards this identity-based paradigm.

Multi-Cloud Security in a “Zero Trust” World

Often, when we think about these modern security workloads, the term zero trust comes up. Zero trust can be a little confusing because it's often not well defined. We believe in the zero trust model as well, but I want to start by defining it and talking about how we're addressing zero trust very specifically as part of its definition.

Zero trust is the idea of moving towards identity-based controls as the source of all security. When we think about the things that need to have security and identity-based controls attached to them, we come up with these four primary categories: machine authentication and authorization, machine-to-machine access, human-to-machine access, and human authentication and authorization.

These are the four broad categories that address all the needs of our security and identity-based approach. The goal is to trust nothing — but to do that, we have to authenticate and authorize everything using these known identities.

So, given that, let's go through each one of these categories and more concretely define what we're talking about here.

Machine Authentication and Authorization with Vault

Services that are nonhuman actors within your infrastructure need to have the ability to authenticate themselves and authorize themselves to access different parts of the infrastructure. This might be database credentials, data itself, or each other.

Let's start with this problem for machines. We took a look at this with Vault. Vault is a piece of software that came out in 2015. With Vault, we were addressing this machine authentication and authorization problem directly.

The Vault Approach

Vault works like this — on one side, you have clients that want to access some secret material. That could be a static set of secrets, database credentials — it could be other sorts of credentials, etc.

To do that, they first need to prove their identity. They do this using a dynamic approach. They use the identity that they have — and this identity may change depending on the environment they're in.

If we're talking about an EC2 instance on AWS, we're likely using our AWS credentials to prove our identity. But if we're talking about an on-premise service, we're probably using something like Active Directory or some other on-premise solution to prove our identity. Vault allows you to mix these depending on who is talking.

First, we authenticate using that identity with Vault. Then, based on a policy within Vault, we're able to authorize whether that identity can access the secret material that they want to access. Assuming the policy does pass, the client gets access to that secret material.

Using this highly dynamic approach, Vault has been able, in a very flexible way, to support clients on any platform — traditional and modern — and bring it all together to access any sort of secret material. Whether it's plain data like files and key-value, or something more dynamic, like generating SQL credentials on the fly.
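
As a rough sketch of what this flow can look like from a client's point of view, here's the Vault CLI version. The auth method, role name, and secrets engine mount shown here (aws, web-app, database/creds/app-readonly) are illustrative assumptions; they depend entirely on how a given Vault is configured.

```shell
# Prove identity using the platform the client runs on (an AWS IAM role here;
# an on-prem client might use `vault login -method=ldap` instead).
vault login -method=aws role=web-app

# If the policy attached to that identity allows it, ask Vault for a
# dynamically generated, short-lived database credential.
vault read database/creds/app-readonly
# Key           Value
# lease_id      database/creds/app-readonly/...
# username      v-aws-web-app-readonly-...
# password      (generated on the fly, revoked when the lease expires)
```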

Vault has been popular. It's grown tremendously in five years. In the past year, there have been almost 16 million downloads of Vault. Of those 16 million downloads, over 600,000 were for the Vault and Kubernetes integration alone. Vault now serves trillions of secrets every year for our users and customers.

ABN AMRO: A Vault Case Study

Over 70% of the top 20 US banks are now using the commercial version of Vault. To showcase a bank that is using Vault, I'd like to introduce Sarah Polan, who's going to talk more about how ABN AMRO uses Vault — Sarah.

Sarah Polan: Thank you for the introduction, Mitchell. At ABN AMRO, we're currently working to enable security through automation. So, we received a directive from our CSO that indicated we needed to direct and automate security decisions.

I think that's an excellent decision. However, how do you logistically manage that for 26 different business applications — all of which are leveraging different technologies — with regulatory requirements across 19 different countries, for 18,000 employees and contractors? It's not a small feat.

Enabling Security Through Automation

We decided to focus on our secrets hygiene. This means keeping our secrets in a safe spot and teaching teams how to leverage that lifecycle. We decided we wanted to have a centralized solution — something that could handle multi-cloud and also on-prem solutions. Secure multi-tenancy was non-negotiable — we needed to be sure that whatever we were using was limited to that singular space.

Last but not least, we thought support was incredibly important. Not only vendor support — knowing that they would be there if we had an issue or could help us with the underlying architecture — but also the community support around it. Something that our developers themselves could go and leverage should they have a question about the best way to integrate Vault.

Why We Chose Vault

When we initially chose Vault, we thought we were going to be leveraging a secure, stable method for storing secrets — anything that a normal vault would do. We made this more defined by indicating that we wanted this to be for authentication and authorization use — for identity and access management.

At the time, we started looking at different solutions. We were seeing API keys, certificates — specifically static certificates — usernames, and passwords. We were noticing a lot of database credentials that needed to leverage privileged users for databases.

As we discovered a little bit more about Vault, we wanted to allow teams to use the dynamic secrets. We thought that would help our security positioning the best, and we thought it would be best for teams to be able to leverage that. We started looking at that, particularly for ephemeral workloads.

Lastly, we needed to empower teams to automate secrets to their relevant use case. We had this great plan where we were going to create a campaign around secrets management and teach teams how to integrate Vault for their use case.

Vault Exceeded Our Expectations

Well, as with any use case, things evolve. Vault has actually provided us with a fully onboarded solution. For teams, the onboarding is completely automated. They don't have to know about Vault. There's also no human interaction — and for us, that's key. That means no human eyes — secrets are actually secrets.

We've also shifted our narrative on TLS certificates. Instead of having a large CA infrastructure, we started looking at how we can shift to dynamic certificates that are shorter-lived and will also increase our security posture.

We've opened the dialog for Encryption as a Service. As many of you know, encryption is incredibly expensive. It's difficult to implement within applications. So the fact that we might be able to do this with Vault — well — there's a huge business case around that.

But lastly — and most importantly to me — it's permitted the CISO team to become enablers. Instead of standing in the way of our development teams, we are now able to leverage Vault to maintain their velocity and, in some cases, maybe even increase their velocity.

Back to you, Mitchell.

Mitchell Hashimoto: Thank you very much, Sarah.

Vault Customer Feedback

Over the past five years, there have been millions of downloads and happy users using Vault. And over those five years, we've continuously listened to feedback to improve Vault. Today, the two most common pieces of feedback that we hear are:

  1. Finding the skills necessary to use Vault is sometimes challenging
  2. Getting up and running with Vault can take a little bit longer than people would like

We wanted to address these two pieces of feedback.

To address the challenge of getting started with Vault, there are multiple delivery options for using Vault. You could continue to use Vault the way it has worked since 0.1 — which is that you manage it yourself. You download Vault, run it on your own infrastructure, and you can run it wherever you want. That's what existed up to today.

The HashiCorp Cloud Platform Vision

The second is thinking about a cloud service and managed Vault. Earlier this summer, we talked about how a cloud-based Vault offering was on the way — and that cloud-based Vault offering would be based on something we call the HashiCorp Cloud Platform.

The HashiCorp Cloud Platform is based on three main goals. We want to provide push-button deployment of our software. Second, we want all the infrastructure running the software to be fully managed. You shouldn't have to worry about OS patching, spinning up infrastructure, etc. Third, we want to provide one multi-cloud workflow for all our tools. We want this to be able to work across different cloud platforms.

Announcing HCP Vault Private Beta on AWS

Based on HCP, we're proud today to announce HCP Vault on AWS. This is now available in private beta. With HCP Vault, you get these three pillars of HCP.

Push-Button Deployment

You log in, name your cluster, choose a network to attach it to, click create. And in a few minutes, you'll have a full running Vault cluster. All of this is based on fully-managed infrastructure. You don't have to provide any servers or spin up any on your own — we handle all of that for you. We also handle any of the security issues, OS patching, upgrades, etc., associated with that infrastructure.

One Multi-Cloud Workflow

This is all built for one multi-cloud workflow. This means that — while today we're announcing HCP Vault on AWS — all of the features and underpinnings are there to support a multi-cloud replicated environment in the future.

You could see this based on our abstraction of networks in HashiCorp Cloud. In this screenshot, you could see the HashiCorp virtual network and how you could attach a Consul cluster and a Vault cluster to it. This virtual network tends to live in AWS. But in the future, we'll be able to use this to span multiple clouds automatically.

The Goals of HCP

By using it, you could get faster cloud adoption. You don't need to spend as much time setting up our software, learning how to operate it, etc. You could click one button and get it up and running. If you can get our software up and running more easily, that will quickly increase productivity for your applications. They can begin consuming our software right away.

Third, it gives you that multi-cloud flexibility much more easily. Software like Vault, Consul, and others make it much easier to use a common set of APIs to do things like security or networking in a multi-cloud way.

HCP Vault on AWS is available in private beta today. You could sign up for the beta by visiting hashi.co/cloud-platform and requesting access. That covers machine authentication and authorization. Next, I want to talk about machine-to-machine access.

Machine-to-Machine Access with Consul

Once you've established identity with machines, they then need to communicate with each other and prove that — on both ends — the correct people are talking to each other.

When we think about this, it’s a networking problem. Two machines are trying to communicate. And for that, we’ve built Consul. Consul is a tool that provides service networking across any cloud. To do that, Consul provides three primary pieces of functionality: service discovery, a multi-cloud service mesh, and network infrastructure automation. I'm going to dive into each one of these.

Service Discovery

This is the original feature that came out with Consul 0.1. Service discovery provides a global catalog — a global view of all available services currently deployed in your network. Along with the service availability, we cover the health of that service. Is it healthy? Is it unhealthy? Is it running? Is it not running? And you could see this in one UI — in one view — across every service in your infrastructure.
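
As a minimal sketch of that catalog in action (the service name, port, and health check here are made up): register a service with a health check, then ask Consul for only the healthy instances over DNS or the HTTP API.

```shell
# Illustrative service definition with an HTTP health check.
cat > web.hcl <<'EOF'
service {
  name = "web"
  port = 8080

  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
  }
}
EOF
consul services register web.hcl

# DNS interface (port 8600 by default): only passing instances are returned.
dig @127.0.0.1 -p 8600 web.service.consul SRV

# HTTP API: the same catalog, filtered to healthy instances.
curl 'http://127.0.0.1:8500/v1/health/service/web?passing=true'
```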

Multi-Cloud Service Mesh

Built on top of this service discovery, we can now attach identity and facilitate networking between multiple services. The service mesh lets you ensure that every connection is authorized by proving identity on both sides — as well as encrypted to protect the data in transit. The service mesh we built is multi-cloud, so you could run different endpoints in different cloud platforms and have the networking work throughout all of it.
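
Here's a hedged sketch of what joining the mesh can look like with a Consul service definition (the service and upstream names are illustrative): the "web" service gets a sidecar proxy and declares an upstream on "postgres"; Consul issues the service identities, and the sidecars enforce mutual TLS between them.

```shell
cat > web-mesh.hcl <<'EOF'
service {
  name = "web"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "postgres"
            local_bind_port  = 5432
          }
        ]
      }
    }
  }
}
EOF
consul services register web-mesh.hcl

# Run the built-in sidecar proxy for "web" (Envoy is also supported via `consul connect envoy`).
consul connect proxy -sidecar-for web
# The application now reaches postgres at localhost:5432; the proxies handle
# mTLS and verify identity on both ends of the connection.
```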

Network Infrastructure Automation Feature Set

Utilizing Consul's real-time global catalog, you could do things such as updating load balancers in real time. You don't need to submit a ticket or wait to manually update something like the nodes in a load balancer. You could now use the API that Consul provides and the real-time updates that it has — and dynamically update that load balancer configuration. With Consul, we provide a number of tools beyond the API for you to do that.
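
One way to picture that is with consul-template, one of the companion tools HashiCorp provides (the template and reload command below are illustrative): render the healthy "web" instances into a load balancer configuration and reload the balancer whenever membership changes.

```shell
# HAProxy-style backend list rendered from the live catalog.
cat > backends.ctmpl <<'EOF'
backend web
{{ range service "web" }}
  server {{ .Node }} {{ .Address }}:{{ .Port }} check
{{ end }}
EOF

# Re-render and reload whenever instances are added, removed, or fail health checks.
consul-template \
  -template "backends.ctmpl:/etc/haproxy/backends.cfg:systemctl reload haproxy"
```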

Announcing HCP Consul Public Beta on AWS

Earlier this year, we talked about HCP Consul being in private beta. And today, I'm happy to announce that HCP Consul on AWS is now available in public beta. You can sign up for HCP Consul at hashi.co/cloud-platform.

To review — HCP Consul is just like HCP Vault, which we just talked about. With HCP Consul, you could log in, click a button, and get a Consul cluster up and running on AWS with just that one push of a button.

Announcing Consul 1.9 Beta

In addition to HCP Consul, we're excited to announce a new version of Consul today. Consul 1.9. Consul 1.9 is available as a beta today from consul.io. Consul 1.9 has a ton of exciting new features.

Richer Kubernetes Integration

Consul 1.9 lets you configure Consul's service mesh capabilities using Kubernetes CRDs — or custom resource definitions. This feels like a very Kube-native experience where you could mix in Consul resources as YAML directly with your other application resources and deploy them using your standard Kubernetes tooling.
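
For example, here's a minimal sketch of one of those CRDs (the service name is an illustrative assumption), applied with ordinary Kubernetes tooling:

```shell
kubectl apply -f - <<'EOF'
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
spec:
  # Tell Consul to treat traffic to "api" as HTTP so Layer 7 features apply.
  protocol: http
EOF
```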

Improved Service Mesh Observability Features

In the UI you can now see how traffic flows through your different environments and the effect it’s having on your infrastructure. If you're utilizing the service mesh that Consul provides, this is a great way to get insights into how your network is working and the impacts it's having.

HTTP and gRPC Intentions

Finally, we've introduced Layer 7 HTTP and gRPC-based intentions for the service mesh. Intentions are the way that we authorize traffic — whether connections are allowed or disallowed from happening. Before, you had to do that at a lower level, based on machine and service identity. But now, you could do this based on things such as HTTP paths or gRPC metadata.
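
Here's a hedged sketch of what such an intention can look like when expressed as a Consul 1.9 Kubernetes CRD (the service names and path are made up): allow "web" to call "api", but only for GET requests under /public.

```shell
kubectl apply -f - <<'EOF'
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api
spec:
  destination:
    name: api
  sources:
    - name: web
      permissions:
        - action: allow
          http:
            pathPrefix: /public
            methods: ["GET"]
EOF
```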

In this UI screenshot, you could see some of these rules being created — where certain paths are routed and allowed or disallowed in different ways. To learn more about Consul 1.9 — and all the amazing things the Consul team has been working on — please see the Consul product keynote tomorrow at 1:00 PM Pacific Time.

To continue talking about zero trust and the other pillars that I brought up, I want to invite Armon Dadgar, co-founder and CTO of HashiCorp.

Armon Dadgar: Thanks so much, Mitchell. As Mitchell said, he already did a great job covering the first half of the spectrum in terms of how we think about machine authentication and machine-to-machine access.

Human Authentication and Authorization

The other half gets a little bit strange when we start to bring people into the mix. The moment we bring in people, we also introduce a similar, almost parallel challenge to everything we had with machines: how do we start by asserting some notion of identity? We each have our own personal identities. But how do we make it something that the computers can trust and understand, that is cryptographically verifiable?

That's where we have this pillar around human-based authentication and authorization. It's about establishing that identity in a trusted, secure way — such that we can then use that identity downstream with other systems programmatically.

This is a problem — as you might imagine — we've had for a very long time. Ever since users started interacting with systems, they needed some way to prove their identity to the system.

Establishing Human Identity

Maybe starting with this bucket, we've seen there are well-established patterns of how to do this — and these have evolved over time with different generations of technology as well.

The most basic approach is simply distributing some form of credentials to users. This might be usernames and passwords, and they're logging in to the system directly. This might be some form of certificates. It could be a hardware device that they use to assert their identity. But we're distributing something to the user, and the user is providing that back.

As we get a little bit more sophisticated, we don't necessarily want to have a user have a specific username and password for every system they might interact with. This starts to become cumbersome as you have many different systems — many different users — so you start to move towards systems that provide a single sign-on experience.

In a more traditional private datacenter, this might have been powered by something like Active Directory. It might have been powered by something like OpenLDAP — where the user would provide their credentials once to the Active Directory server or the LDAP server. Then that identity would flow downstream to other systems so that you'd get the single sign-on.

As we're moving to a more cloud-based architecture, the same pattern exists, but we're starting to use a more cloud-oriented set of systems. This might be Okta, it might be Ping, it might be ADFS — but the idea is similar. We would do our login one time against these cloud-based systems — then those cloud-based systems provide our identity, our authentication, and authorization to other downstream systems. It's a very common pattern that we've applied here, but one that has evolved generationally over time.

So, the next piece of this is: we have asserted a user identity, but how do we now interact with the machines, the applications, the services we want — which may or may not understand those identities?

Traditional Workflow for System Access

When we talk about the traditional workflow for accessing these systems, it starts when a user probably requests access to some private resource. This could be — let's say — an internal database that's running on our private network. Maybe they're a database administrator — they need access to that database to perform routine operations.

They have probably been provided a set of credentials that allow them to get onto the private network to begin with. This could be VPN credentials, it could be SSH keys, etc. Then they need to know the hostnames and IP addresses of the database so that once they're on the network, they know what to connect to. And lastly, they need a set of application-specific credentials — in this case, a database username and password — to be able to interact with the endpoint system.

Then their workflow goes left to right. First, they have to log in to the VPN server or the SSH server using those credentials that they have. Next, they need to request access over the network to that private system using the hostname or IPs they know about. Then once they're connected to the database, they would provide the application-specific credential — in this case — the database username and password. Then they'd be connected, and they can interact and perform whatever operations they need.

Now, there's a number of challenges with a traditional approach. This ties back to Mitchell's earlier point around this transition we're going through from static-based systems to more dynamic environments.

Onboarding Is Difficult

How do we think about onboarding new users? For every new user, do we need to distribute a set of new SSH keys, VPN credentials, database credentials, etc.? What about when that user leaves? What about having to do periodic password rotation or credential rotation? You can start to see how this onboarding process becomes cumbersome at scale.

User Has Network Access

Next, the user is connecting directly to a VPN or directly to an SSH bastion host. That — in effect — brings the user onto our private network. While that has its advantage, in that the user can now connect to the resources that are on the private network, it also has a disadvantage: the user can connect to all sorts of things that are on our private network. We don't necessarily want the user directly on the private network.

IPs Are Brittle

As a result, because the user should really only have access to a handful of systems, we typically would deploy a firewall in between the VPN and the target systems — or in between the SSH Bastion and the target systems. But that firewall operates at an IP level. There's a set of IP controls that constrain which set of users or which set of IPs have access to which set of IPs.

The challenge with this IP-based approach is that it's brittle. It works great in a very static environment. But the moment we have endpoints that are auto-scaling up and down, we're deploying new services, maybe we're running on Kubernetes where — if a node dies — the application gets moved to a different node and a different IP address. In these very dynamic environments, we have the challenge of keeping these IP rules — these IP controls — up to date. It becomes very brittle as our environment gets more and more dynamic.

Credentials Exposed

The last piece of this is that the user has to have those endpoint credentials — the database username and password in this case — to connect to the target machine. This means we're disclosing it to the user. That user could potentially leak it, leave it in a passwords.txt on their desktop, post it into Slack, etc. — so we create an additional risk of those credentials getting exposed.

Dynamic Workflow for Access

A different way of thinking about this is to use identity as the core primitive. This would start where the user again logs in with their trusted form of identity. They don't use a specific set of VPN credentials or SSH keys that were distributed to them. Rather, they use their same single sign-on, that one point of identity that they already have from when they were onboarded.

Next, ideally, we would select the system we want to connect to from a set of existing hosts or services in a catalog. We wouldn't want to have to know in advance a set of DNS names or hostnames or IP addresses that may or may not change. We'd rather look at a dynamic catalog that shows us what we have access to.

The next piece is that we want the controls over what we have access to not to be at that IP level, where it's dynamic — but rather at a logical level, where it's service-to-service. I want my database administrators to have access to my set of databases, regardless of what the IP of those databases is.

Lastly, we want the connection to happen automatically to the endpoint service without necessarily giving the user the credentials underneath the hood. This has a number of advantages — if we can do this.

Onboarding is Easy

One is that onboarding and offboarding is dramatically simplified. We don't need to distribute a bunch of specific credentials. We don't need a rotation workflow. We don't need a separate process to offboard that user: we add them to our IDP, or identity provider, and when they leave we remove them from it. Everything is linked to that.

Network Remains Private

Next, because we're selecting a host from the service catalog, we don't need to give users direct access to the network. They don't need to know what the internal IP address is. They don't need to be on that private network because they just care about the target host — the target service that they're trying to access.

Configuration Is Stable

The advantage of moving the rules up from an IP level to a logical, service-based level is that it's much less brittle. Now, we can put those services in an autoscaling group — scale them up and down. We can have a node fail over and move the app to a different node. We can deploy net new services. And we don't have to worry about changing our controls all the time to keep pace.

Credentials Not Exposed

Lastly — because we're not distributing the credentials to the user themselves — they don't necessarily have the database username and password. When they connect automatically, they're authenticated against the database, and they've never seen the database credential — making it that much harder to expose it or cause an additional data leak.

Announcing HashiCorp Boundary

How do we make this workflow real? How do we move towards this identity-based model, rather than the more static traditional workflow? Today, I'm very excited to announce a brand-new project called HashiCorp Boundary. It's free, it's open source — and I want to spend a little bit of time today diving into what it is and how it works.

At the very highest level, Boundary is trying to provide the workflow we just talked about — that identity-centric workflow for how we authenticate and authorize access to systems. This starts — as you would guess — with a user trying to access an endpoint system.

First, that user is going to authenticate themselves through one of these trusted forms of identity. Boundary makes this a highly pluggable thing. Whether that identity is being provided by Okta, by Ping, by ADFS, or by Active Directory, it doesn't matter — there's a pluggable provider, much like Vault, that allows that identity to be bridged in from whatever existing IDP you have.

Once the user is in, we have a logical set of authorizations in terms of what that user can access. We might map that user into the group of database administrators. We'll say database administrators have access to a set of databases.

The notion of what a set of databases is, is dynamically maintained in a catalog. That catalog can be programmatically updated using a Terraform provider. It could be kept in sync by integration with Consul — where we're querying Consul’s view and service registry of what services are where. But it can also be integrated into other service catalogs such as Kubernetes, or AWS, or Cloud APIs.

These other systems have a notion of these services and the ability to tag them or add different selectors. Being able to import those and reference those in a dynamic way — rather than have to deal with static IPs — makes this much simpler to manage.

Lastly, when the user goes to connect, where we can, we don't want to provide the credential directly. Instead, we integrate with a system like Vault to provide the credential dynamically, just in time.

In certain cases, we have no choice — we might have to provide the user with a static credential. But in cases where we can use Vault's dynamic secret capability, how do we generate a credential unique to that session that's short-lived and time-bounded? So the user can connect to the database with a unique credential for that session that they never even see? And at the end of their session, that credential can be revoked and cleaned up and isn't a long live static credential that we have to think about and manage?

The Goals for Boundary

The goals of Boundary are fourfold.

On-Demand Access

One is how do we do this on-demand access that's simple and secure. We don't want you to have to do a whole lot of pre-configuration and pre-setup. We want it to be very much push-button and on-demand.

Dynamic Environments

Two is we acknowledge the world is becoming much more dynamic, much more ephemeral. How do we support that? That's around a few of these different pieces. Making the system very API-driven and programmatic — integration with dynamic service catalogs, integration with dynamic secrets, and leaning into this notion that we don't want to manage static IPs because our world doesn't consist of static IPs.

Easy to Use

The other piece is making the system easy to use. We want it to be user friendly because you have administrators who are configuring it and maybe understand the system in depth — we have end users who don't care how it works. They just want to have access to these endpoint systems. And we want it as easy to use as possible.

Free and Open Source

What we've seen time and time again at HashiCorp is that the best way to make these products successful is to build thriving communities on top of and around them. We're excited to do exactly that with Boundary as well.

A Deeper Dive into Boundary

Here's a screenshot where you can see Boundary's UI. You'll see four different boxes that describe logical services we might want to connect to. You'll notice we're not talking about IPs. We're not talking about a low-level host. We want to connect to this bucket of high-level services and the environment they're running in. This ties back to the three aims. The first is identity-based access and control: nothing is IP-driven, nothing is host-driven.

Two is we want an automated access workflow. We want this to be API-driven, to integrate it into our scripting environments, CLIs, automation tools, CI/CD pipelines, etc. So it’s a rich API that allows all of this to be automated.

Then lastly, how do we have first-class session management visibility and auditability of what's taking place? If we have privileged users accessing sensitive systems, we want to have full visibility of when and where that took place — and have that type of insight whether for security or compliance reasons. Of course, we know this is designed for practitioners because we also have a dark mode UI that's visible here as well.

Boundary Connect

Here's a different example of interacting, which is what we expect the day-to-day to be like. If I'm a Boundary end user — I'm not an administrator — I'm just trying to SSH into a target machine, how can we make this dead simple?

Here's how simple it can be; it's a single command. It's boundary connect ssh and the target that we're trying to get to. You can see that we are dropped right into a shell — and that's the goal. Behind the scenes, there's a whole lot of machinery that's making this possible. Our SSH client — that's running locally — is talking to a local agent that's spun up as part of this command; as part of boundary connect. That agent is doing the authentication for us against the gateway for Boundary and then establishing a connection to the Boundary gateway.

The Boundary gateway is then authenticating and authorizing us — and then connecting back to the endpoint system. Now we have an end-to-end connection, going from our machine to the gateway to the target environment.

But to the degree possible, all of this is automated and invisible to the user. They can continue to use whatever local tools they're comfortable with. Use your local SSH tool, use psql — use whatever local tooling — and you're always talking to a local agent that's proxying all the traffic back; very similar to how SSH port forwarding would work.
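
As a rough sketch of that end-user flow (the IDs shown are the well-known defaults that `boundary dev` generates; a real deployment will have its own auth method and target IDs):

```shell
# Authenticate once against the trusted identity source configured in Boundary
# (a password auth method here; the password is prompted interactively).
boundary authenticate password \
  -auth-method-id ampw_1234567890 \
  -login-name admin

# Open an SSH session to a target by its logical ID. No VPN credentials,
# hostnames, IPs, or SSH host details are needed on the user's side.
boundary connect ssh -target-id ttcp_1234567890
```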

We're very excited about Boundary. Today is the launch of the product, and the 0.1 is available. Please go check it out at boundaryproject.io. It’s also on our GitHub page.

If you find issues, give us that feedback and engage with us (in our community forum). We’re super excited, and there’s going to be more content on Boundary later in the day.

Keynote Summary and Conclusion

Taking a quick step back, there are these four pillars as we think about zero trust security.

  1. How do we assign workload identity?
  2. How do we authenticate and authorize those machine workloads? That's our focus with Vault (the first 2 pillars).
  3. How do we take that identity and broker machine-to-machine access in a secure and automated way? Our big focus there is with Consul.
  4. As we bring humans into the loop, how do they authenticate and connect to these systems as well? We're introducing Boundary to look at solving this problem.

Then there is a whole slew of great existing solutions for how to do single sign-on and bring human identity in a scalable way.

All of this is part of our broader goal as we think about the security umbrella — the security focus at HashiCorp — and how we do security the right way in these modern environments. Our strong conviction is that the zero trust, identity-driven approach — where we have explicit authentication and explicit authorization for everything — is the right way to do this moving forward.

HashiConf Security Deep-Dive Breakouts

As we shared earlier, the theme of today is talking about security. Right after this keynote, we have a great line-up of more security-related talks. If you're interested in Vault's future, we're going to have a deep dive on that. If you're curious to learn more about Boundary — our new tool — we're going to have a deep dive session specifically on going into Boundary. We'll talk about HCP Vault and what that looks like and how that works, as well as a few other great security-related talks. Stay tuned right after this for all of those talks as well.

Looking ahead to HashiConf Day Two

We already announced a whole bunch of new things today — including Boundary, HCP Vault, HCP Consul, and Consul 1.9. But we’re not quite done yet. You may have seen some of our teasers on Twitter as well as on email, but we have one more big thing to share with you. Stay tuned for that update during tomorrow’s opening keynote as well.

The rest of the sessions — both today and tomorrow — we have a great lineup of speakers, a lot of great content. All of this is going to be available on-demand if you can't catch it all live — so don't worry and stress out about that. But at this point, I hope you all enjoyed the conference, and I'd like to hand it back over to our wonderful MCs.
