Tooling for the Modern Cloud Native Application Stack on Microsoft Azure

Learn how HashiCorp Vault, Terraform, and Consul work with Azure features and workflows.

The modern cloud native application stack typically spans several runtimes, whether virtual machines, Kubernetes, or serverless, alongside cloud-hosted services. How do you use the tools you know and love to build across these different environments?

In this talk, you'll learn how HashiCorp and Microsoft Azure are collaborating in open source to provide an integrated, consistent tooling experience, whether that's securing Kubernetes secrets with Vault, running a Consul-based service mesh that spans virtual machines and Kubernetes, or packaging your application stack into a portable bundle using Terraform. Watch several examples of these tools in action on Azure, and see a demo of Microsoft's Service Mesh Interface (SMI) with Consul.


  • Rita Zhang, Principal Software Engineer, Microsoft
  • Lachlan Evenson, Cloud Native OSS @Azure, Microsoft


Lachlan Evenson: Well, it is wonderful to be here. How awesome are HashiCorp tools? Can you give it up, everyone? Am I in the right room?

Today we're going to share how you can integrate those tools into your cloud native application stack. Specifically, we're going to walk through three different tools and the problems they solve. Don't worry, there'll be a lot of demos. Everything is open source so you can try it after the conference from the comfort of your own home. Let's get into it.

What we hear from our customers

At Azure, we have the privilege of working with many customers. We want our customers to have a fantastic experience when using HashiCorp tooling on Azure. These are the top questions we're hearing from our customers when bringing their HashiCorp tooling to Azure:

How do I integrate Consul as a service mesh for both Kubernetes and VMs?

Rita Zhang: How do I secure secrets in Kubernetes with HashiCorp Vault?

Lachlan Evenson: How do I bundle up my application and all of its dependencies into a single unit?

And when we dig deeper, what our customers are really asking is, "How can we remove integration pain?" They want solutions that natively integrate into these clouds so that they can have one tool for the job.

Rita Zhang: Many of our customers are already using HashiCorp tools for years, and they want to continue to use the tools that they love and trust.

Lachlan Evenson: They want flexibility. They're operating in multiple environments, whether on-prem, in the public cloud, or even on a laptop. They want to take these tools and have the same experience everywhere.

Introducing our team

Rita Zhang: This is exactly what our team does. We are part of the open source team at Azure, and we design and build open source solutions for our customers and the community members alike. And because we designed and implemented all these solutions in the open, we hope that we can provide a consistent, standardized experience for every user.

Lachlan Evenson: Before we get into it, we'd best tell you who we are. My name is Lachlan Evenson. I'm a program manager at Azure, and I've been a HashiCorp fan since circa 2013. The other night, I was remembering talking to Armon back in 2013. Our team put a question to the early HashiCorp: "Do you think it is okay for us to run a 2,000-node Serf cluster in production?" Armon gave us a one-word answer: "Absolutely." So I go way back. I also have a more recent past with Consul, and I built the first upstream chart to run Consul on Kubernetes.

Rita Zhang: My name is Rita Zhang. I'm a software engineer on the same team as Lachy. And my journey with HashiCorp started with Vagrant years ago. I'm one of the maintainers of the Secrets Store CSI Driver that I will be demoing shortly, in partnership with HashiCorp engineers. I'm also one of the maintainers of Open Policy Agent Gatekeeper to help users enforce policies for Kubernetes.

Case study 1: How do I integrate Consul as a service mesh for Kubernetes and VMs?

Lachlan Evenson: As many of you have already noticed, service mesh is hot. Why is it hot in the marketplace at the moment? Because of the explosion of microservices. People want to rationalize them: to have a single experience, and a single service mesh, governing how all of their services connect to each other.

Rita Zhang: So why Consul as a service mesh?

Lachlan Evenson: Thanks for asking, Rita. Because it supports multiple runtimes. It can support VMs; it has a long history of running on VMs. But there are also upstream Helm charts for running it on Kubernetes. So whether it be VMs, containers, or even FaaS, Consul will be there to support all of those things.

With Consul Connect, you can build a network that spans multiple underlying providers. At the booth yesterday, somebody asked, "Hey, how do I expand my Consul cluster in AWS to Azure?" We were able to answer that. Consul has those answers. It is production-ready.

I mentioned Serf, the predecessor to Consul, and it has had a long, storied past in production. It's been there and it's stood the test of time. And Consul is flexible, with a rich feature set to bring to a service mesh.

Let's look specifically at integrating Consul into Kubernetes. It runs natively on Kubernetes; there's an upstream chart that the HashiCorp team supports, and you can install it simply with Helm.
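As a rough sketch, installing the upstream chart looks something like this (the repository URL and values flag here are assumptions; check the chart's documentation for your version):

```shell
# Add HashiCorp's Helm repository and install the upstream Consul chart.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# connectInject enables the Connect sidecar injector used for the mesh.
helm install consul hashicorp/consul --set connectInject.enabled=true
```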

We've been working with the HashiCorp team on what we call the Service Mesh Interface. You may have seen the cloud pong demo with Brandon yesterday; that uses the Kubernetes API to configure Consul as a service mesh. You can use Kubernetes-native tooling like kubectl to configure Consul and have a single pane of glass.

Before we get into the demo, let's take a look at what I'm going to show you. We have a two-node cluster. On the left side, we have the dashboard pod; on the right side, we have the counting pod. Consul connects them, and using SMI via the Kubernetes API, we are going to configure intentions that allow the dashboard pod access to the counting pod.

There's a link down the bottom to the SMI spec, so you can follow along at home. But here's what this looks like on Kubernetes. If we take a look at this resource, which defines our access policy, and go down to line 10, you'll see sources; a source here is a service account in Kubernetes. So we're using Kubernetes-native primitives, but we're configuring Consul, and that's the power here. The dashboard service account is the source; the destination on line 6 is the counting service account; and the specs down on line 14 define what traffic is allowed.
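Spelled out, the access policy resource might look roughly like this (a sketch against the v1alpha1 SMI access API; the names are illustrative and the line positions won't match the slide exactly):

```yaml
apiVersion: access.smi-spec.io/v1alpha1
kind: TrafficTarget
metadata:
  name: dashboard-to-counting
destination:          # the service account of the counting service
  kind: ServiceAccount
  name: counting
  namespace: default
sources:              # the service account of the dashboard service
- kind: ServiceAccount
  name: dashboard
  namespace: default
specs:                # which traffic this target allows
- kind: TCPRoute
  name: allow-tcp
```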

With this policy, we are going to allow TCP traffic from the dashboard service to the counting service. It is that simple. Let's see it in action.

You can see our pods up on the top left. On the right-hand side is the Consul UI, and you'll see that we have no intentions configured. We'll pop over and show you the dashboard, which currently cannot connect to the counting pod. We're going to install the SMI controller, which reads these resources and translates them to Consul, and then apply the policy I just showed you. Down the bottom is the log file for the controller.

Here we see that same policy definition, which says the counting destination can be reached over TCP from the dashboard. Let's use kubectl to apply that. It has now been applied. If we pop back over to the dashboard, we should see that it is indeed connected, and in the Consul UI, you can now see that the dashboard can connect to the counting service. It is that simple. But if you don't believe me, we'll go ahead and remove the policy and show you that access is instantly revoked. Pop back over to the dashboard, and we can see that the connectivity is gone.

So with SMI, you can have Consul running in Kubernetes and VMs and use the Kubernetes API to configure your service mesh. Consul is a super-powerful tool, and I recommend taking a look at it to deliver your service mesh needs.

Case study 2: How do I secure secrets in Kubernetes with Vault?

Rita Zhang: We've heard from customers over and over, "I already have HashiCorp running in my company. I really love it, and I trust it. Why should Kubernetes be any different? Instead of storing my application secrets in etcd, is it possible that I continue to manage my secret contents in Vault but be able to use it in my Kubernetes applications?"

Well, this is why we created the Secrets Store CSI Driver. With this solution, your organization can keep its separation of concerns: the folks who manage secrets, keys, and certs can continue to do that in HashiCorp Vault, or whatever secrets store they choose, and developers can continue to use that secret content in their applications. The solution implements a provider interface, which makes it easy to add new providers.

Today we have support for HashiCorp Vault, built working with Misha from HashiCorp, who spearheaded the Vault integration. Thank you, Misha. We also support Azure Key Vault as another provider.

For the developers out there, this means that your applications are now portable. It will run regardless of what cloud providers you're on or what secret providers you're leveraging. You can switch them, swap them, or use them at the same time. Because the solution was designed and implemented in the open—with the community—we really incorporated feedback from the community, ensuring this provides a standard for all of our users.

Lachlan Evenson: But hang on, Rita, are you talking about CSI: Miami?

Rita Zhang: No.

Lachlan Evenson: What is CSI?

Rita Zhang: Great question. The Container Storage Interface was created to standardize the way we bring third-party storage systems into containerized workloads. Once you deploy a CSI plugin into your Kubernetes cluster, users can create volumes with it, and once a volume is attached to a pod, the data within that volume is mounted into the container's file system.

With that design in mind, the Secrets Store CSI Driver mimics how Kubernetes natively mounts Kubernetes secrets as volumes into a pod. When a pod is created with the CSI driver specified, the driver connects to the secrets provider of your choice, in this case HashiCorp Vault, and retrieves the content you've specified.

Once the content is retrieved, the CSI driver creates a tmpfs volume mounted to the pod and writes the secret content into the container's file system. All the processes in the container then have access to the secret data for the entire life cycle of the pod.
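From the application's point of view, the mounted secret is just a file. Simulating that mount locally (the path and contents here are made up for illustration):

```shell
# The CSI driver writes each requested secret object as a plain file
# on a tmpfs-backed volume mounted into the pod.
mkdir -p /tmp/secrets-store
printf 'hello-from-vault' > /tmp/secrets-store/bar

# The application reads the secret like any other file:
cat /tmp/secrets-store/bar
```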

And here's the link, if you want to go check out the repo.

Lachlan Evenson: But that all sounds far too easy. Can we actually configure that, Rita?

Rita Zhang: Great question. Much like Consul, there's a very easy way to get the Secrets Store CSI Driver deployed into your cluster with Helm. Once you've done the helm install, the very next thing an operator needs to do is define how to access the Vault instance and which provider to use. Here, as you can see, we have a SecretProviderClass Kubernetes resource, which the operator creates. Line 6 is where we specify that we want to use the Vault provider. Lines 8 and 9 are how we connect to the Vault instance, and the role we want to use to connect to it. Lines 11-16 are where we specify the secret content we want to retrieve and the path we want it from.
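Such a SecretProviderClass might look roughly like this (a sketch only; the parameter names vary between driver versions, so treat the fields and values as illustrative and check the driver's docs):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-foo
spec:
  provider: vault                       # use the Vault provider
  parameters:
    vaultAddress: "http://vault:8200"   # how to reach the Vault instance
    roleName: "example-role"            # Vault role used to authenticate
    objects: |                          # which secret content to retrieve
      - objectName: "bar"
        secretPath: "secret/data/foo"
        secretKey: "bar"
```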

Once that resource is created in the cluster, then as an application developer, I have a pod YAML. First, I need to create a volume: on line 16, I tell Kubernetes, "I want to use the Secrets Store CSI Driver." On line 19, the driver needs to say, "I want to use the vault-foo SecretProviderClass resource that I created previously."
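The pod spec side might look roughly like this (again a sketch; names and line positions are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secrets-store
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"     # where the secret files appear
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io    # use the Secrets Store CSI Driver
      readOnly: true
      volumeAttributes:
        secretProviderClass: "vault-foo"  # the SecretProviderClass created earlier
```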

With all of that, I'm able to talk to my Vault instance. Line 11 is where I mount that secret content into my container. That all sounds great. Let's take a look at a quick demo:

As you can see, I have a two-agent-node cluster running Kubernetes 1.15.2, and I already have a Vault instance running in this cluster. First, I need to get the Vault service endpoint so that I can connect to it; here's the endpoint for it.

Next, with the Vault CLI, I can create some secret content. Here I'm creating a bar secret at the path /foo. Once this secret is created, let's deploy the Secrets Store CSI Driver into my cluster. As you can see, it deploys a bunch of resources; specifically, each agent node gets a CSI driver pod running on it. We also create some custom resource definitions, which allow us to have provider-specific parameters.
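The secret created above can be written with the Vault KV CLI along these lines (the key name, value, and KV mount are assumptions for illustration):

```shell
# Write a key named "bar" at the path secret/foo.
vault kv put secret/foo bar=hello-from-vault

# Read it back to confirm:
vault kv get secret/foo
```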

Like I showed earlier, this is how we connect to the Vault instance, referencing the bar secret we created and the path from which we want to retrieve the content. Let's quickly apply that in the cluster. Once that's created, the application developer tells Kubernetes, "I want to create a volume with the CSI driver."

The next thing is the vault-foo SecretProviderClass that I created, and then the mount path where I want the content mounted. Once we've created the pod, let's see it running. As you can see, the nginx pod is created, and it's talking to the CSI driver on the same node.

Let's see if this works. I'm going to exec into the container and look at the path, and voila, there's the secret content retrieved from Vault. Hopefully, that demonstrated how you can continue to create and manage your secrets, keys, and certs in Vault, and then bring them into your Kubernetes applications.

Case study 3: How can I bundle up my application and all its dependencies into a single unit?

Lachlan Evenson: Thank you, Rita.

Rita Zhang: So what is a cloud native application?

Lachlan Evenson: A cloud native application consists of many parts. It could be business logic, it could be your infrastructure, it could be one or many microservices, it could be databases that are hosted or run on a cluster. But it's all of these things together. So how do you rationalize all of them into one specific unit?

People are doing this today using awesome tools like Terraform to do the infrastructure work, maybe even to manage Kubernetes, but they have Kubernetes resources as well. And let's face it, we all have those bash scripts that hang around forever. You want to package all of those up, put them in one place, and have that express your cloud native application, soup to nuts.

This is exactly what Cloud Native Application Bundles (CNAB) are for: take the tools you already use, apply the configuration and credentials you already have, and distribute them via existing mechanisms. And when we say existing mechanisms, we're talking about things like containers and VMs.

We built a tool for building and creating Cloud Native Application Bundles called Porter, and its role is to make this very easy. It can package up your apps into smart bundles out of the box, talking to things like Terraform, Helm, and those random bash scripts, and then wrap your operational verbs (install, upgrade, uninstall) into one seamless integration. That's enough of me talking about it. Let's go through a demo and build something on stage today.

But what are we going to build? We're going to create a bucket and set up Terraform to use that bucket as a remote state backend. We're going to deploy a Kubernetes cluster and a hosted PostgreSQL database. Then we're going to deploy the Spring music app, which represents that monolithic Java app we need to put on top of Kubernetes, and connect it to that hosted PostgreSQL. And we're going to do it all on DigitalOcean. Let's get into it.

We're going to run a porter install. Let's have a look at this command and all its features. First, we're specifying a tag; you can run a bundle locally or from a Docker registry, and that reference is a Docker image hosted on Docker Hub. We also have our credentials, which give us access to the DigitalOcean APIs.

Finally, I want to take a look at the parameters we're feeding in. Specifically, I want you to remember the last one, database_name=springmusicdb. We'll come back to it later. But let's follow the thread.
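The command being described might look along these lines (a sketch against the Porter CLI of the time; the bundle tag and credential name are assumptions):

```shell
porter install springmusic \
  --tag example/springmusic:v0.1.0 \
  --cred digitalocean \
  --param database_name=springmusicdb
```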

Run the install. We'll see that we create the S3-compatible bucket first. Once the bucket is created, we do a terraform init pointing the backend at it, install the provider plugins, and then run a terraform apply.
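Pointing Terraform's state at a bucket on DigitalOcean might be done roughly like this (a sketch; the bucket name and region are placeholders, and it assumes DigitalOcean Spaces, which is S3-compatible, so the s3 backend is used with a custom endpoint):

```hcl
terraform {
  backend "s3" {
    bucket   = "springmusic-state"   # bucket created by the install step
    key      = "terraform.tfstate"
    region   = "us-east-1"           # required by the backend, ignored by Spaces
    endpoint = "https://nyc3.digitaloceanspaces.com"

    # Spaces is S3-compatible but not AWS, so skip AWS-specific checks:
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}
```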

Now let's have a look at that terraform apply. Have a look at the variable. We're passing in that database name again, springmusicdb, and we're creating the cluster.

Let's have a look at one we deployed a little earlier, using Porter. porter instances list shows the springmusic bundle created 18 minutes ago, and if we look at that instance, we can get a little more detail from it.

We see that it was successfully installed, and we see a bunch of outputs. Now, remember we've gone from nothing to the full stack of the application—this cloud native app stack. We can open that URL, and here is our application. What do you think, Rita?

Rita Zhang: Wow, that almost looked too easy.

Lachlan Evenson: It almost looks too easy.

Rita Zhang: Can you show us how that works?

Lachlan Evenson: Absolutely. Let's take a look. Here is the directory structure. We have a Dockerfile in here. This is what the bundle consists of: charts, Terraform files, and our random bash script that waits for an IP. We also have a porter.yaml.

Let's take a look at the charts. This is the Helm chart for the Spring music app. And let's take a look at the Terraform files: here are the TF files to create all that infrastructure on DigitalOcean. All familiar, all tools you already know; this is nothing new. We've just put them in one location and packaged them up.

Let's take a look at the porter.yaml. This contains all the metadata about how we want this thing deployed. We see some metadata: the name, and the container image it's going to be stored and tagged as. We also see these mixins, which allow us to call out to things like Helm and Terraform, or even do a local exec for things like our special bash scripts.

Now let's go down and have a look at the install verb. I'm a little ahead of the demo, but that's the way it goes. Take a look at the exec step that's part of the install; when we ran the install, this is what created the bucket. We can see what's happening under the hood: we're running s3cmd and passing those arguments, some of which come from the parameters and some from the credentials. You can also see the Terraform step, where we set up the backend config, again fed from the parameters and credentials, and where we set the database name, which I asked you to remember because it becomes important when we run Helm to connect the Spring music app.
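Put together, an install section like the one described might look roughly like this in porter.yaml (a sketch against Porter's early manifest format; the step fields, script names, and paths are illustrative):

```yaml
name: springmusic
version: 0.1.0
tag: example/springmusic:v0.1.0

mixins:
  - exec
  - terraform
  - helm

install:
  - exec:
      description: "Create the state bucket with s3cmd"
      command: bash
      arguments:
        - ./scripts/create-bucket.sh
  - terraform:
      description: "Provision the cluster and hosted database"
      vars:
        database_name: "{{ bundle.parameters.database_name }}"
  - helm:
      description: "Deploy the Spring music chart"
      name: springmusic
      chart: ./charts/spring-music
```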

We're grabbing these outputs here and feeding them into the next stage as input parameters to Helm. Our database access credentials from DigitalOcean are fed out of Terraform into Helm so that we can connect the Spring music app.

Finally, we have our beloved bash script, wait-for-IP, an exec step that goes out and grabs the public endpoint so that we can surface it in the outputs. So that's it. Now let's have a look at bundling this up and pushing it out.

If we do a porter build, we're building this locally, like a docker build. And here it is. We've built the local version into a container, and then we can publish it. Here we can see the container name that will be pushed to Docker Hub, and we can see the parameters, the outputs, and the credentials. What do you think?

Rita Zhang: Well, I know that works locally, but what if I want all this to work with my CI?

Lachlan Evenson: Well, if you've used GitHub Actions, it's super simple. Here's a GitHub Action that, on push, does a porter publish. That pushes the bundle up to a Docker registry, meaning the dev next to me can also have this Spring music app. Let's bump the version number, check it in, push it up to GitHub, and watch the action fire and publish it out.
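A workflow like the one being shown might look roughly like this (a sketch only; it assumes the Porter CLI is already available on the runner and that registry credentials are configured separately):

```yaml
name: publish-bundle
on: push

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build and publish the bundle
        run: |
          porter build
          porter publish
```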

Commit, with a great commit message as always: "update." Very informative. Push to origin master; there we go. Now let's go have a look at the GitHub UI. We take a look at the action, which has passed, and then at its log file. Right down the bottom here, you'll see it runs a build and a publish, and we can see that the image has been tagged and pushed successfully up to Docker Hub, meaning somebody else can have access to all this goodness that I've bundled up.

Finally, we can generate a UI from the porter.yaml. Here is an Electron app for those who don't want to use the CLI but still want access to this cloud native app stack; we can generate it based on the porter.yaml.

We can see we have an install operation here. Let's click on that. The variables are all there and we can fill them in, and the credential sets are there as well. It's a great experience for those who don't want to get their hands dirty in the CLI, or aren't comfortable there, and don't have tools like Helm or Terraform installed on their machines. They're all packaged up and locally available inside that container runtime.

So that's it. That's everything you can do with CNAB. Take your whole app, soup to nuts, and roll it out, in this case onto DigitalOcean.

Thank you, Rita.

Rita Zhang: Well, there you have it: three different case studies. All of these tools were built in the open, working with our partners and the community. By doing so, we can ensure native integration, community friendliness, and definitely no vendor lock-in.

So what does this mean for you? Well, right after this keynote, we really think you should go try it, and we look forward to your feedback on GitHub, and please swing by our booth. We are definitely looking forward to seeing you in the community. Thank you.

Lachlan Evenson: Thank you very much.
