HashiConf 2018 Keynote: New Integrations With Kubernetes, Private Data Centers, Community

Nov 14, 2018

HashiCorp co-founder Mitchell Hashimoto discusses the latest ecosystem integrations for the HashiCorp tool suite.


  • Mitchell Hashimoto, Founder & Co-CTO, HashiCorp

» Transcript

Next I want to talk about ecosystem. A core element of the Tao of HashiCorp is workflows, not technologies. This mindset is also part of our name: the “Hashi” in HashiCorp, which also happens to be in my last name, means “bridge.” We chose that purposefully, because we want to build software that supports organizations wherever they are, whether that’s VMs versus bare metal, cloud, containers, schedulers, and so on. We want to meet you where you are, and we want to build a workflow that works across these technologies.

So Terraform apply can create resources in cloud, can create vSphere things, SaaS things, and much more. Nomad can run things in Docker, can run things in QEMU, and can mix and match those things together. Vagrant up can spin up development environments in VirtualBox, VMware, Docker, and the cloud.

And all of these same workflows enable technology flexibility. This has immense value to organizations, because you only need to learn the workflow once, and as new paradigms come up, the workflow should still work while you learn about that technology. And this year we made a few major investments into certain parts of the ecosystem. The first of those is Kubernetes. As we’ve looked at Kubernetes, we’ve focused on 2 major deployment models that we’ve seen with our customers. The first deployment model is the one on the left. This is the Kubernetes-native or Kubernetes-first environment. This is where all your workloads are running on Kubernetes. You’re aiming for 100%, and this is common in startups, small companies, or in greenfield arms of large companies.

Then on the right we have what we like to call “mixed-mode Kubernetes.” This is a much more realistic deployment model for an existing company, and in this model, whether you plan on being 100% Kubernetes or not, there is a long multi-year period where you have workloads that need to interact between multiple environments, usually Kube, the cloud, and often also your private data centers. And these 2 environments have 2 very different challenges that we’ve been looking at.

The Kubernetes native side of things

What we’ve been trying to do there is create an experience for our products where you don’t need to leave Kubernetes, because we’ve found that if you need to leave Kubernetes in this environment, that solution immediately becomes a second-class option. What we’ve been doing to enable these environments is creating official Helm installers, so you can install our software directly within your Kubernetes cluster. We’ve also been looking to enhance existing Kubernetes features rather than trying to replace them.
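As a rough sketch of what that Helm-based install looked like at the time (the hashicorp/consul-helm repository is real, but the Helm 2-era `--name` flag and the release name here are illustrative and may differ in current chart versions):

```shell
# Fetch the official consul-helm chart from GitHub.
git clone https://github.com/hashicorp/consul-helm.git

# Install Consul into the current Kubernetes cluster.
# "consul" is just our chosen release name.
helm install --name consul ./consul-helm
```

From there the chart stands up everything Consul needs as pods inside the cluster.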

When you look at something like Consul, we don’t try to say, “You should use Consul DNS for service discovery,” because Kubernetes already builds in first-class, very feature-rich service discovery. Instead what we do is say, “We’ll sync our service catalog into Kubernetes, so that you, as a developer, can continue to use those first-class features that the platform gives you and experience Kubernetes in a native way.”
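A sketch of turning that sync on, assuming the consul-helm chart’s documented `syncCatalog` values block (check the chart version you deploy for the exact flag names):

```shell
# Enable catalog sync between Consul and Kubernetes so each side
# can keep using its own native service discovery.
helm install --name consul ./consul-helm \
  --set syncCatalog.enabled=true
```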

Mixed-mode Kubernetes

The most important thing for mixed-mode Kubernetes is solving this workflow problem. How do we get a consistent workflow to solve multiple challenges, with these challenges being serving and consuming applications across platforms? How does a service in Kubernetes talk to a service outside of Kubernetes, and how do they do that securely? How do you manage resources required to run both in Kubernetes and outside of Kubernetes?

I’m spinning up VMs, I’m spinning up pods, I’m spinning up deployments, and I’m managing on-premises infrastructure. Are those 2 completely different tool sets, or can they be one? There are security challenges, and also, even with just Kubernetes, there are challenges around multi-data center Kubernetes. How do you bridge those gaps? So we’ve been looking at this as well.


To go through each product and how we’ve done this, let’s start with Consul. Earlier this summer we published a blog post where we outlined our big plan for initially integrating Consul with Kubernetes. And we’ve delivered on every aspect of this plan today. It’s all available now. We’ve published, and support, an official Helm chart for Consul that runs everything needed for Consul directly within Kubernetes. We’ve extended our cloud auto-join functionality to support discovering agents that are running within Kubernetes, which is extremely important for nodes running outside of the Kube cluster to join servers running inside the Kube cluster, a common deployment pattern we see.
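Cloud auto-join’s Kubernetes provider can be sketched like this from a node outside the cluster; the label selector is an assumption and should match whatever labels your Consul server pods actually carry:

```shell
# Join Consul servers running inside a Kubernetes cluster from an
# external node. The k8s provider reads ~/.kube/config by default.
consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"'
```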

We’ve shipped service catalog sync, so that services outside of Kubernetes could just use basic Consul DNS that they’re used to, to discover and connect to services inside Kubernetes without realizing where they are, or caring where they are.
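For example, a VM outside the cluster could look up a synced Kubernetes service with plain Consul DNS (“web” is a hypothetical service name; 8600 is Consul’s default DNS port):

```shell
# Resolve a Kubernetes service that catalog sync exposed in Consul.
dig @127.0.0.1 -p 8600 web.service.consul SRV
```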

And then vice versa, services inside Kube could find and discover using first-class service discovery that the platform provides to connect the services living on your VMs or on-prem. And then finally, we’ve introduced an automatic sidecar injector for your pods, for secure pod-to-pod communication. So that as you deploy things, they’re automatically using Connect for mutual TLS connections both within and across Kubernetes clusters. And all of this is available today.
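Opting a pod into the automatic sidecar injector can be sketched with the injector’s annotation; the pod name, image, and port here are placeholders:

```shell
# Deploy a pod that the injector automatically gives a Connect
# sidecar proxy, via the connect-inject annotation.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: static-server
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
spec:
  containers:
    - name: static-server
      image: hashicorp/http-echo
      args: ["-text=hello", "-listen=:8080"]
EOF
```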


With Terraform, we have 2 providers directly focused on the Kubernetes community: the Kubernetes provider and the Helm provider. The Helm provider was very recently made official thanks to our community, and within the past couple of months we’ve also hired full-time engineering staff to focus on these providers. So both of these providers are a major focus for us. They have full-time engineering dedicated to them, and they’ll see rapid improvements as we go on from here.
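A minimal sketch of the two providers working together; resource and argument names follow the providers’ documentation from around this period and may differ in current releases:

```shell
# Write a Terraform config that uses both providers, then plan it.
cat > main.tf <<'EOF'
provider "kubernetes" {}   # reads ~/.kube/config by default

provider "helm" {}

resource "kubernetes_namespace" "apps" {
  metadata {
    name = "apps"
  }
}

resource "helm_release" "consul" {
  name      = "consul"
  chart     = "./consul-helm"   # path to a local copy of the chart
  namespace = "apps"
}
EOF

terraform init && terraform plan
```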


Last year at HashiConf we announced the ability for Kubernetes service accounts to authenticate with Vault. That still exists. We’ve improved it, and it’s available today. In the future, we’re working on a number of ways that Vault can interact more with Kubernetes. We will be supporting an official Helm chart to install Vault directly on Kube. Bringing auto-unseal to open source helps immensely and is a big part of the Kubernetes strategy, because you’ll be able to automatically unseal Vault within the Kube cluster. And finally, we’re looking at ways to expose secrets in a way that’s more easily consumable for these Kubernetes-native environments, from Kube secrets to flex volumes and more. Those will all be available soon, and expect us to talk about them a lot more publicly before the year ends.
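That service-account flow can be sketched with Vault’s Kubernetes auth method; the paths and parameters follow Vault’s documented API, while names like `myapp` and the file paths are placeholders:

```shell
# Enable and configure the Kubernetes auth method.
vault auth enable kubernetes

vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc" \
  kubernetes_ca_cert=@ca.crt \
  token_reviewer_jwt=@reviewer.jwt

# Bind a Vault role to a specific service account and namespace.
vault write auth/kubernetes/role/myapp \
  bound_service_account_names=myapp \
  bound_service_account_namespaces=default \
  policies=myapp-policy \
  ttl=1h

# A pod then logs in with its mounted service account token.
vault write auth/kubernetes/login role=myapp \
  jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token
```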

And so Kubernetes is one major place that we’re investing. We’re also investing in the opposite side, very heavily, in the private data center. And this might sound funny because we are a cloud-first, cloud-native company, and cloud adoption is of course happening undeniably everywhere. But what’s fun is we’re also seeing growing private data center investments. They’re growing, they’re getting bigger, and people want to move workloads into the private data center. The tools we’ve built enable what we call the “cloud operating model,” a mindset of both tooling and philosophies that enable you to be more self-service, more as-code-oriented, more agile in your deployment and operations methodologies. And now a lot of companies are interested in asking the question, “Can we take those learnings that we’re applying to our cloud and future-facing environments, and bring those back down to the private data center and get those benefits right there?”

And the answer’s yes. Here’s a list of software that’s very common in the private data center and plays a major role there. I’ve split it across 3 categories. There’s “Compute,” with things like vSphere, Nutanix, and OpenStack. There’s “Networking,” with things like F5 and Cisco. And then there are application-level concerns, which are things like Active Directory, HSMs, and more. And the really fun part about this slide is that our software integrates with every single one of these. And how we integrate is across multiple pieces of software. So with Terraform, we support providers. We have a vSphere provider, there’s an F5 provider, Nutanix, and much more. For Consul, we’re integrating directly with that software for things like Connect. So we’re integrating with Venafi to be a Connect CA. And with Vault, for multiple years, we’ve integrated directly with HSMs, and we’re also looking at other integrations now.
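As one concrete private-data-center example, the vSphere provider can be sketched like this; the datacenter name is a placeholder, and credentials come from the VSPHERE_USER, VSPHERE_PASSWORD, and VSPHERE_SERVER environment variables the provider reads:

```shell
cat > vsphere.tf <<'EOF'
# Credentials are read from the VSPHERE_* environment variables.
provider "vsphere" {
  allow_unverified_ssl = true   # only for labs with self-signed certs
}

# Look up an existing datacenter by name (placeholder).
data "vsphere_datacenter" "dc" {
  name = "dc1"
}
EOF

terraform init && terraform plan
```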

And so the roadmap for private data center investment is to expand the partners that we have there and work with them to expand support in our software for private data center-focused tooling. We’re growing the ecosystem engineering team. Ecosystem engineering is a role at HashiCorp that’s embedded directly on each team, and they work directly on these sorts of integrations. So, if that sounds interesting: we’re also hiring ecosystem engineers. We’re also working with the education team, and what Armon talked about with things like the Learn platform, to clearly document case studies, architecture diagrams, and use cases directly targeted at the private data center, because we’ve heard from our customers that this is what they want to do, and they’ve seen immense value in using tools like Terraform to manage purely private data center deployments. And so we’re going to keep doing this, and we’re excited about what this holds for our customers.

And then there’s still cloud. We’ve always invested in cloud, and while Kubernetes and private data center are relatively new, we’re doubling down on cloud and remain committed. We’ve now partnered with all the major cloud providers all around the world, so we’re not just focused on one specific region, from AliCloud to VMware, Oracle, Google, Azure, and AWS. We have great partnerships with all of these organizations. To highlight a few and show how these partnerships have worked: With AWS this year, we’ve had multiple launch-day Terraform support items, for both EKS and ElasticMQ. The day that AWS announced availability, we had Terraform support right alongside it. We’ve also worked with Amazon to develop and support quick-start guides for Consul, Nomad, and Vault, and every single one of these quick-start guides is in the top 10% of most-viewed quick-start guides for Amazon.

We also have a great partnership with Google. We’ve worked with Google on multiple launch-day items as well. This year they announced Cloud HSM, which had launch-day support in Vault right away. They’ve helped develop the GCP secrets engine in Vault, their interaction with us on the Terraform provider is very close, and they’re focused on 100% coverage of Google Cloud resources.

And then with Azure, very similarly, we had a couple of launch-day items, including AKS. Also, Microsoft this year announced an image-building service for Azure that’s based completely on Packer under the covers, and they’re public about that. They also announced an ARM and Terraform integration, so that if you’re an ARM user, you can use Terraform providers directly from ARM. These are all major things that have happened since the last HashiConf. These are all brand new.

In addition to our partnerships, we’ve launched what we call “integration programs.” A little over a year ago, our first integration program was the Terraform provider development program. This is what Paul was alluding to earlier, about how we’ve been able to grow providers so quickly. So, a bit of history: A little over a year ago, HashiCorp ourselves were responsible for maintaining about 90 Terraform providers. So if you reported an issue or made a pull request, there was a team of about 5 in our company that had to respond to it, across all 90 providers. And we found that we were being very reactive. It was, “Oh, there’s a new issue here, so let’s go react over there.” We couldn’t plan very well because we didn’t know when these things would come up. So we developed the Terraform provider development program to work directly with vendors and the community to establish a relationship where we could help provide test environments for nightlies, release infrastructure, those sorts of things, while the community and the vendors help maintain the providers themselves.

We went from maintaining 90 providers ourselves to, just 12 months later, 89 official providers. And 89 official providers means that they are maintained by the vendor and the community, with HashiCorp providing some support. There are 74 community providers; these providers are explicitly unsupported by HashiCorp, but they usually have a vibrant community, are documented as such on the Terraform website, and are still generally good providers. The label just sets the expectation about our relationship. And there are still around 10 or fewer HashiCorp-maintained providers, but these are generally quite small utility providers: the TLS provider, things like that.

All the major cloud providers are part of our partnerships now, and they’re providing help there. And I think this is a major success story for Terraform because it’s enabled us to grow the number of providers dramatically, but also, for those in the community, you’ve probably seen that the number of releases across things like the Azure provider, AWS provider, Google provider are much more frequent and much more feature-rich. There’s a lot of stuff coming out of the Terraform community.

And then there’s Vault. And this is really new for us. We only launched the Vault integration program a month ago. The Vault integration program helps vendors navigate how to integrate HSMs, auth methods, secrets engines, and more. It’s very similar to the Terraform provider development program. We don’t have anybody that has come out of this program yet, but we have 3 that are very close. We’re working directly with 3 vendors that are pushing through this program, and they’ll graduate soon, and we’re excited to see where this goes.

In addition to the integration programs, we’ve also doubled the number of technology partners that we work with. Technology partners have a direct line of access to HashiCorp and vice versa. We share elements of our roadmap, documents, training. We work with them on blog posts, marketing. We work in the field with customers to provide great support and integration help for them. And we’ve doubled this number, which is great and a huge testament to our ecosystem partnering team. So ecosystem: super important to us. We’re making some major technical investments, people investments, and I’m really excited about it.

