HashiCast Episode 6 - Mitchell Hashimoto and Paul Banks, HashiCorp

Aug 09, 2018

This episode of HashiCast features Mitchell Hashimoto, founder and co-CTO of HashiCorp and Paul Banks, software engineer on the Consul team. Join us as we talk all things Consul Connect, and maybe we will reveal Mishra's secret DJ identity.


  • Mitchell Hashimoto

    Founder & Co-CTO, HashiCorp
  • Paul Banks

    Software Engineer, HashiCorp

In this episode we talk to Mitchell Hashimoto, founder and co-CTO of HashiCorp and Paul Banks, software engineer on the Consul team.

We get a little insight into the personalities and backgrounds of both guests, and most importantly we learn some amazing things about the new feature in Consul called Connect: why we need a service mesh, and how it helps with security for dynamically scheduled and legacy applications.



  • Anubhav Mishra

    Developer Advocate, HashiCorp
  • Nic Jackson

    Developer Advocate, HashiCorp

Nic Jackson: Welcome to HashiCast, the self-proclaimed number one podcast about the world of DevOps practices, tools, and practitioners.

Today on HashiCast, we're gonna talk all things Consul Connect, and we've got two very special guests for you.

We've got Mitchell Hashimoto, the co-founder and co-CTO of HashiCorp, and we've got Paul Banks, who's a software engineer on the Consul team and one of the main contributors to the new Connect service mesh feature.

Mitchell created Vagrant while in college in Seattle, where he met Armon Dadgar. Together they founded HashiCorp, and the rest is history. Hopefully today we're gonna find out a little about that history.

Welcome, Mitchell, it's always a pleasure to talk to you.

Mitchell Hashimoto: Thanks, Nic, I'm excited to be here.

Anubhav Mishra: And we also have Paul Banks, who's been with HashiCorp for just over a year now. He's previously worked on the platform team at Compose, which is an IBM company, building their database-as-a-service platform.

He was also the principal infrastructure engineer at DeviantArt.com. He has over ten years of experience building applications and infrastructure with open-source technology.

Welcome, Paul.

Paul Banks: Hey, thanks. Great to be here.

Nic: Alright. Before we dive into Connect, I'm curious. We had Armon on the show a few episodes ago, and we asked him all things about your partnership. He didn't throw any shade, but we're interested. What's your perspective? Can you give us your first impression of Armon?

Mitchell: My first impression of Armon was actually, "What the heck? Why is this guy so young?" A little-known fact about Armon is, he went to college two years early. So when we first met, I was 19 or something, but he should have been, in my eyes, in high school and it was sort of apparent. So when we first met, I was like, "Wait..."

He was in charge of the project I was joining, so I was like, "Who is this kid that's in charge of this project?" But after that it was really apparent that he's a brilliant guy.

Nic: And from that partnership some amazing things have come. Did you think when you were sitting hacking on Vagrant that you'd go on to form HashiCorp and create many of the products?

Mitchell: Form HashiCorp? Definitely, no. There was never any plan to start a company. From the product side, we had a lot of ideas and I was optimistic that we would build them one day. I didn't really think that through early on, how we would make that possible and build communities and things like that.

A lot of the ideas, not all of them, but a lot of the ideas, we ended up building here at HashiCorp, we definitely wanted to build back then.

Nic: The process is really interesting, as well, because I remember you and Armon talking about that process. Specifically this was when you went through the ideas and the process of designing the Sentinel language.

But can you talk us through this? What's the secret sauce, what's the process that HashiCorp goes through as an organization when it's trying to come up with a new product or develop an idea?

Mitchell: I think this is where Armon's and my biases or skillsets, I dunno how you'd describe 'em, become really beneficial. Historically, Armon really likes to think about things from an architecture-first perspective. He likes to think through, "What algorithms are we out to study? What areas of CS are going to be important here? How are we gonna structure the system?"

I care about that, but my first inclination is always to think about, "How are we gonna use it? What's it gonna feel like? What does it look like?" I guess I'm more of a ReadMe-driven-development type of individual and Armon is more of a system-design type of person.

» The genesis and future of Terraform

Whenever we would work on projects together, what we would usually do is lean on those strengths. I would write a bunch of readmes, I'd create fake shell scripts that acted like the thing existed. I would use it, play around with it, and make fake configurations of what I wanted it to do. And Armon would go and think through the system. With Terraform, for example, one of the initial ideas was, "Each resource is a finite state machine, and we're synchronizing finite state machines, maybe across multiple machines," and he'd look at what that would require and things like that.

The benefit is, surprisingly, we end up meeting in the middle, usually, and I think we make better products because of it. We make products that technically work quite well, but feel good at the same time. That's sort of the process, and it's also the process we've been trying to figure out how to keep mirroring as we grow the engineering org at the company.

Nic: The process seems to be working pretty well. I've been a Terraform user for many years and other than the proposed 0.12 changes, HCL hasn't changed a great deal, if at all.

Mitchell: Yeah, yeah. It has its issues and that's why we're doing 0.12. It had some major issues and we're working on 'em. But all things considered, for how quickly that came together, it's been surprising.

We expected a lot more negativity when we released Terraform. We thought we would get a lot more backlash, but it's been four years now, and we've done a lot of user study of new users and customers and really advanced users. Across our products, Terraform with HCL is consistently the one people enjoy the most, over JSON with Packer or Ruby with Vagrant.

There are pros and cons, for sure. I'm not saying it's a silver bullet that solves all these problems, but it's the one that's been the most successful in terms of user happiness and productivity. I think we hit something really good, and that's why we've also been expanding HCL to all our other tools as well.

Nic: Awesome.

Mishra: For me what's been interesting is looking at the 0.12 release that's coming up and seeing how the team's going about previewing features. We can't just drop a 0.12 release; we have to tease certain features and get the community ready.

We've never done this in the past, Mitchell, so why have we taken this approach with 0.12?

Mitchell: There are a few reasons. We wanted to prepare users for what was coming, because there are some breaking changes. We wanted users to know months in advance what those breaking changes would be. We wanted to solicit feedback of what people thought early on, so we could fix it.

But also, we were afraid that if we released 0.12 and just dumped all these features, you wouldn't see how much is actually there. Honestly, I thought that the breaking changes would stand out above everything else and you wouldn't realize that we had introduced only a handful—literally three or four—of breaking changes. But for those three or four breaking changes, you're getting over a dozen major new language features.

The trade-off is one that I think any Terraform user would take, and the breaking changes aren't even that bad. They're not like, rewrite your whole config, they're little things. I think that's a trade-off anyone would make, but I think that if we released it all at once, it wouldn't be obvious.

Releasing them one at a time, on a weekly blog-post series has also helped focus the feedback. Each week we'll get feedback just for the thing that we talked about, just because it's hot in terms of Twitter or the community, and it's the focus. That's been super helpful.

Generally, it's been really positive. We have gotten a few concerns, but we're using that feedback to shape what tooling is built in the Terraform team to help with upgrades, and where we should focus better error messages and things.

Ideally it's everywhere, but in reality you've gotta focus on some stuff, so we're using that feedback to do that.

Mishra: I'm glad we're taking feedback, even if it's negative or in some cases positive, which is pretty awesome for us.

Mitchell: Yeah.

» How Paul Banks got into tech

Mishra: Before we dive into Connect, Paul, I would like to ask you a few questions to get a sense of your background and where you come from. How did you get started in tech, how did you get into computer science, is this something you were interested in in high school or when did that transition happen for you?

Paul: That's a good question. I'm not really sure if I ever officially got into computer science, but here I am. My degree is in music technology and sound engineering. I only really got into making websites because I had to make one for a band that I was sound engineering for at the time.

Fast-forward a few years, I realized that IT is better than the music industry in a lot of ways, and I did a lot of web design stuff for a while. Over the years, I got more and more interested in the layers that were underneath what I was doing and ended up being quite deep in the infrastructure and just really loving networking and databases. Kind of that low-level stuff that makes everything go. So that's where I've ended up.

Mishra: It's funny, the more infrastructure people I talk to, a lot of people have the music connection. I used to produce music in high school and university and wanted to do a minor in music, but I never followed through because my mom wouldn't allow me to take music as a minor. She's like, "No, it would not look good on your degree when you put it up on the wall." I was like, "Mom, come on. You can't say that."

Paul: That's great.

Nic: All the best music's in a minor key as well.

Mitchell: Do you watch the TV show "Parks and Recreation"? I hear you have a Duke Silver identity somewhere, like some secret music-producing identity no one knows about.

Mishra: Oh my god.

Mitchell: I'm gonna be at a jazz club and I'm going to be like, "Is that Mishra?!"

Mishra: No, I wish I were into jazz, but I was into progressive house music, and that was my thing for a while. I do have a name, an anonymous DJ name that I produce with, but no one will know that, so that's fine.

Nic: I played a lot of jazz in college, actually. I'm into it. That's me.

Mitchell: That's awesome.

Mishra: I wish I could get into jazz, but yeah.

Let's just switch and talk about networking. I did some research on you, Paul, and when you were at Compose, you were working with Open vSwitch, overlay networks, and things like that.

Could you tell us a bit more about your work in that area?

Paul: Yeah, my work in that area was actually, "Here's a complicated production system running tens of thousands of databases for customers that are all built on Open vSwitch, and no one understands it. Help."

So I had to learn a lot about networking and overlays, in a relatively short time. But I got into it and enjoyed the challenge, and I guess formed a lot of ideas about what it's like. How interesting it can be to do very clever overlay networking, flipping packets around. And also how much of a burden it is in production to have no one on your engineering team really understand what's going on in this overlay network.

Any problem that comes up, everyone just goes, "Oh, it's in the overlay network, a black box, can't do anything about it." And that's actually really disempowering for organizations. There are pros and cons. You can do a lot of interesting stuff, but there's also a really high cost there.

I got interested in that whole space, and it's fun to see how we can enable some of the cool stuff that you can do in SDN, but without the magic and the black box and the bit that no one understands in the middle, using basic protocols.

Mishra: Today practitioners have so many choices when it comes to overlay networks. They're using different technologies, and you can choose between the things that you're familiar with. Everything has to do with the Linux kernel, and you have to understand some bits there. But people sometimes don't really think about those things, and it can be painful in the long run, I feel.

» Consul Connect

It's a good segue in talking about Consul Connect, and I'm sure the listeners are dying to hear a little bit more of the ins and outs of Connect.

Let's talk about the design process. Could you explain to the listeners what Consul Connect is and how you went about designing Connect?

Paul: Consul Connect is service-to-service authorization, using mutual TLS.

In your service mesh, we wanna be able to secure the access between the different services that are communicating in your data center. Once you have really dynamic workloads, you can't do that by just configuring firewalls manually. It doesn't work. And even if you configure them with tooling, you end up with a multiplication problem: for all the different instances, you have this explosion in the number of rules you need, and in the rate at which they change when services are moving from host to host.

Connect aims to solve that by just moving identity enforcement to the service level, using TLS and certificates, rather than being at the firewall/IP level where you have to maintain these IP lists everywhere, all the time.
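Paul's scaling argument can be made concrete with a little back-of-the-envelope arithmetic. The service and instance counts below are made up; the point is that IP-based rules grow with instance pairs, while identity-based rules grow only with service pairs.

```python
# Back-of-the-envelope illustration of the rule explosion Paul describes.
# The counts below are hypothetical.
services = 20        # distinct logical services
instances_each = 50  # scheduled instances per service

# IP/firewall model: in the worst case every instance pair needs a rule,
# and rules churn whenever the scheduler moves an instance to a new IP.
ip_rules = (services * instances_each) ** 2

# Identity model: one allow/deny rule per service pair, no matter how
# many instances exist or where they run.
identity_rules = services ** 2

print(ip_rules, identity_rules)  # 1000000 400
```

The second number also stays stable as services reschedule, which is the other half of Paul's point: identity rules don't change when IPs do.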

Mitchell: The way I like to describe it is, think of any side project you've ever started, like a fun little website. If you ever connected even the web server to the database or if you did microservice and then two services, you probably didn't encrypt that. I know I didn't encrypt that. You're just trying to launch a side project. You just get it going.

But it should be encrypted, right? If it were easy, I would do it. And the whole point of Connect is to make that the default, and make that easy. So how do you make it so that, I launch these two different things that need to talk to each other ... I just feel like I'm connecting over an unencrypted connection, because that's what's easiest for me as a developer, and the complexity is Consul in the background and Connect and making that all encrypted for me.

That's how I like to view it, from that user-centric point of view.

Mishra: And just to be clear, is Connect a separate product? It's a feature in Consul, right?

Paul: It's a major feature, and it rounds out the story for using Consul as a service mesh. We already did the discovery part, we already did distributed configuration and being able to watch things. Connect solves the problem of securing the connections between services as well.
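As a sketch of what "a feature in Consul" means in practice: a service opts into Connect through its ordinary service definition. The payload below follows the general shape of Consul's service-definition format (the `connect` and `sidecar_service` keys), but the names and values here are illustrative, and the exact schema has evolved across Consul versions.

```python
# Sketch of a Connect-enabled service registration. Field names follow
# Consul's service-definition format, but treat this as illustrative:
# the exact schema has changed across Consul releases.
import json

registration = {
    "service": {
        "name": "web",
        "port": 8080,
        "connect": {
            # Asking the agent to manage a sidecar proxy means the app
            # keeps speaking plain TCP to localhost while the proxy
            # handles mutual TLS on the wire.
            "sidecar_service": {},
        },
    },
}

print(json.dumps(registration, indent=2))
```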

Mishra: Cool. Could you give some insight into some of the early design process and how we approach problem solving? I know Consul's been solving the service discovery and config storage problem at really large scale for a while.

I think bringing in yet another feature, it could be difficult. It could be challenging. What were the design challenges that you foresaw in terms of implementing it at a scale that Consul was already used to?

Paul: For a bit of history, the original motivation for what is now Connect came from customers. It came from users who had very complicated infrastructure. Lots of legacy stuff around. Maybe they were doing public cloud, cloud native-y things, but that certainly wasn't the bulk of it. They had old systems. They had monolithic apps. Even hardware and mainframes and things, and data centers that all needed to connect.

The problem that they had with that infrastructure, that we didn't have a solution for, was, "How do we secure all this stuff? How do we keep up with the changes and enforce the network policy that we know we want? How do we map that to these tens of thousands of iptables rules across all these hosts?" It was a nightmare.

That was kind of the problem definition that Armon and Mitchell were thinking about. It quickly became obvious that even if you automate some of that programmatically, you still have this explosion in the number of rules when everything is tied to network addresses.

That's kind of where the identity-based security idea came from. It was a natural fit with the control plane that we had already built. The nice thing about Consul is that it's got all the bits you need for a data-center control plane: it's got health checking built in, and it scales to huge clusters because of the combination of an eventually consistent gossip pool along with strongly consistent central state.

There are a lot of moving parts in there, and blocking queries, so you can watch for real-time updates. It's got all the bits we need for building a data-center control plane. The question was: How do we use that to solve this security-and-access problem? Mutual TLS was the obvious and simplest answer we could come up with.
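The "blocking queries" Paul mentions are long-polls against Consul's HTTP API: you pass back the last `X-Consul-Index` value you saw, and the request blocks until the state changes. Here's a minimal sketch; the endpoint and header are from Consul's documented API, but the service name is hypothetical and the reconnect/error handling a real watcher needs is omitted.

```python
# Minimal sketch of a Consul blocking-query watch loop. The
# /v1/health/service endpoint and X-Consul-Index header are part of
# Consul's documented HTTP API; error handling is omitted.
import json
import urllib.request

def next_index(prev, new):
    """Consul's docs say to reset and start over if the index goes backwards."""
    return new if new > prev else 0

def watch_service(base_url, service, handler, max_rounds=1):
    index = 0
    for _ in range(max_rounds):
        url = f"{base_url}/v1/health/service/{service}?index={index}&wait=30s"
        with urllib.request.urlopen(url) as resp:
            new = int(resp.headers["X-Consul-Index"])
            body = json.load(resp)
        if new != index:
            handler(body)  # state changed: react to the new instance list
        index = next_index(index, new)
```

Calling `watch_service("http://127.0.0.1:8500", "web", print)` against a running agent would print the health entries for `web` whenever they change.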

We could have built a new software-defined networking thing. Who knows? Maybe we will one day. Mutual TLS was the obvious thing to do first. That's where it came from.

In terms of design process, once that was nailed down, it took several people quite a while thinking about the problem in a general way. Then when we had a good idea about the scope and what we wanted to build, we wrote a lot of design docs. I think we spent two or three weeks up front just writing RFCs.

Mitchell: I want to say something people may not know. Even when it was just Armon and I, for years our major new product features and new products in general ... We are a very heavy design-specification-oriented company. We lean further towards a military contractor than an agile startup, right?

I think some people have laughed that this is kind of waterfall-ish. It is, in a sense. We try to make them smaller, so it's not like that, but with new products, it's definitely kind of like that. Paul could correct me if I'm wrong, but I think prior to any code being written for Connect, we had written at least 40 to 50 Google Docs pages of internal design of how it would work.

The whole point of that is rubber-ducking, in a sense. By forcing you to talk through it, you realize, “Oh, what about this?” Other people think that too, especially on a team. It puts people on the same page. I think that the key is to make those smaller so that you're not waterfalling an entire product at a time, but when we first started Connect, that's how it went.

Paul: I think that was broken out across maybe ten or twelve different RFC documents.

Mitchell: Yeah, that sounds right.

Paul: High-level documents. Each one was relatively focused, but it drilled into a lot of detail, especially the initial design ones where we really had to understand things. I think the first ones we wrote, one of them was, what exactly is this? What's our TLS going to look like?

The reason that's a really important thing to nail first is because if we all have this fuzzy idea that, yeah, sure, our services are going to have a certificate, but we've not really thought about exactly what's in an identity, you can get a long way through talking about system-level architecture things and not be on the same page about exactly how it's going to work. That process, I actually find really fun. It's a perfect match for me. I love to write verbose documents about things.

Mitchell: I like it, too. Obviously. No promises, but my dream one day is that we'll just take all the RFCs, good and bad, terrible ideas and decent ideas, and bind them together in a giant book. And be like, this was ten years of HashiCorp engineering and all the mistakes we made. Here it is in its raw form.

Paul: That'd be awesome.

Mishra: That would be a best seller.

Paul: My wife jokes that I found the perfect job because I can't write an email response to someone without sitting there for half an hour typing. She's like, "It's great, people like it."

Mishra: Do you ever think about the consumers? I do not have the patience that you all have. I remember reading one of the RFCs. I gave up. I gave up halfway through, to be honest. I was like, this is way too much to read for 9 AM.

Paul: I write it all and then I go, "I'm really glad I don't have to read this."

Mitchell: I think there is an expectation that there are very few people that read the whole thing, and that's okay. What's more important is that the knowledge was recorded and that you worked through it. The rare cases where I think people do read a lot of it, is when you get a new team member that's diving into a subsystem. It's really boring and really dry, but it's more contextually accurate than if I were to explain why we did something in Connect today. I forgot a lot of the context. It's like an encyclopedia. You don't read it cover to cover.

Paul: I think the other thing is, we have quite a set format for how we write. There are at least several elements that bring a lot of benefits. One of those is that the first page or two is a high-level problem statement. The background of why we're even solving the thing and why it's the right thing to do now, and previous attempts and that kind of stuff. Then, just a really high level, this is the proposal. This is how it's going to work.

The very next thing we do is talk about the UX. What's it going to look like on the CLI? What's the config going to look like? For most people, it's those first sections that you want to read. That's going to tell you what this product's going to do and what it's going to feel like. That's where the feedback comes from. Then it's only the team who are really in the weeds on the details that need to read through the other ten pages: Is this exactly the algorithm we need to use? What are the data-model concerns? How are we going to work out the API issues? All that sort of stuff.

Mitchell: I think we should jump back.

Mishra: I've been playing around with Connect. Both Nic and I have been playing around with it. We've been trying to use it for things like Lambda functions. These are all teasers. Hopefully something comes out soon. We'll do an official blog post or something.

» Connect: Why we started with layer 4

What was interesting for me was the focus of Connect, which is unlike other solutions that are out there. Right now it's on layer 4, and I know we will extend the functionality to layer 7. My question was: Why did we start with layer 4? Why did we focus on layer-4 connections and layer-4 security and things like that?

Paul: That really came out of that history I mentioned, where we were looking to solve the problem that people find managing their firewalls difficult in this very dynamic environment. Layer 4 made a ton of sense because people were coming to us saying they wanted a way to secure the traffic between all these different things, and the things were all using TCP.

What was less the case than you might imagine now, when you're talking about service meshes, was people coming and saying, "We've got a hundred gRPC microservices in our Kubernetes cluster. Can you figure out how to secure them?" The reason we started at layer 4 was that it was just what was going to work everywhere. It covers pretty much every use case people have.

Then, having built out this control plane and the layer 4 proxying and the other things we built, which were just there to make it easier to adopt without having to build TLS into your own application, we realized we had all the moving parts for a really great, higher-level service mesh story as well. That's where that's come from. But that's why it's layer 4 first. The other stuff was going to be layered on top.

Nic: Mitchell touched on this slightly earlier. To be honest, one of the great features of running at L4 is that you can TLS-enable a database with zero effort. That's not a trivial thing to do otherwise.

Mitchell: I think that's what I was going to say. From the open-source side, what a lot of people don't realize about HashiCorp, it's a nice secret we have. We are an enterprise software vendor. Our paying customers are Global 2000 companies.

We have this one thing we care deeply about, which is the open-source community. It's like a Venn diagram. There's a huge non-overlapping world that is startups and individuals and hobbyists. We love those people. But the people that are funding us are actually this old, slow-moving, heavily legacy-burdened set of companies.

L7-first wasn't really even an option, because on one side, HTTP and gRPC everywhere seems obvious, but on the other side, even HTTP is not that common as a service protocol.

You have to support layer 4 as the least common denominator. It's sort of like how we chose DNS for Consul. You've got to find these lowest common denominators that exist to enable large-scale usage. That's what we start with. We plan on heading into layer 7 and doing some more stuff there, but layer 4 as a foundation was critical.
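Consul's DNS interface, the lowest-common-denominator choice Mitchell mentions, exposes services under names of the form `<service>.service[.<datacenter>].consul`. A tiny helper shows the shape; the service and datacenter names are hypothetical.

```python
# Consul's DNS names follow <service>.service[.<datacenter>].<domain>,
# with "consul" as the default domain. The service and datacenter
# values here are hypothetical.
def consul_dns_name(service, datacenter=None, domain="consul"):
    parts = [service, "service"]
    if datacenter is not None:
        parts.append(datacenter)
    parts.append(domain)
    return ".".join(parts)

print(consul_dns_name("web"))         # web.service.consul
print(consul_dns_name("db", "dc1"))   # db.service.dc1.consul
```

Because any client that can resolve DNS can use these names, no application changes are needed for basic discovery, which is exactly the "works everywhere" property described above.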

» SPIFFE (Secure Production Identity Framework for Everyone)

Mishra: I was going through the certificate signing and the certificate identity section, and I found that Connect uses [SPIFFE](https://spiffe.io/) IDs to delegate identity to these entities. These might be servers or nodes or processes that are just running in the mesh. Could you tell us more about why we chose SPIFFE and what SPIFFE is?

Paul: SPIFFE is a new standard for service-based identity management. I can't off the top of my head remember exactly what it stands for.

Nic: Just for the listeners, SPIFFE stands for Secure Production Identity Framework for Everyone. I didn't go to the webpage or anything for that.

Paul: It was developed alongside the Istio project; Istio's TLS certificates also use the same SPIFFE format. SPIFFE is a separate standard that's being shepherded by a startup and a bunch of corporate sponsors. SPIFFE itself describes how you encode identities within identity documents. For now it's just X.509 certificates, but they can add a bunch of others as well.

Then they have another part called SPIRE, the SPIFFE Runtime Environment, which is all about how you deal with identity. For now that's not part of what we're integrating with, because we're relying on Consul's existing ACLs and tie-ins with cloud platform identity.

The SPIFFE certificate format itself was a bit of a no-brainer because it's already a standard that's out there. It's already being used by Istio and a bunch of other tools. If we'd come up with our own, it would have only been arbitrarily different. There was no real technical reason not to use the one that was out there.
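For a feel of what a SPIFFE ID looks like: it's a URI with a trust domain and a path that encodes the workload's identity. The `spiffe://` scheme and trust-domain/path split come from the SPIFFE standard, but the Consul-style path segments (`/ns/.../dc/.../svc/...`) and the trust-domain value below are illustrative assumptions, not a normative spec.

```python
# Parse an illustrative Consul-style SPIFFE ID. The spiffe:// scheme is
# from the SPIFFE standard; the exact path layout below is an assumption
# modeled on Consul's format, for illustration only.
from urllib.parse import urlparse

def parse_spiffe_id(uri):
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe":
        raise ValueError("SPIFFE IDs always use the spiffe:// scheme")
    # Treat the path as alternating key/value segments:
    # ns/default/dc/dc1/svc/web -> {"ns": "default", "dc": "dc1", "svc": "web"}
    parts = parsed.path.strip("/").split("/")
    fields = dict(zip(parts[0::2], parts[1::2]))
    return parsed.netloc, fields

trust_domain, fields = parse_spiffe_id(
    "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/web"
)
print(trust_domain, fields["svc"])
```

The `svc` field is what an authorizing proxy compares against its allow/deny rules, which is why standardizing the ID format matters for interoperability with tools like Istio.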

Mishra: After we were done with this really tedious design process, I think it took around 12 weeks from idea to building Connect. I know I've played with it. It looks pretty impressive. Could you tell us more about what's in the feature? What are the features that people can try today, and what can they expect going into GA?

Paul: The announcement and beta release just over a month ago covered the core feature set of automated certificate management. We already have an option to use Vault as an external CA, and we have a built-in proxy that you can use to stand things up and get going on day one.

The big things that we are adding right now are ... One of them is support for Envoy as a proxy as well, so you can benefit from the performance and all the other features Envoy already has. That's hopefully going to come out before GA.

We're also working on integrations with Kubernetes and really making the workflow of getting Connect up and running and secure much better. Consul's ACLs need a bit of love to make that happen. There's a bunch of stuff there.

For GA there'll be a couple of enterprise features too, like multi-datacenter support, where we automate certificate rotation and management across all of your different data centers. I forget, is that all we have, Mitchell? For 1.3?

Mitchell: I mean, I think the big ones are the CA and Envoy. Yeah, those are huge features.

Paul: Yeah, the announcement for that is hopefully gonna be later in the year, and then we have a really long list of things we wanna do with this, stretching to the end of the year and into next year. A lot more layer 7 support, especially once we've got Envoy in there. And just more robustness in the enterprise world: things like audit logging, gathering telemetry, and exposing insights about who's talking to who and how the security of your systems is doing. There's a lot of stuff once we've got the big moving parts.

Nic: It sounds really cool and I'm looking forward to playing with the new features so hopefully we get a chance to play with those before the GA.

But Mitchell, I'm just wondering, you get out and about talking with a lot of our customers and various people not just in the industry but companies that are building technology. Have you got any feedback on companies using Connect yet?

» Feedback from companies using Connect

Mitchell: Yes. I would say the production roll-outs are pretty limited at this point, but there's quite a lot of interest and people starting to test this out. I mentioned it in one sentence in the Connect announcement, but it's one of the coolest and scariest things about Connect being built into Consul. The most recent figure we get from publicly connected Consul instances (and this ignores a huge number of instances) was over 5 million phone-homes by Consul agents. So there's a huge existing install base out there.

We work directly with a lot of customers that themselves have huge clusters that aren't connected to the Internet, so they aren't part of that number. Connect did involve core changes to Consul, so the fact that it deployed pretty successfully is really important. First of all, even if people aren't using Connect, they're validating a lot of the Connect architecture. But for the people that are planning on using Connect, it's also almost instantly at a fairly large scale, which is frightening. Usually, especially with our background, you'd release something new and slowly get larger and larger adoption. If you're successful, that's what you want.

But given Consul's preexisting nature, the companies that want to turn it on aren't just adopting Consul. They already have Consul, and dozens of data centers with thousands of nodes per data center, and they just wanna flip it on. That is the fear and the struggle we're working with right now: making sure that will work. That's why this testing process is happening and why it's not in production yet. But a really cool aspect of that is, there are a lot of service mesh solutions out there today, but we get to go into these large deployments basically on day one. That's a benefit I'm super thankful for, but we gotta do it right.

Nic: And I'm pretty excited to hear those use cases as well. I guess we'll start hearing some stuff maybe in blog posts and things, but certainly by HashiDays next year, we'll probably have some great talks from people, which is just gonna be neat.

You touch on Kubernetes. Now, there's a plan for Kubernetes and Connect. The plan is to have a native integration. Can you tell us a little how that's gonna work? How are Connect and Kubernetes gonna work in harmony?

Mitchell: Yeah, and I'll take it back a little. It's Consul and Kubernetes, and Connect is one big part of that, along with some of our other products, but given we're talking about Connect, we'll focus on Consul. I think there are two big opportunities. The first is that when you're using Kubernetes exclusively, there are a lot of features we could enable for you that are really valuable. We're gonna make Connect with Kubernetes basically automatic. Anything you deploy, any pods that get started, can automatically have their connections encrypted and authorized and so on. That's super powerful, and there will be little to no work to make that a reality.
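[Editor's note: as an illustrative sketch only. The mechanism Mitchell describes wasn't final at the time of this recording; the annotation name, the injector, and the image below are assumptions, not a confirmed API. The "automatic" experience could plausibly look like opting a pod into Connect with a single annotation:]

```yaml
# Hypothetical pod spec: one annotation asks an (assumed) Connect
# injector to add a sidecar proxy that handles mutual TLS, so the
# app container itself needs no TLS configuration at all.
apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    consul.hashicorp.com/connect-inject: "true"   # assumed annotation name
spec:
  containers:
    - name: web
      image: example/web:latest   # placeholder image
      ports:
        - containerPort: 8080
```

[In this sketch, the injected proxy would register the pod as a Connect-enabled service in Consul's catalog and encrypt and authorize its connections on the app's behalf.]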

And the other use case is more like the bigger-company use case, where they're definitely adopting Kubernetes, but they have a ton of other applications as well. They have external services like databases, as you mentioned, but they also have stuff running on their cloud VMs and in on-premise data centers. One of the challenges they're having is encrypting, or not even encrypting, just making network connections between all these things, and also how do you secure those, how do you authorize those. So for them we're working on a bunch of functionality, and this will help everyone, to make it easier for Consul agents to join servers that are running in Kubernetes, or for Consul agents to discover and communicate with Kubernetes services.

Likewise, if you're using Kubernetes, the Kubernetes service discovery is built in, it's there, it has a lot of features, and you don't wanna have to use Consul's DNS or anything, because you wanna use the variables you get in your YAML resources and things like that to reference all this stuff. So we're planning features to automatically sync Consul's catalog to Kubernetes' services catalog, and you'll be able to talk to external services as if they were running in Kubernetes. We're just gonna make this all automatic, and that's the benefit you get from software like Consul, which is specifically not tied to any platform. We run in physical data centers, we run on the cloud, and we're gonna make it run as natively as possible on Kubernetes. You get all that across all these things, and I think that's the real power.
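[Editor's note: one plausible shape for the catalog sync Mitchell describes, sketched here under assumption since the feature was still being planned at recording time. An external service registered in Consul, say a database on a VM, could surface inside Kubernetes as an ordinary Service object, so pods reference it through normal cluster DNS like any in-cluster service:]

```yaml
# Hypothetical result of syncing a Consul-registered service named
# "billing-db" into the Kubernetes services catalog. Pods resolve it
# via ordinary cluster DNS even though the real endpoint lives
# outside Kubernetes; the label below is an assumed marker.
apiVersion: v1
kind: Service
metadata:
  name: billing-db
  labels:
    consul-synced: "true"
spec:
  type: ExternalName
  externalName: billing-db.service.consul   # Consul DNS name for the service
```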

Nic: That also answers my next question, which is, why run Consul on Kubernetes when you've already got etcd, but I guess the benefit is that Consul extends beyond the Kubernetes cluster, far far beyond.

Mitchell: That's a really common question, and I think if you view Consul as just a key-value store, it's an obvious question. But the use cases I'm talking about aren't KV at all. None of the ones I mentioned use KV directly in any way, so it's the non-KV features. The way I like to describe it to Kubernetes users is that etcd is a core component of Kubernetes; it's just part of the foundation of that software. And Kubernetes users always like to say that Kubernetes is the new Linux, it's a platform, it's this underlying thing that you use, and that's exactly how we're looking at it. Consul is an application on top of that platform that's enabling a lot of other things, but we're not trying to replace any core components that make up that system. We're not writing a new scheduler or anything like that; it's higher-level functionality that's benefiting the user.

Nic: That's cool. I speak to a few people when I'm out and about, and people seem genuinely surprised that we do so much work with Kubernetes. However, it shouldn't really be surprising; after all, one of the concepts of The Tao of HashiCorp is workflows, not technologies. Could you talk to us about what the future holds for further integrations between HashiCorp products and Kubernetes?

Mitchell: Yeah, there are Tao elements to be sure, but I'll make it a little less lofty. We have four to seven different directions that our open-source projects go in, and they live in totally different categories, right? Vault's in security, Consul is in the service mesh networking space, Terraform is in infrastructure. They're totally different categories, and if we took the stance that we can't integrate with anything that could potentially compete with any of those, it would be really immature for one, but it would also be terrible for the business.

One of the points we make is that we don't view competition across product lines. If you're looking at Nomad, then you're probably looking at Kubernetes as well, or vice versa. It could be an either-or, it could be both. But if you're looking at Vault, it's not an either-or. Vault integrates fantastically with Nomad, and we're going to integrate Vault fantastically with Kubernetes as well. All our engineers are aligned with that, our sales team is aligned with that; when we go in to customers, you don't have to adopt our other tools at all.

And it's very common for us to go in to customers who think they want one thing and we give them the other thing, because we'll go in and they'll say they really need Consul and it turns out what they really need is Vault. We'll get that going, because they already have a solution in place that we don't need to replace; it's good enough. So there's the Tao, "It's workflows, not technologies," but there's another document we have, which is Our Principles, and I think that one is more about working with integrity and honesty. We need to be a successful company, but our job isn't to pillage and extract as much money as possible. Our job is to build good technology, enable companies, and be successful in the process, and to balance all of that.

Nic: I think that's really nice, and certainly when I used HashiCorp tools in production, before working for the company, that was one of the things I did enjoy: working with the tools. They work across the board, no matter where you are or what you're doing, which is really nice. We're at the end of the serious questions, so we have our somewhat traditional, and yeah, you love this one, slightly less serious question for you both, and I'm gonna throw this to Paul first. Now Paul, if you were a flavor of ice cream, which flavor of ice cream would you be? And why?

Paul: I've thought about this one long and hard in my preparation, and I think it's gonna have to be salted caramel.

Nic: Interesting, why?

Paul: I'm trying to scrape around for why now, but it's great ice cream and it's a great combination of salty and sweet, right?

Mitchell: Oh damn. I don't know, Paul, you haven't come off as very salty to me, but I accept that.

Paul: I can be.

Nic: What about you, Mitchell?

Mitchell: I also thought about this; we got to see the questions in advance. I've been thinking about it a little bit, and I would have to say it's probably cookie dough ice cream. It's mostly vanilla, like I'm mostly just what you would expect, a boring type of person, but inside there are these little concentrated, surprising nuggets, and I think that's how I would describe myself. On the surface, it's like, "That guy's just a normal guy, a boring normal guy," and then it's like, "Wait, he knows way too much about this one thing," or "He's way too fanatical about Teslas," or something weird is going on in there. So that's me.

Paul: I think we all appreciate your surprising nuggets.

Nic: I wanna thank you both for taking the time to join us today and for sharing some interesting information with our listeners. I hope everyone's as excited as we are about Consul Connect, so get out there and give the beta a try today, and most importantly, give us some feedback. Let us know what you think and if there's anything that's missing; that's really, really useful to us.

So thank you, thank you both.

Mitchell: Thanks, it was fun.

Paul: Thank you.

Speaker 1: You've been listening to HashiCast, with your hosts Nic and Mishra. Today's guests have been Mitchell Hashimoto and Paul Banks from HashiCorp. Be sure to tune in next time.
