Keynote - Introduction to Consul 1.9 and a Demo
Oct 20, 2020
HashiCorp Consul continues to add more service mesh features as it reaches its 1.9 beta release. These features include app-aware intentions, visualization enhancements, and custom resource definitions.
- Neena Pemmaraju, Director of Product Management, HashiCorp
- Cody DeArkland, Technical Product Marketing Manager, Consul, HashiCorp
Watch the HashiCorp Consul team share what's coming in Consul 1.9, and then showcase Consul 1.9 in a live demo.
Welcome to HashiConf Digital 2020, and thank you, Armon and Mitchell, for that look back into Consul, one of HashiCorp's most downloaded tools since 2014, with a large community of users.
Earlier this year, HashiCorp joined the Cloud Native Computing Foundation to help users of CNCF technologies become successful with HashiCorp tools, and we're excited about that association.
For those who are new to Consul, Consul is a platform that allows you to discover, automate, and secure service networking across any cloud and runtime platform. We do this in 3 ways:
We decouple IP addresses using service name as identity.
We automate service networking based on logical services.
We secure service-to-service traffic with identity-based security policies.
A large number of users have been using Consul, and we see over a million monthly downloads.
Consul serves 3 use cases:
The first one is service discovery and health monitoring
Network infrastructure automation
Multi-platform service mesh
We've typically seen Consul users following an adoption journey that goes from service discovery to network infrastructure automation to service mesh, though there are exceptions for certain organizations that start straight with the service mesh.
Let's look at the use cases deeply.
Service discovery and health monitoring
Service discovery and health monitoring enables registering new services and making them rapidly available to others, reducing the time before a new service can be used from days to seconds.
Network infrastructure automation
Network infrastructure automation replaces manual network configuration tasks with automatic, instant updates, accelerating application delivery.
Multi-platform service mesh
Multi-platform service mesh enables automated and secure service-to-service connections, providing insights into service performance and helping improve developer agility by enabling self-service.
Today, we are going to cover a number of updates on Consul open source, the Consul ecosystem, and Consul Enterprise.
What's new for Consul open source
For open source, we released a number of features in the last few releases. We released Consul 1.8 in May with our own gateway offerings to provide real technical and business value: integrating new and existing apps into the mesh with ingress and terminating gateways, and securely extending service networking across different datacenter environments with the ability to federate over the mesh gateway.
These enhancements provide a way for organizations to connect their service mesh anywhere using Consul.
The Consul ecosystem
Next, let's look at the Consul ecosystem. Consul provides a single control plane to deliver a consistent application networking experience across a broad ecosystem. It simplifies workflows and integrations, with a focus on workflows, not technologies.
Network infrastructure automation within Consul enables organizations to move away from manual, ticket-based approaches in dynamic IP environments to update configurations on load balancers, firewalls, and software-defined networking devices.
It provides a declarative, workflow-driven network automation using Terraform, along with a partner program to build a robust ecosystem.
The tech preview of this was announced at a partner summit at HashiCorp earlier this week.
Customers who are using Consul include Criteo, Mercedes-Benz, AmpleOrganics, Pandora, Capital One, eBay, SAP Ariba, Stripe, and 24-Hour Fitness. You can see that this is a wide variety of customers across many different verticals, as well as geographies.
What is Consul Enterprise? Consul is available as open source for individual practitioners; for larger teams and organizations, we offer a number of enterprise features to increase application resiliency and performance and enhance enterprise compliance and governance.
The platform package enables operational simplicity and platform reliability for a production environment. The global visibility and scale module then adds support for advanced network topologies and enhances performance and resiliency at scale.
Finally, the governance and policy module provides standard policies around service naming, registration, ACLs, intentions, etc.
HashiCorp Consul Service on Azure
The next package you can use for Consul is the HashiCorp Consul Service on Azure, or HCS. We made HCS generally available in July of this year.
With HCS, the user can provision dedicated Consul infrastructure into their Azure subscription directly through the Azure portal. This enables the team to secure application networks across AKS, Azure Compute, and on-premises datacenters while offloading the complicated operational aspects to HashiCorp.
The end result is a faster migration to Azure for critical workloads, with the safety and security guarantees that Consul can provide in an untrusted network environment.
We provide 3 HCS offerings to enable an individual or an organization of any size to easily consume Consul as a managed service.
For organizations requiring complex multicloud architectures, or with strict application resiliency and compliance requirements not met by a managed service, Consul can be self-hosted and maintained using the Consul Enterprise package.
The last package is HCP Consul. We've seen demand for cloud service offerings increase further as Terraform Cloud and Azure have become increasingly popular.
Enterprises who use multiple products, or for whom multi-cloud is becoming a necessity, have expressed interest in platform solutions. Demand for cloud services is also rising across the application infrastructure market in general.
In response to the demand and to meet the needs of HashiCorp customers, we are introducing HCP as a multi-product, multi-cloud platform. HCP will enable customers to auto-provision fully managed product infrastructure on any cloud.
To date, there are over 1,000 signups, and a public beta of HCP Consul is available now.
In summary, we see users adopting Consul to bridge the gap between applications and networks and to provide progressive delivery, consistent security, and connectivity anywhere.
Finally, here are the key new capabilities that we are making available in Consul 1.9, which focuses on providing an out-of-box, Day 1 experience for service mesh and Kubernetes practitioners:
Kubernetes health check support
Service mesh visualization
Custom resource definitions
Layer 7 traffic management
Cody De Arkland, who is a technical product marketing manager for Consul with HashiCorp, will give you more details on Consul 1.9.
Cody De Arkland:
We're going to jump in and do a bit of an overview on some of the more detailed features of Consul 1.9. Then, I'm going to show you a demo around our custom resource definitions for interacting with Kubernetes and Consul.
Let's jump in. We like to pick themes whenever we release a version of Consul. In this case, the themes we picked are control, observe, and enhance. These are larger pillars that we use when we describe what we've done in this release.
Consul 1.9 Demo
From a control perspective, we want to control the way that applications communicate with each other, in a bit more of a fine-grained way than previously possible.
Oftentimes, it's more important to understand the way something is communicating, as opposed to just whether it's allowed or disallowed. We do that through application-aware intentions, or Layer 7 intentions. We'll dive into a bit more detail on that in a bit.
Looking at observe, we want to provide a better visualization around the way you see how services communicate in the environment and make it easier for you to get metrics out. We do that through service mesh visualization.
Then, from an enhanced perspective, we continue to drive a great first-class experience with Kubernetes, and we really want to bring in the ability to manage Consul and Kubernetes via custom resource definitions, or CRDs.
We're also adding out-of-the-box support for OpenShift.
Consul Observability in 1.9
Taking a look at our observability capabilities in 1.9, the screen capture shows what observability will look like in this release. Operators will be able to come in, hit their services, look at a topology view, and understand some of the metrics around how services are communicating with one another.
We can see very easy ways to determine any sorts of problems that are in place and be able to diagnose service-to-service communication problems.
We're also able to capture key metrics and get those out of Consul to be able to feed into other dashboarding platforms. But in this case, we're seeing some key metrics within each service directly inside of the Consul UI.
I touched on application-aware intentions. They're also known as L7 or Layer 7 intentions, and what they are able to do is look at the HTTP traffic of the communicating service. What is that application trying to do? What are the headers on that? What path is it trying to access? We can allow or disallow communication based on that.
For example, you might want to say that Chrome browsers can't hit the /dev path. We can do that via Layer 7 intentions. We're able to do that intention model of allowing or disallowing communication based on actual request information.
These also apply to the different HTTP methods that are in place; whether it's a PUT or a GET, any of those can also be applied to this Layer 7 model.
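As a sketch of what such a Layer 7 intention might look like as a ServiceIntentions custom resource (the service names, path, and header match here are illustrative, and the destination service must be configured with an L7 protocol such as HTTP for these rules to apply):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: api
spec:
  destination:
    name: api          # service receiving the traffic
  sources:
    - name: frontend   # service originating the traffic
      permissions:
        # Deny Chrome user agents from reaching the /dev path.
        - action: deny
          http:
            pathPrefix: /dev
            header:
              - name: User-Agent
                regex: .*Chrome.*
        # Allow all other HTTP traffic from this source.
        - action: allow
          http:
            pathPrefix: /
```

Permissions are evaluated in order, so the more specific deny rule is listed before the catch-all allow.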
It is going to be exciting to see how this furthers the way communication between applications in the service mesh is secured.
Custom Resource Definitions
I'm excited for this one, because we're finally bringing the ability to manage Consul on Kubernetes in a Kubernetes-native way with CRDs, custom resource definitions.
You can see an example on the right of CRDs that apply service default configurations and then set up a splitter to send some of that traffic between environments. You've seen that in a couple of demos I've done historically, where I've used these splitters to move traffic around.
It's exciting to be able to do that in the Kubernetes-native way, in this case.
Also, we're adding first-class support for OpenShift, so people who are in the OpenShift ecosystem will be able to consume Consul Day 1.
We talk about these pillars whenever we highlight Consul, the idea of progressive application deliveries, zero-trust application networking, and service-level observability, as these big-solution focuses of Consul.
Oftentimes, the phrase service mesh gets wrapped up with a lot of buzzwords, but when you start to decompose that service mesh phrase into the capabilities you get out of it—Do you want to be able to gradually roll out a service? Do you want to be able to send 10% of traffic to a destination, as opposed to just cutting over all traffic?—the answers come very quickly.
Of course, those are things we want. Do we want to be able to secure all our traffic with certs on Day 1? Of course we do. Do we want to be able to do zero-trust networking, denying traffic that's not explicitly allowed? Of course; why wouldn't we want that?
Then finally, do we want to be able to get good visualization into the services? Yes, these are all things that make up an actual service mesh, but they get lost in that buzzword sometimes. We want to really focus on these 3 pillars as a solutions focus for how we're driving the story of Consul.
The 1.9 Beta Is Public
I'm really excited to share that Consul 1.9 is going to be in public beta as of today, so you can go out and try it. Take a look on Consul.io and you'll be able to pull it down.
You also heard previously that we were announcing the beta for our network infrastructure automation using Terraform tech preview. This allows you to take Consul and that service catalog that collects information about all of the services inside of an environment and have Terraform drive updates against that infrastructure based on the contents of Consul.
A Demo of CRDs
As I mentioned earlier, we're going to do a demo. In this demo, I'm going to dive into the custom resource definitions and show you how we can use those CRDs to control the flow of traffic between different versions of applications.
Before we get started, let's get the lay of the land. Here we have my Consul environment, and I have a couple of applications that are already deployed into this environment.
I've got an API, a database, and a frontend service. These services are part of the service mesh currently.
Let's switch over to the command line and take a look at our Kubernetes environment. If I do kubectl get pods, you can see a couple of new entries in this environment.
I have a controller and a webhooks cert manager. These are used to translate the CRDs that I'm going to apply, the custom resource definitions, into configuration entries that Consul can understand.
Let's take a look at our Consul helm chart configuration, so we can see how this was set up. If we take a look at this configuration file, you can see the controller stands up, and it's set to enabled.
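A minimal sketch of the relevant Helm values, assuming the official Consul Helm chart at the 1.9 time frame (release name and replica count are illustrative):

```yaml
# values.yaml — enables the CRD controller alongside Connect injection
global:
  name: consul
server:
  replicas: 1
connectInject:
  enabled: true   # sidecar injection for the service mesh
controller:
  enabled: true   # runs the controller and webhook cert manager
                  # that translate CRDs into Consul config entries
```

With controller.enabled set to true, the chart deploys the controller and webhook cert-manager pods seen in the kubectl get pods output.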
We're going to add that custom resource definition into our deployment application, which I've done here. I'm going to add a second one for our API, so we now have our frontend and our API services set up with custom resource definitions.
I'll write this out and apply the configuration change to the environment.
We can see both of those CRDs have been created here, a frontend and an API. No changes were made to our API or frontend services.
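Service-defaults CRDs like the ones described might look like this (service names are illustrative; setting the protocol to HTTP is what enables L7 features such as splitting and L7 intentions for these services):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: frontend
spec:
  protocol: http
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
spec:
  protocol: http
```

Applying these with kubectl creates the two CRDs without modifying the frontend or API deployments themselves.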
Since we're going to use the ingress gateway, we'll go ahead and write our configuration file out for that. We've applied that configuration now to Consul.
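An ingress gateway configuration along these lines can also be expressed as a CRD (the listener port and exposed service are assumptions for illustration):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  listeners:
    - port: 8080
      protocol: http
      services:
        - name: api   # mesh service exposed through the gateway
```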
If we take a look at our services in Kubernetes, we can see our ingress gateway here, and we'll go ahead and issue a while loop to check the connectivity to that service. We can see that we're returning back the Version 2 of our API. This is going to continue to run throughout this demo to make sure that we're connecting successfully.
We'll go ahead and edit the V2 of our application, and we'll add in those same custom resource definitions to enable it to be able to accept traffic.
We're going to set up a new custom resource definition that's going to map out to the versions of our applications.
Our other one had a metadata version of V1; we're going to set this one up to be V2. Inside this CRD, we're going to set a default of V1, but then set up 2 subsets: the V1 subset followed by the V2 subset.
From here, we'll add in a service splitter. The service splitter is going to split the traffic between those 2 subsets. I'll create one named API and set a 50-50 split between the V1 and V2 subsets.
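A sketch of the resolver and splitter pair described above (the subset filters assume the deployments carry a version tag in their service metadata):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: api
spec:
  defaultSubset: v1            # traffic goes to v1 unless told otherwise
  subsets:
    v1:
      filter: Service.Meta.version == v1
    v2:
      filter: Service.Meta.version == v2
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: api
spec:
  splits:
    - weight: 50               # weights must total 100
      serviceSubset: v1
    - weight: 50
      serviceSubset: v2
```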
We can write this file out, and we'll go ahead and apply this configuration change. You'll see that we've created a service resolver and a service splitter CRD that also map back with our previous service default CRDs.
This traffic splitting is in place now, and if we take a look at the curl loop we had running previously, we can see the splitting happening live.
Let's switch to the command line and make the change for our V2 of our frontend service. Here, I've already configured the CRDs, very similar to what I did in the API tier for my frontend.
I've got a service resolver and a service splitter in place. This service resolver, just like the APIs here, has a default to V1, and it has 2 subsets set up for V1 and V2 mapped to some metadata inside of those deployments for, again, V1 and V2.
My service splitter is set up for the frontend service, and it's set up to send 100% of the traffic to V1. This lets us apply the configuration change and still set all our traffic to the original destination, just to validate that it's configured correctly.
I've applied those configurations, and we can see we've also deployed out the V2 of our application here. This lets me deploy both my CRDs as well as my application at the same time.
When this is deployed, the running state is updated automatically. We can see the frontend service is still coming up. I've got 6 nodes up. If I go back to the services view, it's still erroring out. It just takes a few moments to come back to life, and we're good to go.
If I go back into our application and refresh, we should see the old version of our application still. It's great. We're still connecting successfully. We'll switch back in and we're going to go and change that splitting configuration to send traffic to the new service.
We already showed a gradual switchover, so I'm going to go ahead and switch all traffic over to the V2 on the fly. I'll write this out and apply the configuration change again.
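Cutting all traffic over amounts to updating the splitter's weights, along these lines (service name and subsets as assumed earlier):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: frontend
spec:
  splits:
    - weight: 100          # send everything to the new version
      serviceSubset: v2
```

Re-applying this manifest changes only the splitter; the resolver and service-defaults CRDs remain untouched.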
We can see that CRD was configured for the splitter while the other 2 were unchanged. If we switch back to the API and hit refresh, we can see the new version of our application, welcoming everybody to HashiConf Digital.
In this demonstration, we've shown how our new custom resource definitions for Consul allow us to pair our communication configurations alongside our applications, and how we could move the API tier to a Version 2 while keeping traffic intact.
We then showed doing the same thing for our frontend service and brought those configuration files right alongside our application deployment.
When we did our application deployment, we were able to watch live as that traffic changed and as we were able to roll out the new version of the service.
I'm really excited for these custom resource definitions to come into play. They allow Kubernetes administrators to manage their service mesh in a way native to them.
We care a lot about making sure that we can adapt to the workflows that our customers are using inside of these platforms. I really think you're going to enjoy using CRDs to manage Consul.
I'm excited also for you to check out the sessions that are following this one. Blake Covarrubias, a senior product manager for Consul, and Hannah Hearth, a UX designer for Consul, have a really great session around Layer 7 intentions, as well as visualization of the platform.
Hope you enjoyed this demo. Stay tuned for more on HashiConf, and have a great day.