Case Study

Multi-Region Networking With HCP Consul Federation

Watch this session to learn about HCP Consul's new Federation feature, which provides multi-region service networking.

Speakers: Sridhar Krishnamurthy and Riddhi Shah

Transcript

Sridhar Krishnamurthy:

Hello everyone. Thanks for joining us today. In today's presentation, we will talk about multi-region service networking using service mesh in HCP Consul. HCP is the HashiCorp Cloud Platform, in which Consul is offered as a managed service. As part of this presentation, we’d also like to announce an upcoming feature — so please stay tuned. Let me introduce myself. My name is Sridhar. I'm a product manager in the Consul cloud team. Joining me today is Riddhi Shah, an engineer from the Consul Cloud team as well. Let's get started. 

Current State of Application Development

Three trends in application development are relevant to today's multi-region discussion. The first is the acceleration to the cloud to deploy applications. We all know the cloud reduces the friction to deploy compute.

The second deployment pattern is microservices. Microservices-based deployment reduces the dependency between teams such that multiple teams can concurrently develop their services and accelerate their deployment. 

The third is that next-gen applications are re-architected — or created new — for microservices patterns, and even serverless patterns or functions as a service. This means enterprises still end up with a heterogeneous deployment across multiple runtimes, such as VMs, containers, and serverless — and all these services have to talk to each other for a business purpose.

 

Multi-Region Deployment Drivers 

There are three broad drivers. The first is reduced latency: enterprises want to deploy their applications closer to their end customers to cut latency and — importantly — improve user experience.

The second is high availability. Businesses have disaster recovery and business continuity requirements. With an Active/Active deployment across multiple regions, the overall availability of the application and its services goes up.

The third is regulations and cost. There are data locality requirements that require the data to be local to the application in a specific region. In terms of cost — the cost structure differs across a cloud's regions. When an application requires a significant amount of compute, it is not economical for an enterprise to deploy a replica of that application in every region. Rather, enterprises create a shared services model and deploy those services in a low-cost region — and other services connect to them to get their business outcomes.

 

Challenges With Multi-Region Deployments

We talked about the drivers, but what are the challenges? Again, there are broad challenges here that I've outlined. The first is service discovery. That is, how does one service know the location of the other service — and importantly — whether the service is available or not?

The second is traffic management. This is the ability to dynamically control traffic routing between different services. For example, if you take a blue-green or a canary progressive deployment, an enterprise's applications need a dynamic way to route traffic. As part of this distributed application scenario, enterprises also need the ability to observe and troubleshoot connectivity issues between the services.

The third part — probably the most important here — is that as services spread across different regions, security risks open up, and creating zero-trust security — a zero-trust network based on service identity — is critical. Relying on traditional IP-based approaches is not sufficient, because the workloads of today and tomorrow are dynamic, and IPs are ephemeral.

Consul Service Mesh

The answer to these problems — these challenges — is service mesh. Service mesh creates an underlying layer that gives enterprises the ability to connect their services securely. Consul is a service mesh as well as a service discovery platform. It is available as open source as well as an enterprise edition — and it's highly scalable.

The graph on the right indicates Consul's scalability. It shows a scale test we did at HashiCorp, deploying proxies across 10,000 nodes. In about half a millisecond, 172,000 service instances at the peak were updated by the control plane. So, imagine: half a millisecond, 172,000 service instances updated by the Consul control plane.

Last, Consul is designed for multi-cloud connectivity. Consul uses a feature called Federation, which enables multiple Consul clusters across the globe, across regions — across different geos — to discover services from each other and create a mesh across the entire global network.

 

Consul’s Capabilities

First is service discovery and real-time health monitoring. This gives the ability to discover services in real-time across different regions. 

Secondly, Consul is designed to be multi-platform. With Consul’s service mesh capabilities, you can securely connect services and dynamically route traffic between different services in different locations. Importantly, Consul supports multiple runtimes. For example, if you take an EKS in Kubernetes, ECS — which is Amazon Elastic Container Service — or EC2 VMs for that matter — all these multiple runtimes are supported in Consul.

The last part in this slide that I'm talking about is Network Infrastructure Automation. It is all about removing the operational overhead of manually reconfiguring an edge load balancer or a firewall as pool members change. For example, with a load balancer, pool members come and go. Consul can dynamically update the load balancer about each pool member's presence or absence.
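As a rough sketch of what that automation can look like in practice, a Consul-Terraform-Sync task watches a service in Consul and runs a Terraform module whenever its instances change. The task name, module path, and watched service below are illustrative assumptions, not taken from the session:

```hcl
# Consul-Terraform-Sync configuration (illustrative sketch).
# CTS watches the named service in Consul and runs the Terraform
# module whenever its instances change, e.g. to refresh an edge
# load balancer's pool members.

consul {
  address = "localhost:8500"  # assumed local Consul agent
}

task {
  name        = "update-edge-lb"                    # hypothetical task name
  description = "Refresh load balancer pool members"
  module      = "./modules/edge-lb"                 # hypothetical local module
  services    = ["web"]                             # hypothetical watched service
}
```

The module receives the current set of healthy service instances as input, so the load balancer configuration stays in step with the catalog without manual tickets.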

We touched on Consul’s capabilities, but if you take the scenario of a self-hosted Consul, enterprises must still install Consul, manage Consul’s lifecycle — and that's an operational overhead. 

Is there a better solution? The answer is yes. This is where HCP Consul comes in. As I mentioned earlier, HashiCorp Cloud Platform offers Consul as a fully-managed service. 

HCP Consul in AWS 

First, it's available in AWS. We launched it in February 2021. The benefits of HCP Consul are: One, it's a managed service with a cloud SLA, so it eliminates the operational overhead of managing Consul. 

Secondly, it supports heterogeneous runtimes. As I mentioned earlier, EC2, EKS, and ECS are all supported. HCP, as a platform, is designed to be secure by default. In terms of resiliency and high availability, a production Consul server cluster is spread across three availability zones in a region, increasing the availability of the Consul servers.

Last, when it comes to the cloud, flexible consumption is of the essence. HCP Consul is available on-demand — where you pay as you grow — as well as on an annual subscription model. Importantly, as a tiered offering, HCP Consul lets you start with an entry-level SKU with a minimum number of service instances connected to the Consul server — or scale up to 10,000 service instances for your large-scale production deployments.

Coming Soon — HCP Consul Federation

Now comes the interesting part. I was talking to you about the announcement. We are excited to announce that we will release the HCP Consul Federation feature in July 2021 — and Riddhi will do a demo and show you a glimpse of what it will look like. 

Here's what we're going to do. In the figure, I've created a logical representation with two regions. When you deploy a Consul cluster in HCP — as I've indicated here, in the figure — the Consul servers are deployed over three nodes. By default, without the user having to do anything, the Consul servers are interconnected with RPC and LAN gossip.

When the user enables federation at the time of creation, the HCP platform will create connectivity between the different Consul clusters and add security rules. Thereby, the set of clusters are automatically peered and federated behind the scenes. The user only has to specify the intent of federating these clusters.

I've shown here — at a high level — a logical figure of connectivity to your workloads. Consul servers will connect to your VPCs, in which your workloads are hosted. We can either use direct peering or transit gateways. You can use a central transit gateway that you manage, connect the HCP Consul to that gateway, and behind that gateway, you can connect all your VPCs that need to be part of the group. Without further ado, I'd like to hand it off to Riddhi for a quick demo of what the HCP Consul Federation feature looks like. 
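For the transit gateway option, the connectivity can be expressed in Terraform with the `hcp` provider. A minimal sketch, assuming you already have a transit gateway and an AWS RAM resource share in place (the IDs and ARN below are placeholders, not real values):

```hcl
# Sketch: attach an HCP virtual network (HVN) to a customer-managed
# AWS transit gateway. IDs and the RAM share ARN are hypothetical.

resource "hcp_aws_transit_gateway_attachment" "workloads" {
  hvn_id                        = "hvn-us-west"
  transit_gateway_attachment_id = "hvn-to-tgw"
  transit_gateway_id            = "tgw-0123456789abcdef0"
  resource_share_arn            = "arn:aws:ram:us-west-2:123456789012:resource-share/example"
}
```

Behind the transit gateway, you attach the workload VPCs that need to reach the Consul servers, as described above.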

Riddhi Shah:

Hey everyone, this is Riddhi. Today, I'll be doing the demo to show how multi-region service networking can be simplified using HCP Consul and Consul service mesh. 

Consul HCP Federation Demo

The diagram you see here is a visual representation of what we'll be walking through in the demo today. For the demo, I'll be using a sample application called HashiCups. It's a simple app that lets me order a cup of coffee.

My use case is that I wish to deploy microservices of this application in Kubernetes clusters spread across different regions — and at the same time, secure and control traffic flow between these services. The different services of this app include a frontend, a public API, and some backend components like a product API and PostgreSQL database. I'll now walk through how we can solve this use case. Stay tuned until we can order ourselves a cup of good coffee.

For the demo, I already have the Kubernetes clusters created in the different VPCs. Next, we'll create the HashiCorp Cloud Platform managed resources you see in the top half of the diagram. This includes three HCP Consul clusters — all part of a single federation setup in the regions you want your services running in.

The three regions I'm using for the demo are US-west, EU-west, and EU-central. I’ve used Terraform to do the setup. We'll share the relevant repo for the demo at the end of the presentation, but the main thing worth calling out in the Terraform setup is how we create this multi-region federation.

We create the three HashiCorp virtual networks in the relevant regions — and since we use AWS as our cloud provider, these are nothing but AWS VPCs. The peering or connectivity between these VPCs is also created and managed by HCP — this is important for the federation to work.

On the right, you can see how — with just a few lines — we can set up a multi-region HCP Consul federation — and add a cluster to this federation. We just need to specify a link to the primary cluster. The rest of the setup is the same as a standalone cluster setup.
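To make the shape of that config concrete, here is a minimal sketch using the `hcp` Terraform provider. The resource names, regions, CIDRs, and tiers are illustrative assumptions; the key piece is the `primary_link` on the secondary cluster, which is what joins it to the federation:

```hcl
# Minimal sketch of an HCP Consul federation in Terraform.
# Names, regions, CIDRs, and tiers are illustrative.

resource "hcp_hvn" "usw" {
  hvn_id         = "hvn-us-west"
  cloud_provider = "aws"
  region         = "us-west-2"
  cidr_block     = "172.25.16.0/20"
}

resource "hcp_hvn" "euw" {
  hvn_id         = "hvn-eu-west"
  cloud_provider = "aws"
  region         = "eu-west-1"
  cidr_block     = "172.25.32.0/20"
}

# Primary cluster of the federation.
resource "hcp_consul_cluster" "primary" {
  cluster_id = "consul-us-west"
  hvn_id     = hcp_hvn.usw.hvn_id
  tier       = "development"
}

# Secondary cluster: linking it to the primary is all that is
# needed to federate; the rest looks like a standalone cluster.
resource "hcp_consul_cluster" "secondary" {
  cluster_id   = "consul-eu-west"
  hvn_id       = hcp_hvn.euw.hvn_id
  tier         = "development"
  primary_link = hcp_consul_cluster.primary.self_link
}
```

A third cluster would follow the same pattern as the secondary, pointing its `primary_link` at the same primary.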

I applied this Terraform config — so let's look at the HCP UI to see the created resources. There are the three HVNs created in the three regions and the three Consul clusters created within those HVNs. Ignore the limit warning on the top. It's only because I'm using a development account.

Note that once the federation feature is released, you'll also be able to view federation-related information on this UI. We can click into any cluster to view further details about it, like the ID, the tier, the size, the assigned HVN, snapshots — if any — and lastly, all the details required to access this cluster. 

Since I've created a public cluster, I can use a public URL from the UI and access the Consul UI directly. To log into it, I can generate an admin token from the HCP UI as well. Once logged in, I can see the federation was successful by looking at the dropdown list, where all the clusters — or Consul datacenters — are listed, and we can easily navigate between them.

Right now, only the leader node created by HCP is listed here. I've used a development cluster for the demo, but ideally, you'd want a three-node server setup for production. To connect my clients to these managed servers, I can download a client config file from the HCP UI as well — this helps me configure the clients on the EKS clusters to join the managed HCP servers.

I'll install Consul on each EKS cluster using Helm. Since I'm using two-node EKS clusters — you can see, if we go back to the Consul UI — there should now be two additional nodes, or Consul clients, listed as members for each datacenter.

I also configured my clients to have a mesh gateway. This helps with routing encrypted service mesh traffic between the datacenters. With all the infrastructure in place, I can finally deploy my application. I deployed all the microservices in the different Kubernetes clusters — this also automatically registers these services on Consul.

If we go back to the Consul UI, we can see the product-api and the Postgres service in EU-central. We can see the public-api registered in EU-west, and the frontend registered in US-west. Also, notice how each service has a sidecar proxy that's registered along with the service. This helps with all the inbound and outbound communications between the services — and also makes sure the TLS connections between the services are always verified and encrypted. 

Now, let me try to connect to the frontend. To do this, I use another service mesh component called the ingress gateway, which provides an entry point into the service mesh. I can use the URL of the ingress gateway — along with the port that my frontend is listening on — and try to load my application.
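For reference, exposing a service through the ingress gateway is typically done with an `ingress-gateway` config entry. A sketch, assuming the demo's frontend service and a hypothetical listener port:

```hcl
# Ingress gateway config entry (sketch), applied with
# `consul config write`. The port is an assumption; "frontend"
# is the demo's service name.
Kind = "ingress-gateway"
Name = "ingress-gateway"

Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      {
        Name = "frontend"
      }
    ]
  }
]
```

With this in place, traffic hitting the gateway on that port is routed into the mesh to the frontend's sidecar proxy.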

You can see I get an "RBAC: access denied" error. That's because — by default — Consul restricts all traffic between services unless it's explicitly enabled using service intentions. You can see the same warning show up on the Consul UI as well. Under the Intentions tab, you can see we have no defined intentions or permissions.

For my use case, I want the traffic to be forwarded from the ingress gateway onto the frontend, onto the Public-API, onto the Product-API, and finally to the Postgresql database. I configure my service permissions accordingly to control traffic flow. 
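The chain of permissions described above can be expressed as `service-intentions` config entries. Here is a sketch of one hop, using the demo's service names; the exact entry names for the other hops are assumptions:

```hcl
# Allow the frontend to call the public-api (one hop in the chain).
# Similar entries would cover ingress-gateway -> frontend,
# public-api -> product-api, and product-api -> postgres.
Kind = "service-intentions"
Name = "public-api"          # destination service
Sources = [
  {
    Name   = "frontend"      # source service, regardless of region
    Action = "allow"
  }
]
```

Note that the intention refers to services purely by name, which is why the same permission works even when source and destination live in different federated datacenters.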

If you go back to the Consul UI, you can now see these permissions listed. And, note how we have just specified the service names to define the permissions, although these services are spread across different regions. This is because federation enables multi-region service discovery. 

With all these permissions in place, let me try to load my application again. You can see this time it loaded up successfully. We're finally at a point where we can order ourselves a cup of good coffee.

Demo Summary 

We spun up three Consul clusters in the demo — all part of a single federation setup using HCP — connected clients to them, and deployed a multi-region service mesh. Federation enabled cross-region service discovery. The sidecar proxies helped with routing mutual TLS traffic between services.

The ingress gateway gave all the external traffic a secure entry point into the service mesh. And the mesh gateway helped with cross-DC traffic routing. All these service mesh components enabled multi-region networking in a simplified, secure, and controlled manner. That's the end of the demo. Thanks for listening.

Session Summary

Over to Sridhar for the presentation wrap-up.

Sridhar Krishnamurthy:

Thanks, Riddhi, for the demo. To summarize what we talked about in the session, HCP Consul is a managed service mesh to discover and securely connect services across heterogeneous runtimes. 

Our goal with HCP Consul is to simplify the user experience. One example you have seen is the upcoming feature — HCP Consul Federation. If you are interested in the HCP Consul Federation beta, here are the steps you can take. First, please sign up for an HCP Consul account. Secondly, please email us at support@hashicorp.com. We have also added a few useful resources for your reference.

First is the repository that Riddhi used to demonstrate this functionality. The last two are getting started guides for HCP Consul. Also — if you're interested in learning more about HCP Consul features — the last bullet on this slide shows a link where you can get all the information you need. Thanks for taking the time to listen to our presentation. We appreciate it. Thank you.
