Incrementally Adopting Consul Service Mesh
Jul 01, 2020
In this demo, you'll see how an organization can transition to a hybrid network model adopting service meshes in some areas while still integrating with non-service mesh applications. Using Consul 1.8, you'll see how to set up ingress gateways, terminating gateways, and mesh gateways.
A service mesh and its benefits are enabled by routing all service traffic through proxies. However, the transition to deploying proxies alongside services will often be made incrementally due to organizational or technical constraints. In this transition, organizations may be operating services inside and outside the service mesh for long periods of time. Learn how to make applications inside and outside the service mesh run following a single workflow for both.
Chris Piraino: Hello, my name is Chris Piraino. I'm a software engineer on the Consul team at HashiCorp, and Freddy and I are going to give a talk about adopting Consul service mesh. Over to Freddy.
Freddy Vallenilla: Thanks, Chris. I'm Freddy. I'm also a software engineer on Consul. We have a lot to cover here so let's dive in. Here we have a snapshot of a large organization—there are services spanning across multiple environments. Some of these environments may have been brought in due to acquisitions; others may have been brought in during a cloud migration.
How Do We Continue to Reduce the Time That It Takes to Deploy New Services?
Focusing on the cloud migration, in particular, one common goal there is to enable teams to iterate and innovate faster. For some of you, this footprint and story may seem familiar. But regardless of scale, if you're on the public cloud, there's a common question that you might ask yourself: How do we continue to reduce the time that it takes to deploy new services?
More specifically—how can we streamline instrumenting services developed in multiple languages that span multiple environments? Our services span multiple networks, and they're developed and deployed on different runtime environments. They're written in multiple programming languages, and they communicate using multiple protocols.
For every new service that gets deployed, there's a lot of complexity that needs to be managed to keep up with best practices. Logic for security, networking, and observability is often embedded into applications. However, this is difficult to keep up with when you need to implement, maintain, and distribute several distinct libraries to do it. Because of these costs, a lot of that logic ends up unevenly distributed across teams and services.
Today, we're going to talk about how Consul service mesh can help you manage that complexity by extracting it from your applications. I'm going to start by reviewing key service concepts. Then, I will address important questions like, how does a service mesh fit into my existing infrastructure? How do I get started transitioning to using a service mesh? The answer for that second question is going to come in the form of a demo, where Chris is going to incrementally upgrade a multi-cloud deployment.
A Refresher on Service Meshes
Here's a simple model showing how two microservices might typically communicate with one another.
There's an API service that will discover and dial the IP address of a backend service. In a service mesh, the key change is that sidecar proxies are deployed alongside every service instance. Services no longer dial each other directly, but rather do so through their proxy over localhost.
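As a sketch of this model, here is how the API service might be registered with a Consul agent on a VM. The service names and ports are illustrative; the sidecar binds the backend upstream to a localhost port that the application dials instead of the backend's real address:

```hcl
# Illustrative service definition: the api service gets a sidecar proxy,
# and the sidecar exposes the backend service on localhost:8081.
service {
  name = "api"
  port = 9090

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "backend"
            local_bind_port  = 8081
          }
        ]
      }
    }
  }
}
```

The application then calls `http://localhost:8081` to reach the backend, and the sidecar itself can be launched with `consul connect envoy -sidecar-for api`.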
This abstracts the network away from the applications themselves, and the proxy can take care of logic such as discovering and load balancing between instances of a backend, encrypting outbound communications, retrying on 500 errors—and so much more. When Consul service mesh was first launched in version 1.2, a key focus was security. By security, I'm referring to securing service-to-service communications with automatic encryption, and identity-based authorizations.
Authorization in Consul is driven by a feature we call intentions. Sidecar proxies at a destination service will authorize or reject inbound connections based on intentions that reference their identity—and the identity of the calling service. An important thing to note here is that intentions do not reference IP addresses directly. They apply to every instance of a service—and they scale independently of the number of service instances.
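As a sketch, an intention allowing one service to call another is a small payload like this (service names are illustrative):

```json
{
  "SourceName": "api",
  "DestinationName": "backend",
  "Action": "allow"
}
```

This would be sent as a `POST` to `/v1/connect/intentions`; the equivalent CLI is `consul intention create api backend`. Because the intention names services rather than addresses, it covers every current and future instance of both.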
As you listen to me speak about deploying sidecar proxies through your network, you might be wondering how they're configured. Don't worry—you don't need to manage config files for every proxy instance. When starting Envoy with the Consul CLI, Consul takes care of generating a bootstrap configuration. Configuration for service discovery, intentions, routing, and more can be distributed at runtime.
You can set defaults for proxies all across your cluster, as well as defaults for proxies for a given logical service. You can then push this configuration by sending requests to any Consul client in your datacenter.
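Those cluster-wide defaults are expressed as a configuration entry—roughly like this sketch:

```hcl
# proxy-defaults applies to every sidecar proxy in the cluster.
Kind = "proxy-defaults"
Name = "global"

Config {
  protocol = "http"
}
```

A `service-defaults` entry plays the same role for a single logical service, and either can be pushed through any client agent with `consul config write <file>`.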
How Would This Fit into My Existing Infrastructure?
This may be the most common question we get about Consul service mesh. I'd like to talk about some of the recent features that we've released to address that question.
This model is awesome when both services can be deployed with sidecar proxies and do not have external dependencies. But what happens when they need to make a callout to a service, and you can't deploy a sidecar?
What if our backend needs to make a call to a managed cloud service or a managed database? Or what if it needs to call services that we manage ourselves but haven't deployed proxies alongside?
There's also the question of how to handle connections that span across multiple networks. If you have services across multiple regions, platforms, or clouds, what happens when you need to cross one of those boundaries? How can we extend the service mesh? Lastly, there's the question of how you allow others to consume workloads in the service mesh.
If a pioneering team deploys services onto a mesh, we still need to ensure that other teams can integrate with them.
These are all great and important questions—so let's take a step back and answer them one by one. The first task boils down to enabling traffic from services with sidecar proxies to services without them.
New in Consul 1.8 are terminating gateways. Terminating gateways enable your greenfield applications to integrate with existing infrastructure. As with a transition to the cloud, a transition to the service mesh will not upgrade all services at the same time. Several constraints will keep some services out of the mesh, but new applications will still depend on them.
By having terminating gateways as a single point of egress from a service mesh, you can easily secure access to external resources from a limited number of gateway nodes. This is particularly valuable when controlling egress from an environment like Kubernetes.
Intentions authorize connections from dynamic service addresses to the gateway. Then, your firewall has a much simpler task of securing access from gateway nodes to the final destination. One key feature about terminating gateways is that a single gateway can route to multiple services outside the mesh. You decide what subset of services you want to expose.
The next task we wanted to address was enabling traffic from the service mesh to services in another network. Released in Consul 1.6, mesh gateways enable this connectivity. They're an alternative to VPNs, which are often a pain to plan and then manage. Mesh gateways carry all forms of traffic, from service-to-service communication to communication between Consul servers. Additionally, gateways do not hold certificates or keys for their destinations. They merely forward traffic based on the request's SNI header. This means that even if a gateway is compromised, the traffic routed through it cannot be decrypted.
In this diagram, we have an API service on GCP dialing to some backend services on Azure. With mesh gateways deployed, this API service does not need direct connectivity to its destinations. This is powerful because it enables connections between networks that have overlapping IP addresses.
Scaling out mesh gateways is also incredibly simple. After booting up additional gateway proxies and registering with Consul, traffic will immediately start flowing through them.
One more thing about mesh gateways—originally, when joining two Consul datacenters, all Consul servers needed direct connectivity to one another. That connectivity is what allowed Consul to resolve remote service discovery queries. It also allowed Consul to detect remote server failures and network partitions.
However, this came at the cost of having to expose every Consul server on the WAN. We heard from a lot of users that this experience could be improved. Now, as of Consul 1.8, all traffic between Consul servers can be funneled through mesh gateways. This includes everything from gossip to remote key-value store operations.
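In rough terms—a sketch, not a complete server configuration—federation through mesh gateways is switched on in the server agent config of the secondary datacenter:

```hcl
# Server agent snippet: route cross-datacenter server traffic through
# mesh gateways instead of exposing servers on the WAN.
# TLS must be enabled between servers for this to work.
primary_datacenter = "dc1"

connect {
  enabled                            = true
  enable_mesh_gateway_wan_federation = true
}
```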
Moving on to the last task we wanted to solve: accepting traffic from services outside the mesh. Also new in 1.8, ingress gateways allow inbound access into your service mesh. They can also take advantage of native service mesh capabilities like HTTP-based routing.
Ingress gateways can route to destination services based on the path and headers of the request. Ingress gateways are the unified way to ingress the mesh regardless of whether you're on Kubernetes or VMs. They also allow us to accept potentially unauthenticated traffic, which we then encrypt and forward to destination services.
Ingress gateways can listen on a single port or multiple ports—and can route to one, or multiple services. Once again, you decide what you want to expose through the gateway.
Now that we've seen the benefits of a service mesh and addressed some of the initial concerns about adoption, how do we get started? During this process, we're going to follow some guiding principles.
Slow and Incremental Adoption
This allows us to limit the blast radius—make a small change to your system, observe what happens, and repeat. Many organizations are already doing this during migrations to the cloud. Netflix followed this approach during its cloud migration by starting with a jobs page on their corporate site—as opposed to their core streaming services and billing services.
I would like to emphasize that we do not expect organizations to atomically drop their old model when transitioning to a service mesh. When driving this kind of change, networking and security teams will need time to adapt to this new way of operating. This is something we've tried to enable with our new gateways.
The Path of Least Resistance
There's—without a doubt—additional overhead when deploying new pieces of infrastructure like sidecar proxies and gateways. Deploying these components to an environment like Kubernetes can make things simpler in several ways. For starters, the Helm chart will guide you through the creation of a Consul cluster.
A lot of the questions that you would need to answer for yourself when deploying onto VMs are already handled in the Helm chart. Additionally, deploying sidecar proxies along services is also made easier—and we'll show this shortly in the demo.
This is a diagram of the deployment that we're going to upgrade today. In the bottom half, you can see that we're running an on-prem datacenter with VMs that already use Consul for service discovery.
In the top half of the slide, we have a Kubernetes cluster running on Google Cloud. And all three services on Kubernetes are running in containers that we manage and can easily deploy sidecars alongside of. Lastly, the API service currently calls down to the database in the on-prem datacenter. So, off we go to the demo: Chris is going to show off how we migrate these services onto a Consul service mesh:
Migrating New Consul Services onto a Consul Service Mesh
Chris Piraino: Thank you, Freddy, and welcome to the demo. We're going to just orient ourselves real quick. On the left side, we have a terminal where we're going to be doing some commands. On the right, we have some browser windows where we can see some things happen. We're going to be incrementally migrating an existing deployment of a working service onto the Consul service mesh.
We see here the Consul UI for our on-premises datacenter. We have a couple of services in there already, being used for service discovery. Here is our fake service—the service we're going to be migrating. It's a classic three-tier architecture: there's a web frontend, an API backend, a cache, and a database. The database is in the on-premises datacenter—perhaps we've secured it with a VPN, or what have you. But this is what we're going to be migrating.
The web, API, and cache we have running on Kubernetes because that enables us to go faster, deploy more easily—all that good stuff. Over here on the left, you can look at our Kubernetes pods. We have a bunch of Prometheus pods for metrics—and then there are the api, cache, and web pods.
Why Do We Want to Migrate?
Well, we want to enable ourselves to move faster when we deploy. We want to do things like canary deployments, and we want to secure all the communication between services. Intentions are a good way of default-denying everything.
Because our Kubernetes cluster doesn't have a Consul datacenter at all, the first thing we need to do is use WAN federation to extend our current deployment—and have a secondary datacenter on the Kubernetes cluster.
Adding a Mesh Gateway
So, we're going to start up a mesh gateway first in our on-premises datacenter. This is going to enable us to do WAN federation over mesh gateways. This is a VM-based infrastructure, so we're using systemctl—you see the gateway immediately pop up in our Consul UI. It's unhealthy for a bit, but it's going to become healthy soon. There we go.
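A systemd unit for a gateway like this might look roughly like the following sketch; the binary path and both addresses are placeholders:

```ini
# /etc/systemd/system/mesh-gateway.service (illustrative)
[Unit]
Description=Consul mesh gateway (Envoy)
After=consul.service

[Service]
# -address is the LAN address; -wan-address is the only address that
# must be reachable from the other datacenter.
ExecStart=/usr/local/bin/consul connect envoy -gateway=mesh -register \
    -address "10.0.0.5:8443" \
    -wan-address "203.0.113.10:8443"
Restart=on-failure

[Install]
WantedBy=multi-user.target
```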
So, let's click through and see some important information for this mesh gateway. This is our WAN address—the only Wide Area Network IP address that we need to expose to enable WAN federation, which is very useful for us. We also have some Consul WAN federation metadata telling Consul that this mesh gateway can be used for WAN federation. And that's all we need to do in our on-premises datacenter.
Installing a Secondary Cluster
We now need to take some information from our on-premises datacenter and use it to install our secondary cluster on the Kubernetes cluster. To do that, we're going to use Consul Helm. We can see our Consul Helm values here. And the Helm chart is a convenient way of automating away a lot of the bootstrapping workflow for Consul.
It does a great job of that, and it's very quick. We can see a couple of important tidbits in these Helm values. We have the datacenter name and the image. We've enabled TLS—that's required for WAN federation over mesh gateways so we can verify servers. We've provided the CA cert and CA key as well. Because this is going to be a secondary datacenter, and our on-premises datacenter is the primary one, we need to provide it with the correct CA cert and CA key.
Federation is enabled here, telling the chart that we want to do that. We're not creating a federation secret—we've already created it. Then, we have some additional information for the server, telling it how to contact the existing mesh gateways. And of course, we've enabled Connect—and the mesh gateways themselves, to install those.
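Pulling those settings together, the Helm values being described might look roughly like this sketch (secret and datacenter names are illustrative):

```yaml
global:
  name: consul
  datacenter: dc2
  tls:
    enabled: true            # required for WAN federation over mesh gateways
    caCert:
      secretName: consul-federation
      secretKey: caCert
    caKey:
      secretName: consul-federation
      secretKey: caKey
  federation:
    enabled: true
    createFederationSecret: false   # the secret was created by hand
server:
  extraVolumes:
    # Mount the server config JSON that points at the primary gateways.
    - type: secret
      name: consul-federation
      items:
        - key: serverConfigJSON
          path: config.json
      load: true
connectInject:
  enabled: true
meshGateway:
  enabled: true
```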
Now, we're going to take a look at the pieces of information we needed from the on-premises datacenter to connect everything. We have this in a Kubernetes secret. There are three pieces of information here: the public CA certificate—that big block—the private CA key, and then some server config JSON that we need to add. Right now, it's a block of Base64-encoded data, but we can decode it real fast and see what it is.
We're saying this is our primary datacenter—the on-premises datacenter. We've specified primary gateways that correspond to the WAN IP address and port of our mesh gateway in the on-premises datacenter. So, that's how we bootstrap this whole process. That's all we need—it's as simple as that.
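Decoded, that server config JSON is a small snippet along these lines (the gateway address is a placeholder):

```json
{
  "primary_datacenter": "dc1",
  "primary_gateways": ["203.0.113.10:8443"]
}
```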
Installing a Consul Helm Chart
Now we will install the Consul Helm chart using helm install. First, we're going to apply the secret, so we have the information available on Kubernetes. That's always important. Then we're going to helm install Consul, and we'll see it spin up.
This is going to take a little bit of time. But if you think about what's happening—it's amazing that we're spinning up an entire Consul cluster, with Raft and gossip and all that, in a couple of minutes. We see all the pods coming up here—and let's remember that the communication over the WAN is only going through the two mesh gateways.
The servers don't need to be on the WAN network. There doesn't need to be a complicated VPN setup. All we need is the mesh gateways, and we can lock those down well.
You'll see here the mesh gateways have errored out. That's only because the mesh gateways depend on their local client agents being up and running to register themselves. Once the agents come up, they'll be fine.
And so—you see here—it looks like we're almost ready; there's one last one, there we go. Now the Consul servers are all finding each other, and everything looks good. We can verify this with the CLI by looking at our Consul members.
First, we're going to take a look at the on-premises datacenter. We have two VMs here—we would probably have more in a real scenario, but for the demo, it's fine. Then, on the WAN, we can see the GKE cluster's servers alongside our on-premises datacenter, with the private IP addresses of those servers. We go to our Consul UI, and we see a whole brand-new datacenter. We've connected ourselves, and we have Consul installed.
And just that quickly, we can see we also have our public WAN IP there, with the metadata installed as well. This is the value that the Consul Helm chart automation gives us: we can install it that quickly.
So, we have it installed, but we haven't migrated anything over yet. If we think about what we just did: we started with a working service, decided we needed a Consul datacenter on our Kubernetes cluster, and wanted to federate it with our current on-premises datacenter—and that's essentially what we did. We have installed Consul on Kubernetes and connected the two over mesh gateways. The traffic for our services is still going to the same places, but that's what we're going to think about next.
Migrating Services to the Service Mesh
Now that we have Consul installed, we can start to work on migrating things over to the service mesh. There are a couple of things we want to do before moving any services. We're going to install a proxy defaults configuration entry. This is a centralized config for all the service mesh sidecars.
We can look here at proxy-defaults—our global configuration. There are two pieces of information. Importantly, we've declared mesh gateway mode local. That tells us any traffic on the service mesh that wants to go to the other datacenter should go through those mesh gateways.
So, all cross-datacenter traffic is going to go through these mesh gateways. Then, we've configured a StatsD URL so that the Envoy sidecars can send metrics—and we can get the observability benefits of a service mesh. We're going to write that, and it's going to apply to everything. We don't have anything running on the mesh yet, but it will be applied as we start migrating.
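The proxy-defaults entry being written here would look something like this sketch (the StatsD address is illustrative):

```hcl
Kind = "proxy-defaults"
Name = "global"

# Route any cross-datacenter mesh traffic through the local mesh gateway.
MeshGateway {
  Mode = "local"
}

# Have every Envoy sidecar ship its metrics to a local StatsD sink.
Config {
  envoy_statsd_url = "udp://127.0.0.1:9125"
}
```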
If we think back to the guiding principles we talked about, we want to consider how to limit the blast radius of migrating our fake service over to the service mesh. If we look here, perhaps an obvious candidate is the cache service—it's a leaf node in the call graph. Only the API talks to it, so that's probably a good one to migrate first.
We're going to take the cache service and migrate it over to the service mesh. And then, of course, you can't have a service mesh with only one service on it. So, we're going to put the API on the service mesh as well and tell it to talk to the cache only over the service mesh.
Here's a diagram of what we're going to do. We're going to add sidecars on the API and cache—there's a secured connection between the two, but nothing else. That's the first incremental step we're making. This is probably a good way of testing whether we know what we're doing and whether there are any hiccups along the way.
So, now for the Kubernetes deployments—we're going to patch them, and we can take a look at the patch file here. This is just for the cache deployment. We're setting a couple of annotations: the connect-inject annotation to inject a sidecar, and one declaring the service protocol as HTTP.
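The patch amounts to two pod annotations—roughly like this sketch for the cache deployment:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Ask the Consul injector to add an Envoy sidecar to new pods.
        "consul.hashicorp.com/connect-inject": "true"
        # Register the service with the http protocol for L7 features.
        "consul.hashicorp.com/connect-service-protocol": "http"
```

Applied with something like `kubectl patch deployment cache --patch "$(cat cache-patch.yaml)"`.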
We can patch that very fast and watch as Kubernetes rolls the deployment. We can see here that—originally—we had two containers running in our cache pods: the cache service and a Prometheus sidecar. Now we have four—the two we added are an Envoy sidecar and a Connect lifecycle-management sidecar.
So, we're running over here—everything seems to be working on the cache side. But of course, we've only updated the cache. There's not really a service mesh running at the moment. Our next step is to update the API as well.
Updating the API
For the API, this is going to be very similar. We can see it working, but nothing has changed. We can see the DNS addresses that we're calling here—the API service is still using Kube DNS to contact the cache service. Now we're going to update the API with a similar workflow, just patching the Kubernetes deployment. The first two annotations are the same, but additionally we've added a service upstream for the API, declaring that we want to talk to the cache service on port 9093.
We've also updated the upstream URIs in the environment variables to go over localhost. This ensures that we're going through the Envoy sidecar and getting the benefits of the service mesh.
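The API patch adds the upstream annotation on top of the same two and repoints the app at localhost. This is a sketch; the `UPSTREAM_URIS` environment variable is the convention of the fake-service demo app:

```yaml
spec:
  template:
    metadata:
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-protocol": "http"
        # Bind the cache service to localhost:9093 in the sidecar.
        "consul.hashicorp.com/connect-service-upstreams": "cache:9093"
    spec:
      containers:
        - name: api
          env:
            # Call the cache through the Envoy sidecar, not Kube DNS.
            - name: UPSTREAM_URIS
              value: "http://localhost:9093"
```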
We're going to patch this API service in Kubernetes, and let it roll out. This should update our API to go through the Envoy sidecar, to the cache's sidecar, to the cache service itself. This also has four containers running in its new pods—very similar setup to the cache service. And everything is working. So, we can test it out.
Oh, everything's red. That seems bad. What did we forget? We are going over localhost 9093. Luckily, one of the benefits of the service mesh is that you get observability metrics. We have a Prometheus dashboard here that shows us HTTP response codes and, importantly, intention denials and authorizations.
We can see that we're denying connections right now. As we said previously, one of the benefits is that you can default-deny all traffic—and only allow the sources you explicitly want talking to a service.
We need to create an intention between the API and the cache—then everything should be fine. We can manually test it ourselves in our fake service browser. Everything looks good—we are going through localhost 9093, with the cache down at the bottom there. If we go back to our metrics, we'll see that the denials have dropped off; we're no longer denying traffic. In fact, we'll start to see the 200 response codes go up, right at the end.
This is a very important step in the migration journey. For each step, you have to know how to declare it healthy—make sure you're looking at metrics and logs as you're moving things, because things can go wrong. In our Consul UI, we now see the API and cache registered with their proxies—so we know what is connected with a proxy.
Moving the Web Deployment to the Service Mesh
We're going to go one step up. This is our current picture. We have the API and cache communicating over the service mesh. Now, we want to move the web deployment over to the service mesh and get its traffic going through it, too. It's going to be a very similar process to what we did with the API. You can see the iteration that we're going through here—you can go step-by-step with every deployment.
This is our patch file for the web service. We're going to declare an upstream for the API on port 9092 and update the upstream URI so we go through the sidecar proxy. Then we just patch it and watch it roll out.
We see it immediately pop up in the Consul UI—one instance at first; then the old one is gone, and we've got two there. The Consul UI was actually faster than the kubectl commands. That looks good. You can see it running on the web UI, and then we test it out.
And we forgot the intentions again, even though we just talked about it. I think this is a good illustration of why you should always look at logs and have alerts—everyone is fallible; even in a demo, you get things wrong.
So we quickly fix the intentions. We can look at the metrics and make sure everything is working again. We can see another spike in denials and a drop in traffic, but that should go up momentarily. There we go. It always pays to have green be the good lines, and not the bad ones, so you can tell the state at a glance.
We've now put our API, cache, and web services onto the service mesh in our Kubernetes cluster, but we still need to lock it down security-wise. Currently, all the services are listening on the pod IP, so it's possible to bypass the sidecar proxy and talk to a service without going through the service mesh. As we can see here, we're listening on 0.0.0.0—and this permissiveness is valuable while you're migrating over to the service mesh incrementally.
Adding the Ingress Gateway
Now that we have everything running in the service mesh, we want to tighten the security straps and make each service listen only on localhost—so that to send traffic to any of these services, you have to go through the Envoy sidecar. That presents a problem: how do your users talk to these services? Eventually, that traffic needs to make it there too.
This is where we want to add the ingress gateway. The ingress gateway is the single point in a service mesh where we ingress traffic into it. Here, we're only ingressing some traffic to the web service. But you can imagine, as you have more and more services that need to be exposed, having a nice centralized place to manage that is very valuable.
Consul intentions are also applied at the ingress gateway. First, we're going to go a little backwards to illustrate the point: we're going to move our Kubernetes services to listen only on the localhost address first—as evidenced there. Then, we're going to flip on the ingress gateways and expose traffic.
You'll usually want to do it the other way around, but this illustrates the point better. We're going to do all three of our services at once. I promise you the patch files all look very similar—there's no need to look at them again. We patch cache, API, and now web—and we can make sure they have rolled by looking at the Kubernetes CLI; we have one running, one installed, one initializing right now. That looks good.
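For the demo's fake-service, each of those patches boils down to changing the listen address—roughly like this sketch, where the `LISTEN_ADDR` environment variable is the fake-service convention:

```yaml
spec:
  template:
    spec:
      containers:
        - name: web
          env:
            # Bind to loopback: only the local Envoy sidecar can reach
            # the service now.
            - name: LISTEN_ADDR
              value: "127.0.0.1:9090"
```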
We've moved everything to listen on localhost. We can verify that by looking at the listen address again. But we've now stopped all traffic from our web.fakeservice.org URL because there's nothing listening there—there's nothing to forward traffic to; we refuse to connect.
Now we need to install ingress gateways—and to do this handily, we're going to use the Consul Helm chart again. It lets us do this fast and automates a lot for us. Here, we have the same Consul Helm values, but we've added this ingress gateway configuration. We're enabling it and declaring the port that we want to expose. We only have HTTP traffic right now, so we're using port 80.
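The added Helm values look roughly like this sketch:

```yaml
ingressGateways:
  enabled: true
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
        ports:
          # Expose plain HTTP on port 80 for now.
          - port: 80
```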
We can simply helm upgrade our cluster and watch the ingress gateway install. Ingress gateways have a couple of very nice features: they provide a consistent networking model across the Consul service mesh, so you can use the same L7 configuration entries that already exist in Consul today. You can do path-based routing through the ingress gateway. You can do service splitting. You can do failover—anything you want will work through the ingress gateway. It slots right into that L7 configuration concept.
So, we see those two pods running and installing here, and we see them pop up in our Consul UI. They're not quite healthy yet, but they will be soon. Ingress gateways allow you to expose multiple services through a single gateway—a one-to-many type of deal. That's very valuable for pushing out configuration fast. As they spin up, they're going to be looking for their config—and there they are; the Consul UI declares them ready as well.
Configuring Ingress Gateways
First, we can see our external IP. One thing we need to change: we have a DNS address pointing to another load balancer, and we need to update that. We're going to do that in Cloud DNS on Google. I would usually use Terraform for this, but for the sake of the demo, we're doing it in the UI to make it nice and easy. That's going to take a bit, so we can configure the ingress gateway while the DNS change rolls out.
Ingress gateways use configuration entries to configure themselves. This is a nice centralized API for the Consul service mesh. We can see here we declare the kind—ingress-gateway—and the name—ingress-gateway. This name must match the service name of the ingress gateway itself, which we can see on the right-hand side.
We declare a set of listeners. Here, we only have one, on port 80. We declare the protocol and a set of services. Here we have just one—only the web service that we want to expose—and we allow traffic from web.fakeservice.org to ingress into the service mesh.
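The configuration entry being described looks roughly like this:

```hcl
Kind = "ingress-gateway"
Name = "ingress-gateway"

Listeners = [
  {
    Port     = 80
    Protocol = "http"
    Services = [
      {
        # Only requests for this host are routed to the web service.
        Name  = "web"
        Hosts = ["web.fakeservice.org"]
      }
    ]
  }
]
```

It's written with `consul config write <file>` and can be read back with `consul config read -kind ingress-gateway -name ingress-gateway`.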
We can easily use the Consul CLI to write that. There are APIs as well—whatever way you want to manage this. Once we write it, we can read it back to see what it looks like; we specify the name and the kind here. We can see it has the same configuration.
You can also enable TLS—it's off by default. If you enable it, the ingress gateway will present a Consul Connect certificate, which gives you TLS at the ingress gateway level. That config looks great—we can check over in the UI what it looks like. We have an upstreams tab here, and it has our web service; we can click it and go to the web instances—all that jazz.
Now, we can actually see whether we're able to ingress traffic from web.fakeservice.org. But first, as always, we have to apply intentions. The ingress gateway is a very important place to apply intentions—because it is the point of entry for traffic into your service mesh; it's the front door. Now, we can see that it works.
We're able to ingress traffic onto a fully secured service mesh, and everything works. We can check our metrics. We see a long pause of no HTTP responses where we accidentally-on-purpose broke our service, but that's okay. We can see the green line shooting up and to the right—always a good sign. We have things working. It's secure. We have observability, so we can see what's wrong without having to instrument every service that we own. There's a lot of value there.
So we have this picture. But there's a pesky dotted line from the API to the database in the on-premises datacenter. We should do something about that, because it's the last piece of traffic outside the mesh. This is where the new terminating gateway feature comes in.
Adding a Terminating Gateway
With terminating gateways, we can take this database—say it's a managed database running in your on-premises datacenter, or what have you. For whatever reason, you can't really put a sidecar there; it's just prohibitive. The terminating gateway allows us to add the database to the service mesh without vastly reorganizing your infrastructure. That's very valuable when you want to migrate incrementally and don't want to take all the time to figure out how to deploy that Envoy sidecar—even if you can.
We have to go back to our on-premises datacenter. First, we need to install a terminating gateway. Like we did with the mesh gateway, we're going to systemctl start a terminating gateway here. We can see it pop up on the right. It'll become healthy in a bit. And then—like ingress gateways—we use configuration entries to declare how this works. You can see the kind is terminating-gateway, the name is terminating-gateway, and it exposes the DB service.
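The terminating gateway config entry just described could be sketched like this, assuming the demo's service is registered in Consul under the name "db":

```hcl
Kind = "terminating-gateway"
Name = "terminating-gateway"

# Services behind this gateway that should be reachable from the mesh.
Services = [
  {
    Name = "db"
  }
]
```

Writing this single entry is what links the external database to the mesh; no change is needed on the database itself.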
We write that, and in the Consul UI we can click through and see the linked services. Our database is now a linked service—it's now on the service mesh. That might have seemed anticlimactic, but it's great! The database can now be used on the service mesh like any other service—even though it is a managed database. We're going to create a Consul intention here allowing the API to talk to the DB.
You can imagine that securing who can talk to this managed database is even more important than usual. And we're going to update the API deployment with this DB on port 9094—in the on-prem datacenter.
We also need to update our upstream URIs to go through the Envoy sidecar. Like before, this is the same pattern over and over again. We're going to patch the API—you can see it roll out. I can look at the pods in the Kubernetes cluster and make sure everything is running—and everything looks good there. We can test it out in our actual browser. Everything looks good—and we can see the database is being contacted through localhost on port 9094, which means it's going over the mesh gateways.
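For the Kubernetes side, the upstream wiring described above might look like the following Deployment snippet, assuming the consul-k8s connect injector is in use. The annotation names are the real connect-inject annotations; the datacenter name `dc-onprem` is a hypothetical label for the on-premises datacenter:

```yaml
# Pod template metadata on the API Deployment (sketch).
metadata:
  annotations:
    "consul.hashicorp.com/connect-inject": "true"
    # Format: <service>:<local port>[:<datacenter>]
    # The sidecar listens on localhost:9094 and routes to "db" on-prem.
    "consul.hashicorp.com/connect-service-upstreams": "db:9094:dc-onprem"
```

With this in place, the application only needs its upstream URI pointed at `http://localhost:9094`; the sidecar and mesh gateways handle the rest.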
So, now, we've secured that traffic—it's no longer going unprotected over a WAN. We can look at our metrics and see that everything looks good. We have fully migrated our entire application over to the service mesh. We have security, we have observability so we can see when things go wrong, and we can start to apply some advanced networking concepts. You can do canary deployments if you want.
Where Do We Go from Here?
Well, it depends on what your problems are. You may want to expose more services onto the service mesh from your on-premises datacenter—your billing and payment services here. But those may also be pesky to change. You don't want to invest the time to put a sidecar proxy on them; you'd rather do something very easy.
This is where the terminating gateway starts to shine—we can put those billing and payment services onto the service mesh, like we did with the database, through the same terminating gateway. To do that, we're going to use a wildcard specifier in our config entry. You can see we use *, which says: expose all the services this gateway has access to.
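The wildcard variant of the terminating gateway config entry could look like this sketch, building on the earlier entry:

```hcl
Kind = "terminating-gateway"
Name = "terminating-gateway"

Services = [
  {
    # "*" exposes every service this gateway is allowed to access,
    # instead of listing db, billing, payments, etc. individually.
    Name = "*"
  }
]
```

One configuration push like this is what links all of the remaining on-premises services at once.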
You can imagine that a powerful combination here might be Consul Enterprise namespaces along with a terminating gateway—where all the services you want to expose are in a single namespace. So, we can look at our linked services, and with that one configuration push we have exposed billing and payments onto our service mesh.
Now, we have—again—this consistent networking model that we can apply to our new services and our old services that maybe take too much effort to change. That's a powerful concept that the terminating gateway allows.
Maybe that's not quite the problem we want to solve. Maybe we have another team within our company, and they saw our fake service application, and they were really impressed, and they want to use the API of that fake service to do something on their side that they find value in. But they're not on the service mesh, they don't have Consul—and it would be a lot of work for them to get onto the service mesh.
We can expose the API through the same ingress gateway that we have set up already. All we did was add a new API service on the hostname api.fakeservice.org. We can just push that out to the ingress gateway, and it will update it dynamically—we don't have to restart it. It'll find the config itself.
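Adding the API to the existing gateway is a matter of extending the same ingress config entry. A sketch, reusing the assumed port from the earlier entry:

```hcl
Kind = "ingress-gateway"
Name = "ingress-gateway"

Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      {
        Name  = "web"
        Hosts = ["web.fakeservice.org"]
      },
      {
        # Newly exposed service for the other team; routed by Host header.
        Name  = "api"
        Hosts = ["api.fakeservice.org"]
      }
    ]
  }
]
```

Pushing this updated entry reconfigures the running gateway dynamically; no restart is required.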
Just like that, we've exposed the API through it. Of course, we have to apply intentions—it's especially important to apply intentions here because with L7 routing configuration, it's not just the services explicitly defined in the listener that can be reached; you are potentially exposing other services as well. If we type in api.fakeservice.org, we now go straight to the API—we've skipped the web frontend—and we can use this API to do something interesting and cool.
And that's the demo; we've successfully migrated. I'll hand it back to Freddy for the last bit of information.
Freddy Vallenilla: Thanks, Chris. Now that we've completed the upgrade, let's revisit how we did relative to our original goals.
Reduced App Complexity
We wanted to streamline the deployment of new applications in the cloud. We did that by offloading responsibility for networking, security, and observability to our service mesh. We no longer need to embed thousands of lines of networking code into our applications. Developers no longer need to reinvent the wheel every time, or skip using wheels altogether.
By placing proxies at every network hop, we now have consistent metrics, logs, and tracing for debugging networking issues.
With a service mesh, we also have a starting point to layer on more complex routing logic—like to progressively deploy new versions of an application. Most importantly, we mitigated the risk and limited the blast radius by making small changes and observing the results.
We've been working hard on implementing these features and hope you're excited to try them out. We have step-by-step guides available for everything we demoed today on the HashiCorp Learn site. Check them out and try them for yourselves. We look forward to hearing your feedback.
Thank you so much for joining us today. We hope you enjoy the rest of HashiConf Digital.