Introduction to HashiCorp Consul Connect

HashiCorp Consul Connect is a new feature that enables simplified network topologies and management while also strengthening security and maintaining high performance in a distributed system.

HashiCorp Consul is an open source utility that greatly reduces the complexity of managing and securing decoupled, service-oriented architectures like microservices. With its new Consul Connect features, network management becomes scale-independent, and you won't need to significantly modify your applications to protect their data in transit. Connect allows operations engineers to keep simpler network topologies and maintain network performance in a distributed application environment.

In this video, HashiCorp co-founder and CTO Armon Dadgar gives a whiteboard overview of the following topics:

  • Network segmentation in a monolith
  • The challenges of network segmentation in SOA
  • The features of Consul Connect
    • The service graph: Scalable network controls
    • TLS certificates: Authenticated communication
    • The sidecar proxy approach: Secure intra-application traffic without heavy modification
    • Native integrations: Saving your network performance
  • A less complex network with Consul Connect

You can learn more about why dynamic environments often need service segmentation via a service mesh for greater security in this blog post.

Transcript

Hi. My name is Armon Dadgar, and today I wanted to do a quick introduction to Consul Connect, one of our newest features we (HashiCorp) just announced.

When we talk about traditionally managing a monolithic application, what we're really talking about is a single large application that is made of multiple discrete subcomponents. Even though it's a single, larger application, it has many subsystems: A, B, C, and D. As an example, suppose we have a desktop banking app—A might be managing our login, while B might be showing our balance, C might be wire transfer, and D might be foreign currency.

Even though these are four discrete functions, we're packaging and deploying them as a single larger application. More recently, what we've seen is a shift away from this toward deploying these as independent, discrete services. So, instead of a single large application, we'll deploy A, B, C, and D independently. This might be called a microservice or service-oriented architecture (SOA), but it's about this idea of: "How do we develop and deploy these units independent of one another?" The advantage is that if there's a bug in A, we don't have to coordinate with everyone to package and release it as a single application; we can deploy A independent of these other services.

While this is really great for developer productivity, it comes with its own challenges. Particularly, how do we actually secure this architecture?

Network segmentation in a monolith

Historically, when we looked at the monolith, the way we solved this problem was with relatively coarse-grained controls. What we would do is have three basic zones.

  1. On the left, we'd have our DMZ, which handles untrusted traffic coming in from the internet. We'd have a firewall that only allows traffic to come in from the DMZ to our application tier.

  2. In our application tier, we really only had copies of our monolith. We might have had just one, or, if we had scaling issues, we would deploy multiple copies and put a load balancer in front of them. But effectively, it's multiple copies of the same application where all of the traffic is internal—these different subsystems are communicating within that process boundary.

  3. If this application needed to talk out, it's most likely talking to a backend set of databases. So we'd have a data tier that's sitting behind our application, and that's probably being segmented once again, using a firewall.

In a larger organization, we might have had multiple of these monolithic applications, so we'd start to segment our network. What we would do is take this much larger network and split it into sub-pieces internally.

This would be done with a number of different technologies: virtual LANs, firewalls, software-defined networks, or overlays. What these let us do is split one larger physical network into smaller virtual segments. But nonetheless, this was meant to be coarse-grained, because each of these segments might still have dozens or hundreds of different applications within it that can all talk to each other.

The challenges of network segmentation in SOA

As we move to this microservice architecture, the east/west traffic pattern—the service-to-service access—becomes much more complicated. It's no longer just the app talking to a backend database; it's many applications talking to each other. And each of these services may be using its own data storage layer, which might be a traditional database, a cache, or a NoSQL system, so now we have a much, much more complicated topology than we used to have.

The features of Consul Connect

This really brings up the question: How do we bring the same security controls—the network segmentation we had here (in the monolith)—into this world (a service-oriented architecture)? This is what Consul Connect is focused on—this idea of service segmentation.

Ideally, what we'd like to do is define, at a very fine-grained level, that service A is allowed to talk to service B, and service C is allowed to talk to service D. This is where Consul's service graph capability comes in.

The service graph: Scalable network controls

When we talk about the service graph in Consul, it's a set of high-level rules that say which services are allowed or disallowed to communicate with one another. The important thing is that these rules are at a logical service level, not an IP level, and why that matters is that they're scale-independent.

What I mean by that is, if I said my web server is allowed to talk to my database, but I have 50 web servers and five databases, that's 250 different firewall rules of the form "IP 1 can talk to IP 2," versus the single rule "web can talk to database." It doesn't matter if there are 1, 10, 50, or 1,000 web servers—the rule doesn't change. So in this way, it's scale-independent.
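As a rough sketch of what one of these rules looks like in practice—assuming Consul's Go API client (github.com/hashicorp/consul/api) and the placeholder service names "web" and "db"—a single, scale-independent intention could be created like this (the intention API has evolved across Consul versions, so treat it as illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Talk to the local Consul agent using its default address.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// One logical rule: the "web" service may talk to the "db" service.
	// This single intention covers every instance of web and db,
	// no matter how many copies of each are running.
	ixn := &api.Intention{
		SourceName:      "web",
		DestinationName: "db",
		Action:          api.IntentionActionAllow,
	}

	id, _, err := client.Connect().IntentionCreate(ixn, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created intention:", id)
}
```

Whether there are 1 or 1,000 web servers, this is still the only rule that has to exist.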

TLS certificates: Authenticated communication

Then the challenge we run into is—well, great, we've said A can talk to B, but where do we get a sense of, "Is this A talking to me or is this some other random service?" This gets solved by using a set of certificates. What we (Consul) will do is mint a certificate for A and a certificate for B, using standard TLS.

So we're generating TLS certificates, and Consul is managing the workflow for us. Consul generates these certificates and flows them to its built-in certificate authority to be signed, or, if we have an external certificate authority such as Vault or a hardware device, Consul will flow them out and have them signed there. So it's really that workflow of generating the certificates, signing them, and automatically rotating them that Consul is providing us.
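To make that workflow a little more concrete, here is a sketch—again assuming the Go API client and a placeholder "web" service—of asking the local agent for the current Connect CA roots and a signed leaf certificate. The signing and rotation happen behind these calls, whether the CA is Consul's built-in one or an external provider such as Vault:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the trusted Connect CA roots known to the local agent.
	roots, _, err := client.Agent().ConnectCARoots(nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("trust domain:", roots.TrustDomain)

	// Ask the agent for a leaf certificate identifying the "web" service.
	// Consul generates the certificate, has it signed by the configured CA
	// (built-in, Vault, or another provider), and rotates it over time.
	leaf, _, err := client.Agent().ConnectCALeaf("web", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("leaf certificate valid until:", leaf.ValidBefore)
}
```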

The sidecar proxy approach: Secure intra-application traffic without heavy modification

Then we have a challenge of—okay, great, we have these logical building blocks, but how do we actually implement this segmentation? There are really two ways of doing this with Consul.

The first is using a set of proxies. What we would do is deploy our service, let's say A, alongside a proxy—a sidecar-type approach—and then on the other side we have service B, which is also deployed with a proxy. Now when A wants to talk to B, it's doing so through this set of proxies. So A talks to B, the proxies communicate between themselves, and they're doing a few different things.
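As an illustrative sketch of that sidecar pattern—using the Go API client, with a made-up service name and port—a service registration can ask Consul to manage a sidecar proxy registration right alongside it:

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register service "a" and ask Consul to register a sidecar proxy
	// alongside it. The application keeps listening on plain localhost;
	// the proxy handles the mutual TLS to other services' proxies.
	reg := &api.AgentServiceRegistration{
		Name: "a",
		Port: 8080,
		Connect: &api.AgentServiceConnect{
			SidecarService: &api.AgentServiceRegistration{},
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
	log.Println("registered service a with a sidecar proxy")
}
```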

The first thing the proxies are doing is establishing a mutual TLS connection. They're using the certificates that are provided so that A can provably present its identity to B, and B can provably present its identity to A.

In this way, the proxies ensure we know who we're talking to, unlike in a network where we're just depending on an IP. There, all we know is that we're talking to an IP; we don't know what service is running there, or whether it's malicious—is it an attacker? TLS gives us a stronger sense of identity.

The second advantage is that, because we're using TLS, we're forming an encrypted channel. So the traffic between the proxies—the traffic going over the wire—is encrypted. This is important because we often have a mandate, coming from things like GDPR, saying: data in transit should be encrypted. The challenge is that many of our applications—hundreds or thousands of existing apps—aren't protocol-aware enough to do things like TLS, so they're not doing encryption at the protocol level. The advantage of doing it at the proxy level is that we don't have to modify these applications to protect their data in transit.

The final piece is—just because we know one side is A and one side is B, doesn't mean they should be allowed to communicate. The proxies need to come back and interface with the service graph and look for a rule that authorizes or disallows that traffic. If the traffic's allowed, then great, the proxies allow the traffic to come to B, and now A and B are talking to each other and they're none the wiser that the proxies are intermediating traffic between them.
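The local Consul agent exposes that authorization check, and a proxy's query against the service graph looks roughly like the sketch below (Go API client again; the service names and the SPIFFE-style certificate URI are placeholders that, in a real deployment, come from the presented client certificate):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// This mirrors the check a proxy for service "b" performs after the TLS
	// handshake: is the service identified by the client certificate ("a")
	// allowed to talk to "b" according to the service graph?
	resp, err := client.Agent().ConnectAuthorize(&api.AgentAuthorizeParams{
		Target:        "b",
		ClientCertURI: "spiffe://example.consul/ns/default/dc/dc1/svc/a",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("authorized:", resp.Authorized, "reason:", resp.Reason)
}
```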

Native integrations: Saving your network performance

What if we have a particularly performance-sensitive application—one where latency is really critical or we need the full throughput of the network? What would we do in those situations?

One of the nice advantages of Connect is the ability to do native integrations. When we're doing a native integration, the application itself is Connect-aware. So we have our application, let's say A, and it embeds an SDK that is aware of Consul. On the other side, it might be talking to an application that's also natively aware, or that other side could be using a proxy. When both sides are natively integrated, applications A and B talk directly to each other without the use of a proxy. They're able to bring in an SDK, which is basically standard TLS. We're coming back to using these standard TLS certificates, but it requires a little bit of extra glue to query against the service graph and determine: should this connection be allowed, in addition to doing standard TLS verification?
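HashiCorp publishes a Go package for this Connect-native pattern (github.com/hashicorp/consul/connect). A minimal client-side sketch, with made-up service names "a" and "b", might look like the following: the SDK keeps the service's certificates current in the background, and the HTTP client it returns dials the destination over mutually authenticated TLS and verifies it is really talking to "b" (a serving application would similarly use the service's TLS configuration to accept and authorize incoming connections):

```go
package main

import (
	"io"
	"log"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/connect"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Create a Connect-native identity for service "a". The SDK obtains
	// and rotates the TLS certificates for us in the background.
	svc, err := connect.NewService("a", client)
	if err != nil {
		log.Fatal(err)
	}
	defer svc.Close()

	// Call service "b" over mutual TLS. The ".service.consul" hostname is
	// resolved by the Connect-aware HTTP client, not by ordinary DNS.
	resp, err := svc.HTTPClient().Get("https://b.service.consul/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	log.Printf("response from b: %s", body)
}
```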

A less complex network with Consul Connect

Taking a step back and looking at this architecture—our microservices architecture—what does this buy us? When we use an approach like Connect, instead of having a complex network topology—where we're constraining traffic to flow through firewalls or using very complex overlay and software-defined networks—we can have a relatively simple, flat network topology.

We're not expecting the network to provide any trust or any limitation of access between services. Instead, we're saying all of these nodes might be allowed to communicate with each other at the network level, but once a connection comes in, either a proxy or a Connect-integrated application consults a higher-level understanding of what should be allowed to talk to what and either allows or terminates the connection.

This simplifies our overall networking story and allows us to have much simpler topologies while still maintaining control over which services are allowed to talk to which, and being able to enforce that in a few different ways, depending on what makes sense for our application.

This was a lightweight introduction to Consul Connect; we have a lot more reference material available on the website as well. If you're interested in going into more depth with Consul, we have introductory material there. Thank you so much.
