HashiCorp Consul 1.14 introduces the Consul dataplane, service mesh traffic management across cluster peers, and service failover enhancements.
We’re pleased to announce general availability of HashiCorp Consul 1.14. Consul is a service networking solution that helps users discover and securely connect any application.
Consul 1.14 focuses on helping organizations simplify Consul deployment, improve resiliency, and enhance operational efficiency. Key features and improvements include:

- Consul dataplane: a simplified deployment architecture for Consul on Kubernetes that removes node-level client agents
- Service mesh traffic management across cluster peers
- Service failover enhancements

Let’s take a closer look at each of these enhancements.
Consul 1.14 introduces a simplified deployment architecture that eliminates the need to deploy node-level Consul clients on Kubernetes. The new architecture injects a new Consul dataplane component as a sidecar into each Kubernetes workload pod. This container image packages both the Envoy proxy and the Consul dataplane binary.
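As a minimal sketch, a Helm values file for installing Consul 1.14 on Kubernetes with the new architecture might look like the following. The datacenter name and replica count are illustrative placeholders:

```yaml
# values.yaml: a minimal sketch for Consul 1.14 on Kubernetes.
global:
  name: consul
  datacenter: dc1      # illustrative placeholder
server:
  replicas: 3          # illustrative placeholder
connectInject:
  # Enables sidecar injection. With Consul 1.14, the injected sidecar
  # runs the Consul dataplane alongside Envoy instead of relying on
  # node-level client agents.
  enabled: true
```

Installing with `helm install consul hashicorp/consul --values values.yaml` should then produce workload pods whose only injected Consul components are the dataplane sidecar and Envoy.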
Consul's original architecture relies on deploying many Consul clients alongside the Consul servers. In a traditional VM-centric environment, a client agent is deployed to each VM, and all the services running on that VM are registered with the client agent. The client agent acts as the source of truth for the services running on its node and keeps the server's catalog in sync with its local registry.
The client agents and servers all join a single gossip pool, which allows quick detection of node failures. This lets the Consul control plane respond to service discovery requests with only the remaining healthy services, or dynamically alter service mesh routing rules to steer traffic away from services on an unhealthy node.
This architecture worked well in the past, but as new deployment patterns for Consul have emerged, it has become a source of friction for deploying Consul successfully.
A high-level architectural diagram of Consul on Kubernetes prior to Consul 1.14.
On Kubernetes, a Consul client is deployed on each node and is responsible for registering each service's pods with Consul. During deployment, Envoy sidecars are injected into pods to enable mTLS and traffic management capabilities for the mesh. Although running a client per node works well for most use cases, users would occasionally encounter friction due to restrictions on Kubernetes clusters that made it difficult to reliably run clients on all nodes, or due to the gossip protocol's requirement for low latency between members of the pool.
Starting with Consul 1.14, Consul on Kubernetes deploys Consul dataplane as a sidecar container within your workload pods, removing the need to run Consul clients as a daemonset. The Consul dataplane component is primarily responsible for discovering and watching the Consul servers available to the pod, and for managing Envoy's initial bootstrap configuration and the lifecycle of the Envoy process.
Consul dataplane's design removes the need to run Consul client agents, which brings multiple benefits:

- Simpler deployments: there is no longer a Consul client daemonset, and the hostPort requirement on Kubernetes, previously needed for pods to reliably reach a local client, is removed
- Fewer networking requirements: workload pods no longer join the gossip pool, eliminating its low-latency network requirements
- Broader runtime support: without node-level agents, Consul can run on container runtimes that do not support daemonsets, such as AWS Fargate and GKE Autopilot

This new architecture simplifies the overall deployment and removes the Consul client agent from the default deployment of Consul on Kubernetes. For deployments outside of Kubernetes, Consul clients will continue to be supported for the foreseeable future for both service discovery and service mesh use cases.
A high-level architectural diagram of Consul on Kubernetes in Consul 1.14.
Consul service mesh allows operators to configure advanced traffic management capabilities such as canary or blue/green deployments, A/B testing, and service failover for high availability. These capabilities are configured using service splitters, routers, and resolvers. Consul supports these traffic management capabilities across WAN-federated datacenters, but many customers wanted a more flexible way to connect datacenters.
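As an illustration, a service splitter can implement a canary rollout. The sketch below uses the Consul on Kubernetes CRDs to send 90% of traffic for a hypothetical `web` service to a `v1` subset and 10% to `v2`; the subsets themselves would be defined in a companion ServiceResolver:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: web               # applies to the hypothetical "web" service
spec:
  splits:
    - weight: 90
      serviceSubset: v1   # stable version, defined in a ServiceResolver
    - weight: 10
      serviceSubset: v2   # canary version
```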
In Consul 1.13, we introduced cluster peering in beta as an alternative to WAN federation for multi-datacenter deployments. Cluster peering is much more flexible than WAN federation, but it did not support cross-datacenter traffic management in Consul 1.13. As of Consul 1.14, cluster peering now supports both cross-datacenter and cross-partition traffic management.
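On Kubernetes, a peering connection is established declaratively. A minimal sketch of the accepting side, using the PeeringAcceptor CRD (the peer name and secret name here are hypothetical, and the dialing cluster would use a corresponding PeeringDialer):

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: cluster-02        # hypothetical name for the peer being accepted
spec:
  peer:
    secret:
      # The generated peering token is written to this Kubernetes secret,
      # which is then shared with the dialing cluster.
      name: peering-token
      key: data
      backend: kubernetes
```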
Service failover to a cluster-peered partition in another datacenter.
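A service resolver expressing this kind of failover might look like the following sketch. The service and peer names are hypothetical, and the peering connection and service export are assumed to already be in place:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: api               # hypothetical service name
spec:
  failover:
    '*':                  # applies to all subsets of "api"
      targets:
        # Fail over to the "api" service exported by the cluster peer
        # named "dc2-default" (a hypothetical peering name).
        - peer: dc2-default
```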
The advantages of cluster peering compared to WAN federation include:

- No primary datacenter: peered clusters are fully autonomous and do not depend on a primary datacenter
- Granular control: only services that are explicitly exported are visible to a peer, and peering can be established between individual admin partitions
- Reduced blast radius: each cluster remains an independent failure domain, so an outage in one peer does not cascade to the others
For more information on cluster peering, please refer to the Consul 1.13 release blog or cluster peering documentation. Visit the cluster peering tutorial to learn how to federate Consul datacenters and connect services across peered service meshes. For more information on configuring cross-datacenter traffic management, check out the service resolver documentation’s failover and redirect capabilities.
Critical services should always be available, even when infrastructure components fail. Achieving high availability involves:

- Deploying redundant instances of the service
- Monitoring the health of each instance
- Failing over to healthy instances when failures occur
Consul supports configuring service failover, enabling you to create high availability deployments for both service mesh and service discovery use cases. Consul 1.14 includes several failover enhancements.
Consul 1.14 further enhances the flexibility of service failover in all Consul deployments, enabling operators to address more complex failover scenarios in which service failover targets may:

- Be a different service or service subset than the service originally requested
- Reside in a local admin partition, a WAN-federated datacenter, or a cluster peer

Both cases can be expressed in a service resolver, as shown in the sketch below.
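The following sketch shows a ServiceResolver that tries several such targets in order: first a different service in the local datacenter, then the same service in a cluster peer. All names are hypothetical:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: payments          # hypothetical service name
spec:
  failover:
    '*':
      # Targets are tried in order until healthy instances are found.
      targets:
        - service: payments-backup   # a different service in the local datacenter
        - peer: dr-cluster           # the "payments" service in a cluster peer
```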
In Consul 1.14, service mesh failover to cluster peers offers improved resiliency compared to failover to WAN-federated datacenters.
Failover involves sending traffic to a different pool of service instances. With WAN-federation failover, the control plane triggers service failover by reconfiguring Envoy proxies in the mesh to use the next failover pool, meaning failover depends on control plane availability. In Consul 1.14, cluster peer failover is triggered by Envoy proxies themselves, providing increased failover resiliency.
We are excited for users to try these new Consul updates and further expand their service mesh implementations. The Consul 1.14 release includes enhancements for all types of Consul users leveraging the product for service discovery and service mesh across multiple environments, including serverless functions. Our goal with Consul is to enable a consistent enterprise-ready control plane to discover and securely connect any application.