We are pleased to announce that HashiCorp Consul Service (HCS) on Azure is now generally available. HCS on Azure enables a team to provision HashiCorp-managed Consul clusters directly through the Microsoft Azure portal. HCS on Azure clusters are preconfigured for production workloads, enabling a team to easily leverage Consul to secure the application networks within their Azure Kubernetes Service (AKS) or VM-based environments while offloading the operations to HashiCorp.
HCS enables easy access to a range of Consul use cases including service discovery, automated network configuration, and secure service-to-service communication with service mesh. Consul can be used as a platform to support modern application networking, progressive application delivery, zero-trust security, and service level observability.
Read on to learn more about how HCS on Azure can help get critical applications running in Azure quickly and securely while enabling your team to focus on building cloud native applications instead of managing clusters. Please also see the HCS GA blog post from Microsoft’s Brendan Burns on the Azure website.
HCS on Azure now supports both development and production clusters and offers a new Azure CLI integration to streamline operations.
A user can now get started with HCS on Azure by provisioning an on-demand development cluster. Development clusters are a low-cost way to evaluate key aspects of the service or execute a proof of concept.
Using a development cluster, an operator or a practitioner can consume all of the typical functionality found within a production Consul environment. For example, a user can set up a cluster as a central control plane for service discovery or to shape traffic across multiple Kubernetes clusters or VM environments using Consul’s Layer 7 traffic management capabilities. Consul Enterprise features like namespaces, audit logging, and single sign-on are also available for development and testing.
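As an illustration of the traffic-shaping use case, Consul's Layer 7 traffic management is driven by configuration entries. A minimal sketch of a traffic split follows; the service name `web` and its subsets are placeholders, a companion `service-resolver` entry is assumed to define the subsets, and a reachable Consul agent is required:

```shell
# Define a service splitter that sends 90% of traffic to the v1 subset
# and 10% to v2 (service name and subsets are placeholders; the service
# must use the http protocol, typically set via a service-defaults entry)
cat > web-splitter.hcl <<'EOF'
Kind = "service-splitter"
Name = "web"
Splits = [
  {
    Weight        = 90
    ServiceSubset = "v1"
  },
  {
    Weight        = 10
    ServiceSubset = "v2"
  },
]
EOF

# Apply the configuration entry against the cluster
consul config write web-splitter.hcl
```

Shifting the weights over time gives a simple progressive-delivery workflow, moving traffic gradually from `v1` to `v2`.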
HCS now supports highly available production clusters that are backed by a 99.9% uptime SLA. A user can contact the HashiCorp sales team to provision a new production cluster or to upgrade an existing development cluster to a production cluster when an application is ready to be moved into production. On-demand production clusters with hourly billing will be available later this year.
The HCS GA release includes an Azure CLI extension to streamline HCS usage and integration. Microsoft provides comprehensive instructions for installing the Azure CLI in the Azure documentation. Once the Azure CLI is installed, you can install the HCS extension with the following command:
```shell
az extension add \
  --source https://releases.hashicorp.com/hcs/0.1.0/hcs-0.1.0-py2.py3-none-any.whl
```
The HCS CLI enables an operator to perform many of the key tasks needed to consume HCS directly from the command line. Adding this functionality into the Azure CLI provides a user experience consistent with the way operators typically configure and consume resources in Microsoft Azure, and it makes it easier to start bringing Azure resources into the HCS control plane. A recorded demo showing how to use the CLI extension to manage an HCS environment is available below.
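As a sketch of what this looks like in practice, provisioning and inspecting a cluster from the CLI might resemble the following. The `az hcs` subcommand names, flags, and resource names here are illustrative assumptions and may differ from the released extension:

```shell
# Log in and choose the subscription that will own the managed app
az login
az account set --subscription "my-subscription"

# Provision an HCS cluster into an existing resource group
# (cluster name, resource group, and region are placeholders;
# subcommand names are assumptions, not confirmed extension syntax)
az hcs create \
  --name my-hcs-cluster \
  --resource-group hcs-demo-rg \
  --location eastus

# Inspect the cluster once provisioning completes
az hcs show --name my-hcs-cluster --resource-group hcs-demo-rg
```

See the extension's own help output (`az hcs --help`) for the authoritative list of commands.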
HCS on Azure supports HashiCorp Consul 1.8 clusters by default. New Consul 1.8 features like ingress gateways, terminating gateways, and single sign-on significantly raise the bar when deploying an enterprise service mesh by enabling a team to extend and integrate Consul with existing environments, including those that are backed by HCS.
Ingress gateways provide operators a method of accessing resources that live within a Consul environment. You can run ingress gateways on Kubernetes natively via Helm chart or deploy them on a virtual machine. Once added to an environment, they provide a logical point for applying policies and security controls for traffic coming into the service mesh. You can read more about ingress gateways in our ingress gateways deep-dive blog post.
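As a sketch, an ingress gateway is configured with an `ingress-gateway` configuration entry. The listener port and the service name `web` below are placeholders, and a running gateway plus a reachable Consul agent are assumed:

```shell
# Expose the "web" service to external traffic through the gateway
# (port, protocol, and service name are placeholders)
cat > ingress.hcl <<'EOF'
Kind = "ingress-gateway"
Name = "ingress-gateway"
Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      { Name = "web" }
    ]
  }
]
EOF

consul config write ingress.hcl
```

On Kubernetes, the gateway itself can be deployed by enabling the ingress gateway option in the official Helm chart.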
Operators using HCS to run Consul in Azure will likely have existing resources that cannot run a sidecar proxy and therefore cannot join the service mesh directly. Terminating gateways provide a means for services inside the mesh to communicate with these systems that live outside it.
A great example of this is Azure’s managed PostgreSQL service. Azure offers the ability to deploy a fully configured Postgres service which is managed by Microsoft. You can deploy a terminating gateway in the same resource group, provided connectivity exists to the database. Consul can leverage a terminating gateway to allow mesh communication to the service. The terminating gateway terminates mTLS near the endpoint, allowing a team to leverage Consul's service mesh capabilities against this resource.
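A hedged sketch of this pattern: register the managed database as an external service in Consul's catalog, then attach it to a terminating gateway via a configuration entry. The hostname, node name, and gateway name are placeholders, and a reachable Consul agent with appropriate ACL permissions is assumed:

```shell
# Register the managed PostgreSQL endpoint as an external service
# (node name and hostname are placeholders)
cat > postgres.json <<'EOF'
{
  "Node": "azure-postgres",
  "Address": "mydb.postgres.database.azure.com",
  "NodeMeta": { "external-node": "true" },
  "Service": {
    "ID": "postgres",
    "Service": "postgres",
    "Port": 5432
  }
}
EOF
curl -X PUT --data @postgres.json http://127.0.0.1:8500/v1/catalog/register

# Attach the external service to a terminating gateway
cat > terminating.hcl <<'EOF'
Kind = "terminating-gateway"
Name = "postgres-gateway"
Services = [
  { Name = "postgres" }
]
EOF
consul config write terminating.hcl
```

Mesh services can then reach the database through the gateway, with intentions governing which services are allowed to connect.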
To learn more about terminating gateways in Consul 1.8, you can review our deep-dive blog post on the topic.
Consul 1.8 adds the ability to configure Consul’s authentication methods against an OpenID Connect (OIDC) provider. This allows an operator to tie cluster access to well-known identity providers like Azure Active Directory and to issue ACL tokens to users automatically. This functionality gives teams a smoother, role-based access control workflow as an alternative to distributing ACL tokens for each task.
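As a sketch of the setup, an OIDC auth method pointed at Azure Active Directory can be created through Consul's ACL HTTP API. The tenant ID, client ID, and client secret below are placeholders, and an ACL token with `acl:write` permissions is assumed:

```shell
# Create an OIDC auth method backed by Azure Active Directory
# (tenant ID, client ID, and secret are placeholders; the redirect URI
# shown is the default callback used by the Consul CLI login flow)
curl -X PUT http://127.0.0.1:8500/v1/acl/auth-method \
  -H "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
  --data '{
    "Name": "azure-ad",
    "Type": "oidc",
    "MaxTokenTTL": "1h",
    "Config": {
      "OIDCDiscoveryURL": "https://login.microsoftonline.com/<tenant-id>/v2.0",
      "OIDCClientID": "<client-id>",
      "OIDCClientSecret": "<client-secret>",
      "AllowedRedirectURIs": ["http://localhost:8550/oidc/callback"]
    }
  }'
```

An ACL binding rule then maps claims from the identity provider onto Consul roles, so users receive appropriately scoped tokens on login.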
One of Consul’s advantages is that it enables a consistent networking control plane across multiple environments and runtimes. In an upcoming release of HCS on Azure, we will enable users to leverage Consul’s WAN federation capabilities to connect environments over mesh gateways. This capability allows Consul to send all of its traffic (including gossip) through the mesh gateways, instead of requiring explicit communication between all nodes. This pattern simplifies the process of connecting multiple environments together across network topologies that might be complex. This can include topologies that use network address translation (most commonly found within Kubernetes clusters), cloud to on-premises networking, or overlapping IP address spaces.
With WAN federation, operators will be able to connect Consul clusters running in private datacenters to Consul clusters running in Azure, allowing HCS to operate as a central control plane across these environments. WAN federation requires only a single port to be exposed, greatly simplifying the network configurations required to connect the environments together. This functionality will ultimately enable easier service migration and failover response scenarios between multiple Azure and private data center Consul environments.
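For context, in self-managed Consul 1.8 this pattern is enabled through server configuration like the following HCL fragment. The datacenter names and gateway address are placeholders, TLS and gateway deployment are assumed to be in place, and with HCS the managed side's server configuration is handled for you:

```shell
# Server config for a secondary datacenter that federates with the
# primary through its mesh gateway instead of direct server-to-server WAN
cat > server-federation.hcl <<'EOF'
datacenter         = "dc2"
primary_datacenter = "dc1"

# Route all cross-datacenter traffic, including gossip, via mesh gateways
connect {
  enabled                            = true
  enable_mesh_gateway_wan_federation = true
}

# Public address of the primary datacenter's mesh gateway (placeholder)
primary_gateways = ["20.40.60.80:443"]
EOF
```

Because only the mesh gateway port is exposed, the secondary datacenter never needs direct network reachability to every server in the primary.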
We are pleased to announce that several of our key partners including Datadog, NS1, and SignalFx (a Splunk company) have tested and support their existing Consul integrations with the GA version of HCS on Azure. We thank them for working closely with us to validate the integrations and plan to continue working with the broader ecosystem to validate additional technology partner integrations with HCS.
The recommended way to get started with HCS on Azure is through the step-by-step guides in our HashiCorp Learn track.
Another excellent resource is Cody De Arkland’s three-part HCS on Azure blog series.