Lauren Carey, HashiCorp Developer Relations: Can you tell us about yourself, your background, and your experience with HashiCorp?
Mario Rodríguez Hernández: I am from Spain, from the Canary Islands, and I've worked in the IT field for more than 20 years. I started as a help desk agent, then I moved into system administration, coding, and different Enterprise Resource Planning (ERP) tools like Microsoft Dynamics BC and Finance & Operations, and other kinds of tools. Eventually, I moved up into leadership, working as a CIO at different companies in the Canary Islands and managing 10 people.
Eventually, though, I wanted to return to the more technical side of things. I was frustrated with the politics and the bureaucratic type of work I had to do. I wanted to feel the satisfaction of doing things again, of making things happen. So, that’s when I decided to pivot to a more technical role again.
During the pandemic, I started studying cloud and DevOps, and I started to prepare for and take different certifications. I now have maybe 60+ certifications. I started with the Azure certification and moved to AWS, and I am now fully certified on Google Cloud.
I also felt that Terraform and the infrastructure as code framework was an important piece of the DevOps and cloud-native fields, so I decided to go for the Terraform Associate certification as a part of my transition as well.
Lauren: Tell us about your current role.
Mario: Today is my first day at Minsait as a senior specialist in the fields of architecture and DevOps.
Mainly, I work for utilities companies in Spain and out of Spain. Minsait is part of Indra Group, which is a big company in Spain. I work for customers in Argentina, Spain, Africa, and other places all over the world, mainly focusing on utility companies and electric companies.
We have different products for utility and electrical companies, and we develop for the specific needs of the customer too. I mainly have the role of Cloud Architect, but I also participate in DevOps, pipelines, and creating infrastructure with Terraform and multiple clouds, like AWS and Azure.
Today I am working on a project with Terraform to create infrastructure in AWS. Terraform is a well-known tool in the IaC field, and having a certification demonstrates to Minsait that I know how to use this tool.
Lauren: What motivated you to earn the Terraform certification?
Mario: I chose to study Terraform, specifically, because it is the number one product in the infrastructure as code (IaC) space around the world, so I knew it would open doors to help me make the career changes I described earlier.
It implements the strategy of IaC in a tool. It helps you create infrastructure and allows you to read the infrastructure that other people have created. It gives you the opportunity to make versions of infrastructure, to experiment, to fail, and to roll back to other versions. It is a great tool to develop in, but also to work in with other people and to collaborate in, which is very important. Instead of creating infrastructure manually every time through a point-and-click cloud-vendor portal, you can write reusable code that automates this process quickly and easily.
Lauren: What role did the certification play in your career move to a more technical role? Do you think that having the certification helped you get a job at a company?
Mario: Of course. As I said before, I'm from the Canary Islands and compared to the Spanish market, it is a very, very small place. If you want to shine in the world market, you have to stand out from others, and the way that I found to stand out was with the certifications.
As I said before: I did have knowledge of other tools and systems, but I had no certifications. I know what I am doing, but others can’t be sure. A certification is a badge proving that you know the technology.
It opened the door for me to find a job. All of the companies that gave me an interview talked to me about my certification. All of them. I'm pretty sure that if I didn’t have this certification, maybe half or more of these interviews would not have happened.
Lauren: What was your experience of preparing for and taking the Terraform Associate exam? Did you use any of our materials? Did you use outside materials?
Mario: I used both. I used HashiCorp’s materials, which I think are very good; the website, documentation, and sample questions are all very good. But I combined that with books related to the certification.
I practiced, of course. I had my little practice exercises. But, mainly I think that it is a very clear tool, so it is easy to learn. It's very, very logical. If you have experience with another programming language, then it's very easy to transition to. If you know a little English, all of the terms are easy to learn. It's a perfect tool.
Lauren: Do you feel more confident starting new projects because you have the certification?
Mario: Having my Terraform certification makes my employer say, “Oh you have a Terraform certification, you are a Terraform expert, I have a couple of projects for you.” If I didn’t have this certification, it would be difficult to transition onto a project like that. So yes, certification is very important for the type of projects I want to work on.
Also, my interviewer said, they didn’t have any Terraform specialists on the team, so it made me stand out that I had the certification and could be a Terraform expert for them.
Lauren: Do you have any career plans that you can envision certifications fitting into?
Mario: Now I am studying more Kubernetes and things like that. I recently passed the CKAD (Kubernetes Certified Application Developer) and am now already preparing for the CKA (Certified Kubernetes Administrator).
When you get deeper into the cloud and start to study things like Kubernetes, you feel the need to dive more into microservices, service mesh, and how to manage secrets in applications. HashiCorp Vault and Consul are perfect for that. Those certifications are on my roadmap. I don't know if it will happen this year, because I set my goals at the beginning of the year, but I surely will add another certification from HashiCorp.
Lauren: Did you know that we have a Terraform professional certification coming out? Our Professional certifications are live scenarios, so you're actually working in the application during the exam. So that'll be a good one to sort of, you know, take it to the next level.
Mario: It’s perfect, because now I have expertise with Terraform. I'm doing real projects, so that’s perfect for me.
Lauren: If someone asked your opinion on HashiCorp certifications, what would you say?
Mario: I think it’s a great certification. And taking into account the prices of other certifications, it’s inexpensive, especially considering the value you get from it. There are other certifications on the market that are quite expensive, but really I think that the Terraform Associate exam is under-priced. It is a great offer for us in the community.
I recommend studying Terraform, practicing with Terraform, and then trying to earn the certification.
Lauren: What other thoughts or stories have we not covered yet that you think might be helpful for others to know? What's something that you want to see in this interview that I didn't cover?
Mario: Oh, let me see, it's important to me that I'm a single person from a little island off of a little country, and you are talking to me from such a big company in the IT world all because I studied and I passed a certification. It means a lot to me. It serves as motivation for anyone that, if you want something, anything is possible.
Lauren: Finally, how did you celebrate when you got your certification? Did you share it?
Mario: I always share new certifications with my network on LinkedIn. I have my teammates and followers, and we share our certifications together.
In previous versions of the Terraform Cloud Operator v2, the only way to start a run was by patching the restartedAt timestamp in the Module resource. But this approach was not intuitive, did not work for all types of workspaces and workflows, and did not allow users to control the type of run to perform. This challenge hindered migration efforts to the newest version of the Terraform Cloud Operator. Now, with version 2.3, users can declaratively start plan, apply, and refresh runs on workspaces. This enhances self-service by allowing developers to initiate runs on any workspace managed by the Operator, including VCS-driven workspaces.
The Workspace custom resource in version 2.3 of the operator supports three new annotations to initiate workspace runs:

- workspace.app.terraform.io/run-new: Set this annotation to "true" to trigger a new run.
- workspace.app.terraform.io/run-type: Set to plan (default), apply, or refresh to control the type of run.
- workspace.app.terraform.io/run-terraform-version: Specifies the version of Terraform to use for a speculative plan run. For other run types, the workspace version is used.

As an example, a basic Workspace resource looks like this:
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  name: this
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: kubernetes-operator
Using kubectl as shown here, annotate the above resource to immediately start a new apply run:
kubectl annotate workspace this \
  workspace.app.terraform.io/run-new="true" \
  workspace.app.terraform.io/run-type=apply --overwrite
The annotation is reflected in the Workspace resource for observability:
apiVersion: app.terraform.io/v1alpha2
kind: Workspace
metadata:
  annotations:
    workspace.app.terraform.io/run-new: "true"
    workspace.app.terraform.io/run-type: apply
  name: this
spec:
  organization: kubernetes-operator
  token:
    secretKeyRef:
      name: tfc-operator
      key: token
  name: kubernetes-operator
After the run is successfully triggered, the operator will set the run-new value back to "false".
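To observe that reset from the command line, you can read the annotation back with kubectl (this assumes kubectl access to the cluster where the operator is running):

```
kubectl get workspace this \
  -o jsonpath='{.metadata.annotations.workspace\.app\.terraform\.io/run-new}'
```

The dots in the annotation key are escaped with backslashes so that JSONPath does not treat them as field separators.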
HashiCorp works to continuously improve the Kubernetes ecosystem by enabling platform teams at scale. Learn more about the Terraform Cloud Operator by reading the documentation and the Deploy infrastructure with the Terraform Cloud Kubernetes Operator v2 tutorial. If you are completely new to Terraform, sign up for Terraform Cloud and get started using the Free offering today.
Recent efforts by the HashiCorp Terraform team have focused on refining the process of associating run tasks within Terraform organizations, significantly reducing day-to-day overhead. Plus, the introduction of a new post-apply stage broadens the potential use cases for run tasks, offering even more value to users.
Initially, run tasks were tailored to meet the needs of teams provisioning infrastructure with Terraform Cloud. Recognizing the diversity of tools used in Terraform workflows, we integrated them seamlessly into Terraform Cloud as first-class run task integrations. This gave teams additional flexibility in selecting and managing run tasks for their workspaces.
As run task adoption grows within organizations, platform operations teams face challenges in ensuring consistency across the organization. Managing individual run task assignments can become cumbersome, with platform teams striving for standardization across workspaces. To address this, we've introduced scopes to organizational run tasks in Terraform Cloud. This feature allows platform teams to define the scope of organizational run tasks, targeting them globally and specifying evaluation stages for enforcement. Organization-wide enforcement eliminates configuration burden and reduces the risk of compliance gaps as new workspaces are created.
Multi-stage support further enhances the run task workflow, streamlining configuration and reducing redundant code when using the Terraform Cloud/Enterprise (tfe) provider for run task provisioning and management.
Post-provisioning tasks are crucial for managing and optimizing infrastructure on Day 2 and beyond. These tasks include configuration management, monitoring, performance optimization, security management, cost optimization, and scaling to help ensure efficient, secure, and cost-effective operations.
Recent discussions with customers underscored the need to securely integrate third-party tools and services into Terraform workflows after infrastructure is provisioned with Terraform Cloud. Post-provisioning processes often require manual intervention before systems or services are production-ready. While API-driven workflows can expedite post-provisioning, the lack of a common workflow poses implementation challenges.
In response to these concerns, we've introduced a new post-apply stage to the run task workflow. This stage lets users seamlessly incorporate post-provisioning tasks that automate configuration management, compliance checks, and other post-deployment activities. The feature simplifies the integration of Terraform workflows with users' toolchains, prioritizing security and control.
As part of the implementation of run task scopes, we've extended support for multi-stage functionality to workspace run tasks. We also introduced two new views that offer users the flexibility to see the run tasks associated with their workspace. Now workspace administrators can choose to view their run task associations as a list or grouped by assigned stages.
The advancements in Terraform Cloud's run task workflow empower users to streamline infrastructure provisioning and management. You can elevate your workflow with scopes for organizational run tasks and harness the potential of the post-apply stage.
To learn more, explore HashiCorp’s comprehensive run tasks documentation. Additionally, we provide a Terraform run task scaffolding project written in Go to help you write your own custom run task integration.
If you're new to Terraform, sign up for Terraform Cloud today and start for free.
This month, AWS AppFabric added support for Terraform Cloud, expanding an already long list of ways that Terraform can connect, secure and provision infrastructure with AWS. This post will explore the new AppFabric support and highlight two other key existing integrations: Dynamic provider credentials and AWS Service Catalog support for Terraform Cloud.
AWS AppFabric now supports Terraform Cloud. IT administrators and security analysts can use AppFabric to quickly integrate with Terraform Cloud, aggregate enriched and normalized SaaS audit logs, and audit end-user access across their SaaS apps. This launch expands AWS AppFabric supported applications used across an organization.
AWS AppFabric quickly connects SaaS applications to one another, or to data lakes like Amazon Security Lake. For Terraform Cloud users, this integration can accelerate time-to-market and help developers release new features to production faster with streamlined infrastructure provisioning and application delivery workflows.
To learn more, visit the AWS AppFabric page and then check out how to connect AppFabric to your Terraform Cloud account.
Introduced early last year, Terraform Cloud's dynamic provider credentials let you establish a trust relationship between Terraform Cloud and AWS. They limit the blast radius of compromised credentials by using unique, single-use credentials for each Terraform run. Dynamic credentials also give you fine-grained control over the resources that each of your Terraform Cloud projects and workspaces can manage. Terraform Cloud supports dynamic credentials for AWS and Vault.
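In practice, enabling dynamic credentials for AWS comes down to setting two environment variables on the Terraform Cloud workspace (the variable names follow the dynamic credentials documentation; the role ARN below is a placeholder for your own IAM role configured to trust Terraform Cloud's OIDC identity provider):

```
TFC_AWS_PROVIDER_AUTH = true
TFC_AWS_RUN_ROLE_ARN  = arn:aws:iam::123456789012:role/terraform-cloud-oidc
```

On each run, Terraform Cloud exchanges its workload identity token for temporary AWS credentials scoped to that role, so no long-lived access keys are stored in the workspace.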
To learn more, AWS and HashiCorp have since written a joint blog post on how to Simplify and Secure Terraform Workflows on AWS with Dynamic Provider Credentials and you can learn how to configure Dynamic Credentials with the AWS Provider at HashiCorp Developer.
In August 2023, AWS added AWS Service Catalog support for Terraform Cloud. This includes integrated access to key AWS Service Catalog features, including cataloging of standardized and pre-approved Terraform configurations, infrastructure as code templates, access control, resource provisioning with least-privilege access, versioning, sharing to thousands of AWS accounts, and tagging. By combining Terraform Cloud with AWS Service Catalog, we’re connecting the AWS Service Catalog interface that many customers already know, with the existing workflows and policy guardrails of Terraform Cloud.
HashiCorp and AWS have since co-presented at HashiConf (Terraform Cloud self-service provisioning with AWS Service Catalog) and partnered on AWS’s blog post on How to Use AWS Service Catalog with HashiCorp Terraform Cloud, demonstrating the workflow for provisioning a new product and offering access to getting-started guides.
Platform teams can use Terraform Cloud, HCP Waypoint, and the AWS Service Catalog to create simplified Terraform-based workflows for developers.
Terraform modules can incorporate unit testing, built-in security, policy enforcement, and reliable version updates. Using these tools, platform teams can establish standardized workflows to deploy applications and deliver a smooth and seamless developer experience. Learn more by viewing AWS and HashiCorp’s recent Self-service infrastructure is no longer a dream talk from AWS re:Invent:
Run tasks allow platform teams to easily extend the Terraform Cloud run lifecycle with additional capabilities offered by services from partners.
Wiz, makers of agentless cloud security and compliance for AWS, Azure, Google Cloud, and Kubernetes, launched a new integration with Terraform run tasks that ensures only secure infrastructure is deployed. Acting as a guardrail, it prevents insecure deployments by scanning using predefined security policies, helping to reduce the organization's overall risk exposure.
We’ve also approved 17 new verified Terraform providers from 13 different partners:
AccuKnox, maker of a zero trust CNAPP (Cloud Native Application Protection Platform), has released the AccuKnox provider for Terraform, which allows for managing KubeArmor resources on Kubernetes clusters or host environments.
Chainguard, which offers Chainguard Images, a collection of secure minimal container images, released two Terraform providers: the Chainguard Terraform provider to manage Chainguard resources (IAM groups, identities, image repos, etc.) via Terraform, and the imagetest provider for authoring and executing tests using Terraform primitives, designed to work in conjunction with the Chainguard Images project.
Cisco delivers software-defined networking, cloud, and security solutions to help transform your business. Cisco DevNet has released two new providers for the Cisco Multicloud Defense and Cisco Secure Workload products: The Multicloud Defense provider is used to create and manage Multicloud Defense resources such as service VPCs/VNets, gateways, policy rulesets, address objects, service objects, and others. The Cisco Secure Workload provider can be used to manage the secure workload configuration when setting up workload protection policies for various environments.
Citrix, maker of secure, unified digital workspace technology, developed a custom Terraform provider for automating Citrix product deployments and configurations. Using the Terraform with Citrix provider, users can manage Citrix products via infrastructure as code, giving greater efficiency and consistency on infrastructure management, as well as better reusability on infrastructure configuration.
Couchbase, which manages a distributed NoSQL cloud database, has released the Terraform Couchbase Capella provider to deploy, update, and manage Couchbase Capella infrastructure as code.
Genesis Cloud offers accelerated cloud GPU computing for machine learning, visual effects rendering, big data analytics, and cognitive computing. The Genesis Cloud Terraform provider is used to interact with resources supported by Genesis Cloud via public API.
Hund offers automated monitoring to provide companies with simplified product transparency, from routine maintenance to critical system failures. The company recently published a new Terraform provider that offers resources/data sources to allow practitioners to manage objects on Hund’s hosted status page platform. Managed objects can include components, groups, issues, templates, and more.
Mondoo creates an index of all cloud, Kubernetes, and on-premises resources to help identify misconfigurations, ensure security, and support auditing and compliance. The company has released a new Mondoo Terraform provider to allow Terraform to manage Mondoo resources.
Palo Alto Networks is a multi-cloud security company. It has released a new Terraform provider for Strata Cloud Manager (SCM) that focuses on configuring the unified networking security aspect of SCM.
Ping Identity delivers identity solutions that enable companies to balance security and personalized, streamlined user experiences. Ping has released two Terraform providers: The PingDirectory Terraform provider is a plugin for Terraform that supports the management of PingDirectory configuration, while the PingFederate Terraform provider is a plugin for Terraform that supports the management of PingFederate configuration.
SquaredUp manages a visualization platform to help enterprises build, run, and optimize complex digital services by surfacing data faster. The company has released a new SquaredUp Terraform provider to help bring a unified visibility across teams and tools for greater insights and observability in your platform.
Traceable is an API security platform that identifies and tests APIs, evaluates API risk posture, stops API attacks, and provides deep analytics for threat hunting and forensic research. The company recently released two integrations: a custom Terraform provider for AWS API Gateways and a Terraform Lambda-based resource provider. These providers allow the deployment of API security tooling to reduce the risk of API security events.
VMware offers a breadth of digital solutions that power apps, services, and experiences for their customers. The NSX-T VPC Terraform provider gives NSX VPC administrators a way to automate NSX's virtual private cloud to provide virtualized networking and security services.
All integrations are available for review in the HashiCorp Terraform Registry. To verify an existing integration, please refer to our Terraform Cloud Integration Program.
If you haven’t already, try the free tier of Terraform Cloud to help simplify your Terraform workflows and management.
Back in October 2023 at HashiConf, we released the beta version of test-integrated module publishing for Terraform Cloud, along with the Terraform test framework, to streamline module testing and publishing workflows. Now we are excited to announce general availability of test-integrated module publishing. This new feature helps module authors and platform teams produce high-quality modules quickly and securely with more control over when and how modules are published.
Since the beta launch, we have made several improvements.
First, branch-based publishing and test integration are now compatible with all supported VCS providers in Terraform Cloud: GitHub, GitLab, Bitbucket, and Azure DevOps. Also, test results are now reported back to the connected repository as a VCS status check when tests are initiated by a pull request or merge. This gives module developers immediate in-context feedback without leaving the VCS interface.
Finally, to support customers publishing modules at scale, both the Terraform Cloud API and the provider for Terraform Cloud and Enterprise now support branch-based publishing and enablement for test-integrated modules in addition to the UI-based publishing method.
Along with being generally available in Terraform Cloud, test-integrated module publishing is also included in the January 2024 (v202401-1) release of Terraform Enterprise.
Since announcing the beta version of the explorer for workspace visibility at HashiDays in May 2023, we have received lots of feedback and made improvements. We are now excited to announce general availability of the explorer for workspace visibility to help users ensure that their environments are secure, reliable, and compliant.
Since the beta launch, we’ve made enhancements to allow users to find, view, and use their important operational data from Terraform Cloud more effectively as they monitor workspace efficiency, health, and compliance. For example, we improved the query speed, added more workspace data, introduced CSV exports, and provided options for filtering and conditions. Popular uses of explorer include tracking Terraform module and provider usage in workspaces, finding workspaces without a connected VCS repo, and identifying health issues like drifted workspaces and continuous validation failures. With the new public Explorer API, users can automate the integration of their data into visibility and reporting workflows outside of Terraform Cloud.
Developer environments cost money to set up and run. If they are left running after developers have finished using them, your organization is incurring unnecessary costs. Ephemeral workspaces in Terraform Cloud and Enterprise (workspaces that expire after a set time and automatically de-provision) are a way to solve this cost overrun. However, it is sometimes hard to predict how much time you should give an ephemeral workspace to live.
To give users a more dynamic mechanism for ephemeral workspace removal, we’ve introduced inactivity-based destruction for ephemeral workspaces in Terraform Cloud Plus and Terraform Enterprise (v202312-1). Users of those products can now set a workspace to "destroy if inactive", allowing administrators and developers to establish automated clean up of workspaces that haven't been updated or altered within a specified time frame. This eliminates the need for manual clean-up, reducing wasted infrastructure costs and streamlining workspace management.
Variable sets allow Terraform Cloud users to reuse both Terraform-defined and environment variables across certain workspaces or an entire organization. One of the core use cases for this feature is credential management, but variables can also manage anything that can be defined as Terraform variables. When using variable sets for credential management, it is critical to ensure that these variables cannot be tampered with by end users.
Priority variable sets for Terraform Cloud and Terraform Enterprise (v202401-1) provide a convenient way to prevent the overwriting of more infrastructure-critical variable sets, such as those used for credentials. Once the platform team has prioritized a variable set, even if a user has access to workspace variables or can modify a workspace’s Terraform configuration, they still won’t be able to override variables in that prioritized set.
When creating a new variable set, check the "Prioritize the variable values in this variable set" box to make it a priority variable set.
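For teams that manage Terraform Cloud itself as code, the same setting can be sketched with the tfe provider (a sketch, assuming a recent provider version that exposes the priority argument on the tfe_variable_set resource; the names here are illustrative):

```hcl
resource "tfe_variable_set" "credentials" {
  name         = "aws-credentials"
  organization = "my-org"
  global       = true
  # Prevent workspace-level variables from overriding values in this set.
  priority     = true
}
```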
In the past, Terraform Cloud users were not able to use the UI to regenerate a damaged or degraded resource (or resources) for a VCS-connected workspace without switching to the CLI workflow. This was a tedious and error-prone manual process.
In some cases, a remote object may become damaged or degraded in a way that Terraform cannot automatically detect. For example, if software running inside a virtual machine crashes but the virtual machine itself is still running, Terraform will typically have no way to detect and respond to the problem because Terraform directly manages the machine as a whole.
Now, if you know that an object is damaged or if you want to force Terraform to replace it for any other reason, you can override Terraform's default behavior using the replace resources option to instruct Terraform to replace the resource(s) you select. Users can now create a new run via the Terraform Cloud UI with the option to replace resources in addition to the CLI and API approach. The replacement workflow is also available in v202401-1 of Terraform Enterprise.
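For comparison, the CLI equivalent uses the -replace planning option, where the resource address is a placeholder for the object you want Terraform to recreate:

```
terraform apply -replace="aws_instance.example"
```

The new UI option brings this same forced-replacement behavior to VCS-connected workspaces without dropping down to the CLI.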
Run triggers let users connect two workspaces in Terraform Cloud to automatically queue runs when the parent workspace is successfully applied. This is commonly used in multi-tier infrastructure deployments where resources are split between multiple workspaces, or with shared infrastructure like networking or databases. In the past, runs initiated by a run trigger did not auto-apply. Instead, users had to manually confirm the pending run in each workspace individually.
The new “auto-apply run triggers” option in the workspace settings allows workspace admins to choose whether to automatically apply runs initiated by a run trigger. This setting is independent from the workspace auto-apply setting, providing more flexibility in defining workspace behavior. It provides an automated way to chain applies across workspaces, simplifying operations without human intervention.
Auto-apply run triggers are now generally available in Terraform Cloud and Terraform Enterprise v202401-1.
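If you manage workspaces through the tfe provider, the new setting can likely be expressed in code as well (a sketch, assuming a provider version that exposes it as auto_apply_run_trigger on the tfe_workspace resource; the names are illustrative):

```hcl
resource "tfe_workspace" "app" {
  name         = "app-production"
  organization = "my-org"
  # Apply runs queued by run triggers without manual confirmation.
  auto_apply_run_trigger = true
}
```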
Each workspace in Terraform Cloud defines the version of Terraform used to execute runs. Previously, version constraints could be set via the workspaces API, but in the UI version selector, the choices were limited to specific versions of Terraform or the “latest” option, which always selects the newest version. Users had to either manually update versions for each workspace or accept the risk of potential behavior changes in new versions.
Terraform Cloud now has an updated Terraform version selector that includes version constraints, allowing workspaces to automatically update specific Terraform versions with patch releases while staying within the selected major or minor version. This provides a more seamless and flexible experience for users who rely on the web console and don’t have direct API access. This feature is also coming soon to Terraform Enterprise (expected in v202402-1).
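In practice, a workspace pinned with a constraint rather than an exact version might look like this in the tfe provider (a sketch; terraform_version accepting a constraint string mirrors the workspaces API behavior described above, and the names are illustrative):

```hcl
resource "tfe_workspace" "networking" {
  name              = "networking"
  organization      = "my-org"
  # Automatically pick up 1.7.x patch releases while staying on 1.7.
  terraform_version = "~> 1.7.0"
}
```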
These Terraform Cloud and Enterprise enhancements represent a continued evolution aimed at helping customers maximize their infrastructure investments and accelerate application delivery.
To learn more about these features, visit our Terraform guides and documentation on HashiCorp Developer. If you are new to Terraform, sign up for Terraform Cloud and get started for free today.
This post demonstrates how to install the official release binaries for HashiCorp tools on Alpine Linux for container images. We’re sharing these instructions because although HashiCorp supports official repositories for many operating systems and distributions, including various Linux distributions, Alpine Linux users must download the tools from precompiled binaries on the HashiCorp release site. The binaries are not available through Alpine Package Keeper.
You can download the binary for any HashiCorp tool on the HashiCorp release site. Use the release site to download a specific product and its version for a given operating system and architecture. For Alpine Linux, use the product binary compiled for Linux AMD64:
FROM alpine:latest
ARG PRODUCT
ARG VERSION
RUN apk add --update --virtual .deps --no-cache gnupg && \
    cd /tmp && \
    wget https://releases.hashicorp.com/${PRODUCT}/${VERSION}/${PRODUCT}_${VERSION}_linux_amd64.zip && \
    wget https://releases.hashicorp.com/${PRODUCT}/${VERSION}/${PRODUCT}_${VERSION}_SHA256SUMS && \
    wget https://releases.hashicorp.com/${PRODUCT}/${VERSION}/${PRODUCT}_${VERSION}_SHA256SUMS.sig && \
    wget -qO- https://www.hashicorp.com/.well-known/pgp-key.txt | gpg --import && \
    gpg --verify ${PRODUCT}_${VERSION}_SHA256SUMS.sig ${PRODUCT}_${VERSION}_SHA256SUMS && \
    grep ${PRODUCT}_${VERSION}_linux_amd64.zip ${PRODUCT}_${VERSION}_SHA256SUMS | sha256sum -c && \
    unzip /tmp/${PRODUCT}_${VERSION}_linux_amd64.zip -d /tmp && \
    mv /tmp/${PRODUCT} /usr/local/bin/${PRODUCT} && \
    rm -f /tmp/${PRODUCT}_${VERSION}_linux_amd64.zip /tmp/${PRODUCT}_${VERSION}_SHA256SUMS /tmp/${PRODUCT}_${VERSION}_SHA256SUMS.sig && \
    apk del .deps
The example Dockerfile includes build arguments for the product and version. Use these arguments to install the HashiCorp tool of your choice. For example, you can use this Dockerfile to create an Alpine Linux base image with Terraform version 1.7.2:
docker build --build-arg PRODUCT=terraform \
--build-arg VERSION=1.7.2 \
-t joatmon08/terraform:test .
You can run a container with the new Terraform base image and issue Terraform commands:
$ docker run -it joatmon08/terraform:test terraform -help
Usage: terraform [global options] [args]
The available commands for execution are listed below.
The primary workflow commands are given first, followed by
less common or more advanced commands.
Main commands:
init Prepare your working directory for other commands
validate Check whether the configuration is valid
plan Show changes required by the current configuration
apply Create or update infrastructure
destroy Destroy previously-created infrastructure
## omitted for clarity
The example Dockerfile includes commands to download the release’s checksum and signature. Use the signature to verify the checksum and the checksum to validate the archive file. This workflow requires the gnupg package to verify HashiCorp’s signature on the checksum. The Dockerfile installs gnupg and deletes it after installing the release.
While the example Dockerfile verifies and installs a product’s official release binary, it does not include dependencies to run the binary. For example, HashiCorp Nomad requires additional packages such as gcompat. Be sure to install any additional dependencies that your tools require in your container image before running a container for it.
If you need to use a HashiCorp tool in your own container, download and unarchive the appropriate release binaries from our release site. Include verification of the signature and a checksum for the download to ensure its integrity. This installation and verification workflow applies to any Linux distribution that does not include HashiCorp software in its package repository.
Refer to Verify HashiCorp binary downloads to learn more about downloading and verifying HashiCorp release binaries and building container images with HashiCorp tools.
Review our official release channels to download and install HashiCorp products on other platforms and architectures. We release official container images for each product in DockerHub under the HashiCorp namespace.
The code for this demo can be found on GitHub. You can leverage the Microsoft application outlined in this post and the Microsoft Azure Kubernetes Service (AKS) to integrate with OpenAI. You can also read more about how to deploy an application that uses OpenAI on AKS on the Microsoft website.
The rise in AI workloads is driving an expansion of cloud operations. Gartner predicts that cloud infrastructure will grow 26.6% in 2024, as organizations deploying generative AI (GenAI) services look to the public cloud. To create a successful AI environment, orchestrating the seamless integration of artificial intelligence and operations demands a focus on security, efficiency, and cost control.
Data integration, the bedrock of AI, not only requires the harmonious assimilation of diverse data sources but must also include a process to safeguard sensitive information. In this complex landscape, the deployment of public key infrastructure (PKI) and robust secrets management becomes indispensable, adding cryptographic resilience to data transactions and ensuring the secure handling of sensitive information. For more information on the HashiCorp Vault solution, see our use-case page on Automated PKI infrastructure.
Machine learning models, pivotal in anomaly detection, predictive analytics, and root-cause analysis, not only provide operational efficiency but also serve as sentinels against potential security threats. Automation and orchestration, facilitated by tools like HashiCorp Terraform, extend beyond efficiency to become critical components in fortifying against security vulnerabilities. Scalability and performance, guided by resilient architectures and vigilant monitoring, ensure adaptability to evolving workloads without compromising on security protocols.
In response, platform teams are increasingly adopting infrastructure as code (IaC) to enhance efficiency and help control cloud costs. HashiCorp products underpin some of today’s largest AI workloads, using infrastructure as code to help eliminate idle resources and overprovisioning, and reduce infrastructure risk.
This post delves into specific Terraform configurations tailored for application deployment within a containerized environment. The first step looks at using IaC principles to deploy infrastructure to efficiently scale AI workloads, reduce manual intervention, and foster a more agile and collaborative AI development lifecycle on the Azure platform. The second step focuses on how to build security and compliance into an AI workflow. The final step shows how to manage application deployment on the newly created resources.
For this demo, you can use either the Azure OpenAI service or the OpenAI service.
First let's look at the Helm provider block in main.tf:
provider "helm" {
kubernetes {
host = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.host
username = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.username
password = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.password
client_certificate = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.cluster_ca_certificate)
}
}
This code uses information from the AKS resource to populate the details in the Helm provider, letting you deploy resources into AKS pods using native Helm charts.
With this Helm chart method, you deploy multiple resources using Terraform in the helm_release.tf file. This file sets up HashiCorp Vault, cert-manager, and Traefik Labs’ ingress controller within the pods. The Vault configuration shows the Helm set functionality to customize the deployment:
resource "helm_release" "vault" {
name = "vault"
chart = "hashicorp/vault"
set {
name = "server.dev.enabled"
value = "true"
}
set {
name = "server.dev.devRootToken"
value = "AzureA!dem0"
}
set {
name = "ui.enabled"
value = "true"
}
set {
name = "ui.serviceType"
value = "LoadBalancer"
}
set {
name = "ui.serviceNodePort"
value = "null"
}
set {
name = "ui.externalPort"
value = "8200"
}
}
In this demo, the Vault server is customized to be in Dev Mode, have a defined root token, and enable external access to the pod via a load balancer using a specific port.
At this stage you should have created a resource group with an AKS cluster and a Service Bus namespace. The containerized environment should look like this:
If you want to log in to the Vault server at this stage, use the EXTERNAL-IP load balancer address with port 8200 (like this: http://[EXTERNAL_IP]:8200/) and log in using AzureA!dem0.
Now that you have established a base infrastructure in the cloud and the microservices environment, you are ready to configure Vault resources to integrate PKI into your environment. This centers around the pki_build.tf.second file, which you need to rename to remove the .second extension so it is executable as a Terraform file. Because you are adding to the current infrastructure, perform another terraform apply, which adds the elements that set up Vault with a root certificate and issue it within the pod.
To do this, use the Vault provider and configure it to define a mount point for the PKI, a root certificate, a role, certificate URLs, an issuer, and the policy needed to build the PKI:
resource "vault_mount" "pki" {
path = "pki"
type = "pki"
description = "This is a PKI mount for the Azure AI demo."
default_lease_ttl_seconds = 86400
max_lease_ttl_seconds = 315360000
}
resource "vault_pki_secret_backend_root_cert" "root_2023" {
backend = vault_mount.pki.path
type = "internal"
common_name = "example.com"
ttl = 315360000
issuer_name = "root-2023"
}
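The role, certificate URLs, and policy referenced above are configured in the same file. A minimal sketch of what they might look like, using the Vault provider's vault_pki_secret_backend_role, vault_pki_secret_backend_config_urls, and vault_policy resources (all names and values here are illustrative; the demo's actual definitions live in pki_build.tf.second):

```hcl
# Hypothetical role allowing certificates for example.com subdomains
resource "vault_pki_secret_backend_role" "example_dot_com" {
  backend          = vault_mount.pki.path
  name             = "example-dot-com"
  allowed_domains  = ["example.com"]
  allow_subdomains = true
  max_ttl          = "86400"
}

# Certificate URLs so clients can fetch the issuing CA from the Vault pod
resource "vault_pki_secret_backend_config_urls" "config_urls" {
  backend              = vault_mount.pki.path
  issuing_certificates = ["http://vault.default:8200/v1/pki/ca"]
}

# Policy granting the PKI paths needed to sign and issue certificates
resource "vault_policy" "pki" {
  name   = "pki"
  policy = <<-EOT
    path "pki*" {
      capabilities = ["read", "list", "create", "update"]
    }
  EOT
}
```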
Using the same Vault provider, you can also configure Kubernetes authentication to create a role named "issuer" that binds the PKI policy with a Kubernetes service account named issuer:
resource "vault_auth_backend" "kubernetes" {
type = "kubernetes"
}
resource "vault_kubernetes_auth_backend_config" "k8_auth_config" {
backend = vault_auth_backend.kubernetes.path
kubernetes_host = azurerm_kubernetes_cluster.tf-ai-demo.kube_config.0.host
}
resource "vault_kubernetes_auth_backend_role" "k8_role" {
backend = vault_auth_backend.kubernetes.path
role_name = "issuer"
bound_service_account_names = ["issuer"]
bound_service_account_namespaces = ["default","cert-manager"]
token_policies = ["default", "pki"]
token_ttl = 60
token_max_ttl = 120
}
The role connects the Kubernetes service account, issuer, which is created in the default namespace, with the PKI Vault policy. The tokens returned after authentication are valid for 60 seconds. The Kubernetes service account named issuer is created using the Kubernetes provider, discussed in step three, below. These resources are used to configure the model to use HashiCorp Vault to manage the PKI certification process.
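A minimal sketch of that service account, written with the Kubernetes provider (the name and namespace come from the role binding above; the demo's actual definition appears in step three):

```hcl
# Service account that cert-manager uses to authenticate to Vault
resource "kubernetes_service_account" "issuer" {
  metadata {
    name      = "issuer"
    namespace = "default"
  }
}
```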
The image below shows how HashiCorp Vault interacts with cert-manager to issue certificates to be used by the application:
The final stage requires another terraform apply, as you are again adding to the environment. You now use app_build.tf.third to build an application. To do this, rename app_build.tf.third to remove the .third extension and make it executable as a Terraform file.
Interestingly, the code in app_build.tf uses the Kubernetes provider resource kubernetes_manifest. The manifest values are the HCL (HashiCorp Configuration Language) representation of a Kubernetes YAML manifest. (We converted an existing manifest from YAML to HCL to get the code needed for this deployment. You can do this using Terraform’s built-in yamldecode() function or the HashiCorp tfk8s tool.)
The code below represents an example of a service manifest used to create a service on port 80 to allow access to the store-admin app that was converted using the tfk8s tool:
resource "kubernetes_manifest" "service_tls_admin" {
manifest = {
"apiVersion" = "v1"
"kind" = "Service"
"metadata" = {
"name" = "tls-admin"
"namespace" = "default"
}
"spec" = {
"clusterIP" = "10.0.160.208"
"clusterIPs" = [
"10.0.160.208",
]
"internalTrafficPolicy" = "Cluster"
"ipFamilies" = [
"IPv4",
]
"ipFamilyPolicy" = "SingleStack"
"ports" = [
{
"name" = "tls-admin"
"port" = 80
"protocol" = "TCP"
"targetPort" = 8081
},
]
"selector" = {
"app" = "store-admin"
}
"sessionAffinity" = "None"
"type" = "ClusterIP"
}
}
}
Once you’ve deployed all the elements and applications, you use the certificate stored in a Kubernetes secret to apply the TLS configuration to inbound HTTPS traffic. In the example below, you associate "example-com-tls" — which includes the certificate created by Vault earlier — with the inbound IngressRoute deployment using the Terraform manifest:
resource "kubernetes_manifest" "ingressroute_admin_ing" {
manifest = {
"apiVersion" = "traefik.containo.us/v1alpha1"
"kind" = "IngressRoute"
"metadata" = {
"name" = "admin-ing"
"namespace" = "default"
}
"spec" = {
"entryPoints" = [
"websecure",
]
"routes" = [
{
"kind" = "Rule"
"match" = "Host(`admin.example.com`)"
"services" = [
{
"name" = "tls-admin"
"port" = 80
},
]
},
]
"tls" = {
"secretName" = "example-com-tls"
}
}
}
}
To test access to the OpenAI store-admin site, you need a domain name. You use an FQDN to access the site, which you will protect using the generated certificate and HTTPS.
To set this up, access your AKS cluster. The Kubernetes command-line client, kubectl, is already installed in your Azure Cloud Shell. You enter:
kubectl get svc
And should get the following output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello LoadBalancer 10.0.23.77 20.53.189.251 443:31506/TCP 94s
kubernetes ClusterIP 10.0.0.1 443/TCP 29h
makeline-service ClusterIP 10.0.40.79 3001/TCP 4h45m
mongodb ClusterIP 10.0.52.32 27017/TCP 4h45m
order-service ClusterIP 10.0.130.203 3000/TCP 4h45m
product-service ClusterIP 10.0.59.127 3002/TCP 4h45m
rabbitmq ClusterIP 10.0.122.75 5672/TCP,15672/TCP 4h45m
store-admin LoadBalancer 10.0.131.76 20.28.162.45 80:30683/TCP 4h45m
store-front LoadBalancer 10.0.214.72 20.28.162.47 80:32462/TCP 4h45m
traefik LoadBalancer 10.0.176.139 20.92.218.96 80:32240/TCP,443:32703/TCP 29h
vault ClusterIP 10.0.69.111 8200/TCP,8201/TCP 29h
vault-agent-injector-svc ClusterIP 10.0.31.52 443/TCP 29h
vault-internal ClusterIP None 8200/TCP,8201/TCP 29h
vault-ui LoadBalancer 10.0.110.159 20.92.217.182 8200:32186/TCP 29h
Look for the traefik entry and note its EXTERNAL-IP (yours will be different from the one shown above). Then, on your local machine, create a localhost entry for admin.example.com that resolves to this address. For example, on macOS you can use sudo nano /etc/hosts. If you need more help, search “create localhost entry” for your operating system.
Now you can enter https://admin.example.com in your browser and examine the certificate.
This certificate is built from a root certificate authority (CA) held in Vault (example.com) and is valid for this issuer (admin.example.com), allowing secure access over HTTPS. To verify the right certificate is being issued, expand the details in your browser and view the certificate name and serial number:
You can then check this in Vault and see if the common name and serial numbers match.
Terraform has configured all of the elements using the three-step approach shown in this post. To test the OpenAI application, follow Microsoft’s instructions. Skip to Step 4 and use https://admin.example.com to access store-admin, and the original store-front load balancer address to access store-front.
To learn more and keep up with the latest trends in DevOps for AI app development, check out this Microsoft Reactor session with HashiCorp Co-Founder and CTO Armon Dadgar: Using DevOps and copilot to simplify and accelerate development of AI apps. It covers how developers can use GitHub Copilot with Terraform to create code modules for faster app development. You can get started by signing up for a free Terraform Cloud account.
This post covers version 2.4’s new features, including custom workspace naming and workspace tagging.
Previously, workspace names were automatically named using ServiceNow RITM ticket numbers. These non-descriptive names created confusion and a lack of clarity.
Now users can customize workspace names and adhere to their organization’s naming conventions, while preserving a link to the ServiceNow ticket. This provides the flexibility of adding a more descriptive name, which is prepended to the RITM ticket number upon ordering a particular Catalog Item.
Version 2.4 of the ServiceNow Service Catalog for Terraform enables workspace tagging. You can put multiple tags in a comma-separated list and the Service Catalog's backend script will parse them into separate tags and attach them to the workspace in Terraform Cloud. Some default Catalog items let users update both the name and the tags on a previously created workspace.
Not only do tags provide contextual awareness, but they also help admins organize, find, and filter workspaces more effectively in the Terraform Cloud or Enterprise interface, thus reducing the amount of time spent on repetitive manual tasks.
The general availability of the latest version of the ServiceNow Service Catalog for Terraform Cloud and Terraform Enterprise lets users effectively name and tag workspaces. That brings two main benefits: clearer, convention-aligned workspace names and easier organization and filtering of workspaces.
Custom workspace naming and tagging are available today as generally available features. With these updates, the ServiceNow Terraform Catalog becomes even more useful to organizations with many ServiceNow-provisioned workspaces, helping to streamline processes and promote broader adoption. Learn more by reading the ServiceNow Service Catalog documentation. Install the app to your ServiceNow instance from the ServiceNow Store.
Get started with Terraform Cloud for free to begin provisioning and managing your infrastructure in any environment. Link your Terraform Cloud and HashiCorp Cloud Platform (HCP) accounts together for a seamless sign-in experience.
Previously, workspace creation using the operator was limited to the default project in Terraform Cloud. Users needed elevated user permissions, which led to security risks from overly broad access and also hindered self-managed workspaces due to frequent central team dependency. Now with project support, users can specify the project where a workspace will be created. This enhances self-service by allowing users to independently create and manage workspaces, and execute runs within the context of their assigned project.
The project name can now be set in the Workspace resource (example code).
Also, project administrators can use the new Project custom resource to create and manage projects and team access in the organization:
The new Project custom resource manages Terraform Cloud projects and team access (example code).
The general availability of project support for the Terraform Cloud Operator brings two main benefits: improved self-service workspace management and tighter, project-scoped access control.
Take a deeper dive into the Terraform Cloud Operator and securely managing Kubernetes resources by signing up for the Multi-cloud Kubernetes with HashiCorp Terraform webinar.
Learn more about project support for the Terraform Cloud Operator by reading the documentation. If you are completely new to Terraform, sign up for Terraform Cloud and get started using the Free offering today.
In Terraform 1.6 we introduced the Terraform testing framework, a native option to perform unit and integration testing of your Terraform code using the HashiCorp Configuration Language (HCL). Terraform 1.7 brings several improvements to the testing framework, highlighted by the new mocking feature.
Previously, all tests were executed by making actual provider calls using either a plan or apply operation. This is a great way to observe the real behavior of a module. But it can also be useful to mock provider calls to model more advanced situations and to test without having to create actual infrastructure or requiring credentials. This can be especially useful with cloud resources that take a long time to provision, such as databases and higher-level platform services. Mocking can significantly reduce the time required to run a test suite with many different permutations, giving module authors the ability to thoroughly test their code without slowing down the development process.
Test mocking adds powerful flexibility to module testing through two primary functions: mock providers and overrides.
A mocked provider or resource in a Terraform test will generate fake data for all computed attributes that would normally be provided by the underlying provider APIs. By employing aliases, mocked and real providers can be used together to create a flexible Terraform test suite for your modules.
The new mock_provider block defines a mock provider, and within this block you can specify values for computed attributes of resources and data sources. This example mocks the AWS provider and sets a specific value for the Amazon S3 bucket resource. Test runs using the mocked version of this provider will return the specified arn value for all S3 bucket resources instead of randomly generated fake data:
mock_provider "aws" {
mock_resource "aws_s3_bucket" {
defaults = {
arn = "arn:aws:s3:::test-bucket-name"
}
}
}
run "sets_bucket_name" {
variables {
bucket_name = "test-bucket-name"
}
# Validates a known attribute set in the resource configuration
assert {
condition = output.bucket == "test-bucket-name"
error_message = "Wrong bucket name"
}
# Validates a computed attribute using the mocked resource
assert {
condition = output.arn == "arn:aws:s3:::test-bucket-name"
error_message = "Wrong ARN value"
}
}
In addition to mocking whole providers, you can also override specific instances of resources, data sources, and modules. Override blocks can be placed at the root of a Terraform test file to apply to all test runs, or within an individual run block, and can be used with both real and mocked providers. Common use cases for overrides include cutting down test execution time for resources that take a long time to provision, simulating only the outputs of child modules, and diversifying the attributes of a data source for various test scenarios. This example overrides a module and mocks its output values:
mock_provider "aws" {}
override_module {
target = module.big_database
outputs = {
endpoint = "big_database.012345678901.us-east-1.rds.amazonaws.com:3306"
db_name = "test_db"
username = "fakeuser"
password = "fakepassword"
}
}
run "test" {
assert {
condition = module.big_database.username == "fakeuser"
error_message = "Incorrect username"
}
}
There’s much more you can do with the new mocking capabilities of the Terraform test framework to help enhance your testing and produce higher-quality modules. To learn more, check out the Mocks documentation, and try it out by following the updated Write Terraform tests tutorial.
Along with test mocking, Terraform 1.7 includes several other enhancements to the test framework. You can now, for example, supply variable values to your tests from *.tfvars files. For a deep dive on all things testing, check out the recently updated Testing HashiCorp Terraform blog post.
During the infrastructure lifecycle, it’s sometimes necessary to modify the state of a resource. The Terraform CLI has multiple commands related to state manipulation, but these all face similar challenges: they operate on only one resource at a time, must be performed locally with direct access to state and credentials, and they immediately modify the state. This is risky because it leaves the configuration and state out of sync, which can lead to accidental resource changes. That’s why in Terraform 1.1 we introduced the concept of config-driven refactoring with the moved block, and continued this with config-driven import in Terraform 1.5. Today with Terraform 1.7, this concept has again been extended with config-driven remove.
There are several reasons why you might need to remove a resource from state without actually destroying it:
As an alternative to the terraform state rm command, the removed block addresses all of these challenges. Just like the moved and import blocks, state removal can now be performed in bulk and is plannable, so you can be confident that the operation will have the intended effect before modifying state. Removed blocks have a simple syntax:
removed {
# The resource address to remove from state
from = aws_instance.example
# The lifecycle block instructs Terraform not to destroy the underlying resource
lifecycle {
destroy = false
}
}
Config-driven remove is also compatible with all Terraform Cloud workflows, including VCS-driven workspaces. And soon, structured run output in Terraform Cloud will be able to visually render removal actions alongside other plan activity. Read more about using removed blocks with resources and using removed blocks with modules in the Terraform documentation, and try it out with the updated Manage resources in Terraform state tutorial.
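At module scope the syntax is the same; a minimal sketch, assuming a hypothetical module named legacy_network whose resources should be forgotten from state but not destroyed:

```hcl
removed {
  # Remove every resource in this module from state
  from = module.legacy_network

  lifecycle {
    # Keep the underlying infrastructure intact
    destroy = false
  }
}
```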
Terraform 1.7 also includes an enhancement for config-driven import: the ability to expand import blocks using for_each loops. Previously you could target a particular instance of a resource in the to attribute of an import block, but you had to write a separate import block for each instance. Now you can accomplish this with a single import block:
locals {
buckets = {
"staging" = "bucket-demoapp-staging"
"uat" = "bucket-demoapp-uat"
"prod" = "bucket-demoapp-prod"
}
}
import {
for_each = local.buckets
to = aws_s3_bucket.example[each.key]
id = each.value
}
resource "aws_s3_bucket" "example" {
for_each = local.buckets
bucket = each.value
}
This technique can also be used to expand imports across multiple module instances. Learn more and see an example in the Import documentation.
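A sketch of that module-instance case, reusing the local.buckets map from above and assuming a hypothetical child module named storage that wraps the bucket resource (module name, argument, and internal resource address are illustrative):

```hcl
# One import block expands across every instance of the storage module
import {
  for_each = local.buckets
  to       = module.storage[each.key].aws_s3_bucket.example
  id       = each.value
}

module "storage" {
  source   = "./modules/storage" # assumed to declare aws_s3_bucket.example
  for_each = local.buckets
  bucket   = each.value
}
```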
For more details and to learn about all of the enhancements in Terraform 1.7, please review the full HashiCorp Terraform 1.7 changelog. Additional resource links include:
As always, this release wouldn't have been possible without all of the great community feedback we've received via GitHub issues and from our customers. Thank you!
How do you know it is safe to run terraform apply against your infrastructure without negatively affecting critical business applications? You can run terraform validate and terraform plan to check your configuration, but will that be enough? Whether you’ve updated some HashiCorp Terraform configuration or a new version of a module, you want to catch errors quickly before you apply any changes to production infrastructure.
In this post, I’ll discuss some testing strategies for HashiCorp Terraform configuration and modules so that you can run terraform apply with greater confidence. As a HashiCorp Developer Advocate, I’ve compiled some advice to help Terraform users learn how infrastructure tests fit into their organization’s development practices, the differences in testing modules versus configuration, and approaches to manage the cost of testing.
I included a few testing examples with Terraform’s native testing framework. No matter which tool you use, you can generalize the approaches outlined in this post to your overall infrastructure testing strategy. In addition to the testing tools and approaches in this post, you can find other perspectives and examples in the references at the end.
In theory, you might decide to align your infrastructure testing strategy with the test pyramid, which groups tests by type, scope, and granularity. The testing pyramid suggests that engineers write fewer tests in the categories at the top of the pyramid, and more tests in the categories at the bottom. Higher-level tests in the pyramid take more time to run and cost more due to the higher number of resources you have to configure and create.
In reality, your tests may not perfectly align with the pyramid shape. The pyramid offers a common framework to describe what scope a test can cover to verify configuration and infrastructure resources. I’ll start at the bottom of the pyramid with unit tests and work my way up the pyramid to end-to-end tests. Manual testing involves spot-checking infrastructure for functionality and can have a high cost in time and effort.
While not on the test pyramid, you often encounter tests that verify the hygiene of your Terraform configuration. Use terraform fmt -check and terraform validate to check the formatting and validate the correctness of your Terraform configuration.
When you collaborate on Terraform, you may consider testing the Terraform configuration for a set of standards and best practices. Build or use a linting tool to analyze your Terraform configuration for specific best practices and patterns. For example, a linter can verify that your teammate defines a Terraform variable for an instance type instead of hard-coding the value.
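For example, a linter might enforce a pattern like the following, where the instance type is declared as a variable rather than embedded in the resource (all names and values are illustrative):

```hcl
variable "instance_type" {
  type        = string
  default     = "t3.micro"
  description = "EC2 instance type, exposed as a variable instead of a hard-coded value"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # example AMI ID
  instance_type = var.instance_type       # a linter would flag a hard-coded value here
}
```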
At the bottom of the pyramid, unit tests verify individual resources and configurations for expected values. They should answer the question, “Does my configuration or plan contain the correct metadata?” Traditionally, unit tests should run independently, without external resources or API calls.
For additional test coverage, you can use any programming language or testing tool to parse the Terraform configuration in HashiCorp Configuration Language (HCL) or JSON and check for statically defined parameters, such as provider attributes with defaults or hard-coded values. However, none of these tests verify correct variable interpolation, list iteration, or other configuration logic. As a result, I usually write additional unit tests to parse the plan representation instead of the Terraform configuration.
Configuration parsing does not require active infrastructure resources or authentication to an infrastructure provider. However, unit tests against a Terraform plan require Terraform to authenticate to your infrastructure provider and make comparisons. These types of tests overlap with security testing done via policy as code because you check attributes in Terraform configuration for the correct values.
For example, your Terraform module parses the IP address from an AWS instance’s DNS name and outputs a list of IP addresses to a local file. At a glance, you don’t know if it correctly replaces the hyphens and retrieves the IP address information.
variable "services" {
type = map(object({
node = string
kind = string
}))
description = "List of services and their metadata"
}
variable "service_kind" {
type = string
description = "Service kind to search"
}
locals {
ip_addresses = toset([
for service, service_data in var.services :
replace(replace(split(".", service_data.node)[0], "ip-", ""), "-", ".") if service_data.kind == var.service_kind
])
}
resource "local_file" "ip_addresses" {
content = jsonencode(local.ip_addresses)
filename = "./${var.service_kind}.hcl"
}
You could pass an example set of services and run terraform plan to manually check that your module retrieves only the TCP services and outputs their IP addresses. However, as you or your team adds to this module, you may break the module’s ability to retrieve the correct services and IP addresses. Writing unit tests ensures that the logic of searching for services based on kind and retrieving their IP addresses remains functional throughout a module’s lifecycle.
This example uses two sets of unit tests written with terraform test to check the logic generating the service’s IP addresses for each service kind. The first set of tests verifies that the file contents will have two IP addresses for TCP services, while the second set checks that the file contents will have one IP address for the HTTP service:
variables {
services = {
"service_0" = {
kind = "tcp"
node = "ip-10-0-0-0"
},
"service_1" = {
kind = "http"
node = "ip-10-0-0-1"
},
"service_2" = {
kind = "tcp"
node = "ip-10-0-0-2"
},
}
}
run "get_tcp_services" {
variables {
service_kind = "tcp"
}
command = plan
assert {
condition = jsondecode(local_file.ip_addresses.content) == ["10.0.0.0", "10.0.0.2"]
error_message = "Parsed `tcp` services should return 2 IP addresses, 10.0.0.0 and 10.0.0.2"
}
assert {
condition = local_file.ip_addresses.filename == "./tcp.hcl"
error_message = "Filename should include service kind `tcp`"
}
}
run "get_http_services" {
variables {
service_kind = "http"
}
command = plan
assert {
condition = jsondecode(local_file.ip_addresses.content) == ["10.0.0.1"]
error_message = "Parsed `http` services should return 1 IP address, 10.0.0.1"
}
assert {
condition = local_file.ip_addresses.filename == "./http.hcl"
error_message = "Filename should include service kind `http`"
}
}
I set some mock values for a set of services in the services variable. The tests include command = plan to check attributes in the Terraform plan without applying any changes. As a result, the unit tests do not create the local file defined in the module.
The example demonstrates positive testing, where I test that the input works as expected. Terraform’s testing framework also supports negative testing, where you might expect a validation to fail for an incorrect input. Use the expect_failures attribute to capture the error.
If you do not want to use the native testing framework in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice to parse the plan representation in JSON and verify your Terraform logic.
Besides testing attributes in the Terraform plan, unit tests can validate the number of resources or attributes generated by for_each or count meta-arguments, and the values generated by for expressions.
If you wish to unit test infrastructure by simulating a terraform apply without creating resources, you can choose to use mocks. Terraform 1.7 includes a test mocking framework, which you can use to mock providers and resources. The test mocking framework allows you to test your modules without connecting to a cloud service provider API. You can also use community tools that mock cloud service provider APIs. However, beware that not all mocks accurately reflect the behavior and configuration of their target API.
Overall, unit tests run very quickly and provide rapid feedback. As the author of a Terraform module or configuration, you can use unit tests to communicate the expected values of a configuration to other collaborators in your team and organization. Since unit tests run independently of infrastructure resources, they cost virtually nothing to run frequently.
At the next level from the bottom of the pyramid, contract tests check that a configuration using a Terraform module passes properly formatted inputs. Contract tests answer the question, “Does the expected input to the module match what I think I should pass to it?”
Contract tests ensure that the contract between a Terraform configuration’s expected inputs to a module and the module’s actual inputs has not been broken. Most contract testing in Terraform helps the module consumer by communicating how the author expects someone to use their module. If you expect someone to use your module in a specific way, use a combination of input variable validations, preconditions, and postconditions to validate the combination of inputs and surface the errors.
For example, use a custom input variable validation rule to ensure that an AWS load balancer’s listener rule receives a valid integer range for its priority:
```hcl
variable "listener_rule_priority" {
  type        = number
  default     = 1
  description = "Priority of listener rule, between 1 and 50000"

  validation {
    condition     = var.listener_rule_priority >= 1 && var.listener_rule_priority <= 50000
    error_message = "The priority of listener_rule must be between 1 and 50000."
  }
}
```
As a part of input validation, you can use Terraform’s rich language syntax to validate variables with an object structure to enforce that the module receives the correct fields. This module example uses a map to represent a service object and its expected attributes:
```hcl
variable "services" {
  type = map(object({
    node = string
    kind = string
  }))
  description = "List of services and their metadata"
}
```
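The `services` variable is also what the unit tests earlier mocked. Those mock values could be declared once at the top of the test file, as in this sketch (the keys and node addresses are chosen to match the earlier assertions):

```hcl
variables {
  services = {
    "service-0" = {
      node = "10.0.0.0"
      kind = "tcp"
    }
    "service-1" = {
      node = "10.0.0.1"
      kind = "http"
    }
    "service-2" = {
      node = "10.0.0.2"
      kind = "tcp"
    }
  }
}
```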
In addition to custom validation rules, you can use preconditions and postconditions to verify specific resource attributes defined by the module consumer. For example, you cannot use a validation rule to check if the address blocks overlap. Instead, use a precondition to verify that your IP addresses do not overlap with networks in HashiCorp Cloud Platform (HCP) and your AWS account:
```hcl
resource "hcp_hvn" "main" {
  hvn_id         = var.name
  cloud_provider = "aws"
  region         = local.hcp_region
  cidr_block     = var.hcp_cidr_block

  lifecycle {
    precondition {
      condition     = var.hcp_cidr_block != var.vpc_cidr_block
      error_message = "HCP HVN must not overlap with VPC CIDR block"
    }
  }
}
```
Contract tests catch misconfigurations in modules before applying them to live infrastructure resources. You can use them to check for correct identifier formats, naming standards, attribute types (such as private or public networks), and value constraints such as character limits or password requirements.
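For instance, a naming-standard contract could be expressed as a validation rule like this sketch (the `name` variable and the environment-prefix convention are assumptions for illustration):

```hcl
variable "name" {
  type        = string
  description = "Resource name, prefixed with its environment"

  validation {
    # Enforce an assumed convention: <env>-<lowercase-alphanumeric-name>
    condition     = can(regex("^(dev|stage|prod)-[a-z0-9-]{1,32}$", var.name))
    error_message = "Name must start with dev-, stage-, or prod-, followed by up to 32 lowercase characters."
  }
}
```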
If you do not want to use custom conditions in Terraform, you can use HashiCorp Sentinel, a programming language, or your configuration testing tool of choice. Maintain these contract tests in the module repository and use a CI framework to pull them into each Terraform configuration that uses the module. When someone references the module in their configuration and pushes a change to version control, the contract tests run against the plan representation before you apply.
Unit and contract tests may require extra time and effort to build, but they allow you to catch configuration errors before running `terraform apply`. For larger, more complex configurations with many resources, you should not manually check individual parameters. Instead, use unit and contract tests to quickly automate the verification of important configurations and set a foundation for collaboration across teams and organizations. Lower-level tests communicate system knowledge and expectations to teams that need to maintain and update Terraform configuration.
With lower-level tests, you do not need to create external resources to run them, but the top half of the pyramid includes tests that require active infrastructure resources to run properly. Integration tests check that a module or configuration provisions infrastructure without errors. They answer the question, “Does this module or configuration create the resources successfully?” A `terraform apply` offers limited integration testing because it creates and configures resources while managing dependencies. You should write additional tests to check for configuration parameters on the active resource.
In my example, I add a new `terraform test` run to apply the configuration and create the file. Then, I verify that the file exists on my filesystem. The integration test creates the file using a `terraform apply` and removes the file after issuing a `terraform destroy`.
```hcl
run "check_file" {
  variables {
    service_kind = "tcp"
  }

  command = apply

  assert {
    condition     = fileexists("${var.service_kind}.hcl")
    error_message = "File `${var.service_kind}.hcl` does not exist"
  }
}
```
Should you verify every parameter that Terraform configures on a resource? You could, but it may not be the best use of your time and effort. Terraform providers already include acceptance tests that verify resources properly create, update, and delete with the right configuration values. Instead, use integration tests to verify that Terraform outputs include the correct values or number of resources. They also test infrastructure configuration that can only be verified after a `terraform apply`, such as invalid configurations, nonconformant passwords, or results of `for_each` iteration.
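An output-focused integration test might look like the following sketch, assuming the module exposed a hypothetical `service_count` output (not part of the original example):

```hcl
run "verify_outputs" {
  variables {
    service_kind = "tcp"
  }

  command = apply

  assert {
    # Two tcp services are expected from the mocked service data
    condition     = output.service_count == 2
    error_message = "Expected the module to report 2 `tcp` services"
  }
}
```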
When choosing an integration testing framework outside of `terraform test`, consider the existing integrations and languages within your organization. Integration tests help you decide whether to update your module version by verifying that the module still runs without errors.
Since you have to set up and tear down the resources, you will find that integration tests can take 15 minutes or more to complete, depending on the resource. As a result, implement as much unit and contract testing as possible to fail quickly on wrong configurations instead of waiting for resources to create and delete.
After you apply your Terraform changes to production, you need to know whether or not you’ve affected end-user functionality. End-to-end tests answer the question, “Can someone use the infrastructure system successfully?”
For example, application developers and operators should still be able to retrieve a secret from HashiCorp Vault after you upgrade the version. End-to-end tests can verify that changes did not break expected functionality. To check that you’ve upgraded Vault properly, you can create an example secret, retrieve the secret, and delete it from the cluster.
I usually write an end-to-end test using a Terraform check to verify that any updates I make to an HCP Vault cluster return a healthy, unsealed status:
```hcl
check "hcp_vault_status" {
  data "http" "vault_health" {
    url = "${hcp_vault_cluster.main.vault_public_endpoint_url}/v1/sys/health"
  }

  assert {
    condition     = data.http.vault_health.status_code == 200 || data.http.vault_health.status_code == 473
    error_message = "${data.http.vault_health.url} returned an unhealthy status code"
  }
}
```
Besides a `check` block, you can write end-to-end tests in any programming language or testing framework. This usually includes an API call to check an endpoint after creating infrastructure. End-to-end tests typically depend on an entire system, including networks, compute clusters, load balancers, and more. As a result, these tests usually run against long-lived development or production environments.
When you test Terraform modules, you want enough verification to ensure a new, stable release of the module for use across your organization. To ensure sufficient test coverage, write unit, contract, and integration tests for modules.
A module delivery pipeline starts with a `terraform plan` and then runs unit tests (and, if applicable, contract tests) to verify the expected Terraform resources and configurations. Then, run `terraform apply` and the integration tests to check that the module can still run without errors. After running integration tests, destroy the resources and release a new module version.
The Terraform Cloud private registry offers a branch-based publishing workflow that includes automated testing. If you use `terraform test` for your modules, the private registry automatically runs those tests before releasing a module.
When testing modules, consider the cost and test coverage of module tests. Conduct module tests in a different project or account so that you can independently track the cost of your module testing and ensure module resources do not overwrite environments. On occasion, you can omit integration tests because of their high financial and time cost. Spinning up databases and clusters can take half an hour or more. When you’re constantly pushing changes, you might even create multiple test instances.
To manage the cost, run integration tests after merging feature branches and select the minimum number of resources you need to test the module. If possible, avoid creating entire systems. Module testing applies mostly to immutable resources because of its create and delete sequence. The tests cannot accurately represent the end state of brownfield (existing) resources because they do not test updates. As a result, it provides confidence in the module’s successful usage but not necessarily in applying module updates to live infrastructure environments.
Compared to modules, Terraform configuration applied to environments should include end-to-end tests to check for end-user functionality of infrastructure resources. Write unit, integration, and end-to-end tests for configuration of active environments.
The unit tests do not need to cover the configuration in modules. Instead, focus on unit testing any configuration not associated with modules. Integration tests can check that changes successfully run in a long-lived development environment, and end-to-end tests verify the environment’s initial functionality.
If you use feature branching, merge your changes and apply them to a production environment. In production, run end-to-end tests against the system to confirm system availability.
Failed changes to active environments will affect critical business systems. In its ideal form, a long-running development environment that accurately mimics production can help you catch potential problems. From a practical standpoint, you may not always have a development environment that fully replicates a production environment because of cost concerns and the difficulty of replicating user traffic. As a result, you usually run a scaled-down version of production to save money.
The difference between development and production will affect the outcome of your tests, so be aware of which tests may be more important for flagging errors or disruptive to run. Even if configuration tests have less accuracy in development, they can still catch a number of errors and help you practice applying and rolling back changes before production.
Depending on your system’s cost and complexity, you can apply a variety of testing strategies to Terraform modules and configuration. While you can write tests in your programming language or testing framework of choice, you can also use the testing frameworks and constructs built into Terraform for unit, contract, integration, and end-to-end testing.
| Test type | Use case | Terraform configuration |
|-----------|----------|-------------------------|
| Unit test | Modules, configuration | `terraform test` |
| Contract test | Modules | Input variable validation |
| Integration test | Modules, configuration | `terraform test` |
| End-to-end test | Configuration | Check blocks |
This post has explained the different types of tests, how you can apply them to catch errors in Terraform configurations and modules before production, and how to incorporate them into pipelines. Your Terraform testing strategy does not need to be a perfect test pyramid. At the very least, automate some tests to reduce the time you need to manually verify changes and check for errors before they reach production.
Check out our tutorial on how to Write Terraform tests to learn about writing Terraform tests for unit and integration testing and running them in the Terraform Cloud private module registry. For more information on using checks, Use checks to validate infrastructure offers a more in-depth example. If you want to learn about writing tests for security and policy, review our documentation on Sentinel.
CDKTF provides `TerraformIterator`, which supports dynamic list iterations on the block and resource level to enable workflows that users are familiar with in HCL.
Today, we’re releasing CDKTF version 0.20, which improves the existing implementation of iterators. This new support allows developers to handle more complex cases, in which resources are created based on data that’s known only at run time. CDKTF version 0.20 also enables HCL output and improves error messages.
The iterator improvements include:

- Accessing resources created by iterators (iterator chaining)
- Iterating over values known only after apply
- Support for more variations of HCL `for` expressions
CDKTF 0.20 supports accessing resources created by iterators. This was previously possible only with an escape hatch. To chain iterators, you can use the `TerraformIterator.fromResources()` or `TerraformIterator.fromDataSources()` methods with the resource or data source you want to chain as an argument:
```typescript
const s3BucketConfigurationIterator = TerraformIterator.fromMap({
  website: {
    name: "website-static-files",
    tags: { app: "website" },
  },
  images: {
    name: "images",
    tags: { app: "image-converter" },
  },
});

const s3Buckets = new S3Bucket(this, "complex-iterator-buckets", {
  forEach: s3BucketConfigurationIterator,
  bucket: s3BucketConfigurationIterator.getString("name"),
  tags: s3BucketConfigurationIterator.getStringMap("tags"),
});

// This would be TerraformIterator.fromDataSources for data sources
const s3BucketsIterator = TerraformIterator.fromResources(s3Buckets);

const helpFile = new TerraformAsset(this, "help", {
  path: "./help",
});

new S3BucketObject(this, "object", {
  forEach: s3BucketsIterator,
  bucket: s3BucketsIterator.getString("id"),
  key: "help",
  source: helpFile.path,
});
```
We now support iterating over resource attributes containing values that are known only after apply. The most common example for this use case is validating an AWS Certificate Manager (ACM) certificate through DNS. In this case, the `domain_validation_options` attribute contains values that are known only after the AWS ACM certificate resource has been created:
```typescript
// Creates a new AWS managed SSL certificate
const cert = new AcmCertificate(this, "cert", {
  domainName: "example.com",
  validationMethod: "DNS",
});

// The existing domain that we control and want to create an SSL certificate for
const dataAwsRoute53ZoneExample = new DataAwsRoute53Zone(this, "dns_zone", {
  name: "example.com",
  privateZone: false,
});

// NEW: fromComplexList() allows iterating over the domain validation options
// while mapping it into a map with "domain_name" as the key
const exampleForEachIterator = TerraformIterator.fromComplexList(
  cert.domainValidationOptions,
  "domain_name"
);

// This will create the DNS records to validate the ACM certificate based
// on the domain validation options from the ACM certificate resource
const records = new Route53Record(this, "record", {
  forEach: exampleForEachIterator,
  allowOverwrite: true,
  name: exampleForEachIterator.getString("name"),
  records: [exampleForEachIterator.getString("record")],
  ttl: 60,
  type: exampleForEachIterator.getString("type"),
  zoneId: dataAwsRoute53ZoneExample.zoneId,
});

const recordsIterator = TerraformIterator.fromResources(records);

// This causes Terraform to wait until the validation through DNS was successful
new AcmCertificateValidation(this, "validation", {
  certificateArn: cert.arn,
  validationRecordFqdns: Token.asList(recordsIterator.pluckProperty("fqdn")),
});
```
Iterators now support expressing more variations of `for` expressions in HCL. For example, it is now possible to map a list of objects into a list of strings by “plucking” one of their properties:

```typescript
new TerraformLocal(this, "list-of-names", mapIterator.pluckProperty("name"));
```
The new functions are exposed on complex (i.e. non-primitive) computed (i.e. not configurable) lists. They include `values()`, `keys()`, `pluckProperty()`, `forExpressionForList()`, and `forExpressionForMap()`. The latter two are very close to HCL, allowing you to leverage the full power of HCL `for` expressions without using an override escape hatch.
CDKTF synth now supports HCL as an output, in addition to the Terraform JSON which was previously supported. This makes it easier to debug the configuration that CDKTF creates, as HCL output is easier for people to read.
Moreover, this means you can use CDKTF as a templating engine to generate a Terraform config that will then be used and edited by other teams. Finally, it means you can now use CDKTF with tooling that supports only Terraform HCL. Although Terraform Cloud’s native features, like policy evaluation and health assessments, work with JSON, other common tools, like code scanners and linters, support only HCL.
Error messages in the `cdktf` package now give more context and propose solutions to the problem at hand. For example:
Old:

```
No app could be identified for the construct at path 'path/to/construct'
```

New:

```
No app could be identified for the construct at path 'path/to/construct', likely a TerraformStack.
The scope of CDKTF's TerraformStack class is a single App instance created by 'const app = new App()'. The App is the root of your project that holds project configuration and validations.
You can learn more about the App here: https://developer.hashicorp.com/terraform/cdktf/concepts/cdktf-architecture#app-class:~:text=and%20Resource.-,App%20Class,-Each%20CDKTF%20project
```
If you’re new to the project, these tutorials for CDKTF are the best way to get started. You can dive deeper into our documentation with this overview of CDKTF.
Whether you’re still experimenting or actively using CDK for Terraform, we’d love to hear from you. Please file any bugs you encounter, let us know about your feature requests, and share your questions, thoughts, and experiences in the CDK for Terraform discussion forum.
To address these issues, I developed Target CLI to store these details in context profiles, which engineers can easily switch between. Currently, Target CLI supports HashiCorp Vault, Boundary, Consul, Nomad, and Terraform. Support for all the tools except Terraform is based on connecting to and interacting with clusters. Support for Terraform is based on groupings of Terraform configurations that represent a single environment.
Target CLI can be installed on Mac and Linux. We plan to add Windows support in the near future. Here are the instructions for installing Target CLI on various platforms:
Run this command to install Target CLI using Homebrew:
```shell
brew tap devops-rob/tap && \
  brew install target
```
You can also use a shell script to install Target CLI on both Mac and Linux distributions. To execute the script, run this command in your terminal:
```shell
curl https://raw.githubusercontent.com/devops-rob/target-cli/main/install.sh | bash
```
No matter which installation method you chose, Target CLI should be installed and available on your `PATH`. To test this, type the following command in your terminal:

```shell
target
```
If Target CLI is installed correctly, you should see this output:
```
Target CLI allows users to configure and switch between different context
profiles for their Vault, Nomad, Consul, Terraform, and Boundary targets by
setting tool-specific environment variables.

A context profile contains connection details for a given target, as shown below.

Example:
  A vault-dev context profile could point to https://example-dev-vault.com:8200
  with a vault token value of s.jidjibndiyuqepjepwo

Usage:
  target [command]

Available Commands:
  boundary    Manage Boundary context profiles
  completion  Generate the autocompletion script for the specified shell
  config      Configure target CLI for shell sessions
  consul      Manage Consul context profiles
  help        Help for any command
  nomad       Manage Nomad context profiles
  terraform   Manage Terraform context profiles
  vault       Manage Vault context profiles
  version     Show current installed version of target-cli

Flags:
  -h, --help      help for target
  -v, --version   version for target

Use "target [command] --help" for more information about a command.
```
Target CLI supports setting up default context profiles that load all of your default configurations when a shell is spawned. To enable this behavior, Target CLI must place a small helper script in your shell’s startup script. The `config` command is a helper utility that performs this task.

For example, if you are using Zsh as your shell, the helper script goes in the `.zshrc` file, which is usually located in the `HOME` directory.
The following command configures your Zsh shell for Target CLI defaults:
```shell
target config --path ~/.zshrc
```
This command adds the following lines to your `.zshrc` file:
```shell
# Target CLI Defaults
for file in /Users/rbarnes/.target/defaults/*; do
  if [ -f "$file" ]; then
    source "$file"
  fi
done
```
Creating context profiles for all tools, with the exception of Terraform, requires an `endpoint` config parameter (all other configurations are optional):
```shell
target vault create local-dev \
  --endpoint "http://localhost:8200" \
  --token "root" \
  --namespace "droids"
```
The above example creates a Vault context profile called `local-dev`, which points to the `http://localhost:8200` endpoint, using the token `root` and pointing to the `droids` namespace.
The same thing can be done for all of the other tools with the exception of Terraform. Each tool has its own specific configurations available. Here’s a Nomad example:
```shell
target nomad create local-cluster \
  --endpoint "http://localhost:4646" \
  --token "secret-id" \
  --namespace "local" \
  --region "uk"
```
To view a complete list of the configuration parameters, use the help flag (`-h`) for your desired tool, as shown here:

```shell
target nomad create -h
```
Once you create context profiles, you can list them for each tool using the `list` subcommand. Here is an example for Vault:

```shell
target vault list
```

Example output:

```
+----------------------+-------------------------------------------+
| PROFILE NAME         | ENDPOINT                                  |
+----------------------+-------------------------------------------+
| local-dev            | http://localhost:8200                     |
| new-vault            | https://some-vault-cluster:8200           |
+----------------------+-------------------------------------------+
```
The `select` subcommand for each tool prints out the export commands to set the context profile configurations in the current shell. For example:

```shell
target vault select local-dev
```

This outputs:

```
export VAULT_ADDR=http://localhost:8200; export VAULT_TOKEN=root
```
In order for this to take effect in the current shell session, simply wrap this command in `eval`, as shown here:

```shell
eval $(target vault select local-dev)
```
Once this is done, any Vault CLI commands will run against the cluster specified in the `local-dev` context profile.
If you would like a particular context profile to be loaded by default any time a shell is spawned, you can set it as the default with this command:

```shell
target vault set-default local-dev
```
The above example sets the `local-dev` profile as the default, so its configured environment variables will be set automatically. This assumes that you have configured Target CLI for your chosen shell as described above in the Setting up Target CLI section.

Note that setting a profile as default does not take effect in the current shell session; it applies to newly spawned shells.
As mentioned earlier, support for Terraform differs from the other tools. Instead of managing connections to clusters and servers, Target CLI manages groupings of Terraform configurations, allowing the same Terraform code to be used to deploy multiple environments.
In order for this to work, the configuration parameters within the Terraform code that you want Target CLI to specify must be set up to use variables, as shown here:
```hcl
resource "digitalocean_droplet" "nomad_client" {
  image  = "ubuntu-20-04-x64"
  name   = var.droplet_name
  region = var.region
  size   = var.droplet_size
}
```
The above example code deploys a droplet for a Nomad client to the DigitalOcean cloud. Notice that the `name`, `region`, and `size` configuration parameters are all set using variables. This allows Target CLI to set their values according to the context profiles created.
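For completeness, the matching variable declarations might look like this sketch (the names mirror the resource above; the types and descriptions are assumptions):

```hcl
variable "droplet_name" {
  type        = string
  description = "Name of the DigitalOcean droplet"
}

variable "region" {
  type        = string
  description = "DigitalOcean region slug, e.g. lon1 or nyc2"
}

variable "droplet_size" {
  type        = string
  description = "Droplet size slug, e.g. s-2vcpu-4gb"
}
```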
Using this example, you could set up two context profiles: one for New York, called `nyc-prod`, and another for London, called `ldn-dev`. The London droplet will be used for development purposes, as the engineering team is based in London, whereas the New York droplet will be used for production, because the clients are in New York. The snippet below shows the Target CLI command that would be run to create the London context profile:
```shell
target terraform create ldn-dev \
  --var "droplet_name=ldn-dev" \
  --var "region=lon1" \
  --var "droplet_size=s-2vcpu-4gb"
```
The above command creates a context profile called `ldn-dev` with the specified configuration parameters. Notice that the `--var` flag can be specified as many times as required:
```shell
target terraform create nyc-prod \
  --var "droplet_name=nyc-prod" \
  --var "region=nyc2" \
  --var "droplet_size=s-8vcpu-16gb"
```
The command above does the same thing for the New York instance. Selecting a context profile in the current shell and setting a default context profile work the same way as described in the Switching context profiles and Setting default context profiles sections, respectively.
Once a context profile is selected, or the default context profile has been loaded into your shell, run the usual `terraform plan` and `terraform apply` commands, and Terraform will pick up the configuration values from the selected context profile.
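Terraform's standard mechanism for injecting variable values from the environment is `TF_VAR_`-prefixed variables, so selecting the `ldn-dev` profile presumably emits exports along these lines (an illustration of the mechanism, not verified output of Target CLI):

```shell
# Hypothetical exports for the ldn-dev profile; Terraform maps
# TF_VAR_<name> to the input variable var.<name>
export TF_VAR_droplet_name=ldn-dev
export TF_VAR_region=lon1
export TF_VAR_droplet_size=s-2vcpu-4gb
```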
The primary benefit of Target CLI lies in its ability to simplify the process of switching between different clusters and configurations of HashiCorp tools like Vault, Nomad, Consul, Boundary, and Terraform. By storing connection details in easy-to-switch context profiles, Target CLI eliminates the need for setting and remembering multiple environment variables for each cluster. This not only saves time but also reduces the likelihood of errors that can occur when manually configuring environments. Target CLI’s support for managing Terraform configurations further extends its utility, making it a versatile solution for environment management across various HashiCorp products.
Install Target CLI using the instructions above and simplify your connection and environment settings. If you would like additional HashiCorp tool support added to Target CLI, open a GitHub issue and share your requirements.
For workspaces linked to a supported version control system (VCS), Terraform Cloud posts status checks back to the repository for runs that occur in response to actions like pull requests (PRs) and merges. These status checks indicate the results of plan/apply runs, policy checks, and run tasks, along with a link back to the corresponding run in Terraform Cloud. This helps ensure the expected result from Terraform changes before code is merged to production and offers valuable in-context feedback for Terraform developers.
However, the previous behavior of sending one check per run did not scale well for customers with monolithic repositories, or monorepos, that contained a large number of workspaces or co-located modules. That approach created an excessive number of status checks, making it difficult to identify essential updates and Terraform plan changes. This highlighted a need for a more efficient way to handle status checks at scale.
Aggregated reviews address this issue with a consolidated presentation of status checks for monorepos across multiple workspaces, highlighting the most vital changes that require the user’s attention or validation. By highlighting the key changes that could impact infrastructure, the new feature reduces the likelihood of missing unexpected modifications. This concise view of summarized changes also offers a detailed option for more information when needed.
While the key use case of this feature is to streamline the status check process for large-scale monorepos, the enhancement also helps with any Terraform repository connected to many workspaces for the purposes of repeated provisioning across accounts, regions, or environments.
The new feature supports Terraform Cloud’s official VCS integrations, including GitHub, GitLab, Bitbucket, and Azure DevOps. The two highlights of this update are an upgraded commit summary and a new commit details page in Terraform Cloud.
The upgraded commit summary aggregates status checks by organization, providing a quick overview of proposed changes within all workspaces linked to the repository. Users can access additional information by clicking on a Details link, which leads to the new commit page in Terraform Cloud.
This page includes comprehensive details of changes linked to each workspace associated with the monorepo, featuring a summary bar that indicates proposed, modified, or destroyed resources. Workspaces are grouped by their status: those that need attention, have resource changes, are still pending, or workspaces with no changes. Users can also filter workspaces by name and project.
This feature can be enabled in the Version Control section of your Terraform Cloud organization settings. To get started, check out the organization settings documentation.
Get started with Terraform Cloud for free to begin provisioning and managing your infrastructure in any environment. Link your Terraform Cloud and HashiCorp Cloud Platform (HCP) accounts together for a seamless sign-in experience.
2023 was a year of incredible technical advancement, especially in the field of AI, that we may one day look back on as a turning point in computing. For HashiCorp, the unprecedented number of new features we’ve brought to our infrastructure and security lifecycle management products marks a turning point as well. As always, we didn’t do it alone — users, customers, partners, and employees played essential roles helping us get to where we are.
This year we saw more signals that the adoption of infrastructure as code and cloud automation is accelerating. HCL rose to #11 on the most-used languages chart in the GitHub Octoverse report, with 36% year-over-year growth. As organizations look to use cloud in a more productive and cost-efficient manner, it’s clear that infrastructure as code plays a key role, and that many organizations are still transforming their approach to infrastructure management.
With that in mind, I wanted to highlight some of the most important enhancements to our products this year, along with improvements to our practitioner experience.
2023 was a year of major improvements for HashiCorp Terraform, with three releases (1.4, 1.5, and 1.6) helping accelerate developer productivity. Highlights include config-driven import in Terraform 1.5, which lets developers safely and securely import existing resources to Terraform and automatically generate the matching configuration. First-class support for checks allows users to define health checks along with resources and modules, making it easier to ensure infrastructure stays healthy. The Terraform test framework, introduced in Terraform 1.6, gives developers easy-to-use tools to perform unit and integration testing of Terraform modules.
To help module producers and consumers truly benefit from this framework, test-integrated module publishing for Terraform Cloud streamlines the module testing and publishing process. We also introduced AI-generated module tests to jumpstart the test authoring process. And there is more to come in 2024, including the highly anticipated Terraform stacks to simplify complex provisioning workflows at scale.
Terraform Cloud also introduced new capabilities to help organizations improve their security with features like dynamic provider credentials, native Open Policy Agent (OPA) integration, and the general availability of no-code provisioning to enable secure self-service infrastructure workflows.
The Cloud Development Kit for Terraform (CDKTF) continued its rapid pace of innovation, with five releases in 2023 that brought many performance and productivity enhancements for developers working in programming languages such as TypeScript, Python, Go, and more. This year also saw the introduction of multi-language provider docs in the Terraform Registry to help developers discover resource configurations and examples in their preferred language.
HCP Packer also made strides to help customers with security at the image level, with features like audit log streaming, inherited revocation, and channel rollback to simplify image health monitoring and lifecycle management.
Like Terraform, Consul, and Nomad, HashiCorp Vault also notched three major releases this year (1.13, 1.14, 1.15), which added important features such as ACME support in Vault PKI, the OpenLDAP secrets engine, Certificate Issuance External Policy Service (CIEPS), and dozens of other productivity, observability, and security enhancements.
A new Kubernetes integration method was also launched in 2023: the Vault Secrets Operator, a first-class Kubernetes Operator for Vault. Vault now offers three primary methods for Vault-Kubernetes integration; see how the Vault Secrets Operator compares with the other two and find out if it’s the best fit for your use case.
Perhaps the biggest news in the Vault ecosystem this year was the release of HCP Vault Secrets and the associated secrets sync functionality, which is also available in HCP Vault and Vault Enterprise. Secrets sync represents a major leap in how companies fight secret sprawl and centralize secrets management into one interface, enabling secrets to be synced to external systems, such as cloud native secret stores and SaaS solutions like GitHub and Vercel. The focus and simplicity of HCP Vault Secrets helps teams onboard even faster by solving secrets management first.
Speaking of secret sprawl, HashiCorp’s acquisition of BluBracket and its advanced secret scanning technology birthed HCP Vault Radar: a tool for finding and analyzing unmanaged secrets. Vault Radar is currently available through early access, and we hope to release it widely in 2024. We’ve been focused on secrets management since the launch of Vault, and one of the most common challenges continues to be the discovery of existing secrets and credential leaks by human users. Radar aims to help with both of these problems by scanning across various collaboration tools to find exposed secrets, both existing and new.
2023 was an especially important year for HashiCorp Boundary, with three releases (0.12, 0.13, and 0.14) that added key features for secure access management. First, we introduced Boundary Enterprise, allowing organizations with strict security and compliance requirements to self-manage their Boundary deployments.
Second, we added SSH session recording, Boundary’s most requested feature. Session recording lets administrators record and play back user actions over remote SSH sessions, allowing teams to meet key regulatory requirements, deter malicious behavior, and remediate threat incidents. Multi-hop worker sessions, passwordless SSH access, and an embedded terminal for the Boundary Desktop app also made a big impact among dozens of notable new features in Boundary.
The enterprise IT world is starting to take notice of Boundary’s modern approach to privileged access management (PAM), with Vault and Boundary being added to Gartner’s Magic Quadrant for privileged access management. We’re building an approach to PAM that wasn’t conceived in an era of on-premises castle-and-moat security perimeters, but instead looks toward the cloud-based future of zero trust architectures. I think it’s a sign we’re on the right track and we’re glad to see that the world’s view of PAM is evolving with us.
HashiCorp Consul's 2023 releases (1.15, 1.16, and 1.17) and the new HCP Consul Central significantly improved observability, scalability, and reliability. Envoy access logging and extensions streamlined Consul onboarding and troubleshooting, while sameness groups optimized multi-cluster operations. Consul also gained locality-aware service mesh routing, which prioritizes local instances for lower latency and reduced costs.
The introduction of HCP Consul Central was pivotal, providing observability and centralizing management for both HashiCorp-managed and self-managed Consul clusters across diverse cloud environments, simplifying global operations.
Consul’s service mesh capabilities for securing service-to-service communication gained several security upgrades. In 1.16, Consul gained JWT authentication for service-to-service traffic. Envoy extensions also gave Consul more options for security with Wasm and external AuthZ Envoy extensions. Consul API gateways also received security upgrades, such as support for JWT-based authentication and authorization.
HashiCorp Nomad’s three releases (1.5, 1.6, and 1.7) continued to enhance the product’s trademark flexibility and simplicity by giving users more ways to make clusters run as efficiently as possible. One of the biggest additions was node pools: a new way to determine which client nodes are eligible to receive workloads. Nomad Enterprise customers gained additional governance on top of node pools, giving Nomad administrators fine-grained control over which users can put work on what machines.
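In practice, a client node is assigned to a pool in its agent configuration, and a job opts into that pool in its jobspec. A minimal sketch, with the pool name, image, and job entirely hypothetical:

```hcl
# client.hcl — agent configuration placing this node in the "gpu" pool
client {
  enabled   = true
  node_pool = "gpu"
}

# training.nomad.hcl — jobspec that only schedules onto "gpu" pool nodes
job "training" {
  datacenters = ["dc1"]
  node_pool   = "gpu"

  group "workers" {
    task "train" {
      driver = "docker"

      config {
        image = "registry.example.com/trainer:latest" # hypothetical image
      }
    }
  }
}
```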
Along with NUMA support and the release of Nomad Pack 0.1, Nomad also gained enhancements to its Vault and Consul integrations, replacing static token management with simpler and more secure dynamic credentials. Production-ready support for the Podman driver also added more flexibility for customers who want to run containers in RedHat environments. Nomad support for distributed locking also makes it easier to run mission-critical applications without the complexity of external leader election.
Along with reliability improvements, Nomad was focused on becoming even more secure. Better credential management was a key theme, as Nomad added single sign-on (SSO), allowing users to sign into Nomad using any OIDC-compliant identity provider (IdP). Nomad also gained the ability to act as an OIDC provider and mint dynamic workload identity tokens that third parties can use to authenticate the identity of Nomad tasks.
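As a sketch of how a task requests one of these workload identity tokens, a jobspec might include an `identity` block like the following (the job, audience, and image names are hypothetical):

```hcl
job "api" {
  group "app" {
    task "server" {
      driver = "docker"

      # Request a signed workload identity token that a third party can
      # verify against Nomad's OIDC discovery endpoint.
      identity {
        name = "vault"       # hypothetical identity name
        aud  = ["vault.io"]  # hypothetical audience claim
        env  = true          # expose the token as an environment variable
        file = true          # also write it to the task's secrets directory
      }

      config {
        image = "registry.example.com/api:1.0" # hypothetical image
      }
    }
  }
}
```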
We introduced a new vision for HashiCorp Waypoint this year, pivoting to focus solely on HCP Waypoint. As we reframed HCP Waypoint to empower platform teams to define golden patterns and workflows for developers, we introduced templates to abstract and standardize application scaffolding and add-ons to install infrastructure dependencies into their Waypoint-defined applications. This new vision points HCP Waypoint toward providing an internal developer platform, which is a trend we’ve seen accelerate as platform teams look to simplify how application teams build and deliver in the cloud.
At HashiConf in October, we announced the private beta for Developer AI, a new AI-powered experience for practitioners using our products that we trained on our APIs, documentation, learn guides, support knowledge base, and more. Users who are new to the products can quickly get answers to questions about key use cases, configurations, and how to get started. Experienced users can ask about advanced scenarios and get references to guides and documentation to dive deeper. The private beta is going on now, with an open beta planned for early 2024. To try it out, sign up today.
2023 was a turning point for generative AI. At HashiCorp, we are excited to power the infrastructure for some of the most innovative applications in this emerging field, from using Terraform to deploy cloud infrastructure to train models, to using Nomad to schedule across large-scale GPU clusters. HashiCorp products are playing an enabling role in unlocking the power of AI.
Beyond AI, we see accelerating demand for infrastructure as code, cloud-native approaches to security, and automation for application delivery.
To our users, customers, partners, and employees, I want to give a heartfelt “thank you” for your contributions to the progress we made this year. We look forward to doing great things with you again in 2024 as we continue to build the future of cloud infrastructure and security together.
Introducing policy as code changes can be challenging for compliance teams because thorough testing is required to ensure policies function correctly. Policies that have syntax or logic errors can halt workspace runs and create significant issues for organizations. To combat this, many organizations use the testing capabilities built into policy as code frameworks like HashiCorp Sentinel and Open Policy Agent (OPA) to unit test their organization policies and catch syntax issues early in the development lifecycle.
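For example, the Sentinel CLI discovers unit tests under `test/<policy-name>/` and runs them with `sentinel test`. A minimal passing test case might look like this sketch, where the policy name and mock file are hypothetical:

```hcl
# test/restrict-instance-type/pass.hcl
# Supplies mock plan data to a hypothetical restrict-instance-type.sentinel
# policy and asserts that its main rule evaluates to true.
mock "tfplan/v2" {
  module {
    source = "mock-tfplan-pass.sentinel" # hypothetical mock data file
  }
}

test {
  rules = {
    main = true # expect the policy to pass against this mock plan
  }
}
```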
Many HashiCorp customers have inquired about best practices for safely implementing policy changes in Terraform Cloud. Traditionally, we have recommended integrating policy changes into a policy set with the Advisory enforcement mode assigned, and then using the Terraform Cloud audit system to track policy status and determine the impact of the policies on their infrastructure. However, a notable challenge with this approach is that to gain a complete understanding of how the policies affect the entire organization, every workspace needs to initiate a run.
With no way to trigger runs across all workspaces at once, customers could wait for a run to occur naturally before uncovering the policy impact or develop custom workflows that wrap the Terraform Cloud API to perform this task. These shortcomings highlighted the need for a more controlled and efficient approach to managing policy changes.
To overcome these challenges, HashiCorp has introduced on-demand policy evaluation for Terraform Cloud. This feature provides a way to manually evaluate policies against a particular workspace without requiring a full plan or apply run, including workspaces not currently in scope of the policy set. This allows policy maintainers to measure the impact of new policies and policy runtime versions, as well as the compliance of resources that don't frequently change, such as identity and access management (IAM) policies, network access control lists (ACLs), security groups, and subnet configurations. Additionally, because all policy evaluations feed into the audit system, users can now easily monitor compliance across the entirety of their Terraform Cloud organization.
The new functionality is available on the Policy Sets page in Terraform Cloud. The page is now broken up into Configure and Evaluate tabs. The Configure tab contains the existing policy set settings. The Evaluate tab contains a new form specifically for on-demand policy evaluation:
With this new feature, HashiCorp continues to set the standard for cloud infrastructure automation, providing users with the tools they need to enforce policies across their infrastructure at scale.
To learn more, check out the on-demand policy evaluation documentation. Start defining policies for your infrastructure today with the HashiCorp Sentinel or Open Policy Agent (OPA) policy as code frameworks.
You can get started with Terraform Cloud for free to begin provisioning and managing your infrastructure in any environment. And don’t forget to link your Terraform Cloud and HashiCorp Cloud Platform (HCP) accounts together for a seamless sign-in experience.
Amazon CodeWhisperer provides code suggestions based on large language models (LLMs) trained on billions of lines of code, including Amazon's internal code and IaC config files as well as open source code. To generate high-quality Terraform suggestions, HashiCorp and Amazon CodeWhisperer teams worked together to source sample Terraform modules and configurations written in HashiCorp Configuration Language (HCL). The teams collaborated on providing model validations, working to ensure the output generated by CodeWhisperer meets the requirements of Terraform practitioners.
CodeWhisperer and Terraform make a powerful combination, as HCL has once again been confirmed as a high-growth programming language by Octoverse, indicating that operations and IaC work are gaining prominence among developers. Specifically, HCL adoption has grown 36% year-over-year, demonstrating that developers are increasingly using declarative languages to manage infrastructure deployments.
To use CodeWhisperer with Terraform, you simply install the latest AWS Toolkit plugin in your integrated development environment (IDE) of choice. CodeWhisperer automatically detects when customers write a new Terraform configuration file (*.tf file) and generates code suggestions using comments.
Here are a few examples of what CodeWhisperer can do:
Let’s start with a simple example: suppose you want to create multiple Amazon EC2 instances using the latest Amazon Linux 2 machine image. You would start with a simple prompt to configure Terraform Cloud, followed by instructions to create the instance by looking up the Amazon Machine Image (AMI) for Amazon Linux 2. CodeWhisperer will provide suggestions for each resource block. You can cycle through alternative suggestions and press the tab key to accept one:
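The resulting suggestion might resemble the following sketch (the values and names are illustrative, not CodeWhisperer's literal output):

```hcl
# Look up the latest Amazon Linux 2 AMI published by Amazon.
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Create three EC2 instances from that AMI.
resource "aws_instance" "web" {
  count         = 3
  ami           = data.aws_ami.amazon_linux_2.id
  instance_type = "t3.micro"

  tags = {
    Name = "web-${count.index}"
  }
}
```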
CodeWhisperer is also trained to understand advanced HCL syntax and expressions. For example, you could ask it to add a variable validation requiring bucket names of 10-20 characters with no special characters. CodeWhisperer can generate suggestions as shown here:
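A validation along those lines might look like this sketch (the variable name and exact character rules are illustrative):

```hcl
variable "bucket_name" {
  type        = string
  description = "S3 bucket name: 10-20 characters, lowercase letters, digits, and hyphens only."

  validation {
    # can(regex(...)) returns false when the pattern does not match,
    # failing the validation with the message below.
    condition     = can(regex("^[a-z0-9-]{10,20}$", var.bucket_name))
    error_message = "Bucket name must be 10-20 characters with no special characters."
  }
}
```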
Another example is to create an EC2 Security Group and populate the ingress rules using dynamic blocks expression and the existing locals:
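A configuration in that spirit might look like the following sketch (the rule set and group name are hypothetical):

```hcl
locals {
  # Existing locals describing the desired ingress rules.
  ingress_rules = {
    http  = { port = 80, cidr = "0.0.0.0/0" }
    https = { port = 443, cidr = "0.0.0.0/0" }
  }
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # Generate one ingress block per entry in local.ingress_rules.
  dynamic "ingress" {
    for_each = local.ingress_rules
    content {
      description = ingress.key
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = [ingress.value.cidr]
    }
  }
}
```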
When writing Terraform code, either by hand or by leveraging an AI-based coding companion such as CodeWhisperer, errors are a fact of life. If the generated Terraform has missing artifacts or has validation errors, developers often find themselves context-switching between their editor and the CLI to validate code, leading to frustration and reducing productivity.
Enhanced editor validation in the Terraform extension for Visual Studio Code automatically validates Terraform code as early as possible, creating an enhanced, integrated authoring experience by highlighting errors and providing guidance to help resolve issues quickly.
Examples of these new validations include:
Validation errors are immediately identified within the Terraform extension for Visual Studio Code; no context switching is needed.
You can start using Terraform code generation in Amazon CodeWhisperer using AWS’ getting started resources. To complement your generative workflows, you can install the Terraform extension for Visual Studio Code and learn about enhancements recently added to the extension.
Special thanks to Kevon Mayers, Kalen Arndt, and Sean Doyle whose work behind the scenes made all of this possible.
S3 Express is a new bucket type, built from the ground up to deliver single-digit millisecond response times for the most frequently accessed datasets. Organizations with compute-intensive big data workloads such as autonomous vehicle data, financial risk modeling, real-time online advertising, and machine-learning training and inference can easily provision the new bucket type using Terraform.
Three key features support S3 Express’ performance goals:
A low-latency zonal storage class. S3 Express optimizes for speed by replicating and storing data within the same Availability Zone as your compute workloads.
A new bucket type with a hierarchical namespace. This new bucket type has a hierarchical namespace and stores object key names in a directory-like manner, as opposed to the flat key structure of traditional S3 buckets.
A new fast-authorization API. S3 Express introduces a new session-based authorization capability that reduces the latency associated with S3 request authorizations. This new capability can be used to create and periodically refresh your connection sessions to the new bucket type.
To set up S3 Express in the Terraform AWS provider, use the new aws_s3_directory_bucket resource. You can also use these existing resources to manage the new S3 Express buckets:
aws_s3_bucket_policy
aws_s3_object
To try out this feature, you need:
To create an S3 Express bucket, apply the following configuration:
```hcl
resource "aws_s3_directory_bucket" "example" {
  # S3 directory bucket names must follow the format
  # [bucket-name]--[az-id]--x-s3
  # where [az-id] is the Availability Zone ID:
  # https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#az-ids
  # You can use the aws_availability_zone data source to obtain the AZ ID.
  bucket = "example--usw2-az2--x-s3"

  location {
    name = "usw2-az2"
  }

  # All objects should be deleted from the bucket when the bucket is destroyed
  # so that the bucket can be destroyed without error.
  force_destroy = true
}
```
Here is an example configuration of an S3 Express bucket policy and object:
```hcl
data "aws_partition" "current" {}

data "aws_caller_identity" "current" {}

data "aws_iam_policy_document" "example" {
  statement {
    effect = "Allow"

    actions = [
      "s3express:*",
    ]

    resources = [
      aws_s3_directory_bucket.example.arn,
    ]

    principals {
      type        = "AWS"
      identifiers = ["arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:root"]
    }
  }
}

resource "aws_s3_bucket_policy" "example" {
  bucket = aws_s3_directory_bucket.example.bucket
  policy = data.aws_iam_policy_document.example.json
}

# aws_s3_object is used with directory buckets just like general purpose buckets.
# Note: tags are not supported for objects in directory buckets.
resource "aws_s3_object" "example" {
  bucket = aws_s3_directory_bucket.example.bucket
  key    = "example"
  source = "path/to/file"
}
```
As the Terraform AWS provider download count tops 2 billion, AWS and HashiCorp continue to develop new integrations to help customers work faster, use more services and features, and enjoy developer-friendly ways to provision cloud infrastructure. Launch-day support of the Amazon S3 Express One Zone storage class in the Terraform AWS provider allows practitioners to immediately begin managing this new offering in their existing Terraform workflows. Here are two main benefits of Terraform’s launch-day support for S3 Express:
To learn more about Amazon S3 Express One Zone storage class support in Terraform, please refer to the documentation. To learn the basics of Terraform using the AWS provider, follow the hands-on tutorials for getting started with Terraform on AWS on our developer education platform.
Please share any bugs or enhancement requests with us via the Terraform AWS provider repository on GitHub. We look forward to your feedback and want to thank you for being such a great community!
If you are completely new to Terraform, sign up for Terraform Cloud and get started using the Free offering today.
Earlier this month, we launched a native integration with Terraform Cloud in our VS Code extension to better support professional individuals and teams who rely on Terraform Cloud for standardizing and managing their infrastructure automation and lifecycle.
The core capabilities of the new Terraform Cloud integration focus on providing a read-only view of workspaces and runs in order to reduce the amount of window- and context-switching you need to do. Instead of opening up the Terraform Cloud web interface in your browser to look up why a run failed and then switching back to VS Code to look for the configuration that caused the error, you can now view the apply log side-by-side with your code, allowing you to get back to debugging faster.
To see the full list of features as we continue to add to them over the coming months, check out the extension overview on the Visual Studio Marketplace. And read on to learn more about enhancements you may not know about.
For example, installing HashiCorp's Terraform extension for VS Code also gives you access to our Module and Provider Explorer, which can be used regardless of whether or not you have Terraform Cloud. You can access this functionality by clicking on the HashiCorp Terraform icon in the activity bar:
The Module Explorer lists Terraform modules used in the current open folder (root module) in the Explorer Pane, or you can drag it to the Secondary Side Bar pane to keep it in view. Each item shows an icon indicating where the module comes from (local filesystem, Git repository, or Terraform Registry). If the module comes from the Terraform Registry, a link to open the documentation in a browser is provided.
The Provider Explorer lists all Terraform providers used in the current open document in the Explorer pane, or you can drag it to the Secondary Side Bar pane for an expanded view.
For some time, we have been tracking reports of poor performance in the Terraform language server, which powers the Terraform extension for VS Code and can also be used to provide IDE features in LSP-compatible editors like Sublime Text, Neovim, and others.
When you open up your editor, the Terraform language server does a lot of work in the background to understand the code you're working on. It has an indexing process that finds all the Terraform files and modules in your current working directory, parses them, and builds an understanding of all the interdependencies and references. It holds this information inside an in-memory database, which is updated as you change your files. If you open a directory with many hundreds of folders and files, this may consume more CPU and memory than intended.
v0.31.5 of the Terraform language server and v2.27.2 of the Terraform VS Code extension (both released in September 2023) include fixes that many users have reported dramatically improve performance. If you've previously tried the extension or the language server and encountered problems with performance but have not yet tried these updates, we encourage you to check them out and let us know if you see a difference.
We know these fixes have not yet addressed all of the performance problems that users encounter, and we continue to investigate and trial possible solutions. If you continue to experience poor performance, please don't hesitate to file an issue on GitHub. The more log output and reproducible configuration you provide, the better we are able to diagnose and address the root cause.
In the coming months, we plan to continue to add features to the Terraform Cloud integration, improve the enhanced validation feature announced at HashiConf last month, complete the Terraform language server's understanding of all Terraform language features (you can track our progress here and here), and implement additional performance enhancements.
Another goal we hope to achieve next year is to move toward a model of Day 0 support for new language features as they come out: as functionality gets added to Terraform, we want our VS Code extension (and other editors powered by the Terraform language server) to provide a best-in-class experience when using those new features.
Whether you’re new to Terraform or an advanced practitioner, we’d love to hear how we can improve the experience of authoring Terraform configuration inside of our VS Code extension or other editors. Please file any bugs you encounter, let us know about your feature requests, and share your questions, thoughts, and experiences in the Terraform Editor Integrations discussion forum.