
Scale infrastructure with new Terraform and Packer features at HashiConf 2025

HashiCorp Terraform and Packer continue to simplify hybrid cloud infrastructure with new capabilities that help customers scale and secure their infrastructure while optimizing costs.

Organizations struggle to manage infrastructure at scale because of fragmented workflows, steep learning curves, compliance burdens, cost overruns, and security risks. These challenges slow down developer productivity, create operational inefficiencies, and increase risk exposure across hybrid and multi-cloud environments.

HashiCorp’s Infrastructure Lifecycle Management (ILM) portfolio helps customers streamline how they build, manage, and secure infrastructure at scale. By removing complexity, improving visibility, and automating key tasks, we enable teams to move faster, control costs, and reduce risk — so they can focus on delivering business value rather than managing infrastructure.

At HashiConf, we are announcing several features to help simplify infrastructure lifecycle management and accelerate time-to-value:

  • Terraform Stacks: Now generally available, Stacks reduce the time and overhead of managing infrastructure at scale.
  • Terraform search: Discover and import resources in bulk more efficiently and accurately.
  • Terraform MCP server: Increase developer productivity by connecting your HCP Terraform or Terraform Enterprise account to your AI assistant.
  • Azure Copilot with Terraform integration: Simplify the adoption and use of Terraform without requiring deep Terraform knowledge.
  • Terraform run task for Cloudability Governance: Optimize cost with greater visibility so teams make informed, proactive decisions and stay aligned with budgets before deployment.
  • Terraform actions with Ansible integration: Codify Day 2 infrastructure operations, such as triggering an Ansible playbook, directly in Terraform.
  • Terraform Hold Your Own Key (HYOK): Take ownership of the encryption keys used to access sensitive data in HCP Terraform.
  • Terraform pre-written Sentinel policies: Enforce NIST SP 800-53 Rev 5 controls across AWS environments with a new set of pre-written Sentinel policies.
  • Packer SBOM storage and package visibility: Gain further visibility into the components that make up image artifacts to reduce risk across cloud and on-prem environments.

»Terraform Stacks

Last October, we announced the public beta of Terraform Stacks, a new way to simplify infrastructure provisioning and management at scale, reducing the time and overhead of managing infrastructure. Stacks empower users to rapidly create and modify consistent infrastructure setups with differing inputs, all with one simple action. Stacks also eliminate the need to manually track and manage cross-configuration dependencies because multiple Terraform modules can be organized and deployed together in a Stack.

Today, we’re excited to announce the general availability of Terraform Stacks for all new HCP Terraform plans based on resources under management (RUM). Customers can use Stacks for production workloads with backward compatibility guarantees on the APIs, meaning customers can safely integrate the APIs and unified CLI experience into their CI/CD pipelines. Other improvements include:

  • A unified CLI experience so developers can create, manage, and iterate on Terraform Stacks directly from the command line.
  • Expanded version control system (VCS) support to all major VCS providers, including GitHub, GitLab, Azure DevOps Services, and Bitbucket.
  • Support for self-hosted HCP Terraform agents to meet higher security and compliance requirements for private or on-premises infrastructure.

HCP Terraform Premium tier customers can now take advantage of the custom deployment groups feature with auto-approve checks. This helps address the challenge of manually managing and approving every deployment as your infrastructure grows. Customers can logically group deployments by environment, team, or application and configure auto-approve checks, offering a more flexible way to manage deployments effectively at scale.

Example: The Kubernetes and namespace components are repeated across three deployment groups with multiple instances for development, staging, and production environments.
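
In configuration terms, a setup like this can be sketched roughly as follows. The example is illustrative only: the module sources, variable names, and file names are assumptions, and provider wiring is omitted, so refer to the Terraform Stacks documentation for the complete file layout.

```hcl
# components.tfcomponent.hcl (illustrative file name): declare the components once.
variable "environment" {
  type = string
}

variable "node_count" {
  type    = number
  default = 3
}

component "kubernetes" {
  source = "./modules/kubernetes-cluster"   # hypothetical module
  inputs = {
    environment = var.environment
    node_count  = var.node_count
  }
}

component "namespace" {
  source = "./modules/namespace"            # hypothetical module
  inputs = {
    environment = var.environment
  }
}

# deployments.tfdeploy.hcl (illustrative file name): instantiate the same
# components per environment with differing inputs.
deployment "development" {
  inputs = {
    environment = "dev"
    node_count  = 1
  }
}

deployment "staging" {
  inputs = {
    environment = "staging"
    node_count  = 2
  }
}

deployment "production" {
  inputs = {
    environment = "prod"
    node_count  = 5
  }
}
```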

To learn more about Terraform Stacks and their newest features, refer to the Terraform Stacks documentation.

»Terraform search

Importing resources into Terraform in bulk can be manual, error-prone, and time-consuming, with a fragmented workflow. To discover resources, users must identify them manually in web consoles or through provider CLIs. Once discovered, the identifier for each resource needs to be copied into import {} blocks in Terraform one at a time.

Today, we are introducing the public beta of a new end-to-end search workflow for HCP Terraform. This search workflow leverages resource identity (introduced in Terraform 1.12) to improve bulk import efficiency and accuracy at scale. Users can now discover and import resources in bulk, without hopping between the web console, custom scripts, and editors. A dedicated workflow allows users to bypass complicated existing workarounds to identify resources.

With the help of resource identities, customers can import resources in bulk without worrying about duplicated management of resources that could cause drift and unnecessary Terraform RUM costs.
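
Under the hood, the building block this workflow produces is still Terraform’s declarative import block, which pairs an existing resource with the configuration that will manage it. A minimal sketch, assuming a hypothetical AWS S3 bucket:

```hcl
# Import an existing bucket into management (names are hypothetical).
import {
  to = aws_s3_bucket.logs
  id = "my-team-log-bucket"
  # With resource identity (Terraform 1.12+), providers can also match
  # resources by a structured identity instead of a raw ID string.
}

resource "aws_s3_bucket" "logs" {
  bucket = "my-team-log-bucket"
}
```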

The beta launch for this workflow includes support for the AWS and Azure providers. To learn more, refer to our search and import documentation.

»Terraform MCP server

Back in May, we announced the public beta of our Terraform MCP server: a lightweight, local server that connects AI developer tools directly to Terraform, enabling accurate, validated, and context-aware interactions.

Today, as we continue to add capabilities that increase developer productivity and reduce human error through AI assistants, we’re excited to share that developers can now authenticate the Terraform MCP server with their HCP Terraform or Terraform Enterprise account. This allows them to provision infrastructure without context switching: most actions can be completed within their IDE or AI agent. From provisioning infrastructure to updating resources, developers can draw on their HCP Terraform accounts, private registries, and public Terraform registry data to get module recommendations from their private registry, then create, run, and update workspaces directly.

Install the Terraform MCP server to get started. For example prompts, see the Terraform MCP server documentation.

»Azure Copilot with Terraform integration

While infrastructure as code (IaC) has simplified infrastructure management, the initial learning curve can be steep. For example, engineers may struggle with Terraform’s HCL language or with setting up and provisioning infrastructure through HCP Terraform. Scaling infrastructure often requires constant context-switching between tools and environments, which slows progress and increases the risk of human error. These challenges hinder customer adoption and reduce developer velocity.

AI coding assistants can be very helpful in smoothing out learning curves and reducing context switching for developers, which is why we’re excited to announce the public beta of Azure Copilot with Terraform integration using the Terraform MCP server. For those unfamiliar, Azure Copilot is an AI-powered assistant for managing Microsoft Azure services. It helps users design, operate, optimize, and troubleshoot their Azure cloud environments.

This integration with HCP Terraform streamlines the adoption and use of Terraform in Azure, enabling engineers to work more efficiently without requiring deep Terraform expertise. Azure Copilot guides customers from onboarding to deployment of Azure resources using HCP Terraform. Developers can now use the Azure AI assistant to automate manual, repetitive tasks such as retrieving resource and module information, and creating/updating workspaces, all while reducing errors when writing their Terraform configurations.

To get started, check out the Azure VS Code portal to begin using Copilot to generate your Terraform configuration.

»Terraform run task for Cloudability Governance

Common pitfalls of cloud spending include out-of-control provisioning, reactive cost management, and a disconnect between engineering and finance. Application developers lack the immediate feedback needed to understand the cost of their infrastructure choices, learn from them, and build more cost-effective options.

Today we’re excited to announce a new Terraform run task for Cloudability Governance, which delivers cost estimates and resource recommendations. The cost impact of infrastructure changes, along with recommendations, is displayed consistently on the HCP Terraform run details page, regardless of whether a run is initiated from the UI, the CLI, or VCS.

Developers can see the cost impact of infrastructure changes, so they can provision in line with cost policies and see the details of any cost-quota violations. Organizations can integrate personalized cost estimates and financial guardrails to help teams make informed, proactive decisions and stay aligned with budgets before deployment. This allows organizations to optimize cost without sacrificing developer agility.
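
For teams that manage HCP Terraform itself as code with the tfe provider, wiring up a run task might look roughly like the sketch below. The organization, workspace, and endpoint values are placeholders; Cloudability supplies the actual endpoint URL and HMAC key:

```hcl
# Register the Cloudability Governance endpoint as an organization-level run
# task, then attach it to a workspace (all identifiers here are placeholders).
resource "tfe_organization_run_task" "cloudability" {
  organization = "my-org"
  name         = "cloudability-governance"
  url          = "https://cloudability.example.com/terraform-run-task"
  hmac_key     = var.cloudability_hmac_key
  enabled      = true
}

resource "tfe_workspace_run_task" "cloudability" {
  workspace_id      = tfe_workspace.app.id    # assumes an existing workspace resource
  task_id           = tfe_organization_run_task.cloudability.id
  enforcement_level = "advisory"              # or "mandatory" to block non-compliant runs
}
```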

»Terraform actions

Terraform is widely adopted for Day 0/1 CRUD operations, offering a declarative, reproducible, and version-controlled approach to infrastructure provisioning. However, effectively managing the full lifecycle of infrastructure extends beyond initial provisioning.

While HCP Terraform already serves as a powerful tool for ongoing infrastructure management, providing capabilities such as drift detection, continuous validation, and standardized revocation, a number of Day 2 operational tasks are still handled outside of Terraform. For example, in AWS environments managed by Terraform, teams often need to switch to the AWS console to manually invoke Lambda functions, create invalidation requests for CloudFront’s cache, or send alerts and notifications via SNS. Relying on external CLI tools or manual scripts for these Day 2 tasks can leave users with complex, disjointed workflows, which — until now — could not be unified in Terraform.

To address these challenges, we are excited to announce the public beta of Terraform actions. Actions introduce a way to codify and drive Day 2 infrastructure operations by triggering third-party tools outside of Terraform. Built directly into providers, actions provide preset operations that extend Terraform’s automation capabilities for common Day 2 tasks. These actions can be invoked before or after a resource's CRUD events or ad hoc via the CLI. By codifying more Day 2 operations, organizations can reduce operational costs and accelerate delivery by automating previously manual, error-prone tasks.

Actions provide two major benefits for Terraform users:

  • Standardized Day 2 infrastructure operations: Module authors can define Day 2 infrastructure operations in code alongside the rest of their infrastructure — offering a clear association between Day 2 actions and managed resources — and optionally invoke the operations with lifecycle triggers.
  • Native workflow: By bringing more Day 2 infrastructure operations within Terraform, users can extend its utility by unifying more operations in one control plane. This ensures consistency and brings teams closer to having a single source of truth for all infrastructure.

For details on how to configure and invoke actions, please refer to the actions documentation.
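
As a rough illustration of the configuration shape, the sketch below declares a provider-defined action and triggers it after a resource is created or updated. The action type, its arguments, and the trigger events here are illustrative rather than a specific provider's documented schema, so consult the actions documentation and your provider's docs for the exact syntax:

```hcl
# Hypothetical provider-defined action that invokes a Lambda function.
action "aws_lambda_invoke" "refresh_cache" {
  config {
    function_name = "refresh-edge-cache"   # placeholder function name
  }
}

resource "aws_s3_object" "site_content" {
  bucket = "my-site-bucket"                # placeholder bucket
  key    = "index.html"
  source = "dist/index.html"

  lifecycle {
    # Invoke the action automatically after this resource is created or updated.
    action_trigger {
      events  = [after_create, after_update]
      actions = [action.aws_lambda_invoke.refresh_cache]
    }
  }
}
```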

Terraform actions also mark our first step toward a truly unified Terraform and Ansible infrastructure workflow, with the ability to invoke an event-driven Ansible playbook. To learn more about how Terraform and Ansible are working together to simplify infrastructure lifecycle management, check out our deep-dive blog: Terraform & Ansible: Unifying infrastructure provisioning and configuration management.

»Terraform Hold Your Own Key (HYOK)

As customers migrate infrastructure to the cloud, there is a growing demand for increased control over secret access. Secrets are sensitive, discrete pieces of information, such as credentials, encryption keys, and authentication certificates, that your applications need to run consistently and securely. While Terraform already provides a strong foundation by standardizing infrastructure security best practices, we remain committed to continuously improving our security features to help customers meet the increasing demands of hybrid-cloud environments.

In Terraform, an artifact refers to a file generated as part of the infrastructure provisioning process. Two of the most common are state files and plan files, which are both used to store crucial information about your managed infrastructure. However, since Terraform artifacts can also contain sensitive information such as secrets in plaintext format, they can introduce both internal and external risks, causing apprehension among security teams. While Terraform artifacts are encrypted by default, customers sought additional control over this encryption, particularly those with stringent compliance needs. These concerns called for a new approach to handling these sensitive artifacts to help customers ensure they are secure before they are uploaded to HCP Terraform.

To address this, Terraform Hold Your Own Key (HYOK) is now generally available in HCP Terraform. HYOK is a security principle that gives organizations ownership of the encryption keys used to access their sensitive data. With HYOK, organizations can take ownership of secret access by securing and encrypting Terraform artifacts before they are uploaded to HCP Terraform. This new control over state encryption means users retain complete visibility into the state or plan file without any plaintext secrets that could introduce risk.

HYOK Encryption dashboard

Manage key configurations in the HYOK encryption tab.

For more information on Hold Your Own Key (HYOK), check out our HYOK release blog.

»Terraform pre-written Sentinel policies for NIST SP 800-53

Another way Terraform can aid in securing infrastructure workflows is with policy as code. HashiCorp Sentinel is an embeddable policy as code framework that provides logic-based policy enforcement over configurations for HashiCorp Terraform and other HashiCorp products. This approach lets organizations treat policies like application code, meaning the code can be version-controlled, audited, tested, and understood by stakeholders across the organization.

While Sentinel can be used as a powerful tool to ensure cloud governance at scale, we understand that adopting policy as code workflows may be a daunting and time-consuming process. This is especially true for organizations that lack the resources and expertise to write policies from scratch. To address this, AWS and HashiCorp have teamed up to create a set of pre-written policy sets that offer standardized governance rules, integrate with Terraform workflows, and proactively validate compliance.

Building on our recent release of pre-written Sentinel policies for the Center for Internet Security (CIS) benchmarks and AWS Foundational Security Best Practices (FSBP), we’re proud to announce the release of a new set of pre-written Sentinel policies for AWS, specifically designed to help organizations enforce NIST SP 800-53 Rev 5 controls across AWS environments. With this release, we have delivered 350+ policies enabling a secure-by-default approach. These policy sets provide a strong starting point, significantly reducing manual effort and enhancing the security posture of AWS infrastructure.
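
As a reminder of what policy as code looks like in practice, a policy set pairs Sentinel policy files with an HCL configuration that sets their enforcement levels. The snippet below is a generic sketch with made-up policy names, not the actual contents of the registry policy set:

```hcl
# sentinel.hcl: declares which policies run and how strictly they are enforced
# (policy names here are illustrative).
policy "s3-require-bucket-encryption" {
  source            = "./policies/s3-require-bucket-encryption.sentinel"
  enforcement_level = "advisory"        # warn only
}

policy "iam-require-mfa-for-console-users" {
  source            = "./policies/iam-require-mfa-for-console-users.sentinel"
  enforcement_level = "hard-mandatory"  # block the run on failure
}
```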

Learn how to deploy pre-written Sentinel policies.

Check out the new policy set in the Terraform Registry today.

»HCP Packer SBOM storage and package visibility

In today’s hybrid-cloud world, system images (such as AMIs for Amazon EC2, virtual machines, Docker containers, and more) are the foundation of modern computing infrastructure. They also sit at the very start of the software security supply chain. As organizations increasingly depend on a complex software supply chain that includes both third-party and in-house software packages and dependencies, the need for comprehensive visibility into these components has never been more critical. However, today's platform teams often build and deploy machine artifacts to production without a clear understanding of their internal components. This makes it difficult to identify vulnerable dependencies, track outdated libraries, and ensure images are meeting compliance requirements.

A popular solution to address this challenge is to keep a record of the components with a software bill of materials (SBOM) for each artifact. SBOMs are like an ingredient list on a food item; they list the internal parts that make up the image. Earlier this year, we introduced a new capability in HCP Packer, our managed artifact registry for tracking and governing images, which empowers platform teams to seamlessly generate and securely store SBOMs. We are excited to announce that this feature, SBOM storage, is now generally available.
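
As a sketch of how this fits into a Packer template, the build below generates an SBOM on the machine being provisioned and hands it to HCP Packer. The source block, scanner, and file paths are assumptions, and the hcp-sbom provisioner attributes shown here should be checked against the SBOM documentation:

```hcl
# Illustrative Packer build: produce an SBOM inside the machine being built,
# then store it in HCP Packer alongside the artifact metadata.
build {
  sources = ["source.amazon-ebs.base"]   # assumes a source block defined elsewhere

  provisioner "shell" {
    inline = [
      # Assumes a scanner such as syft is installed on the build machine.
      "syft dir:/ -o spdx-json > /tmp/sbom.spdx.json"
    ]
  }

  # Upload the generated SBOM to HCP Packer (attribute names may differ;
  # see the SBOM documentation).
  provisioner "hcp-sbom" {
    source = "/tmp/sbom.spdx.json"
  }
}
```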

Building on these efforts, we are also excited to announce the public beta of package visibility in HCP Packer. Package visibility surfaces essential package information for software artifacts directly in the HCP Packer UI. Users can now gain key insights such as package name and package version, increasing transparency into their software’s composition. By surfacing package details directly in HCP Packer, package visibility enables platform teams to make faster, better-informed decisions while ensuring security and compliance. With these two enhancements, users gain further transparency into the components that make up their image artifacts across cloud and on-prem environments in a single, centralized location.

View package details directly in the HCP Packer UI.

For more information, please refer to our SBOM documentation and our Track artifact package metadata tutorial to learn how to create and download SBOMs.

»Get started today

From securing infrastructure before deployment to streamlining Day 2+ operations at scale, these new HashiConf announcements reflect our continued commitment to simplifying Infrastructure Lifecycle Management and helping organizations do cloud right at scale.

You can try many of these new features now. If you are new to our ILM products, sign up for an HCP account to get started today, and check out our tutorials. HCP Terraform includes a $500 credit that lets users quickly get started and experience the features included in every plan, including HCP Terraform Premium. Contact our sales team if you’re interested in trying our self-managed offerings of Terraform and Nomad.
