Terraform security: 5 foundational practices
Learn about five practices to securely write, apply, and manage Terraform configuration and state.
What security practices should you keep in mind as you write and share Terraform configuration? This post discusses five important practices to ensure that your Terraform configuration remains secure. These practices range from verification of modules and providers to limiting access to state and credentials.
» 1. Verify modules and providers
Modules and providers behave as external dependencies for a Terraform configuration, so handle them with the same attention that you’d give to software libraries or artifacts. Verifying the integrity, source, and version of providers and modules ensures that you do not download dependencies with unapproved configurations or, even worse, malicious code. As a general approach, clearly define the source and version of the providers and modules approved for use.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.98.0"
    }
  }
}
» Use a private registry
Many organizations set up a private registry to manage and control access to approved providers and modules through version control. If you have an existing artifact repository used to store container images and other third-party dependencies, reference the repository using a filesystem or network mirror for providers and implement the module registry API to create a minimal registry for modules.
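As a minimal sketch, you can point Terraform at a network mirror through the CLI configuration file (for example, ~/.terraformrc); the mirror URL below is a placeholder for your artifact repository:

```hcl
# Terraform CLI configuration (e.g. ~/.terraformrc) — a sketch, not a
# complete setup. Replace the URL with your artifact repository's mirror.
provider_installation {
  network_mirror {
    url     = "https://artifacts.example.com/terraform-providers/"
    include = ["registry.terraform.io/*/*"]
  }

  # Prevent falling back to the public registry for mirrored providers.
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
```

With this configuration, terraform init resolves matching providers only through the mirror, so unapproved provider downloads fail fast.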
You can use public modules developed by partners and community members or create your own, but you should review each version of a public module before adding it to a private registry. Module consumers can then reference the approved module and version in the private registry. Conducting due diligence on public modules before approving them for production use ensures that the modules you provide meet your organization’s standards.
» Pin module versions
When importing modules, Terraform does not verify their cryptographic signatures. Ideally, you should store and use modules in a Terraform registry so you can pin module versions. This ensures that your Terraform configuration only uses approved module versions. Other source types require additional URL parameters to pin module versions without using the version attribute.
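As a sketch, a registry-hosted module pins with the version argument, while a Git source pins with a ref query parameter (the source paths below are placeholders):

```hcl
# Registry module: pin with the version argument.
module "network" {
  source  = "app.terraform.io/example-org/network/aws" # placeholder private registry path
  version = "2.1.0"
}

# Git source: the version argument is not supported, so pin to a
# specific tag (or commit SHA) with the ref URL parameter instead.
module "network_git" {
  source = "git::https://example.com/network.git?ref=v2.1.0"
}
```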
» Include dependency lock file
Terraform uses a trust-on-first-use approach to verify providers. When you initialize a Terraform configuration, it creates a dependency lock file with a list of provider versions and checksums. The dependency lock file automatically tracks and verifies the checksums and versions from a public or private Terraform registry. Here’s an example lock file:
# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.

provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.98.0"
  constraints = "~> 5.98.0"
  hashes = [
    "h1:neMFK/kP1KT6cTGID+Tkkt8L7PsN9XqwrPDGXVw3WVY=",
    "zh:23377bd90204b6203b904f48f53edcae3294eb072d8fc18a4531c0cde531a3a1",
    "zh:2e55a6ea14cc43b08cf82d43063e96c5c2f58ee953c2628523d0ee918fe3b609",
    "zh:4885a817c16fdaaeddc5031edc9594c1f300db0e5b23be7cd76a473e7dcc7b4f",
    "zh:6ca7177ad4e5c9d93dee4be1ac0792b37107df04657fddfe0c976f36abdd18b5",
    "zh:78bf8eb0a67bae5dede09666676c7a38c9fb8d1b80a90ba06cf36ae268257d6f",
    "zh:874b5a99457a3f88e2915df8773120846b63d820868a8f43082193f3dc84adcb",
    "zh:95e1e4cf587cde4537ac9dfee9e94270652c812ab31fce3a431778c053abf354",
    "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425",
    "zh:a75145b58b241d64570803e6565c72467cd664633df32678755b51871f553e50",
    "zh:aa31b13d0b0e8432940d6892a48b6268721fa54a02ed62ee42745186ee32f58d",
    "zh:ae4565770f76672ce8e96528cbb66afdade1f91383123c079c7fdeafcb3d2877",
    "zh:b99f042c45bf6aa69dd73f3f6d9cbe0b495b30442c526e0b3810089c059ba724",
    "zh:bbb38e86d926ef101cefafe8fe090c57f2b1356eac9fc5ec81af310c50375897",
    "zh:d03c89988ba4a0bd3cfc8659f951183ae7027aa8018a7ca1e53a300944af59cb",
    "zh:d179ef28843fe663fc63169291a211898199009f0d3f63f0a6f65349e77727ec",
  ]
}
Include the dependency lock file as part of version control. This ensures that any subsequent Terraform run uses trusted providers. If you install providers from a filesystem or network mirror, use the terraform providers lock command to pre-populate checksums in the dependency lock file, taking into account which operating systems you need to support. Each time Terraform initializes, it verifies providers against the recorded checksums and verifies the cryptographic signatures of providers downloaded from Terraform registries.
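For example, you can record checksums for every platform your team and CI runners use; the platform names below are illustrative:

```shell
# Pre-populate the lock file with checksums for each target platform
# so terraform init succeeds on all of them.
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_arm64 \
  -platform=windows_amd64
```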
» 2. Control access to cloud service providers and APIs
Terraform connects to cloud service providers and other service APIs based on provider definitions. Avoid hard-coding credentials in provider configurations. Instead, pass them using variables marked as sensitive or environment variables to avoid storing them in state or outputting them in Terraform logs.
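As a minimal sketch, mark credential variables as sensitive and let the provider read credentials from the environment (the variable name is illustrative):

```hcl
# Sensitive variables are redacted in plan output and logs.
variable "db_password" {
  type      = string
  sensitive = true
}

# The AWS provider reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# from the environment, so no credentials appear in the configuration.
provider "aws" {
  region = "us-west-2"
}
```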
» Configure least-privilege access for providers
Besides protecting sensitive variables, ensure the credentials used for cloud service providers or other service APIs have least-privilege access. For example, you may want Terraform to create an Amazon S3 bucket. If you want the user to only access a specific region’s resources, attach a policy to the AWS credentials that Terraform uses that only has permissions to create, read, update, and delete a specific bucket in that specific region. Here’s how that would look in an example:
data "aws_iam_policy_document" "terraform_s3" {
  statement {
    actions = [
      "s3:*",
    ]

    resources = [
      "arn:aws:s3:::${var.s3_bucket_name}-*",
      "arn:aws:s3:::${var.s3_bucket_name}-*/*",
    ]

    condition {
      test     = "StringEquals"
      variable = "aws:RequestedRegion"
      values = [
        "us-west-2"
      ]
    }
  }
}

resource "aws_iam_policy" "terraform_s3" {
  name        = "terraform-s3"
  description = "Allow Terraform to create, read, update, and delete a specific S3 bucket"
  policy      = data.aws_iam_policy_document.terraform_s3.json
}
If you want to create additional resources, update the policy with access to other AWS services and regions. Fine-grained access control to cloud service providers and other APIs ensures that only approved Terraform runs have elevated privileges to configure IAM policies or other administrative services.
» Consider separate credentials for Terraform plan and apply
Organizations in highly regulated industries may have additional least-privilege requirements that call for separate credentials between the Terraform plan and apply stages. For example, running terraform plan generally requires only read-only access to API services, while running terraform apply requires read and write access. As a result, some organizations set up one set of credentials with limited privileges to read resources during the plan stage and a different set of credentials to modify resources during the apply stage.
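One way to sketch this in AWS is with two IAM roles: a plan role attached to the AWS-managed ReadOnlyAccess policy and an apply role with broader permissions. The role names and trust-policy variable below are placeholders:

```hcl
# Read-only role assumed during the plan stage.
resource "aws_iam_role" "terraform_plan" {
  name               = "terraform-plan"
  assume_role_policy = var.ci_assume_role_policy # trust policy for your CI identity
}

resource "aws_iam_role_policy_attachment" "plan_read_only" {
  role       = aws_iam_role.terraform_plan.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}

# Broader role assumed only during the apply stage.
resource "aws_iam_role" "terraform_apply" {
  name               = "terraform-apply"
  assume_role_policy = var.ci_assume_role_policy
}

resource "aws_iam_role_policy_attachment" "apply_write" {
  role       = aws_iam_role.terraform_apply.name
  policy_arn = aws_iam_policy.terraform_s3.arn # e.g. a least-privilege write policy
}
```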
» Use dynamic provider credentials
When possible, use dynamic provider credentials to issue a unique set of credentials each time you run Terraform. Not only can you revoke and rotate compromised credentials, you can also configure credentials to expire after Terraform completes a plan or apply.
HCP Terraform supports dynamic provider credentials by generating an OIDC-compliant workload identity token and presenting it to a cloud service provider. If you set up an OIDC provider and configure the proper trust relationship between HCP Terraform and the cloud service provider, the cloud service provider issues a set of temporary credentials for each HCP Terraform run. Here’s an example for AWS:
locals {
  hcp_terraform_url = "app.terraform.io"
}

data "tls_certificate" "hcp_terraform" {
  url = "https://${local.hcp_terraform_url}"
}

resource "aws_iam_openid_connect_provider" "hcp_terraform" {
  url             = data.tls_certificate.hcp_terraform.url
  client_id_list  = [var.hcp_terraform_aws_audience]
  thumbprint_list = [data.tls_certificate.hcp_terraform.certificates[0].sha1_fingerprint]
}

resource "aws_iam_role" "hcp_terraform" {
  name = "${var.name}-hcp-terraform"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity"
        Effect = "Allow"
        Sid    = ""
        Principal = {
          Federated = aws_iam_openid_connect_provider.hcp_terraform.arn
        }
        Condition = {
          StringEquals = {
            "${local.hcp_terraform_url}:aud" = one(aws_iam_openid_connect_provider.hcp_terraform.client_id_list)
          }
          StringLike = {
            "${local.hcp_terraform_url}:sub" = "organization:${var.hcp_terraform_organization}:project:${var.name}:workspace:*:run_phase:*"
          }
        }
      },
    ]
  })
}
In this example, you’d also need to add environment variables (such as TFC_AWS_PROVIDER_AUTH and TFC_AWS_RUN_ROLE_ARN) to HCP Terraform workspaces that need to create resources in AWS.

Alternatively, Vault supports secrets engines for different cloud service providers that generate ephemeral credentials. If you use a CI framework, you can issue a unique set of credentials based on your pipeline identity.
» 3. Omit or redact secrets from state and diagnostics
You occasionally want to use Terraform to generate an initial set of root or administrative-level credentials with the intention of rotating them later. Before running Terraform configuration to create a secret, check if the provider supports ephemeral resources. Terraform stores certain attributes in state, including potentially sensitive information like secrets. Using ephemeral resources ensures that sensitive information like passwords do not get stored in state or plans. Here’s an example:
ephemeral "random_password" "bedrock_database" {
  length  = 16
  special = false
}

resource "aws_secretsmanager_secret" "bedrock_database" {
  name_prefix             = "${var.name}-bedrock-database-"
  recovery_window_in_days = 7
}

resource "aws_secretsmanager_secret_version" "bedrock_database" {
  secret_id = aws_secretsmanager_secret.bedrock_database.id
  secret_string_wo = jsonencode({
    username = "bedrock_user",
    password = ephemeral.random_password.bedrock_database.result
  })
  secret_string_wo_version = 1
}
» Using an external secrets manager
If a provider does not support ephemeral resources, use an external secrets manager like Vault to manage and audit access to the secret. Dynamically reference the secret each time you run Terraform. If you need to rotate the secret, update it in the secrets manager and configure Terraform to retrieve the updated secret on the next run.
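As a sketch using the Vault provider’s kv-v2 data source (the mount and secret names are placeholders):

```hcl
# Read the current secret from Vault on every run instead of
# hard-coding it in configuration.
data "vault_kv_secret_v2" "database" {
  mount = "secret"   # placeholder kv-v2 mount
  name  = "database" # placeholder secret path
}

resource "aws_db_instance" "example" {
  # omitted for clarity
  username = data.vault_kv_secret_v2.database.data["username"]
  password = data.vault_kv_secret_v2.database.data["password"]
}
```

After rotating the secret in Vault, the next Terraform run picks up the new value automatically.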
» Using the sensitive function
Terraform may also output secrets as part of diagnostics. For example, if you pass a hard-coded database password in a database connection string to a Vault database secrets engine configuration, Terraform may not recognize that the connection string contains a secret. When you run a plan, it outputs the hard-coded password in the database connection string. To prevent leakage of sensitive values, pass attributes as input or output variables and mark them as sensitive. For any derived sensitive values, use the sensitive function to redact values when Terraform outputs a plan or diagnostic information.
resource "aws_instance" "example_instance" {
  ami = data.hcp_packer_artifact.packer.external_identifier

  # omitted for clarity

  user_data = sensitive(base64encode(file("./setup.sh")))
}
» 4. Limit access to state
Additional security practices include setting up remote state and limiting access to Terraform state. The state file contains resource metadata and potentially sensitive information, depending on your Terraform version and providers. Overly broad access to state can leak information about resources or enable changes to state that incur drift or remove resources from Terraform management.
Ideally, only HCP Terraform or CI frameworks should have access to read and update state. Infrastructure as code best practice requires that all changes go through version control. Prioritize the use of moved, import, or removed blocks to declaratively alter state through version control. If you do need an escape hatch to update Terraform state, carefully consider and track manual updates or Terraform CLI operations. Even then, modify state only through Terraform CLI commands; avoid editing the state file directly.
The following example imports an existing S3 bucket into Terraform configuration.
import {
  to = aws_s3_bucket.example
  id = "test-20250613142230914900000001"
}

import {
  to = aws_s3_bucket_ownership_controls.example
  id = "test-20250613142230914900000001"
}

import {
  to = aws_s3_bucket_acl.example
  id = "test-20250613142230914900000001"
}

resource "aws_s3_bucket" "example" {
  bucket_prefix = "${var.s3_bucket_name}-"
  force_destroy = true
}

resource "aws_s3_bucket_ownership_controls" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket_ownership_controls.example.bucket
  acl    = "private"
}
When you apply the configuration, the plan shows the resources being imported without requiring you to run terraform import for each resource individually.
$ terraform apply
# omitted for clarity
aws_iam_policy.terraform_s3: Importing... [id=arn:aws:iam::1234567890:policy/terraform-s3]
aws_iam_policy.terraform_s3: Import complete [id=arn:aws:iam::1234567890:policy/terraform-s3]
aws_s3_bucket.example: Importing... [id=test-20250613142230914900000001]
aws_s3_bucket.example: Import complete [id=test-20250613142230914900000001]
aws_s3_bucket_ownership_controls.example: Importing... [id=test-20250613142230914900000001]
aws_s3_bucket_ownership_controls.example: Import complete [id=test-20250613142230914900000001]
aws_s3_bucket_acl.example: Importing... [id=test-20250613142230914900000001]
aws_s3_bucket_acl.example: Import complete [id=test-20250613142230914900000001]
aws_s3_bucket.example: Modifying... [id=test-20250613142230914900000001]
aws_s3_bucket.example: Modifications complete after 1s [id=test-20250613142230914900000001]
aws_s3_bucket_acl.example: Modifying... [id=test-20250613142230914900000001]
aws_s3_bucket_acl.example: Modifications complete after 1s [id=test-20250613142230914900000001,private]
Apply complete! Resources: 4 imported, 0 added, 2 changed, 0 destroyed.
Occasionally, specific uses or automation require read-only access to Terraform outputs in state. If you use HCP Terraform, you can customize workspace settings to allow read-only access to outputs from state for a team.

For other backends, you may need to configure additional IAM permissions to ensure least-privilege access to state.
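For an S3 backend, for example, a read-only state consumer needs only list and get access to the state object. A sketch, with placeholder bucket and key names:

```hcl
# Read-only access to a single workspace's state file in an S3 backend.
data "aws_iam_policy_document" "state_read_only" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::example-terraform-state"] # placeholder bucket
  }

  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-terraform-state/env/prod/terraform.tfstate"] # placeholder key
  }
}
```

Write access (s3:PutObject) stays reserved for the identity that runs terraform apply.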
» 5. Apply policy as code
An often-cited security concern with Terraform involves misconfiguration of resources. Some examples include the creation of public object storage buckets, network security groups with public access, or unencrypted databases or queues. Policy as code tools like Terraform’s built-in Sentinel framework check for secure configurations in infrastructure code by verifying attributes for each resource, such as allowed types like the below example.
import "tfplan/v2" as tfplan

# Get all AWS instances from all modules
ec2_instances = filter tfplan.resource_changes as _, rc {
  rc.type is "aws_instance" and
    (rc.change.actions contains "create" or rc.change.actions is ["update"])
}

# Allowed instance types
allowed_types = [
  "t2.micro",
  "t2.small",
  "t2.medium",
]

# Rule to restrict instance types
instance_type_allowed = rule {
  all ec2_instances as _, instance {
    instance.change.after.instance_type in allowed_types
  }
}

# Main rule that requires other rules to be true
main = rule {
  instance_type_allowed else true
}
Capturing secure configuration requirements in policy code ensures that changes to deployed resources use the most secure set of attributes as an established standard, rather than leaving it up to the engineer. Establish a standard policy set that you can distribute across teams. These policies can check different resources for secure configuration, such as private object storage buckets, least-privilege network security groups, and encrypted databases and queues.
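To distribute policies as a set, you can define them in a sentinel.hcl file; the file name and enforcement level below are illustrative:

```hcl
# sentinel.hcl — defines the policy set distributed to workspaces.
policy "restrict-ec2-instance-type" {
  source            = "./restrict-ec2-instance-type.sentinel"
  enforcement_level = "hard-mandatory" # block non-compliant applies
}
```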
» Learn more
In general, maintaining least-privilege access for Terraform to provider APIs and Terraform state, omitting secrets from state, verifying and pinning module and provider versions, and applying policy as code keep your Terraform configuration and deployments secure. A basic implementation of these practices sets a foundation for more advanced work in policy as code and state management. In addition, refactor your Terraform configuration for the latest versions of Terraform and providers to leverage features that secure attributes and state.
For more information on the security practices outlined in this post, check out our documentation for module creation, dependency lock files, ephemeral resources, state management, and policy as code. HCP Terraform includes additional security controls outlined in the documentation on its security model.