Guide

Using Dynamic Secrets in Terraform

Aug 20, 2018

This guide will walk through the workflow of a Producer enabling a Consumer to provision AWS infrastructure using dynamic credentials with Vault's AWS Secret Engine.

Using long-lived, static AWS credentials for Terraform runs can be dangerous. By leveraging the Terraform Vault provider, you can generate short-lived AWS credentials for each Terraform run that are automatically revoked after the run completes.

» Background

There are two personas involved in this guide: the "Producer" and the "Consumer".

» The "Producer"

The "Producer" is the operator responsible for configuring the AWS Secrets Engine in Vault and defining the policy scope for the AWS credentials dynamically generated.

The "Producer" is generally concerned about managing the static and long lived AWS IAM credentials with varying scope required for developers to provision infrastructure in AWS.

» The "Consumer"

The "Consumer" is the developer looking to safely provision infrastructure using Terraform without having to worry about managing sensitive AWS credentials locally.

» Intersecting Challenge

"Producers" want to enable a workflow where "Consumers" can automatically retrieve short-lived AWS credentials used by Terraform to provision resources in AWS. Traditionally this has been difficult to achieve as each "Consumer" has their own set of long-lived AWS credentials they use with Terraform that remain active beyond the length of a Terraform run.

Long-lived AWS credentials with unbounded scope on developers' local machines create a large attack surface.

» Dynamic Solution

Store your long-lived AWS credentials in HashiCorp Vault's AWS Secrets Engine, then leverage Terraform's Vault provider to dynamically generate appropriately scoped, short-lived AWS credentials for Terraform to use when provisioning resources in AWS.

This mitigates the risk of someone swiping the AWS credentials used by Terraform from a developer's machine and doing something malicious with them.

Following Terraform Recommended Practices, we will separate our Terraform templates into two Workspaces: one for our "Producer" persona and one for our "Consumer" persona. We do this to separate concerns and ensure each persona only has access to the resources required to perform their job.

The "Producer" will be responsible for configuring Vault's AWS Secrets Engine using Terraform and exposing the output variables necessary for the "Consumer" to provision the resources they need in AWS. In our use case, the "Consumer" will require access to provision an AWS EC2 Instance with Terraform, and should only be given IAM credentials with permission to do so.

» Prerequisites

» Start a Vault Server

First, start a Vault server in development mode. Open a terminal window and run the vault server -dev -dev-root-token-id=root command below; you should see output similar to the following:

# Start Vault Server with a predefined root token 
$ vault server -dev -dev-root-token-id=root
==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "999999h0m0s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: false, enabled: false
                 Storage: inmem
                 Version: Vault v0.10.4
             Version Sha: e21712a687889de1125e0a12a980420b1a4f72d3

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: c7piGexFmr8O1juDLlb4QfOnU6CVqiaOx4nUFitZjkw=
Root Token: root

Development mode should NOT be used in production installations!

==> Vault server started! Log data will stream in below:

# ...

» Configure Environment Variables

Next, Terraform requires a few environment variables to be set in order to function appropriately. In this case we're passing AWS credentials in as environment variables rather than as Terraform input variables because they are sensitive and we don't want them committed to our VCS. If your AWS credentials are not already set, export them on the command line:

$ export AWS_ACCESS_KEY_ID=yourAWSaccessKEYid
$ export AWS_SECRET_ACCESS_KEY=yourAWSsecretACCESSkey

With these variables set, they need to be exported to Terraform and Vault respectively:

# Export env vars
export TF_VAR_aws_access_key=${AWS_ACCESS_KEY_ID}     # assumes the AWS Access Key ID is set as AWS_ACCESS_KEY_ID, as shown above
export TF_VAR_aws_secret_key=${AWS_SECRET_ACCESS_KEY} # assumes the AWS Secret Access Key is set as AWS_SECRET_ACCESS_KEY, as shown above
export VAULT_ADDR=http://127.0.0.1:8200 # Address of Vault server if running locally
export VAULT_TOKEN=root # Vault token

Notice that we're also setting the required Vault provider arguments as environment variables: VAULT_ADDR and VAULT_TOKEN.

You can verify that these env vars were set appropriately by using the echo command:

$ echo ${TF_VAR_aws_access_key}
yourAWSaccessKEYid
$ echo ${TF_VAR_aws_secret_key}
yourAWSsecretACCESSkey
$ echo ${VAULT_ADDR}
http://127.0.0.1:8200
$ echo ${VAULT_TOKEN}
root

"Producer" Workspace

Next, we will change directory to the producer-workspace and initialize Terraform. This will pull down the appropriate Terraform providers required by the declared resources.

$ cd producer-workspace
$ terraform init

Be sure you start from the project's root directory. After running the command, notice that Terraform fetches the Vault provider.

Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "vault" (1.1.1)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.vault: version = "~> 1.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Take a look at the producer-workspace/main.tf Terraform template to see the Vault resources Terraform will configure.
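Reconstructed from the apply output below, the template plausibly resembles the following sketch. The variable names and exact layout are assumptions, but the resource types, path, TTLs, role name, and policy all appear in the run:

```hcl
variable "aws_access_key" {}
variable "aws_secret_key" {}

# Vault address and token are read from VAULT_ADDR and VAULT_TOKEN
provider "vault" {}

# Mount the AWS Secrets Engine with short credential leases
resource "vault_aws_secret_backend" "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  path       = "dynamic-aws-creds-producer-path"

  default_lease_ttl_seconds = 120
  max_lease_ttl_seconds     = 240
}

# Role whose IAM policy scopes the credentials Vault generates
resource "vault_aws_secret_backend_role" "producer" {
  backend = "${vault_aws_secret_backend.aws.path}"
  name    = "dynamic-aws-creds-producer-role"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "ec2:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

# Outputs consumed by the "Consumer" Workspace
output "backend" {
  value = "${vault_aws_secret_backend.aws.path}"
}

output "role" {
  value = "${vault_aws_secret_backend_role.producer.name}"
}
```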

» "Producer" Workspace Apply

Run terraform apply to provision the resources in the "Producer" Workspace. Based on the plan, we expect Terraform to:

  1. Use the AWS credentials supplied by the env vars above to mount the AWS Secret Engine in Vault under the path dynamic-aws-creds-producer-path.
  2. Configure a role for the AWS Secrets Engine named dynamic-aws-creds-producer-role with an IAM policy that grants it iam:* and ec2:* permissions. The "Consumer" Workspace will use this role to dynamically generate AWS credentials scoped to this IAM policy, which Terraform will then use to provision an aws_instance resource.

$ terraform apply
vault_aws_secret_backend.aws: Creating...
  access_key:                "<sensitive>" => "<sensitive>"
  default_lease_ttl_seconds: "" => "120"
  max_lease_ttl_seconds:     "" => "240"
  path:                      "" => "dynamic-aws-creds-producer-path"
  region:                    "" => "<computed>"
  secret_key:                "<sensitive>" => "<sensitive>"
vault_aws_secret_backend.aws: Creation complete after 0s (ID: dynamic-aws-creds-producer-path)
vault_aws_secret_backend_role.producer: Creating...
  backend: "" => "dynamic-aws-creds-producer-path"
  name:    "" => "dynamic-aws-creds-producer-role"
  policy:  "" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iam:*\",\n        \"ec2:*\"\n      ],\n      \"Resource\": \"*\"\n    }\n  ]\n}\n"
vault_aws_secret_backend_role.producer: Creation complete after 0s (ID: dynamic-aws-creds-producer-path/roles/dynamic-aws-creds-producer-role)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

backend = dynamic-aws-creds-producer-path
role = dynamic-aws-creds-producer-role

Notice the two output variables named backend and role. The "Consumer" Workspace will use these output variables in a later step.

If you go to the terminal where your Vault server is running, you should see a log line similar to the one below, which means Terraform successfully mounted the AWS Secrets Engine at the specified path. Although it doesn't appear in the logs, the role has also been configured.

2018-08-08T13:55:14.633-0700 [INFO ] core: successful mount: path=dynamic-aws-creds-producer-path/ type=aws

"Consumer" Workspace

Next, we will initialize the "Consumer" Workspace by changing directory (cd ../consumer-workspace) and running terraform init, just as we did with the "Producer" Workspace. This Workspace will consume the outputs created in the "Producer" Workspace.

$ cd ../consumer-workspace
$ terraform init
Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (1.30.0)...
- Downloading plugin for provider "vault" (1.1.1)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 1.30"
* provider.vault: version = "~> 1.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Take a look at the consumer-workspace/main.tf Terraform template to see the resources Terraform will provision.

» "Consumer" Workspace Plan to Provision EC2 Instance

Before we provision the EC2 instance, log in to your AWS Console and navigate to the IAM Users tab. Search for the username prefix vault-token-terraform-dynamic-aws-creds-producer. Nothing will show up in your initial search; after we run terraform plan, refresh the search to verify that the dynamic IAM credentials were in fact created by Vault and used by Terraform.

In the consumer-workspace Terraform template we've defined an aws_instance to be provisioned. Assuming the credentials passed into the AWS provider have access to create the EC2 Instance resource, the plan should run successfully.
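Based on the data sources and attributes that appear in the plan output, consumer-workspace/main.tf plausibly resembles the sketch below. The remote-state path and the AMI filter are assumptions:

```hcl
# Read the backend path and role name output by the "Producer" Workspace
data "terraform_remote_state" "producer" {
  backend = "local"

  config {
    path = "../producer-workspace/terraform.tfstate"
  }
}

# Ask Vault for a fresh, short-lived set of AWS IAM credentials
data "vault_aws_access_credentials" "creds" {
  backend = "${data.terraform_remote_state.producer.backend}"
  role    = "${data.terraform_remote_state.producer.role}"
}

# Wire the dynamic credentials into the AWS provider
provider "aws" {
  access_key = "${data.vault_aws_access_credentials.creds.access_key}"
  secret_key = "${data.vault_aws_access_credentials.creds.secret_key}"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "main" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.nano"

  tags {
    Name  = "dynamic-aws-creds-consumer"
    TTL   = "1h"
    owner = "dynamic-aws-creds-consumer-guide"
  }
}
```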

Run a terraform plan to see what Terraform is going to provision in the "Consumer" Workspace. If you haven't defined a region, then Terraform will also ask where you want this instance to be deployed:

$ terraform plan
provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Default: us-east-1
  Enter a value: us-east-1

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.terraform_remote_state.producer: Refreshing state...
data.vault_aws_access_credentials.creds: Refreshing state...
data.aws_ami.ubuntu: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_instance.main
      id:                           <computed>
      ami:                          "ami-a22323d8"
      associate_public_ip_address:  <computed>
      availability_zone:            <computed>
      cpu_core_count:               <computed>
      cpu_threads_per_core:         <computed>
      ebs_block_device.#:           <computed>
      ephemeral_block_device.#:     <computed>
      get_password_data:            "false"
      instance_state:               <computed>
      instance_type:                "t2.nano"
      ipv6_address_count:           <computed>
      ipv6_addresses.#:             <computed>
      key_name:                     <computed>
      network_interface.#:          <computed>
      network_interface_id:         <computed>
      password_data:                <computed>
      placement_group:              <computed>
      primary_network_interface_id: <computed>
      private_dns:                  <computed>
      private_ip:                   <computed>
      public_dns:                   <computed>
      public_ip:                    <computed>
      root_block_device.#:          <computed>
      security_groups.#:            <computed>
      source_dest_check:            "true"
      subnet_id:                    <computed>
      tags.%:                       "3"
      tags.Name:                    "dynamic-aws-creds-consumer"
      tags.TTL:                     "1"
      tags.owner:                   "dynamic-aws-creds-consumer-guide"
      tenancy:                      <computed>
      volume_tags.%:                <computed>
      vpc_security_group_ids.#:     <computed>


Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Now verify a new set of IAM credentials were created after running the plan.

The IAM credentials were created because our consumer-workspace contains a vault_aws_access_credentials Data Source, which asks the Vault provider to read AWS IAM credentials from the role named dynamic-aws-creds-producer-role in Vault's AWS Secrets Engine.

Vault generates these credentials with the IAM policy configured on the vault_aws_secret_backend_role resource and the default_lease_ttl_seconds and max_lease_ttl_seconds configured on the AWS Secrets Engine. The "Producer" configured these resources in the producer-workspace.

Because default_lease_ttl_seconds is set to 120 seconds, Vault will revoke those IAM credentials after 120 seconds, and they will disappear from the AWS IAM console. Every Terraform run moving forward will now use its own unique set of AWS IAM credentials, scoped to whatever the "Producer" has defined!
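You can observe this lifecycle directly with the Vault CLI: the AWS Secrets Engine exposes a creds/&lt;role&gt; endpoint, and each read generates a fresh set of IAM credentials with a two-minute lease. The transcript below is illustrative only; your access key, secret key, and lease ID will differ:

```
$ vault read dynamic-aws-creds-producer-path/creds/dynamic-aws-creds-producer-role
Key                Value
---                -----
lease_id           dynamic-aws-creds-producer-path/creds/dynamic-aws-creds-producer-role/...
lease_duration     2m
lease_renewable    true
access_key         AKIA...
secret_key         ...
security_token     <nil>
```

Search the IAM console again two minutes after the read, and the generated user should be gone.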

» "Consumer" Workspace Apply to Provision EC2 Instance

Now that we've run a successful plan, the "Consumer" will actually want to provision the EC2 Instance in AWS. We should expect to see yet another set of IAM credentials named with a prefix of vault-token-terraform-dynamic-aws-creds-producer and an appropriately scoped IAM policy attached.

These IAM creds will be dynamically generated by Vault and used for the AWS provider in Terraform to provision the aws_instance resource. You will be able to see this in the AWS EC2 dashboard by searching for Instances with the name dynamic-aws-creds-consumer.

Just like with the terraform plan, the short-lived IAM credentials used by Terraform will be revoked after 120 seconds.

$ terraform apply
data.terraform_remote_state.producer: Refreshing state...
data.vault_aws_access_credentials.creds: Refreshing state...
data.aws_ami.ubuntu: Refreshing state...
aws_instance.main: Creating...
  ami:                          "" => "ami-a22323d8"
  associate_public_ip_address:  "" => "<computed>"
  availability_zone:            "" => "<computed>"
  ebs_block_device.#:           "" => "<computed>"
  ephemeral_block_device.#:     "" => "<computed>"
  instance_state:               "" => "<computed>"
  instance_type:                "" => "t2.nano"
  ipv6_address_count:           "" => "<computed>"
  ipv6_addresses.#:             "" => "<computed>"
  key_name:                     "" => "<computed>"
  network_interface.#:          "" => "<computed>"
  network_interface_id:         "" => "<computed>"
  placement_group:              "" => "<computed>"
  primary_network_interface_id: "" => "<computed>"
  private_dns:                  "" => "<computed>"
  private_ip:                   "" => "<computed>"
  public_dns:                   "" => "<computed>"
  public_ip:                    "" => "<computed>"
  root_block_device.#:          "" => "<computed>"
  security_groups.#:            "" => "<computed>"
  source_dest_check:            "" => "true"
  subnet_id:                    "" => "<computed>"
  tags.%:                       "" => "3"
  tags.Name:                    "" => "dynamic-aws-creds-consumer"
  tags.TTL:                     "" => "1h"
  tags.owner:                   "" => "dynamic-aws-creds-consumer-guide"
  tenancy:                      "" => "<computed>"
  volume_tags.%:                "" => "<computed>"
  vpc_security_group_ids.#:     "" => "<computed>"
aws_instance.main: Still creating... (10s elapsed)
aws_instance.main: Still creating... (20s elapsed)
aws_instance.main: Creation complete after 25s (ID: i-0c47c6d46f0a71fb8)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Voila, our "Consumer" has successfully created the EC2 Instance resource without ever holding long-lived AWS credentials locally.

"Consumer" to Destroy EC2 Instance

Now let's clean up the EC2 Instance created by Terraform. After destroying it, check the AWS Console to verify the instance was terminated. You should also see yet another set of IAM credentials generated to run the terraform destroy operation.

$ terraform destroy

data.terraform_remote_state.producer: Refreshing state...
data.vault_aws_access_credentials.creds: Refreshing state...
data.aws_ami.ubuntu: Refreshing state...
aws_instance.main: Refreshing state... (ID: i-0c47c6d46f0a71fb8)
aws_instance.main: Destroying... (ID: i-0c47c6d46f0a71fb8)
aws_instance.main: Still destroying... (ID: i-0c47c6d46f0a71fb8, 10s elapsed)
aws_instance.main: Still destroying... (ID: i-0c47c6d46f0a71fb8, 20s elapsed)
aws_instance.main: Destruction complete after 21s

Destroy complete! Resources: 1 destroyed.

"Producer" IAM Policy Update Plan

Now let's say the "Producer" wanted to narrow the "Consumer's" IAM policy to allow creating only IAM resources with Terraform, but not EC2 instances.

Previously, this would have required them to revoke every "Consumer's" IAM credentials and generate new ones with the updated policy. However, because we are dynamically generating IAM credentials for each Terraform run, the "Producer" simply has to update the IAM policy in their producer-workspace/main.tf#L27-L38 Terraform template and they're done.

To prove this, we will change the IAM policy in the producer-workspace/main.tf Terraform template from:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "ec2:*"
      ],
      "Resource": "*"
    }
  ]
}

to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*"
      ],
      "Resource": "*"
    }
  ]
}

This change means no "Consumer" should be able to provision AWS EC2 instances anymore. The following commands change to the producer-workspace directory, make the above change to the policy, and then run terraform plan:

$ cd ../producer-workspace
$ sed -i '' -e 's/, \"ec2:\*\"//g' main.tf
$ terraform plan

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

vault_aws_secret_backend.aws: Refreshing state... (ID: dynamic-aws-creds-producer-path)
vault_aws_secret_backend_role.producer: Refreshing state... (ID: dynamic-aws-creds-producer-path/roles/dynamic-aws-creds-producer-role)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ vault_aws_secret_backend_role.producer
      policy: "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"iam:*\",\"ec2:*\"],\"Resource\":\"*\"}]}" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iam:*\"\n      ],\n      \"Resource\": \"*\"\n    }\n  ]\n}\n"


Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

» "Producer" Policy Update Apply

Now apply the change and update the Vault role's policy by running terraform apply.

$ terraform apply -auto-approve

vault_aws_secret_backend.aws: Refreshing state... (ID: dynamic-aws-creds-producer-path)
vault_aws_secret_backend_role.producer: Refreshing state... (ID: dynamic-aws-creds-producer-path/roles/dynamic-aws-creds-producer-role)
vault_aws_secret_backend_role.producer: Modifying... (ID: dynamic-aws-creds-producer-path/roles/dynamic-aws-creds-producer-role)
  policy: "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"iam:*\",\"ec2:*\"],\"Resource\":\"*\"}]}" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Action\": [\n        \"iam:*\"\n      ],\n      \"Resource\": \"*\"\n    }\n  ]\n}\n"
vault_aws_secret_backend_role.producer: Modifications complete after 0s (ID: dynamic-aws-creds-producer-path/roles/dynamic-aws-creds-producer-role)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

backend = dynamic-aws-creds-producer-path
role = dynamic-aws-creds-producer-role

"Consumer" Workspace Plan to Provision EC2 Instance

Now we will verify that the "Consumer" can no longer provision an EC2 Instance, based on the updates the "Producer" made to the IAM policy. We expect the terraform plan to fail here because the generated credentials don't have permission to provision the aws_instance resource.

Let's try it by changing to the consumer-workspace directory and running terraform plan.

$ cd ../consumer-workspace
$ terraform plan

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.terraform_remote_state.producer: Refreshing state...
data.vault_aws_access_credentials.creds: Refreshing state...
data.aws_ami.ubuntu: Refreshing state...

Error: Error refreshing state: 1 error(s) occurred:

* data.aws_ami.ubuntu: 1 error(s) occurred:

* data.aws_ami.ubuntu: data.aws_ami.ubuntu: UnauthorizedOperation: You are not authorized to perform this operation.
    status code: 403, request id: 5f25a398-9417-4e16-9bae-f336d624e017

As expected, our plan failed! The "Producer" would need to add the ec2:* permission back to the IAM policy for this plan to succeed.
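One way to revert the edit mirrors the earlier sed command; the revert_policy helper below is a hypothetical sketch, not part of the guide's repository, and uses a portable sed invocation (writing a new file rather than the in-place -i flag, which differs between GNU and BSD sed):

```shell
# Sketch: re-add "ec2:*" alongside "iam:*" in a Terraform template's IAM policy.
revert_policy() {
  sed -e 's/"iam:\*"/"iam:*", "ec2:*"/g' "$1" > "$1.new" && mv "$1.new" "$1"
}

# Usage, from the producer-workspace directory:
#   revert_policy main.tf
```

After reverting, run terraform apply in the producer-workspace again and the "Consumer's" plan will succeed once more.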

» Next Steps

Play around with the "Producer" permissions and the "Consumer" resources to get a feel for how this can work for you.

Once finished, run terraform destroy in both the "Producer" and "Consumer" Workspaces to ensure all resources are cleaned up.

You can take your security to the next level by leveraging Terraform Enterprise's Secure Storage of Variables to safely store sensitive variables like the Vault token used for authentication.
