Running dynamic, ephemeral multi-hop workers for HCP Boundary: Part 2

Running Boundary workers as dynamic workloads can be challenging. With the Nomad and Vault integration and a custom Vault plugin, the process can be automated seamlessly.

Part 1 of this blog series explored the challenges of running HashiCorp Boundary workers as dynamic workloads such as auto scaling groups, Nomad jobs, and Kubernetes deployments. The post also showed how the custom Boundary secrets engine for Vault can be used to generate worker activation tokens and manage the lifecycle of workers alongside their dynamic deployment models.

This post concludes the series by walking you through the steps required to run Boundary workers as dynamic workloads in Nomad using the custom Boundary secrets engine for Vault.

»Nomad

HashiCorp Nomad is a workload scheduler and orchestrator that enables engineers to deploy both containerized and non-containerized workloads. Workloads can be deployed as binaries, executables, Java ARchive (JAR) files, or Docker containers to multiple platforms including Windows and Linux machines.

The domain architecture of Nomad has many components, but this blog post focuses on the following elements:

  • Nomad server: This is responsible for scheduling workloads.
  • Nomad client: This is the platform that runs the workloads, and it could be Windows, macOS, or Linux. For Docker containers or JAR files to be scheduled to a client, the client must have Docker or Java installed, respectively.
  • Nomad job: This is a declarative definition of a workload. It contains information such as the resources required, storage and volume mount definitions, environment variables, and container definitions. A minimal example is shown after this list.
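
As a point of reference, here is a minimal, illustrative job definition showing the job, group, and task structure (the values below are placeholders and are not part of the worker deployment covered later):

job "example" {
  datacenters = ["dc1"]

  group "app" {
    count = 1

    task "app" {
      driver = "docker"

      config {
        # Placeholder image for illustration only
        image = "nginx:latest"
      }
    }
  }
}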

»Vault integration on Nomad servers and clients

Nomad and Vault can be seamlessly integrated to provide Nomad jobs with secrets retrieved from Vault. Those secrets can then be rendered to a file using Nomad's templating capabilities. This is ideal for running the Boundary worker as a Nomad job because the worker needs a config file: the templating capability lets you render that config file and inject the dynamic worker activation token retrieved from Vault.

The first step for this integration to work is to configure Vault tokens and access policies for Nomad to use when communicating with Vault. The specifics can be found in the Vault integration documentation. (Note: A root token can also be used; however, this is not advised for production workloads.)
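
As a rough sketch of that setup (the policy contents, file names, and role settings below are illustrative; follow the Vault integration documentation for the exact values), the flow looks like this:

# Write a policy that lets the Nomad servers manage tokens derived from the token role
vault policy write nomad-server nomad-server-policy.hcl

# Create the token role referenced by create_from_role in the Nomad server config
vault write auth/token/roles/nomad-cluster @nomad-cluster-role.json

# Create a periodic, orphan token for the Nomad servers to use
vault token create -policy nomad-server -period 72h -orphan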

Once you have a token to use with Nomad, the next step is to configure the Nomad servers with the Vault integration. This is done by adding a Vault stanza to the config file of the Nomad servers:

vault {
  enabled          = true
  address          = "https://vault.service.consul:8200"
  token            = "REDACTED"
  create_from_role = "nomad-cluster"
}

Note: The snippet above is an example Vault stanza in a Nomad server config file, and it includes the Vault token. This approach is not recommended for production workloads. Instead, the token can be omitted from the config file and supplied through the VAULT_TOKEN environment variable.
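
For example, a minimal sketch of that approach (the config path is illustrative):

export VAULT_TOKEN="REDACTED"
nomad agent -config /etc/nomad.d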

Here is an example Vault stanza in the Nomad client config file:

vault {
  enabled = true
  address = "https://vault.service.consul:8200"
}

The token needs to be configured only on the Nomad servers. Clients will have tokens created for them automatically by the Nomad servers using the role specified in the server config.

»Running Boundary workers as Nomad jobs

Defining a Boundary worker as a Nomad job is not much different from defining most other Nomad jobs. The key is to generate a Boundary worker config file when the job starts. Here is an example Boundary worker config file:

hcp_boundary_cluster_id = "my_cluster_id" listener "tcp" {  address = "0.0.0.0:9202"  purpose = "proxy"} worker {  auth_storage_path = "/boundary/auth_data"  controller_generated_activation_token = "my_token"  tags {    type = ["frontend"]  }} 

The key configuration parameter is controller_generated_activation_token, which we want to fetch from Vault using our custom plugin. To use it, we need to create a Vault policy that grants the worker the permissions it needs on the secrets engine path. The policy shown here grants the relevant permissions to generate an activation token:

path = "boundary/creds/worker" {  capabilities = ["read", "update"]}

Create a file called worker_policy.hcl with the above policy written to it. Add the file to Vault using this command:

vault policy write boundary-worker worker_policy.hcl
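
If you want to confirm the policy was stored as expected, you can read it back:

vault policy read boundary-worker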

All the prerequisites for this workflow are now in place, so it's time to create the Nomad job file. We can start with the base nomad_worker.hcl file below:

job "boundary_worker" {  datacenters = ["dc1"]    type = "service"    group "worker" {    count = 1      restart {    # The number of attempts to run the job within the specified interval.      attempts = 2      interval = "30m"      delay = "15s"      mode = "fail"    }      ephemeral_disk {      size = 30    }      task "worker" {      driver = "docker"        logs {        max_files = 2        max_file_size = 10      }        resources {        cpu = 500 # 500 MHz        memory = 512 # 512MB      }    }  }} 

You need to add a Vault stanza to your job file. This stanza tells Nomad which Vault policy to use when retrieving your activation token. Specify the policy you just created:

vault {
  policies = ["boundary-worker"]
}

Next, add the worker template to the Nomad job:

template {  data = <<-EOFdisable_mlock = true  hcp_boundary_cluster_id = "739d93f9-7f1c-474d-8524-931ab199eaf8"  listener "tcp" {  address = "0.0.0.0:9202"  purpose = "proxy"}  worker {  auth_storage_path="/boundary/auth_data"  {{with secret "boundary/creds/worker" (env "NOMAD_ALLOC_ID" | printf "worker_name=%s") -}}  controller_generated_activation_token = "{{.Data.activation_token}}"  {{- end}}    tags {    environment = ["nomad"]  }}EOF  destination = "local/config.hcl"}

To populate this template with a value read from Vault, use the {{with secret "<vault_path_to_secret>"}} template function, which pulls the secret from Vault. This secrets engine requires a unique name for each worker node, so you'll use the Nomad allocation ID to provide one. Nomad exposes this through the NOMAD_ALLOC_ID environment variable.

{{with secret "boundary/creds/worker" (env "NOMAD_ALLOC_ID" | printf "worker_name=%s") -}}

The code above reads the environment variable and formats it into the string worker_name=<allocation_id>, which is passed as a parameter to the secrets engine. To generate the activation token, add this line:

controller_generated_activation_token = "{{.Data.activation_token}}"

This line of the template populates the controller_generated_activation_token configuration parameter with the activation token retrieved from Vault. When that path is called within Vault, the response will contain an object called data, which contains an activation_token key/value pair. When the template is rendered, it is written to a file called local/config.hcl.
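
If you want to sanity-check this path before wiring it into Nomad, you can call the secrets engine directly (a sketch; test-worker is a placeholder name, and the exact behavior and response fields depend on the custom plugin described in Part 1):

vault write boundary/creds/worker worker_name=test-worker

The response should include an activation_token under data, which is the same value the template injects into the worker config.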

The final part of the Nomad job to add is the config stanza. This specifies the container image to use as well as the commands to run when the container is started:

config {  image = "hashicorp/boundary-worker-hcp:0.12.0-hcp"  command = "boundary-worker"  args = [    "server",    "-config",    "local/config.hcl"  ]}

Here is the complete Nomad job file:

job "boundary_worker" {  datacenters = ["dc1"]    type = "service"    group "worker" {    count = 1      restart {      # The number of attempts to run the job within the specified interval.      attempts = 2      interval = "30m"      delay    = "15s"      mode     = "fail"    }      ephemeral_disk {      size = 30    }      task "worker" {      driver = "docker"        vault {        policies = ["boundary-worker"]      }        template {        data = <<-EOF          disable_mlock = true            hcp_boundary_cluster_id = "739d93f9-7f1c-474d-8524-931ab199eaf8"            listener "tcp" {            address = "0.0.0.0:9202"            purpose = "proxy"          }            worker {            auth_storage_path="/boundary/auth_data"            {{with secret "boundary/creds/worker" (env "NOMAD_ALLOC_ID" | printf "worker_name=%s") -}}              controller_generated_activation_token = "{{.Data.activation_token}}"            {{- end}}              tags {              environment   = ["nomad"]            }          }        EOF          destination = "local/config.hcl"      }        logs {        max_files     = 2        max_file_size = 10      }        config {        image   = "hashicorp/boundary-worker-hcp:0.12.0-hcp"        command = "boundary-worker"        args = [          "server",          "-config",          "local/config.hcl"        ]      }        resources {        cpu    = 500 # 500 MHz        memory = 512 # 512MB      }    }  }} 

You can now run this job using the following command:

nomad job run nomad_worker.hcl

This deploys a worker in Nomad and authenticates it to your HCP Boundary controller. You should now see the worker appear in the Boundary UI.
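
You can also verify the deployment from the Nomad side (the allocation ID below is a placeholder taken from the job status output):

nomad job status boundary_worker
nomad alloc logs <alloc-id> worker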

»Summary

This blog series explored the challenges around running Boundary workers as dynamic workloads. It also looked at how you can move to an ephemeral worker model using the Vault custom plugin. Finally, it looked at running workers in Nomad using the Vault custom plugin.
