Solving secret zero with Vault and OpenShift Virtualization
Explore how you can use Red Hat OpenShift Virtualization and HashiCorp Vault to solve the secret zero problem for your virtualized infrastructure
Establishing machine identity for on-premises infrastructure resources has been a consistent challenge for many organizations adopting identity-based security practices with HashiCorp Vault. This challenge becomes even more critical as organizations modernize their virtualization platforms and seek to unify virtual machine (VM) and container management under a single, cloud-native platform.
Well-established virtualization technologies such as VMware or Kernel-based Virtual Machine (KVM)-derived solutions don’t provide an implicit machine identity that virtual guests can leverage. As practitioners, we therefore often find ourselves resorting to some form of “secret zero”: an initial credential that must be delivered to the guest using a secure, trusted mechanism whenever we need to perform secure introduction between the guest and Vault.
Historically, the best option here has been to build on the trusted orchestrator pattern to securely introduce secret zero to these infrastructure resources — whether that was using certificates, AppRoles, or any other credential-based Vault auth method.
Now though, our industry finds itself at an inflection point in the virtualization domain. Where once organizations would have accepted the terms and conditions presented and continued to pay for the commodity service they relied upon, they are now starting to seriously look at alternatives for their virtualization solutions.
Red Hat OpenShift Virtualization and the upstream KubeVirt community are being positioned as one of the key contenders in this space, and this presents an opportunity for organizations to drastically improve the security posture of their virtual guests through the use of well established patterns and practices normally associated with standard Kubernetes workloads.
For organizations evaluating virtualization alternatives, the decision isn't just about cost; it's about gaining access to modern operational patterns that have been proven at scale in cloud-native environments. Red Hat OpenShift Virtualization uniquely enables VMs to participate in these patterns without requiring application changes.
In this post, we will look at how we can use Kubernetes identities to establish trust between Red Hat OpenShift Virtualization workloads and HashiCorp Vault using Vault Agent, without relying on secret zero. This approach will be demonstrated with HCP Vault Dedicated, Red Hat OpenShift, and Red Hat OpenShift Virtualization.
HCP Vault Dedicated is HashiCorp’s cloud-hosted, single-tenant Vault offering. The concepts outlined here are also applicable to Vault Enterprise, assuming it is hosted in a location accessible by workloads within the Red Hat OpenShift cluster.
» What is Red Hat OpenShift Virtualization?
Red Hat OpenShift Virtualization is an operator included with all Red Hat OpenShift editions, including Red Hat OpenShift Service on AWS, and is available out of the box at no additional cost. It leverages the open source projects KVM and KubeVirt to run VMs directly on the worker node kernel, treating them as first-class citizens alongside containers.
Each VM runs within a virt-launcher pod on OpenShift, which uses KVM to launch and monitor the VM process on the host. This pod manages the VM’s lifecycle — start, stop, restart, and recovery — and integrates access controls, storage, and networking, allowing the VM to inherit all the enterprise-grade security, compliance, and operational benefits of the OpenShift platform.
By leveraging Red Hat OpenShift Virtualization, you can migrate your supported VMs to a unified application platform at your own pace. By moving VMs from other platforms and running them on Red Hat OpenShift, you can get the most from your existing virtualization investments while taking advantage of cloud-native architectures, streamlined operations and management, and new development approaches. This unified approach means your VMs can immediately benefit from Kubernetes-native patterns like the service account-based authentication demonstrated in this post.
» What is Vault Agent?
Vault Agent is a client-side daemon that makes requests to Vault on behalf of a client application, including taking responsibility for authentication to Vault.
All Vault clients (human users, applications, etc.) must authenticate with Vault and get a client token to make API requests. As tokens have a time-to-live (TTL), the clients must renew the token's TTL or re-authenticate to Vault based on the token’s TTL. Vault Agent authenticates with Vault and manages the token's lifecycle so the client application doesn't have to.
In addition, you can supply template markup to Vault Agent to render secrets into files that the client application loads data from.

This eliminates the need to change your application code to invoke the Vault API. Your existing applications can remain Vault-unaware.
This combination of automatic authentication with Vault and secret templating makes Vault Agent an ideal mechanism to demonstrate the use of Kubernetes identities within Red Hat OpenShift Virtualization VMs.
You can learn more about how to use Vault Agent from the HashiCorp Vault Agent tutorial.
» Prerequisites
There are two distinct setup steps required to use Kubernetes service account identities from within a Red Hat OpenShift Virtualization virtual machine as a form of trusted identity.
- Enable Red Hat OpenShift Virtualization on your Red Hat OpenShift platform
- Enable the Kubernetes auth method in your HashiCorp Vault platform
While this blog will not focus on the enablement of these two features, they are well documented by their respective vendors: the Red Hat OpenShift documentation will walk you through the steps of deploying Red Hat OpenShift Virtualization on Red Hat OpenShift, and the HashiCorp Vault documentation will show you how to enable and configure Kubernetes authentication within your Vault cluster. These steps will allow workloads within the Red Hat OpenShift cluster to authenticate to Vault using their Kubernetes service accounts.
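For reference, the Vault side of this setup typically involves enabling the Kubernetes auth method and pointing it at the cluster's API endpoint. The following is a sketch only: the ocp mount path matches the examples later in this post, while KUBE_HOST and KUBE_CA_CERT are placeholder variables standing in for your cluster's values.

```shell
# Enable the Kubernetes auth method at a custom mount path (ocp)
vault auth enable -path=ocp kubernetes

# Point the method at the Kubernetes API so Vault can validate
# the service account tokens presented by workloads
vault write auth/ocp/config \
    kubernetes_host="$KUBE_HOST" \
    kubernetes_ca_cert="$KUBE_CA_CERT"
```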
» Configuring the connection
Once you have enabled Red Hat OpenShift Virtualization within your Red Hat OpenShift cluster and you’ve enabled and configured Kubernetes auth in Vault, then it’s time to move on to configuring the specifics of how Red Hat OpenShift Virtualization’s VM instances will authenticate to Vault.
First, it’s good practice to create a specific role within the Kubernetes auth method that the virtual machines will authenticate with. A role on an auth method allows you to group method-specific parameters to simplify method configuration — for example:
- Applying specific policies
- Customizing the time to live (TTL) of tokens issued by the auth method
- Customizing other role-specific configuration parameters
This, in turn, allows you to enforce security boundaries between your secrets in Vault and the identities representing your Red Hat OpenShift Virtualization workloads that will be consuming those secrets.
In the following example, the Kubernetes auth method is mounted at the ocp path to identify it as explicitly handling authentication requests for Red Hat OpenShift workloads. The role created against this auth method will also be configured to only permit authentication requests from specific Kubernetes service accounts in specific Kubernetes namespaces (or Red Hat OpenShift projects). It will also limit the token TTL to 1 hour and bind specific Vault policies to that token:
vault write auth/ocp/role/rhelvm \
    bound_service_account_names=rhel-vm \
    bound_service_account_namespaces=secret-zero-demo \
    token_policies=default,standard-policy \
    ttl=1h

Success! Data written to: auth/ocp/role/rhelvm
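The standard-policy bound to this role is not defined in this post. As an illustrative sketch only, a minimal read-only policy covering the KV v2 secret path used later in this post might look like this (adjust the path to your own secret layout):

```hcl
# Hypothetical "standard-policy": read-only access to the demo database secrets
path "kvv2/data/rhelvm/database/*" {
  capabilities = ["read"]
}
```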
Once the role is configured, the corresponding resources within Red Hat OpenShift will also need to be created — namely, a Red Hat OpenShift project called secret-zero-demo and a service account within that namespace called rhel-vm.
% oc new-project secret-zero-demo
Now using project "secret-zero-demo" on server "https://api.cluster-fwgsd.dynamic.redhatworkshops.io:6443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname
% oc create sa rhel-vm
serviceaccount/rhel-vm created
Once these pieces of the puzzle are created, you’re ready to create your virtual machine.
» Configuring the virtual machine
The Red Hat OpenShift Virtualization documentation goes into some detail on the different strategies you can use to create your virtual machine. This example uses the template approach for expediency, but you can create them however you like. The configuration steps that follow the virtual machine creation will be the same.
From the template selection, choose the Red Hat Enterprise Linux 9 VM option.

At this point, if you are following the template approach, you will be presented with the option to Customize VirtualMachine. This is where you make the link between the virtual machine and the service account that will represent its identity to Vault.

There are a lot of things you can influence from the resulting screen, but what you’re interested in is the Environment tab.

It’s here you can assign the service account to the VM.

It’s very important to remember to click Save after you add the service account to the VM template at this point; failure to do so will mean the YAML definition of your virtual machine doesn’t get updated, and the service account won’t be available within the virtual machine at runtime.
This wouldn’t be a Kubernetes blog post without editing at least a little bit of YAML, and that’s exactly what comes next. Now that you’ve added the service account to the virtual machine template, you need to ensure it’s available to use as a volume within the virtual machine.
To do this, you’ll need to select the YAML tab (once you’ve saved your changes), and scroll down until you find the userData block. It will look something like this:
userData: |-
  #cloud-config
  user: cloud-user
  password: icp0-lyxy-ptna
  chpasswd: { expire: False }
You’ll need to add the following bootcmd entries to this block in order to mount the service account as a volume.
bootcmd:
  - "mkdir -p /var/run/secrets/kubernetes.io/serviceaccount"
  - "mount -o,uid=1000 /dev/sda /var/run/secrets/kubernetes.io/serviceaccount"
This additional configuration creates a folder structure within which to mount the service account as a volume. If you’ve spent any time at all on the filesystem of a running Kubernetes pod, you’ll recognize this path as the default mount point for service account credentials. Since this doesn’t exist within a Red Hat OpenShift Virtualization / KubeVirt VM out of the box, this is where it should get created.
In reality, you can choose to mount this volume anywhere in the filesystem; however, this is an established practice and a standard filesystem location within Kubernetes. Using this location also simplifies the configuration of Vault Agent further down the line, because when Vault Agent is configured for Kubernetes authentication, it looks in this location by default for the service account credentials to use when authenticating with Vault.
The parameter -o,uid=1000 is also important here. Without it, this volume will be mounted by the root user and will not be accessible to any unprivileged user. Therefore, Vault Agent would need to run as root to access the service account credentials, which goes against HashiCorp’s security guidelines for Vault. In this example, the credentials will be mounted and accessible by user 1000.
The complete userData block should look something like this:
userData: |-
  #cloud-config
  user: cloud-user
  password: icp0-lyxy-ptna
  chpasswd: { expire: False }
  bootcmd:
    - "mkdir -p /var/run/secrets/kubernetes.io/serviceaccount"
    - "mount -o,uid=1000 /dev/sda /var/run/secrets/kubernetes.io/serviceaccount"
Because bootcmd runs on every boot, this mount will persist across reboots.
Once more, click Save to ensure your changes are applied to the template. Now it’s time to create the virtual machine, which will be started by default.

After a few seconds, you’ll be able to select the Console tab and see the status of the virtual machine. Once booted, it can be accessed as any other RHEL VM would be — directly in the VNC / serial console, or over SSH if you supplied a public key during template configuration.
In either case, navigating to the mount point — in this tutorial it’s /var/run/secrets/kubernetes.io/serviceaccount/ — and listing the contents of the directory will give you a number of files, including the token:
[cloud-user@rhel9-lime-lobster-92 ~]$ cd /var/run/secrets/kubernetes.io/serviceaccount/
[cloud-user@rhel9-lime-lobster-92 serviceaccount]$ ll
total 12
-rw-r--r--. 3 cloud-user 107 8651 Mar 21 09:02 ca.crt
-rw-r--r--. 3 cloud-user 107 16 Mar 21 09:02 namespace
-rw-r--r--. 3 cloud-user 107 1212 Mar 21 09:02 service-ca.crt
-rw-r-----. 3 cloud-user 107 1367 Mar 21 09:02 token
Again, for those familiar with the filesystem structure within a standard Kubernetes pod, the contents of this directory will hold no surprises. However, do note the file permissions here: the user owner of the files is a non-privileged user — in this case cloud-user.
» Installing Vault Agent
Now that you have access to the token representing the Kubernetes service account, you need to follow the distribution-specific instructions for installing the Vault binary on this host. That will give you access to Vault Agent functionality:
[cloud-user@rhel9-lime-lobster-92 serviceaccount]$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
[cloud-user@rhel9-lime-lobster-92 serviceaccount]$ sudo dnf -y install vault
…
Installed:
vault-1.19.0-1.x86_64
Complete!
» Testing the connection
Now that you have verified that the service account credentials are being mounted correctly and you have installed the Vault binary, you can use the token directly to authenticate to Vault. This will ensure your authentication method is set up correctly and that the service account token enables you to authenticate your workload as expected before you move on to automating this process with Vault Agent.
First, you can navigate to the service account credential directory if you’re not already there. This just makes it easier to reference the token file for the connectivity test:
[cloud-user@rhel9-lime-lobster-92 ~]$ cd /var/run/secrets/kubernetes.io/serviceaccount/
Next, you’ll need to set the VAULT_ADDR and VAULT_NAMESPACE environment variables to reference your Vault cluster and the Vault namespace where your Kubernetes auth method is mounted:
[cloud-user@rhel9-lime-lobster-92 serviceaccount]$ export VAULT_ADDR=https://my-hcp-vault-cluster-00000000.00000000.z1.hashicorp.cloud:8200
[cloud-user@rhel9-lime-lobster-92 serviceaccount]$ export VAULT_NAMESPACE=admin
Now all that remains is to log in to Vault using the Kubernetes auth method from the Vault command line interface:
[cloud-user@rhel9-lime-lobster-92 serviceaccount]$ vault write auth/ocp/login role=rhelvm jwt=$(cat token)
Key                                        Value
---                                        -----
token                                      hvs...
token_accessor                             tUCn0cEcD7HDzkBEKjPLh8u1.h0YWG
token_duration                             1h
token_renewable                            true
token_policies                             ["default" "standard-policy"]
identity_policies                          []
policies                                   ["default" "standard-policy"]
token_meta_role                            rhelvm
token_meta_service_account_name            rhel-vm
token_meta_service_account_namespace       secret-zero-demo
token_meta_service_account_secret_name     n/a
token_meta_service_account_uid             827b5857-2bf6-435a-b258-18fdde9a53eb
If everything is configured correctly, a successful response payload should be received from Vault.
» Configuring Vault Agent
Now that you have verified that the service account token allows you to authenticate to Vault, you can configure Vault Agent features as needed, using HashiCorp Configuration Language (HCL). Let’s take a closer look at what each section is doing here.
auto_auth {
  method {
    type       = "kubernetes"
    mount_path = "auth/ocp"
    namespace  = "admin"
    config = {
      role = "rhelvm"
    }
  }

  sink {
    type     = "file"
    wrap_ttl = "10m"
    config = {
      path = "mary-rose.txt"
    }
  }
}
vault {
  address   = "https://my-hcp-vault-cluster-00000000.00000000.z1.hashicorp.cloud:8200"
  namespace = "admin/tenant"
}
In this section, you can configure the Vault cluster that Vault Agent will communicate with, and the details around the Kubernetes auth method such as the Vault namespace and mount point. You can also configure the specific role on the Kubernetes auth method that the agent should use — in this case, the rhelvm role configured earlier — but you may well use a different naming convention to better represent your workloads.
template_config {
  static_secret_render_interval = "1m"
  exit_on_retry_failure         = true
}

template {
  source      = "database-credentials.ctmpl"
  destination = "database-credentials.txt"
  perms       = "0744"
}
In this section of the configuration, the details of the secret data and how it should be rendered are captured. In this case, the secret is located at a specific path in Vault’s key-value store:

The Vault Agent configuration tells the agent that this secret needs to be rendered to database-credentials.txt using a template defined in database-credentials.ctmpl.
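For completeness, the secret consumed by this template could have been created with the KV v2 CLI. This is an illustrative sketch only; the field values mirror the rendered output shown later in this post, and note that the CLI path omits the data/ segment that appears in template paths:

```shell
# Write the demo database secret to the KV v2 engine mounted at kvv2/
vault kv put kvv2/rhelvm/database/postgres \
    host=rhel-postgres-again \
    name=my-rhel-ocp-virt-db \
    user=rhel-user \
    password=rhel-password
```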
This Vault Agent template is written in Consul Template markup and is structured as follows:
host = {{ with secret "kvv2/data/rhelvm/database/postgres" }}{{ .Data.data.host }}{{ end }}
database = {{ with secret "kvv2/data/rhelvm/database/postgres" }}{{ .Data.data.name }}{{ end }}
user = {{ with secret "kvv2/data/rhelvm/database/postgres" }}{{ .Data.data.user }}{{ end }}
password = {{ with secret "kvv2/data/rhelvm/database/postgres" }}{{ .Data.data.password }}{{ end }}
» Running Vault Agent
Once the Vault Agent configuration and the Vault Agent template are in place, you can run Vault Agent to start retrieving secret data from Vault. If you still have the VAULT_ADDR and VAULT_NAMESPACE environment variables set, don’t forget to unset them before running the agent.
[cloud-user@rhel9-lime-lobster-92 agent]$ vault agent -config=agent.hcl
WARNING! VAULT_ADDR and -address unset. Defaulting to https://127.0.0.1:8200.
==> Note: Vault Agent version does not match Vault server version. Vault Agent version: 1.19.0, Vault server version: 1.18.4+ent
==> Vault Agent started! Log data will stream in below:
==> Vault Agent configuration:
Api Address 1: http://bufconn
Cgo: disabled
Log Level:
Version: Vault v1.19.0, built 2025-03-04T12:36:40Z
Version Sha: 7eeafb6160d60ede73c1d95566b0c8ea54f3cb5a
2025-03-21T10:03:32.953-0400 [INFO] agent.sink.file: creating file sink
2025-03-21T10:03:32.953-0400 [INFO] agent.sink.file: file sink configured: path=mary-rose.txt mode=-rw-r----- owner=1000 group=1000
2025-03-21T10:03:32.954-0400 [INFO] agent.exec.server: starting exec server
2025-03-21T10:03:32.954-0400 [INFO] agent.exec.server: no env templates or exec config, exiting
2025-03-21T10:03:32.954-0400 [INFO] agent.sink.server: starting sink server
2025-03-21T10:03:32.954-0400 [INFO] agent.template.server: starting template server
2025-03-21T10:03:32.955-0400 [INFO] agent: (runner) creating new runner (dry: false, once: false)
2025-03-21T10:03:32.954-0400 [INFO] agent.auth.handler: starting auth handler
2025-03-21T10:03:32.955-0400 [INFO] agent.auth.handler: authenticating
2025-03-21T10:03:32.955-0400 [INFO] agent: (runner) creating watcher
2025-03-21T10:03:33.668-0400 [INFO] agent.auth.handler: authentication successful, sending token to sinks
2025-03-21T10:03:33.668-0400 [INFO] agent.auth.handler: starting renewal process
2025-03-21T10:03:33.668-0400 [INFO] agent.template.server: template server received new token
2025-03-21T10:03:33.668-0400 [INFO] agent: (runner) stopping
2025-03-21T10:03:33.668-0400 [INFO] agent: (runner) creating new runner (dry: false, once: false)
2025-03-21T10:03:33.669-0400 [INFO] agent: (runner) creating watcher
2025-03-21T10:03:33.669-0400 [INFO] agent: (runner) starting
2025-03-21T10:03:33.874-0400 [INFO] agent.auth.handler: renewed auth token
2025-03-21T10:03:34.161-0400 [INFO] agent: (runner) rendered "database-credentials.ctmpl" => "database-credentials.txt"
2025-03-21T10:03:34.171-0400 [INFO] agent.sink.file: token written: path=mary-rose.txt
As you can see from the output, Vault Agent authenticates to Vault automatically using the Kubernetes auth method with the service account token, and renders the database-credentials template to a file. Inspecting the output gives us:
[cloud-user@rhel9-lime-lobster-92 agent]$ cat database-credentials.txt
host = rhel-postgres-again
database = my-rhel-ocp-virt-db
user = rhel-user
password = rhel-password
The rendered data matches the content displayed in Vault’s UI previously.
» Templating the virtual machine configuration
One of the features of Red Hat OpenShift Virtualization is its ability to allow you to template your configurations into Kubernetes objects using VM templates. This means you can encapsulate all of the virtual machine configuration steps described in this blog post within a single template and present it as a User Template within Red Hat OpenShift’s user interface. You can even use the provided templates, such as the ones for RHEL, as a starting point for your VM templates.
In the following example, the provided RHEL 9 template has been copied, and several updates have been made. A new parameter for the Kubernetes service account name to be mounted has been added, along with the requisite disk definitions and cloud-init updates required to mount the disk. Additionally, the HashiCorp RPM repository has been added as a package source, and Vault has been automatically installed.
kind: Template
apiVersion: template.openshift.io/v1
metadata:
  name: rhel9-server-medium-sa
  namespace: secret-zero-demo
...
      devices:
        disks:
          - disk:
              bus: virtio
            name: rootdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          - disk: {}
            name: '${SERVICE_ACCOUNT_NAME}-disk'
...
      volumes:
        - dataVolume:
            name: '${NAME}'
          name: rootdisk
        - cloudInitNoCloud:
            userData: |-
              #cloud-config
              user: cloud-user
              password: ${CLOUD_USER_PASSWORD}
              chpasswd: { expire: False }
              yum_repos:
                hashicorp:
                  baseurl: "https://rpm.releases.hashicorp.com/RHEL/$releasever/$basearch/stable"
                  gpgcheck: true
                  gpgkey: https://rpm.releases.hashicorp.com/gpg
                  name: hashicorp
              packages:
                - vault
              bootcmd:
                - "mkdir -p /var/run/secrets/kubernetes.io/serviceaccount"
                - "mount -o,uid=1000 /dev/sda /var/run/secrets/kubernetes.io/serviceaccount"
          name: cloudinitdisk
        - name: '${SERVICE_ACCOUNT_NAME}-disk'
          serviceAccount:
            serviceAccountName: '${SERVICE_ACCOUNT_NAME}'
parameters:
  - name: NAME
    description: VM name
    generate: expression
    from: 'rhel9-[a-z0-9]{16}'
  - name: SERVICE_ACCOUNT_NAME
    description: Name of the Service Account to mount into the virtual machine
    required: true
...
Sections omitted for brevity.
As you can see, this new template appears in the Red Hat OpenShift user interface just like any other:

When you select the new template, you can immediately see the new SERVICE_ACCOUNT_NAME parameter, configured as a required value:

Because the template now encapsulates all the steps you previously undertook, you can simply select Quick create VirtualMachine and your new virtual machine will be provisioned, have the named service account mounted, and have Vault installed.
Again, the configuration can be verified using the steps previously described. In this example, the output is as follows:
[cloud-user@rhel9-ivory-pony-56 ~]$ dnf repolist
repo id repo name
hashicorp hashicorp
[cloud-user@rhel9-ivory-pony-56 ~]$ vault version
Vault v1.19.0 (7eeafb6160d60ede73c1d95566b0c8ea54f3cb5a), built 2025-03-04T12:36:40Z
[cloud-user@rhel9-ivory-pony-56 ~]$ ll /var/run/secrets/kubernetes.io/serviceaccount
total 12
-rw-r--r--. 1 cloud-user 107 8647 Mar 24 18:08 ca.crt
-rw-r--r--. 1 cloud-user 107 16 Mar 24 18:08 namespace
-rw-r--r--. 1 cloud-user 107 1212 Mar 24 18:08 service-ca.crt
-rw-r-----. 1 cloud-user 107 1365 Mar 24 18:08 token
[cloud-user@rhel9-ivory-pony-56 ~]$ export VAULT_ADDR=https://my-hcp-vault-cluster-00000000.00000000.z1.hashicorp.cloud:8200
[cloud-user@rhel9-ivory-pony-56 ~]$ export VAULT_NAMESPACE=admin
[cloud-user@rhel9-ivory-pony-56 ~]$ vault write auth/ocp/login role=rhelvm jwt=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
Key                                        Value
---                                        -----
token                                      hvs...
token_accessor                             yhaj4W6wtiEyfg4DiAWnFp6k.h0YWG
token_duration                             1h
token_renewable                            true
token_policies                             ["default" "standard-policy"]
identity_policies                          []
policies                                   ["default" "standard-policy"]
token_meta_role                            rhelvm
token_meta_service_account_name            rhel-vm
token_meta_service_account_namespace       secret-zero-demo
token_meta_service_account_secret_name     n/a
token_meta_service_account_uid             827b5857-2bf6-435a-b258-18fdde9a53eb
» Next steps
The steps defined in this blog are intended to demonstrate the basic concepts of authenticating Red Hat OpenShift Virtualization VMs to HashiCorp Vault using trusted Kubernetes identities, just like you would for any other Kubernetes workload.
There are several clear improvements that could be made to this process:
The first is automating the deployment of VMs with this configuration using tools such as HCP Terraform / Terraform Enterprise, Red Hat Ansible Automation Platform, or GitOps workflows driven by popular solutions such as ArgoCD. These tools would allow this workflow to easily scale with organizational demand.
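As a sketch of what that might look like, the Vault role created earlier could be managed declaratively with the Terraform Vault provider. This assumes the ocp auth mount already exists; the argument values mirror those used in this post:

```hcl
# Sketch: declaratively manage the Kubernetes auth role created earlier
resource "vault_kubernetes_auth_backend_role" "rhelvm" {
  backend                          = "ocp"
  role_name                        = "rhelvm"
  bound_service_account_names      = ["rhel-vm"]
  bound_service_account_namespaces = ["secret-zero-demo"]
  token_policies                   = ["default", "standard-policy"]
  token_ttl                        = 3600 # seconds, matching the 1h TTL above
}
```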
Second, building VM images for the platform using Packer would allow the Vault binary to be preinstalled and would allow your organization to enact any standard configurations or hardening more gracefully than basic cloud-init.
Third, configuring Vault Agent itself as a systemd unit would ensure that it remained available and active, providing your applications and processes with secret data across restarts of the virtual machine.
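A minimal unit file for that purpose might look like the following sketch; the config path and user name are assumptions based on the examples in this post:

```ini
# /etc/systemd/system/vault-agent.service (illustrative)
[Unit]
Description=HashiCorp Vault Agent
After=network-online.target
Wants=network-online.target

[Service]
# Run as the unprivileged user that can read the mounted service account token
User=cloud-user
ExecStart=/usr/bin/vault agent -config=/home/cloud-user/agent/agent.hcl
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, running `sudo systemctl enable --now vault-agent` would start the agent and keep it active across reboots.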
As you can see, the use of Kubernetes service accounts allows us to securely authenticate Red Hat OpenShift Virtualization workloads with HashiCorp Vault using an established form of machine identity. This opens up the possibility of consuming secrets from any of Vault’s secrets engines — not just static secrets as shown here, but dynamic database credentials, cloud provider credentials, PKI certificates, and much more — in a workflow that no longer relies on secret zero.