
HashiCorp Nomad 0.6

We are pleased to announce the release of HashiCorp Nomad 0.6. Nomad is a distributed, scalable, and highly available cluster manager and scheduler designed for both microservice and batch workloads.

Nomad 0.6 includes a number of new features focused on improving job management and configuration, as well as many improvements and bug fixes. Highlights include:

  • Job deployments with rolling, canary, and blue/green upgrade strategies
  • Job history and the ability to revert to prior job versions
  • Dynamic environment variables rendered from templates
  • Automatic advertisement of container IP addresses with Consul

We are also pleased to announce that the Nomad ecosystem now includes a version of Apache Spark that natively integrates Nomad as a Spark cluster manager and scheduler. See our Running Apache Spark on Nomad blog post for additional details.

»Job deployments

Nomad 0.6 introduces a mechanism to safely transition between versions of an application using rolling, canary, or blue/green upgrades. This new functionality can also automatically revert to the latest stable job version if a deployment fails. To perform job updates with one of these strategies, the job specification is annotated with an update stanza. The example below allows Nomad to upgrade the job two allocations at a time and requires that each new allocation be healthy for at least thirty seconds before the rollout continues.

group "api-server" {
  # Run 10 instances of the api server
  count = 10

  update {
    # Perform two parallel updates at a time.
    max_parallel = 2

    # Ensure that the newly placed allocations are healthy for at least 30
    # seconds before unblocking any more updates.
    min_healthy_time = "30s"

    # Give the allocation at most 5 minutes to become healthy before failing
    # the deployment.
    healthy_deadline = "5m"

    # If the new allocations are unhealthy, auto-revert back to the latest
    # stable job.
    auto_revert = true
  }

  # Use Docker to run the API server.
  task "api-server" {
    driver = "docker"

    config {
      image = "api-server:0.1"
    }
  }

  ...
}

When a job is updated in a way that requires the creation of new allocations, Nomad applies the group's update strategy. In the example above, if the group definition changed the image from api-server:0.1 to api-server:0.2 and was resubmitted, Nomad would make the change using a rolling upgrade, creating two new allocations running api-server:0.2 and then waiting for them to be healthy for thirty seconds before continuing. If the newly placed allocations fail their health checks, Nomad marks the deployment as failed and, when auto_revert is set, rolls the job back to the most recent version whose allocations were all healthy.
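
Each such update creates a deployment that can be inspected while the rollout is in flight. As a sketch, the deployment subcommands introduced alongside this feature can be used to watch progress (the deployment ID below is illustrative):

$ nomad deployment list
$ nomad deployment status 810d5c19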

An allocation is considered healthy if all its tasks enter the running state and all registered service checks are passing. If these health metrics are not sufficient, Nomad also supports canary upgrades. To canary a change, the update stanza can be annotated as follows:

update {
  # Test changes by deploying two canaries first and once they are promoted
  # continue the rolling update.
  canary = 2

  # Same update fields as above
  ...
}

Continuing the example, if the image were updated and the job resubmitted with the above update strategy, Nomad would keep the 10 allocations running the api-server:0.1 container and create two canary allocations running api-server:0.2. Once the operator confirms the canaries are performing properly, they can be promoted using the following command:

$ nomad job promote api-server

After promoting the canaries, Nomad initiates a rolling upgrade to replace the allocations running the older image. Blue/green deployments can be performed by setting the canary count equal to the group's desired count. This results in a full blue and a full green set running on the cluster until the operator promotes or rolls back the updated version.
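
Continuing the example above, a blue/green rollout for the ten-instance group could be sketched as follows:

update {
  # Create a complete set of new allocations (green) alongside the
  # existing set (blue); promoting replaces the old set, while
  # reverting discards the new one.
  canary = 10

  # Same update fields as above
  ...
}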

For a more detailed example of using the update stanza, please see the operating a job guide and update stanza documentation.

»Job history and reverting

Nomad 0.6 now tracks multiple versions of a job, allowing operators to inspect recent changes. As operators submit new versions of a job, Nomad automatically increments the job's Version field, allowing the various versions to be inspected as follows:

# The -p flag shows the diff between job versions.
$ nomad job history -p example
Version     = 2
Stable      = true
Submit Date = 07/25/17 00:08:57 UTC
Diff        =
+/- Job: "example"
+/- Task Group: "cache"
  +/- Task: "redis"
    +/- Resources {
          CPU:      "500"
          DiskMB:   "0"
          IOPS:     "0"
      +/- MemoryMB: "256" => "512"
        }

Version     = 1
Stable      = true
Submit Date = 07/25/17 00:08:45 UTC
Diff        =
+/- Job: "example"
+/- Task Group: "cache"
  +/- Task: "redis"
    +/- Config {
      +/- image:           "redis:3.2" => "redis:4.0.1"
          port_map[0][db]: "6379"
        }

Version     = 0
Stable      = true
Submit Date = 07/25/17 00:08:33 UTC

Further, Nomad now supports reverting between job versions. This allows operators to quickly recover if a job is misbehaving:

$ nomad job revert example 1
==> Monitoring evaluation "98dd3a0a"
    Evaluation triggered by job "example"
    Evaluation within deployment: "810d5c19"
    Allocation "399ad719" created: node "24dc095f", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "98dd3a0a" finished with status "complete"

For more detail see the documentation for the history and revert commands.

»Dynamic environment variables

Nomad 0.5 introduced a template stanza that provides a convenient way to include configuration files populated from Consul data, Vault secrets, or general configuration values within a Nomad task. While this functionality is incredibly powerful, not every application accepts configuration files.

Nomad 0.6 enhances the template stanza with the introduction of an env parameter. When set, Nomad renders the template, parses the resulting key/value pairs, and injects them as dynamic environment variables into the started task.

The example below shows a template that populates environment variables with data from both Consul and Vault. The template could also be stored outside of the job file and downloaded separately.

task "example" {
  # ...
  template {
    data = <<END
  LOG_LEVEL={{key "service/geo-api/log-verbosity"}}
  API_KEY={{with secret "secret/geo-api-key"}}{{.Data.key}}{{end}}
    END

    destination   = "secrets/config"
    env = true
  }
  # ...
}

The task's environment would then contain variables like the following:

LOG_LEVEL=DEBUG
API_KEY=12345678-1234-1234-1234-1234-123456789abc
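
For illustration, the Consul key and Vault secret referenced by the template could be populated as follows (the paths and values are hypothetical, matching the example above):

$ consul kv put service/geo-api/log-verbosity DEBUG
$ vault write secret/geo-api-key key=12345678-1234-1234-1234-1234-123456789abc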

This enables twelve-factor style environment variable based configuration while keeping all of the familiar features and semantics of Nomad templates.

For more information see the template stanza documentation.

»Automatic advertisement of container IP addresses with Consul

Nomad 0.6 enhances the Consul integration for users of overlay networks and the Docker driver by automatically advertising the routable address assigned by network plugins rather than the host network. This alleviates the need for the container to register itself with Consul independently using a tool like Registrator.
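
As a minimal sketch, a Docker task attached to an overlay network (here, hypothetically, Weave) declares its service as usual, and Nomad registers the container's plugin-assigned address in Consul:

task "api-server" {
  driver = "docker"

  config {
    image = "api-server:0.1"

    # Attach the container to an overlay network; the plugin assigns
    # the container a routable IP address.
    network_mode = "weave"
  }

  service {
    name = "api-server"

    # Nomad 0.6 advertises the container's overlay address to Consul
    # automatically, so a separate registration tool is not needed.
  }
}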

For more information see the Docker driver documentation.

»Other bug fixes and improvements

In addition to these new features, there are many bug fixes and other improvements. Please review the v0.6.0 changelog for a detailed list of changes and be sure to read the upgrade guide.

»Nomad 0.6 Webinar

Recently, Armon Dadgar, co-founder and co-CTO at HashiCorp, and Caius Howcroft, Director of Compute and Data Infrastructure at Citadel, discussed how Nomad enables an organization to run any workload on any infrastructure, with an emphasis on flexibility, ease-of-use, scalability, and performance. The video features:

  • A deep dive into Nomad by Armon Dadgar
  • A live demo that includes all the latest Nomad capabilities
  • How Nomad enables Citadel to run batch analytics at high throughput across multiple public clouds by Caius Howcroft

The webinar gives an overview of the new features in Nomad 0.6 and how Citadel leverages the capabilities of Nomad.

For more information on Nomad or to get started, refer to https://www.nomadproject.io/.

