
HashiCorp Terraform: Modules as Building Blocks for Infrastructure

Operators adopt tools like HashiCorp Terraform to provide a simple workflow for managing infrastructure: users write configurations and run a few commands to test and apply changes. However, infrastructure management often extends beyond simple configuration, and we need a workflow to build, publish, and share customized, validated, and versioned configurations. Successful implementation of this workflow starts with reusable configuration. In this post, we will look at modules, the problems they solve, and how you can leverage them to form the building blocks of your infrastructure.

Terraform modules provide an easy way to abstract common blocks of configuration into reusable infrastructure elements. To write a module, you apply the same concepts that you would for any configuration. Modules are collections of .tf files containing resources, input variables, and outputs, which exist outside the root folder of your configuration.

Infrastructure code, like application code, benefits from a well-managed approach consisting of three steps: write, test, and refactor. Modules help with all three, as they significantly reduce duplication, enable isolation, and enhance testability.

Modules also help with understanding. Consider the example of a Consul cluster: if you were going to create a Consul cluster in GCP, a minimal configuration would require an instance group manager, an instance template, and multiple firewall rules.

Once you codify these resources in a Terraform config, how do you identify that set of resources as a Consul cluster? You could use a common naming convention, or you could read the configuration and attempt to decipher its intent; however, none of this is particularly obvious. If we were to write these resources in a Terraform configuration, they would look something like this:

resource "google_compute_instance_group_manager" "consul_server" {
  name = "${var.cluster_name}-ig"

# ...
}

resource "google_compute_instance_template" "consul_server_private" {
  count = "${1 - var.assign_public_ip_addresses}"

  # ...
}

resource "google_compute_firewall" "allow_intracluster_consul" {
  name    = "${var.cluster_name}-rule-cluster"
  network = "${var.network_name}"

  # ...
}

resource "google_compute_firewall" "allow_inboud_http_api" {
  count = "${length(var.allowed_inbound_cidr_blocks_dns) + length(var.allowed_inbound_tags_dns) > 0 ? 1 : 0}"

  # ...
}

The config in the snippet is somewhat understandable; however, when you look at the unabridged version, it takes time to infer its meaning. The full implementation is in main.tf of the hashicorp/terraform-google-consul repository on GitHub.

Personally, I feel that without prior knowledge it is difficult to ascertain that this configuration is creating a Consul cluster.

As with software, you spend more time reading config than writing it, and this understanding is one of the leading benefits of declarative infrastructure code. The key to maintainable infrastructure code is ensuring it is understandable at a later date, potentially by a reader who was not the original author.

What about duplication? If you manage a large infrastructure, it is possible you have multiple clusters. If the resources for creating a cluster are duplicated across multiple Terraform configurations, then every change has to be made multiple times. If you work in a globally distributed team, coordinating efforts to avoid duplication can be challenging: Team A builds their application configuration which includes Consul; Team B builds another application configuration which also includes Consul. You now have duplicated effort and wasted time, even before we get into discussions about consistency in the code base.

Another major concern is testing infrastructure configuration. The more complex your configuration and the more resources it contains, the more challenging and time-consuming it becomes to test. Small modular components allow you to isolate testing. For example, say you want to upgrade the version of Consul and need to modify the instance template. When the resources that define your Consul cluster are not modularized, you potentially have to create the entire stack to test one small component. As impressive as Terraform is at analyzing dependencies and parallelizing resource creation, it still takes time, and if you are testing and tweaking infrastructure changes you may go through the apply/destroy cycle many times, so you need to make sure this is as fast as possible.

Lastly, versioning: with a single mono-config, you have to pin all of the versions together. For example, say you have a Nomad cluster and a Consul cluster and are currently looking at upgrading them; when you do not take a modular approach, you are bound to a sequential workflow. Maintaining multiple branches while taking into consideration all the different possible combinations can be challenging. It also makes the process of merging and rolling back any changes far more complicated than just changing a version number.

»Modules to the rescue

Let's see how modules can help with all these issues. A module is a collection of Terraform files which exists outside of your root config folder; this could be a sub-folder, a Git repository, or the Terraform Module Registry. Modules can also have input and output variables, which act as the interface to other Terraform elements and allow you to design the right level of abstraction.
Terraform only parses files with the .tf extension in the folder where you execute the terraform command; it does not recurse into subfolders. This makes it incredibly easy to divide up your existing configuration into modules. In fact, the folder in which you run your terraform command is also a module; it just so happens to be the root module.

Taking our example from before, below is the Terraform module representation of the cluster.

module "consul" {
  source  = "hashicorp/consul/google"
  version = "0.0.1"

  consul_server_source_image     = "abc1235.img"
  consul_client_cluster_tag_name = "app_cluster"
}

Immediately we have the right level of abstraction to understand what the configuration is doing. It tells us that this Terraform config defines a Consul cluster, and that we are using version 0.0.1 of the module. The module also sets reasonable defaults for its resources. The consumer can, of course, override the defaults by explicitly setting a module variable; however, this is not required.

We have also solved our duplication problem: because we have contained the configuration inside a module, it can easily be reused by adding additional module blocks pointing to the same source.
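For example, two teams can each declare their own instance of the same module; a sketch with hypothetical names, reusing the consul_client_cluster_tag_name variable from the earlier snippet:

module "consul_team_a" {
  source  = "hashicorp/consul/google"
  version = "0.0.1"

  consul_client_cluster_tag_name = "team_a_cluster"
}

module "consul_team_b" {
  source  = "hashicorp/consul/google"
  version = "0.0.1"

  consul_client_cluster_tag_name = "team_b_cluster"
}

Any fix made to the module source now benefits both consumers at once.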

Moreover, we can also solve our versioning problem; modules accept a variety of source options which allow versioning: GitHub, Bitbucket, and the Terraform Module Registry, among others. You can pin a module to a tag or branch, allowing you to upgrade specific instances of a module without needing to force a particular version on all consumers. It also allows you to efficiently run different versions of a module in a staging environment to facilitate simple upgrade testing. See the Module Sources documentation for the full list of options.
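As a sketch, one consumer could pin to a Git tag (the tag name here is hypothetical) while other consumers stay on the registry version:

module "consul_staging" {
  # ref can point at any tag or branch in the repository
  source = "github.com/hashicorp/terraform-google-consul?ref=v0.0.2-rc1"
}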

From a testing standpoint, we have localized the resources relating to our Consul cluster, which means that we can test these resources in an isolated environment.
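For example, a throwaway test configuration can stand up just the cluster and nothing else from the stack; a minimal sketch, assuming a dedicated test project:

# test/main.tf — creates only the Consul cluster under test
provider "google" {
  project = "my-test-project" # hypothetical project ID
  region  = "us-central1"
}

module "consul_under_test" {
  source  = "hashicorp/consul/google"
  version = "0.0.1"
}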

»Creating Modules

The best thing about modules is that there is very little you need to learn; if you are already writing Terraform, then you can already write modules.

Like the root module, a module can specify inputs and outputs using variable and output stanzas. Let's create a simple module and see how that works.

I am setting up my project with the following structure: a main.tf which contains my provider, and a subfolder modules/instance which contains the Terraform configuration to create an instance in GCP.

$ tree
.
├── README.md
├── main.tf
└── modules
    └── instance
        ├── README.md
        ├── main.tf
        ├── outputs.tf
        └── variables.tf

In our file modules/instance/main.tf, we create the resources which define our instance. We do not need to add a provider, as this is inherited from the root module.

resource "google_compute_instance" "default" {
  name         = "test"
  machine_type = "${var.machine_type}"
  zone         = "${var.zone}"

  boot_disk {
    initialize_params {
      image = "${var.boot_image}"
    }
  }

  network_interface {
    network = "default"
  }

  metadata {
    foo = "bar"
  }

  metadata_startup_script = "echo hi > /test.txt"

  service_account {
    scopes = ["userinfo-email", "compute-ro", "storage-ro"]
  }
}

To specify the image which we would like to use for our instance, we can create another file, modules/instance/variables.tf, and define variables for boot_image, machine_type, and zone.

variable "boot_image" {
  description = "Image ID for the instance"
  default = "debian-cloud/debian-8"
}

variable "machine_type" {
  description = "Machine type for the instance"
  default = "n1-standard-1"
}

variable "zone" {
  description = "Zone to deploy the instance into"
  default = "us-central1-a"
}

We also need to return some information about this resource to our root module, so let's create an output in modules/instance/outputs.tf:

output "id" {
  value = "${google_compute_instance.default.instance_id}"

All we need to do now is reference our module from our root config; to do this, we use the module stanza. In our simple example, we can write this as follows:

provider "google_cloud" { }

module "instance" {
  source = "../modules/instance"

  boot_image = "ubuntu-os-cloud/ubuntu-1604-lts"
}

We declare the module and set the source to our local folder; we set module variables by adding keys and values to the module stanza. Because our module variables have reasonable defaults, we do not need to set anything at all. However, we can choose to override them, and here I am changing the boot image from the default debian-8 to ubuntu-1604-lts.

The final step is to expose the instance ID of our created compute instance. Since we defined an output variable, we can obtain the instance ID using the following simple interpolation syntax.

output "instance_id" {
    value = "${modules.instance.id}"
}

The syntax is very straightforward: module.[name].[output]. Note that you cannot access the individual resources inside a module without declaring them as outputs; for example, this is invalid syntax: ${module.instance.google_compute_instance.default.id}. Variables and outputs define the contract for your module and abstract the complexity within; the concept is very similar to that of public and private accessors present in many programming languages.
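If the root module later needs another attribute from the instance, the approach is to extend the contract rather than reach inside the module. As a sketch, to expose the instance's self_link, add a second output to modules/instance/outputs.tf (the output name here is my choice):

output "self_link" {
  value = "${google_compute_instance.default.self_link}"
}

The root module can then reference it as ${module.instance.self_link}.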

We can now run terraform init to initialize our config and download our module to the local cache, then run terraform plan and terraform apply as usual.
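The workflow looks something like this:

$ terraform init    # initialize the config and cache the module
$ terraform plan    # review the instance the module will create
$ terraform apply   # create the resources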

For the time being, we can leave this module in our central repository; we can take a pragmatic approach and move it out when needed.

»Summary

This post has explained the concepts behind why modules exist and how to use them. We have also looked at the simple syntax required to create a module.

With a little planning, you can take a module-first approach to all of your Terraform configurations, reducing your maintenance burden and making your infrastructure management more efficient. Modules can also help you collaborate with people across an organization by providing reusable abstractions.

If you are looking for inspiration, check out the Terraform Module Registry; or if you are already using a module which would be useful outside your own organization, why not add it to the Terraform Module Registry for others to consume?

For managing private infrastructure code, the Terraform Enterprise Module Registry provides organizations with a workflow to allow IT operators to codify, collaborate, and publish validated modular templates for provisioning cloud infrastructure that can be used by developers or other operators across large organizations. For more information or to request a free trial, visit https://www.hashicorp.com/products/terraform.

