Terraform Enterprise: Understanding Workspaces And Modules

Learn why you would want to decompose your monolithic Terraform configurations into different workspaces for networking, security, database, and more, and see how you can break up and reuse infrastructure code as modules.


  • Teddy Sacilowski, Sr. Enterprise Architect, HashiCorp


Today we're going to be talking about Terraform Enterprise. Specifically, some of the things to consider as you begin to adopt Terraform Enterprise in your organization.

One of the first things users encounter when they start working in Terraform Enterprise is the workspace. Workspaces exist in open source Terraform, but in Terraform Enterprise they're the core logical construct in which we operate. To reason about how to structure workspaces, it's helpful to understand what they're composed of, so let's do that.

The first thing we need to do when we create a workspace is to associate it with a set of Terraform configuration files. That configuration can live in version control; we have built-in integrations with GitHub, GitLab, and Bitbucket right out of the box. But you can also upload configuration manually, using the API or the CLI, for example.
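As a rough sketch of that association, a configuration pushed from the CLI might point at a Terraform Enterprise workspace through the remote backend. The hostname, organization, and workspace name below are placeholders, not values from the talk:

```hcl
# Point the Terraform CLI at a workspace on a Terraform Enterprise
# instance. All names here are illustrative placeholders.
terraform {
  backend "remote" {
    hostname     = "tfe.example.com"
    organization = "my-org"

    workspaces {
      name = "app1-network"
    }
  }
}
```

With this block in place, `terraform init` connects the local working directory to that workspace, and subsequent runs are queued there rather than executed purely locally.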

Assigning variables

The next thing we need to do, after we have our configuration, is to provide values for the variables those configuration files expect. Now that we have our configuration and our variables, we want to do something with them: we want to apply these changes to our environment.
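Concretely, the configuration declares the variables and the workspace supplies their values (as Terraform variables or environment variables). A minimal sketch, with hypothetical variable names:

```hcl
# Input variables declared in the configuration; each workspace
# assigns its own values. Names and defaults are illustrative.
variable "environment" {
  description = "Deployment environment (e.g., dev, staging, prod)"
  type        = string
}

variable "instance_count" {
  description = "Number of application instances to run"
  type        = number
  default     = 2
}
```

A workspace for staging and a workspace for production can then share identical configuration while assigning different values to `environment` and `instance_count`.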

If you're familiar with how open source Terraform works, you know we split things up into a plan phase and an apply phase. In the plan, we see the steps Terraform is going to take in our environment before making any changes: all of the resources it's going to create, modify, or delete. Then, in the apply, we actually make those changes to our environment.

In Terraform Enterprise, within the concept of the workspace, we wrap this whole sequence up in what's called a run. And because we've separated these two phases, the plan and the apply, we can serialize through them and add additional steps. In Terraform Enterprise we also add a phase for Sentinel.

If you're not familiar with Sentinel, it's HashiCorp's policy-as-code framework. Essentially, it lets us create guardrails, or sandboxes, around the things you can and cannot do with Terraform.
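To make that concrete, here is a small Sentinel policy sketch in the style of HashiCorp's `tfplan` examples. It checks that any AWS instances in a plan use an allowed instance type; the allowed list is an assumption for illustration:

```sentinel
import "tfplan"

# Guardrail: only permit instances from an approved size list.
allowed_types = ["t3.micro", "t3.small"]

main = rule {
  all tfplan.resources.aws_instance as _, instances {
    all instances as _, r {
      r.applied.instance_type in allowed_types
    }
  }
}
```

A policy like this runs between the plan and apply phases of a run, so a non-compliant plan never reaches the environment.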

Adding more functionality

Because we have this separation of distinct phases, we're also able to add new steps as we add more functionality to Terraform Enterprise. And because we've wrapped these steps in the logical construct of a run, we're able to queue up additional runs so that we're not clobbering other people's resources.

Then finally, after a run completes, and Terraform has gone through the steps of planning, doing its policy checks, and applying the changes to the environment, we have the output of a state file. It's important to highlight the distinction here: every workspace has a deliberate one-to-one mapping between its configuration and its state file.

The last thing we do with a workspace is wrap this entire thing in a set of permissions. Now that I've drawn this out, we have everything we need to start reasoning about how to structure things in Terraform Enterprise, and how to decompose our infrastructure to fit this construct. So, let's work through that.

Decomposing infrastructure

When users first start with Terraform, they might write fairly monolithic code. They might define their VPCs and their subnets. Maybe they're also defining their own security group rules and their own IAM policies.

They might need some persistent data storage so they might have an RDS instance—some sort of database. They need to define some resources for their actual application at the compute level. That might be an application load balancer—maybe an autoscaling group. Maybe they're even using Nomad or Kubernetes. And when I look at this, I can start to see how I might begin to break this up.

Workspace structure

So let's work backwards through our workspace structure. The last thing I talked about was permissions. I probably don't want my application developers managing their own security group rules, at least not at the global level, and probably not IAM policies either. And I probably don't want them defining the network structure that might be shared by other applications. So I start to see how I could begin to carve this out.

I have my network team managing the network layer components. I have my security ops team—and they're managing the security-related components. I typically want a smaller group of individuals to manage those things. Databases typically require a very specific set of expertise—we might have our DBAs responsible for that.

Finally, we have the components that our developers are concerned with—and that's the compute-related things, our load balancers, autoscaling groups, so on.


When we start to look at this, let's say it defines the stack for app one. More likely than not, other applications are going to use the same structure, so I can start to think of this in a grid format: app 1, app 2, app X. I can bring this same pattern over to each of these applications, and at the intersection of each app and each role, that essentially becomes a workspace, with its own set of variables specific to that application. That's one way to consider how to break things up. So, this covers permissions.
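One hedged way to sketch that app-by-role grid in code is with the real `tfe` Terraform provider, stamping out one workspace per intersection. The hostname, organization, app names, and role names are all illustrative assumptions:

```hcl
# Sketch: create one TFE workspace per app/role intersection.
# All names below are placeholders, not prescribed values.
provider "tfe" {
  hostname = "tfe.example.com" # placeholder TFE instance
}

locals {
  apps  = ["app1", "app2"]
  roles = ["network", "security", "database", "compute"]

  # setproduct builds every (app, role) pair in the grid.
  pairs = setproduct(local.apps, local.roles)
}

resource "tfe_workspace" "stack" {
  for_each     = { for p in local.pairs : "${p[0]}-${p[1]}" => p }
  name         = each.key # e.g., "app1-network"
  organization = "my-org"
}
```

Managing workspaces themselves as code like this keeps the naming convention consistent as you add app 3, app 4, and so on.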

Mapping state files

The next concept that I talked about earlier on was the idea of the state file and the fact that there's a very deliberate one-to-one mapping between a state file and a set of configurations in a workspace.

That leads us to ask some additional questions:

  • What are the implications of this?
  • What is the blast radius of the different resources in my configuration?
  • If I accidentally push a change that takes out my VPC, will I lose everything in it?

There's an additional consideration around the rate of change of these resources. I might be pushing changes out to my application-level components 10 times a day to try to deliver new features faster—but my network foundation—that’s not going to change very often at all, so I might use that as a way to start breaking things up.

Finally, we have the combination of our configuration and our variables, and this raises the question of code reuse. This is where we start thinking about modules.

Using modules

Another way to break things up is to think about how you encapsulate different types of functionality, or how you group related functionality together. If I'm deploying an application, I might need all of these resources, and that might be one module. The same goes for your network components, and so on.

As we start to evolve our usage of modules, we can start thinking about how to plug them together, developing an API contract of sorts: defining the inputs and outputs of each module and using those to tie things together, so we can avoid hard-coding values as inputs.
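That contract idea can be sketched with the `terraform_remote_state` data source: a compute workspace reads outputs published by the network workspace instead of hard-coding them. The workspace, module path, and output names here are assumptions:

```hcl
# Sketch: consume the network workspace's outputs as module inputs.
# Hostname, organization, workspace, and output names are placeholders.
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    hostname     = "tfe.example.com"
    organization = "my-org"
    workspaces = {
      name = "app1-network"
    }
  }
}

module "app" {
  source = "./modules/app" # hypothetical local module

  # The network team's outputs become the app module's inputs.
  subnet_ids = data.terraform_remote_state.network.outputs.private_subnet_ids
}
```

The network workspace only has to promise stable output names; the app module only has to declare matching input variables. That boundary is the API contract between the two teams.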

As I look at this, I can start to see how we develop a framework for thinking about how to break things up. It's important to take all of these factors into consideration so you can help your teams be more effective and drive adoption of Terraform Enterprise through the organization.

There's a lot to take into account as you adopt Terraform Enterprise in your organization. And it's a journey—and we'd love to help you with that. In the meantime, please feel free to take a look at HashiCorp.com to learn more.
