The concepts of workspaces and the Sentinel policy engine make using Terraform at scale with multiple users, teams, and organizations a lot easier.
Hi, I'm Corrigan Neralich, a solutions engineer at HashiCorp, and I'm here today to talk to you about some of the challenges that organizations encounter when they adopt open source Terraform at scale internally.
Oftentimes clients that I work with ask, "How can Terraform Enterprise solve the challenges we've encountered around security, workflow enforcement, module creation, and code reusability and discovery?" Those are the challenges I'll walk through today.
When I speak with most organizations, they face a set of key challenges:
How do we limit access to configuration files?
How do we control state?
How do we maintain security when it comes to our cloud credentials?
Within Terraform Enterprise, we have a concept of workspaces, which essentially represent a unit of management. You can think of workspaces as being tied to a single configuration or collection of resources defined using Terraform and tied to a single corresponding state file.
Now, you have a self-contained environment for managing those deployments, and you can also use the built-in RBAC system to grant or deny access to various teams, as well as dictate what level of permission each team has within the workspace. Do they have the ability to run plans but not applies? Can they run applies but not administer the workspace or modify sensitive variables and credentials?
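As a sketch of what that RBAC model looks like in practice, workspace access can itself be managed as code with HashiCorp's `tfe` provider. The organization, team, and workspace names below are hypothetical:

```hcl
# Illustrative sketch using the HashiCorp "tfe" provider;
# "my-org", the team, and the workspace names are placeholders.
resource "tfe_team" "app_devs" {
  name         = "app-developers"
  organization = "my-org"
}

resource "tfe_workspace" "app" {
  name         = "app-production"
  organization = "my-org"
}

# Grant the team plan-only access: members can queue plans in this
# workspace but cannot apply them or modify workspace variables.
resource "tfe_team_access" "devs_plan_only" {
  access       = "plan"
  team_id      = tfe_team.app_devs.id
  workspace_id = tfe_workspace.app.id
}
```

Changing `access` to `"write"` or `"admin"` would grant apply or full administrative rights instead.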
Historically, the way that Terraform files are managed is either in standalone repositories for sets of configurations, or quite possibly subdirectories within a single repository, which we refer to as a "monorepo."
In this case, each subdirectory represents a standalone project. This might be an application or an application environment, and each of these, within the Enterprise solution, is tied to its own corresponding workspace so that its resources can be created, managed, and ultimately destroyed together.
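To make that concrete, here is a hypothetical monorepo layout where each subdirectory maps to its own workspace; the project and workspace names are purely illustrative:

```
infrastructure-monorepo/
├── app-one/
│   ├── dev/     # → workspace "app-one-dev"
│   └── prod/    # → workspace "app-one-prod"
└── app-two/
    └── prod/    # → workspace "app-two-prod"
```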
Now that you have your configuration files attached, now that you have your state securely controlled and encrypted, the other components within this workspace are variables. How do you traditionally pass your variables to your Terraform configuration files?
As I mentioned earlier, API credentials have historically been stored locally in your terminal as, say, environment variables, and you customize your deployment through default variables in your configuration files, but also through what we refer to as tfvars files. You keep these locally and don't check them into version control, because they sometimes contain sensitive information that you don't want living in VCS.
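As a minimal sketch of what such a file looks like (variable names and values here are invented for illustration):

```hcl
# terraform.tfvars -- traditionally kept out of version control
# (e.g. via .gitignore) because it may hold sensitive values.
region        = "us-east-1"
instance_type = "t2.micro"
db_password   = "do-not-commit-me" # sensitive: better held as a workspace variable
```

In Terraform Enterprise, values like `db_password` would instead be stored as sensitive workspace variables, write-only and encrypted.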
In this case, because everything is managed centrally within the Enterprise solution, you no longer have to worry about separating out sensitive values locally versus what's living in version control, because now, just as the configuration and access can be managed in the workspace, so can the credentials themselves.
Now you can decouple who needs access to the workspace from an infrastructure provisioning standpoint, as well as the individual who is providing the credentials that are needed in order to deploy to your cloud platforms.
The final component is, How do you connect these things? There are a few different ways that Terraform Enterprise enables you to do that. We endeavor to make this tool as flexible as possible because we understand that different organizations, even different teams within organizations, have different preferred workflows. One of the most common workflows is the VCS-driven workflow.
In this case, essentially what you're able to do is connect your Terraform Enterprise organization directly to version control, whether this is GitLab, Bitbucket, GitHub, or Azure DevOps. Now you can start tying your workspaces directly to individual repositories or subdirectories within repositories or even specific branches.
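That VCS connection can also be expressed as code. The following is a sketch using the `tfe` provider, where the repository identifier and OAuth token reference are hypothetical:

```hcl
# Sketch of a VCS-backed workspace; repository, branch, and the
# OAuth token variable are placeholders for your own values.
resource "tfe_workspace" "app_prod" {
  name              = "app-production"
  organization      = "my-org"
  working_directory = "app-one/prod" # subdirectory within the monorepo

  vcs_repo {
    identifier     = "my-org/infrastructure-monorepo"
    branch         = "master"        # runs trigger on merges to this branch
    oauth_token_id = var.oauth_token_id
  }
}
```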
And the trigger event—that plan, that apply—is no longer happening locally as commands on your laptop. That trigger event itself is now a merge to master. And this really helps you continue to ensure that whatever is represented in master, whatever resources you have declared, is actually what is getting deployed into the cloud.
This helps solve that workflow challenge and reduce the possibility of human error, where individuals fork or deploy against the wrong branch.
Now, because all of the credentials live within the workspace rather than in the local environment, you also can rest assured that individuals aren't sidestepping this. They no longer have the credentials to log in directly and modify resources, delete resources, provision out of band, because all of the credentials that they need live centrally in the workspace and are not accessible to those individuals.
Another common workflow is the API. Oftentimes, you might have a CI/CD pipeline, so you might use a Jenkins or a CloudBees or any number of other tools within this pipeline, and you want to make sure that Terraform is just one component.
The API is fully supported, so you can easily make API calls to trigger runs. You can do any number of jobs or processes beforehand, and once you've gotten to that point, you can provision against your workspace using the API.
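For example, a CI/CD stage might queue a run with a single call to the Runs API. In this sketch, `$TOKEN` and the workspace ID `ws-XXXXXXXX` are placeholders you would supply from your own pipeline:

```shell
# Queue a run against a workspace via the Terraform Enterprise API.
curl \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  --data '{
    "data": {
      "type": "runs",
      "attributes": { "message": "Triggered from CI pipeline" },
      "relationships": {
        "workspace": { "data": { "type": "workspaces", "id": "ws-XXXXXXXX" } }
      }
    }
  }' \
  https://app.terraform.io/api/v2/runs
```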
The final component is, Within this new framework, how do we enforce best practice? We want to make sure that we are setting up guardrails and ensuring that, just as we've codified our infrastructure, we can codify and version what our organization defines as best practice.
Now, rather than having manual review processes for enforcing best practice, let's codify those and apply them either globally or even selectively to individual workspaces or groups of workspaces based on environment or use case or whatever your need may be.
When you're talking about Sentinel, there are different levels of enforcement. At the lowest level, we have an enforcement level called "advisory." These are policies that bring awareness. They allow individuals to see, when a plan is run and before an apply happens, that this policy has been triggered. These are more informative in nature.
These are also useful, as they can be used to identify patterns in your deployments, common patterns that emerge that could potentially be areas where you could modularize or create reusable templates.
The next level of enforcement is what we call "soft mandatory," in which we bake in an administrative override capability. Let's say you write a policy that limits machine size for your development environments to t2.micro in AWS, as an example. This gives you the ability to prevent developers from over-provisioning and spinning up costly infrastructure that they don't actually need.
At the same time, it recognizes that there are exceptions, and cases where they may need larger machines. In this instance, the policy prevents that apply from happening, but individuals with administrative privilege can override it as a one-off if need be.
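The t2.micro example above can be sketched as a Sentinel policy using the `tfplan/v2` import. This is an illustrative policy, not a definitive implementation; the allowed instance types are an assumption:

```sentinel
import "tfplan/v2" as tfplan

# Hypothetical allow-list for development environments.
allowed_types = ["t2.micro"]

# Collect EC2 instances being created or updated in this plan.
ec2_instances = filter tfplan.resource_changes as _, rc {
    rc.type is "aws_instance" and
    rc.mode is "managed" and
    (rc.change.actions contains "create" or rc.change.actions contains "update")
}

# Pass only if every such instance uses an allowed machine size.
main = rule {
    all ec2_instances as _, rc {
        rc.change.after.instance_type in allowed_types
    }
}
```

Attached to a policy set at the "soft-mandatory" enforcement level, a failing check blocks the apply but remains overridable by an administrator.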
Finally, at the highest level of enforcement, we have a setting called "hard mandatory." With hard mandatory, there is no override capability. These are the hard-and-fast rules that you want enforced 100% of the time for your organization. Examples here are typically rooted in security: ensuring that no VMs are deployed without your best-practice security group attached, for instance. Or the rule could even be time-based, such as preventing deploys after 5 pm on a Friday.
Those are just a few examples, but you can start to see how this becomes tremendously powerful, since policies are evaluated after a plan is run but before an apply ever happens, so you can stop bad practices and set up guardrails around your deployments to proactively protect your environment and your organization.