From CLI to Cloud and Back in HashiCorp Terraform

Dive into some of the hidden features found in Terraform 0.13 and get a sneak peek at what's coming in Terraform 0.14 later this fall!

Transcript

Petros Kolyvas: My name is Petros, and I am the Product Manager of Terraform open source. Let's start with the Terraform CLI. For those who either missed the opening keynote this morning or are watching at a later date, I'm going to give you a bit of context about where we've come from and where we're headed. In Terraform 0.13, we introduced features like count, for_each, and depends_on for modular workflows.

We also added deep changes around providers — particularly with namespaced providers — so that Terraform could interact with the Terraform Registry as well as other registries. There are a lot of other features that follow on from that, which I'll share later.

We also added variable validation, and that set the tone for some of the work we're going to be doing in that area going forward. Today, we're also going to preview a few of 0.14's changes, and I'm going to save that for a little later in the presentation. But let's get back to Terraform 0.13.

Community Providers

Terraform 0.13 and the required_providers block — and the provider source syntax in particular — allowed us to dive into some work on expanding our ecosystem. The Terraform Registry can now contain community providers in addition to official and verified partner providers.

In just over two months, we've gone from none to 350+ — and I believe we're closer to 400 now — community providers. It just shows how diverse and vibrant the Terraform ecosystem is and how much we value the contributions of our community.

I'm just going to call out a few here because I think they're phenomenally interesting. There are, for example, four different Git community providers — and each of them offers a different set of features depending on your workflow. There's a shell provider. One of my favorites is the stdlib provider, and this was an interesting tack on extending Terraform.

Although this provider currently has just a single data source, the gist is that this provider uses data sources to extend Terraform by implementing functions that aren't in Terraform today — and it was a creative use of providers in this way.

There's also a WireGuard provider. For those of you who love WireGuard like I do, you can begin creating keys with it. Hopefully, we can file some feature requests and get some more work done for configuring peers and endpoints down the road.

Provider Source

All the community provider work depends on being able to specify a provider source. So I wanted to take a moment to explain a bit about what the provider source is and isn't. The source attribute in the required_providers block contains three main components. Understanding those components unlocks how this works across a number of facets within Terraform.

Those three components are a registry hostname (for example, the Terraform Registry), a namespace (the owner — or maintainer — responsible for a given provider), and then the provider type. If you and I were talking about Terraform casually and said something like the AWS provider, the AzureRM provider, GCP, or Datadog — whatever it may be — we'd be talking about the type of provider in the scope of the required_providers source syntax.

If you're looking at this slide wondering where the registry hostname is in the provided code, the answer is that if you leave off the hostname in the source, Terraform infers that you mean the HashiCorp Terraform Registry — so the name is built out that way. This has a knock-on effect on your disk, where providers are installed. Namespaced providers now allow you to have multiple providers with the same type name. That's a trend you're going to see moving forward — we've expanded the possibilities with providers dramatically.
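As a rough illustration of those three components — the provider and version constraint here are just examples:

```hcl
terraform {
  required_providers {
    aws = {
      # Fully qualified source: <hostname>/<namespace>/<type>.
      source  = "registry.terraform.io/hashicorp/aws"
      version = "~> 3.0"
    }
    # If you leave off the hostname, Terraform assumes the public
    # Terraform Registry, so "hashicorp/aws" is equivalent shorthand.
  }
}
```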

Provider Installation

One way this is reflected is with provider installation and some new provider installation methods we've offered for special use cases. Whether you're an organization with a custom provider you don't make publicly available, or you have network constraints where you don't want to be reaching out to the registry, these new provider installation options cater to requirements that not all Terraform users share.

File System Mirrors and Network Mirrors

In particular, I want to focus on the network mirror provider installation method, which allows organizations, teams, and operators to configure a network endpoint that Terraform can use to install providers as an alternative to a registry source.

The Network Mirror Protocol is a very straightforward protocol that allows you to collect a bunch of static objects — the provider packages sorted into folders that reflect their source addresses, as well as some JSON objects that tell Terraform which versions — and which architectures — are available. If you're interested in learning more about the Network Mirror Protocol, you can check it out on the Terraform website — it's in our documentation.
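As a rough sketch, enabling a network mirror in the CLI configuration file (for example, ~/.terraformrc) could look like this — the mirror URL is a placeholder:

```hcl
# CLI configuration (e.g. ~/.terraformrc), not a .tf file.
provider_installation {
  network_mirror {
    # Placeholder URL for illustration; point this at your own mirror.
    url = "https://terraform-mirror.example.com/providers/"
  }
}
```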

Network Mirror Pro-Tip

I wanted to offer a little pro-tip for building that collection of assets that can be used quickly in a network mirror. We offer a command in Terraform 0.13 — terraform providers mirror, followed by a target directory. Terraform will then take any providers required by the configuration in the directory you're running the command from and copy them into a structure that you can immediately use as the basis for your network mirror.
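For example — with an illustrative directory path — you could populate a local mirror and then point Terraform at it from the CLI configuration:

```hcl
# Populate the mirror from the configuration in the current directory:
#   terraform providers mirror /usr/share/terraform/providers
#
# Then reference that directory from the CLI configuration:
provider_installation {
  filesystem_mirror {
    path = "/usr/share/terraform/providers"
  }
}
```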

Dig in, take a look around, poke around that feature. It's one we released in 0.13.2, and I'm excited to see how the community is going to use it moving forward.

Riot Games and 0.13

I also wanted to share something exciting. Exciting for me because it's fun to see how talented operators, customers, and practitioners are using Terraform for their workflows. Riot Games reached out to us with this example, and they were very kind and let me share it with you.

They're using module for_each and variable validation in an interesting — and, to me, fun — way. Riot Games is using Vault — so to onboard their users into Vault, they have them fill out a YAML file. That YAML file contains all the information that's going to go into their onboarding. They use Terraform, GitHub, and Jenkins to validate the YAML file using variable validation. They look into each YAML file and make sure that the information they expect to be there is there — and is formatted correctly.

If terraform plan succeeds, their security team has a high degree of confidence that they'll be able to onboard that user into Vault without much additional effort. It's just such a neat way of piecing together all these components of Terraform and using it as part of a greater whole.
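To make that concrete, here's a hypothetical sketch of a workflow in that spirit — the file layout, field names, and module are illustrative, not Riot Games' actual code:

```hcl
# Hypothetical sketch only; names and structure are illustrative.
locals {
  # One decoded object per submitted onboarding YAML file.
  onboarding_requests = {
    for f in fileset("${path.module}/onboarding", "*.yaml") :
    f => yamldecode(file("${path.module}/onboarding/${f}"))
  }
}

module "vault_onboarding" {
  source   = "./modules/vault-onboarding" # hypothetical module
  for_each = local.onboarding_requests

  username = each.value.username
  team     = each.value.team
}

# Inside the hypothetical module, variable validation rejects malformed
# input, so a successful terraform plan means the YAML was well-formed.
variable "username" {
  type = string

  validation {
    condition     = can(regex("^[a-z0-9_.-]+$", var.username))
    error_message = "The username must be lowercase alphanumeric with dots, dashes, or underscores."
  }
}
```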

I wanted to send a shout out to Riot Games for allowing me to share — and actually just for thinking about using Terraform this way. Because — as an open source tool — it's examples like this that help us deeply understand what users are doing and set the direction for Terraform going forward.

Terraform 0.14

Speaking of setting the direction for Terraform going forward: what's next for Terraform? Now, I know many of you wanted to see a slide that said Terraform 1.0. And I can tell you today that's not far off. We're not there yet, but it's coming quickly.

You'll see as we iterate more quickly on Terraform versions with fewer breaking changes that we're getting ready to do that big thing. But for today, let's talk about 0.14.

Concise Diff

0.14 is going to release with a concise diff because we're into the whole brevity thing. Joking aside, Terraform 0.11 was much more concise than Terraform 0.12 and 0.13. Now that we're seeing broad use cases where Terraform operates at massive scale for many users — teams big and small — it's become clear that providing actionable information about the changes you're about to make to your infrastructure is key.

We have to begin separating the signal from the noise here. What do I need to know as a Terraform operator that will help me feel comfortable saying, "Yes," or, "No," at decision time? That's what this work is about. Hopefully, it too will evolve and mature as we develop an even better picture of our users' needs. Kyle's going to demo this feature for you a little later.

Sensitive Values in Terraform

The other big piece of work going into Terraform 0.14 is sensitive values. Sensitive input variables. Before I talk about the feature too deeply, I want to focus on this slide’s subtitle: protecting things with people you trust. Security threat models and threat boundaries can be scary and intimidating no matter who you are — and it's important to highlight that this feature is about helping operators and practitioners prevent the exposure of variables that they have the permission to access and use.

This feature is not about preventing malicious use of Terraform. Rather, if you have a sensitive input value that you use in your terraform plan and apply — in your Terraform configuration — we want to give you the ability to redact it throughout the plan. That way, if you egress the plan and apply output into some other logging system like Splunk — a logging system you may not have control over — you don't have to risk exposing sensitive values or variables, whatever they may be. Those are the guardrails we're putting on this first deliverable.

The other big piece of work under the hood is that Terraform doesn't have an idea of what is sensitive today. You can define an output as sensitive. But that doesn't propagate through the plan, and we don't yet have a way in 0.13 for Terraform to understand that a variable is sensitive. So we're setting the stage here for a much broader set of work that's to come. And I want to make sure people understand what we're delivering today.
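For reference, this is roughly what marking an output as sensitive looks like in 0.13 today — the output name and value are illustrative:

```hcl
output "db_password" {
  value     = var.db_password
  sensitive = true # hides the value in CLI output, but doesn't propagate through the plan
}
```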

It's still a great change, and we're excited about it, and a lot of work has gone into it — but this is not the end; it's only the beginning.

Terraform Testing

I'm going to step into some very deep waters that I'm incredibly nervous about stepping into. But here we go. Terraform testing. No, we are not shipping terraform test in 0.14. But we are exploring a way to use a testing provider — a provider that can run tests written in HCL against your modules.

Later today, Kyle is going to be demoing this, and I wanted to talk a bit about the desire to get here. I think for us, it's important that when practitioners — operators — sit down in front of their code editor, they're able to write tests in the same HCL they're building their infrastructure in; that this isn't something new to learn.

The provider you see on screen here lets you write tests today, using data sources to make assertions against your modular infrastructure. And you can do that in HCL. This particular provider — which we're encouraging people to go poke at — can also emit test results using the Test Anything Protocol, so you can pass those tests off to tools you might have on your workstation or in your infrastructure pipelines. Kyle's going to give you a little demo of this. This is going to form part of our plan moving forward to make testing a first-class citizen in Terraform itself.

Terraform Demos

And now it's the time you've all been waiting for — Kyle's going to show us how this stuff works.

Kyle Ruddy: Thank you, Petros, for that. Let's dive right into our demos here.

Concise Diff Demo

In our first one, we're going to take a look at the concise diff. In this particular situation, we have our Terraform configuration, and it's using Terraform 0.13. We're going to be making use of the vSphere provider because this particular provider — when used against virtual machine resources — produces very verbose output for each one of those resources. Here, we've done a plan; we're just changing the name of a specific virtual machine.

Scrolling through all of this information — it's overwhelming. There are a lot of things to look through and view. But there's only one attribute being changed as part of this, and that's just our name argument. Let's check out how this looks in Terraform 0.14. We're going to jump over to a different system that already has Terraform 0.14 installed, and we're going to jump over to where our Terraform configuration is.

We're going to verify that we are using Terraform 0.14 — in this case, we are — one of our development versions. Then we're going to run that Terraform plan again, trying to create and change our virtual machine resource with a new name. Instead of seeing that extensive list of attributes, we now only see the one specific one that is going to change.

We can see that 63 different attributes have been hidden. There are a couple of different blocks that are hidden as well. This is much more readable and a lot easier to view and consume.
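Roughly, the concise plan output looks something like this — the resource, values, and counts here are illustrative:

```
  # vsphere_virtual_machine.vm will be updated in-place
  ~ resource "vsphere_virtual_machine" "vm" {
        id   = "423f79d2-..."
      ~ name = "demo-VM01" -> "demo-VM02"
        # (63 unchanged attributes hidden)

        # (5 unchanged blocks hidden)
    }
```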

Sensitive Variables Demo

We're going to go full screen now on our Terraform 0.14 system. This time, we're going to switch and start using a brand-new feature called sensitive variables. We're going to change our Git repo over to a different branch so that we can access code that's already been pre-populated. Here in our main.tf configuration file, we can see the usage of experiments in our Terraform block, enabling the sensitive variables feature that we want to start making use of.

From there, we want to change over to where our variables are being configured and show that we are updating those to include a sensitive argument and a Boolean input. In this case, we want to use true because we want certain variables to be sensitive — so that we don't see the output from those particular arguments.
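A minimal sketch of what those declarations could look like — the variable names are assumptions based on the demo, and in the 0.14 alpha the experiments entry shown on screen is also required:

```hcl
variable "vm_name" {
  type      = string
  sensitive = true # value is redacted in plan and apply output
}

variable "cpus" {
  type      = number
  sensitive = true
}

variable "memory" {
  type      = number
  sensitive = true
}
```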

We have assigned this to name, CPUs, and memory in our particular configuration file here. Now, we're going to switch over to our local CLI session. We're going to run our terraform plan again. Changing our VM name over to demo VM02, we can see that the condensed output for our name argument is just marked sensitive. It doesn't include the actual name. Now, we're going to update our command here to also include the variables for our CPUs as well as our memory — to double both of those. As part of the output for our plan, we will see that those parts are now also marked as sensitive.

This is increasingly important, especially for environments where logging is involved — where you're pulling all this information in and consuming it. Occasionally, that opens the door for people who may not — or should not — have rights to read those values to see them.

Sensitive Variables with Terraform Cloud

Let's move on to our next demo. In this demonstration, we're going to see how we can use those sensitive variables with a Terraform Cloud environment. We're going to change our branch again to where we're going to use a remote block to specify that we want to interact with Terraform Cloud. Checking to look at our main configuration file, we can see that we're using our backend of remote — calling out Terraform Cloud by default using a HashiCorp — or HashiConf — organization, as well as our vSphere workspace.
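A minimal sketch of that backend block — the organization and workspace names follow the demo narration and are assumptions:

```hcl
terraform {
  backend "remote" {
    # hostname defaults to app.terraform.io (Terraform Cloud)
    organization = "hashiconf" # assumed organization name

    workspaces {
      name = "vsphere" # assumed workspace name
    }
  }
}
```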

We need to do another terraform init to initialize that backend for usage with this file. Luckily, we can also transfer up our state file as it exists by just entering yes as part of our transitioning process.

Now that we have that done, we're going to go back to our prior example, where we were configuring those three different variables. We're going to run an apply and send this out to Terraform Cloud.

Here, we can still see that, even though we've specified a different remote backend, those variables — those attributes — are still being marked as sensitive. In this case, we're ready to go. We want to run these actions, so we just enter yes. Then after a couple of seconds, our virtual machine has been updated, and we'll be able to see and view that state file within our Terraform Cloud environment.

Our process has been completed. Heading over to Terraform Cloud, we can see the state file that we imported before. Hitting refresh, we can now see the brand-new state that we just ran from our prior session. We can dig through our Terraform state file to see all of the different pieces that are there. However, the one piece that I want to call out here is the new sensitive attributes added to our state file.

Then if we go down to the changes in this version, we can see those updated variables that we input in plain text. This is one of those reasons you should always make sure that you have the appropriate role-based access controls, rules, and policies applied to your state files.

Variable Validation

Now that we've started protecting and marking our variables as sensitive, we probably also want to start making sure that we're allowing the appropriate input for those variables. In this demo, we're going to take a look at variable validation. Our Terraform configuration has reverted back; it no longer has our remote block in it. However, we want to focus on our variables file here.

We're going to add a validation for our vSphere datastore saying that we only want input here that starts with vSAN datastore — because that's what we're using in this environment. We also want to add a check to validate that input for our VM name only accepts values that end in two digits — trying to match our naming standard for this particular environment.
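A hedged sketch of what those two validation blocks could look like — the variable names and exact datastore naming convention are assumptions based on the narration:

```hcl
variable "vsphere_datastore" {
  type      = string
  sensitive = true

  validation {
    condition     = can(regex("^vSAN Datastore", var.vsphere_datastore))
    error_message = "The datastore name must start with vSAN Datastore."
  }
}

variable "vm_name" {
  type      = string
  sensitive = true

  validation {
    condition     = can(regex("[0-9]{2}$", var.vm_name))
    error_message = "The VM name must end with two digits."
  }
}
```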

These still work with the sensitive attribute being set to true. Now, let's do a terraform plan, and we want to configure our datastore to be an NFS mount instead. We're going to make use of our vSphere datastore and specify NFS01. Here, as part of this, we can still see that we're using our sensitive variables, but down at the end, we have an invalid value for our variable saying that our variable should contain vSAN datastore. Now, let's try and do that same thing against our VM name.

Instead of running demo VM02, let's just try demo VM. In this case, we're going to see that error again — invalid value for the variable. We're looking for VM name values that end with two digits. This helps to sanitize the input and becomes even more valuable when we start making the migration over to modules — which I'll talk about here in just a second.

Before we jump in and take a look at our modules — and start using modules more actively and developing those — I want to point out one thing. Petros talked earlier about the ability to use local file system mirrors to provide access to these providers.

In this case, this environment doesn't have access to the Terraform Registry. We can verify that by pinging registry.terraform.io. You can see that we have a firewall rule that does not allow us to egress out to connect. If we look at our Terraform configuration file, we can see that this uses the file system mirror and is pointing to a local path where those providers all live.

Testing Modules

For our last demonstration, we're going to take a look at that item that Petros was a little worried about dipping his toe into the water on. This is looking at testing our modules. We're going to jump to a different repository here that I have on my local system.

We've taken the Terraform configuration that we've been using in the prior demos and turned it into a module. Here's our top-level main configuration file, calling out our brand-new module, vsphere-emptyVM. Within that, we've moved all of those different data blocks and resource blocks into our module called vsphere-emptyVM. We can see that everything in there is roughly the same, aside from our new random_pet resource that helps me use unique names.

If we go back to our CLI here, we'll run a plan, just plain, right off the bat. In this case, we're going to create one virtual machine as well as one randomized name. Cool, right? This is going to work perfectly.

But now, you have to consider — since you're using modules — that many people will start using these modules; many people will create different resources. We can use something like the count argument for modules that was newly added in 0.13. Now we can create 20 different virtual machines. A quick shout-out to the folks who have been maintaining the Terraform extension for VS Code and the auto-formatting. Fantastic. Check it out if you haven't already.
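A minimal sketch of that module call with count — the module label and source path are assumptions:

```hcl
module "empty_vm" {
  source = "./modules/vsphere-emptyVM" # assumed local module path

  # count on module blocks is new in Terraform 0.13
  count = 20

  # ...module inputs go here...
}
```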

Here we have the output from our plan. We're creating 20 different virtual machines. We have a lot of stuff. That puts a lot of strain and a lot of organizational trust on the usage of that module. Let's take a look at our brand-new testing provider that we're starting to play with.

We're going to switch over to our test directory that lives inside the module directory itself. There we can see a test.tf configuration file. We can see the directory breakdown here in VS Code. Then if we look at our test.tf file, it looks very similar to our main configuration file. However, we're calling out the testing provider as part of our configuration file here. Then we continue to use our vSphere provider, and we call out the module we want to use.

But then the bottom is the important part where we have our testing assertions. This is testing the output from that module to verify that it is exactly what we want it to be. In this case, we're testing our response of virtual machines to ensure that we have a MoRef, which is the virtual machine identifier that starts with VM-.

Then we're also creating shell systems — we're not doing clones — so we want it to be guestToolsNotRunning. Let's skip right ahead and run an apply against our test.tf file. It's going to go through the process. It's going to create this virtual machine because that's what our testing assertions are going to test against. And then — ta-da. We have a resource that's been added. We can see our testing assertions were evaluated and passed.
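To give a feel for the idea, here is a hypothetical sketch of an HCL test in this spirit — the provider source, data source name, arguments, and module outputs are all illustrative assumptions, not the experimental provider's real schema:

```hcl
# Hypothetical sketch only; names and schema are illustrative.
terraform {
  required_providers {
    testing = {
      source = "example/testing" # placeholder provider source
    }
  }
}

# The module under test lives one directory up from ./test.
module "under_test" {
  source = "../"
}

# Assertions written against the module's outputs, in plain HCL.
data "testing_assertions" "vm" {
  subject = "Empty virtual machine"

  check "moref" {
    statement = "The MoRef looks like a vSphere VM identifier"
    condition = can(regex("^vm-", module.under_test.moref))
  }

  check "guest_tools" {
    statement = "Shell VMs report guest tools as not running"
    condition = module.under_test.tools_status == "guestToolsNotRunning"
  }
}
```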

Let's go back to our testing file and modify our testing assertions to something that won't pass. Instead, let's change our MoRef to — let's say, CM- — something that doesn't exist. Then let's change our guest tools assertion over to guestToolsRunning — saying guest tools should be running. Let's switch back over to our terminal session here, rerun our apply, and see what the output is.

Here we go. We now have two different failures, both indicating that they are test failures. The first one says that we did not successfully test against the output of our MoRef ID. The last one says that we wanted a state of guestToolsRunning; however, we received one that said guestToolsNotRunning. That was a quick look at the testing provider and how we can make use of it to test against our modules — to ensure that they are exactly as we want them to be before we publish them out to either the Terraform Registry or our own private module registry.

Petros Kolyvas: Thanks for the wonderful demos, Kyle. I also wanted to point out for the audience here that the sensitive values experiment at the top of the Terraform configuration is only necessary if you're using the alpha. When the 0.14 beta comes out — and it should come out imminently — you won't need to use that. But again, using the alpha, follow Kyle's wonderful advice.

I wanted to add one more thing before we get into Q&A, and that is that Terraform is finally being compiled for arm64 architecture on Linux. You can try it today in the alphas, you'll be able to use it in the betas, and we are going to officially support it from 0.14 moving onward. Most providers have arm64 binaries for Linux available today, and Terraform is going to follow suit. Run out, install it and start using it on your Graviton instances or your Raspberry Pi clusters in your basement — and let us know how it's working for you. Now I think it's time for Q&A.
