Presentation

HashiCorp Packer with HCL Configs

The HashiCorp Packer team recently implemented HCL2 templates. This talk will walk through the benefits of HCL2 and warn about the practical pitfalls.

Speakers

  • Adrien Delorme

Transcript

Hello, I'm Adrien Delorme, and I'm a software engineer at HashiCorp on the Packer team. Today I'm going to show you how HCL makes the Packer experience way better. First, I'm going to describe Packer a little. Then I'm going to explain why we're moving to HCL2, what makes it a good choice, and what it looks like. Then finally, I'm going to give you a glimpse of what the future is going to look like with Packer.

HCL — or the HashiCorp Configuration Language — is an open source library made and maintained by HashiCorp. HCL is interesting because it is a language oriented towards configuration. A good example of that are dynamic blocks, which can generate new configuration content from variables.

Fun fact: in one of my interviews before joining the Packer team, I remember asking what could be a big challenge I could work on. One clear answer was HCL. I remember thinking to myself, "Easy, I'll knock that out and be done." Well, it turns out it was a bit more complex than that. But first, let me talk about Packer a little.

HashiCorp Packer

Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration. It helps you automate machine image building, whether you want your image on AWS, GCP, Docker, a private cloud, or QEMU. For example, say you have to build a base server image from an Ubuntu server ISO, and it should have your security and telemetry settings. You could use Packer to start an instance, apply your security and telemetry provisioning steps, and then save it. That is going to work across many environments.

From that base image, you can make the building blocks that constitute your cloud. If you have programs that work in clusters, you could bake the common cluster settings into, say, a Kubernetes image, link the instances on a network, and then you have a cluster. It's probably a bit more complex than that, but that's another topic. You could also put your app in the image; that's up to you.

Packer in Numbers

Packer has 34 core builders integrated into its code. Each builder gives you the ability to build machine images somewhere: in a cloud, in a VM, in Docker, or with QEMU again. Packer has 18 provisioners, which allow you to apply changes to an image. For example, you could use the shell provisioner to run shell commands on a running instance, or the file provisioner to upload something to, or download something from, a running instance.

Post-processors are optional, and they allow you to use or reuse the result of a build. You could upload a Docker image, an AMI, or a VMware image somewhere, extract just the files you need, or create what we call the manifest file: a list of the things you've built in Packer. Builders, provisioners, and post-processors interact with Packer as if they were external plugins. We call them plugins, or components, of Packer.

On the Packer team, we are four core maintainers, which is not that many. We work from pretty much around the globe: Oregon, Florida, Germany, and I'm in the Netherlands. This gives us a lot of time-zone coverage, but we do have weekly meetings to stay in touch, because we don't have a full day of overlap. For example, in Europe it's the beginning of the day, then we get Florida around noon, and later in the day Oregon comes in.

Currently, 1,088 people have contributed to Packer, to either the code, the documentation, or an example. They are from all around the globe, and we try our best to help everyone. As always, PRs are welcome, so if you want to contribute, we try to be as welcoming as possible.

Why HCL2?

Good question. JSON works with Packer; in fact, Packer has only ever worked with JSON. Here's an example. There are quotes everywhere, and that can make the text a bit hard to parse as a human. HCL2 has far fewer quotes: it uses them only around string values, not field names, which makes it a little easier on the eyes.
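As a rough sketch of the contrast (the values here are illustrative, not the exact slide), a JSON builder fragment looks like this:

    "builders": [
      {
        "type": "amazon-ebs",
        "region": "eu-west-1",
        "instance_type": "t2.micro",
        "ami_name": "my-base-server"
      }
    ]

And roughly the same settings as an HCL2 source block, where quotes only surround string values, never field names:

    source "amazon-ebs" "example" {
      region        = "eu-west-1"
      instance_type = "t2.micro"
      ami_name      = "my-base-server"
    }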

Quirky Variables in JSON

Variables are quirky in JSON, and they are not expressive enough. They can only be strings; there are no arrays, objects, or lists. If a variable is null, like in the top example, then it is mandatory to set it when you start Packer. But if it is the empty string, like on the second and third lines, it is optional. There is also no variable validation in Packer JSON, so things are validated at runtime by builders and provisioners.
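As a rough illustration (these variable names are made up, not the ones on the slide), a Packer JSON variables block can only hold strings, and a null value is what marks a variable as required:

    "variables": {
      "required_one": null,
      "optional_one": "",
      "with_default": "t2.micro"
    }

Here, required_one has to be passed on the command line with -var, the empty-string one is optional, and every value is a string no matter what it represents.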

Everything is Parsed At Once in JSON

Everything is parsed at once with JSON; that is simply how JSON decoding works. Here, for example, that aws_access_key user variable setting is not known at the time we start that builder. The builder has to start, fetch the value of the variable, and then update itself with the value we now have. In more advanced cases, this interpolation is run twice.

JSON also makes it very hard to factor the common fields and the login information out of the builders. Each builder has its own manual setup for credentials, and that is something I think could be less repetitive. In HCL2, we can define the order in which blocks are parsed, making it easy to know values before evaluating a block. For example, builders and other components no longer have to interpolate variable values themselves. That takes the burden off the components, simplifying them while making Packer more powerful at the same time.
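As a rough sketch of the difference (the variable and source names are illustrative), HCL2 resolves a variable before the builder starts, so the builder receives a final value instead of a template string it has to interpolate itself:

    variable "aws_region" {
      type    = string
      default = "eu-west-1"
    }

    source "amazon-ebs" "base" {
      # var.aws_region is evaluated by HCL2 before this builder runs,
      # so the builder never sees an uninterpolated template string.
      region        = var.aws_region
      instance_type = "t2.micro"
      ami_name      = "my-base-server"
    }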

The Packer-JSON User Experience Could Be Better

Config files are a bit hard to understand at first sight, and they can be a bit scary to look at as a newcomer. There is so much configuration repetition, for example, when you are building similar images and want to change only one field. If you have an Ubuntu server building block and want to build a different version of Ubuntu, you have to copy all of that builder's content and then change just that one field. You can use variables, but the body is still going to be a huge chunk of repeated configuration. In HCL, we've tried to make that simpler and less repetitive.
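A rough sketch of that repetition in JSON, with made-up values: two nearly identical QEMU builders that differ only in the ISO they start from.

    "builders": [
      {
        "type": "qemu",
        "iso_url": "http://example.com/ubuntu-18.04-server.iso",
        "iso_checksum": "none",
        "ssh_username": "ubuntu",
        "ssh_password": "ubuntu"
      },
      {
        "type": "qemu",
        "iso_url": "http://example.com/ubuntu-20.04-server.iso",
        "iso_checksum": "none",
        "ssh_username": "ubuntu",
        "ssh_password": "ubuntu"
      }
    ]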

Build Chaining Could Be Easier

Speaking of user experience: let's say you have a base server image from which you want to build downstream images, like this. Build chaining in Packer JSON is currently super hard. The only way to do it is to create an adjacent manifest file, parse it, and run Packer again. Here's an example of how you would get your base server image.

Here we have three builders: an Amazon EBS, a Docker, and a QEMU builder, to which we apply our security and telemetry scripts. Finally, we use the manifest post-processor to ask Packer to tell us what was built, and Packer creates this JSON file for us. This is a simplified version of it. Note that the order of these entries is not predictable, since Packer runs builds in parallel. A build can happen to finish first, so you cannot just say "give me the first value of this" and know it's going to be Docker.

If you want to reuse that, all you have to do is run that giant jq command. From there, you can take the output of that command and give it back to Packer as a variable in the next build.
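As a rough sketch of that workflow (the field names follow the manifest post-processor's output, but the IDs, file names, and the source_ami variable are invented), the simplified manifest looks something like this:

    {
      "builds": [
        { "name": "docker",     "builder_type": "docker",     "artifact_id": "sha256:0a1b2c3d4e5f" },
        { "name": "amazon-ebs", "builder_type": "amazon-ebs", "artifact_id": "eu-west-1:ami-0123456789abcdef0" }
      ]
    }

Pulling the AMI ID back out, regardless of build order, and feeding it to a downstream build could then look like this:

    AMI_ID=$(jq -r '.builds[] | select(.builder_type == "amazon-ebs") | .artifact_id' manifest.json | cut -d: -f2)
    packer build -var "source_ami=${AMI_ID}" downstream.json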

That works, but it doesn't feel good enough, because you have to step out of Packer to do this and install jq, which is a third-party dependency. I was thinking that maybe Packer could do all of that.

HCL2 allows you to have blocks that depend on other blocks. In theory, it's going to be easy to have a build block that depends on another build block. The second one would wait for the first to be done before starting new builds. With intermediate state added to that, we could do build chaining much more easily. This is still a to-do for Packer, though.

YAML Was Not an Option

YAML is much the same as JSON, but without the curly brackets and with fewer quotes. It's a little more dangerous, in my opinion. An unquoted NO will be interpreted as the Boolean false, for example. If you are interacting with Norway, whose country code is NO, you end up with a false country, and that could, for example, take down a cluster or something.
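A tiny YAML sketch of that Norway problem:

    # With a YAML 1.1 parser, the unquoted value is read as the boolean false
    country_code: NO
    # Quoted, it stays the string "NO"
    country_code_quoted: "NO"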

You can write comments, though, and that's nice. But YAML is also single-pass, so we would have to keep the same tricks that are preventing us from making Packer better.

HCL1 Was Not an Option Either

We tried to go the HCL1 route, thinking it would be a stepping stone to upgrading to v2. But it turns out that the library changed drastically for the better between the two versions, and it was easier to go directly from JSON to v2. HCL1 is great, but HCL2 is the result of a lot of usage, improvements, and trial and error on top of HCL1, which had some flaws.

HCL2 in Packer

This is an example of an HCL2 config file. Check out these comments. It's roughly the same Amazon EBS config I showed you before, but in HCL. The fields you can set have pretty much a one-to-one match with the ones you can set in JSON.

All the fields you're used to setting in JSON will be settable in HCL2. That's because the code that reads HCL2 is generated from the code that reads the JSON. That allowed us to move to HCL2 much faster and more safely.
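A minimal amazon-ebs source in HCL2 looks roughly like this (the values are illustrative, not the exact config from the slide):

    # Comments finally work in Packer templates
    source "amazon-ebs" "ubuntu" {
      region        = "eu-west-1"
      source_ami    = "ami-0123456789abcdef0"   # an invented AMI ID
      instance_type = "t2.micro"
      ssh_username  = "ubuntu"
      ami_name      = "my-base-server"
    }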

Changes in Packer for HCL2

Packer didn't have to change too much to bring in HCL, but a few things did change. The main building blocks of Packer — builder, provisioner, post-processor, and variable — are all present, but presented in a slightly different manner.

HCL Blocks

In HCL they're called blocks, and their type and name go in the labels at the top instead of inside the block, which I think makes them easier to scan, read, and parse.

Here's a side-by-side example of a variable definition. Much like in Terraform, you name the variable with the label at the top. You can give a variable a type and a default. Here, because that variable has a string default, its type will also be string: everything you want to assign to foo has to be convertible to a string.
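Side by side, that looks roughly like this, with foo and bar as placeholder values. The JSON form:

    "variables": {
      "foo": "bar"
    }

And the HCL2 block:

    variable "foo" {
      type    = string
      default = "bar"
    }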

A builder becomes a source. But this source statement you see on the right will not start anything. That source block simply defines your builder settings as a reusable block; it's like a bag of settings that you can call from a build. To start a builder, you have to invoke a source from a build block.

In that example, you can see we call two sources. One is a simple call. The second one specifies just one field to be different. That gives us the power to not repeat ourselves, because the first source definition can stay generic and then you specify only the specific fields you need. That has made Packer build files much cleaner.
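As a loose sketch of that pattern (the names here are invented): a generic source, and a build block that invokes it twice, filling in only the field that differs each time.

    source "amazon-ebs" "generic" {
      region        = "eu-west-1"
      source_ami    = "ami-0123456789abcdef0"
      instance_type = "t2.micro"
      ssh_username  = "ubuntu"
      # ami_name is deliberately left unset here
    }

    build {
      source "source.amazon-ebs.generic" {
        name     = "base"
        ami_name = "my-base-server"
      }

      source "source.amazon-ebs.generic" {
        name     = "base-two"
        ami_name = "my-base-server-two"
      }
    }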

To run provisioners and post-processors on those builds, all you have to do is create provisioner and post-processor blocks. Again, their type is at the top, in double quotes, in purple, as you can see here. All the fields you're used to, except the type field, are set inside the block. If you want to give Packer HCL2 a try, the hcl2_upgrade command will transform your JSON file into an HCL2 file. It's quite handy because you can just give it a try.
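A rough sketch of such a build block (the script names and output path are made up):

    build {
      sources = ["source.amazon-ebs.ubuntu"]

      provisioner "shell" {
        scripts = ["./security.sh", "./telemetry.sh"]
      }

      post-processor "manifest" {
        output = "manifest.json"
      }
    }

And to convert an existing JSON template, the upgrade command is simply:

    packer hcl2_upgrade mytemplate.json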

Split HCL2 Config Files

Another cool thing about HCL2 blocks is that you can now split your config into multiple files. Here's an example: you can see we have multiple files, and you can now run a Packer build on the folder. You can also see that we have defined new file extensions. HCL is tool-specific, so an editor or another tool cannot interpret HCL for Terraform, Consul, or Packer in the same way.

To make it easier for tools to differentiate them, Packer will only read .pkr.hcl files. Those files are where you define your HCL blocks and configs. There are also .pkrvars.hcl files, which are similar to Terraform's tfvars files. In them, you set the values of already-defined input variables. Packer only expects values of input variables to be set in these files, and it expects those variables to exist, meaning they are defined in a .pkr.hcl file.
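A hypothetical layout, just to make the file types concrete (all names are made up):

    .
    ├── sources.pkr.hcl          # source blocks
    ├── build.pkr.hcl            # build, provisioner, and post-processor blocks
    ├── variables.pkr.hcl        # input variable definitions
    └── production.pkrvars.hcl   # values for those input variables

The whole folder can then be built in one go, passing the variable values file:

    packer build -var-file=production.pkrvars.hcl .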

HCL2 JSON Does Not Equal Classical JSON

HCL2 also supports JSON, with the same stricter HCL2 syntax and the same capabilities, but be careful, because classical Packer JSON is not HCL JSON. Here's an example: these two configs do the same thing, but the one on the right-hand side is in JSON. We recommend avoiding HCL JSON if you can. A good use case for HCL JSON would be if you wanted to auto-generate parts of your config, or if a feature were missing and you needed to generate parts of your config yourself.
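For reference, HCL2's JSON syntax nests the block labels as object keys and lives in a .pkr.json file, roughly like this minimal, illustrative sketch:

    {
      "source": {
        "amazon-ebs": {
          "ubuntu": {
            "region": "eu-west-1",
            "source_ami": "ami-0123456789abcdef0",
            "instance_type": "t2.micro",
            "ssh_username": "ubuntu",
            "ami_name": "my-base-server"
          }
        }
      },
      "build": [
        {
          "sources": ["source.amazon-ebs.ubuntu"]
        }
      ]
    }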

A Few Caveats

Packer HCL2 has some caveats right now. HCL2 support is currently in beta, and it has been since we released the first version of it in Packer 1.5, back in December 2019. It has improved a lot since then. By the way, many thanks to the people testing, submitting bug reports, feature requests, and code to Packer. You have helped us a lot, thanks.

Some parts of Packer HCL2 still depend on Go template interpolation. Here's an example. You can see that Go-interpolated variables are not the same as HCL-interpolated variables, and they are interpolated at different times.

In this example, that's going to work: the HTTP server is going to serve these preseed files. But in the next example, the double-quoted string in red is a Go template call. Because HCL2 runs first, that call gets uppercased, like in the example at the bottom, and when the Go templating runs, that's going to be an error. Again, this is avoidable if you make your call 100% HCL.
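As a rough sketch of the mix (the qemu source, the preseed setup, and the upper() misuse are illustrative, not the exact example from the slides):

    variable "preseed_file" {
      type    = string
      default = "preseed.cfg"
    }

    source "qemu" "ubuntu" {
      iso_url        = "http://example.com/ubuntu-18.04-server.iso"
      iso_checksum   = "none"
      ssh_username   = "ubuntu"
      ssh_password   = "ubuntu"
      http_directory = "http"   # Packer's HTTP server serves the preseed file from here

      # This works: ${var.preseed_file} is HCL and is resolved first;
      # {{ .HTTPIP }} and {{ .HTTPPort }} are Go template calls that the
      # builder fills in later, at runtime.
      boot_command = [
        "<esc><wait>",
        "install url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/${var.preseed_file}<enter>"
      ]

      # Danger zone: wrapping a Go template call in an HCL function such as
      # upper() rewrites the literal "{{ ... }}" text before the Go templating
      # runs, and the later Go templating step can then fail on it.
    }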

A reminder: Packer HCL has a great set of functions, and they are very similar to the Terraform functions. I recommend trying those if you can, and I also recommend staying 100% HCL if you can.

Looking to the Future

HCL2 has stabilized recently, and we — the Packer team — have started to feel much more comfortable using Packer HCL2, because it feels so much better. The templates are much smaller, you can reuse variables, and it's easier on the eyes.

As I said, Packer HCL2 also gives us a lot of opportunities to improve other things. But first, we would like to reach parity with the current Packer JSON version. When we are there, we will slowly start deprecating JSON and slowly start adding new features to the HCL version. Here are a few things we would love to do when we can.

Chaining Builds

Packer HCL2 cannot chain builds yet, so if you have to split builds into multiple steps, you will have to do the same as before. In the future, we would like to add a stanza to make a source block or a build block depend on another build block. Then you can have a build that builds your base image, and your delta image will be built later on without needing to go outside of Packer.

Terraform Interoperability

Terraform interoperability would be awesome — wouldn't it? It would be nice if you could reuse the result of a Packer build in Terraform to start an instance. The solution would check whether the image exists and run Packer if need be.

Completely Removing Go Templating

This is going to be a slow deprecation process.

Packer Plugin Repository

In Packer, everything is a plugin, but currently all plugins live inside the Packer GitHub repository — all plugins are inside the codebase. We recently started declining new plugin contributions to the Packer core. We have a lot of plugins to maintain, and it's starting to get very hard for four people.

But we would still love to help you create plugins for Packer. Currently, adding a plugin to Packer as a user is hard: you have to put it in the right place, and you cannot tell Packer to download it for you. We would like to add a new stanza in HCL that lets Packer download the plugin and put it in the right place, so you don't have to do that manually. Then your builds could be more automatic. We will know more about this later this year.

Thanks. As always, pull requests are welcome, issues are welcome, and don't hesitate to ping us if you have any questions. I think I'm done.
