Webinar

The 4 steps to infrastructure-as-code collaboration, using Terraform

HashiCorp and its partner Contino explain the four phases of a full-scale infrastructure-as-code implementation, allowing operators to collaborate in provisioning and managing cloud infrastructure—at scale.

(Update 2018: Some of the names and technologies have gone through significant changes but many of the general practices remain the same.)

Organizations of all sizes are adopting cloud-based services for application workloads. The development teams using these cloud-based services are able to operate with greater independence from the operational constraints of their underlying infrastructure. For most organizations, this means navigating a transition:

  • from a relatively static pool of homogeneous infrastructure in dedicated data centers,
  • to a distributed fleet of servers spanning one or more cloud providers.

To handle this shift, many organizations treat their cloud-based infrastructure as code—managing and provisioning it collaboratively. HashiCorp Terraform uses infrastructure as code to provision any cloud infrastructure. Terraform provides a collaborative workflow for teams to safely and efficiently create and update infrastructure at scale.

However, not all organizations are ready to jump right into this. So in this video, HashiCorp and its partner Contino explain the four steps to full-scale infrastructure as code for operators to provision and manage their cloud infrastructure at scale.

You will learn:

  • Why to use collaborative infrastructure-as-code to address technical and organizational challenges in provisioning cloud infrastructure
  • How to self-assess your infrastructure-as-code practices and organizational maturity
  • How to identify the next steps your organization can take to move forward in its journey to the cloud

Speakers

  • Jordan Taylor, DevOps Practitioner, Contino
  • Maciej Skierkowski, Product Manager, Terraform Enterprise, HashiCorp

» Transcript

Maciej Skierkowski: My name is Maciej Skierkowski. I’m the product manager for Terraform Enterprise. What I wanna do today is walk you through an intro to both Terraform Open Source and Terraform Enterprise, and go through the principles behind the two.

I’m gonna give you a little demo of Terraform Enterprise, including some of the collaboration functionality and governance, and then I’m gonna hand it off to Jordan, to continue through the assessing and progressing through the journey.

First thing is, where does Terraform fit into the HashiCorp stack? If we think about all the different challenges that we have with building and provisioning, Terraform is where we focus on the provisioning of infrastructure. And the typical challenges that we have there are with scale, heterogeneity, and dependency management.

The core value we provide is provisioning any infrastructure for any application. Before jumping into the details of Terraform, one of the things I wanna do is just set the stage of how Terraform fits into the HashiCorp product principles.

If you look at all the different HashiCorp products, there are three different things that we try to achieve:

  1. First is codification. With Terraform, the way this applies is we wanna make sure that we take infrastructure and we are able to provision using code.
  2. Second thing is, we focus on providing a consistent workflow for provisioning.
  3. And then lastly, of course, making everything open and extensible.

The way that this applies to Terraform is:

  1. First, we have plan-and-apply. This provides a consistent workflow for working with infrastructure-as-code. Whenever you make any kind of changes, whether setting resources, you can plan everything out first, and then when it looks good you can apply.
  2. Secondly, the architecture really aligns well with this design, because we have a core-and-provider split. On the left side we have the core, which provides that consistent workflow and language—we have infrastructure as code, with HCL as the way of expressing it—and on the other side we have the entire extensible provider model that allows us to integrate with all the different providers. What’s interesting about this design is that we get a consistent way of interacting with all these services while still preserving the uniqueness of all the different resources.
  3. The providers are open source. We have, I think, over 75 providers that we support, and there are literally thousands of resources within all these different providers.
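As an illustration of that plan-and-apply workflow, here is a minimal, self-contained configuration; the resource type and names are just examples, not something from the webinar:

```hcl
# Minimal configuration to demonstrate the consistent workflow.
# The "random" provider needs no cloud credentials, so this is safe to try.
resource "random_pet" "demo" {
  length = 2
}

output "pet_name" {
  value = random_pet.demo.id
}

# The workflow, run from the directory containing this file:
#   terraform init    # install the required provider
#   terraform plan    # preview the changes before making them
#   terraform apply   # apply only once the plan looks good
```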

That sets the stage for Terraform Open Source. One of the things I wanna talk about is how Terraform Open Source and Terraform Enterprise fit together. This is a diagram I really like to use because what we see here is—we have Terraform Open Source on the left side, and then we have Pro and Premium. These are two enterprise tiers, and what we see here is the value that we’re adding. First is collaboration, and then the second one is governance. I’m gonna talk about each of those and how they apply to the products themselves.

Before, I had the diagram where we go through the entire process of infrastructure-as-code, going through the plan-and-apply, creating different resources. This all works well when you’re using Terraform as an individual user. However, things start getting a little bit more complicated when you increase your team size and number of environments, and split components across different repositories—and, as a result, you end up with a lot of configurations and a lot of states.

And the way I would think about Terraform Open Source versus Enterprise is that Open Source is really focused on addressing the operational complexity, while Terraform Enterprise is focused on addressing the organizational complexity.

» How do you use Terraform at scale?

The best way to do that is a live demo, which I love doing. What I’m showing you is the Terraform Enterprise beta. This is available today at hashicorp.com. What I’m showing you is the SaaS product; it’s also available for private installs as well. And I’m showing the beta as it is today—it is not generally available yet.

What I’m gonna do is go through the entire process of creating a new workspace, connecting it to VCS, and then triggering a run, all in the context of Terraform Enterprise. This is a project in my GitHub repository that I created. It contains a single file, main.tf. It’s a super-basic configuration. All we’re really doing here is defining one variable called “name.” We don’t set a default value, which makes it a required value, so we have to pass it in when we perform the plan-and-apply. We create a random resource, and then we have two output variables, so later we can check what those outputs are.
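A sketch of what that main.tf might have looked like; the exact resource and output names are assumptions reconstructed from the description, not the demo’s actual file:

```hcl
# Required input: no default is set, so a value must be supplied
# (by Terraform Enterprise or on the command line) before a run.
variable "name" {}

# The "random resource" mentioned in the demo (exact type assumed).
resource "random_id" "example" {
  byte_length = 4
}

# Two outputs so we can inspect the results after the apply.
output "greeting" {
  value = "HelloWorld, ${var.name}"
}

output "suffix" {
  value = random_id.example.hex
}
```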

This is a very basic example of a configuration, and what I’m gonna do now is create a new workspace within Terraform Enterprise that uses it. For lack of a better name, I’ll just give the workspace a basic name—we have documentation on better naming conventions; I’m keeping it simple for the sake of the example. Then I’m gonna select this project from GitHub, from this repository. Before the demo, of course, I configured the authentication between GitHub and Terraform Enterprise, so Terraform Enterprise has access to my list of repos. When creating this workspace, we also have other options we can set, including the working directory or VCS branch, so we can configure where Terraform is executed from. I’m not gonna go into those details; we’re just gonna stick to the defaults, and I’m gonna go ahead and create this workspace.

Behind the scenes, Terraform Enterprise connected to my VCS—GitHub—and registered a webhook. If I were to make any kind of commit or PR on the repository now, it would kick off a run. I’m not gonna do that; we know it’s gonna fail because we haven’t set the required variable yet. So I’m gonna switch over to the variables view, and I’m gonna edit.

I just set a new variable for this workspace. You can think of a Terraform variable as the equivalent of passing the -var flag on the command line with Terraform Open Source. We’re basically feeding this variable into the environment that’s gonna be executing the Terraform plan-and-apply.

Other than this, we also have two other types of variables you can pass in. First is an environment variable. This is the equivalent of using the export command on the command line. Typically, this is used for setting credentials on providers; for example, if you’re using AWS, you can set AWS_ACCESS_KEY_ID as an environment variable, and the provider will use it for credentials. Then we also have the personal environment variable. It’s also an environment variable; however, the difference is scoping. Right now I’m using this workspace as a single user, but you can have multiple users connected to a workspace, and each personal environment variable is scoped to an individual user. So if another user were to log in and look at this exact same workspace, the set of variables they would see here would be different from the ones that I see.

So we set the variables. Next, I’m gonna go ahead and queue a plan. What’s happening now is that Terraform Enterprise reached out to GitHub, pulled that repository and configuration, and automatically triggered a plan. I’m gonna scan this. What we can see here are the logs of running Terraform plan: it provided that variable and executed the terraform plan command under the hood. It’s successful, and now we have the option to confirm and apply.

In this case, because it succeeded, we can do confirm-and-apply, and it will execute it. I’m gonna go ahead and do that. To do this you need appropriate permissions, so you can actually set things up so that not all users can perform the apply. I have permission, so I clicked confirm-and-apply; that executes the apply command, and we can see the queued apply. And in the output we see those two output variables—the “HelloWorld” values. So it took that variable and executed the apply.

So we went through this entire demo: creating a new workspace, connecting it to VCS, setting a variable, and then triggering a run. All of this was executed in the context of Terraform Enterprise. One of the things I didn’t show you is the team management. This is really designed for working in a team environment: you can create teams and give those teams various types of permissions on different workspaces in the organization.

That’s the demo for the core Terraform workflow.

I wanna go back to this slide again. We were talking about all this collaboration functionality. When using Terraform Enterprise, you can create a lot of different workspaces, you can have a lot of configurations, states, and teams, and it really helps manage the organizational complexity. One of the things that’s interesting is, when you start repeating this entire process, you can—in a larger organization—enable developers in all the different teams to provision their own resources. Basically, we’re enabling self-service infrastructure. This actually creates new types of challenges, and that’s really where governance comes into play.

Last month at HashiConf, we introduced Sentinel. It is an embedded framework for defining policy as code using its own syntax, the Sentinel language. Sentinel follows a lot of the same product principles as Terraform: it allows you to express your rules, your policies, basically as code; it provides a common workflow, so you can also test and apply those policies; and it’s extensible—you can import your own modules to integrate with any kind of third-party or internal services and pull in other data to define your policies around. The language itself is easy to learn, and the logic is quite robust; you can express a lot of different rules with it. That’s policy as code with Sentinel.

What I wanna walk through now is how this applies to Terraform and Terraform Enterprise. If you are using Terraform Enterprise and you have enabled all of your users to provision resources, what you wanna do is create policies that define rules and restrictions around what can and cannot be done. The first step in using Sentinel in the context of Terraform is defining those policies. Once you have defined them, you save them to your organization, so they become embedded as part of your organization and are enforced on all of the runs. In the first demo, I showed you a plan-and-apply. Once you set a policy on an organization, there’s gonna be an intermediate step between the plan and the apply: a policy check. If that check passes, you can perform the apply with the confidence that all the resources you’re provisioning are compliant with those policies.

So that’s the overview of Sentinel and how it applies to governance and Terraform Enterprise. What I want to do now is show you how that works in Terraform Enterprise.

The first thing I mentioned is that I wanted to define a new policy for provisioning those resources. I’ve defined a very basic Sentinel policy. What it does is import tfplan, which enables Sentinel to reason about my Terraform configuration—it’s able to parse the configuration, the state, and the plan. Then we define a new rule: using tfplan, we iterate through all the resources of type aws_instance—basically all the AWS instances—and define two rules. One is that the resource contains tags, and the second is that it contains billing-id.

What this policy is basically doing is… if I’m in a large organization, and let’s say I have another system that handles all the billing within AWS, and I want to define this billing ID in order to make sure that we can easily bill and attribute all the different resources to the appropriate teams within my organization, we’re defining this policy to be able to say, “You know what, we’re gonna use tags on all the AWS instances, and we’re going to use this billing-id tag to be able to do that, to identify those instances.”
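A sketch of such a policy in Sentinel; the structure follows HashiCorp’s published tfplan examples, but the exact expression here is illustrative rather than the policy from the demo:

```sentinel
import "tfplan"

# Every aws_instance in the plan must (1) have tags at all and
# (2) include a "billing-id" tag for cost attribution.
main = rule {
  all tfplan.resources.aws_instance as _, instances {
    all instances as _, r {
      length(r.applied.tags) > 0 and
      r.applied.tags contains "billing-id"
    }
  }
}
```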

So this is what this policy is doing. What I’m going to do now is switch back over to GitHub—I’m putting on my operator/developer-practitioner hat—and I’ve defined a different configuration here. What this configuration does is create a VPC and a subnet; then, most importantly, it creates a new aws_instance. One of the things in this aws_instance is the tags section, which I have commented out.
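The configuration described might look roughly like this (the AMI ID, CIDR ranges, and resource names are placeholders, not the demo’s actual values):

```hcl
resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "demo" {
  vpc_id     = aws_vpc.demo.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.demo.id

  # The tags section mentioned in the demo, commented out for now:
  # tags = {
  #   "billing-id" = "team-a-1234"
  # }
}
```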

Currently, this configuration has no tags defined on it, and that’s where we want to enforce this policy on this configuration. So I’m gonna switch back to Terraform Enterprise. What I have on here is, I’ve created this webinar app, and I’ve already connected it to this repository, and I’ve also set my credentials in here for AWS.

What I wanna do now is set this policy for my organization. We have this organization called Webinar Demo, Sentinel Policy. I’m going to create a new policy, and I’m gonna take this policy that we defined here, and I’m gonna set this here.

Give it a name, and there is a drop-down here for defining the enforcement level. There are basically three different levels of how I want this policy enforced.

  • The first is hard-mandatory: This basically means that everything must be compliant with this policy and there are no exceptions to that rule.
  • The second is soft-mandatory: If the policy check fails, certain users will be able to override it—those with appropriate permissions, which are the admins of the organization. The idea is, let’s say you define a policy that says you cannot do any deploys Friday through Sunday—you just don’t want anybody to push anything out over the weekend. However, if you have some kind of emergency where you need to do that, you would set the policy as soft-mandatory so that an admin can override it as needed.
  • Last, we have advisory: This is more about providing information—helping practitioners understand what the policies are and how they should be using them. It will not block an apply from happening.
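The weekend-deploy example could be expressed as a Sentinel policy roughly like this, assuming the standard time import (a sketch, not a policy shown in the webinar):

```sentinel
import "time"

# Block runs on Friday, Saturday, and Sunday. Set at the soft-mandatory
# enforcement level, an organization admin could still override this
# in an emergency.
main = rule {
  time.now.weekday_name not in ["Friday", "Saturday", "Sunday"]
}
```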

For now, we’re gonna use hard-mandatory for this new policy. We’re gonna go back to this app, and now I’m gonna queue a new plan. What this is doing is pulling the configuration from my repo and kicking off a new run, and so we’re gonna see the plan that it actually generated. It’s creating the VPC, the subnet, and also this new AWS instance. Then we scroll down and, because we have defined this new policy, there is a new intermediate step called “policy check.” Now I’m gonna view this check. What we see here is that one policy was evaluated—the one that we defined, the billing-id-tag-required policy—and the result was false, meaning that we did not comply with this policy.

Now, at this point, we’re blocked. We cannot continue performing an apply. Had I set this as a soft-mandatory policy, and if I was an admin, I would have a button here that would say, “Override and continue,” and it would allow me to override this policy and continue even though it wasn’t compliant. That’s only if it’s set on soft-mandatory, and if I have appropriate permissions.

Now that we have that, what I’m gonna do is go back to this configuration. As the user, I created this configuration and this run; it wasn’t compliant, so now I say, “Oh, okay, I have to fix my configuration for it to be compliant with this policy.” So I’m gonna set these tags here.
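The fix is just to add the tags that the policy requires; continuing the earlier hypothetical configuration, the relevant fragment would become:

```hcl
resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.demo.id

  tags = {
    "billing-id" = "team-a-1234" # made-up billing ID
  }
}
```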

I committed this change in VCS and added the appropriate tags. Because we’re using VCS and the webhook was already set up, Terraform Enterprise picked up the change and went through that entire process again. So we see the plan. It looks exactly the same as before—it defined the AWS instance, VPC, and subnet—but now when we go to the policy check, we see that the result is true, because my configuration is compliant with this policy. As you can see, the Confirm & Apply button is available, so now I can click it and it will go through the entire process of provisioning those AWS resources.

I’m not gonna do that because that takes a moment, so let’s just skip that part.

This demo showed me using Terraform Enterprise and defining a new policy, which required tags—including that billing-id tag—to be set on all AWS instances. I went through the three levels of policy enforcement, and then we went through an entire run: a plan-and-apply with the intermediate policy check.

So that’s the intro to Terraform Open Source and Enterprise—the collaboration and governance functionality. I wanted to show you those demos just to show you how this actually looks, how collaboration and how governance actually look within Terraform Enterprise. Now I’m gonna hand this off to Jordan to guide us through assessing and progressing through infrastructure-as-code.

Jordan Taylor: You’ve probably seen the recent post about Terraform recommended practices. I’m going to introduce that guide and how it can help you assess your organization and also progress your organization on the journey of infrastructure-as-code.

Before I do any introductions, I just want to justify the value of listening to this presentation. Most of us are already playing with Terraform, and we love the tool, but, from experience, there’s a lot of pain when it comes to scaling Terraform in the enterprise if it’s not done correctly: production environments being destroyed, spend being a lot bigger than you expected, and the people who are actually paying the bills not knowing what is running in their cloud. This gives some insight into how you can work toward keeping all these things at bay.

I’m Jordan Taylor, a DevOps practitioner from Contino and a HashiCorp practice lead. Who are Contino? Contino are an enterprise DevOps and cloud transformation consultancy based in the UK, USA, and Australia. We focus on helping regulated enterprises with the successful adoption of DevOps and continuous integration. We’re also a HashiCorp systems-integrator partner, and the reason we can speak about this kind of stuff with Terraform is that we’ve implemented these tools across over 70 engagements now.

So, assess and progress: What am I talking about? The Terraform Recommended Practices guide gives organizations a self-assessment across 23 questions to see where in the journey of infrastructure-as-code they are. With this insight, they then know the next steps that they need to take on the roadmap to collaborative infrastructure-as-code.

The four phases of the journey tend to be:

  1. Manual—At this stage everything is heavily manual; there is pretty much no infrastructure-as-code in the organization, and almost zero traceability in your environments.
  2. Semi-automated—At this stage, you’ve got some manual processes, but also some infrastructure-as-code in the labs of the organization, and there will probably be multiple solutions to the same problem. There’s just a lack of unification across the organization when it comes to how you provision your infrastructure.
  3. Infrastructure-as-code—At this point, you’re probably hitting the more mature levels of Terraform Open Source. Things are fully automated, everything’s consistent, and you also have a lot of reusability. But that’s just in a few teams; when it comes to multiple teams and the whole organization needing some compliance around how people provision infrastructure, this is where you move on to…
  4. Collaborative infrastructure-as-code—This is where you need tools like Terraform Enterprise.

When you use the best-practices guide, a typical assessment that I have seen—let’s take Bank X, for example. The organization has long lead times to get new infrastructure. There are a lot of teams within the organization playing around with Terraform, but you work in a heavily regulated, process-driven environment, so any big changes in the organization are tough to get through. But at the end of the day, you need to deliver code more effectively and more efficiently, so you need more speed.

As a taster, I’m just going to go through a few questions just to show you the kinds of things that you will be assessed on.

How do you currently manage your infrastructure?
As Bank X, I know that it’s not [solely through a UI or CLI]—we use command-line scripts and a combination of UI and infrastructure-as-code. We’re playing with Terraform in the labs, but a lot of the guys will just go to the UI and change security groups, and then when I run my Terraform again, things have broken.

What is the process for implementing changes to existing infrastructure?
I know that we have configuration management. We’ve got a lot of Puppet running on the sysadmin end, and there are some guys using Ansible in the app-development teams. So that’s pretty good.

How are changes promoted through the environment?
We’re quite good here. We’ve nailed the idea of no manual changes to the environment. We develop and test locally first, and then it goes to UAT, Staging, and then Production. But we don’t have deployment pipelines all the way through to production—there are still manual steps in each of the promotions between environments.

After that short three-question assessment, I can see that the majority of my answers are twos [semi-automated]—I answered number two more than any other. I answered number three [infrastructure-as-code] once, so we’re at the top end of the twos. That means we’re semi-automated: we have some infrastructure-as-code but we’re lacking unification, and we have some configuration management, but we’re not completely manual. We’re making progress.

And so, how do I progress to the next phases of infrastructure-as-code? I’m going to start right from the start. Let’s pretend that we didn’t do that assessment. We’re now a manual [organization]—we’re going to start from day one. When you’re building infrastructure manually, it’s very hard to audit and keep control of who’s doing what. And if you can’t keep control over who’s doing what, then it’s going to be hard for you to scale while also remaining compliant and secure.

And sharing knowledge of how everybody is doing things is also very ad hoc. It doesn’t really happen—an email here and there—it’s not really controlled. So how do you get to the semi-automated phase? Obviously, I’m going to say, “Install Terraform, because Terraform is how everybody should do infrastructure-as-code.” Then you should start playing around with some Terraform configurations, building a few instances after reading the Getting Started guide. But then you want to take your first small project—whether it’s just a small team with a fresh application, the first people into AWS, let’s say—and implement that first piece of infrastructure in whatever your cloud provider is, using Terraform, just to get a feel for what it is to automate your infrastructure provisioning.

At this point, you are semi-automated. You have some guys automating the provisioning of infrastructure, and you have a small but meaningful subset of infrastructure managed as code, but the majority are still doing a lot of manual work and the practices aren’t ideal.

So how do we get to the infrastructure-as-code phase from where we are now? Most people would already be using version control as soon as I mention the word “code.” But if you haven’t already: everything, whether it’s configuration code or infrastructure code, should be in version control. Put all of your Terraform configurations into a version-control system, and then you should also look to start moving toward Terraform modules.

Just to illustrate the Terraform modules approach, when should you actually write a module? Are you writing reusable code? Yes. Is it small? No—it’s a full network stack. Is it an architectural pattern? Yes, a full network stack is. So in this case you would write a module.

But you will find that there are two types of modules. You have core modules, which are reusable by everybody—they could be the VPC modules that HashiCorp puts on their modules page. Or it could be a contextual module: relevant to your organization, but inheriting from core modules.

And this diagram is also in the documentation, but this gives some insight into when to know when to write a module with Terraform.
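A sketch of that core-versus-contextual split; the registry module shown is a real public example, but the contextual module’s source path is made up for illustration:

```hcl
# Core module: generic and reusable by everybody, e.g. a community
# VPC module from the public Terraform module registry.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "app-network"
  cidr = "10.0.0.0/16"
}

# Contextual module: specific to your organization, inheriting from
# core modules internally (hypothetical source path).
module "bank_x_network" {
  source      = "git::https://example.com/bank-x/terraform-network-stack.git"
  environment = "staging"
}
```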

At this stage we’re still semi-automated. Everything’s in VCS, and we’re starting to write modules. But we need people to know about Terraform and about the practices we’re going to start using with it. So at this stage you can start to get everybody into a set of compliant habits. They should be starting to do test-driven development: your Terraform configurations can have unit testing and integration testing as they move through development and test, into production.

By this stage you should also look to use configuration management. If you already have it in the organization, that’s great—you can use that with Terraform. But if you don’t, now’s the time: everything within a host should be managed by configuration management or some sort of automation, with no manual changes happening on boxes at this stage. And also, people have secrets in code, whether it’s database connection strings or the access keys and secret keys used to reach the cloud provider’s API. These things shouldn’t be out in the open and, to be honest, they shouldn’t be static—they should be dynamic. This is where HashiCorp Vault comes in: you can create these database credentials, or AWS credentials, dynamically, or you can store things in an encrypted fashion in the K/V store.
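For example, with Terraform’s Vault provider you can read a secret out of the K/V store at plan time instead of committing it to version control (the secret path and keys here are assumptions for illustration):

```hcl
# Vault address and token usually come from the VAULT_ADDR and
# VAULT_TOKEN environment variables, not from code.
provider "vault" {}

# Hypothetical secret stored at secret/app/db in Vault's K/V store.
data "vault_generic_secret" "db" {
  path = "secret/app/db"
}

output "db_username" {
  value     = data.vault_generic_secret.db.data["username"]
  sensitive = true
}
```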

But at this point, we’re now at infrastructure-as-code. It was quite a tough journey—a lot of changes—but we’re at a good point. Still, we only have a few teams that are really strong, and to be honest, those core teams don’t have unified practices. The guys from above don’t have any visibility into what is actually going on and don’t really have much control over their cloud, even though they’re paying for it.

And so we have at least one Terraform module. A few guys have made a lot of progress, and there is much more visibility, because all the infrastructure is now as-code, but only to the technical guys who can read that code.

Now, we’re at a stage where we want to go to collaborative infrastructure-as-code, where all the teams can collaborate and where there is much more visibility. At this point, you don’t want to solve this problem from scratch, because lots of companies have come together and said, “We have these pain points with Terraform.” And HashiCorp listened—the result is Terraform Enterprise.

It tackles all of the problems that everybody hits when they start to scale their automated infrastructure-as-code provisioning. Once you understand the Terraform Enterprise running environment, you realize that how you split your repositories—or your workspaces, which is another word for it—is the most important thing. As Maciej said earlier, managing Terraform state when you have lots and lots of states is a big issue, and how people use the metadata from those states is also key.

On the right here are a few examples of how you would split the workspaces: the networking team has their own workspace, the billing team has their own workspace, and the security team has their own workspace. Each manages their own slice of the cloud provider—networking-related configuration, security-related configuration, and so on.

Let’s go back to networking and say, “Okay, we’ve provisioned the underlying VPC. You guys can use this set of subnets.” That’s in the networking team’s Terraform repository, but the team over here also needs the values from that networking repository. That’s where Terraform Enterprise handles sharing that metadata between the two, and everybody has their own individual workflow to get new code through the lifecycle as well.
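That metadata sharing is what the terraform_remote_state data source provides; a sketch assuming the networking team’s workspace publishes a subnet_ids output (the organization and workspace names are made up):

```hcl
# In the application team's configuration: read outputs published by
# the networking team's workspace.
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "bank-x"
    workspaces = {
      name = "networking-prod"
    }
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_ids[0]
}
```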

Not only are people able to test within their own software-development lifecycle; they can also easily inherit from other people’s or other teams’ repositories. At this point you can create the workspaces, and once you’ve created them, you decide who in each team should have which permissions. Take, for example, the networking team down here: a networking engineer is able to read the repository; the owners of the networking workspace can read and write; and central IT has admin over it. The team owners and central IT are the only ones who can say, “Yes, you can perform an apply on networking in production.”

But the engineers won’t have the ability to make any direct changes to production, because there are probably policies applied to make sure that certain changes can’t go into production unless certain conditions are met. And only particular people should be able to make those changes and perform those applies as well.

And the final stage: restrict any non-Terraform access. What am I saying here? Once you reach a certain maturity level with Terraform, you should no longer be able to do anything with the cloud provider unless you’re doing it via Terraform. That way you get to apply compliance all the way through the pipeline. Any change that goes into production has gone through a series of validations—unit testing, integration testing, and Sentinel—so you know that all of your policies are being adhered to.

And then once it gets into production, why would you need to log into the UI to make a change when you know everything is as it should be? That is the end goal.

That’s infrastructure-as-code in a collaborative fashion. We’ve scaled it to multiple users across many data centers. Teams are building all of their infrastructure without any conflict or any risk between each other. The guys who are paying for the cloud and get the most value from the cloud are getting the governance and auditability factors that they need because everything is all built in from one place. They know who is doing what and they also know that testing and policy tests are being applied all the way through the process.

And so Terraform Enterprise—why do Contino think it’s really going to make a difference in the enterprise? As I said before, multiple teams are able to manage their own workspaces and their own Terraform states. You have secure variable management natively: Vault is running under the covers to manage the variables in an encrypted fashion. Everything’s auditable as well. And with the permissions people have on workspaces—a given person can’t run an apply at a given time—plus Sentinel, all the role-based access control that everybody needs in the enterprise is ticked with this tool.

And the softer view of it is that it changes the culture and mindset around infrastructure-as-code. Infrastructure is code now. Configuration is also code, or should be. And application code is code. They should all follow the same lifecycle: all tested, all with pipelines through to production, and nobody making manual changes. Everything should be treated as code with the same software-development lifecycle.

Go and check out the Terraform Recommended Practices guide and just give your organization or your teams a self-assessment to see where you are on the journey. And then from there, you can see, “Oh, we’re doing quite well. We’re already at infrastructure-as-code phase.” But if you’re not, then maybe you need to look at how you can collaborate effectively with Terraform or something like Terraform Enterprise.

Thanks for listening. If you’ve got any questions from the Contino angle, if you’re quite low in the maturity scale when it comes to infrastructure-as-code, then feel free to contact us. But if you feel like your organization is already at a level where you just need the tool, you need Terraform Enterprise, then contact the guys at HashiCorp.
