Learn about Google's new Magic Modules in its Terraform provider.
With over 100 providers under its belt, Terraform provider development has come a long way. An exciting new approach, Magic Modules, is being pioneered in the Google Cloud Platform provider, allowing new resources to be supported within hours of being launched.
Come learn about the past, present, and future of the Google Cloud provider, get all the details on machine-generating the provider, and hear about the unique opportunities this opens up for the provider that we’re exploring.
Paddy: Hello. Hi everybody. Hello, friends. Welcome to The Magic of Friendship, the Google Provider's New Approach to Terraform.
I hope you're all having a good afternoon and enjoying the conference. Yes? I'm going to assume yes. Excellent.
Today we want to share a little bit about what we've been doing and to make a lot of new friends. I see a lot of you out there, so I'm glad we've got so many new friends today. To start us off, Dana, do you want to introduce yourself?
Dana: Sure. My name is Dana, I am a software engineer at Google. I'm the technical side lead for HashiCorp Integrations for GCP. I'm one of the maintainers for the Google Terraform provider and you can follow me: @danawillow on both GitHub and Twitter.
Paddy: Awesome, and as mentioned, I'm Paddy Carver. I'm a software engineer. I work for HashiCorp on the Terraform ecosystem team. This is the team that does all of the provider maintenance outside of our official product providers. My primary responsibility is to help maintain the Google Cloud Platform Terraform provider, along with Dana and her wonderful team at Google. If you've been on GitHub, you know me as the mean guy, @paddycarver, that keeps telling you "no" in all of your GitHub issues 'cause that's kinda what I do every day.
Today, we wanted to talk about a few things. First and foremost, we want to talk about where we've been. We want to give you a little bit of context to what we've been doing. But, we also want to kinda give you an update on where we are and what we've been doing with the provider and how that's going for us. And, we want to take a little bit of a look ahead to where we're going and talk a little bit about the future of the provider and where we see it going in the next year or so.
While we do that we also want to talk about friendship because you know what's great? Friends. Friends are great but also, this is my favorite story to tell. I was in Seattle working with Dana and her team at one point, collaborating, talking about the provider and Dana tells this story about how one of her coworkers comes up and saw like a tweet of mine or something and they showed it to her and she was like, "Yeah that's my friend Paddy". I was like, "Oh we are friends, you said that? There are witnesses? You cannot take this back? We are friends now." The joke has been that Dana's my friend now. But, it's not so much a joke because we do have a great friendship between the two companies. We do work really well together and because of the way that we collaborate closely with each other, we're able to do a lot of really interesting things with this provider. I feel really fortunate that I've been able to be part of this.
We also want to talk about some of that cool stuff that we have been doing—and that's our new approach to Terraform provider development. So, we're going to give you a little bit of insight into what we're doing that's a little bit unique, a little bit unusual for how Terraform providers are usually developed.
To start with, let's kick things off, let's talk a little bit about where we've been and gain some context to understand all of this new information. First, there was nothing. There was no Google provider when Terraform 0.1.0, the very first release of Terraform, went out. Google was the seventh provider to be added to Terraform. Mitchell Hashimoto created the first commit, creating the Google provider on August 25th, 2014, and created a Google compute instance resource that had no fields and did nothing. But, and this is indicative of how we do things at Terraform, it created the user experience. It showed an example of the code that he wanted to write to get a Google cloud compute instance, and he built from there.
Three days later, we released version 0.2.0—I say we, I wasn't here—we released version 0.2.0 of Terraform and that included the Google provider. We now had a working compute instance along with four other resources—I believe it was networks, IP addresses, disks, and routes—released as part of 0.2.0, and that was the very first release that had working Google Cloud compute in Terraform. That was on August 28th, 2014.
Dana: The first commit by a Google employee was made in January of 2015, and then shortly after that, two other Google employees started making a lot of really important contributions to the provider in their 20 percent time at Google. So, between them, the Terraform team at HashiCorp, and some community members, we had a working provider with good coverage for most GCP resources. But, at that time, there was nobody who was working on it as their primary responsibility, and so it felt a little bit more like a collection of resources versus one unified provider.
Paddy: This begins, what we like to call—well I like to call it Dana and Paddy's Reign of Terror…
Dana: I like to call it Dana and Paddy's Reign of Awesomeness!
Paddy: But, what we did is we came in, I joined HashiCorp and Dana started on the project and we came in and we started changing things very rapidly. Dana do you want to talk about your first commit?
Dana: Sure. In 2016, I joined the Cloud Graph team at Google. A bunch of my coworkers are here, hi. Our mission is to meet our users where they are and provide really great experiences for people who want to use GCP with open source tools.
Paddy: As is typical with our working relationship, Dana showed up in the end of October and did something really cool. She started committing changes to help enable cross-project networking. Two weeks later, I showed up like a day late and a dollar short with a cup of Starbucks and a really unimpressive bug fix and that was my first commit. I was like, "Yeah we're equals!" But no, it worked out really well.
Dana: But, Paddy also landed his change before mine, so really we are equals.
Paddy: Yes. It's been a give and take. Dana will not let me be self-deprecating during this talk. You'll just find her correcting me being like, "No, Paddy's awesome." Don't let her fool you.
Dana and I got together, we recorded a video for Google in February of 2017 that was exploring the working relationship that Google had with HashiCorp. We both talked about how we collaborated in Terraform and after we finished this video, we did something unusual for the provider, this is mind blowing just everyone kinda take a moment, collect yourself, get ready for this. We sat down, we decided what we wanted the provider to be and we laid out steps on how to get there. That's right, we made a plan. It was an unusual day for us.
We knew what we wanted to do, we knew how to get there so we started collaborating and figuring out how do we work together and make this happen. What we wanted to do was we wanted to make sure that the common use cases, that the things that most people are going to be using the provider for, were going to be covered by the Terraform provider. Any infrastructure that you stand up on Google cloud is going to have a compute instance in it. It's going to need networks, it's going to need IP addresses, it's going to need disks. We wanted to make sure that those use cases were rock solid, that they offered the user experience that we wanted and that we were proud to stand behind that.
We started doing things like adding features and fields that were missing previously, that had been added since the resources were created and nobody had gotten around to updating them. We fixed some of the UX things that we had been experimenting with that we decided were not what we wanted and we ended up with a solution that we thought was pretty okay. That was basically what we did for the first half of 2017: get our house in order, to build a nice foundation to build everything off of.
Then in June, the middle of 2017 (I always think of summer as the end for some reason, but I guess it's the middle), the provider split happened. Terraform providers used to live in the HashiCorp Terraform repository and everyone was kinda working on top of each other. You could only release when Terraform released a new version. So, we did the provider split and we wound up in our own terraform-provider-google repository.
What this allowed us to do is it allowed us to work a little bit more independently, to have releases on our own schedule, and to have more velocity by having a little bit more control of our destiny that way. It worked out really well for us. It shows because roughly four months later on October 2nd, we released version 1.0 of the provider. That was nice because we were able to deprecate and get rid of some of the UX decisions we had made that we weren't happy with, that we thought we could do better at. So, that provided us our first real opportunity to make big, breaking changes in the provider and we took advantage of it to try and make it the best we could make it.
After we finished with that, we turned our attention to what I'm gonna call niche resources. These are resources that are important in infrastructures: they're your Cloud IoT registries, they're your Cloud Endpoints, they're Bigtable. It's things that are very important, very valuable for a lot of people, but which aren't necessarily in every single deployment that you do to Google Cloud, that are not going to be part of everyone's architecture. We wanted to make sure that we had best-in-class support for these things, too, because if you're using Terraform with Google Cloud, then you should have best-in-class support for whatever it is you're trying to do. We turned our attention to finishing up the year trying to support that stuff.
Dana: At this point, things were going really well. We had a lot of really good feedback from our users, people really liked what we were doing and we were seeing the number of feature requests go up and up. In the summer of 2017, we launched the ability to use the provider to provision resources that were in beta or had fields that were, and that was really exciting to a lot of people. But, the number of beta features that exist in GCP is large and it's growing; more and more products keep coming out. We started thinking about ways to scale what we do so we can make sure we land support for new things even faster.
We started looking for places where we could refactor out common patterns. So, if you've ever used our IAM resources, you've probably noticed that they all come in threes. We have IAM policy, IAM binding, and IAM member. Our IAM resources all share common code underneath, so that when we need to add a new IAM resource, we just need to plug the necessary API calls into the right places instead of building these from scratch every time.
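To make the shared-code idea concrete, here's a small Go sketch of the kind of helper that policy, binding, and member resources could all sit on top of: merging bindings by role. The names and structure here are hypothetical illustrations, not the provider's actual implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// Binding associates a role with a set of members, mirroring a GCP IAM binding.
type Binding struct {
	Role    string
	Members []string
}

// mergeBindings combines bindings that share a role and deduplicates members --
// the sort of shared helper that lets three resource types plug into one
// implementation instead of each reinventing it. (Illustrative sketch only.)
func mergeBindings(bindings []Binding) []Binding {
	byRole := map[string]map[string]bool{}
	for _, b := range bindings {
		if byRole[b.Role] == nil {
			byRole[b.Role] = map[string]bool{}
		}
		for _, m := range b.Members {
			byRole[b.Role][m] = true
		}
	}
	var out []Binding
	for role, members := range byRole {
		merged := Binding{Role: role}
		for m := range members {
			merged.Members = append(merged.Members, m)
		}
		sort.Strings(merged.Members)
		out = append(out, merged)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Role < out[j].Role })
	return out
}

func main() {
	merged := mergeBindings([]Binding{
		{Role: "roles/viewer", Members: []string{"user:a@example.com"}},
		{Role: "roles/viewer", Members: []string{"user:b@example.com", "user:a@example.com"}},
	})
	fmt.Println(merged)
}
```

With a helper like this shared underneath, adding a new IAM resource type is mostly a matter of wiring in the service-specific get/set policy API calls.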
And so, from there, we started thinking about finding other similar patterns or places where we could reduce the amount of code an individual needed to write to land their feature. At the same time, some others on my team back at Google were working on a project called Magic Modules. Magic Modules is a code generation tool that, at the time, was being used in some other open source projects to write code for them from scratch. Even though we already had a huge amount of Terraform code already written in the Terraform provider, we started looking into whether we could use Magic Modules for Terraform to help us with this scaling, to help us scale the provider better.
Just to talk a little bit about the philosophy in Magic Modules, we kind of have these three C's we use to guide us. The first one is that it's comprehensive: If GCP offers a product that makes sense to configure with a tool like Terraform, we want to be able to support that product. Pretty straightforward. We want something that allows us to add new features proactively, right as they're released, and where the amount of work needed to maintain the code base doesn't keep going up and up when we want to add new products and new features.
The second one is consistency: We want all of our resources to look and act the same. They should all have support for all eligible Terraform features. Documentation should reflect exactly what's available on the resource, our examples should be up to date and if we fix a bug or find a better way of doing things in one resource, we want to feel confident that we don't see that same bug in any new resources and all our existing resources can have these fixes, too.
The final C is for Cohesion: So whether we generate a resource or we write it by hand, we want everything to fit together. As the user, you shouldn't need to know the intricacies of how the resource was built, only that it continues to work as part of the system in the way that you expect.
Paddy: So, when Dana had approached us and was like, "Hey. My team came up with this great thing. We want to apply Magic Modules to what we're doing at Terraform," we really had two priorities or two guidelines that we really wanted followed before we were like, "Yeah. Let's do this." The first is we really wanted it to be the same Terraform experience. Users shouldn't have to know or care about Magic Modules, and we weren't going to compromise on the user experience of Terraform just because code was generated. It had to meet the exact same guidelines that we have for our human contributors for user experience, even though it was a robot contributor.
But our users aren't just the people that download the provider and use it. Our users are also the people that help contribute fixes and help investigate bugs on the provider, our open source community. And so, it's not enough that the people that download the binary can't tell the difference. We need the people who are contributing code to the provider to not be able to tell the difference either. So, we had to have the same expectation for code quality. It had to meet the same linting, vetting, testing, and code review guidelines that all human contributions do. Again, Magic Modules is just like a human contributor. It's just a robot. So we wanted the same process for both humans and robots.
We first released the Magic Modules resources in 1.6.0 in February 2018. We released, I believe it was the compute backend bucket. It was the first resource that was released that day. Nobody noticed. We didn't put it in the change log. Nobody had any idea, nothing changed. That was exactly what we wanted. That was the plan, because Magic Modules was never meant as a replacement for our developers. It was never meant as a way to change the Terraform experience for users. It was always meant as an aid for our developers to do their work more efficiently and better. That kind of brings us to where we are today. So I want to give some context on the things that we're working on now, the things that we've been doing and kind of show a little bit of what all of this work has brought us.
Right now we're doing a lot of work on tightening up resource designs. We've been working on the provider for about two years. We've tried a few things. Are there any App Engine people in the crowd? I'm assuming yes. I can't actually see anybody. I made an unpopular decision about App Engine, and we are correcting it. That's a thing we do sometimes, right? We do the best we can to make decisions, and sometimes that's not what our users need, so we adjust course. That's something we're trying to do across the provider right now. So, we can tighten our resource designs so that we're happy to stand behind them for the next year, so that as we're generating these things they are consistent. They are cohesive, and we don't need to introduce breaking changes just because we want to generate something. Or worse, we don't want to have to build weird exceptions into the generator just to take care of this one odd thing that we did for no reason just because we want to be able to generate it.
But, we're also working on adding resources to Magic Modules. We want more things to be generated. We want more things to have this consistency, cohesion, and best practices baked into it. Right now, more than 30 resources are generated with Magic Modules. If you are using the Terraform Google provider, I can almost guarantee that you are using Magic Modules today.
I want to talk a little about numbers, because I did not know these numbers until this week and I'm very proud of them. So, I'm going to share them all with you, and you can be amazed or not, as is your wont.
So, this is a graph. We're covering our issues, breaking it down by bugs and enhancements, talking about number of resources we have, talking about number of Magic Modules resources we have. But, you'll notice that closed issues is kind of skewing this graph a little bit. It's hard to see what's going on with the rest, because closed issues is kind of making the Y axis a little bit unreadable. It turns out there's a good reason for that. More issues were closed in the last year than were created in the entire history of the provider, all three years up to that point. We closed something like 630 issues in the last 365 days, which compares to the 600 issues that were created in all of the time existing before that for the provider.
But, let's talk about that for a minute, because it's nice to have issues. Right? Everyone wakes up in the morning and is like, "You know what I want today? I want a new bug reported on my software." It's a good feeling. It's good to have issues, because that can mean user traction. But, issues can also mean that we are producing buggy software, that our quality control isn't up to snuff. Or, it can mean that people want new resources and we aren't releasing them, or that we're not being active enough and responsive enough. So, let's talk about the breakdown of our issues here. Because as we just saw, I think we're being responsive enough. We're closing a lot of issues, so I think that's clear that the problem isn't that we're not being responsive enough.
But, you'll notice that we're still getting a spike in issues here. That means two things to me. First, that means Dana was right. We need more scaling. We need to figure out how to scale this, because we'll never be able to keep up with this graph. This graph is that awful hockey stick of resource consumption that we need to keep up with. But, it also means that people are excited, and people are using this provider. I say that because if you noticed, the red line there is the number of open bugs we have. The number of open bugs we have isn't spiking. You would expect if we had quality control issues, you would see a lot of open bugs on the software. But, you don't. It remains roughly stable with the number of open features we have. Which, to me, indicates usage. It doesn't indicate any instability, quality control problems, or any other kind of issue that we have in our development process that needs fixing. It just means, "Hey, we're doing good things and people want to use it." And when people use your things, sometimes they run into problems.
You can also see the green line there; that's resources. I love the vertical slope of that. You can see Magic Modules down there in the corner. I hope that we can share another graph like this sometime in the future that shows Magic Modules eclipsing our open issue count, because that would be very exciting to me. So, that's where we are.
Oh, I forgot the most important part. We can also say with confidence that this is related to more use because in the last week, we had six times more unique IPs downloading the Google provider for Terraform as we had for the same week last year. So, we've got 6x growth in the last year, which I think is amazing and pretty awesome. It's also very gratifying, because you don't really see that kind of thing when you're just fixing bugs and closing issues. You're just plodding along, doing your daily work. But, it adds up. So, that's very exciting for me.
Dana, we've been talking a lot about Magic Modules and kind of what it is. I would love to see a demo of how it works.
Dana: I'd love to give you a demo. The team and I and you included, we've been working really hard these last couple months with all the Magic Modules stuff. I don't know, I kind of maybe want to take a break and play a game. Do you want to play a game?
Paddy: I mean, I would love to play a game. Are you all up for a game? Can we show hands, game? Game, game? I can't see anyone, but I'm assuming game. Games are good. Yeah. Let's play a game. I'm game. Let's do this.
Dana: Awesome. Let's play Mad Libs. Do you like Mad Libs?
Paddy: I love Mad Libs.
Dana: Cool. So, if you're not familiar with Mad Libs, you will be very soon. So, we're going to play a game of Mad Libs. So, we've got this example. I have this template. It's telling a story, but some of the words in the story need to be filled in. So, Paddy, I'm going to ask you if you can fill in some words because I know it can be kind of stressful on stage. I've given you this word bank that you can or cannot use. So, Paddy, can you give me a noun?
Paddy: I know what that is. This is the expensive English degree getting put to use, finally. All right. My whole life has been leading up to this. I'm going to go with dog.
Dana: Great. Can I get another noun?
Dana: Okay. How about an adverb?
Dana: And how about a preposition?
Dana: Cool. So, the story we just came up with was, I believe: the dog ran up the hill. Then, it sadly jumped under the chair. So together, we've just made this story. While Paddy had the word bank, I had this story template. You can see that with this same story template, we can fill in different words from the word bank, and we get different stories. They're all similar to each other. They all have the same structure, but we can tell a different story. So, we can say that the cat ran up the mouse, and then it angrily jumped around the chair. Or, the cat ran up the hill, which makes a little bit more sense than running up the mouse, and sadly jumped around the chair. You get the picture.
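That fill-in-the-blanks idea maps directly onto a templating engine. Here's a minimal sketch in Go using the standard `text/template` package, with a hypothetical story template and word bank (the names and structure are illustrative, not Magic Modules' actual code):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The story template: blanks are marked as template fields.
const story = "The {{.Noun1}} ran up the {{.Noun2}}. Then, it {{.Adverb}} jumped {{.Preposition}} the chair."

// fillStory plugs a word bank into the template. Same structure,
// different words, different story.
func fillStory(words map[string]string) string {
	t := template.Must(template.New("story").Parse(story))
	var buf bytes.Buffer
	if err := t.Execute(&buf, words); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(fillStory(map[string]string{
		"Noun1": "dog", "Noun2": "hill", "Adverb": "sadly", "Preposition": "under",
	}))
	// The dog ran up the hill. Then, it sadly jumped under the chair.
}
```

Swap in a different word bank ("cat", "mouse", "angrily", "around") and the same template tells a different story.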
So, now if we take this again and we apply it to GCP itself, our story this time is more about how to create something in GCP. So, we can say a certain resource has fields, and we're going to send a request to a certain URL in order to create it. So, now the story we're telling looks something like this. The compute instance resource has the fields name, machine type, description, etc. And to create it, we send a request to this URL that has the words compute and instances in it. We could do a similar thing for storage buckets, or any other GCP resource that we want.
So the system that we've just come up with starts with a word bank of GCP resources, templates that can create something usable out of the items in the word bank, and a framework that puts it all together. That's Magic Modules. You put those three things together, and that is what you get. So, to show you a little bit about what it actually looks like in practice, I know I fooled you all. I had to tell you we were playing a game. You were really learning. This is what our word bank looks like. Each resource is represented in a consistent format. We have the information that describes the resource itself, like URL or description, and list of properties and information about those properties.
Next up, we have templates. So if you've written a Terraform resource, it should look pretty familiar. It's in Go, it follows a very similar format. So when you run the compiler, we end up with something that looks like this. All of the blanks in our story are filled in with information. We have a Terraform resource that is completely ready to use. A system like this means that if we find a bug that could affect multiple resources, we only have to fix it in one place ever. So, this is a real example fix from a bug report that we received. In a world where all resources are written by hand, trying to fix this across the board would've meant that we would have had to look through every single resource file for one that had the property that would've triggered this bug.
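To make the word-bank-plus-template idea concrete, here's a toy generator in Go: a hypothetical declarative resource description rendered through a `text/template`, standing in for Magic Modules' real Go templates. The struct fields, the template, and the output format are all illustrative assumptions, not the actual Magic Modules schema.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Property and Resource sketch a word-bank entry: one GCP resource
// described in a consistent, declarative format.
type Property struct {
	Name     string
	Type     string
	Required bool
}

type Resource struct {
	Name       string
	CreateURL  string
	Properties []Property
}

// schemaTmpl is a toy template standing in for the real resource templates:
// the blanks get filled in from the resource description.
const schemaTmpl = `resource {{.Name}} (create: {{.CreateURL}})
{{- range .Properties}}
  {{.Name}} ({{.Type}}{{if .Required}}, required{{end}})
{{- end}}
`

// render fills the template with one word-bank entry.
func render(r Resource) string {
	t := template.Must(template.New("schema").Parse(schemaTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, r); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(render(Resource{
		Name:      "google_compute_network_endpoint_group",
		CreateURL: "projects/{project}/zones/{zone}/networkEndpointGroups",
		Properties: []Property{
			{Name: "name", Type: "string", Required: true},
			{Name: "network", Type: "string", Required: true},
			{Name: "default_port", Type: "int"},
		},
	}))
}
```

The real system renders full Go resource files, tests, and documentation from one description, but the mechanism is the same: one template, many resources.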
Or, depending on how much time you have if you're maybe in a little bit of a rush, you might just fix it in the resource that the reporter found the issue in and just leave the rest alone until somebody else files another bug report. But, in Magic Modules, we were able to make the change in just the one file. And then, once that pull request is opened, our CI system which we call the Magician, opened this Terraform pull request to apply the fix in the 10 files that the bug could potentially have appeared in. So now I'm going to actually walk you through what provider development looks like in Magic Modules.
So we'll start off with this feature request. My colleague Morgante filed this request because he was looking for support for network endpoint groups. In this API reference, we can see the Google Cloud Platform resource that we're working with. We have a whole bunch of fields and their types, and you can see that some properties are output-only or have restrictions on the values that can be set. So using this reference, we create this API definition, this word bank that we talked about earlier. We're taking information from the documentation and from the API, translating it into a format that Magic Modules can understand.
So then after that, we add information that is Terraform-specific. So for example, we have this property id that we don't wanna show up in Terraform; name, which has extra validation; and zone, which we don't want Terraform to treat as required since we let you set a provider-level default. We also tell Magic Modules where to find an example of how to configure the resource. And speaking of that example, we write a template for this example. We do this by hand so that we can have it reflect how a real user would configure this resource.
And so now that everything's been written, we can actually generate the resource, that was all we had to write. So here I'm running the compiler, I tell it to generate the compute API for Terraform into that directory for the network endpoint group resource using the beta version of the API. And now let's take a look at what we got. So it looks like we have changes to four files: our main provider file, the resource itself, a test, and the documentation. So let's build our provider and then take a look at these files.
So here's our provider file; it has the new resource added to the list. Next up is the resource file, and notice that even though it's machine-generated, it's still human-readable, and it has features that might not always come in the first version of a resource, like import and configurable timeouts. And we can do a few nifty things automatically, like adding validate functions for enum values, and logic that lets users pass in either a name or a URL for fields that reference a different type of resource.
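Those two generated niceties can be sketched in a few lines of Go. Both functions below are simplified, self-contained illustrations of the ideas mentioned (the real provider uses the Terraform SDK's validation helpers and more careful self-link comparison), and the enum value shown is just an example:

```go
package main

import (
	"fmt"
	"strings"
)

// validateEnum returns a check for a field restricted to a fixed set of
// values -- the kind of validate function a generator can emit automatically
// whenever the API reference marks a field as an enum. (Illustrative sketch.)
func validateEnum(valid []string) func(v string) error {
	return func(v string) error {
		for _, ok := range valid {
			if v == ok {
				return nil
			}
		}
		return fmt.Errorf("%q is not one of %v", v, valid)
	}
}

// sameResourceRef reports whether two values refer to the same resource,
// whether the user passed a bare name or a full self-link URL, by comparing
// the final path segment. (Simplified sketch of the name-or-URL logic.)
func sameResourceRef(a, b string) bool {
	last := func(s string) string {
		parts := strings.Split(s, "/")
		return parts[len(parts)-1]
	}
	return last(a) == last(b)
}

func main() {
	check := validateEnum([]string{"GCE_VM_IP_PORT"})
	fmt.Println(check("GCE_VM_IP_PORT")) // <nil>
	fmt.Println(sameResourceRef(
		"https://www.googleapis.com/compute/v1/projects/p/global/networks/default",
		"default")) // true
}
```

Because these checks are generated, every enum field and every resource-reference field gets them for free, consistently.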
We also generate the markdown file that gets rendered into the documentation. The docs, just like the resource, are machine generated but human readable and will always stay up to date with what's supported in the resource because it comes from the same place. And the last thing we generate is a test file which contains some acceptance tests. These tests are based on that same example that we wrote, so not only do we have one fewer test to write by hand, but we can also be more confident that our examples and the documentation work and will continue to work since we'll run these tests nightly. So now let's actually see it in action.
And since most people run Terraform itself and not the test, we'll go ahead and demo a Terraform run. So we've taken that exact example from before into a config which we'll now apply and just to speed things up slightly for the demo, I've already created the network and the sub network so this is only gonna create the network endpoint group. And while this is running, I just want to let you know that this was all done in a single take, I did not edit this demo at all, this is a real thing that I did.
Just to show you that it did get created, it does work, I just ran a gcloud command to show that the network endpoint group is there. So yep, that's it. Magic Modules is open source, so anybody can contribute to it, and even though I showed the personal development workflow I did locally for this, if you open a pull request in the Magic Modules repository, the Magician, our CI system, will automatically generate any downstream changes, so it'll open a pull request in the Terraform provider on your behalf with those changes.
So what comes next, why did we actually do this, what do we get out of it? So this system allows us to focus more on getting new resources out the door, adding new features to our existing resources, and now that we have this templating system, we can also start expanding in other directions like generating matching data sources, perhaps, for the resources we have.
And it means that we can support these resources faster than we ever could before. You saw in the demo, it was a pretty fast process, and also code reviews will speed up because we only have to review the real meat of the changes, all of the extra boilerplate completely goes away.
And with the release of version 1.19, we released an entirely new provider: the google-beta provider. This is a superset of the Google provider, except that it also includes beta API features. Previously, we had these all together in the main provider. We would pick which API to use depending on which fields and which resources you used, and it worked, but it complicated some things like importing and it had a couple of other caveats.
And it relied on users to read the documentation and notice when they were using beta features, which could change and break at any time and generally are not quite as stable as the rest of the provider. So now that machines are writing the code, we can generate an entirely separate copy of the provider with more added to it, and we can keep both providers in sync. It lets us design resources to optimize for each version of the API, and it allows users to explicitly make the tradeoff between risk of change and access to new features.
This also opens the door for potential for alpha and early access program builds. These builds are harder than beta builds because alpha features sometimes have a whitelist, there are some extra checks we have to do, we wouldn't be able to support import for those features, and EAP adds even on top of that an extra layer of complexity because the features aren't public yet so we can't check that code into GitHub.
So in the past, we've been kind of reluctant to add alpha and EAP features because of this. We know APIs could change underneath us also, they could potentially have merge conflicts if we were keeping some code locally and only putting it in the provider later. With Magic Modules though, because it's so compartmentalized, we're starting to take a look at how we could add support for these features… Paddy wants me to say that we can't make any promises, though.
Paddy: We can't make any promises. We're still investigating.
Dana: We also have a brand new feature that I can promise 'cause it's out. We will be able to run the examples that are in the documentation with one click, straight from cloud shell. So you can see it in the documentation today from any of our resources. We'll be adding more and more in the future as we start generating more and more resources with Magic Modules. So if you click that button, it'll run the exact example that you see in the documentation.
And if you've been paying attention to our GitHub repo over the last few weeks, you've probably noticed that there's been a lot of activity. If you're using 1.19, you might notice some new deprecations, and so we're excited to share today that we plan to release version 2.0.0 of the provider with the 0.12 beta1 release of Terraform core. This release is gonna include a lot of the fixes and consistency improvements that we talked about earlier in this talk.
This is just the start of what using Magic Modules enables us to do, and we are super excited to be on the cutting edge of provider development.
Paddy: Of course, these are just our thoughts and our ideas, and a lot of them came out of meeting you and talking to you and talking to our users and hearing what you're trying to do with Terraform that you can't do today. So if you have other things that you're trying to do with the Google provider with Terraform that you can't do today or that you wish you could do but are struggling with, we definitely wanna hear about them.
That's a link to our issue tracker for the code repo. Please open an issue, talk to us. We love hearing about it. It helps shape and guide our future decisions around changes like Magic Modules and things like that. And helps us kind of explore how we might be able to give you a better user experience in the future. So please never hesitate to open an issue, we love talking to users.
And thank you so much. That's our talk, that's all we have. I think Dana... I think you had a few more details for where these folks can find more information about this?
Dana: Yeah, so if you come to the Google Cloud booth, you can talk to us if you have any questions about any of this. We have some code labs where you can try out the provider if you haven't gotten a chance to do so yet. And we're also giving out free GCP credits, so make sure you take a flyer; it has a QR code that you can scan for that link.
If you want more information, you can also go to any of these websites. The first is the Google Cloud docs, the next one is the GCP Slack, this is an unofficial Slack but it does have a Terraform channel where Paddy and I and some of the other maintainers like to hang out. There's also the Terraform docs and Paddy, do you wanna explain the last link?
Paddy: I do. So I've got two more things for you... well, I've got three more things for you. So the first one, that link I've been tweeting. I don't know if the rest of you have been following, but for the last two weeks or so I've tweeted a different song about friendship every day to promote this talk. That link is a nice YouTube playlist of all of our friendship songs in case you all wanna have friendship music for today.
My second thing, I know we've got the team here, does anybody that is a full-time maintainer for the Terraform Google provider wanna stand up or raise your hand real quick for me? Can we give a round of applause for these people? They work very hard. Awesome. And my very last thing, I've got 15 seconds left. I promise I will be done in that.
My very last thing, thank you all so much for coming today. It was great meeting you all and thank you for being a friend.