This demo session shows how to use the new Terraform AWS Cloud Control provider and explains when to use the new provider versus the existing AWS provider.
Good morning. Good afternoon, everyone. Hi, I’m Rahul Sharma, and I’m a senior product manager at AWS. I’m quite excited to be here at HashiConf to talk about a recently announced AWS service, AWS Cloud Control API.
Terraform and AWS collaborated closely on this integration, resulting in the Terraform AWS Cloud Control provider. I’m happy to walk you through how users can benefit from and get started with this provider in more detail in my presentation today. Without further ado, let’s get straight into it.
I will start the talk with background on Cloud Control API’s genesis. Second, I’ll introduce what Cloud Control API is and how it benefits users. Third, I’m going to introduce you to the new Terraform AWS Cloud Control provider, walk you through a demo with examples of how to use it, and conclude the presentation with resources to get you started.
I want to first take note of the sequence behind the product development process. At AWS, we work backward from our customers: we hear their feedback, identify a solution, and then build our products. Cloud Control API followed exactly the same sequence.
In the case of Cloud Control API, there are two types of customers we target. First, the developers or the builders who are the end-users of AWS — who build applications using various AWS services and features, manage those applications, and monitor them.
The second set of customers are the AWS Partner Network or the APN partners who build solutions on top of AWS and expose their offerings — be it infrastructure as code, configuration management, cloud security posture management, or other tools to our mutual customers so that they can benefit from it.
We identified three opportunities for Cloud Control API to solve for these customer personas. Let’s walk through each of them.
First, we heard from AWS customers that use APN partner solutions to build and manage the cloud infrastructure that they want to accelerate the pace of innovation and time to market for their applications.
For example, there are situations when there is a lag between a new AWS feature or service release and its support in an APN partner solution. A classic example: if Amazon Kendra resources are unsupported in a partner solution, then customers using that partner tool will need to wait for that support before they can start using Kendra resources to analyze unstructured data.
So, the open question for us was, can we help these customers adopt new AWS features and services closer to the day of launch? Can we help APN partners onboard new AWS features and services closer to the day of launch? That led us to identify the second opportunity.
What do we mean by that? As you’re aware, AWS continues to innovate on behalf of our customers by introducing new features and services to unlock new capabilities on the public cloud.
Today AWS supports over 200 fully featured services. In 2020 alone, we launched over 2,700 significant new features. What does this mean for our partners? Our partners want to stay in sync with AWS’s pace of innovation so that our mutual customers continue to benefit at the same pace. However, we heard from our partners that it can often take a few weeks, or even months, to support or integrate with the newest AWS capabilities.
Naturally, our question was can we automate supporting the latest AWS features and services on behalf of partners through a one-time integration? Can we have a unified interface that allows partners to integrate once and benefit from any future AWS innovation that gets rolled out?
In that case, we would have a system where partners don’t have to spend ongoing effort to integrate. At the same time, our customers benefit from getting support closer to the day of launch.
Finally, we recognized an opportunity to standardize the APIs used to interact with all these AWS features and services we just talked about. You may wonder why: as applications become increasingly complex and sophisticated, developers and builders tend to work across several AWS and third-party services using distinct, service-specific APIs.
While these APIs are descriptive and intuitive, some developers prefer a more consistent set of APIs to manage cloud resources across various services. For example, in this slide I have highlighted the experience of creating an Amazon Kinesis stream and the experience of creating a CloudWatch log group.
You notice that the API naming differs, and each API accepts its own unique input parameters for the operation to complete. Let’s see how this works in practice. In the case of Amazon Kinesis, if I need to create a data stream, I would use the CreateStream API call.
If I need to add a retention period or tags to the stream, I would call additional APIs such as AddTagsToStream or IncreaseStreamRetentionPeriod. Similarly, for the CloudWatch log group, I would use the CreateLogGroup API and PutRetentionPolicy to define retention policies.
As you notice, these APIs differ in names as well as input parameters, and that’s for just two service features. When we extend this across the breadth of AWS, it means there are a lot of APIs that developers have to use. While some developers prefer this, there is a set of users who prefer a more consistent approach.
Naturally, the question that we wanted to answer — the opportunity that we had — was how can we standardize the experience of having a consistent set of APIs to interact with AWS services and features. I also want to call out that one common use case we heard from developers was when they create resources outside the purview of infrastructure as code tools — for example, for the purposes of testing a new product or testing a new service.
When they create these, they create an inventory of resources that are managed outside infrastructure as code. While it works well during testing, these customers or developers also want a programmatic approach to identify such resources and then delete them. Today these customers or developers prefer a consistent set of APIs to programmatically list and identify these resources and delete them. These are some of the common use cases that we also felt could be unlocked if we are to solve for consistent APIs.
Let’s summarize the three opportunities that lay in front of us first. Can we support new AWS features and services in the form of resources closer to their launch? Second, can we help AWS partners automate their integration with new AWS capabilities? And finally, can we standardize APIs to interact with hundreds of AWS services — and third-party services as well. These three opportunities laid the foundation for Cloud Control API.
Cloud Control API is essentially a set of common application programming interfaces, or APIs, designed to make it easy for developers to manage their cloud infrastructure consistently and leverage the latest AWS capabilities faster, typically on the day of launch.
It introduces these consistent APIs for managing the end-to-end lifecycle of AWS resources. Hundreds of AWS resources are supported on Cloud Control API, which continues to add support for new AWS resources. When I say resource or resource type, I mean a resource type that has a set of properties and permissions that control API interactions with the underlying AWS or third-party service.
I’m showing these five APIs: `create`, `get` (for read operations), `update`, `delete`, and `list`. They can be used with any of the resources supported on Cloud Control API. If you look at this example on the slide, you’ll notice the example I used before for Amazon Kinesis streams and CloudWatch log groups, where we saw a different set of APIs just for creating and even reading the state of the resource.
Now, you see these get standardized. You use the create-resource API call with a unified set of input parameters to create a Kinesis stream, a CloudWatch log group, a Lambda function, an S3 bucket, you name it. As long as it is among the supported AWS or third-party resources, it gets created using that one API call.
Similarly, to read the state of those resources, you have the get-resource call. You pass in the identifier associated with that resource, and you get the state directly without having to use any distinct, service-specific APIs.
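As an illustration, the Kinesis example from earlier maps onto the AWS CLI’s Cloud Control commands like this (a sketch; the stream name and property values are placeholders chosen for illustration):

```
# Create a Kinesis stream through the unified create-resource call
aws cloudcontrol create-resource \
  --type-name AWS::Kinesis::Stream \
  --desired-state '{"Name": "my-stream", "ShardCount": 3, "RetentionPeriodHours": 168}'

# Read its state back through the unified get-resource call
aws cloudcontrol get-resource \
  --type-name AWS::Kinesis::Stream \
  --identifier my-stream
```

The same two verbs work unchanged for a CloudWatch log group or an S3 bucket; only the type name and desired-state properties differ.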
You may wonder: while these consistent APIs are great for developers from a simplification standpoint, how do they address the opportunities we identified earlier?
Cloud Control API uses the CloudFormation registry to expose resources built by AWS service teams and third parties that build their solutions. For new AWS services and features, these are available typically closer to the day of launch. So, you get fast access to new services and features through Cloud Control API.
Second, it offers a unified interface for a one-time integration to help partners keep up with AWS’s pace of innovation. Partners can now build a single API codebase using unified API verbs and common input parameters to integrate once with Cloud Control API, and therefore with any future AWS innovation that gets rolled out.
The Terraform team collaborated with AWS to integrate with AWS Cloud Control API — to expose the latest AWS resources closer to the day of launch through the Terraform AWS Cloud Control provider, which I’m going to talk you through in just a moment. That’s the benefit you’re getting from this unified interface.
This is designed to make it easy for developers to manage cloud infrastructure consistently. Whether it’s an ECS cluster, a Lambda function, an S3 bucket, any of hundreds of other AWS resources, or a dozen-plus third-party resources, you can use the same CRUD-plus-List APIs to manage them end to end.
While I briefly introduced the Terraform AWS Cloud Control provider, you may wonder: how does it all work? What is it? In the next few slides, I will show you visually what it means and how you can get started.
In this particular slide, you see a base layer of AWS resources represented by various services — be it Lambda, Kinesis, DynamoDB, S3, etc. Cloud Control API externalizes the resource provider layer built by AWS service teams.
Terraform is integrated with Cloud Control API using these unified, consistent APIs that I mentioned. Through this integration, a new AWS provider was created, currently in preview, called the Terraform AWS Cloud Control provider. The new provider is now available for developers to use for a variety of use cases, one of which I’m going to demo later in the presentation.
I do want to call out that the new provider is automatically generated, which means new features and services on AWS can be supported right away. You may wonder: how does this work?
Well — because AWS Cloud Control API provides an abstraction layer for resource providers to proxy through while interacting with AWS service APIs — Terraform is able to automatically generate the codebase for the AWS Cloud Control Terraform provider.
Generating the provider allows Terraform to support new AWS resources faster because it does not have to write boilerplate and standard resource implementations for each new service. The maintainers of the Terraform AWS Cloud Control provider can instead focus on user-experience upgrades and performance improvements.
First, there’s the aspect of resource generation. The Cloud Control provider is generated from the AWS CloudFormation schema. If you recall, earlier in the presentation I talked about Cloud Control API being built on top of the CloudFormation registry model.
In this case, the provider is generated from the CloudFormation schema, which means there is no manual work required to add new features or services to the provider. Users can expect to use these services within days of AWS’s launch rather than potentially waiting for community support to prioritize that feature for inclusion.
I’m going to show this in my demo as well, but you can get started by configuring the provider in your configuration file using the block shown here. In the terraform block, you declare the Cloud Control provider as hashicorp/awscc.
In the provider block, you also configure the Cloud Control provider for a specific region. For authentication, you will need to authenticate your AWS account with the Cloud Control provider. You can use any of the authentication methods available in the AWS SDKs, including environment variables, shared credentials files, AWS CodeBuild, Amazon ECS and EKS roles, and custom user agents, among others. The only thing you need to keep in mind is that you should be using Terraform version 1.0.5 or later. Without further ado, let’s get into the demo.
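The provider configuration described above can be sketched like this (a minimal example; the region and version constraint are assumptions for illustration):

```hcl
terraform {
  required_providers {
    awscc = {
      source = "hashicorp/awscc"
    }
  }
  # The Cloud Control provider requires Terraform 1.0.5 or later
  required_version = ">= 1.0.5"
}

# Configure the Cloud Control provider for a specific region;
# credentials come from the usual AWS SDK sources (environment
# variables, shared credentials files, roles, and so on)
provider "awscc" {
  region = "us-west-2"
}
```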
While I introduced you all to how the Terraform AWS Cloud Control provider works and how we can get started, I’m now excited to demo how to create resources using the Cloud Control provider.
As part of this demo, I’m going to showcase two examples. First, how to create a resource using just the AWS Cloud Control provider. Second, how to create resources using resource types supported in the Cloud Control provider and supplement them with resource types available in the existing AWS provider.
Let’s get started with example number one. In my text editor, as you see, you can view the Terraform configuration file. You have the required provider set to AWS Cloud Control, and you have the provider block configured with AWS Cloud Control in the us-west-2 region. In this specific example, I’m going to create an Amazon Kinesis stream as part of a data streaming application.
As part of this Kinesis stream, I first define the name of the stream. Then I configure the shard count associated with it, and then the retention period hours. What we mean by all of this is that an Amazon Kinesis stream consists of a number of shards, each shard has a fixed capacity, and the sum of all shard capacities gives the total capacity of the Kinesis stream.
In this case, I define the shard count to be three, and the retention period hours defines the number of hours that your Kinesis data stream records are retained for. In this case, I’ve specified that to be 168 hours, or seven days.
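The resource described above can be sketched as follows (a minimal example; the resource label and stream name are mine for illustration):

```hcl
# Kinesis stream managed through the Cloud Control provider
resource "awscc_kinesis_stream" "demo" {
  name                   = "tf-awscc-kinesis-stream"
  shard_count            = 3    # three shards; total capacity is the sum of shard capacities
  retention_period_hours = 168  # retain records for seven days
}
```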
Now that we’ve gone through the configuration file, I’m excited to show how this translates to the terminal. We begin by pointing to this configuration file in my directory, initializing Terraform, validating the configuration, applying it, and then reading the state of the resource to inspect it.
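That sequence of steps looks like this at the command line (a sketch; the file name is a placeholder for the demo file):

```
touch tf-awscc-demo.tf   # create the configuration file
terraform init           # install the required providers
terraform validate       # check that the configuration is syntactically valid
terraform apply          # create the Kinesis stream
terraform show           # read back and inspect the resource state
```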
In this step, I am pointing to the Terraform configuration file. I have named it the TF AWS CC demo, and I’m going to create it with a touch command in my terminal. Once that’s done, I’m going to initialize Terraform using the `terraform init` command, which installs the necessary providers. Let’s get started with that. As you see, it’s installing, and Terraform has successfully initialized the provider.
Next, I’m going to check whether this configuration file is syntactically valid using the `terraform validate` command. Bingo. You see, this is successful: the configuration is valid. Now, since the configuration is valid, I will apply it to create the Kinesis stream resource.
I’m going to use the `terraform apply` command now. That draws up an execution plan for me. You can see what’s going to happen: after the resource gets created, the Amazon Resource Name, or ARN, is created and an identifier is provided. The name is in the configuration file: Terraform AWS CC Kinesis stream.
With the plan shown, I’m going to accept it and type yes. While this is happening, your Kinesis stream resource is getting created. You can see that the Kinesis stream, my stream, is being created. We’ll give it a few seconds to be created successfully. And yes, bingo, you have the `apply complete` message, which means this resource was created in the region and account where I intended it to be.
Now, to read the state and inspect whether this resource is configured with the properties I defined, I’m going to use the `terraform show` command. As you notice here, the ARN associated with this resource was successfully created, and the name, retention period, and shard count are set as well.
With this, I’m going to conclude the first demo, and roll over to the second demo, where I’m going to highlight how to create resources using the Terraform AWS Cloud Control provider and supplement that with resource types available in the existing Terraform AWS provider.
I’m going to use two providers in my terraform block. As we continue to add more resources to the AWS Cloud Control provider, there will be situations where you want to supplement one provider with the other. The latest AWS resources, available in the Cloud Control provider, are one classic example; you can supplement those with the Terraform AWS provider.
There’s a great blog post by HashiCorp on the announcement, which walks through configuring an S3 bucket using the existing provider and an Amazon AppFlow resource using the new Cloud Control provider.
In this specific demo, I’m going to create an Amazon S3 bucket with the Terraform AWS Cloud Control provider and manage the S3 bucket level public access block configuration using the Terraform AWS provider. In this configuration file, you’ll notice two required providers.
As I’m pointing out, you have the AWS CC, or Cloud Control, provider, and you have the existing Terraform AWS provider, both configured for the us-west-2 region. In this specific case, I am going to use the AWS Cloud Control provider to create an S3 bucket with the bucket name highlighted here: Rahul’s demo for TF AWS CC S3. I’m going to use the existing Terraform AWS provider, which has a resource type called aws_s3_bucket_public_access_block. I’m going to configure that resource so it blocks public ACLs, or Access Control Lists, and blocks public policy, setting both of them to true.
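The dual-provider setup described above can be sketched like this (a minimal example; the resource labels and bucket name are placeholders I chose, and both providers are assumed to be declared and configured for the same region):

```hcl
# S3 bucket created through the Cloud Control provider
resource "awscc_s3_bucket" "demo" {
  bucket_name = "tf-awscc-s3-bucket-demo"
}

# Public access block managed through the existing AWS provider,
# referencing the bucket created above so Terraform orders the
# operations correctly
resource "aws_s3_bucket_public_access_block" "demo" {
  bucket              = awscc_s3_bucket.demo.bucket_name
  block_public_acls   = true
  block_public_policy = true
}
```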
That’s my configuration file setup in a nutshell. I’m going to move over to my terminal now and run the same sequence as in my first demo: pointing to the Terraform configuration file in my directory, initializing Terraform, validating the configuration, applying it, and then running `terraform show` to read and inspect the state of those resources.
Let’s get started. In my terminal, let’s start by pointing to the directory which has the demo file, in this case demo #2. I’m going to run `terraform init` to initialize. This time you will see two providers getting installed. Once I hit enter, you’ll see that take place.
You can see the AWS provider version being matched and installed, and similarly the Cloud Control provider being installed. Terraform has now been successfully initialized. I’m now going to validate my configuration files.
I’m going to use the `terraform validate` command, and I’ll wait for its output. Bingo, success: the configuration is valid. Now that the configuration is valid, I’m going to apply it using the `terraform apply` command, which I’m going to type into my terminal right now.
I see an execution plan being mapped out. You can see what’s going to happen for the S3 bucket public access block using the existing AWS provider: the block public ACLs and block public policy settings will be set to true as configured. It depends on the S3 bucket resource being created, which is being created by the new Cloud Control provider.
Then you get an ARN after the bucket is created, along with the domain name, object lock settings, and so on; what I supplied is essentially the bucket name, which is TF AWS CC S3 bucket Rahul demo. Very creative bucket name! I’m good with the execution plan, so I’m going to accept it, type yes, and wait for the two resources to be created successfully. We’ll wait here for a few seconds.
You see that the S3 bucket is getting created, and done. Your `terraform apply` was successful, and the two resources have been created. I’d like to read the state of the resources and see how they got configured compared to the execution plan, so I’m going to use the `terraform show` command.
And you notice that it turned out just the way the execution plan mapped out. You have block public ACLs and block public policy set to true, you have the bucket name I supplied, and that resulted in a unique ARN for this S3 bucket as well as a website URL. With this, I conclude the second demo.
I can’t wait to get you all started on using and managing AWS resources using the Terraform AWS Cloud Control provider. To do that, I am going to flip over to my presentation and introduce to you a few resources to get you started.
To learn more about AWS Cloud Control API, visit the product page. To get started managing AWS resources using the Terraform AWS Cloud Control provider, please refer to the blog post; it’s a great read that walks through configuring the AWS Cloud Control provider and has a few examples to get you started.
With that, I thank you all for giving me the opportunity to present Cloud Control API, walk you through its genesis and how it solves user problems, and talk about the collaboration between Terraform and AWS to get the Terraform AWS Cloud Control provider ready. We can’t wait to see you start building cloud infrastructure using this new provider. Thank you, everyone.