Learn how platform and developer teams can collaborate effectively using CDK for Terraform, Terraform Cloud, and Sentinel to safely deploy EKS workloads to AWS.
Oscar Medina, senior cloud architect at Amazon Web Services, joined us on a live stream recently to demo a collaborative workflow to enable self-service Amazon EKS deployments for developers, using CDK for Terraform. Check out the recording of the live demo, and read on for a tutorial of the workflow that platform and developer teams can use to collaborate on Kubernetes deployments with HashiCorp Terraform, Sentinel policies, CDK for Terraform, and EKS.
This demo takes advantage of several tools and platforms to craft a collaborative workflow for platform and developer teams to define and deploy Kubernetes infrastructure configurations:
During the live demo, Oscar shows how a platform team working with Terraform in HashiCorp Configuration Language (HCL) can enable a developer team working with CDK for Terraform in TypeScript to deploy EKS clusters to AWS, and use Sentinel policies to ensure that all deployed clusters meet platform requirements.
As mentioned, the platform team uses Terraform with HCL. They leverage Terraform Cloud to store and manage state, run jobs, and manage Sentinel policies, applying them to the appropriate workspaces. They also leverage the EKS Blueprints for Terraform to provision Kubernetes clusters.
The developer team has adopted CDKTF because they prefer using TypeScript to define infrastructure configurations. In this application, they use TypeScript to define a configuration that deploys three Kubernetes objects: a Deployment, a Service, and an Ingress.
In this scenario, the platform team wants to manage costs by preventing users from deploying services of the LoadBalancer type. So they have established Sentinel policies that only allow developer teams to deploy services of type NodePort or ClusterIP in EKS. In the demo walkthrough, you can see that when a developer tries to use an unapproved service type, it triggers a Sentinel policy error, alerting the developer and helping them resolve the issue before the change gets deployed. Using policy as code, platform teams can feel secure that their business and security rules around provisioning infrastructure will still be respected, even when developer teams are given more freedom and autonomy to deploy resources themselves.
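The demo's actual restrict-eks-service-type policy isn't reproduced here, but a minimal Sentinel sketch of this kind of rule, using the tfplan/v2 import, might look like the following (the resource type and attribute paths are assumptions based on the Terraform Kubernetes provider; the real policy may differ):

```sentinel
# Sketch: restrict Kubernetes service types to NodePort and ClusterIP.
# Assumes services are created with the kubernetes_service resource.
import "tfplan/v2" as tfplan

allowed_types = ["NodePort", "ClusterIP"]

# All kubernetes_service resources being created or updated in this plan
services = filter tfplan.resource_changes as _, rc {
	rc.type is "kubernetes_service" and
	(rc.change.actions contains "create" or
		rc.change.actions contains "update")
}

# Pass only if every service uses an allowed type
main = rule {
	all services as _, svc {
		svc.change.after.spec[0].type in allowed_types
	}
}
```

Note that a production policy would also need to handle the case where the type attribute is unset (Kubernetes defaults it to ClusterIP).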
The demo shows how leveraging all these tools together enables a workflow that platform and developer teams can use to collaborate on Kubernetes deployments and enable self-service deployments for developers, while ensuring that essential guardrails remain in place.
You can reference the code used for this demo in the repos below, and follow along as you watch the recording or follow the steps outlined in the next sections.
NOTE: Please clone the GitHub repositories; the instructions that follow assume you've done so.
For this scenario, we have two Terraform Cloud workspaces: one for the platform team and another for the developer team. Each workspace needs to be configured with Sentinel policies so that whenever a Terraform plan is executed, the policy check uses our custom Sentinel policies. First, let's create the two workspaces.
Each of the two workspaces is configured differently. Let’s go over the creation for the platform team workspace.
NOTE: Please ensure that under Workspace Settings, the Terraform Working Directory is set to examples/eks-cluster-with-argocd.
This workspace is configured with a different workflow: it does not use version control, and it uses the Remote execution mode.
This will be the RemoteBackend you configure in the developers-team-aws-eks repository you cloned (more on this later):
organization = "sharepointoscar"
name = "developers-team-aws-eks"
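In CDKTF TypeScript, that remote backend is declared inside the stack. A minimal sketch (the class name is illustrative; the organization and workspace values come from the configuration above) looks like this:

```typescript
import { Construct } from "constructs";
import { App, TerraformStack, RemoteBackend, NamedRemoteWorkspace } from "cdktf";

class DevelopersStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Point this stack's state and runs at the developer team's
    // Terraform Cloud workspace created earlier.
    new RemoteBackend(this, {
      organization: "sharepointoscar",
      workspaces: new NamedRemoteWorkspace("developers-team-aws-eks"),
    });
  }
}

const app = new App();
new DevelopersStack(app, "developers-team-aws-eks");
app.synth();
```

With this in place, cdktf deploy runs the plan and apply remotely in Terraform Cloud rather than on the developer's machine.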
Once you’ve created both workspaces, they should both be displayed in Terraform Cloud as shown here:
Now, let’s configure the developer team workspace to use an existing GitHub repository, which contains your EKS policies:
Your settings should look similar to those shown here:
You have now configured your Terraform Cloud instance to use a custom GitHub repository where your policies reside. Next, you need to configure AWS credentials for each of your workspaces so that you can properly connect to your AWS account.
Depending on your environment, you may or may not opt to use short-lived credentials. This example uses short-lived AWS credentials, and each workspace is configured accordingly before running any plans.
The process to configure credentials is the same for both workspaces:
NOTE: If you are not using short-lived credentials, skip step 9 above.
Once you’ve configured AWS credentials for your workspaces, it should look something like this:
Using the Terraform CLI and HCL, the platform team deploys EKS clusters to AWS so that they can be used by development teams. The team-platform-aws-eks repository contains many examples of deploying clusters. We will use the following example to show how to deploy your cluster.
In main.tf, change the backend to reflect your Terraform Cloud workspace (lines 21 to 25).
In main.tf, change the region you would like to use (line 45).
Review main.tf, lines 57 to 69.
Run terraform plan; this triggers a speculative plan that allows you to ensure the plan will succeed.
Whenever you execute a terraform plan or terraform apply, the logs are streamed to your terminal, but you can also click on the link to the Terraform Cloud active run for a more appealing visual, as shown here:
Once your cluster is created and your Terraform Cloud workspaces are configured with Sentinel policy sets, you are ready to deploy a sample workload.
As the platform team, you want to provide your developer teams with the cluster name so that they can deploy their workloads to it. The output of the cluster provisioning includes the cluster name, as shown below. For platform team members, Terraform Cloud makes it easy to copy and paste the value into your kubeconfig.
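For example, developer team members can point kubectl at the new cluster with the AWS CLI, substituting the cluster name from the Terraform output (the region shown is an assumption; use the one you deployed to):

```shell
# Add or refresh a kubeconfig entry for the newly provisioned cluster.
aws eks update-kubeconfig --region us-west-2 --name <cluster-name>

# Verify connectivity to the cluster
kubectl get nodes
```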
You are now ready to deploy a workload to the shared EKS cluster. For these steps you will use the developers-team-aws-eks GitHub repository.
The workload you are deploying is a basic static website. It uses the ALB Ingress Controller, SSL, and an FQDN, and it includes Deployment, Service, and Ingress Kubernetes objects, all defined in TypeScript.
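The demo code itself lives in the developers-team-aws-eks repository. As a rough sketch of how such objects are declared, a CDKTF stack might look like the following (the names, labels, image, and flat @cdktf/provider-kubernetes imports are assumptions tied to the CDKTF 0.x prebuilt provider, not the exact demo code):

```typescript
import { Construct } from "constructs";
import { App, TerraformStack } from "cdktf";
import { KubernetesProvider, Deployment, Service } from "@cdktf/provider-kubernetes";

class WorkloadStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Talks to the shared EKS cluster via the current kubeconfig context.
    new KubernetesProvider(this, "k8s", { configPath: "~/.kube/config" });

    const labels = { app: "skiapp" };

    new Deployment(this, "skiapp-deployment", {
      metadata: { name: "skiapp-deployment", labels },
      spec: {
        replicas: "2",
        selector: { matchLabels: labels },
        template: {
          metadata: { labels },
          spec: {
            container: [
              { name: "skiapp", image: "nginx:1.21", port: [{ containerPort: 80 }] },
            ],
          },
        },
      },
    });

    // NodePort satisfies the Sentinel policy; LoadBalancer would be rejected.
    new Service(this, "skiapp-service", {
      metadata: { name: "skiapp-service" },
      spec: {
        selector: labels,
        type: "NodePort",
        port: [{ port: 80, targetPort: "80" }],
      },
    });
  }
}

const app = new App();
new WorkloadStack(app, "developers-team-aws-eks");
app.synth();
```

The Ingress object follows the same pattern with the kubernetes_ingress resource; see the repository for the full configuration.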
Please be sure to have AWS credentials set up in the developer team's Terraform Cloud workspace before the next step.
The workload, as cloned, uses a different Terraform Cloud account and organization. You need to change this to reflect your environment, including the previously created EKS cluster name.
Edit the main.tf file (lines 19 to 26) and be sure to enter your organization and workspace name.
NOTE: If you choose not to use SSL certificates or a domain, remove the host entry from the ingress spec.
Now that you have made all the required changes, go ahead and execute cdktf synthesize to ensure the work compiles. In the root of the repository, execute cdktf synthesize as shown below. Your output should look similar.
➜ cdktf synthesize
Newer version of Terraform CDK is available [0.10.1] - Upgrade recommended
Generated Terraform code for the stacks: developers-team-aws-eks
On this first attempt, you will deploy the workload with the LoadBalancer service type, which is not allowed; the plan should fail and tell you what the problem is.
Modify the main.ts file: on line 104, change the service type from NodePort to LoadBalancer.
Go ahead and deploy the workload by executing cdktf deploy. You should see output like the following in the Terraform Cloud console.
Next, modify the main.ts file again: on line 104, change the service type from LoadBalancer back to NodePort, and run cdktf deploy once more. The Kubernetes resources are clearly shown; type yes to proceed. After a couple of minutes you should see output like this:
➜ cdktf deploy
Running plan in the remote backend. To view this run in a browser, visit:
Deploying Stack: developers-team-aws-eks
✔ KUBERNETES_DEPLOYMENT skiapp-deployment kubernetes_deployment.skiapp-deployment
✔ KUBERNETES_INGRESS skiapp-ingress kubernetes_ingress.skiapp-ingress
✔ KUBERNETES_SERVICE skiapp-service kubernetes_service.skiapp-service
Summary: 3 created, 0 updated, 0 destroyed.
You can see below that the Terraform Cloud console shows the policy restrict-eks-service-type passed!
Check out the links below to learn more about the tools used in this demo.
EKS Blueprints are a great way to get started with EKS. This framework aims to accelerate the delivery of a batteries-included, multi-tenant container platform on top of Amazon EKS. You can use this framework to implement the foundational structure of an EKS Blueprint according to AWS best practices and recommendations.