Case Study

Using Terraform Enterprise and Chef to enable continuous deployment at Barclays

When manual processes, driven by financial-services regulation, were slowing developers down too much, Barclays built a hybrid cloud platform and adopted Terraform Enterprise and Chef to bring back that lost agility.

Barclays exists in a heavily regulated industry and as a result manual processes have crept in over time to reduce risk. This has slowed innovation and tied the hands of developers. Over the past 3 years Barclays has been on a DevOps and Lean transition which has seen a lot of the manual processes streamlined to better enable their application teams. Developers are performing continuous integration processes but deployment of their applications is still largely manual.

With the introduction of a new hybrid cloud platform they’re better equipped to help their developers drive automated change. Through the use of tools such as Terraform Enterprise and Chef they’re enabling their application teams to rapidly deliver that change to customers.

In this presentation, Barclays product owner Brian Simpson-Adkins walks through some of the challenges his company faced due to financial regulation, some of the limitations they imposed on themselves in order to reduce risk, and how they are changing technology at Barclays to address these challenges and better enable developers.

Speakers

  • Brian Simpson-Adkins
    IaC Product Owner, Barclays

I'm the product owner for infrastructure as code at Barclays, and today I'm going to be taking you through how Barclays is automating their way to a more flexible future through the adoption of continuous deployment. I'm going to take you through some of the drivers for change, the tools that we've developed within that space, and some of the lessons that we've learned along the way.

Who are we? Barclays is one of the UK's largest banks, with 325 years of history. We employ over 120,000 people in 40 countries. We have, currently, 15,000 developers, all operating in an agile manner. We have 80,000 servers across 3,000 applications, and currently, 500 of those applications are sitting within our hybrid cloud.

If I was to say to you, "What does Barclays do or what does a financial services organization do?" you'd probably say something like: move, lend, send, or receive money. You probably wouldn't say: develop software. We've only really started thinking of ourselves as a software house for probably the last 2 to 3 years. But if you look at the snapshot of history, some of our key activities for the last 60 years, even if we go back all the way to 1966 with the UK's first credit card, can you imagine the amount of software that it took to reconcile all those transactions so you could use it in a supermarket checkout? Or the world's first cash machine? That would have had a UI to it. It might have been basic, but it would have had a UI that you could have interacted with.

Then we fast-forward to 2007, when our industry went through a fundamental change as we moved to a more online, digital, and mobile presence. That coincided with the release of the first smartphone back in 2007. It saw us competing in a space where more nimble and agile development teams operate, such as the marketplaces for Android and Apple, and competing for our value-added services. There's also a directive coming down the line, the PSD2 directive, that is going to see us open up our data via API to third-party vendors, which puts us even more into competition with these small development teams. For instance, if you want someone to crunch your bank account history to see where you've been shopping and how you've been spending your money, that's all stuff we add on top of your current account today that we're going to have to compete for. If we're going to compete in that space, we've got to get more nimble and agile ourselves. We recognized we needed to change.

Barclays operates in a very regulated industry, right? According to Wikipedia, there are 243 regulatory bodies across 139 countries. I've pulled out some of the big ones on screen that have an impact on us in infrastructure and applications: the FCA and the PRA in the UK, the Fed in the US, MAS in Singapore. Each one of these has very tight regulation around the installation and operation of IT services. To make sure that we're meeting that regulation and reducing risk for the organization, a lot of manual process creeps in over time. That stops us competing with those smaller development teams.

What stands in the way of us creating awesome content for our customers? At Barclays, the infrastructure and developer were very disconnected. It could take 6 to 8 weeks to get a physical or virtual server, and even when you got it, you might not have access to it. You might need to go through configuration-management tools, or whatever that looks like, or ask your colleagues. Solutions had to conform to really tight patterns. Believe it or not, we did that to speed up our delivery to production, but what it stops us doing is innovating. If you want to do something new, you end up on the edge cases of the overall pattern, and then you can be tied up in manual process, getting authorization from security and so on. Finally, in order to get that physical or virtual server, you need lots of teams involved: the storage team, the build team, the application team. And all those teams have SLAs associated with them that slow us down.

Barclays has been on a DevOps and lean transition now for the last 3 years. It's led to some of the following improvements:

  • All of our code is now stored in VCS repositories, so we can say who did what, where, and when, and what the potential downstream impact is.

  • We've introduced a hybrid cloud, and that allows us not only to take advantage of the flexibility of the cloud APIs, but also to remove a lot of the capacity concerns that we used to have with the older estate. We can now make use of the extended capacity of the cloud providers.

  • We've introduced a sandpit area. In order to reduce risk to the bank and make sure we're adhering to regulations, we don't really give out administrative rights to our laptops or desktops or servers. Where does a developer play with software? How do they know whether they have to build something rather than just buy something, or potentially just use a plugin off the web? We've created a sandpit area that's fenced off from our core network so that developers can download stuff, have a play with it. If they like it, then they go through the process to get it into the bank.

  • We've introduced DevOps tooling. We're a large Chef shop, and the solution I'm going to show you is largely built around that, along with Terraform, to make best use of our licensing.

  • We've also introduced centralized Jenkins. We've got the VCS repository and anything you need to build out CI/CD pipelines. Then we've gone about educating our teams on how to build those pipelines, do unit testing, do integration testing, and so on.

  • We've also worked very hard to reduce the blame culture between Dev and Ops. The operations teams have had a lot of training around code-based activities to help them understand things like Terraform templates and the actual code that application teams are writing, so that when they're pushing a change to production they understand exactly what it is they're doing. Then they can wind it back if they need to.

  • Finally, we've got developers teaching developers. We've got a lot of communities of practice across Barclays, where developers demo to each other what they're working on, and that leads to a lot of code reuse as well. If one developer happens to like what they're seeing, they'll pick up the phone, and we see it transition into their app.

It's not a perfect world, and we've still got some challenges that exist, as you'll see when I walk through the solution. One of the key ones is, we have a pause just before production, and that's where we do our pen testing and our compliance reviews. We're looking to automate that with tools like InSpec and Metasploit, but we're not quite there yet. We're actively working on these. We want to get to the point where we do automatic rollback. We don't want to be in the position of forward-fixing all the time, we want to be in a position where we just roll back to a good, known, common base.

The transformation plan

The introduction of the hybrid cloud came with a lot of flexibility, but it also came with a lot of challenges. The APIs are so flexible that we saw a large portion of our developers using them in different ways. That just doesn't scale when you've got 15,000 developers and 3,000 app teams. You can imagine having to rebase people's code because something's not working correctly. Even 10% of them could tie up a full day for a support team, so we had to do something about it.

The CTO function that I'm a part of, and the IaC team that I'm responsible for, were tasked with coming up with a common method of deploying application infrastructure change to that hybrid cloud, and then selling it to our developers. That was based around three solutions.

Test

The first one is test: Give our developers a centralized way of testing application infrastructure code so that they can build immutable test environments they can perform integration testing against, and strip down afterwards so we've got no technical debt associated with testing.

Promotion

The next one is promotion: Give our developers a common way of promoting their application infrastructure change, where the infrastructure, security, and compliance teams can do their scraping for malicious activity in a set of pipelines, so we can remove that manual process.

Deploy

Last, deploy: Give them a common way of deploying into the hybrid cloud. That led to the Continuous Deployment Toolkit. There's a solution for each one of these principles: Test, promote, and deploy. Before I can do that, I need to take you through how we do Chef at the bank, because this is how we create our immutable application configuration.

When you're creating these promotion pipelines, it's very difficult to do pattern matching on stuff if everybody's using it in a non-common fashion. So the first thing I did was, I set about creating a set of building blocks and a set of patterns around usage of Chef, and then wrapped an education piece around it to teach people how to use Chef at Barclays.

The first one is the library repo. That is your architectural building blocks. It's technically supported by an infrastructure or a middleware team. It contains a library cookbook, and the library cookbook is the only thing that's stored inside our Chef supermarket. It has nice readmes around it that show all the imports and how developers should use it.

Then we've got the application repo, and that contains everything that the application team needs to build out their application in production. It's contained in 1 place; it's promoted as 1. That contains 3 cookbooks. The wrapper cookbook is for overriding attributes inside a library. Think of Apache, and maybe making Apache listen on port 81 instead of 80. The application cookbook is what it says on the tin: The application teams use it to install the application. The role cookbook is about wrapping all of those together to create this immutable stack. Each one of these has a version in its metadata file. What the role cookbook's saying is, "I want version 1.1 of the application, 1.1 of the wrapper, and 1.1 of the library." That's the dependency structure. The role cookbook depends on the application, the application on the wrapper, the wrapper on the library.

If we were to version this out, because the middleware team have just come out with a new Apache cookbook, we would version out the entire stack, because the bottom piece has changed and the dependency chain rolls up like that. But if we were just turning a red button into a blue button inside the application, pulling some new files down from Nexus, we'd just version from the application upwards. That's how we're creating that full, consistent application stack for the developer.

Inside an application repo for a typical 3-tier architecture, I'd expect to see three role cookbooks, with a multitude of applications and wrappers inside them. Each one of them would have a version associated with it. That's what we call the app stack. It's everything you need to stand up your application.

The first solution is test

That's with Test Kitchen. As mentioned earlier, Barclays doesn't give developers administrative or root access to their laptops, which means they can't install the ChefDK or a hypervisor such as VirtualBox. Instead, we've built out a solution in our data centers that's highly available, production supported, and so on. That consists of Jenkins wrapped around the ChefDK, and then a couple of hypervisors with an API on the frontend so that developers can interact with it programmatically.

Developers would be working on an application repo at this stage. They create this application repo from a frontend portal called the Repository Management Portal. That allows them to create InSpec repositories, library cookbook repositories, application repositories. If they create the application repository, it uses a code generator to flesh out the structure of the cookbook underneath it so they get the structure for the role cookbook, the application cookbook, and so on, so they don't have to worry about their kitchen.yml files and all that good stuff that comes with it. All the complexities are taken care of. The application developer can focus on sticking code inside cookbooks.

They're working against a branch. When they've got to the point where they're happy with the codebase, they call out to a script, supplying two parameters. The first is a set of credentials, so we know who's standing up the instances and can track it in the logs. The second is the branch that they're working off, so we know what they want to test. That executes against the API, and it creates that 3-tier stack for them, so they can perform some integration testing against it. It'll give them output in a near-native format that says whether the full cookbook converged correctly, or whether they have to look at it. If they have to look at it, they can go back to the instance, log onto it, find out what the problem is, update their branch, commit the code, and then run a re-converge the exact same way they would if it was on their local desktop.

Now we move into the promote phase

This is a set of pipelines that protects our cookbook promotion. This is what does all the scraping for malicious activity, and so on. The application team is working against the repo, they're happy with the code, and they want to merge it into the master, so they do a pull request against it. We've got two pairs of eyes on every piece of code at Barclays. Somebody approves it and the merge happens. There's a webhook that's on that master branch.

The first phase of the pipeline is audit, and inside that there are two phases. The first is Foodcritic, and I class that as malicious-activity monitoring. It's checking to see whether anybody is manipulating something they shouldn't. There are about 40 controls inside that currently. The second piece is RuboCop; that's lint checking to make sure the syntax is correct. If you fail any of them for any reason, the pipeline will exit; your code doesn't get onto the Chef server.

The next one is compliance

This is really exciting for us, because it has the potential to remove all that manual process associated with audit each year. You know, all the spreadsheets and PDF documents you all create. This is using InSpec. What we've done is, we've built an API in there. Because the role cookbook in the application repo is the thing going through the pipelines, each one of the library cookbooks has an InSpec profile associated with it. Think of the controls around Apache; there are about 58 of those currently inside the Apache cookbook. If the application repo going through the pipeline doesn't include the tests for the library cookbooks inside the tests for its role cookbook, the pipeline will exit and tell the developer that they have to go back and include that specific set of tests, the ones that are in their dependency stack. Assuming they do, the developer changes that, resubmits it to the pipeline, and the 58 controls are all assessed. If any of them fail, for whatever reason, the pipeline will exit again.

What we're trying to do there is drag that failure as close to the developer as possible. What typically used to happen was, developers don't read security standards. We'd get to UAT (user acceptance testing), and that's where we have our pen test (penetration test), and so on. Then the developer gets told they've breached some standards, so they have to go back, rebase the code, and resubmit it, and they miss the pen-test window. It costs Barclays money. Bringing that failure as close as we can to the development stage means that we avoid all of that unnecessary extra cost.

Assuming that all passed, we go to the upload phase. That sticks the cookbook structure on the Chef server. There's some security that makes sure that can't be put into play until the application teams call out for it.

Then we move on to the promote phase

It basically says, "I want this version of the cookbook, 'web' in this case, pinned in Dev1." And you can see it's gone in there. That's how we create that immutable stack. Any instance that's spun up, we put it in a particular Chef environment. Because the version of the cookbook is pinned inside the environment file for Chef, it means that the server gets that version of the cookbook. The application teams can promote that whenever they're ready. They could have a new version of the cookbook all ready to go, and they just call out when they're ready to put that code into play.

The last piece of it is Terraform. This is really what brings that application and infrastructure configuration together for us. It's based on Terraform open source at the moment, and we're working to migrate it to Terraform Enterprise. It's currently in the lab, and we're actively working on it. We're hoping to have that completed by the end of the year; we're just waiting on some features coming into the PTFE instance in maybe a month or two. The application team would be working against the Terraform repository that they've created from the frontend. For the open-source version, we've wrapped the binary inside Jenkins so it can do some orchestration. We've got Consul as a remote backend, and we've strapped an API onto the frontend so that developers can use this within CI/CD pipelines.
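
To make that open-source setup concrete, here's a minimal sketch of what a Consul remote-state backend looks like in a Terraform configuration. The address and state path are placeholders, not Barclays' actual values.

```hcl
# Minimal sketch of a Consul remote-state backend, as used with the
# open-source Terraform binary. Address and path are hypothetical.
terraform {
  backend "consul" {
    address = "consul.example.internal:8500" # assumed internal Consul endpoint
    path    = "terraform/state/web-app"      # assumed per-application state path
  }
}
```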

The development team are working against that Terraform repository, they've potentially pulled in some modules from our module supermarket, which is the module registry you saw called out at the keynote yesterday, and they execute it against the API. When they execute, they supply four parameters. The first is the AWS credentials. We use the STS service from Amazon; the combination of session token, access key, and secret key dictates which account you actually go into, non-prod or prod. We also supply the repository, so we know which repository your role cookbooks are contained within. Then we've got the two runtime environment variables: the Chef environment, which in this case would be Dev1, and the role cookbook, which would be web. We use the bootstrapping process of the instance to tell the Chef server which environment it's going into and which role cookbook is going onto the server. We use user data to manipulate this process so that we can feed it in at instantiation time. That results in the application stack that we were looking at before. We end up with a web, an app, and a data tier, at the versions described inside the app stack, because we've put it inside Dev1, and inside the user data we've described which cookbooks are going onto which instances.
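
As an illustration of that bootstrapping approach, an instance definition might feed the two runtime variables in through user data along these lines. This is a hypothetical sketch, not Barclays' actual code; the variable names and the bootstrap script path are invented.

```hcl
variable "ami_id" {}

variable "chef_environment" {
  description = "Chef environment to pin the node into, e.g. Dev1"
}

variable "role_cookbook" {
  description = "Role cookbook to converge on first boot, e.g. web"
}

resource "aws_instance" "app" {
  ami           = "${var.ami_id}"
  instance_type = "t2.medium"

  # A bootstrap script (assumed to be baked into the image) registers
  # the node with the Chef server, placing it in the given environment
  # with the role cookbook at the head of its run list.
  user_data = <<EOF
#!/bin/bash
/opt/chef/bootstrap.sh \
  --environment "${var.chef_environment}" \
  --run-list "recipe[${var.role_cookbook}]"
EOF
}
```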

How do the application teams use this in their CI/CD pipelines? This is a bunch of APIs that they call out to. They're responsible for creating their own CI/CD processes around that, right? First, they create a template or configuration. They make sure that it's either merged to the master branch or has been through the pipelines and is up on the Chef server. In this case, we've got one role cookbook, web, at version 1.1.

Over to the application pipelines. The first thing they'll do when they kick off is call out to that promote API that you saw before, and it will say, "I want web 1.1 pinned into Test1." Then they call out and create the infrastructure, and they pass the AWS credentials, the repo for the Terraform configuration, and the two runtime variables: we want it to go into Test1, and we want the role cookbook on these instances to be web. We get two instances combined with an auto-scaling group, because we've used the launch configuration in the auto-scaling group to pass that role cookbook and the environment data through to the bootstrapping process.

Then the application teams would perform some integration testing against it. This could be anything. We typically see, at the test phase, some very basic tests, like just pinging a website. Then they'll get more and more stringent as they go through to UAT. At UAT, we might see a point where we break out into AppDynamics and start doing some real-time performance testing against the app, and the feedback from that will trigger the pipeline again.

After that, we call out and we destroy the infrastructure. We've created that immutable infrastructure stack, we perform the integration tests against it, and then we destroyed it afterwards. Then we go again. We tend to do this 3 times, depending on what your category of tier is. If you're Cat0, you might see more tests than this, there might actually be 5 stages of non-production testing.

It depends on what we're doing and on the application. Let's assume we've got 3. We do the integration testing again, and each time it gets more and more stringent. Just before production, when we're raising the change control, we'll package all of this up and put it in as evidence for the test closure report. We destroy the infrastructure again, and then we go again in UAT.

Each time, you can see that we're promoting the role cookbook into a different environment, and we'd be changing the environment variable associated with those instances. It would have been going into Dev1, then Test1, and then UAT1.

And then we get to production. This is where we have our manual review; this is our manual change control at the moment. Somebody would go and kick off that pipeline. It will probably always be separated into two teams, because we have a "developer access to production" rule in regulation, which means that a developer can't build a backdoor into a particular product and then push it out on their own, because that'd be really bad.

We kick off the promotion. This time, when we call out, we say Prod1, but we change the AWS credentials. Now it's pointing to our production AWS account. Then we perform the integration test. You'll notice there's no destroy-infrastructure step on the end of this part of the pipeline. That's because we use the declarative nature of Terraform to change the end state in production rather than stripping it back down, so we can do non-disruptive change.

What would this look like if we were doing blue-green? What you just saw there was writing to AWS for the first time. Now suppose we're going for a blue-green deployment and we want to do non-disruptive change. We've changed the application cookbook; we're changing a blue button to a red button on the website. And we've updated the role cookbook, because the application cookbook has changed; it's pulling down new files now from Nexus.

We call out, we start the pipeline, and get our promotion process that we saw before, where it pins 1.2 into Test, Dev, UAT. We would be creating all of the infrastructure inside of our non-production accounts and doing the application testing and integration testing across it. If it fails, for whatever reason, at any of these points, the pipeline fails as well.

Then we get to production. The difference here is, you can see that the infrastructure is already running in Prod1 at version 1.1, because we've not stripped it down. When we call out with the configuration this time, when we supply the runtime variables, we change it to Prod2 so the instances go into a separate Chef environment. That Chef environment is going to have 1.2 inside it. When the instances get spun up, they'll get version 1.2 of the code.

Now we've got these instances, and we want version 1.2 of the code. On the auto-scaling group, we use some uniqueness on the end of the name. That forces Terraform to re-create that auto-scaling group each time we run the configuration. We also use two parameters, wait_for_elb_capacity and create_before_destroy. Those ensure that the new auto-scaling group is all nicely warmed up and tested before we cut over to it. Then we cut over to the new instances, destroy the old auto-scaling group, and we're now running in production with version 1.2 of the code.
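
Here's a hedged sketch of that cutover pattern. The resource names, sizes, and variables are illustrative, but wait_for_elb_capacity and the create_before_destroy lifecycle flag are the real Terraform parameters being described.

```hcl
variable "ami_id" {}
variable "bootstrap_user_data" {}
variable "elb_name" {}

variable "subnet_ids" {
  type = "list"
}

resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = "${var.ami_id}"
  instance_type = "t2.medium"

  # Carries the Chef environment and role cookbook to the bootstrap
  # process, as described above.
  user_data = "${var.bootstrap_user_data}"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  # Uniqueness on the end of the name: a new launch configuration
  # produces a new ASG name, which forces Terraform to re-create the
  # group rather than update it in place.
  name                 = "web-${aws_launch_configuration.web.name}"
  launch_configuration = "${aws_launch_configuration.web.name}"
  vpc_zone_identifier  = ["${var.subnet_ids}"]
  load_balancers       = ["${var.elb_name}"]
  min_size             = 2
  max_size             = 2

  # Wait until the replacement instances are registered and healthy
  # behind the ELB before reporting success.
  wait_for_elb_capacity = 2

  lifecycle {
    # Bring the new group up, warmed and tested, before the old one
    # is destroyed.
    create_before_destroy = true
  }
}
```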

The module supermarket, or module registry, that was talked about at the keynote ... I call it the module supermarket internally to create those synergies with the Chef supermarket. People already know these are the building blocks for our applications.

For us, it's going to be based on Bitbucket, and we're waiting for that integration to come into the PTFE installation within the next month or two, which will then see us go to production with this product. But we've already started building out all of our modules. Our AWS accounts are created and configured using an in-house-written tool.

We did that because Terraform hadn't reached critical mass by the time we went live with this. Now that we've got access to the Terraform Enterprise licenses, that's probably something we'll change later. But we do use Terraform for all of our application configuration items: the auto-scaling groups, launch configurations, and so on.

As part of the acceptance criteria for the stories for creating some of these technologies, we've made the module an output, essentially supported by the infrastructure-as-code team at the moment. Later on, it will be the infrastructure-as-code run team rather than the engineering team.

Modules will be versioned. In our internal setup at the moment, before we get the module registry, they're not versioned. If we want to version them out, we just change the code inside them, and on the next run everybody takes the change. We want to get to a position where our application developers choose when to move to the latest version, because we don't want to force change on them; that causes outages. You've probably seen from the keynote that there was a versioning box on the module registry, which means we can make use of that when it comes in.
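
For illustration, consumer-side version pinning against a private module registry looks something like this; the registry host, organization, module name, and inputs here are hypothetical.

```hcl
# Hypothetical consumer-side pinning against a private module registry.
module "web_tier" {
  source  = "ptfe.example.internal/barclays/web-tier/aws" # assumed registry path
  version = "~> 1.1" # stays on the 1.1.x series until the team opts to move

  # Inputs are illustrative.
  chef_environment = "Dev1"
  role_cookbook    = "web"
}
```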

Finally, we're working with HashiCorp to really work out a process of how we can build a community around this. How can we let people create modules of their own, and then promote them into this module registry? That's something that we're gonna be working on over the next few months.

Why did we choose Terraform Enterprise?

We had a great working version with the open-source product, and it still works great today. But number 1 for us is support. You've seen that Terraform is intrinsic to our delivery pipeline. If you've got a Category 0 application, such as online retail banking, and you can't get change out to it, and we've got an outage, then we need somebody on the phone straight away. We need support around this service.

The API

We've been working with HashiCorp to pass credentials inline. At the moment, most of the promotion happens based on a merge into the master branch inside the Enterprise product. We want to be able to pass credentials over an API. You saw some of that come in at the keynote yesterday with the workspaces that were announced.

Logging

Really important in a regulated industry, as it probably is for everybody sitting here. We need to know who did what and when. The integration with Bitbucket gives us that ability. We can say, "What did you change inside the configuration, and who promoted that configuration?" Retention depends on the application: for a Cat0 application, it might be 10 years. All the logs are kept in CloudWatch, and we can scrape them off to Splunk and keep them for whatever the retention period is.

The next one is UI

We didn't have a UI for the open-source version. We've talked a lot about building these APIs into CI/CD pipelines, but we're not naive enough to think that nobody will want to use this thing manually. The UI updates that we saw at the keynote yesterday are really important. You can see all of your information for a run stacked in one page now, and that's going to be really important for us as we implement this in production.

The next one is ACLs

Role-based access in a regulated organization is paramount. With the new workspace feature that was announced, we're able to say, "This particular team can do these particular operations against this workspace," and a workspace ties back to an AWS account for us. Now we've got that fine-grained control that we were missing in the open-source version.

Next, the policy engine

This is really important for enforcing that control. This is Sentinel, which you saw announced. When you look at the Chef pipelines, the reason we have all that control inside them is to make sure somebody's not doing something malicious, or shipping defective code. We want the same for our Terraform pipeline. Say I turn around to a regulator and tell them, "We're implementing ELB as a technology, and if teams use the module in the module registry, it'll make sure we stand up to our regulation around resilience, which says it must go into at least 2 or 3 availability zones." The first thing they're going to say to me is, "Can somebody write a configuration that puts it into one?" Being honest, I'm going to say, "Yeah, of course they can." Sentinel will stop them being able to do that. I can put a rule inside Sentinel that says, "If you're promoting an ELB configuration, it has to be in at least 2 availability zones, otherwise it can't get out to production." It means I'm moving into that proactive state again, moving failure close to the developer, rather than getting a remediation activity from the audit team, which would be, "Go and scrape all the ELBs, tell me which ones are in one availability zone, then run a remediation activity around it." It cuts down on waste.

Last, vendor-based rather than self-written

The open-source community's great, but a lot of the features that we're interested in are pushed into the Enterprise product, because it's Enterprise customers stacking them up against their backlogs. We've already seen features that we want to take advantage of that came from other organizations. We might not have seen them if it had been the open-source version alone.

What lessons have we learned along the way? The first one, I would say, is:

Go all in

We configure our accounts with an in-house-written tool, and that causes us no end of pain, because we have to get creative about how we pull in things like the VPC ID when we're building an ELB, since it's not in that common state backend. If I was to do this again, I'd have a pipeline, something like Sentinel, or potentially, if you're using the open-source version, a Jenkins pipeline, to make sure that all of the base modules, such as guardrails, IAM policies, and the base configuration for the network, are included before you can merge to the master branch, or before you can get that into one of our workspaces.
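
As a hypothetical sketch of that "go all in" baseline (the module names and outputs are invented for illustration), every account's root configuration would pull in the base modules first, so values like the VPC ID come out of shared module outputs rather than creative workarounds.

```hcl
# Hypothetical baseline composition. A pipeline check (Sentinel, or a
# Jenkins job on the open-source version) would refuse the merge if
# these base modules were missing.

module "network_base" {
  source = "./modules/network-base" # assumed base network module
}

module "iam_baseline" {
  source = "./modules/iam-baseline" # assumed guardrail/IAM policy module
}

# Application resources can then consume shared outputs directly,
# e.g. the VPC ID, instead of pulling it in creatively.
resource "aws_security_group" "web" {
  name   = "web"
  vpc_id = "${module.network_base.vpc_id}" # assumed module output
}
```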

The next one is:

Check your code

We don't have any pipelines over Terraform at the moment; you've seen that from what I've shown. We're going to use Sentinel to do this. This'll be our malicious-activity monitoring and our defective-code monitoring, and it stops us needing to do all that remediation activity. Otherwise, the service just won't scale. Because our API is almost a proxy layer on the front of AWS, we might get to the point where the developer doesn't get the true message back; they get a Terraform message instead of an AWS message. By the time you get 10 application teams all picking up the phone saying, "My code's getting this error back from AWS," we've wasted time every time. If we can stop that behavior before it gets into an account, great.

Use modules from day one

We went down the route, originally, of just letting developers do whatever they wanted inside the configurations. That's going to end in disaster for us, because there are just too many developers out there. With 15,000 developers, even 1% could potentially tie up my team. When I first took over the team, I was getting at least 10 calls a day for Chef, before we put all these pipelines, building blocks, and patterns around the usage of it, so I know what that can do to a team. The modules are going to be your architectural building blocks. That's what's going to give you consistency, and that's what's going to allow you to reduce the burden on your support teams.

The next one is:

Reduce manual change

Inside our application pipelines today, we've allowed manual change to our AWS accounts. It was part of our strategy to give as native an experience as possible. That's potentially going to cause us problems downstream. Not yet, but it probably will. When we've got a Sev1 incident and all hell's breaking loose, there's a good chance somebody makes a manual change in there. And then what happens when Terraform reruns? It's going to take it back out, and there's a potential that we're back at the Sev1 incident again. That's probably something we'll look at in 2018 as we learn more about this tool.

And last:

Test your code

These application pipelines that you're building aren't necessarily just there for testing your application configuration. If teams are making a change, say upscaling an auto-scaling group or something, they should be passing that configuration through test, dev, and UAT. And if you've got something like Sentinel, where you can stop that promotion, or some of the new workspace features coming into Terraform Enterprise, where you can potentially say that your code has to go through these particular workspaces first, then use it. Otherwise, you're just going to get people promoting code straight to production, and that's going to end in disaster.

Before I finish, I just want to pull you back to that first slide I presented: where the infrastructure and the developer were disconnected; where developers waited weeks or months for service; where solutions needed to conform to really tight patterns, which stopped us being flexible and iterating on change; and where so many support teams were involved in the build-out of these instances.

Where we are today

Developers can exploit our cloud now to create competitive advantage. They can build solutions from their application and infrastructure configuration in minutes against our hybrid clouds. They can experiment, they can spin up immutable environments that have no technical debt associated with them, or they can play in these sandpit environments where there's no risk to the bank whatsoever. That makes them a lot more agile in their development processes. Finally, they can build repeatable solutions, because they've got it all held in configuration now. We also see a lot of configuration sharing. If an application team stands up a secondary application that has a very similar profile to it, because we're feeding in the Chef cookbooks inline as we run these configurations, we get reusability in our code across our 3-tier infrastructures.

That's it. Thanks for listening. Hope you all enjoyed it.
