Keynote - HashiCorp Waypoint
Oct 26, 2020
Get an overview of HashiCorp's newest project: HashiCorp Waypoint.
- Mitchell Hashimoto, Co-Founder & CTO, HashiCorp
Learn how Waypoint intends to provide a framework with one common workflow for developers to automate the build, deploy, and release steps of the software delivery lifecycle, regardless of the tools you use for each step in that process.
Mitchell Hashimoto: Thank you, Jake. Thanks to everyone else for joining us on day two of HashiConf. Yesterday, we talked about a ton of exciting security-related announcements. As we said yesterday, today we're going to be talking about a little bit more on the developer side of things instead.
In this keynote, I'd like to focus on exploring the developer workflow: how developers interact day-to-day with applications, and some exciting new stuff that we're working on around that.
The Developer Lifecycle
So let's start by exploring the application lifecycle and how developers interact with that lifecycle. Here we have a view of the standard steps during the application lifecycle that any application using any technology tends to hit.
You start with the coding side. You do some tests, you build, deploy, release, then you operate some infrastructure that it's running on. You tend to usually measure both the infrastructure and the application to determine everything is working correctly.
If we look at this lifecycle of an application, there are certain steps that have well-established tools to use with them, as well as patterns around it. On the very far left, we have coding and testing. Around this, you have editors like Visual Studio Code. For tests, you have CI environments like Circle, you put your code on GitHub. These are all accepted and set-in-stone solutions that everybody is happy with and is predominantly using across the industry.
If we look on the very right, we see something similar. In the operate and measure side of things, there are tools like Terraform, Kubernetes, Nomad, Datadog. These are tools that, no matter what application you're working on, no matter what infrastructure you're targeting, these will work for you. They're used very widely across the industry — they're very common tools.
Build, Deploy, and Release
If we zoom in here on these middle three, it gets a little more complicated. When we look at build, deploy, and release, there are a lot of tools out there that aim to address these steps. The challenge is they're usually very technology-specific, or there's a lot of fragmentation between communities. Those tend to be related — because of the fragmentation, there are a lot of different tools.
You see some solutions here, but they are a challenge, and they're not consistent. If you learn one set of tools, you tend to have to relearn a totally different set of tools to use a different platform, use a different programming language, or join a company with a different cultural approach to this problem.
Build, deploy, and release is where we were seeing a lot of challenges. To be clear, the problem with these steps isn't where we could deploy applications. Over the past decade or so, we've seen a huge rise in the number of places we could run our applications.
We have EC2 VMs, so you could still run VMs. We have container platforms like Kubernetes. We have full Platform as a Service, like Heroku, and Netlify, and others in that space. Now we also have serverless platforms popping up, like Lambda or Google Cloud Run. In this category of where to run applications, there is a wealth of choice, and it's gotten very exciting over the past 10 years seeing that change happen.
The challenge instead is not where, but how we get there. It's the workflow of build, deploy, and release that is extremely fragmented and difficult with these different platforms. Let's make this concrete and take a look at some examples, starting with build. We'll go through build, deploy, and release.
Let's start with build, and let's look at an EC2 example, a Kubernetes example, and a Google Cloud Run example. These are three very popular platforms that you might use today. They're three different paradigms as well, VMs, containers, and serverless.
Taking a look at these three examples, looking at build alone, you could see the challenge starting to arise. For each platform, we have a totally different industry-accepted, community-accepted tool that would be used for this phase.
For EC2, you might use something like Packer to build the VM image. For Kubernetes, you might use something like Docker to build the container image that is used with Kubernetes. For Google Cloud Run, you're likely going to use either Docker again directly or the Google Cloud-specific command line interfaces that they've created for Google Cloud Run.
You could see some fragmentation here, even though in abstract, they're all trying to do the same thing — take some source code from an application and turn it into some artifact that is accepted by the platform.
Next, if we look at deploy, very similar. We see Terraform for the VM side, kubectl coming in, or Helm for the Kubernetes part, and then for Google Cloud Run, we're still living in that Google Cloud CLI. There's fragmentation here. But between these steps from build to deploy we're also seeing the tool itself change.
For VMs, for example, we went from Packer for the build phase to Terraform for the deploy phase. We saw something similar with Kubernetes. We went from Docker for the build phase to kubectl or Helm for the deploy phase. So we're seeing both fragmentation across and between the platforms and the steps.
If we look at release, we see the same thing. There are some tools that are staying the same, like Terraform, and then there are some changes in the tools that are being used. Across the three different paradigms and three different platforms, we're seeing different tooling being used.
So the challenge is the workflow of build, deploy, release, and the fragmentation and specific tool knowledge that you need to get through those. We've used just three examples here, but remember, there are a dozen or more different viable platforms that we could be sending our applications to.
To boil it down into a simple challenge, deploying applications to production doesn't seem to have gotten much easier. There are a lot of challenges to get your application from code into a deployment platform — even though the choice we have of where to put them has grown wildly.
Developers Want a Better Interface
We see different organizations attempting different solutions to try to solve this in different ways. Here are three examples of how a company today might try to solve this developer workflow challenge to get their applications to production.
On one side, you have something like Makefiles. You might standardize your company — your team — on a set of Makefiles or make tasks to use, and this tends to work. The challenge is that it is very difficult to scale, and there's no community knowledge or shared knowledge around this practice.
Another option might be using CI/CD Pipelines to hide it behind an interface — it might be a git push or clicking a button in the CI UI. This is similar to Makefiles in that it tends to be very organization-specific and it's very fragmented between the different CI environments and tools you use as well.
Platform as a Service
Then the last option is a Platform as a Service. A Platform as a Service gives you a clean way to get an application to production. But the challenge is if you exit the boundaries of that platform. If you need to do something a little bit too complicated or you need to talk to an application — or interact with an application that isn't in the platform — a lot of these platforms tend to break down or become extremely difficult to use. We could see that with platforms we had around ten years ago, as well as the emerging platforms we have today. They tend to be an all-or-nothing type thing.
Here are three interfaces that have been tried, but we think there might be a better way. It boils down to us trying to address the developer need to deploy. They want to write their applications, and they want to deploy. No matter what platform it's going to, no matter what technology is in use, they want a common workflow to deploy their applications.
What Are Our Developer Interfaces?
Let's take a step back, and talk about HashiCorp for a bit, and talk about our experience with developer interfaces. If we look back across this application lifecycle — from code to measure — we've released a lot of software in here.
If we look at our whole portfolio, there is one tool where we focus on developer workflow first. That is in the coding side with Vagrant. From day one, Vagrant was a tool to build development environments. We tried to build a workflow — to consistently build development environments — that was repeatable and shareable.
The way we built this workflow was boiling it all down into a single command, which was vagrant up. In one command, we took what was, prior to Vagrant, dozens or many more steps of setting up a developer machine, down to one command that worked on all major operating systems: Mac, Windows, and Linux.
We did this ten years ago, and Vagrant has been wildly popular since then. Developers attached to this workflow. It solved a problem. It made something repeatable. It worked no matter what programming language you were using, what operating system you were on, how you were deploying your application.
Vagrant is one developer interface we made that we felt hit the mark in what developers were trying to do. So the question, looking at and building on that experience, is: can we create a similar type of workflow for application deployment? We think we can. Prior to announcing this today, we shared what we built with a number of people outside the company. This is what they had to say about it:
Jessie Frazelle, CPO, Oxide Computer Company: Waypoint is awesome. I would expect nothing less from the team at HashiCorp.
Tim Perrett, Senior Staff Engineer, Alpinist: I think the most promising thing about it is a qualified build-deploy-release cycle. I'm really hopeful that Waypoint will be able to relieve people from the need to create these custom systems with custom scripts. Depending on the size of your organization, it might even be an entire team. I'm really excited about the prospect that Waypoint might allow a Terraform-esque community to form around it in the continuous delivery space.
Brendan Burns, Kubernetes Co-Founder, Microsoft Azure: Hey there HashiConf, the future of Cloud Native is developer productivity. So I'm excited to see innovative tools like Waypoint make it easier for developers to build, release, and deploy their containers out to our services like Azure Container Instances, or Azure Kubernetes Service, or even to the edge with Azure Arc. Congratulations, HashiCorp, on the launch of Waypoint.
Mitchell Hashimoto: As you could see, people seemed excited about what we had to show them and I'm excited to now share that with you today.
Today, we're announcing our new project, Waypoint. Waypoint is a new free and open source project available now.
Waypoint gives you a consistent workflow to build, deploy, and release your applications on any platform. The way Waypoint works — based on what we learned with other tools — is waypoint up. The waypoint up command encapsulates the build, deploy, and release phases — all in one command — to focus on getting your application from development into production.
You can see here on this slide that we break down each phase — the build, deploy, and release — into the specific tool that you want to use to make that happen, the platform where you want it to go to, and the logic you want to use to release it to the public.
Waypoint’s Configuration File
We take each of these configurations, and we put it into a single Waypoint configuration file. In this example here, we would put in the build phase, the deploy phase, and the release phase.
By putting it all in one file, you could reference this one file to know the full logic and lifecycle of how that application gets to production. You can see here that Waypoint isn't trying to replace tools like Helm or Docker, or Kubernetes directly. We're trying to wrap these tools, glue them together in the right order, and provide a consistent workflow on top of them.
For the build, we're using Cloud Native Buildpacks here — that's what the pack means in this example file. For deployment, we're using Kubernetes. For that, we will be creating deployments, services, etc. For release management, we're also using Kubernetes' native systems. In this case, we use the Kubernetes service primitives to point to the correct deployment that we want the public to see. So you could see that we're able to compose these different technologies and give a consistent workflow on top of them.
When you run a waypoint up, the output looks something like this: you'll see the build, you'll see the output as part of that, you'll see the deploy happening, and then you'll get a release at the end.
With the release, you'll get a release URL. The release URL is the public URL that people can visit to see this application. This is provided by the platform. In the case of Kubernetes here, we're getting an IP address for a load balancer.
Below that, you get a deployment URL. You'll notice that this deployment URL has a waypoint.run domain. This uses a service that we run — that is separate from Waypoint — to provide routable URLs for all your deployments.
This is an optional service, but it's nice because it gives you a URL to every application you have — no matter what platform it's on — and it's consistent.
waypoint up works across any application code to any platform. It doesn't matter what programming language you're using — or where you're deploying it to — waypoint up gives you that same consistent workflow, URL output, and deployment URLs across any mix of these platforms and programming languages.
Validating your Deployments — Is it Running?
After you run waypoint up, the next question is always "is it running?" With any deployment tool you use, you do the deploy, and the next thing you'd check is whether it's actually running. This might be opening up a URL and refreshing a few times — it might be checking logs. Either way, you want to make sure the deploy worked. So we think it's important for a tool that does a deploy to also give you the tools necessary to validate that your deployment worked. With Waypoint, we provide a number of features to validate that the deployment worked.
One way to validate your deployment is through our UI. Waypoint has a UI — that you could visit in the web browser — that shows you a list of all your builds, deploys, releases, and their current status. You could use this to verify that your deployment is complete. You know that we built this for developers because we also have a dark mode.
You could also use waypoint exec, a tool on the command line that you could use to open a shell or execute any process in the context of a deployment. You could do this to read a file, check logs, debug things. This gives you access to any deployment to make sure that it's up and running.
waypoint exec works across any platform you're deploying to. We know tools like Kubernetes provide similar exec-style functionality on their own. But you can also use waypoint exec with Google Cloud Run, EC2 instances, and anything else that Waypoint supports. It's the same experience across all of those. This gives you a great tool to make sure that your deployments are working.
Another thing you could do is view logs. With Waypoint, you could run waypoint logs. As part of that, you could see all the recent logs from all your deployments of that application.
Again, this works across every platform consistently. Even if one of your platforms provides a log-like capability, you could also get this for every other one you're using — using a consistent workflow. These logs aren't meant to replace long-term log storage or log searching. These are the past few days — past few weeks — a set of logs that you could use to diagnose any challenges you might be seeing quickly or to verify that everything is up and running as you would expect. Those are a few sets of tools that give you the ability to validate that your deployments are working and also to debug issues that might happen later on.
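A quick sketch of the validation commands described above, assuming the waypoint CLI is installed and the project has already been deployed:

```shell
# Open an interactive shell inside the current deployment,
# regardless of which platform it runs on.
waypoint exec /bin/sh

# Stream recent logs from all deployments of this application,
# using the same command on every platform.
waypoint logs
```

Both commands run against whatever deployment the current project context points at, which is what makes the validation step consistent across platforms.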
The last pillar that's very important for Waypoint is extensibility. We want to make sure that Waypoint can work the way you want to work as well as work where you want to work. So it's extensible via a plugin interface, and the workflow is flexible to work in a variety of scenarios. Let's take a look at the workflow first.
Waypoint in CI/CD, GitOps, and ChatOps Workflows
A common way people like to deploy applications is CI/CD. There are other things that you're trying to do as part of deployment that don't necessarily fit into a standard build-deploy-release cycle, and CI/CD is a great way to wrap that up. Waypoint works great with CI/CD. We've provided a number of examples and documentation of how you could run Waypoint directly within your CI/CD platforms. In doing so, you simplify the step of deploying your application to be consistent across a variety of different places.
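Inside a CI/CD pipeline, the deploy step can collapse into a couple of CLI calls. This is a hypothetical generic CI script, not one of the official integrations mentioned in the talk:

```shell
#!/bin/sh
# Hypothetical CI deploy step: fail fast on errors, validate and
# register the project, then build, deploy, and release in one
# non-interactive command.
set -e

waypoint init
waypoint up
```

The same script works unchanged whether the waypoint.hcl targets Docker, Kubernetes, or a serverless platform, which is the consistency argument made above.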
People also like to work through a GitOps or Git-based workflow. Usually, the way this looks is you would do a git push, which triggers a deploy either to production or a preview. With Waypoint, we've documented and published plugins for different version control systems so that you could integrate this directly into your Git workflows.
In this screenshot, we show you our GitHub integration. With the GitHub integration, we've made it so that every branch will get a separate build-deploy-release, and a deployment URL that is branch-specific. So you could see for your branch, the preview of that deploy, how it's working, and then when you merge it into your main branch, it'll deploy into production. All of this is super configurable and an example of what's possible with Waypoint.
There are a variety of other ways people like to work. We mentioned CI/CD; we mentioned Git. Some people like to trigger deploys via Slack; some people like to use a UI. Of course, there are a lot of people that use the Terminal — the command line interface — and are comfortable with that.
As you can see through the couple examples we gave, Waypoint is flexible to work in all of these environments. But it's not just flexible to work in those environments; it works across them as well. You could deploy something by clicking in the UI at the same time as someone deploying something using the CLI; at the same time that someone is maybe pushing a branch into GitHub and getting a branch-based deployment. Waypoint lets you work the way you want to work — or the way that your company or organization wants to work.
Waypoint’s Extensible Interface
The next thing is making sure that Waypoint supports all the available systems that you want to orchestrate with it: that it can deploy to the platform you want, and that it builds images the way you want to build them. To do that, Waypoint has a fully pluggable, extensible interface. This is a feature of software that you've come to know and love from HashiCorp. We do this with almost everything we build, and Waypoint is no exception.
With Waypoint, you could build plugins that are custom builders, custom registries of where these artifacts could be sent to. You can build custom platforms of where the deployments are sent or where the deployments run.
Finally, you could also build plugins to do custom release management. Do you want to do blue-green deployments? Do you want to deploy right away? You could customize all of this through a plugin interface. We support more than a dozen plugins out of the box, but we expect this to grow significantly as the community jumps in and starts building more plugins.
I've highlighted the three important pillars that we built Waypoint on. We want to make sure that Waypoint provides consistency in its workflow. We also want to make sure that — as part of that workflow — you have confidence that it worked. We showed that through exec, logs, etc. Finally, we want it to be extensible and to build an ecosystem. That's done via our plugin interface and Waypoint's flexibility to support all of these different environments.
So I spent a lot of time talking about what Waypoint is and why we built it, but I want to dive in and show you how this thing works. So let's do that now.
To start this demo, we're going to show you the contexts that Waypoint has. Contexts let us talk to different Waypoint servers. In this case, we have a local environment that we're running to test our deployments and a production environment that we might be sharing across our company.
Every project that you have has a Waypoint configuration file. The project maps one-to-one to a Git repository or VCS repository. In this case, we named the project HashiConf Demo, and we have one application that we want to deploy within this project. To do that, we're going to be using Docker to begin. You can see the demo here, our app, and then our build and deploy settings.
Next, let's take a look at the Dockerfile that we will be using to build our application. In this case, it's a standard React-based application, so we're going to build the application with React and then serve it using Nginx.
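The Dockerfile described here might look something like this multi-stage sketch; the base image tags and paths are assumptions, not the exact file from the demo:

```dockerfile
# Stage 1: build the React application into static assets.
FROM node:14 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the compiled assets with Nginx.
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
```

The multi-stage split keeps the Node build toolchain out of the final image, so the artifact Waypoint deploys is just Nginx plus static files.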
With all these configuration files in place, we first run waypoint init. A waypoint init registers our project with the Waypoint server. It also validates our configuration, makes sure that we have all the plugins, and checks that we're authenticated properly. Once the waypoint init is done, we could run waypoint up. As waypoint up is running, you can see we build the image and show you that output. We do the deploy, and we do a release.
To remind you again, this is all happening against Docker — against our local deployment currently. This might be to test the build, test the deploy, make sure it works. Once the release is done, we get a URL at the end of it. If we click that URL, we could see it worked, everything deployed, and it's running.
Deploying into Production
Everything worked locally. Next, we're going to switch to our production context. Let's deploy this application into production. To show that things are changing, we're also going to modify the application. I’m going to un-comment some stuff that we have here in our application so it looks a little different.
We’re also going to modify the Waypoint configuration. In production, we're not going to be using Docker directly. We're still going to use Docker to build the image, but we also need to push the image of the build to a registry so that our deployment platform can reach it.
We're going to use a Docker registry and push to that. Then for the deployment, we're going to use Kubernetes in production. We use that plugin — Kubernetes has slightly different configuration options — and set some of those. We're also going to set up a release manager in this case, so that we can visit this publicly. We run the init process again; since we switched servers, we need to make sure that this project is registered with the new server.
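The production configuration described here — build with Docker, push to a registry, deploy and release on Kubernetes — might be sketched like this. The registry address, image name, and port are placeholders, and the exact option names depend on your plugin version:

```hcl
app "web" {
  build {
    # Build the container image with Docker locally...
    use "docker" {}

    # ...then push it to a registry the cluster can pull from.
    registry {
      use "docker" {
        image = "registry.example.com/hashiconf-demo"
        tag   = "latest"
      }
    }
  }

  deploy {
    # Create the Kubernetes deployment objects.
    use "kubernetes" {}
  }

  release {
    # Expose the deployment publicly via a load-balanced service.
    use "kubernetes" {
      load_balancer = true
      port          = 80
    }
  }
}
```

The only structural difference from the local Docker setup is the added registry stanza and the Kubernetes deploy/release plugins; the workflow command stays waypoint up.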
Then we run waypoint up once again. In this case, it'll look a little different. The build looks the same because we're still using Docker, but we're going to see a push happen afterward. Then we're going to see the deploy and release look different because we're going to Kubernetes this time and not Docker, and they work differently. But at the end of it, we get a release URL and a deployment URL. Let's go ahead and look at that.
It deployed. It looks different because we modified our application, but it's running on Kubernetes and using the exact same configuration style and the exact same waypoint up workflow.
Deploying to Serverless
To show the flexibility of Waypoint one more time, we are now going to modify the application because we're going to deploy it to a different platform. We're going to deploy to a serverless platform this time.
Let's modify the application and add in a little bit more so it looks different. We're going to add in this timeline component. Let's change it from a dark theme to a light theme, so it'll definitely look different this time. Next, we're going to modify our Waypoint file again. In this case, we're going to be targeting the Azure container service. So, let's throw away the Kubernetes configurations and add in the Azure container configurations.
We use the Azure container instance plugin. This has a number of different configurations — you can see we need a resource group here and capacity config. It looks a little bit different, but the workflow will be the same.
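The Azure Container Instances settings mentioned here might look roughly like this; the resource group, location, port, and capacity values are placeholders for illustration:

```hcl
app "web" {
  deploy {
    # Deploy the image as an Azure Container Instance.
    use "azure-container-instance" {
      resource_group = "waypoint-demo"
      location       = "westus2"
      ports          = [80]

      # Capacity settings for the container instance.
      capacity {
        memory    = "1024"
        cpu_count = 4
      }
    }
  }
}
```

Swapping the deploy stanza is the only change needed to retarget the same application from Kubernetes to ACI.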
With that saved, we can run waypoint up again. We don't need an init this time because it's the same server that we're using. The deploy and release will look a little different because we're going to ACI this time and not Kubernetes.
At the end of this, we're going to get — again — two URLs out of it; the application URL and the deployment URL. If we open that, we now see our light mode Waypoint website with our timeline component rendered there. This is now running on ACI.
Demo Recap and Summary
We showed three different deploys: one locally on Docker, one on Kubernetes, and one on ACI. Throughout the whole thing, it required minimal changes and running waypoint up again. We talked about logs, so let's also look at that. If we run waypoint exec /bin/sh, we get a shell. This shell is running in ACI. This is a feature that ACI doesn't provide natively, which shows how Waypoint can provide features that the platform may not support. It's a real shell. We could prove that by cat-ing the Nginx configuration that we built into our image — you could see that it is there.
We can also take a look at logs — let's run waypoint logs. We could see all the various HTTP request outputs that we would see because we visited the application.
Let's finally look at the UI. We did all this work in the terminal, so now let's open up the UI. You can do that simply with the waypoint ui command. Since our Mac, in this case, is in light mode, our UI is also in light mode. You can see here that we have all our builds and all our deploys listed. You saw some logs, but you could also see them from the UI. The UI gives you a great way to get an overview of everything that has been happening with this Waypoint server.
On the deployments page, you could also see on the right that for each deploy, we have that deployment URL. I mentioned earlier in the keynote that every single deploy gets a versioned URL — so we have V1 and V2 here. They have the same name, the globally communal fox, because they're the same application. But they get the V1 and V2 because they're different deploys.
Here I want to click each of them because — if you remember — one is on Kubernetes, and one is on ACI, yet both deployment URLs work. We deployed to both Kubernetes and ACI. The deployment URLs are roughly the same — except for the version — and you can visit both versions of that application. That's neat because you could go back in time and see the previous state of your deployments.
At the end of all this, once you are ready to clean up, we could run a waypoint deployment destroy and destroy all our deployments. This deletes them so they are no longer visible. That is Waypoint.
What Else Is in the Initial Release?
I'm super excited to be able to have shown you that demo today of Waypoint, but there's so much more in the release that I wasn't able to show you. This slide talks about some of those things.
We saw build, deploy, release, logs, etc., but there are so many more plugins and other features listed here that I wasn't able to show you today. There's also a lot more that we want to build toward. This is an initial release, after all, so we're excited about where Waypoint is going to go.
So today, Waypoint 0.1 is now available. You can find Waypoint at waypointproject.io. Give it a try. We hope you love it. If you want to learn even more about Waypoint, we have a deep-dive session given by Evan Phoenix, who worked directly on Waypoint from the first day we started building it. He's going to be doing this directly after this talk.
Thank you very much and have fun with Waypoint.