Presentation

Application Keynote: Exploring the Application Delivery Developer Workflow

With HashiCorp Nomad, Consul, and Waypoint, we're trying to create a paved path for developers that simplifies orchestration, progressive delivery, and release management integrations.

Speakers: Blake Covarrubias and Yishan Lin

»Transcript

Yishan Lin:

Hi, everyone. Welcome to the HashiConf Europe 2021 application keynote. I'm Yishan Lin, the product manager for Nomad and Waypoint. Speaking with me today is Blake Covarrubias, who's a product manager from Consul.

In today's keynote, we're going to explore the application delivery workflow and talk about the different tools on the infrastructure stack that it encompasses. 

We'll also get into how this extends into the greater developer experience and talk about the growing importance of getting that right in today's organizations.

To start off: What is the developer experience, and why is it so important? 

»The Developer Experience

When we think about user experience in the world of infrastructure, there are 2 types of users. 

The first type is operators. These are individuals or teams that are responsible for selecting, evaluating, installing, and configuring infrastructure tools that match their company's hardware and fulfill business objectives.

Once operators settle on a particular infrastructure tool, they take it through a proof-of-concept phase and then into their company's production environments. 

Operators generally take on greater responsibilities, handling the day-to-day management of these infrastructure tools and being responsible for their overall uptime, health, security, and cost. 

Behind the scenes, operators also play a pivotal role in educating and onboarding developers and other internal stakeholders at their organizations, teaching them how to use these infrastructure tools when they're needed.

The title of the operator has evolved over the past few years. Historically, we would call an operator something like a systems administrator. Nowadays, more common titles for operators are SRE, DevOps lead, or infrastructure lead.

The second type of user in the world of infrastructure is developers, specifically software developers. The title and responsibilities of software developers have been largely consistent for the past several decades. The main job of a developer is to write code and ship applications that push the business forward.

Between these 2 types of users, the operators and the developers, there is a very strong and clear producer-consumer model, where developers are the end consumers of the workflows and the infrastructure tools that operators put into place at their organizations. 

Developers consume and use these tools and follow these workflows to deploy their applications and their software onto their company's hardware.

If the developer experience is shaped so much by the operators at any given organization, why does the developer experience matter so much, and why are we talking about it today? 

User experience as a whole has evolved to become a top priority in the world of infrastructure. Look at some of the past technologies that once dominated the infrastructure space but have seen their adoption and usage fade in recent years. 

Hadoop, Mesos, and Pivotal are all examples of technologies that once showed exciting promise, had a great premise, and boasted some powerful functionality. 

However, the user experience for a lot of these technologies, notably on the operator side, was difficult. It was hard for an average operator to get started and properly install and configure their own Hadoop, Mesos, or Pivotal cluster.

It's no secret that during the years that a lot of these technologies were at their peak of popularity, the proper installation and configuration of these technologies often had to be done by external, third-party consultants, professional services, or systems integrators. 

It was hard for an average operator to be successful with these technologies out of the box. And as these technologies began to evolve and mature, adding more features and functionality, the operator experience only got more complicated. 

As a result, operators had a really tough time being successful with these technologies quickly. 

And developers had an even tougher time being successful using things like Hadoop, Mesos, and Pivotal. 

That really taught the infrastructure space its first deep lesson around user experience, which is that when operators are not successful, then developers can't be successful. 

You have to get the operator experience right before you can go building things for developers.

When you look at HashiCorp and any other infrastructure company that has grown in the years since, you see that there is a tremendous amount of collective organic focus that's being placed on the operator experience, making sure that every infrastructure tool that is available is something that can be easily installed, learned, configured, and managed.

HashiCorp in particular has put a uniquely strong focus on getting that operator experience right. 

When you look at any HashiCorp product available today, you see that they all share the same operator-first principles: each tool is a single, lightweight binary that's compatible with many environments and flexible enough to accommodate multiple use cases and workflows. 

As a result of this evolution, the goalposts have moved. Historically, operators were more concerned with, "How do I make sure that I know how to install and configure this tool?" 

The vision now has moved toward, "I now have better infrastructure tools in place that I know have a great installation and day 1 experience, but my focus now is, How do I take these infrastructure tools and put them together into a workflow where my developers can easily learn them, use them, and consume them and deploy their applications onto my company's infrastructure independently and responsibly?"

Now that the operator experience is well established in the HashiCorp products, we're putting a lot of renewed emphasis on getting the developer experience right in all of the HashiCorp products. 

We've set that vision around 3 core pillars: utility, efficiency, and ease of use. They guide how we continue to invest in the developer experience into the future.

The pillar to highlight in the context of application delivery is utility. 

We have a strong belief at HashiCorp that it should be workflows over technologies. What that means is that if you're a developer building a Java application, the workflow you use to deploy and update your applications should be as simple, straightforward, and easy as it is for developers who are using newer technologies like containers and microservices. 

With HashiCorp, there's equal consideration and weight given to both the brownfield and the greenfield teams, projects, and workloads.

»The Orchestrator Is Critical

Now that we know where things are set, let's start at the trailhead of the developer experience by focusing on the tool in the stack that serves as the main intersection point between developers and operators in organizations today, and that tool is the orchestrator.

The orchestrator is a tool like Nomad or Kubernetes, and the main purpose of an orchestrator is to decouple software and hardware. 

Before the mainstream popularity of things like Nomad and Kubernetes in the past few years, applications were always tied down to a specific server or set of servers, and as a result there was really no developer experience or workflow. 

If you were a developer at your organization and you had to deploy or update your application, your experience was submitting a ticket within your organization. 

An operator would manually review it, triage it, and then make the requested changes by going onto the machine and applying them by hand. 

This is a process that was very manual and could take anywhere from hours to days to weeks, depending on the scale of the company and the size of the requested changes.

The adoption of the orchestrator has empowered operators to stop thinking about servers as single, independent silos and more like a dynamic, unified fleet of resources that they can pool together. 

At the same time, with the mainstream adoption of containerization and the public cloud, the developers, now more than ever, can write their applications into lightweight, portable formats that can be easily deployed on any server with few to no dependencies.

Orchestrators like Nomad and Kubernetes are the core tools that have inspired organizations to reimagine their developer experience and workflows, moving from static, slow, ticket-based models to more dynamic, self-service workflows that let developers work independently and responsibly without being exposed to all the underlying details of the infrastructure.

For organizations, given that the orchestrator plays such a foundational role at this intersection point between developers and operators, a tremendous amount of emphasis has been placed on the choice of orchestrator. 

We continue to see that many large, high-profile organizations opt for Kubernetes for its expansive ecosystem and its powerful, first-class support for greenfield projects. 

And a lot of these large enterprises have the budget and staffing and expertise in place to handle the technical complexity of Kubernetes. 

There's also a case to be made that there are certain developer segments, largely found in large organizations, that really enjoy working with Kubernetes.

On the other hand, small and medium-sized organizations continue to use Nomad for its simplicity and flexibility in deploying containerized and non-containerized applications. 

The SRE and operator teams at these smaller organizations are generally a lot leaner, and they're looking for something more approachable and sustainable that can give them immediate success with their brownfield and greenfield deployments.

When we look at the amount of configuration that's possible between Nomad and Kubernetes, we see that these tools at their heart are orchestrators, and they share the same universal concept: the application and the details of how it should be deployed are abstracted into a single job file. 

In Kubernetes, this job file is written in YAML. In Nomad, it's written in HCL. Comparing the configuration options available, Nomad brings a relative simplicity to defining the job spec that Kubernetes doesn't have.
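
To make that abstraction concrete, here is a minimal sketch of what a Nomad job file can look like in HCL. It assumes a containerized web service; the image name, port, and resource sizing are hypothetical examples, not taken from this talk:

```hcl
# A minimal sketch of a Nomad job file for a containerized web service.
# The image name, port, and sizing below are hypothetical.
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 1

    network {
      port "http" {
        to = 8080 # container port the app listens on
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "example/web-app:1.0.0"
        ports = ["http"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```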

But even though Nomad is relatively simpler than Kubernetes, for the average developer deploying an application, taking advantage of all the configuration options available in either tool requires deep expertise, deep context, and exposure to the company's infrastructure. 

And the average day-to-day developer who wants to deploy an application just wants the essentials. They want to be able to point to their container image or application artifact, set their ports, set their resources, and move on. 

Even though there is relative simplicity within these 2 orchestrator tools, the reality is that exposing the raw orchestrator to developers is not really a complete developer experience. 

And we have started to see organizations of all sizes build frontend layers in front of their Kubernetes or Nomad clusters to try to expose the right amount of abstraction and to craft the right kind of developer experience for their organizations.

Now that we've talked about where the orchestrator is today, in the context of the developer experience, I'm going to pass it off to Blake to talk to us about where the networking and service mesh layer fits into this larger picture.

»Application Communication & Progressive Delivery

Blake Covarrubias:

Thank you, Yishan. 

I want to switch gears and talk about application communication. 

What happens once an application is ready to be deployed to production? How does that application communicate with other services in the environment? 

The challenge often isn't just about enabling application connectivity. 

Often there are larger concerns that also need to be solved, such as: How do you secure and control access to the application, or gradually roll out new versions of the application without affecting existing traffic? 

Lastly, how does the application deal with network issues like packet loss, delays, or dropped connections?

Developers could address these concerns directly in the application.

However, doing so can have negative or unwanted side effects, like increased complexity in the codebase, which could lead to longer development times. It also requires networking and security expertise, which development teams may not have. 

So how do you ensure developers aren't impeded by these blockers, without sacrificing security or developer agility?

One way is by using a service mesh like Consul. But a service mesh shouldn't be something that you implement after the deployment. It should be woven into the orchestration layer.

This way, the platform can provide identity-based security and protection of application communication using mutual TLS, as well as observability and resilience for all application traffic.

It can also give developers intelligent routing and testing capabilities, allowing for experimental or blue-green deployment strategies. 
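
As a rough illustration of that kind of traffic shaping (not an example from the keynote itself), a Consul service-splitter configuration entry written in HCL might send a small slice of traffic to a canary version. The service name and subsets below are hypothetical and assume a companion service-resolver that defines the v1 and v2 subsets:

```hcl
# A hedged sketch of a Consul service-splitter config entry for a canary
# rollout; assumes "v1" and "v2" subsets exist in a service-resolver.
Kind = "service-splitter"
Name = "web"

Splits = [
  {
    Weight        = 90
    ServiceSubset = "v1"
  },
  {
    Weight        = 10
    ServiceSubset = "v2"
  },
]
```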

Incorporating service mesh as a core component of the application lifecycle not only creates consistency with deployments, but also gives platform and operations teams a chance to define networking best practices for the entire environment. 

Having this consistent control plane lets developers follow a common, simplified workflow, whether they're interacting with the mesh for things like canary deployments and traffic management or just deploying with the confidence that their services will be able to communicate immediately and securely.

Adding new applications to the service mesh doesn't have to be complex. In fact, applications can easily be configured to utilize service mesh with minimal changes to deployment code. 
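
For instance, in a Nomad job, opting a service into Consul service mesh can be as small as adding a connect block to its service registration. This is a minimal, hypothetical sketch rather than an excerpt from a real deployment; the service names, ports, and upstream are assumed:

```hcl
# A minimal sketch of joining Consul service mesh from a Nomad job.
# Names, ports, and the upstream service are hypothetical.
job "api" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge" # required for Connect sidecar networking
      port "http" {
        to = 8080
      }
    }

    # Registering the service with a Connect sidecar gives it mTLS-secured
    # connectivity to its upstreams without changes to application code.
    service {
      name = "api"
      port = "http"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "payments"
              local_bind_port  = 9090 # app reaches payments via localhost:9090
            }
          }
        }
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "example/api:1.0.0"
      }
    }
  }
}
```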

And while a service mesh offers many application-centric features, which are beneficial to developers and SREs, a service mesh is not just a tool for developers. 

It also benefits network operations teams by providing a consistent, higher-level abstraction for managing application communication policies across different clouds and runtime environments.

It also benefits security teams by giving them the assurance that data in transit is protected and network configurations meet security and compliance requirements. 

Back over to you, Yishan.

»A Unified Workflow with Waypoint

Yishan Lin:

Thank you, Blake. 

Today we've talked about the developer experience in the context of the orchestrator and then at the networking and service mesh layer. 

Now we want to talk about the developer experience in a more holistic way, and one that spans an entire organization beyond 1 infrastructure tool or a set of infrastructure tools.

Developers today are deploying applications to a multitude of tools. They're generally not restricting themselves to a single orchestrator or a single environment. 

It's very common for us to talk to organizations that are running their core workloads in Nomad or Kubernetes clusters but are also building new, event-driven applications specifically to take advantage of something like AWS Lambda. 

The challenge that we've begun to see here is that as organizations adopt more technologies and adopt these different tools that bring their own UIs and CLIs and configurations, more workflows naturally emerge.

Each tool has its own learning curve, its own requirements, its own visibility concepts and requires its own independent workflow. 

Over time, as organizations adopt more and more of these technologies, they end up with many separate, divergent workflows and different environments to deploy to, which leads to overall fragmentation in visibility and decreased developer velocity.

Waypoint, which was launched last year, seeks to solve this problem by unifying all these different tools together into a single, simple workflow of build, deploy, and release. 

Rather than exposing the full underlying raw details of any given orchestrator or deployment tool, Waypoint offers a single, standard, and simple abstraction for developers to deploy their applications into any environment without changing the underlying configuration. 

You can take a Waypoint file and deploy that application to a Kubernetes cluster, a Nomad cluster, AWS Lambda, or Google Cloud Run, all through one single workflow, interface, and abstraction.
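
As a hedged illustration (not taken from the keynote), a waypoint.hcl expresses those build, deploy, and release steps in one place. The project name, registry, and platform plugins below are hypothetical; pointing the deploy and release stanzas at a different platform plugin is what retargets the same workflow:

```hcl
# A minimal sketch of a waypoint.hcl; names, registry, and plugins are
# hypothetical. Swapping the deploy/release plugins changes the target
# platform without changing the developer workflow.
project = "example-project"

app "web" {
  build {
    use "docker" {}

    registry {
      use "docker" {
        image = "registry.example.com/web"
        tag   = "latest"
      }
    }
  }

  deploy {
    use "kubernetes" {}
  }

  release {
    use "kubernetes" {}
  }
}
```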

Each of the 3 HashiCorp products that we've talked about today, Consul, Nomad, and Waypoint, has had new and exciting releases in the past few weeks, with Consul 1.10, Nomad 1.1, and Waypoint 0.4. 

As folks dig into these newer releases, they'll see new and exciting features that push these products forward. They'll also be able to see how these releases serve as the foundational pieces for us to start building the developer experience that we've talked about so much today.

Thank you, everyone, for attending this keynote, and have a great rest of your time at HashiConf Europe 2021.

 
