With a microservices architecture, how do we handle service updates that used to happen every few months but now happen several times a day? The network becomes a major bottleneck.
Hi, my name is Armon, and today we’re going to talk about the challenge of networking with a microservices architecture. As we have these many different smaller applications that we want to update frequently, particularly in a cloud environment where we might be scaling them up and down, what are the challenges that brings from a networking angle, and how do we start to address this?
When we talk about microservice networking and the challenges associated, I think it’s useful to step back and look at a bigger network. And when I talk about the bigger network, there are 2 key network paths to talk about. One is north-south, and when we talk about north-south, that’s traffic flowing from the public internet and our end users into our networks, so flowing into us, versus east-west, which is traffic flowing within our data center between different services.
When we talk about these 2 key flows, there are a bunch of key systems to consider. The first is, as traffic comes north-south from users over the public internet, what are the systems that are going to be touched? Common ones will include firewalls and load balancers. These load balancers will then ultimately bring us back to our different applications.
Maybe we have a web app and we have an API service and these services might need to communicate east-west to other services like a database. And our database may also have an internal firewall that protects it. So this flow from our API server, for example, to the database is an east-west flow, versus this path from our user to our web service, which is a north-south flow.
Now, as we start to talk about these challenges within the context of microservices, what are we really talking about? One aspect of this is, in a traditional world, these pieces were all relatively static, and some of the key network components were hardware. So as we talk about microservices in a cloud environment, we can’t necessarily ship our hardware devices anymore to the cloud. So some of these things have to be treated and managed like software.
The other thing is our applications become much more dynamic. We might want to be able to push 5 or 10 updates a day to our web server. We might want to dynamically scale our API servers up and down based on load. As we start to do these things, we put pressure on these different networking pieces. As an example, if we autoscale our web servers and APIs, we now need to update the load balancer, and potentially our ingress firewalls, much more frequently.
These things now need to be updated to allow traffic to new web servers as they scale up and then disallow access as they scale down and those IPs are no longer relevant. Similarly, as we talk about these firewalls internally, if I scale up the number of APIs, I need to update the internal firewalls and internal load balancers to allow access between these different services.
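The update loop described here, keeping a load balancer's backend pool in sync with instances as they scale up and down, can be sketched as a small reconciliation step. This is a minimal illustration, not any particular product's API; the function and IP addresses are invented for the example, and in practice a tool like Consul-Terraform-Sync automates this against real infrastructure.

```python
def reconcile(current_backends, registry_instances):
    """Compare the load balancer's current backend pool against the healthy
    instances in the service registry, and return the (add, remove) sets
    needed to bring the pool up to date."""
    desired = set(registry_instances)
    current = set(current_backends)
    return desired - current, current - desired

# Example: two new web servers scale up while one old instance retires,
# so its IP is no longer relevant and must be disallowed.
to_add, to_remove = reconcile(
    current_backends={"10.0.1.5", "10.0.1.6"},
    registry_instances={"10.0.1.6", "10.0.1.7", "10.0.1.8"},
)
# to_add == {"10.0.1.7", "10.0.1.8"}; to_remove == {"10.0.1.5"}
```

The same diff then has to be pushed to the internal firewalls, which is exactly why doing this by hand a few times a day doesn't scale.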
When we start to talk about the key functions within the network, in various places what we’re worried about is authorization. When we deploy a firewall, we’re using it to manage which services are authorized to talk to one another. And our load balancers are acting as routing and naming devices. To our user, they’re hitting api.hashicorp.com. That’s the name. But internally we might be mapping that to different services depending on the endpoint that they’re hitting.
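That naming and routing role can be sketched as a simple prefix table: one public name on the outside, with the request path deciding which internal service handles it. The routes and service names here are invented for illustration, not a real configuration.

```python
# Hypothetical route table behind a single public name (api.hashicorp.com):
# the endpoint prefix maps to an internal service.
ROUTES = {
    "/billing": "billing-service.internal",
    "/search": "search-service.internal",
    "/": "web-app.internal",  # default route
}

def route(path):
    """Pick the internal service for a request path by longest matching prefix."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]

# route("/billing/invoices") -> "billing-service.internal"
# route("/about")            -> "web-app.internal"
```

As services multiply, this table is exactly the piece that needs to be updated dynamically rather than by hand.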
These are key functions that exist in the network, but now we’re going to put pressure on them by virtue of having many, many more services. As we adopt the microservice architecture, we have many more of these things, but we’re also going to make them much more dynamic. Instead of updating every few months, they’re going to update a few times a day, potentially.
We’re going to put pressure on all of these network services to update themselves. When we talk about this whole world, it’s these key functions. Authorization managed by things like firewalls. How do we update them? How do we update our load balancers as they provide things like naming and traffic management?
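The authorization function a firewall provides can be thought of as an allow-list over pairs of services rather than IP addresses, which is the model tools like Consul express as "intentions." The services and rules below are invented for illustration; this is a sketch of the idea, not a real policy engine.

```python
# Hypothetical service-to-service authorization rules: (source, destination)
# pairs that are allowed to communicate east-west.
ALLOW = {
    ("web-app", "api-service"),
    ("api-service", "database"),
}

def is_authorized(source, destination):
    """Check whether traffic from source to destination is permitted."""
    return (source, destination) in ALLOW

# The API service may reach the database, but the web app may not
# talk to the database directly.
```

Because the rules are keyed on service identity instead of IPs, they don't need rewriting every time an instance scales up or down.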
And then as we talk about our API gateways, how do they provide ingress as well as some amount of routing and filtering? These are the key pieces that we put a bunch of pressure on with our microservice architecture and that we’d like to deal with in a more sophisticated way so that our network doesn’t become the bottleneck that’s preventing our application teams from being able to deliver value.
I hope you found this video useful. I’d recommend going to hashicorp.com and checking out either our resources section, which has a lot more material on service networking and the challenges and how you can solve some of those, as well as the Consul page to learn a little bit more about Consul and how it can solve some of the network automation challenges. Thanks.