See how Boundary ensures secure, zero-trust access to Verizon MEC resources, solving critical pain points such as access key control, bastion host management, and user access control.
Hi everybody. Thank you for coming. Today I'll be talking about secure collaboration with Boundary: how at Verizon, particularly in the Verizon Innovation Labs, we're using Boundary to do testing with a lot of our partners. That was a quick safe harbor statement; you can skip through that one.
As the introduction said, my name is Venkatesh Konanur. I'm an engineer here at the Innovation Labs in SF. We have locations in Boston and L.A. We are cross-continental. We have four initiatives.
One is to work with executives on their 5G journey. 5G is the next transformative cellular technology, and we're looking to onboard a lot of the legacy systems that are on 4G onto 5G — and not just from a network perspective. I know your phone shows 5G, but that's not the whole transformation: for a lot of enterprises, we're looking to onboard the next generation of applications. Our centers serve as the focal point for executives to come and learn about how 5G is going to transform their business.
Nothing happens if developers don't actually use the tools, so another initiative is making developers aware of how they can leverage the network. Verizon has built a lot of APIs — for example, service discovery for 5G endpoints — and we do some of the work tied to helping developers leverage those.
Another thing, I think, more importantly, tied to this talk today is engaging partners. We have the executives, the customers. When I said executives, they're our customer executives, and then there's a developer piece.
But at the same time — our partners that we're working with — they're the ones that build the solutions that run on our 5G network. They take advantage of some of the infrastructure that we deploy at our customer sites.
Lastly — and I think this is a pretty nifty thing — is reaching out to the next generation of talent at universities. That matters not only from a business perspective but from a talent perspective: if the new generation knows what 5G networks can do, it's pretty valuable for us.
As I mentioned, we engage with our partners, independent developers, and customer executives. But when the rubber meets the road — the actual work that happens — we need a secure way for a lot of these people to access some of the compute that we host.
This could be on AWS or Azure — EC2 instances or nodes that we have deployed. We need to make sure people can access these securely, and we need to keep track of who we're onboarding, especially when it comes to key sharing. I'll go into that in some detail. But in this case, I'm talking about public MEC, private MEC, and managing an ever-changing user landscape.
I'll explain what MEC is, and I'll also go over what the 5G network is and how MEC sits within all of that. Public, as the name suggests, is open to the biggest subset of users. That's the public 5G network we all know and love.
The private MEC is the dedicated on-prem compute that we deploy for our customers. This could be particularly for secure facilities where they don't want traffic leaving the point of origination. That's where we deploy a private network and a private MEC — MEC, meaning Mobile Edge Compute. I'll go to the next slide. And we'll talk about how Verizon defines Edge.
Managing an ever-changing user landscape, part of the nature of our team is to be very agile and very front-facing. We meet somebody, think they have a good idea, and then we want to onboard them and give them resources to test on. We onboard new developers and partners all the time. How do you securely do that? That's one of the pieces that Boundary helps us to do.
5G Edge is an internal term Verizon uses for MEC — Mobile Edge Compute. For public MEC, we partnered with AWS: we took a piece of the availability zones that exist in their general regions and put it within our mobile network. These are availability zones you can see if you log into your AWS console, go to zones, and opt in to the Wavelength Zones. AWS calls it Wavelength; we call it 5G Edge. Essentially the same thing.
You have an availability zone that sits within the mobile network. The advantage is that traffic is not leaving and going to the open internet. People are used to setting up tunnels and various configurations when deploying resources in the general cloud. But from a security and latency standpoint, we believe we can offer the benefits that the 5G network uniquely provides. By keeping traffic from exiting the mobile network, we can give a faster, more reliable response time to our users.
This is the public network at a high level. I mentioned a plethora of user devices: we have trucks, we have mobile phones — that's the biggest market share. Traffic goes through a public core, and then it hits the public MEC.
The public MEC itself — the one we have deployed in partnership with Amazon — ties back into their parent cloud infrastructure. So, if I log into my AWS console, among the subnets in my VPC I see ones in AZ1, AZ2, AZ3, and Wavelength Zones one, two, and three. We have about 19 zones currently deployed in the country, placed in high-density metro areas — essentially major cities.
With private networks, the architecture is pretty much the same, except the compute sits within the customer site. What's also different and unique is that we deploy a dedicated network. We talked about signal strength — right now my phone is at four or five bars — but that's the public network, accessible to all of us once you go to a Verizon store and sign up for a plan.
But manufacturing facilities — and, as I mentioned, some secure facilities — need their own bespoke dedicated network. That's where we'll deploy radios at their site, along with compute in partnership with major cloud service providers, like AWS Outposts and Azure Stack Edge. It could also be any compute the customer already has, with some of their on-prem systems.
With public MEC and private MEC come unique challenges. I mentioned public MEC is a public compute resource: you can deploy the server components by going to your AWS console and setting up whatever mechanisms you have. But the client still sits on the Verizon mobile network, so you have to be a Verizon subscriber to hit that endpoint from the inside.
The same applies to the private network and private MEC: you'd have to have some mechanism to remote in to do a lot of the development work. Here at the labs, we have access to both. The closest public MEC zone is in Sacramento, where we have resources deployed for testing with partners. And in the labs in SF, we have a private MEC — both flavors, AWS Outposts and Azure MEC — with the Verizon Ultra Wideband private network deployed.
First, I want to talk about Boundary's multi-hop workers — and to explain Boundary's architecture at a high level. Worker nodes take the place of jump hosts and bastions, giving access to resources within a private subnet.
The worker is the front-facing piece: the public node that has the public IP address, and the only thing you need to apply firewall rules to. The Boundary client talks to that worker node. If you're using self-managed or enterprise Boundary, there's a control plane that manages the workers as well. Essentially, the worker node stands in for the jump hosts and traditional bastions that people use.
A multi-hop worker can also serve as an upstream for another worker. You can imagine, from an upstream/downstream perspective, these two agents talking to each other. That plays a big role in a private network scenario — a private compute resource or on-prem compute: we have a worker node deployed locally that talks to a worker node we have deployed in our AWS cloud. That way, we have centralized infrastructure that can give access to both public and private nodes.
I mentioned worker nodes replace a multitude of jump hosts. This was a problem for us: we work with such disparate partners, and sometimes they're market competitors. We want to make sure they don't by any means cross paths with each other and that no data gets shared.
We want to keep those separate, and that became quite a hassle with setting up bastions and jump hosts in the past. There's also no key sharing. Sharing keys externally is not a good security practice — in fact, it's frowned upon. I say frowned upon, but somebody in IT is definitely taking it a lot more seriously than I do.
Nonetheless, we should not be sharing keys externally at our organizations, and Boundary's SSH credential injection makes that unnecessary. When I give an external user access to one of the compute resources, I'm not sharing any of the keys. Their permissions are time-limited, so access is bounded: I can have the user's credentials expire within a month or two, covering the period of testing, with none of the keys shared externally.
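As a rough sketch of how this kind of setup can be wired together, here is a hypothetical example using Boundary's Terraform provider with Vault-signed SSH certificates. The resource names, Vault address, sign path, and username are all placeholder assumptions, not our actual configuration:

```hcl
# Hypothetical sketch: SSH credential injection via Vault-signed certificates.
# Users connecting through this target never receive a raw private key.
resource "boundary_credential_store_vault" "labs" {
  name     = "labs-vault"                    # assumed name
  scope_id = boundary_scope.project.id       # assumed project scope
  address  = "https://vault.example.com:8200" # assumed Vault address
  token    = var.vault_token
}

resource "boundary_credential_library_vault_ssh_certificate" "ssh" {
  name                = "mec-ssh-certs"
  credential_store_id = boundary_credential_store_vault.labs.id
  path                = "ssh/sign/boundary"  # assumed Vault SSH sign path
  username            = "ubuntu"             # assumed login user on the target
  key_type            = "ed25519"
}

resource "boundary_target" "mec_dev" {
  name         = "wavelength-dev-box"
  type         = "ssh"                       # SSH targets support injection
  scope_id     = boundary_scope.project.id
  default_port = 22
  injected_application_credential_source_ids = [
    boundary_credential_library_vault_ssh_certificate.ssh.id
  ]
}
```

The time-limited piece isn't shown here: in practice you'd expire the user's account or revoke their grants after the testing window, and the signed certificates themselves are short-lived by design.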
From a management perspective, I no longer have to generate a multitude of keys per user with different permission levels. I can manage one master key that controls different segments of the compute.
In this case, we're using the HCP-managed version of Boundary, so the upstream worker node is what you'd end up deploying. That sits in your public subnet, while a lot of the virtual machines are deployed within a "private" subnet — our Wavelength Zone subnet. I say "private" because those instances are accessible via the Verizon public network but not through a general ISP network. You get assigned an IP address called the carrier IP. It is a public IP address, but I can't reach it from my general WiFi network.
The worker node is what talks to the instances in the private subnet in the Wavelength Zone, so that a user — a tester we've onboarded — can sign in and access those instances. That matters because a lot of our users don't sit within a location that has Verizon's 5G network. We have offshore developers who might not have access to the wireless network, or somebody coming in from a rural area, or people without a Verizon subscription — and we don't want to require them to get on a Verizon network to do development. The worker nodes deployed on our public infrastructure help with that.
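An HCP-managed ingress worker like the one described is configured with an HCL file roughly along these lines — a sketch with placeholder values (cluster ID, addresses, paths, and tags are illustrative, not our actual config):

```hcl
# Sketch of a public-subnet ingress worker registered to an HCP Boundary
# control plane. All values are illustrative placeholders.
hcp_boundary_cluster_id = "<your-hcp-cluster-id>"

listener "tcp" {
  address = "0.0.0.0:9202"   # proxy port the Boundary client connects to
  purpose = "proxy"
}

worker {
  public_addr       = "<public-ip-of-this-worker>"
  auth_storage_path = "/var/lib/boundary/worker"  # stores worker auth creds
  tags {
    type = ["public", "wavelength-ingress"]
  }
}
```

The public IP here is the only address that ever needs firewall rules — the instances behind it stay unreachable from the open internet.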
The downstream worker node is managed through the upstream worker node, and both workers are configured almost the same way — it was pretty simple, just changing a configuration file and adding the right IP address. With the private MEC instance, the security is further enhanced because I don't have to open up port 22 anywhere on a private instance. That hardens the setup, and access to the private network only happens through the assigned worker node — the only allow-listed worker — to get into those instances.
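As a hedged sketch, the downstream worker's config differs mainly in that it points at the upstream worker rather than the control plane (again, addresses and tags are placeholders):

```hcl
# Sketch of a downstream worker inside the private MEC. It dials out to the
# upstream worker, so no inbound ports (not even 22) need to be opened.
worker {
  initial_upstreams = ["<upstream-worker-ip>:9202"]  # the public worker
  auth_storage_path = "/var/lib/boundary/worker"
  tags {
    type = ["private", "outpost"]
  }
}

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}
```

Because the connection is outbound-only from the private side, the private instances never expose an SSH port to anything but this allow-listed worker.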
However, as you can see in the bottom portion, the provisioned user doesn't see the difference between any of these things. They don't know which is a private network and which is a public network. All they see is a list of targets assigned to them by the admin, and they can hit connect, SSH in, and do their development work.
Boundary served a unique role in problem-solving for us because of these two conditions. With private MEC, by definition, we want to keep resources secure — accessible only through the private 5G network, without much entanglement from the external side. With public MEC, because you can only reach those endpoints — the carrier IPs — through Verizon's public network, we needed a solution so developers don't feel that friction when reaching those endpoints.
Secure collaboration is crucial for innovation, given that we work with a lot of partners. We work with a lot of partners because we want to test a lot of different ideas. We want to validate and improve a lot of these applications.
The faster you move, the more you might break a few things, but it's good to learn about the mistakes early on. Once we go through a few iterative cycles, we can pass it on to the product team to develop further.
Managing resources should be seamless, reducing user friction. I've been in phases where I'm trying to reach something and the bastion is down, or the jump host isn't working correctly, or a lot of the firewall rules weren't reset properly because the laptop's IP address changed or it's suddenly on a different network. Whitelisting individual IP addresses caused a lot of friction and a lot of repetitive human work.
Boundary can scale to both public and private. As I mentioned, the developer doesn't see the difference between public and private; it doesn't hinder them. They can open their laptop and connect over their home WiFi. Most are working remotely, so they can connect to either of these resources in SF, in Boston, in LA, wherever it might be. They don't have to be on site.
Innovation teams improve their agility by adopting new technologies — I think that's a pretty obvious statement. But something we try to apply to ourselves is using newer, more secure tools to make development more accessible and to make it easier for companies to engage with Verizon.
HashiCorp Boundary helped solve our challenges, but it can also scale in production — essentially, we're able to prove that here. The Innovation Labs wants to test a lot of these different applications and tools and have them go through a vetting process. There's definitely potential when we deploy some of these solutions: eventually down the road, after going through the product team, when a solution gets deployed at customer sites, there's still a secure method for developers to access it in case they need to fix a bug or something.
With that, thank you for listening, and thank you for coming.