Case Study

Securing & connecting healthcare platforms with HashiCorp Vault at Roche

Learn how Roche uses Vault Enterprise to securely distribute credentials from their corporate network to heterogeneous on-premises customer environments.

Roche is a Swiss multinational healthcare company focused on both pharmaceuticals and diagnostics. As a global provider of a diverse set of products typically installed on customer premises, Roche faces heterogeneous environments that need security credentials for communicating with remote services and platforms.

»What You'll Learn

Learn how Roche uses HashiCorp Vault to securely distribute credentials from their corporate network to systems across the world.

»Transcript

Leandro:

We are here today to talk a bit about securing and connecting healthcare platforms. My name is Leandro Ausilio. I'm the global product manager for DevSecOps enablement at Roche.

Harsha:

My name is Harsha Vathsavayi, and I'm a product owner and DevOps lead at Roche, working with Leandro. Thank you for being here.

Leandro:

Someone told me once that if you want to do a good presentation, it should always start with either a joke, a story, or some relevant facts about what you want to talk about that day. I'm not a native English speaker, so my humor would be horrible, and we've only got 30 minutes — so there's no story. We're just going to jump in.

»Background and Introduction

First, some facts. Then we want to tell you a bit about our company and what we do for it — the specific challenge that got us here today. We want to talk a bit about digital healthcare. The first thing that I wanted to show you is that the healthcare market is expected to grow to $1.3 trillion by 2030. That's a sixfold increase over the coming years.

As that market grows, data breaches also increase. Last year, we had over 500 — that is almost 1.5 breaches per day. They affected over 26 million records in the US alone, generating almost $7 billion in losses.

»At a Glance 

We work for Roche, and we’ll give you a quick overview of who we are. It's a healthcare company. It was founded in 1896. An interesting fact is that it is still majority-owned by the same family. We are the number one healthcare company in the world in terms of R&D investment. 

Last year, to give an example, we treated — with our pharma division — more than 16 million patients. We have performed, with our diagnostic products, over 27 billion tests. Context — that's almost four tests per person on earth. 

Right now, we have, for example, 32 medicines on the World Health Organization's list of essential medicines. We are a market leader in both pharma and diagnostics. 

To give you an idea of our global footprint, we have over a hundred thousand employees, and we are located in over 150 countries around the world. This is related to why we have certain challenges around technology and securing data. We'll talk a little bit about it later.

For our specific conversation today, we're going to focus on our diagnostics division, which is mainly comprised of two business areas: Roche Diagnostic Solutions and Roche Information Solutions. To quickly explain the difference — one focuses more on our medical hardware devices, and the other on digital platforms and data.

»Roche Information Solutions 

One of the main goals of this area is generating those lab or provider insights — but also creating digital platforms and enabling the other areas to reuse digital models so that we can reduce our time to market. That's their goal.

One of the approaches we took to get there is to create a platform. Because the challenge we had is: how do we secure and stay compliant with our industry regulations — and at the same time, enable teams to get to market fast? It's not that easy to achieve.

This platform, called Navify, is a cloud edge platform that tries to enable that for all of our research and development areas. We have four main pillars that we said this platform has to have, which are super important to us. First, that it's easy to adopt. It has to be secure and compliant by design. It also needs to help us accelerate that delivery.

Most of all, it has to be cost-effective. When you can't reuse and have to start all over again every time you need to create something, that's time-consuming. We need to try to reduce that time to market and the cost of our solutions.

Now specifically, what do we do in this context, as Roche Informatics? Just as Navify helps our R&D areas get products to market fast, we try to enable software engineers across our company — there are thousands of them — to do that job day-to-day as fast and as seamlessly as they can. We provide DevOps tools to accelerate that software delivery and release.

»Our Vision 

We have a vision that we communicate to everyone across our business as much as we can: We believe our software engineers shouldn't have to spend time building and/or maintaining their own toolchain. There's a bit of a caveat to that in the way we work day-to-day. We do inner source, so a lot of these platforms we manage collaboratively with the business.

Something that I always say is that we are a team of about 25-50 people across the globe. We service thousands of software developers. So, the only way we can get things done as fast as we need to is if we build things in collaboration with them. That, to us, has always been super important. 

»Our Guiding Principles

We run our services with three main guiding principles. First, we want to radically simplify and harmonize our landscape, which we'll give you a glance at now as well. Second, we want to enable multi-cloud.

I think the nature of what we do requires us to run very dynamic environments. We don't believe on-prem infrastructure is the right vessel for build pipelines and things like that. We usually try to embrace cloud providers to leverage that. Finally, we want to secure and automate by design.

»Product Landscape 

This is a quick overview of our technology landscape for software engineering. As you can see, we split it into capabilities. Usually, we leverage Gartner's infrastructure automation pipeline to do this. If you look closely into it, you'll find that we run sometimes multiple products per capability, which is not ideal. At the same time, we've highlighted how we are leveraging the HashiCorp suite of products to achieve some of those goals. We are also working now on trying to consolidate and streamline this much more. 

»Our HashiCorp Stack Usage 

We are now leveraging the HashiCorp stack, using Vault for secrets management and encryption as a service. At the same time, we use Consul for service mesh and service registry. We use Terraform Cloud now to provision and manage multi-cloud infrastructure, and Nomad for our container orchestration.

As you can imagine — given the number of employees I mentioned and how we are located across the globe — most of our medical devices are located in many different countries. Those products use credentials to communicate back to our services, our datacenters, or the different cloud platforms.

The simplest example we can give of what we do: a field service engineer has to get access to one of those devices to connect to it, pull Docker images, and upload and download data from it. All of that is done through credentials that we somehow need to maintain, secure, and update as often as we can to avoid a data breach. I'll hand it over to Harsha now for a quick overview.

»Secrets Management Requirements 

Harsha:

Yes. Thanks, Leandro, for the introduction. Now, I'll first talk you through the challenges we have to solve for delivering secrets to our products. As we mentioned, our products are installed at customer locations and are operated at customer sites.

As part of the product lifecycle, the products have to talk to other Roche cloud platforms and Roche digital solutions, which are running at different locations. For successful communication and operation, these products use various types of credentials for connecting to the devices, for TLS encryption/decryption, and for securing the connection as well.

In this operating model, we have two challenges. The first one is: how do we make sure we can deliver secrets, in a secure way, to the products operating globally in a distributed fashion? The second challenge is — as you may have noted at the beginning of the presentation — data breaches are growing every day, and being in the healthcare industry, data breaches are very costly. We have to control them and avoid potential security leaks. We have to continuously update the credentials.

In addition to that, we have some other secrets management requirements, which I will summarize now. Our secrets management solution has to satisfy all of them. The first one is secure accessibility to secrets across regions. The second one is that the products are sitting at the customer site, and the customers' IP ranges may vary. They're not static anymore; they can be dynamic.

Another thing is that, as the customers operate in a controlled environment, firewall whitelisting is not easy. It's a very complex process and takes quite a long time. In addition to that, to avoid potential security leaks, we want to update credentials frequently.

Then the other requirement is that our product portfolio is quite diverse. We have different sets of products. If any of you have done a COVID test: we have a product, installed in laboratories, that can do more than 1,000 COVID PCR tests in a few hours. At the same time, we have small devices running at pharmacies to get health insights as well.

These products use different types of secrets for their lifecycle. We have to support those requirements as well so that the products can use different kinds of credentials for their communication. Finally, we have customer support engineers who have to troubleshoot the devices. They need an easy way to retrieve the secrets and update the secrets when needed.

»Delivering Secrets to Products 

Now, let's look at how we have created a solution that satisfies these requirements. In this journey, we started by evaluating all the secrets management offerings in the market — including cloud provider-based offerings and other prominent secrets management products.

Out of all of them, HashiCorp Vault satisfied the majority of our requirements. We initially started with open source Vault. Then, as we mentioned, we have diverse product teams. If we have to support all of the product teams on one Vault, it's quite cumbersome — having separation of concerns and multi-tenancy is very complex. To avoid that, and to reduce the burden on our operations team, we went with Vault Enterprise, which offers the concept of namespaces. The namespace concept gave us the flexibility to configure different secrets engines as the products need.

Also, the product teams now have autonomy, with their secrets completely isolated from other product teams — so they know their secrets won't be touched accidentally by another team. This gave us a clear boundary securing one product team's secrets from the others.

Finally, we want an easy way to onboard product teams to our Vault environment, so they can quickly start managing secrets in a better way using Vault. For that, we have developed a self-service approach. I'll talk briefly about how we have developed it.

»Vault Onboarding — A Self-Service Approach 

The self-service approach is a GitOps-based flow. We have everything as code in a repository. The teams just have to provide the product name they want to use for the Vault namespace, and then the authentication method they want to use to log into Vault.

Many companies use a lot of authentication providers. Similarly, we use a lot of authentication providers, and we already have a lot of groups that the teams belong to. So, we also ask which group the team wants to use when authenticating to the Vault namespace.

We call it a self-service approach because the GitOps flow, by default, gives us an audit trail of which product team is using which namespace, and who has access to what. Being in a regulated environment, it's very helpful for us to know who is accessing what — it gives us audit and governance information. Also, it has improved the rate of our adoption.

As Leandro mentioned, we work in an inner source way, and we have developers working globally. Being a small team, we can't support them 24/7. We have collaborators from other parts of the world. So, when someone files a pull request, even if we are sleeping — we’re in Europe — someone from the US or Asia can look at the pull request and approve the flow. Let's see how we have implemented this. 

Again, here we have heavily used the HashiCorp Vault Terraform provider. We used the Vault provider's resources to create a namespace and then configure it with the default authentication methods and default authentication provider — an LDAP or GitHub provider. We have also set some guardrails, which apply a default permission set on the namespace. As soon as an operator approves the flow, we have a CI/CD job running, which uses Terraform to provision the namespace and then configure it as needed.
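
To make that concrete, here is a minimal Terraform sketch of such an onboarding flow, using resources from the HashiCorp Vault provider. The LDAP endpoint, group, and policy values are illustrative — not Roche's actual configuration — and namespaces require Vault Enterprise:

```hcl
# Minimal sketch of a self-service namespace onboarding flow.
# All endpoints, paths, and names are hypothetical examples.

variable "product_name" {
  description = "Product name supplied in the pull request; becomes the namespace path"
  type        = string
}

variable "ldap_group" {
  description = "Existing LDAP group that should get access"
  type        = string
}

# An isolated namespace — "your own mini Vault" — for the product team.
resource "vault_namespace" "product" {
  path = var.product_name
}

# Default authentication method inside the new namespace (LDAP here;
# a GitHub auth backend would be configured similarly).
resource "vault_ldap_auth_backend" "ldap" {
  namespace = vault_namespace.product.path
  path      = "ldap"
  url       = "ldaps://ldap.example.com" # hypothetical LDAP endpoint
  userdn    = "ou=Users,dc=example,dc=com"
  groupdn   = "ou=Groups,dc=example,dc=com"
}

# Guardrail: a default permission set scoped to the namespace.
resource "vault_policy" "team_default" {
  namespace = vault_namespace.product.path
  name      = "team-default"
  policy    = <<-EOT
    path "secret/data/*" {
      capabilities = ["create", "read", "update", "delete", "list"]
    }
  EOT
}

# Map the team's existing group to that default policy.
resource "vault_ldap_auth_backend_group" "team" {
  namespace = vault_namespace.product.path
  backend   = vault_ldap_auth_backend.ldap.path
  groupname = var.ldap_group
  policies  = [vault_policy.team_default.name]
}
```

Once a pull request with these inputs is approved, a CI/CD job running `terraform apply` is all it takes to stand up the team's namespace.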

»Secure Accessibility of Secrets Across the Globe 

Let's look at the major challenge that we have to solve: secure accessibility of secrets across the globe. We have chosen Vault for managing our secrets. If we have to deliver secrets to products running at different locations, we have to make Vault widely accessible.

Further, we have coupled Vault with CloudFlare's Bring Your Own IP (BYOIP). Let's say we own an IP prefix; we can go to CloudFlare and say, "CloudFlare, please advertise it across your edge locations."

CloudFlare announces it across all of its edge locations — over 200 of them — so products and customers operating globally can reach the IP prefix. We have seen lately that a lot of customers who need to enable outbound traffic for their applications are adopting Bring Your Own IP.

One advantage with that is that customer firewall teams don't need to manage the whitelisting of multiple IPs. If you're working in a controlled environment, the firewall controls are quite complex. If you need to change something, it's quite cumbersome and takes a lot of time. This concept of whitelisting one prefix for a set of services is growing rapidly nowadays.

»Architecture 

Now let's look at the architecture — how we have solved it. On the right side, you see the Vault offering that we have; it is protected by CloudFlare and advertised across the CloudFlare edge network. The products running at the edge — at different customer sites — can access Vault through this CloudFlare network and retrieve the secrets they need for their communication.

In addition, the digital platforms we operate in the cloud, as well as the Roche digital solutions, can also talk to the CloudFlare network and access the secrets they need for their communication. This has helped us solve our first challenge: accessing secrets globally from different geographical locations and from products on different customer sites as well.
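
To illustrate, a consumer pointed at that globally advertised address could look like the following Terraform sketch. In practice the products call the Vault HTTP API directly; the hostname, namespace, and secret path here are hypothetical:

```hcl
# Illustrative client configuration: the same Vault address works from
# any customer site or cloud, because it resolves to the BYOIP prefix
# advertised across CloudFlare's edge network. Hostname is hypothetical.
provider "vault" {
  address   = "https://vault.example-roche.com"
  namespace = "diagnostics-product-a" # the team's isolated namespace
}

# Read a credential (e.g., for pulling Docker images) from a KV v2 engine.
data "vault_kv_secret_v2" "registry_creds" {
  mount = "secret"
  name  = "docker-registry"
}

output "registry_username" {
  value     = data.vault_kv_secret_v2.registry_creds.data["username"]
  sensitive = true
}
```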

»Our Vault Usage So Far 

We rolled out Vault ten months back. We have onboarded 60 different product teams from different sites. More than 600 users are accessing our Vault offering to manage and control their secrets. Being a global company, product teams at more than five different sites are controlling and managing their secrets using our Vault offering.

On the right-hand side, you see the secrets engines that we use. These are the common secrets engines our teams are using. Here again, I have to stress the namespace concept. Why? Because we have a diverse product range. The teams working with cloud and edge products heavily use cloud-based secrets engines, like the AWS and Azure secrets engines.

Then there are teams who work with web-based products. They use a key-value secrets engine. The namespaces have given us this flexibility to use the secrets engine each product needs. Also, if you listened to Armon's talk this morning, a lot of the secrets engines are dynamic in nature.

Each secret we generate has a time to live — and when it expires, Vault can automatically rotate it and give out a new secret. With this, we can continuously update our credentials. This solves our second challenge, which is updating secrets frequently to improve our security standing.
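
As a sketch of what that looks like — assuming an AWS secrets engine in a team's namespace, with illustrative values rather than the production setup — a short default TTL means every credential a product reads expires and gets replaced automatically:

```hcl
# Hypothetical dynamic secrets engine: each read of aws/creds/device-uploader
# returns fresh IAM credentials that Vault revokes when the TTL expires.
resource "vault_aws_secret_backend" "aws" {
  namespace                 = "diagnostics-product-a" # illustrative
  path                      = "aws"
  region                    = "eu-central-1"
  default_lease_ttl_seconds = 3600 # credentials live for 1 hour
  max_lease_ttl_seconds     = 14400
}

# Role limiting what the generated credentials are allowed to do.
resource "vault_aws_secret_backend_role" "uploader" {
  namespace       = "diagnostics-product-a"
  backend         = vault_aws_secret_backend.aws.path
  name            = "device-uploader"
  credential_type = "iam_user"
  policy_document = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:PutObject", "s3:GetObject"]
      Resource = "arn:aws:s3:::example-device-data/*" # hypothetical bucket
    }]
  })
}
```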

In addition to that, HashiCorp Vault offers a rich UI, so customer support engineers can log into the UI and — based on their access privileges — see the secrets they need to troubleshoot the devices.

Over to you, Leandro.

Leandro:

Thanks, Harsha. That was great. Now we are left with one more challenge that we wanted to talk about: what's in the pipeline for us in the future? Spoiler alert — it's in the name of the slide.

»Multi-Cloud Access with HashiCorp Boundary 

We have devices in multiple locations, but at the same time, we have engineers in multiple locations. We also outsource development services to a lot of different companies — depending on the business area, that can change as well.

So, how do we make sure that we give people access to the platforms they need to manage, securely? Our initial approach here was custom development. The challenge was that, as you can see, we have external business partners and internal developers. We leverage pretty much all the major cloud providers, as well as our own infrastructure services. We have datacenters across the globe too. That was a lot of overhead for us.

We first attempted using bastion hosts to control access to all the cloud resources. The challenge here was that we had to configure identity and access management for each one of those manually — and then maintain them.

That means we have people dedicated to — instead of managing the platforms — managing the access that allows us to manage the platforms. That can lead to a lot of errors — and, like I said, a lot of overhead. So, we are working very closely with our partners from HashiCorp to leverage Boundary to manage all of that multi-cloud access for us.
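
To give a flavor of where that is heading, here is a hypothetical sketch using the Boundary Terraform provider — not the actual deployment, which is still in progress. The scope IDs, addresses, and names are made up:

```hcl
# Hypothetical Boundary setup: one project scope, one host, one TCP target.
# Engineers connect to the target; Boundary brokers the session, so no
# per-host IAM or bastion configuration has to be maintained.

resource "boundary_scope" "project" {
  name                     = "cloud-infrastructure"
  scope_id                 = "o_1234567890" # parent org scope (made up)
  auto_create_admin_role   = true
  auto_create_default_role = true
}

resource "boundary_host_catalog_static" "servers" {
  name     = "managed-servers"
  scope_id = boundary_scope.project.id
}

resource "boundary_host_static" "app_server" {
  name            = "app-server-1"
  host_catalog_id = boundary_host_catalog_static.servers.id
  address         = "10.0.1.10"
}

resource "boundary_host_set_static" "app_servers" {
  name            = "app-servers"
  host_catalog_id = boundary_host_catalog_static.servers.id
  host_ids        = [boundary_host_static.app_server.id]
}

resource "boundary_target" "ssh" {
  name            = "app-server-ssh"
  type            = "tcp"
  scope_id        = boundary_scope.project.id
  default_port    = 22
  host_source_ids = [boundary_host_set_static.app_servers.id]
}
```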

»Conclusion 

To conclude with what we just went through: First, Vault's secrets management engines and capabilities have helped us seamlessly integrate our multiple products. Then, the namespace concept offers us the flexibility to support diverse product teams. The way we usually refer to it is as your own mini Vault. Once we create it for you, we are out of the way, and people can manage it on their own — they don't need us. We just maintain an enterprise-level configuration, so we can enforce certain policies, for example.

By coupling Vault with CloudFlare, we were able to solve the challenge Harsha was talking about: how do we get those secrets everywhere around the world? And finally, with Boundary, we want to improve access management for cloud and on-prem environments at the same time.

We thank you for spending the time to be here with us today — except the people from our team; they are obligated to be here. We had a great time, and we hope you have as well. You're welcome to connect with us off stage. If you have any questions, or if you want to share how you tackle similar challenges — we'll probably steal your ideas if they're better — please feel free to connect with us.

Thank you very much.
