From AI to the edge, HashiCorp Co-Founder and CTO Armon Dadgar shares his insights on where the cloud is headed, and what that means.
At our user conference, HashiConf, I shared some thoughts in my keynote about the infrastructure trends we’re seeing among our customers and in the industry at large. I wanted to share those here as well and invite you to join me in the discussion about where the cloud is headed, and what that means.
Let’s dive in.
As the growth of edge infrastructure continues to fragment the landscape, there will be more need for solutions that pragmatically simplify management across platforms.
Defining “edge computing” precisely is a challenge, and you’ll get a wide range of perspectives on what the term means. But it’s undeniable that these approaches are growing in adoption and importance.
One example is a company like The Home Depot providing thousands of store associates with connected devices that enable better customer service, turning every store into its own datacenter by deploying applications on site. Another example is self-driving cars, which require a complex array of software and hardware to make decisions in milliseconds without depending on internet connectivity. In both cases, edge computing completely changes how teams think about infrastructure orchestration.
Meanwhile, hyperscalers like AWS, Microsoft Azure, Google Cloud, and Alibaba Cloud continue to grow, as do specialized vendors that cater to specific verticals or data sovereignty concerns. And then there’s “near cloud” infrastructure from Fastly, Cloudflare, and others that can provide enhanced compute services for use cases that require low latency, like multiplayer video gaming.
This explosion of choice is good news for businesses. Now, they can adopt the platforms that work best for their specific needs, spreading applications across several providers to build more resilient operations and minimize the impact of any single outage.
But this change brings hurdles as well as benefits. In the past, customers worried about being locked into a single vendor. Now, they worry about how to unify management across many different providers. Meanwhile, the growing number of regulations around data sovereignty and privacy, which require businesses to keep customer information within a certain geographic boundary, is further fueling that fragmentation in infrastructure.
For multi-cloud and edge environments to work, companies need tooling that provides consistency and simplicity. They need to be able to connect and manage the different ecosystems together to have a holistic approach to their infrastructure.
Most importantly, they need to move away from manual processes. Humans could effectively manage enterprise infrastructure when it was a handful of data centers, but the explosion of public cloud and the edge is outpacing their ability to keep up. To manage that growth, businesses need standardization and automation.
As the number of endpoints, devices, and applications grows, traditional perimeter security models no longer work, driving the need to embrace an identity-centric approach.
MGM Resorts, Sony, Volkswagen — it seems every day a new cyberattack dominates the headlines.
Hackers are getting smarter in their tactics, while there are more opportunities than ever to strike. Attack surfaces are growing rapidly. Employees are using more devices to do their jobs. There are now digital applications underpinning everything a business does. Critical data is spread across different storage platforms.
In this new landscape, legacy security methods are no longer sufficient. The so-called “perimeter model” is dead because in the cloud there is no perimeter.
Now, companies must move to an identity-first approach, where every user, workload, and device is constantly verified to ensure secure access — what we call a zero-trust architecture. But organizations need to acknowledge that adopting zero trust is a multi-year transformation that impacts their people, process, and tooling.
In the past, developers took for granted that passwords could be stored in plaintext and rarely changed, confidential data was often stored unencrypted, and employees were given broad access to systems they didn’t need access to.
All that needs to change in the new zero trust environment. Secrets and data need to be tightly managed to ensure they are confidential, well protected, and accessible only by those who have a clear business need.
The shift to multi-cloud and hybrid architectures increases the surface area and complexity of the network, making it difficult for operators and security teams to debug problems and secure access.
When it comes to networking approaches, software has replaced the use of physical hardware for routing and middleware layers, like firewalls, load balancers, and API gateways. Often, security teams are scrambling to figure out how to manage distributed networks, where traffic flow is much more complex than in the past.
They’re deploying tools like software-defined networks, network overlays, and service meshes to help manage access and reduce risk. However, those approaches aren’t uniform across the business. Companies are often using more than one of these new methods, alongside traditional approaches like firewalls and switching layers.
There’s been no consensus in the industry on how to solve these problems. Meanwhile, application and platform teams are addressing them in silos, further fragmenting an already splintered environment and making work more complicated for the operators and security teams tasked with debugging applications and securing endpoints.
As with many other challenges, organizations need to drive a standardized approach and automate network management, which will help reduce the overall complexity and simplify the process of securing those networks. The alternative is often an ad hoc mix of technologies and manual processes that hinder the speed and security of infrastructure.
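One concrete form that standardization can take is a deny-by-default allowlist of service-to-service connections, in the spirit of a service mesh's declared intentions. This is a toy sketch with illustrative service names, not a real mesh configuration:

```python
# Explicitly declared (source, destination) service pairs that may talk.
# Anything not listed here is denied by default.
INTENTIONS = {
    ("web", "api"),
    ("api", "db"),
}


def is_allowed(source: str, destination: str) -> bool:
    """Deny by default; permit only explicitly declared service pairs."""
    return (source, destination) in INTENTIONS
```

Keeping a single declarative list like this, rather than per-team firewall rules, gives operators and security teams one place to audit who can reach what.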
Given the demands on developer productivity, there is a high level of interest in internal developer platforms that abstract the complexity of infrastructure and allow application teams to focus on delivering business value.
As infrastructure gets more complex, developers increasingly want to bypass it altogether. That’s why internal developer platforms that help standardize the development process within enterprises — and the platform teams that build and run them — are becoming popular.
Platform teams are able to define a set of golden patterns and golden workflows within an organization and package them so developers don’t have to worry about provisioning infrastructure. Templates that represent best practices are reused to provide consistency and ensure companies are complying with applicable security and compliance controls. This way, development timelines are accelerated without the platform team giving up any control over the underlying processes.
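A golden pattern can be as simple as a vetted template whose only free variables are the ones the platform team chooses to expose. The sketch below renders a pseudo-HCL module call from a template, rejecting inputs outside an approved set; the module source and size options are made up for illustration:

```python
from string import Template

# A vetted "golden" template: developers fill in a name and a size,
# everything else (module source, structure) is fixed by the platform team.
GOLDEN_TEMPLATE = Template(
    'module "$name" {\n'
    '  source = "app-service/golden"\n'
    '  size   = "$size"\n'
    "}\n"
)

ALLOWED_SIZES = {"small", "medium"}  # hypothetical approved options


def render_service(name: str, size: str = "small") -> str:
    """Render a service definition, allowing only platform-approved sizes."""
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size {size!r} is not an approved option")
    return GOLDEN_TEMPLATE.substitute(name=name, size=size)
```

Developers get self-service provisioning in one call, while the platform team keeps control of the underlying module and the set of allowed knobs.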
But the rise of these self-service portals presents new issues. Modern applications are increasingly a mix of different cloud services stitched together. That means most systems can’t be built or tinkered with in closed environments.
Instead, companies need to be able to supplement local development with isolated, short-term cloud environments for testing. Businesses also need a way to seamlessly manage all those components so troubleshooting becomes easier.
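A key property of those short-term environments is that they clean themselves up. Here's a minimal sketch of a manager that gives each environment a TTL and reaps expired ones; the environment names and clock handling are illustrative only:

```python
class EphemeralEnvManager:
    """Tracks short-lived test environments and reaps them when their TTL expires."""

    def __init__(self):
        self.envs: dict[str, float] = {}  # name -> expiry timestamp

    def create(self, name: str, ttl_seconds: float, now: float) -> None:
        """Register an environment that should exist only until now + ttl."""
        self.envs[name] = now + ttl_seconds

    def reap(self, now: float) -> list[str]:
        """Destroy every environment whose TTL has elapsed; return their names."""
        expired = [name for name, expiry in self.envs.items() if now >= expiry]
        for name in expired:
            del self.envs[name]  # in real life: tear down the cloud resources
        return expired
```

Tying every test environment to a TTL at creation time means forgotten environments stop accumulating cost and attack surface on their own.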
Managing this complexity and unlocking productivity for developers and operators elevates the need for automation. Successful infrastructure management will use GenAI to assist, but not drive, this new automation.
Faced with all this new complexity, businesses are struggling to find the right skills to help them manage it. At the same time, they are trying to move faster than ever to deploy next-generation, AI-powered applications.
That’s one reason why there’s been such rapid adoption of infrastructure as code. IaC brings consistency to infrastructure provisioning and management workflows, so operations teams know what to expect and can more easily pinpoint when something goes wrong. Greater use of self-service portals will also help reduce the burden on engineering teams of becoming experts in many different systems and tools.
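The core of that consistency is the plan step common to IaC tools: diff the declared desired state against what actually exists, and derive the actions needed. Here's a bare-bones sketch of that idea (the resource names and attributes are invented):

```python
def plan(desired: dict, actual: dict) -> dict:
    """Compute create/update/delete actions from desired vs. actual state,
    in the spirit of an IaC tool's plan step."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(
            name for name in desired
            if name in actual and desired[name] != actual[name]
        ),
    }
```

Because the plan is computed before anything changes, operators can review exactly what will happen — the predictability that makes drift easy to spot.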
But the massive increase in the volume of code that will soon be driven by non-experts puts new pressure on security and compliance. This will be exacerbated by new GenAI tools that allow users to auto-generate their infrastructure.
To address the new scale of infrastructure consumption, companies need a centralized approach to their infrastructure to help security teams monitor and more easily locate vulnerabilities, and to allow governance teams to enforce policies and controls.
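Enforcing those policies and controls centrally is often done as policy as code: every proposed resource is checked against a shared rule set before it ships. The rules below are hypothetical examples, not a real policy engine's syntax:

```python
def check_policies(resource: dict) -> list[str]:
    """Return a list of policy violations for a proposed resource.
    An empty list means the resource is compliant. Rules are illustrative."""
    violations = []
    if not resource.get("tags", {}).get("owner"):
        violations.append("missing owner tag")
    if resource.get("public", False):
        violations.append("public exposure not allowed")
    if resource.get("encrypted") is not True:
        violations.append("encryption at rest required")
    return violations
```

Running a check like this in the provisioning pipeline lets governance teams enforce controls uniformly, no matter which team — or which GenAI tool — wrote the code.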
This is where AI can help — but not replace — developers and operations teams. One example might be AI surfacing issues automatically, so human operators can resolve them with specific expertise. Problems are resolved faster, but safely, and teams can spend more time building the applications that delight customers and push the business forward.
The Tao of HashiCorp has guided the way we’ve built software for more than a decade. Reflecting on it today, it remains central to how we think about the future of our products. While there have been many technology advancements since our inception, the core workflows of building and developing applications remain much the same, and HashiCorp remains committed to improving those workflows through automation.
At HashiConf, our team announced many new products and features built for these trends. As complexity and automation proliferate, our products will help users increase development velocity, reduce cloud costs, enforce policies and compliance, and limit their security exposure. There’s still more work to do, and I can’t wait to build the future with our team.
Thanks to all the customers, partners, and community members who joined us at HashiConf. For those who couldn’t make it, catch up on all the news on our blog: Infrastructure and security releases open HashiConf 2023.