
AI is making developers faster, but at a cost

The 2024 DORA report on AI coding assistance tools shows an increase in code review speed but a drop in delivery stability. What is the cause and what is the solution?

AI coding assistance tools are accelerating software development, but there’s a catch.

According to Google’s 2024 DORA report, teams that have adopted AI report a 3.4% increase in code quality and a 3.1% increase in code review speed. These gains are fairly small, but could grow as AI coding tools continue to mature.

The downside is that these same teams also report a 7.2% reduction in delivery stability and a 1.5% reduction in delivery throughput.

There are many factors at play, as the DORA report explains, and more work to be done to understand the full effect of AI on developers. However, this report suggests that some developers are accelerating code reviews and improving quality checks at the cost of delivery stability.

Why is this happening?

This post explores some possible reasons behind the DORA results and offers guardrail strategies that can help recover the delivery stability and throughput lost when adopting AI coding tools.

»Potential causes of delivery instability and the risks

While AI may generate code with an understanding of the code's immediate context, it lacks awareness of the broader system architecture and the business logic that lives outside the code it is given.

It’s also important to remember that AI models are trained on historical data. This means they can reinforce old patterns or common misconceptions, which can affect quality, stability, and performance.

Another possibility is that teams aren’t using AI tools in other areas of software delivery, such as testing, infrastructure provisioning, and security.

Security is one of the biggest risk areas for AI-generated code. AI can introduce vulnerabilities such as hardcoded secrets and other insecure coding practices. One Stanford University study found that developers using AI code assistants submitted insecure code more often (as much as 40% more) and that their code was often buggier. In another experiment, researchers constructed 900 prompts from GitHub code snippets, and Copilot returned 2,702 hard-coded secrets; 7.4% of those (approximately 200) were confirmed as real hard-coded secrets that the researchers could identify on GitHub.
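To make the hard-coded secrets risk concrete, here is a minimal Python sketch of the kind of pattern-based scan that can flag suspect lines before they ship. The patterns and sample snippet are illustrative only; production scanners use far larger rule sets plus entropy analysis.

```python
import re

# Illustrative patterns only: a keyword-assignment check and the
# well-known AWS access key ID prefix format.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the lines of source that appear to contain hardcoded secrets."""
    hits = []
    for line in source.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits

# Example input resembling AI-generated code with embedded credentials
snippet = '''
db_password = "s3cr3t-pr0d-pass"
region = "us-east-1"
aws_key = "AKIAIOSFODNN7EXAMPLE"
'''
print(find_hardcoded_secrets(snippet))
```

A check like this run in CI is cheap insurance, but it only catches known shapes; the guardrails discussed below address the problem more systematically.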

While AI promises to help organizations innovate faster than ever before, AI coding assistance tools must be surrounded with effective guardrails. These guardrail systems protect against known drawbacks and enable organizations to narrow the growing delivery stability gap.

»Narrowing the growing stability gap

As cloud adoption has grown, so has the struggle to achieve ROI from cloud investments. Key contributing factors include complexity, lack of standardization, limited visibility, and poor governance, often compounded by organizations being unwilling or unable to modernize their security architectures for the cloud. Adding AI-generated code to the mix without cloud maturity only makes things worse.

The good news is that a lot of the platform engineering best practices and guardrails that come with cloud maturity transformation will also help narrow the delivery stability gap created by AI coding tools.

Some key guardrails that can mitigate the risks and stability dropoffs from AI tools include:

  1. Secure infrastructure modules: When platform teams implement infrastructure as code and policy as code, teams can consistently and safely deploy secure infrastructure across all environments using reusable, secure-by-design modules. Policy as code ensures security and compliance policies are applied to every new build to prevent misconfigurations. This should be applied to all code, whether developed by humans or AI tools.
  2. Centralized secrets management: To reduce the risk of stolen credentials, a platform team must provide a centralized secrets management solution to track and protect keys, encryption, PKI, and identity-based access. This helps prevent poor coding practices that often lead to secrets sprawl and hard-coded secrets. It is critical to ensure AI is complying with these workflows as well.
  3. Centralized visibility and control: To increase visibility for an organization’s entire estate, platform teams must provide a single system of record for infrastructure and security across all cloud environments. This makes it easier to track and manage risk, audit AI-generated code, and make compliance reports for auditors much simpler to generate.
  4. Golden images and workflows: Internal developer platforms become even more transformative when they chain together multiple abstraction layers to automate security and reliability requirements through pre-built, self-service golden machine images, modules, and registries that are approved by relevant stakeholders. Further, these are great tools for training AI engines going forward.
  5. Unified platform: By consolidating the security and infrastructure lifecycle management strategies in this list into a single platform, managed by the platform team, you centralize your data and operations through one tightly integrated set of systems. This drastically simplifies governance and observability while giving your AI tools a complete picture of the operational context for developers’ application code.
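To illustrate the policy-as-code guardrail from item 1, here is a minimal Python sketch of a policy check applied to a resource configuration before deployment. The policy rules, tag names, and resource shape are hypothetical; real systems such as HashiCorp Sentinel or Open Policy Agent evaluate much richer policies against a full deployment plan, regardless of whether the code was written by a human or an AI tool.

```python
# Hypothetical organizational policy: no public access, encryption
# required, and mandatory ownership tags on every storage resource.
REQUIRED_TAGS = {"owner", "cost-center"}

def check_policies(resource: dict) -> list[str]:
    """Return a list of policy violations for a storage resource config."""
    violations = []
    if resource.get("public_access", False):
        violations.append("public access must be disabled")
    if not resource.get("encrypted", False):
        violations.append("encryption at rest is required")
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    return violations

# A non-compliant resource, as an AI assistant might generate it:
bucket = {"name": "app-logs", "public_access": True, "tags": {"owner": "web-team"}}
print(check_policies(bucket))
```

Because the check runs on every build, misconfigurations are blocked consistently instead of depending on a reviewer noticing them.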

»Adapting to the AI-driven future

A new shadow IT movement around AI tool usage is already underway. Developers are using these AI tools, whether their organizations want them to or not. Their careers depend on learning to use these tools.

Organizational leaders can prepare for the growing amount of AI-assisted development by adopting a platform-based approach to software delivery. For more information on this platform-based approach, read our white paper: Do cloud right with The Infrastructure Cloud.
