On the DORA 2025 AI Report - AI Adoption and Use

The DORA State of AI-assisted Software Development report came out recently. It's a massive 142-page report detailing the analysis done by the DORA team, capturing trends and observations from a roughly 5,000-participant study focused on AI adoption and tool use in the software industry. It's long and detailed, so I'll focus on the areas I find interesting and summarize as I go, following the flow of the original report and leaving commentary along the way.

These notes follow my progress in trying to understand this stuff myself. I invite any thoughts or perspectives on the topic.

The original report is available here.

The survey questions are published here.

Foreword

The foreword highlights a decade-long evolution in software development practices, emphasizing Google’s DORA research on DevOps and its recent pivot to address AI's impact. The author is bullish on vibe coding, as evidenced by their upcoming book of the same name, and takes the stance of having seen AI produce extremely positive outcomes, going so far as to call last year's report, which showed a correlation between increased AI use and reduced software stability and throughput, the "2024 DORA anomaly".

Steve and I have seen how using vibe coding can go wrong, resulting in deleted tests, outages, and even deleted code repositories. But we’ve concluded that this was because the engineering instincts that served us well for decades were now proving woefully insufficient.

They blame the issues they've experienced with vibe coding on current engineering instincts being "woefully insufficient". The author comes from the position that this is a paradigm shift that requires new patterns to keep up.

Suppose the fastest you’ve ever traveled is walking at four miles per hour, and someone asks you to drive a car at 50 miles per hour. Without practice and training, you will undoubtedly wreck the car.

I think the broader technical industry has seen its stance on vibe coding evolve over the past year. There remain unresolved issues with surrendering your understanding and letting your solution become a black box[1]. It's possible that the author has solutions that they will reveal in their book.

They support their claim that training and patterns are the issue with two case studies, starting with Adidas -

Fernando Cornago, global vice-president, Digital and E-Commerce Technology, Adidas, oversees nearly a thousand developers. In their generative AI (gen AI) pilot, they found that teams who worked in loosely coupled architectures and had fast feedback loops “experienced productivity gains of 20% to 30%, as measured by increases in commits, pull requests, and overall feature-delivery velocity,” and had a “50% increase in ‘Happy Time’”— more hands-on coding and less administrative toil.

And Booking.com -

We also appreciated the case study from Bruno Passos, group product manager, Developer Experience, Booking.com, which has a team of more than 3,000 developers. In their gen AI innovation efforts, they found that, “developer uptake of vibe coding and coding assistant tools was uneven ... Bruno’s team soon realized the missing ingredient was training. When developers learned how to give their coding assistant more explicit instructions and more effective context, they found up to 30% increases in merge requests and higher job satisfaction.”

They conclude by pointing out that this report includes data from 5,000 participants, and aims to uncover groundbreaking insights similar to past DevOps breakthroughs.

AI adoption and use

The report defines AI adoption as the intersection of reliance, trust, and reflexive use, and the survey questions were tweaked to measure those key facets. The results show that AI has seen overwhelming adoption: 90% of respondents say they use AI at work in some capacity. It's worth noting that this is broadly in line with the 84% reported in the Stack Overflow developer survey[2].
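As a rough back-of-envelope check (a sketch assuming simple random sampling at a 95% confidence level, which neither survey strictly satisfies), the sampling margin of error on a proportion of 90% from roughly 5,000 responses works out to well under one percentage point, so the gap between the two figures more plausibly reflects differences in survey population and question wording than statistical noise:

$$\mathrm{MOE}_{95\%} \approx z\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} = 1.96\sqrt{\frac{0.90 \times 0.10}{5000}} \approx 0.0083 \approx 0.8\ \text{percentage points}$$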

Unfortunately, my excitement is somewhat tempered by the AI tool use mandates that have become more common[3]. It's difficult to say how much of this adoption is purely organic and how much is the result of mandates pushing for greater uptake.

The next section points out that, in aggregate, 60% of users report reflexively using AI about half the time or more.

Although AI use is nearly ubiquitous in our sample, reflexive use—the default employment of AI when facing a problem—is not. Among AI users, only 7% report “always” using AI when faced with a problem to solve or a task to complete, while 39% only “sometimes” seek AI for help. Still, a full 60% of AI users in our survey employ AI “about half the time” or more when encountering a problem to solve or task to complete, suggesting that AI has become a frequent part of the development process.

Perception of productivity

Some of the stats focus on perception. Respondents report a perception of increased productivity and code quality -

More than 80% of this year’s survey respondents report a perception that AI has increased their productivity. Although more than 40% report that their productivity has increased only “slightly,” fewer than 10% of respondents perceive AI contributing to any decrease in their productivity.

In addition to perceiving positive impacts on their productivity, a majority (59%) of survey respondents also observe that AI has positively impacted their code quality. 31% perceive this increase to be only “slight” and another 30% observe neither positive nor negative impacts. However, just 10% of respondents perceive any negative impacts on their code quality as a result of AI use.

While the data here is interesting, I think its weakness is that it primarily relies on self-reported data, which makes it difficult to establish a causal relationship to the tools. Do developers feel more productive because the tools are actually making them more productive, or is there an illusion of productivity because they are typing messages to an LLM? The METR study[4] that came out this year made an attempt to measure this.

Surprisingly, we find that when developers use AI tools, they take 19% longer than without—AI makes them slower [4:1]

There is, of course, nuance to it, but I'd argue it's enough evidence to cast at least some doubt on self-reported metrics in this context[5]. Speaking anecdotally, whether or not I see gains from AI depends on the task, how much the requirements have been figured out, how much of the solution is "cookie cutter", and so on. What I have found it consistently does is save me from typing as much. Anecdotes published by others seem to mirror my experience[6][7], but they also point out some of the dangers that come from not understanding that nuance -

These claims wouldn't matter if the topic weren't so deadly serious. Tech leaders everywhere are buying into the FOMO, convinced their competitors are getting massive gains they're missing out on. This drives them to rebrand as AI-First companies, justify layoffs with newfound productivity narratives, and lowball developer salaries under the assumption that AI has fundamentally changed the value equation.[6:1]

Trust

Overall, 46% of developers "somewhat" trust AI-generated output, 20% say "a lot", and 4% say "a great deal". There isn't a direct analog in the Stack Overflow survey; however, there is a section on the "Accuracy of AI tools" that we can use as a reference -

More developers actively distrust the accuracy of AI tools (46%) than trust it (33%), and only a fraction (3%) report "highly trusting" the output. Experienced developers are the most cautious, with the lowest "highly trust" rate (2.6%) and the highest "highly distrust" rate (20%), indicating a widespread need for human verification for those in roles with accountability.[2:1]

The results line up quite well, and both point to frustration with the reliability of the tools. The DORA team offers some advice on building trust in AI[8] -

Importantly, developers who trust gen AI more reap more positive productivity benefits from its use. In a logs-based exploration of Google developers’ trust in AI code completion, our EPR team found that developers who frequently accepted suggestions from a gen AI-assisted coding tool submitted more change lists (CLs) and spent less time seeking information than developers who infrequently accepted suggestions from the same tool. This was true even when controlling for confounding factors, including job level, tenure, development type, programming language, and CL count. Put simply, developers who trust gen AI more are more productive.

Something that stands out to me is that this comes from the perspective that using gen AI is a fixed productivity gain. That seems true based on the self-reported data, but it still depends heavily on who you ask[9][10]. The five pieces of advice they offer for increasing trust all seem like good ideas regardless of how you feel about the tech or its adoption.

  1. Establish a policy about acceptable gen AI use, even if your developers are good corporate citizens.

    ... establishing clear guidelines encouraging acceptable use of gen AI will likely also promote cautious and responsible developers to use gen AI, by assuaging fears of unknowingly acting irresponsibly

  2. Double-down on fast high-quality feedback, like code reviews and automated testing, using gen AI as appropriate.

    ... appropriate safeguards assuring them that any errors that may be introduced by gen AI-generated code will be detected before it is deployed to production.

  3. Provide opportunities for developers to gain exposure with gen AI, especially those which support using their preferred programming language.

    Providing opportunities to gain exposure to gen AI, like training, unstructured activities, or slack time devoted to trying gen AI, will help increase trust, especially if such activities can be performed in developers’ preferred programming language in which they are best equipped to evaluate gen AI’s quality

  4. Encourage gen AI use, but don’t force it.

    One approach to encouraging gen AI use in a manner that prioritizes developers’ sense of control is to promote the spread of knowledge organically, by building community structures to foster conversations about gen AI

  5. Help developers think beyond automating their day-to-day work and envision what the future of their role might look like.

    ... without a clear vision for what the transformed role of a developer working at a higher level of abstraction in which these repetitive tasks are delegated to gen AI resembles, it will be hard to assuage fears of unemployment.

I think trust here reflects the fact that there is a learning curve to these tools. How and when you use them can be the difference between good and bad outcomes; however, there is pressure to adopt them whenever and wherever possible, which can backfire and erode trust in the tools.

Conclusion

The report concludes this section by opining that although respondents express concerns about trust in AI-generated code, they report positive impacts to productivity and code quality.

But, whether social pressure is a logical motivation to adopt a new technology is debatable. While our data shows many positive outcomes of AI adoption, we have also documented notable drawbacks.

For this reason, we caution against interpreting these findings of AI’s ubiquity as an indication that all organizations should rapidly move to adopt AI, regardless of their specific needs. Rather, we interpret these findings as a strong signal that everyone engaged in software development—whether an individual contributor, team manager, or executive leader— should think deeply about whether, where, and how AI can and should be applied in their work.

They note that their data also points to considerable drawbacks when exploring the impact of AI adoption, most notably that higher rates of AI adoption predict increased software delivery instability and developer burnout.

Overall, I think this is a great report, but one with deep nuance, and it is hurt by the fact that there are a lot of mixed signals coming from different sources that often show opposing data. It acknowledges a lot of things and behaviors we don't yet understand, but it also makes sweeping generalizations about productivity that the reader must interpret with that nuance in mind. If you haven't already, I recommend taking the time to read the report.


  1. Goel N. (2025) Karpathy’s ‘vibe coding’ movement considered harmful. nmn.gl. Available at: https://nmn.gl/blog/dangers-vibe-coding (Accessed: 2025-10-21). ↩︎

  2. Stack Overflow (2025) 2025 Stack Overflow Developer Survey. survey.stackoverflow.co. Available at: https://survey.stackoverflow.co/2025/ai (Accessed: 2025-10-21). ↩︎ ↩︎

  3. (no date) www.reddit.com. Available at: https://www.reddit.com/r/ExperiencedDevs/comments/1j7aqsx/ai_coding_mandates_at_work/ (Accessed: 2025-10-21). ↩︎

  4. METR. (2025) Measuring the impact of early-2025 AI on experienced open-source developer productivity. Available at: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/ (Accessed: 2025-10-21). ↩︎ ↩︎

  5. Reading further it looks like they agree - "These mixed signals indicate to us that more evidence-based work should be done to evaluate the true impact of AI on product development, especially given the sheer scale of AI investment and adoption. We believe that the developer community and employers should be setting realistic expectations, and gaining a clear perspective on AI’s actual impact is the first step toward managing those expectations responsibly". ↩︎

  6. Judge, M. (2025) Where's the shovelware? Why AI coding claims don't add up. mikelovesrobots.substack.com. Available at: https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding (Accessed: 2025-10-21). ↩︎ ↩︎

  7. (no date) Where's the shovelware? Why AI coding claims don't add up. news.ycombinator.com. Available at: https://news.ycombinator.com/item?id=45120517 (Accessed: 2025-10-21). ↩︎

  8. Storer, KM. et al. (no date) Fostering trust in AI. dora.dev. Available at: https://dora.dev/research/ai/trust-in-ai/ (Accessed: 2025-10-21). ↩︎

  9. Kapani, C. (2025) AI coding assistants aren’t really making devs feel more productive. leaddev.com. Available at: https://leaddev.com/velocity/ai-coding-assistants-arent-really-making-devs-feel-more-productive (Accessed: 2025-10-21). ↩︎

  10. (no date) www.reddit.com. Available at: https://www.reddit.com/r/ExperiencedDevs/comments/1lml3ti/did_ai_increase_productivity_in_your_company/ (Accessed: 2025-10-21). ↩︎
