<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Steve Fenton</title>
    <description>The latest articles on DEV Community by Steve Fenton (@_steve_fenton_).</description>
    <link>https://dev.to/_steve_fenton_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2913455%2F532f94ec-d896-44e0-ab98-a1cae91a8278.jpg</url>
      <title>DEV Community: Steve Fenton</title>
      <link>https://dev.to/_steve_fenton_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_steve_fenton_"/>
    <language>en</language>
    <item>
      <title>Developer Productivity in the Age of AI: Why Your Past Predicts Your Future</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 20 Apr 2026 11:57:29 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/developer-productivity-in-the-age-of-ai-why-your-past-predicts-your-future-i84</link>
      <guid>https://dev.to/_steve_fenton_/developer-productivity-in-the-age-of-ai-why-your-past-predicts-your-future-i84</guid>
      <description>&lt;p&gt;You're looking at a list of things you'd love to do, and you're looking at AI coding tools as a way to boost your way down that list. You might not have the relationships mapped out, but you can see there is some route to value if you spend on LLMs that speed up code.&lt;/p&gt;

&lt;p&gt;You're now in the developer productivity game.&lt;/p&gt;

&lt;h2&gt;
  
  
  The idea behind developer productivity
&lt;/h2&gt;

&lt;p&gt;The roots of developer productivity are straightforward. Some smart engineering managers figured out that a small team of developers with the best machines, screens, and development tools could generate value at a rate and quality that far exceeded their "head count". You could also supply all these upgrades to developers at a cost way below the fully loaded cost of 1 more developer.&lt;/p&gt;

&lt;p&gt;The return on investment for this approach was incredible, but traditional engineering managers didn't understand it. They thought developers were asking for more screens because it made them look more important. This emerged from organizations that rewarded managers for empire-building by granting them larger offices with better views.&lt;/p&gt;

&lt;p&gt;I'm a big fan of Ron Westrum's Typology of Organizational Cultures. For this post, though, we'll keep things simple and refer to traditional thinking (keep equipment costs low) and modern thinking (provide high-quality tools).&lt;/p&gt;

&lt;p&gt;We have never shaken off this traditional-versus-modern divide over developer productivity. And now, the subject has returned to the spotlight due to AI and, more specifically, LLM-based coding tools. Your organization's past approach to developer productivity will determine whether you can successfully integrate AI tools into your development teams.&lt;/p&gt;

&lt;p&gt;Let's look at why.&lt;/p&gt;

&lt;h2&gt;
  
  
  A tale of two cities
&lt;/h2&gt;

&lt;p&gt;Traditional organizations operate through control. Managers dictate how work is done, choosing the processes and tools workers must use. Instructions flow downward, and managers define efficiency. Workers are evaluated individually against the manager's prescribed methods, rather than by outcomes.&lt;/p&gt;

&lt;p&gt;Modern organizations operate through trust. Teams choose how to work, selecting from available options or proposing new tools when needs emerge. Authority flows to those closest to the work. Performance is a team sport measured by outcomes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Traditional&lt;/th&gt;
&lt;th&gt;Modern&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Operates through control&lt;/td&gt;
&lt;td&gt;Operates through trust&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Efficiency is manager-directed&lt;/td&gt;
&lt;td&gt;Productivity is worker-led&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finds the cheapest tools&lt;/td&gt;
&lt;td&gt;Chooses the best tools for each job&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prefers expanding teams&lt;/td&gt;
&lt;td&gt;Prefers small teams&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tooling is a cost&lt;/td&gt;
&lt;td&gt;Tooling increases value&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance is individual&lt;/td&gt;
&lt;td&gt;Value flows from collaboration&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As we experiment with AI coding tools, we are gaining crucial insights. We are developing a better understanding of how much human oversight is needed to successfully and sustainably deliver high-quality software to users. A strong guiding hand is needed to direct and correct the output of these tools.&lt;/p&gt;

&lt;p&gt;The value of any software you build, by hand or with assistance, comes from the flow of information. That means listening to software users and collaborating internally. The code that gets left behind is only an artifact of a more fundamental learning process. The ability to learn and share knowledge will also benefit teams as they discover how to apply AI coding tools to this process.&lt;/p&gt;

&lt;p&gt;It's also clear that Continuous Delivery and automation remain paramount. In the past, automated linting, security scanning, and tests gave us confidence in the code teams wrote; now, they can provide us with confidence in code generated by LLMs. DORA's &lt;a href="https://dora.dev/ai/" rel="noopener noreferrer"&gt;AI Capabilities Model&lt;/a&gt; includes 7 capabilities essential to successful AI adoption, including user-centric focus, strong version control practices, and working in small batches.&lt;/p&gt;

&lt;p&gt;For organizations that haven't adopted Continuous Delivery, rocky shores lie ahead when they unleash AI tools on their codebases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Return on an unspecified investment
&lt;/h2&gt;

&lt;p&gt;Now here's the fascinating conundrum for anyone trying to calculate a return on investment for AI coding tools. The commercial tools indeed remove many tasks that, as a developer, I don't want to do, though they also introduce new ones. I see why people would like to use them to remove much of the noise and focus on the essential details in the software. The problem is, you run out of credits fast, so if you want to use these tools full-time, you'll need subscription levels that support that.&lt;/p&gt;

&lt;p&gt;Credit exhaustion is the first friction point where traditional organizations will come unstuck. Developers who rely heavily on AI coding tools will slow drastically when credits run out. This will likely become a significant problem over time as developers become more dependent on working at the high level of abstraction that prompting offers.&lt;/p&gt;

&lt;p&gt;Imagine if coding languages had similar limits. You'd run out of Python hours and have to continue your work using assembly language.&lt;/p&gt;

&lt;p&gt;Organizations with a cost focus will challenge developers who want a higher budget for these tools. Any manager who has previously denied more screen real estate is likely to reject higher subscription costs for AI coding tools. Their view in both cases is that the promised productivity isn't real.&lt;/p&gt;

&lt;p&gt;The second hurdle for these commercial tools is the uncertain future pricing. We know some AI companies are burning through investment cash, which means the price we pay is subsidized by their desire for growth. There must be a pivot point at which they begin the search for profitability. This will once again trigger problems in cost-focused organizations.&lt;/p&gt;

&lt;p&gt;Some developers are already thinking ahead and looking for open-source models they can run locally to reduce cost uncertainty, but, as always, you pay one way or another. The time spent assessing, updating, and managing these models is a direct loss of the productivity you're trying to gain.&lt;/p&gt;

&lt;p&gt;One solution may be for commercial vendors to offer fixed-price, unlimited use through local models. The challenger to this solution will come from Platform Engineering or DevEx teams, who could supply a packaged open-source local solution for developers to reduce the overhead of selection and maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The nature of problems changes
&lt;/h2&gt;

&lt;p&gt;Traditional and modern organizations face the same challenges, but you can see that culture fundamentally shapes how they are addressed.&lt;/p&gt;

&lt;p&gt;Modern organizations will judge their return on investment by the value they deliver. Their past investments in Continuous Delivery will provide a solid foundation for them to experiment with new tools, and they'll creatively address the cost issues associated with AI coding tools.&lt;/p&gt;

&lt;p&gt;Traditional organizations will seek to minimize costs, avoid investing in automated pipelines, and demand higher developer output with no real basis for expecting it.&lt;/p&gt;

&lt;p&gt;The set of capabilities a modern organization applies to high-throughput, high-quality software delivery is surrounded by subtle, interconnected relationships. For the traditional organizations that just want to "buy AI", the benefits are unlikely to arrive.&lt;/p&gt;

&lt;p&gt;Happy deployments!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devex</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Setting GitHub as a trusted publisher for npm</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 13 Apr 2026 13:36:59 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/setting-github-as-a-trusted-publisher-for-npm-560i</link>
      <guid>https://dev.to/_steve_fenton_/setting-github-as-a-trusted-publisher-for-npm-560i</guid>
      <description>&lt;p&gt;So, stuff happened and &lt;strong&gt;npm&lt;/strong&gt; has been updated to reduce the volume of stuff happening. In a world of SBOMs, SLSA, and supply chain attacks, it's time to get serious about publishing packages. In this case, that means using the new &lt;em&gt;Trusted Publisher&lt;/em&gt; feature to connect GitHub (or GitLab) to &lt;strong&gt;npm&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set the trusted publisher on npm
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to &lt;a href="https://npmjs.com" rel="noopener noreferrer"&gt;npm&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Select the package you want to set up, for example &lt;code&gt;astro-accelerator-utils&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;Settings&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;In the &lt;em&gt;Trusted Publishers&lt;/em&gt; section, select your provider, in my case it's &lt;strong&gt;GitHub&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter your repository information:

&lt;ul&gt;
&lt;li&gt;Organization or user name, for example &lt;code&gt;Steve-Fenton&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Repository name, for example &lt;code&gt;astro-accelerator-utils&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The result should be that &lt;code&gt;Steve-Fenton/astro-accelerator-utils&lt;/code&gt; matches your repo in GitHub&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Provide the workflow file name

&lt;ul&gt;
&lt;li&gt;This should match the workflow that will publish the package, in my case &lt;code&gt;build-astro.yml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The file must be in &lt;code&gt;.github/workflows/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Set up connection&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you use environments, you can optionally limit publishing by environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Check your GitHub Action
&lt;/h2&gt;

&lt;p&gt;In the permissions section of your workflow, you need to allow the &lt;code&gt;id-token&lt;/code&gt; to be written, so the workflow can request the OIDC token used to authenticate with npm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then use the &lt;code&gt;npm publish&lt;/code&gt; step in your workflow.&lt;/p&gt;
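For orientation, here is a minimal sketch of what the surrounding workflow might look like. Treat the trigger, action versions, and Node version as assumptions to adapt; the file name must match the one you registered on npm (in my case `build-astro.yml`), and trusted publishing needs a reasonably recent npm CLI, so check your npm version if the publish fails.

```yaml
# .github/workflows/build-astro.yml — name must match the workflow set on npm
name: Publish package

on:
  push:
    branches:
      - main

permissions:
  id-token: write   # required so the workflow can request an OIDC token
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish --access public
```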

&lt;p&gt;I conditionally publish based on the version number, so the workflow only publishes when the local version is higher than the published one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish if version has been updated&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;NPM_AUTH_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.NPM_AUTH_TOKEN }}&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;PACKAGE_NAME=$(node -p "require('./package.json').name")&lt;/span&gt;
    &lt;span class="s"&gt;LOCAL_VERSION=$(node -p "require('./package.json').version")&lt;/span&gt;
    &lt;span class="s"&gt;REMOTE_VERSION=$(npm view $PACKAGE_NAME version || echo "0.0.0")&lt;/span&gt;

    &lt;span class="s"&gt;if [ "$LOCAL_VERSION" != "$REMOTE_VERSION" ] &amp;amp;&amp;amp; [ "$(printf '%s\n%s' "$REMOTE_VERSION" "$LOCAL_VERSION" | sort -V | tail -n1)" = "$LOCAL_VERSION" ]; then&lt;/span&gt;
      &lt;span class="s"&gt;echo "Local version $LOCAL_VERSION is higher than remote version $REMOTE_VERSION. Publishing..."&lt;/span&gt;
      &lt;span class="s"&gt;echo "//registry.npmjs.org/:_authToken=$NPM_AUTH_TOKEN" &amp;gt; ~/.npmrc&lt;/span&gt;
      &lt;span class="s"&gt;npm publish --access public&lt;/span&gt;
    &lt;span class="s"&gt;else&lt;/span&gt;
      &lt;span class="s"&gt;echo "Version $LOCAL_VERSION is not newer than $REMOTE_VERSION. Skipping publish."&lt;/span&gt;
    &lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
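The publish condition hinges on `sort -V`, which orders version strings numerically segment by segment rather than alphabetically. A quick standalone way to see the comparison logic (the version values here are made up for illustration):

```shell
# Pick the highest of two semver strings; sort -V puts the highest last.
LOCAL_VERSION="1.10.0"
REMOTE_VERSION="1.9.2"

HIGHEST=$(printf '%s\n%s' "$REMOTE_VERSION" "$LOCAL_VERSION" | sort -V | tail -n 1)

# Publish only when the local version is both different and the highest.
if [ "$LOCAL_VERSION" != "$REMOTE_VERSION" ] && [ "$HIGHEST" = "$LOCAL_VERSION" ]; then
  echo "publish"
else
  echo "skip"
fi
```

Note that a plain alphabetical sort would put `1.9.2` after `1.10.0`, which is why the version-aware sort matters here.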



&lt;p&gt;This is a more secure way to publish npm packages, but it's also easier because you don't need to keep updating tokens and secrets.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>node</category>
      <category>npm</category>
      <category>github</category>
    </item>
    <item>
      <title>Roll up your chair: How one small change sparked a DevOps revolution</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 31 Mar 2026 09:22:47 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/roll-up-your-chair-how-one-small-change-sparked-a-devops-revolution-33p4</link>
      <guid>https://dev.to/_steve_fenton_/roll-up-your-chair-how-one-small-change-sparked-a-devops-revolution-33p4</guid>
      <description>&lt;p&gt;My first encounter with DevOps was so simple that I didn’t even realize its power. Let me share the story so you can see how it went from accidental discovery to deliberate practice, and why it was such a dramatic pivot.&lt;/p&gt;

&lt;p&gt;The backdrop to this pivotal moment was a software delivery setup you might find anywhere. The development team built software in a reasonably iterative and incremental fashion. About once a month, the developers created a gold copy and passed it to the ops team.&lt;/p&gt;

&lt;p&gt;The ops team installed the software on our office instance (we drank our own champagne). After two weeks of smooth running, they promoted the version to customer instances.&lt;/p&gt;

&lt;p&gt;It wasn’t a perfect process, but it benefited from muscle memory, so there wasn’t an urgent imperative to change it. The realization that a change was needed came from the first DevOps moment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The unplanned first moment
&lt;/h2&gt;

&lt;p&gt;When the ops team deployed the new version, they would review the logs to see if anything interesting or unexpected popped up as a result of the deployment. If they found something, they couldn’t get a quick answer, and it sometimes meant they opted to roll back rather than wait.&lt;/p&gt;

&lt;p&gt;This was a comic-strip situation because the development team was a few meters away in their team room. It’s incredible how something as simple as a door transforms co-located teams into remote workers.&lt;/p&gt;

&lt;p&gt;The ops team raised their request through official channels, and the developers didn’t even know they were causing more work and stress because the ticket hadn’t reached them yet.&lt;/p&gt;

&lt;p&gt;Thankfully, one of the ops team members highlighted this. The next time they started a deployment, a developer was paired with them to watch the logs. A low-fi solution and not one you’d think much about. That developer was me. For this post, we’ll call my ops team partner “Tony”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shared surprises lead to learning
&lt;/h2&gt;

&lt;p&gt;The day-one experience of this new collaborative process didn’t seem groundbreaking. When a log message popped up that surprised Tony, it surprised me too. The messages weren’t any more helpful to a developer than they were to the ops team.&lt;/p&gt;

&lt;p&gt;I could think through what might be happening, talk it through, and then Tony and I would come up with a theory. We’d test the theory by trying to make another similar log message appear. Then we’d scratch our heads and try to decide whether this could wait for a fix or warranted a rollback.&lt;/p&gt;

&lt;p&gt;The plan to bring people from the two teams together was intended to remove the massive communication lag, and it did. But further improvements were to come as a side effect, yielding more significant gains.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resolve pain pathways by completing the loop
&lt;/h2&gt;

&lt;p&gt;As a developer, when you generate log messages and then have to interpret them, you’ve completed a pain loop. Pain loops are potent drivers of improvement.&lt;/p&gt;

&lt;p&gt;Most organizations have unresolved pain pathways. That means someone creates pain, like a developer throwing thousands of vague exceptions every minute, and then someone else feels it, like Tony when he’s trying to work out what the log means.&lt;/p&gt;

&lt;p&gt;There are two ways to resolve the pain pathway.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process: You create procedures to bring pain below the threshold and to limit the rate at which it is generated.&lt;/li&gt;
&lt;li&gt;Loops: You connect the pain into a loop, so the person causing the pain feels its signal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I’m the one who gets the electric shock when I press the button, I stop pushing it, even if someone in a white coat instructs me to continue the experiment.&lt;/p&gt;

&lt;p&gt;With the pain loop connected, I realized we should log fewer messages to reduce the scroll and review burden. Instead of needing institutional knowledge of which messages were perpetually present and could therefore be ignored, we could stop logging them.&lt;/p&gt;

&lt;p&gt;The (perhaps asymptotic) goal was to log only the events that required human review, with a toggle that let more verbose logging be generated on demand. Instead of scrolling through a near-infinite list of logs, you’d have a nearly empty view. If a log appeared, it was important enough to warrant your attention.&lt;/p&gt;

&lt;p&gt;The next idea was to improve the information in the log messages. We could identify which customer or user experienced the error and provide context for it. By improving these error messages, we could often identify the bug before we even opened the code, dramatically reducing our investigation time.&lt;/p&gt;

&lt;p&gt;This process evolved into &lt;a href="https://stevefenton.co.uk/blog/2017/11/the-three-fs-of-event-log-monitoring/" rel="noopener noreferrer"&gt;the three Fs of event logging&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create positive spirals with delightful deployments
&lt;/h2&gt;

&lt;p&gt;Another thread that emerged from the simple act of sitting together during deployments was the realization that the deployment process was nasty. We created an installer file, and the ops team would move it to the target server, double-click it, then follow the prompts to configure the instance.&lt;/p&gt;

&lt;p&gt;Having to paste configuration values into the installer was slow and error-prone. We spent a disproportionate amount of time improving this process.&lt;/p&gt;

&lt;p&gt;Admittedly, we were solving this one “inside the box” by improving an individual installation with DIY scripts, a can of lubricating spray, and sticky tape. This didn’t improve the experience of repeating the install across several environments and multiple production instances.&lt;br&gt;
However, I did get to experience the stress of deployments when their probability of success was anything less than “very high”. When deployments weren’t a solved problem, they could damage team reputation, erode trust, and reduce autonomy.&lt;/p&gt;

&lt;p&gt;Failed deployments are the leading cause of organizations working in larger batches. Large batches are a leading cause of failed deployments. This is politely called a negative spiral, and you have to reverse it urgently if you want to survive.&lt;/p&gt;

&lt;h2&gt;
  
  
  At last, a panacea
&lt;/h2&gt;

&lt;p&gt;The act of sitting a developer with an ops team member during deployments isn’t going to solve all your problems. As we scaled from 6 to 30 developers, pursued innovative new directions for our product, and repositioned our offering and pricing, new pain kept emerging. Continuous improvement really is a game of whack-a-mole, and there’s no final state.&lt;/p&gt;

&lt;p&gt;Despite this, the simple act of sitting together, otherwise known as collaboration, caused a chain reaction of beneficial changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sharing goals and pain
&lt;/h3&gt;

&lt;p&gt;When you’re sitting with someone working on the same problem, all the departmental otherness evaporates. You’re just two humans trying to make things work.&lt;/p&gt;

&lt;p&gt;Instead of holding developers accountable for feature throughput and the ops team for stability, we shared a combined goal of high throughput and high stability in software delivery.&lt;/p&gt;

&lt;p&gt;That removed the goal conflict and encouraged us to share and solve common problems together. This also works when you repeat the alignment exercise with other areas, like compliance and finance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Completing the pain loop
&lt;/h3&gt;

&lt;p&gt;The problem with our logging strategy was immediately apparent when one of the people generating the logs had to wade through them. This is a powerful motivator for change.&lt;/p&gt;

&lt;p&gt;Identifying unresolved pain paths and closing the pain loop isn’t a form of punishment; it’s a moment of realization. It’s the reason we should all use the software we build: it highlights the unresolved pain paths we’re burdening our users with.&lt;/p&gt;

&lt;p&gt;Pain loops are crucial to meaningful improvements in software delivery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing the toil
&lt;/h3&gt;

&lt;p&gt;Great developers are experts at automating things. When you expose this skill set to repetitive work, a developer’s instinct is to eliminate the toil.&lt;/p&gt;

&lt;p&gt;For the ops team, the step-by-step deployment checklist was just part of doing business. They were so familiar with the process that it became invisible.&lt;/p&gt;

&lt;p&gt;When we reduced the toil, the ops team was definitely happier, even though we hadn’t solved all the rough edges yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Refining the early ideas
&lt;/h2&gt;

&lt;p&gt;The fully formed ideas didn’t arrive immediately. The rough shapes were polished over time into a set of repeatable and connected DevOps habits.&lt;/p&gt;

&lt;p&gt;The three Fs, incident causation principles, alerting strategy, and monitor selection guidelines graduated into deliberate approaches long after this story.&lt;/p&gt;

&lt;p&gt;I developed an approach to software delivery improvement that used these ideas to address trust issues between developers and the business. By reducing negative signals caused by failed deployments and escaped bugs, we increased trust in the development team, enhanced their reputation, and increased their autonomy.&lt;/p&gt;

&lt;p&gt;We combined these practices with Octopus Deploy for deployment and runbook automation and an observability platform, which meant the team was the first to spot problems rather than users. When there was a problem, it was trivial to fix, and the new version could be rolled out in no time.&lt;/p&gt;

&lt;p&gt;Unlike the original organization, where we increased collaboration between teams, we created fully cross-functional teams that worked together all the time. Every skill required to deliver and operate the software was embedded, minimizing dependencies and the risk of silos, tickets, and bureaucracy.&lt;/p&gt;

&lt;p&gt;These cross-functional teams also proved to be the best way to level up team members.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unicorn portals
&lt;/h2&gt;

&lt;p&gt;You can’t work with a database whizz for long before you start thinking about query performance, maintenance plans, and normalization. You build better software when you develop these skills. You can’t work with an infrastructure expert without learning about failovers, networking, and zero-downtime deployments. You build better software when you develop these skills, too.&lt;/p&gt;

&lt;p&gt;When people say they can’t hire these highly skilled developers, they miss the crucial point. A team designed in this cross-functional style takes new team members and upgrades them into these impossible-to-find unicorns. You may start as a backend developer, a database administrator, or a test analyst, but you grow into a generalizing specialist with many new skills.&lt;/p&gt;

&lt;p&gt;Creating these unicorn portals is the most valuable skill development managers can bring to an organization. You need to hire to fill gaps and foster an environment where skills transfer fluidly throughout the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Roll up your chair
&lt;/h2&gt;

&lt;p&gt;What became a sophisticated and repeatable process for team transformation could be traced back to that simple act of sitting together. It was a small, easy change that led to increased empathy and understanding, and then a whole set of improvements.&lt;/p&gt;

&lt;p&gt;Staring at that rapid stream of logs was the pivot point that led to the most healthy and human approach to DevOps.&lt;/p&gt;

&lt;p&gt;We didn’t have the research to confirm it back then, but deployment automation, shared goals, observability, small batches, and Continuous Delivery are all linked to better outcomes for the people, teams, and organization. Everybody wins when you do DevOps right.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>software</category>
      <category>culture</category>
    </item>
    <item>
      <title>Modern developer experience has deep roots</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Wed, 25 Mar 2026 13:56:33 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/modern-developer-experience-has-deep-roots-a9a</link>
      <guid>https://dev.to/_steve_fenton_/modern-developer-experience-has-deep-roots-a9a</guid>
      <description>&lt;p&gt;In his 1956 account of the SAGE program, Herbert Benington highlighted the opportunity to use computers to reduce the cost of programming, documentation, and testing.&lt;/p&gt;

&lt;p&gt;The creation of utilities, compilers, and instrumentation accounted for about half of the programming effort for SAGE. Benington had recognized that writing programs to improve developer productivity was an essential investment.&lt;/p&gt;

&lt;p&gt;At the time, computers were costly, so the idea of making programmers more productive had far less economic weight than it does today. Programmers earned approximately $15,000 per year, and computers cost $500 per hour to operate. Now programmers cost about 70 times as much as the computers they use, which means developer productivity is worth far more today than it was in the 1950s.&lt;/p&gt;

&lt;h2&gt;
  
  
  Betting on software
&lt;/h2&gt;

&lt;p&gt;There has been a thread of developer experience all the way back to the earliest attempts to build software at scale. Savvy organizations know that money spent on providing the right environment and tools is worth more than simply “time saved”.&lt;/p&gt;

&lt;p&gt;There are two ways to look at software development: cost and value. It certainly costs money to build software, so the software must provide value that exceeds this cost to be viable. Software systems are built based on the anticipation of value and survive if they manage to meet or exceed that expectation. When an organization becomes cost-obsessed with software, it suggests a low anticipated value or a dawning awareness that the value won’t be realized. It’s better to bravely abandon attempts with such a thin payoff.&lt;/p&gt;

&lt;p&gt;The sweet spot for software is where the value is highly likely to be obtained, or where there’s a chance of it providing a huge return on investment, so that in any 10 attempts to create value with software, a single success would pay for all the attempts and return a profit.&lt;/p&gt;

&lt;p&gt;When you consider how software is a bet, it divides software delivery approaches into two categories: cost-focused or value-focused. Traditional project management works to keep the promise of cost and timeline. Modern agile methods try to increase the probability of the bet succeeding by adjusting course as you learn more about the problem you’re trying to solve.&lt;/p&gt;

&lt;p&gt;And here’s the crucial insight. When you manage costs effectively, the best you can achieve is zero cost. When you seek out value, there is no absolute limit to how much value you could produce. It could be twice the cost, ten times the cost, or 1,000 times the cost. There is far more upside than downside if you’re creating valuable software.&lt;/p&gt;

&lt;p&gt;Cost-first approaches to software delivery decrease the probability of success, and one (of many) reasons is that they damage developer experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevEx is economics, not hugs
&lt;/h2&gt;

&lt;p&gt;For whatever reason, the universe decided that you must treat people well, whether you like them or not. Even when you examine organizational culture through the lens of cold, hard business goals, you’ll find that unhealthy cultures are less successful than healthy ones. You can be a philanthropist or a capitalist; either way, you have to treat your employees well, or it will damage the thing you care about.&lt;/p&gt;

&lt;p&gt;Here’s a simple way it plays out.&lt;/p&gt;

&lt;p&gt;The developers need a bit more screen real estate so they can display more information in front of them without having to switch between background and foreground apps. Additional monitors incur costs, and a cost-focused organization will likely deny the request. Developers will have a lower fully loaded cost, but produce less value. A value-focused organization sees the potential returns and their developers will get more screen space, be less frustrated, produce better work in a shorter time, and produce a lot of value.&lt;/p&gt;

&lt;p&gt;Having an extra monitor moves the needle a little, but it’s a strong signal. Once an organization chooses between the cost or value pathways, it tends to stick to that decision. That means it’s not just monitors; it’s also chairs, code editors, refactoring tools, test tools, and automation tools. The experience diverges further with each decision made based on cost rather than value.&lt;/p&gt;

&lt;p&gt;Another individual who understood the concept of developer experience was Joel Spolsky. He created &lt;a href="https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/" rel="noopener noreferrer"&gt;the Joel Test&lt;/a&gt; as a “highly irresponsible, sloppy test to rate the quality of a software team.” The Joel Test has items like “Do programmers have quiet working conditions?” and “Do you use the best tools money can buy?”&lt;/p&gt;

&lt;p&gt;I haven’t met Joel, so I can’t speak for his motivation, but I don’t need to know if he was motivated by kindness or cash. The result was an excellent workplace for developers and phenomenal value creation; a win-win, as Stephen Covey called it. Spolsky’s most famous products, Trello and Stack Overflow, sold for $425 million and $1.8 billion, respectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  You don’t need to make it easy
&lt;/h2&gt;

&lt;p&gt;There’s a certain amount of inherent complexity to writing great software. You must fully grasp a problem, have a strong opinion about how to solve it, and be able to execute on your plans to make it happen. Developers don’t need protection from the difficulty of building software; they need minimal unnecessary complexity from tools, processes, and the workplace environment.&lt;/p&gt;

&lt;p&gt;There was a trend that prioritized developer comfort above all other needs, which meant providing them with frameworks to tame complexity. The frameworks made development easier, but limited a developer’s options to the extent that it damaged user experience. User needs were subordinated to developer ease, which is wrong and somewhat patronizing to developers.&lt;/p&gt;

&lt;p&gt;It’s not developer experience if you’re using frameworks that improve the ease of development while annoying those trying to use the software. Developer experience means providing the right environment and tools for developers to build valuable software. Software that doesn’t surprise people with a new paper cut every 5 minutes, pushing them ever closer to demanding an alternative solution.&lt;/p&gt;

&lt;p&gt;Think instead of how we set up a surgeon for success. A sterile room, excellent lighting, high-quality equipment, and working with skilled individuals who can anticipate and respond as the situation unfolds. Surgeon experience is centered around a shared goal of achieving optimal patient outcomes. We don’t simplify the scenario by removing things like the need to prevent infection; we make it possible to handle it well.&lt;/p&gt;

&lt;p&gt;Developer experience is the same. We don’t choose easier problems to solve; we set developers up to succeed at solving hard problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modern DevEx and Platform Engineering
&lt;/h2&gt;

&lt;p&gt;With the rise of Platform Engineering, developer experience has been largely absorbed into it. Your organization might have a DevEx team, a platform team, or even both. Across the industry, the two teams share more commonalities than differences. From a list of 30 features offered by platform and DevEx teams compiled by DX, only 4 were exclusive to a single discipline.&lt;/p&gt;

&lt;p&gt;Platform only:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Certificate management&lt;/li&gt;
&lt;li&gt;DNS&lt;/li&gt;
&lt;li&gt;Networking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevEx only:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developer training and education&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else is along the scale from DevEx to Platform Engineering, where it may be more common in one or the other, but can be found in both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobv8yuowvv3nqkwkm3t2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobv8yuowvv3nqkwkm3t2.png" alt="Comparing DevEx and platform teams" width="800" height="976"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Platform Engineering and developer experience both build on Benington’s early thoughts and Spolsky’s belief that if we provide developers with the right environment and the best tools, we can amplify their skills and generate lots of value. Forming teams around this idea helps standardize and scale the approach, rather than each team being subjected to differing views based on management styles or simply not knowing what they don’t know.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://newsletter.getdx.com/p/devprod-headcount-benchmarks-q1-2026" rel="noopener noreferrer"&gt;Q1 2026 DevProd headcount benchmarking report&lt;/a&gt; from DX highlights how well this scaling works. Rather than costing a fixed percentage of your engineering organization’s headcount, developer productivity teams scale non-linearly, with their ratio shrinking as the number of engineers increases. This makes sense, as their work is being reused, unlike approaches that work within individual teams and depend on teams having access to the necessary skills and knowledge.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Engineers&lt;/th&gt;
&lt;th&gt;Productivity headcount&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;200-600&lt;/td&gt;
&lt;td&gt;5.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;600-1000&lt;/td&gt;
&lt;td&gt;4.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1000+&lt;/td&gt;
&lt;td&gt;3.49%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
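&lt;p&gt;As a rough sketch, the benchmark ratios translate into headcount like this (the percentages are from the table above; the helper function and example org sizes are illustrative):&lt;/p&gt;

```python
# Benchmark ratios from the DX report quoted above (illustrative helper).
RATIOS = {
    (200, 600): 0.051,
    (600, 1000): 0.042,
    (1000, None): 0.0349,
}

def productivity_headcount(engineers: int) -> float:
    """Estimated developer-productivity headcount for an org of this size."""
    for (low, high), ratio in RATIOS.items():
        if engineers >= low and (high is None or engineers < high):
            return engineers * ratio
    raise ValueError("benchmark covers orgs of 200+ engineers")

print(round(productivity_headcount(400)))   # 20 people at 5.1%
print(round(productivity_headcount(2000)))  # 70 people at 3.49%
```

&lt;p&gt;Doubling from 400 to 800 engineers adds far fewer than double the productivity roles, which is the non-linear scaling the report describes.&lt;/p&gt;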

&lt;p&gt;That’s not to say the goal is to have the smallest possible teams. The goal is to unlock the value you create by developing software. If having half of all engineers in productivity roles resulted in the highest levels of value creation, that would be the right mix. There is likely a point of diminishing returns if you approach 10-15%, but you should be testing this by tracking meaningful outcomes for your organization.&lt;/p&gt;

&lt;p&gt;Make sure developers have the right environment and the best tools so they can generate the most value for your organization.&lt;/p&gt;

</description>
      <category>devex</category>
      <category>software</category>
      <category>culture</category>
    </item>
    <item>
      <title>Snake Oil, Rituals, and Why We’re Wrong To Burn It All Down</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 17 Mar 2026 08:24:59 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/snake-oil-rituals-and-why-were-wrong-to-burn-it-all-down-5g9l</link>
      <guid>https://dev.to/_steve_fenton_/snake-oil-rituals-and-why-were-wrong-to-burn-it-all-down-5g9l</guid>
      <description>&lt;p&gt;&lt;em&gt;How to benefit from old knowledge without making old mistakes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The term “snake oil salesman” is often used to describe individuals who engage in deceptive marketing practices. Wild west characters like &lt;a href="https://en.wikipedia.org/wiki/Clark_Stanley" rel="noopener noreferrer"&gt;Clark Stanley&lt;/a&gt; advertised their snake oil as a wondrous cure-all remedy. But in 1916, the U.S. government’s Bureau of Chemistry tested the liniment, found it to be dramatically overpriced and of limited value, and Stanley was fined $20.&lt;/p&gt;

&lt;p&gt;Yet that’s not the end of the story.&lt;/p&gt;

&lt;h2&gt;
  
  
  You Probably Use Snake Oil
&lt;/h2&gt;

&lt;p&gt;Snake oil wasn’t entirely purposeless. While it’s true that it didn’t match the claims on the bottle, certain ingredients, such as capsaicin and camphor, proved valuable when used for valid purposes.&lt;/p&gt;

&lt;p&gt;Capsaicin, derived from chili peppers, is now used in skin-applied pain relief products to relieve muscular and joint pain. It’s an FDA-approved therapeutic treatment. Camphor is also commonly used as a counter-irritant, helping relieve itching from insect bites. It’s also the go-to ingredient for makers of chest rubs, which you’ve likely used as a decongestant when you’ve had a cold.&lt;/p&gt;

&lt;p&gt;So, while snake oil failed to match the wild claims of its peddlers, it wasn’t completely useless. This is also true in the software industry, so being able to separate the valuable ingredients from debunked software delivery recipes is a crucial skill.&lt;/p&gt;

&lt;h2&gt;
  
  
  Waterfall Is Bad, Mostly
&lt;/h2&gt;

&lt;p&gt;The term “waterfall” is often used as a catch-all name for phased software delivery, where tasks are performed in a sequential order that resembles a waterfall. When the lightweight rebellion overthrew the heavyweight models of the time, it created a mistaken belief that the phased software models were simply wrong.&lt;/p&gt;

&lt;p&gt;But the creators of these old models have been short-changed, as they had been telling us to work in this new way all along.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o2yu35hmff21zq9cdt9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o2yu35hmff21zq9cdt9.jpg" alt="The many stages, thought processes, and tests of phased software delivery. Source: Production of Large Programs. Herbert D. Benington. 1956." width="800" height="802"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In his 1956 paper, “Production of Large Programs,” Herbert Benington discusses concepts that we would now label as platform engineering. Benington decried the idea of top-down programming, where a specification would be completed before the code was written. Winston Royce, in his 1970 paper “Managing the Development of Large Software Systems,” advised people to work in small incremental changes, as this would reduce complexity and allow organizations to roll back to a previous version if they moved in the wrong direction. These ideas resurfaced in Barry Boehm’s Spiral Model.&lt;/p&gt;

&lt;p&gt;The success of Agile was largely due to how the proponents of lightweight software delivery carefully extracted the good ingredients from the heavyweight recipe used in the popular processes that dominated the industry in the 1990s. They preserved the good parts and discarded large swathes of the toxic ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Recurring Crisis
&lt;/h2&gt;

&lt;p&gt;Our crisis comes from a tendency for management to be attracted to process and repelled by technical and cultural practices. They have a craving to reintroduce elements of phased software models that were expertly removed, and they want to discard crucial techniques that they don’t understand (or that sound like hard work).&lt;/p&gt;

&lt;p&gt;Increasing process weight while decreasing technical excellence is a path to destruction.&lt;/p&gt;

&lt;p&gt;The canonical example of this management error comes from the early days of Agile. Around the time of the Agile Manifesto, the leading lightweight method was Extreme Programming (XP). It had similar process elements to Scrum, but also a map of interconnected technical practices that kept the cost of change low, which is the key to sustaining agility over the long term.&lt;/p&gt;

&lt;p&gt;For managers, Scrum’s exclusive focus on process was unthreatening, while XP’s emphasis on technical skills struck fear into their hearts. When it came to management, Scrum was top dog. As a result, we spent a decade spinning wheels until Dave Farley and Jez Humble revived and renewed the ideas of XP in their landmark book, “Continuous Delivery.”&lt;/p&gt;

&lt;p&gt;Of course, it didn’t stop with Scrum. When you don’t have technical excellence, the process elements of Scrum don’t deliver the outcomes that are expected of agile software development. As a result, management responded by bulking up the process to “work at scale” or “handle Enterprise needs”. The real motivation behind this was, of course, the comfort of process working against the complexity of reality, which can only be resolved by social and technical means.&lt;/p&gt;

&lt;p&gt;When DevOps first emerged, it could be summed up as breaking down the silos between development and operations. This idea was further refined to align the goals of the two teams and encourage them to collaborate more effectively. Everyone was on board with this until a decade of research revealed the need for those intimidating technical elements, the necessity of transformational leadership, and the value of lean product management. When DevOps got too real, the desire to run away intensified.&lt;/p&gt;

&lt;p&gt;The rush from complex realities to simplifications is a mistake we repeatedly make. Putting it unkindly, the fall of all good methods is the result of managers fleeing in terror from things they don’t understand as well as they should.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding The Real Remedy
&lt;/h2&gt;

&lt;p&gt;The software industry’s snake oil problem isn’t that we have too many frameworks and practices. It’s that we’ve lost the ability to think critically about them. We adopt wholesale when we should cherry-pick. We follow prescriptions when we should experiment.&lt;/p&gt;

&lt;p&gt;The most effective software teams aren’t the ones who’ve found the perfect framework. They’re the ones who’ve learned to extract value from imperfect ones, who understand that every practice is context-dependent, and who continuously question whether what they’re doing is actually helping.&lt;/p&gt;

&lt;p&gt;Snake oil taught us an important lesson, but it wasn’t the one we thought. It’s not that old remedies are worthless. It’s that we need to look past the marketing to understand what actually works. The same applies to software practices. Behind every framework, methodology, and best practice lies a kernel of insight that addresses a real problem.&lt;/p&gt;

&lt;p&gt;Our job isn’t to mindlessly follow or unthinkingly reject. It’s about understanding, extracting, and applying wisely.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agile</category>
      <category>waterfall</category>
    </item>
    <item>
      <title>We Don’t Trust AI (and That’s a Good Thing)</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 09 Mar 2026 15:57:05 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/we-dont-trust-ai-and-thats-a-good-thing-3oe6</link>
      <guid>https://dev.to/_steve_fenton_/we-dont-trust-ai-and-thats-a-good-thing-3oe6</guid>
      <description>&lt;p&gt;&lt;strong&gt;Why maintaining a healthy skepticism gets you better outcomes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of my old hobbies was writing for independent music magazines, such as Spill Magazine (distributed free at music venues) and DV8 (distributed free at hair salons). Over the years, I saw hundreds of unsigned bands and learned a crucial lesson: Amplification makes everything you do really loud, but it doesn’t fundamentally change whether what you’re doing is good or bad.&lt;/p&gt;

&lt;p&gt;This law of amplification applies equally to software development, according to &lt;a href="https://dora.dev/research/2025/" rel="noopener noreferrer"&gt;DORA’s State of AI-assisted Software Development report&lt;/a&gt;. AI is an amplifier that will boost the volume of your software delivery capability, whether good or bad.&lt;/p&gt;

&lt;p&gt;And this is why I find the report’s findings on trust so reassuring.&lt;/p&gt;

&lt;h2&gt;
  
  
  We Don’t Trust AI
&lt;/h2&gt;

&lt;p&gt;The report found that AI is being used practically everywhere. Almost everyone (90%) is using AI for their work and believes it increases their productivity (80%) and code quality (59%). But they don’t trust it. In fact, when asked whether they trust AI-generated output, the response was an overwhelmingly subdued “somewhat”.&lt;/p&gt;

&lt;p&gt;This has led many people to ponder how we can increase trust in AI. There’s a perception that if we can get technical people to trust it, we’ll get even bigger gains. However, this is not an outcome we should strive for.&lt;/p&gt;

&lt;p&gt;One factor contributing to the successful adoption of AI is undoubtedly a healthy level of skepticism regarding the answers it provides. Encouraging people to increase their trust in AI can reduce agency, diminish personal responsibility, and lower vigilance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Absolute Trust Not Required
&lt;/h2&gt;

&lt;p&gt;Successful software developers have acquired critical thinking skills that enable them to envision potential pitfalls and anticipate how things might go wrong. When you create software used at scale, scenarios you perceive as atypical occur frequently.&lt;/p&gt;

&lt;p&gt;When I worked on a platform used by global automotive giants, we would process over 4 million requests in just 5 minutes. We were working on a feature, and my mind was working through potential failure scenarios and edge cases. When I highlighted a potential bear trap, the business folks would often dismiss it. “The chances of that happening are a million to one,” they said. However, that meant it could happen more than 1,152 times each day, so we had to accommodate it.&lt;/p&gt;
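&lt;p&gt;The back-of-the-envelope arithmetic is easy to verify (the request volume is the figure quoted above; everything else is simple scaling):&lt;/p&gt;

```python
# Roughly 4 million requests every 5 minutes, as described above.
requests_per_5_min = 4_000_000

minutes_per_day = 24 * 60
requests_per_day = requests_per_5_min * (minutes_per_day / 5)

# A "million to one" edge case still fires over a thousand times a day.
odds = 1_000_000
events_per_day = requests_per_day / odds

print(f"{requests_per_day:,.0f} requests/day")  # 1,152,000,000 requests/day
print(f"{events_per_day:,.0f} million-to-one events/day")  # 1,152
```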

&lt;p&gt;When developers have a skeptical mindset, it’s healthy. They are thinking at scale and preventing a constant series of disruptive events. My team was following the “you build it, you run it” pattern, so we were highly motivated to silence the pager by creating robust software.&lt;/p&gt;

&lt;p&gt;Great developers can think ahead and prevent problems before they write a single line of code. Having low trust in AI-generated output is a key aspect of this mindset.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Model Is The Q&amp;amp;A Model
&lt;/h2&gt;

&lt;p&gt;Though AI is often considered disruptive, it usually turns out that existing models can (and should) be applied. Those who don’t understand this are relearning lessons on small batches and user-centricity, as AI only exacerbates the problem of changing too much at once and over-investing in a feature idea before learning whether it’s helpful to users.&lt;/p&gt;

&lt;p&gt;Similarly, we have an existing model we can apply to AI-generated code. The Q&amp;amp;A model.&lt;/p&gt;

&lt;p&gt;When you find an answer on Stack Overflow, you don’t just copy and paste it into your application. Answers on these sites often contain a few crucial lines of code that directly address the question, as well as many additional lines that complete the example. There is some risk in taking those essential lines and even more in taking the wrapping ones.&lt;/p&gt;

&lt;p&gt;You’ll see occasional comments from developers highlighting the dangers of those wrapping lines, and while they’re not wrong, the answers would be less helpful as teaching examples if the wrapping lines were padded out with more production-ready code.&lt;/p&gt;

&lt;p&gt;Experienced developers use the answer to understand how to solve their problem and then write their own solution, or make substantial adjustments to the code in the answer. We should apply these same reservations to all code we didn’t author, whether it’s from a Q&amp;amp;A site or from an AI assistant. There’s no reason to trust the AI-generated code more than you would the answer on a Q&amp;amp;A site that likely formed a part of the training data in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Warned You
&lt;/h2&gt;

&lt;p&gt;Skepticism over AI-generated code shouldn’t be a controversial stance. The tools themselves provide these warnings when you start using them. Everyone using coding assistants and AI chat has clicked past a message such as: “ChatGPT can make mistakes. Check important info.” We’d be foolish to place high trust in them, and the outcomes would be worse if we did.&lt;/p&gt;

&lt;p&gt;While AI assistance is relatively new, experienced software developers are applying healthy models for handling the code it produces. That’s why our enthusiasm for toil-reduction is best served by muted trust levels.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>How To Measure AI’s Organizational Impact</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 02 Mar 2026 08:16:22 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/how-to-measure-ais-organizational-impact-54ji</link>
      <guid>https://dev.to/_steve_fenton_/how-to-measure-ais-organizational-impact-54ji</guid>
      <description>&lt;p&gt;When organizations introduce AI, they often make a critical error: they create entirely new metrics to measure its impact. This approach misses the fundamental truth that AI is a tool to help achieve existing goals, not a reason to change what success looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
   Your Goals Haven’t Changed
&lt;/h2&gt;

&lt;p&gt;Consider the difference between Formula 1 racing and EcoRally Scotland. Formula 1 teams optimize for speed — whoever crosses the finish line first wins. EcoRally teams have a completely different challenge: complete a 500-kilometer route with the best regularity score while using the least energy possible.&lt;/p&gt;

&lt;p&gt;These teams need different strategies, different driving styles, and different metrics. The goals determine everything else.&lt;/p&gt;

&lt;p&gt;The same principle applies to your organization. When you introduce AI, your fundamental purpose remains unchanged. You still want to create the best quality speakers, save bees, or deliver whatever value you were creating before. AI is simply a new tool to help you achieve those existing goals more effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stick With What Already Works
&lt;/h2&gt;

&lt;p&gt;Organizations often have sophisticated measurement systems in place — financial metrics, mission-based indicators, and proxy measures that track different parts of their value stream. If you’ve already established that software delivery performance correlates with organizational outcomes, for example, then continue using those same measures to evaluate AI’s impact.&lt;/p&gt;

&lt;p&gt;The danger lies in creating new metrics specifically for AI adoption. These measures rarely connect to meaningful business outcomes and can lead you to optimize for activities that don’t actually move the needle on what matters most.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Local Optimization Trap
&lt;/h2&gt;

&lt;p&gt;Here’s a common scenario: A development team starts using AI and reduces their feature delivery time from 16 hours to 12 hours — a 25% improvement that looks impressive on paper. However, when you examine the entire value stream, the lead time from customer request to delivered value remains unchanged at two weeks.&lt;/p&gt;
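&lt;p&gt;A quick, hypothetical calculation shows why a local gain can vanish at the whole-system level (the 16-hour and two-week figures are from the scenario above; treating development as one stage within the full lead time is my assumption):&lt;/p&gt;

```python
# Whole value stream: two weeks from customer request to delivered value.
lead_time_hours = 2 * 7 * 24  # 336 hours

# Development stage before and after AI assistance.
dev_before, dev_after = 16, 12
saved = dev_before - dev_after

local_gain = saved / dev_before         # improvement within the dev stage
overall_gain = saved / lead_time_hours  # improvement across the value stream

print(f"local improvement: {local_gain:.0%}")         # 25%
print(f"end-to-end improvement: {overall_gain:.1%}")  # 1.2%
```

&lt;p&gt;A 25% stage-level gain becomes barely more than a rounding error in the end-to-end lead time, which is exactly the local optimization trap.&lt;/p&gt;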

&lt;p&gt;This isn’t a new problem. Eli Goldratt explored this in “The Goal,” and Lean Software Development emphasizes optimizing for the whole system, not individual parts. AI amplifies this challenge because it’s easy to see immediate productivity gains in specific areas while missing the broader organizational impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Focus On What Truly Matters
&lt;/h2&gt;

&lt;p&gt;Most teams collect numerous metrics that help them improve their work and maintain standards. But organizationally, only a few metrics are truly critical — usually some combination of financial performance and mission-based indicators that track whether you’re making the intended difference in the world.&lt;/p&gt;

&lt;p&gt;AI only delivers real value when its benefits flow through to these crucial numbers. Everything else is just interesting data.&lt;/p&gt;

&lt;h2&gt;
  
  
   Research-Driven Implementation
&lt;/h2&gt;

&lt;p&gt;The most effective approach follows basic research principles: form a hypothesis, design a test, then evaluate the results. Before implementing AI, articulate clearly how you expect it to impact your mission-level metrics. If you’ve already established relationships between local measures (like software delivery performance) and organizational outcomes, you can build your hypothesis on these proven connections.&lt;/p&gt;

&lt;p&gt;Too many organizations reverse this process — they implement AI first, then scramble to find metrics that show improvement. This backwards approach leads to hockey-stick charts that look impressive but don’t translate to meaningful business value. It’s the difference between running a business and running a marketing campaign.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Path Forward
&lt;/h2&gt;

&lt;p&gt;AI will impact your business — that’s inevitable. But whether that impact is positive depends largely on how thoughtfully you approach adoption. By maintaining focus on your existing goals and proven metrics, you can ensure that AI becomes a genuine accelerator of your mission rather than an expensive distraction.&lt;/p&gt;

&lt;p&gt;The organizations that will see the greatest benefit from AI are those that resist the temptation to change their definition of success and instead use AI to achieve their existing definition of success more effectively.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Avoiding golden cages in Platform Engineering</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Fri, 27 Feb 2026 14:12:48 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/avoiding-golden-cages-in-platform-engineering-3nda</link>
      <guid>https://dev.to/_steve_fenton_/avoiding-golden-cages-in-platform-engineering-3nda</guid>
      <description>&lt;p&gt;I zipped up to London to share the &lt;a href="https://octopus.com/publications/platform-engineering-pulse" rel="noopener noreferrer"&gt;Platform Engineering Pulse report&lt;/a&gt; with the amazing &lt;a href="https://www.linkedin.com/company/londondevops/" rel="noopener noreferrer"&gt;London DevOps&lt;/a&gt; group. Afterwards, we spent several hours talking through some of the findings and I thought I’d write up some of the results of those discussion.&lt;/p&gt;

&lt;p&gt;In particular, the question of whether platforms should be optional or mandatory has a lot of talking points. It also intersects with the golden cages problem, as an inflexible platform intensifies the nastiest problems of mandatory platforms.&lt;/p&gt;

&lt;p&gt;As we’re constantly talking about golden paths, we’ll head to Oz to look at the hazards and how they come together to cause some serious problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The wizard of ops
&lt;/h2&gt;

&lt;p&gt;Imagine our house has been lifted by a hurricane and deposited in a strange land. The friendly people we meet tell us about a golden path, and off we go to see a wizard. We sing a little tune, because we don’t yet know about the hazards awaiting us along the way.&lt;/p&gt;

&lt;p&gt;Why in Oz didn’t the munchkins mention the wolves, crows, and flying monkeys? They certainly had plenty to say about the darn road.&lt;/p&gt;

&lt;p&gt;Let’s explore the wonderful and magical world of gold and platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  The golden path
&lt;/h2&gt;

&lt;p&gt;There’s a crucial distinction between a paved path and a golden path. I’m sure the munchkins would have had a verse or two on it.&lt;/p&gt;

&lt;p&gt;Paved paths are an analogy based on desire paths; those animal trails and shortcuts that, over time, create a signal that people want to travel between two points. If your platform is just the encoding of desire paths, it’s not terribly different from whatever came before. You’re missing an excellent opportunity to create something better.&lt;/p&gt;

&lt;p&gt;In product development, we know that you don’t just build what the user asks for. Instead, you explore their needs and design something better than what is currently available to them. The same goes for golden paths.&lt;/p&gt;

&lt;p&gt;If you take existing paths and pave them, you’re just transferring the complexity from developers to platform engineers. There is some benefit in splitting complexity (the developers handle the product’s complexity, and the platform engineer handles, well, whatever toxic waste is ejected into the paved path).&lt;/p&gt;

&lt;p&gt;Golden paths shouldn’t just divide the complexity; they should manage it. This is vital as we hope the golden path handles aspects that were absent from the well-trodden desire path. Things like cost control and security, which were previously applied haphazardly, if at all.&lt;/p&gt;

&lt;p&gt;We’re not trying to achieve the shortest path (through the quicksand, tar pit, and snake-infested rocks), but the shortest route that satisfies the constraints (such as safety).&lt;/p&gt;

&lt;p&gt;Got a golden path? Great, we’ve defeated the wolves, now it’s time to face the crows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden cages
&lt;/h2&gt;

&lt;p&gt;On day one, golden paths and golden cages look exactly the same. You only really find out you’re in a cage when the platform you use doesn’t let you do something. You only discover the lack of flexibility when you push on a surface.&lt;/p&gt;

&lt;p&gt;As standardization is high on the list of goals organizations have for Platform Engineering, it’s no surprise to find platform teams taking this to a rigid extreme. Developers may want 90% of what the golden cage offers, but if they can’t achieve the other 10%, they become frustrated. This is a contributing factor in cases where developers circumvent the rulebook and find a way to bypass the platform entirely.&lt;/p&gt;

&lt;p&gt;Signals of golden cages include a heck of a lot of negging the platform, highlighting its flaws, pointing out that development goals will be missed, and generally wearing down dev managers until they sign off on letting developers do things their own way.&lt;/p&gt;

&lt;p&gt;The solution isn’t to correct the developers. You have to correct the platform. It should provide extensibility points and escape hatches, so developers can achieve their goals within the policy constraints set by the organization.&lt;/p&gt;

&lt;p&gt;That’s the crows dismissed. Time for some flying monkeys.&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden manacles
&lt;/h2&gt;

&lt;p&gt;Your organization is investing in a platform initiative. They have a bunch of goals in mind, often related to standardization, compliance, security, and cost control (and hopefully flow of value and developer experience). Why would they let all this time, effort, and attention be wasted by allowing development teams to choose whether to adopt it?&lt;/p&gt;

&lt;p&gt;It’s evident that platforms should be mandatory.&lt;/p&gt;

&lt;p&gt;Except this is the breeding ground for some very toxic outcomes. Everybody has some level of rebellion streaking through them, and mandating anything is the perfect way to energize it. Why do so many British kids hate Shakespeare? Because teachers forced them to read it.&lt;/p&gt;

&lt;p&gt;Now, you may think your developers are low on the rebel-scale, so you’ll be okay. You can tell them what to do. The thing is, while those high on the rebel-scale will provide noisy dissent, those lower on the scale will be more silent and subversive. When a mandated platform introduces friction, everyone will rebel, and they’ll do so in their own wonderful and unique style.&lt;/p&gt;

&lt;p&gt;You &lt;em&gt;could&lt;/em&gt; have a great platform and make it mandatory, and maybe never see this problem. If you mix mandatory adoption with a golden cage, you’re guaranteed to see strange behaviors as teams thrash around trying to achieve their conflicting goals. Developers are supposed to be delivering valuable software, platform teams are trying to force compliance, and the two are in constant conflict.&lt;/p&gt;

&lt;p&gt;If this sounds familiar, it’s because DevOps was the solution to this problem. When you have two silos with conflicting goals, you’re in flying monkey territory without a monkey-proof umbrella. The solution to this mandated golden cage conundrum is simple. You need to align goals, encourage collaboration, and let people do the good work.&lt;/p&gt;

&lt;p&gt;In Platform Engineering, the best way to achieve collaborative bliss is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make platforms optional, so platform teams are motivated to understand the needs of platform users.&lt;/li&gt;
&lt;li&gt;Make meeting the organization’s policies a shared goal, so development teams and platform teams both want the same thing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When developers and platform teams share the goal to meet policy, the platform becomes a far more appealing option. Other goals, like flow of value, should also be shared, so platform teams are motivated to solve the right problems for the development teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  The silver slippers: Platform as a product
&lt;/h2&gt;

&lt;p&gt;This is why the prevailing advice from smart people is to treat the platform as a product and the developers as customers and prospects. Put a good feedback loop in place so you can see where the platform is starting to fit too tightly. Then, collaborate with your customers to provide a good way to flex where needed.&lt;/p&gt;

&lt;p&gt;Make your platform optional, and your policies mandatory.  &lt;/p&gt;

</description>
      <category>devops</category>
      <category>discuss</category>
      <category>management</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>It may not be Picasso, but it is Brunel</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 17 Feb 2026 10:17:40 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/it-may-not-be-picasso-but-it-is-brunel-1j5g</link>
      <guid>https://dev.to/_steve_fenton_/it-may-not-be-picasso-but-it-is-brunel-1j5g</guid>
      <description>&lt;p&gt;You want to paint a wall. The fastest way to start is to open the paint tin and start rolling out the color. Except that’s not the quickest way to paint a wall, as expert painters know. If you give a professional this job, they won’t touch the paint until the surface has been prepared.&lt;/p&gt;

&lt;p&gt;This involves removing previous wall coverings, filling holes and divots in the wall, and carefully sanding to achieve a perfect surface. When you apply paint to a prepared wall, it goes on smoothly, it looks great when it dries, and you need fewer coats (we amateur painters tend to use additional coats in attempts to disguise all the problems we left when we didn’t prepare).&lt;/p&gt;

&lt;p&gt;The preparation checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fill holes and cracks&lt;/li&gt;
&lt;li&gt;Sand the walls&lt;/li&gt;
&lt;li&gt;Clean the walls&lt;/li&gt;
&lt;li&gt;Let the walls dry&lt;/li&gt;
&lt;li&gt;Apply paint&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may need to add additional tasks to the list, such as removing mildew or priming the surface, if these are required in your expert judgment. This is the model professional decorators have refined over decades. It’s not glamorous, but it works.&lt;/p&gt;

&lt;p&gt;So far, so good.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter the robot
&lt;/h2&gt;

&lt;p&gt;No matter what you do for a living, someone wants you to do more of it in less time. In the software industry, we have scrummy marketing to blame for the overwhelming presence of demands for “twice the work in half the time.”&lt;/p&gt;

&lt;p&gt;So what happens when we decide we want to paint faster? Someone buys a great big paint-spraying robot.&lt;/p&gt;

&lt;p&gt;The paint-spraying robot is 10x faster than a human at painting. It can cover 100 square meters per hour, while a human can only do 10 square meters an hour. It completes projects 60% faster, and it can run 24/7, unlike those pesky humans who want to see their family and sleep. Of course, you need to input floor plans and designate non-paintable areas. Additionally, there’s a 20-minute setup time, as well as a 30-minute post-painting clean cycle.&lt;/p&gt;

&lt;p&gt;Side panel: There are clues in the claims for the robot that tell us things are more complicated than they first appear. The robot is 10x faster, but projects complete only 2–3x faster. Something outside of blasting paint onto the wall is at play here. I’ve worked in enough organizations that purchased based on the 10x claim and then tripped and fell down the stairs of their own excitement.&lt;/p&gt;
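&lt;p&gt;The gap between the 10x claim and the 2–3x project outcome is just Amdahl’s law at work: only the spraying step got faster. A minimal sketch, using hypothetical prep and job-size numbers (only the spraying rates come from the robot’s claims):&lt;/p&gt;

```python
# Sketch with hypothetical numbers; only the spraying rates (10 vs 100
# square meters per hour) come from the robot's claims above.

def project_hours(prep, spray, setup=0.0, cleanup=0.0):
    """Total wall-clock hours for one painting project."""
    return prep + setup + spray + cleanup

# A 100 square meter job; assume 4 hours of prep either way (hypothetical).
human = project_hours(prep=4.0, spray=100 / 10)           # 14.0 hours
robot = project_hours(prep=4.0, spray=100 / 100,
                      setup=20 / 60, cleanup=30 / 60)     # about 5.8 hours

speedup = human / robot  # roughly 2.4x end to end, despite 10x spraying
```

&lt;p&gt;Preparation, setup, and clean-up didn’t speed up at all, so they come to dominate the total, and the headline 10x shrinks to the 2–3x range the vendor quietly admits to.&lt;/p&gt;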

&lt;p&gt;Oh, and there’s one more thing. It doesn’t prepare your walls.&lt;br&gt;
If you’ve ever painted walls without preparing them, you’re familiar with the kinds of problems it causes. The finish doesn’t look good, it’s not long-lasting, and your modern lighting turns the wall into a three-dimensional topology map of past picture hook holes. Over time an odd dark patch emerges. A reminder of the time little Lily missed her mouth with the Calpol and made an impromptu purple Rorschach test across the wall.&lt;/p&gt;

&lt;p&gt;Painting, it turns out, is a complex process. We may long for a reality where painting is easy, but we live in one where it’s not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rediscovering the wheel, one bruise at a time
&lt;/h2&gt;

&lt;p&gt;And that’s why the robot-first painting team is currently providing a fountain of incredible insights as they try to maximize their return on investment.&lt;/p&gt;

&lt;p&gt;They’re discovering that asking people what color they want to paint their walls results in happier customers. An idea about setting windows to be non-paintable is emerging. Some bright spark has worked out that filling cracks before painting achieves a better end result.&lt;/p&gt;

&lt;p&gt;Of course, they haven’t discovered everything on the simple checklist used by every professional decorator. It will take time for them to work it all out. It took professionals time to work it out in the first place, and these pioneers have decided to start from scratch instead of building on existing knowledge.&lt;/p&gt;

&lt;p&gt;Eventually, they’ll have a pre-robot preparation checklist that looks something like the one we had in the first place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fill holes and cracks&lt;/li&gt;
&lt;li&gt;Sand the walls&lt;/li&gt;
&lt;li&gt;Clean the walls&lt;/li&gt;
&lt;li&gt;Let the walls dry&lt;/li&gt;
&lt;li&gt;Apply paint&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  We’ve seen this movie before
&lt;/h2&gt;

&lt;p&gt;Of course, this isn’t about painting at all. It’s about software delivery.&lt;/p&gt;

&lt;p&gt;We spent decades refining the best way to build software. It’s called Continuous Delivery. We have even expanded this into the DevOps model, which combines practices and capabilities that work well with Continuous Delivery, such as generative workplace culture, lean product management, and transformational leadership.&lt;/p&gt;

&lt;p&gt;We literally have diagrams that show how all these things come together to improve software delivery. That’s right, “software delivery”. Not feature development time. Not coding speed. The whole darn thing.&lt;/p&gt;

&lt;p&gt;And right now, I’m witnessing the most surreal déjà vu of my career.&lt;/p&gt;

&lt;p&gt;Many people using AI are discovering Continuous Delivery practices through bruising experiences. You can see it in the barrage of social posts from AI-first developers who are finding out from scratch why version control is a good idea, why they ought to work in small batches with changes frequently integrated with the main branch, and why their builds shouldn’t take an hour.&lt;/p&gt;

&lt;p&gt;It’s funny, while also being not at all funny.&lt;/p&gt;

&lt;p&gt;In the 2000s, as I was first finding my way through Agile, Extreme Programming, and Lean, we drew on books and articles to inform our continuous improvement process. I worked on a team that ditched Scrum and developed a method that made sense for our work. We rapidly went from 6-month cycles to having always-shippable code, with a new version deploying every 3 hours or so.&lt;/p&gt;

&lt;p&gt;Therefore, there’s a whole generation of lean/agile software developers for whom AI doesn’t provide a significant boost. To us, AI is just another tool, like auto-complete or a compiler. Helpful; not transformational.&lt;/p&gt;

&lt;p&gt;We refined the elements of high-performance software delivery through numerous iterations and adjustments.&lt;/p&gt;

&lt;h2&gt;
  
  
  The paint dries on this one
&lt;/h2&gt;

&lt;p&gt;Continuous Delivery remains the best-known way to deliver software.&lt;/p&gt;

&lt;p&gt;A team using only Continuous Delivery will beat a team using only AI, because any benefit you get from AI will be lost to the first bottleneck it encounters on its way to production. Teams that start with Continuous Delivery will be more successful with AI, because they are already more successful than other teams. They have fast builds, automated deployment pipelines, and solid technical practices to enable the fast flow of work.&lt;/p&gt;

&lt;p&gt;Essentially, AI has enabled low-performing development teams to experience some of the speed that comes with Continuous Delivery, but without the enabling practices. They’re getting paint on the wall, but they skipped all the prep work. So far, this hasn’t led them back to Continuous Delivery, but if they want to succeed, that’s where they need to start.&lt;/p&gt;

&lt;p&gt;If you’re looking at seriously improving your productivity, it’s likely the answers that are proving so elusive with AI have been waiting for us all along in Continuous Delivery.&lt;/p&gt;

&lt;p&gt;You can buy the robot if you want. Just don’t be surprised to find your windows painted over and your wall covered in lumps, bumps, and cat-shaped silhouettes.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why Transformations Never Succeed: Even When They Do</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Thu, 05 Feb 2026 09:22:48 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/why-transformations-never-succeed-even-when-they-do-4gf5</link>
      <guid>https://dev.to/_steve_fenton_/why-transformations-never-succeed-even-when-they-do-4gf5</guid>
      <description>&lt;p&gt;We all read the daily announcements about another major company launching a sweeping transformation. We’ve had waves of Agile, digital, omni-channel and cloud-native transformations, and the AI-first transformations are a hazy silhouette on the horizon.&lt;/p&gt;

&lt;p&gt;The press releases are optimistic, but the results are depressingly predictable. &lt;a href="https://hbr.org/2019/03/digital-transformation-is-not-about-technology" rel="noopener noreferrer"&gt;Trillions of dollars are wasted&lt;/a&gt; on failed transformations each year, and organizations don’t always survive the attempt. While most transformations fail outright, even the successful ones leave organizations weaker than before.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning From The Browser Wars
&lt;/h2&gt;

&lt;p&gt;If you were to time travel back 25 years, you’d find a great battle taking place between web browser makers. Mosaic was long gone, Netscape Navigator was dominating the market with a 70% share (1998), and it was about to lose it all to Internet Explorer, which soon flipped the table with a 75% share (1999). Firefox, Chrome, Edge, and Vivaldi didn’t even exist back then, so Internet Explorer was as good as it got.&lt;/p&gt;

&lt;p&gt;There’s little doubt that Internet Explorer was a better browser than Netscape Navigator, but how did Microsoft get ahead of the dominant browser? Joel Spolsky &lt;a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/" rel="noopener noreferrer"&gt;attributed the loss of market share&lt;/a&gt; to Netscape’s decision to rewrite their browser from scratch. While they spent 3 years building a browser from the ground up, everyone else was racing ahead.&lt;/p&gt;

&lt;p&gt;As Spolsky put it: “The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming. It’s harder to read code than to write it.” Every action the developers took to create a new browser based on “cleaner code” represented a loss of hard-won knowledge. The code looked messy because it handled scenarios and edge cases the developers weren’t originally aware of and had since forgotten.&lt;/p&gt;

&lt;p&gt;When you remove all that mess, you aren’t making things tidier and better; you’re just breaking things.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chesterton’s Fence
&lt;/h2&gt;

&lt;p&gt;There’s a principle for this problem of tearing things down before you know why they exist. It’s called “Chesterton’s fence”, after the way G. K. Chesterton described the idea in 1929:&lt;/p&gt;

&lt;p&gt;“There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”&lt;/p&gt;

&lt;p&gt;To apply this rule, you have to work out why something exists before you remove it. In the process of determining this, you’ll often discover the necessity of its existence. You can apply Chesterton’s fence to everything from that inconvenient pole you want to remove from your kitchen (which is supporting the weight of walls on the upper floors) to rewriting Netscape from scratch.&lt;/p&gt;

&lt;p&gt;And there’s a crucial lesson here because it also applies to transforming organizations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reason Transformations Fail
&lt;/h2&gt;

&lt;p&gt;Organizations perform transformations because they’ve identified an area of weakness so fundamental they want to subvert the entire organization to the task of rectifying it. If you’re unable to deliver software frequently, you need an “Agile Transformation”. If you’re running your business in physical locations and customers can’t interact with you online, you need a “Digital Transformation”.&lt;/p&gt;

&lt;p&gt;Organizations resist the kind of change a transformation introduces. Their control structures are designed to maintain the status quo. Every transformation attempt faces countless small battles: budget reviews that question the initiative, middle managers who protect their territories, processes that favor the old way of doing things. Each small loss in these daily skirmishes undermines the transformation’s chances of success.&lt;/p&gt;

&lt;p&gt;But there’s an even more spectacular failure that lies in wait for transformations that succeed. You see, when a transformation is a total success, it has won all the skirmishes and torn down every fence as far as the eye can see. The process of ripping out fence posts leaves no time for debates on their purpose, so the post-transformation landscape requires a process of re-learning thousands of past lessons.&lt;/p&gt;

&lt;p&gt;There’s often limited time for this process of re-learning, because the next transformation is just around the corner.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Transformation Paradox
&lt;/h2&gt;

&lt;p&gt;You don’t find large-scale transformations in healthy companies. The only reason to kick off such a huge shockwave of change is that the organization has become disconnected from the reality it operates in. It takes years of denial to build up a gap so large you have to use the brute-force of transformation to close it.&lt;/p&gt;

&lt;p&gt;The organization that reprimanded me in an annual review for advocating for web-based technology later kicked off a huge digital transformation project in an attempt to catch up on their lost decade. Their headquarters, which sits on a road named after their organization, now stands empty.&lt;/p&gt;

&lt;p&gt;This is just a single example from many, but the lesson is clear: the longer and harder you resist the changes around you, the more the doom seeps into your organization. If you want to spot this kind of ingrained organizational rot, you need look no further than its number one symptom: transformation projects. In contrast, organizations that watch the world changing around them and make continuous small adjustments never build up the level of decay that requires a transformation.&lt;/p&gt;

&lt;p&gt;Think of it like taking a shower. Adaptive organizations keep one hand on the temperature valve, making minor adjustments as the water temperature fluctuates, while rigid organizations ignore the gradual changes until they’re either freezing or scalded, then they frantically spin the valve from one extreme to the other, creating even more chaos.&lt;/p&gt;

&lt;p&gt;You only need to attempt a transformation because you’ve delayed beyond the last responsible moment. The failure has, for all intents and purposes, already happened. The transformation is the last throw of the dice for an organization, so they accept the collateral damage it will cause; the miles and miles of fence that must be flattened along the way and the resulting loss of crops to hungry goats.&lt;/p&gt;

&lt;p&gt;This is the paradox of transformations. You shouldn’t do them, but if you are doing one it’s probably your last resort, so you have to carry on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lasting Change Is More Deliberate
&lt;/h2&gt;

&lt;p&gt;It’s far better to aim for continuous and lasting change. This is a more deliberate process that replaces denial with courage to experiment. To make change stick, you need to be doing it at a smaller scale and all the time.&lt;/p&gt;

&lt;p&gt;Instead of converting an organization from paper to digital, you instead convert an organization from static to dynamic by making responsiveness to change part of its operating model. This takes a few crucial organizational capabilities, in particular a generative culture with psychological safety and transformational leadership.&lt;/p&gt;

&lt;p&gt;You’re not forcing people to experiment, you’re making it safe to do so.&lt;/p&gt;

&lt;p&gt;Continuous small-scale change reduces the chance of having to race to catch up. Instead of performing house-clearance style transformations where you throw out everything, then find yourself re-purchasing expensive items you should have kept, you’ll prevent the hoarding that makes it necessary to throw everything in a skip.&lt;/p&gt;

&lt;p&gt;Ironically, the organizations that have learned how to be good at constant small-scale change are better equipped to make a more dramatic change when they need to. Rather than a transformation, they undertake a Kaikaku or Kaizen Blitz to switch out larger puzzle pieces to boost their advantage. Unlike a transformation, these approaches to radical change start by understanding the current state and the target state, which often reduces the total amount of change required to reach the desired destination.&lt;/p&gt;

&lt;p&gt;If you want change to be successful, you do it all the time.&lt;/p&gt;

</description>
      <category>culture</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Stop Force-Feeding AI to Your Developers</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 27 Jan 2026 11:48:59 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/stop-force-feeding-ai-to-your-developers-375i</link>
      <guid>https://dev.to/_steve_fenton_/stop-force-feeding-ai-to-your-developers-375i</guid>
      <description>&lt;p&gt;&lt;strong&gt;Before You Buy Another AI Tool, Fix These 5 Things&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It’s great that you want your developers to be productive. They want this, too. What I struggle to understand in many managers is the stark contrast between their directive adoption of brute-force AI and their indifference to straightforward techniques that have been proven to be more impactful than coding assistants.&lt;/p&gt;

&lt;p&gt;This isn’t a new problem; it’s common for a top-down productivity drive from management to be a smokescreen that hides a deeper problem in their organization. Often, the underlying issue is managers who have lost touch with their teams’ work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Great AI Gavage
&lt;/h2&gt;

&lt;p&gt;There’s a rather unsavory practice called “gavage”, which is the process of force-feeding a duck or goose through a tube to increase the size of its liver by up to ten times. The mentality of “force more in, get more out” is how many managers approach AI adoption.&lt;/p&gt;

&lt;p&gt;Unsurprisingly, there are animal welfare concerns with this approach, and some countries have banned force-feeding and the &lt;a href="https://calf.law/factsheets/sales-bans" rel="noopener noreferrer"&gt;production, import, and sale of foie gras&lt;/a&gt;. If you’re a developer, there is no law to prevent the force-feeding of AI into your workflow. You depend on having great leaders who want real outcomes.&lt;/p&gt;

&lt;p&gt;Like all technology, &lt;a href="https://dora.dev/guides/how-to-innovate-with-generative-ai/" rel="noopener noreferrer"&gt;AI needs an adoption strategy&lt;/a&gt; that starts small, tracks its impact, and encourages experimentation at the ground level. You can achieve successful outcomes by engaging developers and allowing them to explore tools, determining where they are most helpful and how to integrate them into daily workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer vs Manager-Led Productivity
&lt;/h2&gt;

&lt;p&gt;You don’t find many developers who don’t want to be productive. Over the past three decades, most of the complaints I’ve heard from developers are about obstacles that hinder their progress. Their desire to deliver high-quality software drives them to acquire better computers and additional screens, and to secure a budget for a cloud test runner.&lt;/p&gt;

&lt;p&gt;These have a comparable annual cost to the license for an LLM-based tool. Getting a machine with double the RAM, an extra monitor that you’ll use for several years, or a faster build server are all small costs compared to a developer’s salary. Increased developer productivity pays back at a higher-than-salary rate if you create valuable software.&lt;/p&gt;

&lt;p&gt;If you make developers beg for better kit or tools while forcing them to use the AI tools you selected, I question whether you are motivated by productivity. Some other organizational pathology is in play here, and this path leads away from success.&lt;/p&gt;

&lt;p&gt;Most developers want to experiment with LLM-based tools. They want to compare different options and see how they fit into the overall picture. You need them to take this experimental approach, as this technology is still in its infancy. Working out where it makes a meaningful difference will take time and knowledge sharing.&lt;/p&gt;

&lt;p&gt;If you genuinely want productive developers, start with the productivity blockers they are already raising to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five Productivity Ideas To Try Before AI
&lt;/h2&gt;

&lt;p&gt;There have been several studies on the productivity benefits of AI. An expectation was set for some multiplicative factor benefit, like 2x or 10x productivity improvements. While you may achieve these numbers on an example task, they don’t accrue to the organization unless you look at the whole value stream.&lt;/p&gt;

&lt;p&gt;The real-world productivity benefit of LLM-based tools is typically between 5% and 20%.&lt;/p&gt;

&lt;p&gt;Assuming you’ve given your developers the basic tools of the trade (fast computers, plenty of screen real estate, the best development tools), here are five developer productivity boosts that all beat AI in terms of impact, based on research by &lt;a href="https://getdx.com/" rel="noopener noreferrer"&gt;DX&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Reduce Meeting-Heavy Days
&lt;/h2&gt;

&lt;p&gt;Some people are more productive in the morning, while others hit their peak productivity late in the afternoon. Everyone is different, but they all have something in common: Nobody is productive when their day is stacked full of meetings.&lt;/p&gt;

&lt;p&gt;Developers lose productivity when their calendars become fragmented. A day with four one-hour meetings scattered throughout isn’t just four hours of lost coding time; it’s often a complete write-off for deep work.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Encourage Flow
&lt;/h2&gt;

&lt;p&gt;When a developer gets their productivity flywheel spinning, it’s worth protecting. Each random interruption brings the flywheel crunching to a halt, and it takes time to bring it back up to speed. That doesn’t mean developers shouldn’t talk to each other, as having healthy information flow is crucial. It does mean creating a space where they can get up to speed and stay there for extended periods.&lt;/p&gt;

&lt;p&gt;Flow state, that magical condition where developers lose track of time and produce their best work, is fragile and valuable. For developers working on intricate logic or system design, interruptions don’t just pause progress; they can completely derail their mental model of the problem they’re solving. If you’re in an office, get them a space away from noisy phone calls and foot traffic where every person walking past diverts attention away from the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Improve CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;When a developer commits a change, fast feedback is crucial. If the build takes 20 minutes, developers must choose whether to be idle or context switch to another task. If the build fails, fixing it will be delayed because the developer is currently focused on another task. Switching between tasks means losing context around changes, which makes fixes take longer.&lt;/p&gt;

&lt;p&gt;This pattern continues throughout the CI/CD pipeline, with each delay amplifying the problems caused by late feedback, context switching, and increasingly large batches of change. Slow pipelines increase the cost to fix issues and discourage good development practices like refactoring, as it takes too long to flow changes to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Organize Information
&lt;/h2&gt;

&lt;p&gt;Developer productivity plummets when they can’t find the information they need to do their work. This includes everything from API documentation to deployment procedures, architectural decisions, and debugging runbooks. When information is scattered, outdated, or buried in someone’s email, developers waste hours hunting for answers they need to make progress.&lt;/p&gt;

&lt;p&gt;High-quality documentation isn’t necessarily comprehensive. It’s more important that it’s up to date and easy to find. Organizations that value extensive documentation make it harder to find what you need and impossible to keep current. When managers fail to recognize documentation as real work, developers tend to optimize for tasks that are rewarded, which slows down the entire team.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Simplify Developer Inner Loops
&lt;/h2&gt;

&lt;p&gt;The developer’s inner loop is the cycle of making, testing, and iterating changes. This is the heartbeat of productivity. When this loop is slow, cumbersome, or unreliable, it creates friction that compounds throughout the day. A developer who can make a change and see results in seconds will iterate more, experiment more, and ultimately build better software than one who waits minutes for each feedback cycle.&lt;/p&gt;

&lt;p&gt;The inner loop encompasses the entire development process, from setting up a development environment to making code changes, running tests, reviewing results, and debugging issues. Modern development might involve spinning up containers, connecting to databases, running build processes, and coordinating multiple services. Each point of friction in this loop multiplies across hundreds of daily iterations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Productivity Whole
&lt;/h2&gt;

&lt;p&gt;Managers force-feeding AI to developers think they have a productivity hole, but they need to stop and consider the productivity whole. Developers are surrounded by an environment that either supports or damages the team’s goals and outcomes. The productivity benefits of AI amount to 4% of a developer’s annual output, while eliminating meeting-heavy days yields a 29% improvement, and reducing deployment lead times brings an additional 16%.&lt;/p&gt;

&lt;p&gt;Once you’ve done these stage-one productivity improvements, it’s time to empower teams to choose their AI tools and experiment with how they integrate with their workflows. Help them secure multiple options and funding while they determine what works best for their workloads. Measure AI adoption by existing success measures, rather than inventing new ones or trying to capture the ever-intangible “productivity”.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Golden Gooseherd
&lt;/h2&gt;

&lt;p&gt;Your development team is your golden goose. They produce valuable software that drives your business. You don’t use gavage on a golden goose because you want those valuable eggs, not inflamed organs. Force-feed it and you’ll lose the golden eggs entirely.&lt;/p&gt;

&lt;p&gt;Managers practicing AI gavage focus on the immediate gratification of “AI adoption metrics” going up. They aim to boost developer productivity to the level of foie gras, simply because it sounds impressive in boardroom presentations. However, forced AI adoption creates artificially inflated metrics that mask underlying organizational dysfunction.&lt;/p&gt;

&lt;p&gt;The wise manager tends their golden goose instead. They remove obstacles, provide the right environment and tools, and give teams autonomy to thrive naturally. In that healthy environment, developers naturally experiment with AI tools that genuinely help them, rather than rejecting forced mandates.&lt;/p&gt;

&lt;p&gt;The golden eggs of reliable, high-quality, high-performance software delivery come from healthy geese, not force-fed ones.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>The DORA 4 key metrics become 5</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 20 Jan 2026 14:56:07 +0000</pubDate>
      <link>https://dev.to/_steve_fenton_/the-dora-4-key-metrics-become-5-1ceg</link>
      <guid>https://dev.to/_steve_fenton_/the-dora-4-key-metrics-become-5-1ceg</guid>
      <description>&lt;p&gt;DORA has been researching software delivery for over a decade, but most people are familiar with their work through their famous four key metrics. This post will help you understand how the metrics have changed and why. I also want to encourage more people to go deeper than the metrics, as the research has so much more to offer.&lt;/p&gt;

&lt;p&gt;The idea behind the DORA metrics is that they provide an objective way to understand your software delivery process. The metrics apply to a team and application, providing a strong signal of where teams should focus if they want to make a meaningful improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  The long-standing metrics
&lt;/h2&gt;

&lt;p&gt;The 4 keys have been around for a while, and you may already be familiar with them. They represent two categories, throughput and stability. Before DevOps arrived, most organizations considered these to be a trade-off. If you increased throughput, the result would be instability. However, when you align development and operations around the same goals, you quickly find that the things they do to improve throughput also increase stability.&lt;/p&gt;

&lt;p&gt;For example, manual deployments are out if you want to deploy frequently. They just take too long. So, you automate the deployment to make it faster. When you do this, you also make the deployment more reliable, more repeatable, and much less stressful. You intended to increase throughput, but you also improved stability. It’s “win-squared”.&lt;/p&gt;

&lt;p&gt;This is why culture, automation, lean software development, measurement, and sharing combine in DevOps to create such a shockwave of positive change. It breaks the traditional conflict between developers who want to introduce more change, and operations who resist the change to maintain a stable system.&lt;/p&gt;

&lt;p&gt;Let’s examine the long-standing four keys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k2acuj4lvs63dbmtmm8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7k2acuj4lvs63dbmtmm8.png" alt="DORA 4 key metrics 2024" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lead time for changes&lt;/strong&gt;: The time it takes for a code change to reach the live environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment frequency&lt;/strong&gt;: How often you deploy to production or to end users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Stability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Change failure rate&lt;/strong&gt;: The percentage of changes resulting in a fault, incident, or rollback.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time to recover&lt;/strong&gt;: How long it takes to get back into a good state after a bad deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The combination of these 4 metrics helps you solve problems in healthy ways. While you could increase deployment frequency by making someone a full-time copy-and-paste hero, you’d likely see an adverse effect on stability metrics, as the manual process invites accidents.&lt;/p&gt;
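
&lt;p&gt;To make the four keys concrete, here’s a minimal sketch of computing them from a deployment log. The record fields and the observation window are illustrative assumptions, not part of any DORA tooling:&lt;/p&gt;

```python
from datetime import datetime, timedelta

# Hypothetical deployment log; the field names are illustrative only.
deployments = [
    {"committed": datetime(2025, 1, 1, 9), "deployed": datetime(2025, 1, 1, 15), "failed": False},
    {"committed": datetime(2025, 1, 2, 10), "deployed": datetime(2025, 1, 3, 10), "failed": True},
    {"committed": datetime(2025, 1, 4, 9), "deployed": datetime(2025, 1, 4, 11), "failed": False},
    {"committed": datetime(2025, 1, 5, 9), "deployed": datetime(2025, 1, 5, 12), "failed": False},
]
# Time taken to restore a good state after each failed deployment.
recoveries = [timedelta(hours=2)]

# Lead time for changes: average commit-to-production time.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deployments per day over the observed window.
window_days = 5
frequency = len(deployments) / window_days

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to recover: average time to get back to a good state.
avg_recovery = sum(recoveries, timedelta()) / len(recoveries)

print(avg_lead_time, frequency, change_failure_rate, avg_recovery)
```

&lt;p&gt;Because the metrics come from the same log, a “copy-and-paste hero” who inflates deployment frequency would also push up the failure and recovery numbers, which is exactly the balancing effect described above.&lt;/p&gt;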

&lt;h2&gt;
  
  
  Metrics are better than blank walls
&lt;/h2&gt;

&lt;p&gt;Blank wall retrospectives are often used to drive the improvement process. The team will undoubtedly come up with ideas that improve their experience. Any motivated team of people will find ways to enhance their software delivery. Often, though, these improvements don’t translate into improved outcomes.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dora.dev/guides/dora-metrics-four-keys/" rel="noopener noreferrer"&gt;DORA metrics&lt;/a&gt; can help with this. Using measures for throughput (the flow of features) and stability (the ability to deliver without disruption), they identify specific areas that need improvement, and the statistical model of practices and capabilities produced by the research offers many ideas on what specific changes you could experiment with to improve things.&lt;/p&gt;

&lt;p&gt;This creates a complete feedback loop. For example, the metrics may highlight that you have a high change failure rate. If you frequently have to scramble to fix problems each time you deploy, you’ll disrupt the flow of valuable work, upset the software’s users, and damage your team’s reputation within your organization.&lt;/p&gt;

&lt;p&gt;When an organization doesn’t trust a development team, it starts adding heavyweight change control practices and irritating procedures. Nobody wants this, and the research found that it leads to even worse outcomes than the problems that prompted the extra control in the first place.&lt;/p&gt;

&lt;p&gt;So, the best way to use the metrics is within the team as part of the continuous improvement process. The numbers tell you where to look, the statistical model has some suggestions on how to improve (and you’ll have your own ideas, too), and you run an experiment to see if making a change to how you’re working results in an improvement.&lt;/p&gt;

&lt;h2&gt;
  
  
  The evolved metrics
&lt;/h2&gt;

&lt;p&gt;The updated metrics used in more recent reports still have a similar shape to the 4 keys. Throughput and stability remain the dual goals of the measurement system, and the same practices will continue to improve them. They just represent the world more clearly.&lt;/p&gt;

&lt;p&gt;Let’s look at the evolved version of the metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioid93lij9n6o6hfav1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioid93lij9n6o6hfav1s.png" alt="DORA 5 key metrics 2025" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lead time for changes&lt;/strong&gt;: The time it takes for a code change to reach the live environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment frequency&lt;/strong&gt;: How often you deploy to production or to end users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failed deployment recovery time&lt;/strong&gt;: How long it takes to get back into a good state after a bad deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Instability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Change failure rate&lt;/strong&gt;: The percentage of changes resulting in a fault, incident, or rollback.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rework rate&lt;/strong&gt;: The proportion of deployments that were unplanned, carried out to fix a production issue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first change is that the recovery time has moved from stability to throughput. This is sensible, as teams with a short lead time for changes can progress fixes quickly without needing to use an expedited process. When organizations depend on shortcuts to get emergency fixes deployed, they tend to introduce more problems than they solve.&lt;/p&gt;

&lt;p&gt;Next up, the stability category has been renamed to instability. That’s because a high change failure rate or rework rate is a sign of instability, but the absence of these signals doesn’t confirm stability, as they don’t capture every kind of instability. An example might help here. If you have a high temperature, it’s a sign that you’re unwell. However, a normal temperature doesn’t necessarily mean you’re healthy.&lt;/p&gt;

&lt;p&gt;And finally, rework rate is a new metric. You’d measure this by tracking the total number of deployments and the number of unplanned deployments. The rework rate is the number of unplanned deployments divided by the total number of deployments. For example, if you deployed 10 times and 3 of those deployments were unplanned, 3 divided by 10 gives a rework rate of 0.3 (or 30%).&lt;/p&gt;
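
&lt;p&gt;Expressed as code, the calculation is a one-liner; the counts below are just the worked example above:&lt;/p&gt;

```python
# Rework rate: unplanned deployments divided by total deployments.
# These counts match the worked example in the text.
total_deployments = 10
unplanned_deployments = 3

rework_rate = unplanned_deployments / total_deployments
print(f"{rework_rate:.0%}")  # prints 30%
```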

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0llnnm865bhif2pcwzx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0llnnm865bhif2pcwzx2.png" alt="rework rate DORA Report 2025" width="800" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How this impacts your measurement initiative
&lt;/h2&gt;

&lt;p&gt;If you’re using the 4 keys and it still helps you get better at getting better, there’s no need to rush to update your metrics. You may already be capturing the signal on rework rate through your change failure rate, so you don’t have many dark corners to worry about.&lt;/p&gt;

&lt;p&gt;A more useful addition to your measurement strategy is to look at the other boxes in the DORA model. There are metrics for reliability and well-being that offer opportunities to improve operational performance and culture, which makes your improvement efforts more holistic. When you reach beyond software delivery performance, you’ll find these other areas amplify all the good work you’ve already done.&lt;/p&gt;

&lt;p&gt;In particular, building a healthy generative culture is a crucial step to take if you want to reach the best levels of performance. And if you want to succeed, increasing your user-centricity powers both product and organizational performance.&lt;/p&gt;

&lt;p&gt;The research goes much deeper than the metrics, and all the reports produced to date remain relevant to software teams today. Read the &lt;a href="https://dora.dev/research/2025/dora-report/" rel="noopener noreferrer"&gt;latest DORA report&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
