<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jono Herrington</title>
    <description>The latest articles on DEV Community by Jono Herrington (@jonoherrington).</description>
    <link>https://dev.to/jonoherrington</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3831929%2F84167118-5088-4f0c-a210-dc60041da874.jpeg</url>
      <title>DEV Community: Jono Herrington</title>
      <link>https://dev.to/jonoherrington</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jonoherrington"/>
    <language>en</language>
    <item>
      <title>Most People Use AI Like Google. That's Why It Sucks.</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Tue, 28 Apr 2026 14:12:51 +0000</pubDate>
      <link>https://dev.to/jonoherrington/most-people-use-ai-like-google-thats-why-it-sucks-15c2</link>
      <guid>https://dev.to/jonoherrington/most-people-use-ai-like-google-thats-why-it-sucks-15c2</guid>
      <description>&lt;p&gt;For the first month I treated Copilot like a junior engineer with ambition and no guardrails.&lt;/p&gt;

&lt;p&gt;It would write code I never asked for. I'd be halfway through a function, and Copilot would leap ahead ... creating new methods, suggesting whole classes, trying to take initiative like a junior who wants to prove themselves.&lt;/p&gt;

&lt;p&gt;The breaking point came during a refactor. It produced code that didn't just ignore our style guide ... it created variables named &lt;code&gt;R&lt;/code&gt; and &lt;code&gt;T&lt;/code&gt;. It duplicated logic I'd already abstracted, writing the same pattern three times instead of recognizing it. The tests broke, and it didn't care. Because why would it? I hadn't taught it to care.&lt;/p&gt;

&lt;p&gt;I spent two hours cleaning up a mess that should have taken twenty minutes to write correctly.&lt;/p&gt;

&lt;p&gt;We teach engineers to take initiative. But with AI, you have to reel it in.&lt;/p&gt;




&lt;p&gt;That became the pattern. I'd prompt, it would overreach, I'd prune. Prompt, overreach, prune. I was spending more time editing AI output than I would have spent writing it myself.&lt;/p&gt;

&lt;p&gt;Not because the AI was bad. Because I was asking for answers when I should have been defining the system.&lt;/p&gt;

&lt;h2&gt;The Markdown File Realization&lt;/h2&gt;

&lt;p&gt;The shift happened when I stopped trying to re-prompt my way out of bad output. Instead, I changed what brain the AI pulled from.&lt;/p&gt;

&lt;p&gt;I created markdown files with our engineering standards. How we write requirements. What questions we ask before we scope. The difference between a user story and a task. How we think about tradeoffs. When we prefer duplication over abstraction. What "clean" means in our codebase.&lt;/p&gt;
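
&lt;p&gt;As a sketch of what one of those files might contain ... hypothetical content, not our actual standards ... something like:&lt;/p&gt;

```markdown
# Engineering Standards (excerpt)

## Naming
- No single-letter identifiers outside loop indices.
- Booleans read as predicates: `isEligible`, `hasAccess`.

## Abstraction
- Prefer duplication until a pattern appears a third time (rule of three).
- Never introduce a new abstraction inside a bug-fix PR.

## Requirements
- Every user story names the user, the need, and the acceptance criteria.
- Scope questions come before estimates, never after.
```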

&lt;p&gt;The next time an agent generated code, it didn't improvise. It executed patterns we'd already defined.&lt;/p&gt;

&lt;p&gt;That was the difference. I wasn't prompting a junior engineer anymore. I was orchestrating a senior engineer.&lt;/p&gt;

&lt;h2&gt;What Senior Engineers Actually Do&lt;/h2&gt;

&lt;p&gt;Senior engineers don't write better code because they type faster. They write better code because they recognize patterns. They know when to abstract and when to duplicate. They understand the constraints that aren't written down but exist in the culture of the codebase.&lt;/p&gt;

&lt;p&gt;You can't prompt that into an AI. You have to encode it.&lt;/p&gt;

&lt;p&gt;We built it into skills, rules, agents, hooks. We have markdown files for our BA persona. For our solution architect. For how we handle code review and QA. When an agent generates a spec or writes code, it references these files. It's not improvising. It's executing patterns we've already defined.&lt;/p&gt;

&lt;p&gt;The old constraint was headcount. "We don't have enough people." Now the constraint is algorithm quality. How well we've encoded judgment. How clearly we've defined what good looks like.&lt;/p&gt;

&lt;h2&gt;Why Architects Are Winning&lt;/h2&gt;

&lt;p&gt;Here's what I've noticed. Principal engineers and architects are adopting AI faster and deeper than mid-level developers. Their usage numbers are way higher.&lt;/p&gt;

&lt;p&gt;It's not because they're more technical. It's because they already think in systems.&lt;/p&gt;

&lt;p&gt;A mid-level engineer treats AI like a pair programmer. Prompt, review, accept, repeat. An architect treats AI like infrastructure. Define the patterns, encode the constraints, let the system execute.&lt;/p&gt;

&lt;p&gt;If you can think in systems ... and you can exponentially scale that thinking ... why wouldn't you?&lt;/p&gt;

&lt;h2&gt;The Skill That Changed&lt;/h2&gt;

&lt;p&gt;I thought the cost would be craft. The satisfaction of writing a clean function. It wasn't.&lt;/p&gt;

&lt;p&gt;The cost was learning a completely different skill. Not coding. Not even prompting. Orchestration.&lt;/p&gt;

&lt;p&gt;Understanding how to chain agents. Where to insert human judgment. How to encode standards rigid enough that the system can't violate them, yet flexible enough that it can still surprise you.&lt;/p&gt;

&lt;p&gt;The engineers struggling with AI aren't struggling with the tool. They're struggling because they spent years optimizing for a different skill ... writing code by hand ... and now the game has changed.&lt;/p&gt;

&lt;h2&gt;The Question&lt;/h2&gt;

&lt;p&gt;So here's what I watch for now.&lt;/p&gt;

&lt;p&gt;When an engineer hits a wall with AI, do they re-prompt ... or do they re-architect? Do they try to steer the AI with better words, or do they change what the AI is pulling from?&lt;/p&gt;

&lt;p&gt;The first approach scales linearly. One prompt, one output, one edit. The second scales exponentially. Define the system once, let it execute forever.&lt;/p&gt;

&lt;p&gt;Most people use AI like Google because that's what the interface suggests. Type, accept, move on. It feels like progress.&lt;/p&gt;

&lt;p&gt;But progress isn't speed of retrieval. It's leverage.&lt;/p&gt;

&lt;p&gt;Google gives you answers.&lt;/p&gt;

&lt;p&gt;A senior engineer gives you systems.&lt;/p&gt;

&lt;p&gt;Which one are you running?&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
    </item>
    <item>
      <title>You're Great at Writing Code. That's the Problem.</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Mon, 27 Apr 2026 13:59:17 +0000</pubDate>
      <link>https://dev.to/jonoherrington/youre-great-at-writing-code-thats-the-problem-2bce</link>
      <guid>https://dev.to/jonoherrington/youre-great-at-writing-code-thats-the-problem-2bce</guid>
      <description>&lt;p&gt;A senior engineer posted on Reddit recently about how it's almost like he's in grief. More than a decade of craft. The things he loved. The identity he built. And now it's all changing.&lt;/p&gt;

&lt;p&gt;I read that post three times. Not because it was surprising, but because it was familiar.&lt;/p&gt;




&lt;p&gt;I had a conversation with an architect recently who described what he's seeing as culture shock. For the first time in a long time, technology is moving so fast that it's forcing us to change our worldviews. Question our identities. The ground that felt solid suddenly isn't.&lt;/p&gt;

&lt;p&gt;I know that feeling. I've been writing code since I was thirteen. I've worn every hat ... design, frontend, backend, data engineering, SEO, architecture, leadership. I prided myself on being the unicorn who could do it all.&lt;/p&gt;

&lt;p&gt;And then I realized something uncomfortable.&lt;/p&gt;

&lt;p&gt;I was measuring myself by the craft when the value was always in what the craft could build.&lt;/p&gt;

&lt;h2&gt;The Identity Trap&lt;/h2&gt;

&lt;p&gt;There's something seductive about writing code by hand. The feel of it. The sense that you're doing "real" engineering. I get it. The satisfaction of a clean function, well-tested, elegantly named ... it's real.&lt;/p&gt;

&lt;p&gt;But I had a moment six months ago. I was deep in a refactor, writing line after line, feeling that familiar flow state. Then I looked up and realized I'd spent three hours on something an agent could have scaffolded in twenty minutes.&lt;/p&gt;

&lt;p&gt;Worse ... I'd enjoyed it. The work felt like craft. But the business value was a rounding error.&lt;/p&gt;

&lt;p&gt;That's the trap.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The skill that made you valuable is becoming table stakes. The new value is building systems, encoding judgment, orchestrating tools.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Culture Shock&lt;/h2&gt;

&lt;p&gt;That senior engineer on Reddit wasn't lazy. He was grieving. The thing he had built his identity around for more than a decade was shifting beneath him. The craft he loved was becoming ... different. Not worse. Not better. Different in a way that made him question whether the thing he had invested in still mattered.&lt;/p&gt;

&lt;p&gt;Some people think this is an AI bubble, like the internet bubble. And they're probably right from the business side. But here's what they're missing: even after the internet bubble burst, the web forever changed how we do business.&lt;/p&gt;

&lt;p&gt;I made my entire career off e-commerce. If the web had never happened, that wouldn't have been my career.&lt;/p&gt;

&lt;p&gt;The bubble might pop. The change won't reverse.&lt;/p&gt;

&lt;h2&gt;What Changed for Me&lt;/h2&gt;

&lt;p&gt;I gave agent mode a real shot. Not the "let me try this for a day" kind of shot. The "what if I actually leaned in" kind of shot.&lt;/p&gt;

&lt;p&gt;And I discovered something. AI isn't replacing my judgment. It's multiplying my reach.&lt;/p&gt;

&lt;p&gt;I'm a unicorn ... I've always been able to do the things ... but I was limited by time. There were only so many hours to write code, write words, think through problems, design systems. Now I can do things I was never able to do as an individual.&lt;/p&gt;

&lt;p&gt;I write a daily blog now. I use voice-to-text AI while walking, capturing thoughts that would have evaporated. I have systems that know my story library, that understand how I write from over a hundred articles, that help me amplify my voice instead of replacing it.&lt;/p&gt;

&lt;p&gt;This isn't about 10x typing speed. It's about exponential leverage.&lt;/p&gt;

&lt;p&gt;I've talked to engineers who describe the same shift. The ones who've figured it out aren't using AI because they're lazy. They're using it because they've built systems that compound. They're orchestrating agents, chaining workflows, buying leverage that compounds before late adopters even recognize the game has changed.&lt;/p&gt;

&lt;p&gt;The craft didn't disappear. It evolved.&lt;/p&gt;

&lt;h2&gt;The New Division&lt;/h2&gt;

&lt;p&gt;Here's what I'm seeing in my own teams and the companies I talk to.&lt;/p&gt;

&lt;p&gt;The divide isn't between people who use AI and people who don't. It's between people who are current with the profession and people who aren't.&lt;/p&gt;

&lt;p&gt;I've watched senior engineers reject agent mode because they think it's "cheating." I've watched juniors master orchestration and leapfrog people with ten years more experience. Not because they're smarter. Because they're not burdened by what the job used to require.&lt;/p&gt;

&lt;p&gt;The skill that made you valuable ... writing clean code, understanding systems deeply, making good architectural decisions ... that's not disappearing. It's becoming table stakes.&lt;/p&gt;

&lt;p&gt;The new value is everything around it. Building systems. Encoding judgment. Orchestrating tools that multiply your impact while you focus on the decisions that actually require human context.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you're still measuring yourself by lines written, you're optimizing for the last war.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;What to Do Instead&lt;/h2&gt;

&lt;p&gt;I'm not saying stop understanding code. That's the fear, right? That orchestration means abdication. That we'll have a generation of engineers who can prompt but can't debug.&lt;/p&gt;

&lt;p&gt;I don't think that's the risk. The risk is engineers who can write beautiful code but can't scale their impact. Who mistake craft for value.&lt;/p&gt;

&lt;p&gt;Here's what I'm telling my own team:&lt;/p&gt;

&lt;h3&gt;Learn the orchestration&lt;/h3&gt;

&lt;p&gt;Not as a shortcut. As a skill. Understanding how to chain agents, how to validate their output, where to insert human judgment ... that's the new craft.&lt;/p&gt;

&lt;h3&gt;Encode your standards&lt;/h3&gt;

&lt;p&gt;The engineers who are winning didn't just start prompting randomly. They built workflows. Defined constraints. Created feedback loops. The tool operates inside their standards, not the other way around.&lt;/p&gt;

&lt;h3&gt;Measure output, not input&lt;/h3&gt;

&lt;p&gt;Stop counting lines written. Start counting problems solved, systems shipped, leverage created.&lt;/p&gt;

&lt;h3&gt;Stay current or become irrelevant&lt;/h3&gt;

&lt;p&gt;This sounds harsh, but I mean it kindly. You can move with it or become a craftsperson in a world of factories.&lt;/p&gt;

&lt;h2&gt;The Question&lt;/h2&gt;

&lt;p&gt;So here's what I'd ask any senior engineer who's proud of how much code they still write by hand:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are you staying sharp ... or are you standing still?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The junior on your team who's learning orchestration isn't cutting corners. She's learning what the job is becoming. The engineers investing in leverage aren't wasting resources.&lt;/p&gt;

&lt;p&gt;The tech lead who rejected agent mode to stay "pure" isn't preserving craft. He's preserving a version of the job that won't exist.&lt;/p&gt;

&lt;p&gt;I write this as someone who loves writing code. Who still gets satisfaction from a clean function. Who will probably always enjoy the craft of it.&lt;/p&gt;

&lt;p&gt;But I've had to accept that craft alone isn't the value anymore. The value is what you build with it. The systems you create. The leverage you generate.&lt;/p&gt;

&lt;p&gt;Your AI agents aren't replacing your judgment. They're multiplying your reach.&lt;/p&gt;

&lt;p&gt;If you're still measuring yourself by lines written, you're optimizing for the last war. The current is moving. The question is whether you're moving with it.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Tech Debt Didn't Start with AI</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Sun, 26 Apr 2026 13:12:11 +0000</pubDate>
      <link>https://dev.to/jonoherrington/tech-debt-didnt-start-with-ai-4m7n</link>
      <guid>https://dev.to/jonoherrington/tech-debt-didnt-start-with-ai-4m7n</guid>
      <description>&lt;p&gt;There once was a junior who spent a week on one unit test. Thirty-seven commits. Each one broken. His senior approved the thirty-seventh because the sprint ended Friday. The test went to production. It passed, but only because it didn't actually test anything.&lt;/p&gt;

&lt;p&gt;Six months later, a real bug shipped. Same function. The checkout flow broke for twelve hours. The team spent two days in war rooms. A feature launch got pushed. The business felt it.&lt;/p&gt;

&lt;p&gt;That wasn't AI. That was a human approving code he didn't understand to make a deadline.&lt;/p&gt;
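
&lt;p&gt;A minimal sketch of that kind of test ... the function and names here are hypothetical, but the failure mode is the one described above: the test runs, reports green, and asserts nothing.&lt;/p&gt;

```javascript
// Function under test (made up for illustration).
function applyDiscount(total, code) {
  return code === "SAVE10" ? total - 10 : total;
}

// A "test" that always passes, because it never asserts.
function testApplyDiscount() {
  // Exercises the code path ... and never checks the result.
  applyDiscount(100, "SAVE10");
  console.log("test passed"); // reached no matter what applyDiscount returns
}

// What the test should have done instead: check the result.
function testApplyDiscountProperly() {
  const result = applyDiscount(100, "SAVE10");
  if (result !== 90) throw new Error(`expected 90, got ${result}`);
  console.log("test passed");
}

testApplyDiscount();
testApplyDiscountProperly();
```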




&lt;p&gt;I know that story intimately ... not because I was the junior, but because I've been the senior who clicked approve.&lt;/p&gt;

&lt;p&gt;The project delivery pressure is constant when you're shepherding code to production. The back-and-forth on pull requests feels like a tax you can't afford when the sprint is ending and the demo is Monday. So you start making calculations. &lt;em&gt;Is this worth the discussion? Will anyone notice if I just push this through?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I made that calculation once. Clicked approve on a PR I hadn't fully reviewed because I was already three meetings behind and the deploy window was closing. One line of code. A comparison written as an assignment: &lt;code&gt;=&lt;/code&gt; instead of &lt;code&gt;===&lt;/code&gt;. JavaScript did what JavaScript does ... assigned instead of compared ... and suddenly a critical path was returning truthy for every user.&lt;/p&gt;
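
&lt;p&gt;For anyone who hasn't been bitten by it, a minimal sketch of that bug ... the function name and shape are hypothetical, but the operator mix-up is the real one:&lt;/p&gt;

```javascript
// BUG: `=` assigns instead of comparing. `user.plan = "premium"` sets the
// property and evaluates to "premium", which is truthy ... so every user
// passes the check.
function isPremium(user) {
  if (user.plan = "premium") {
    return true;
  }
  return false;
}

// FIX: `===` compares strictly, without coercion or mutation.
function isPremiumFixed(user) {
  return user.plan === "premium";
}

console.log(isPremium({ plan: "free" }));      // true ... for everyone
console.log(isPremiumFixed({ plan: "free" })); // false, as intended
```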

&lt;p&gt;The release went out the door. Almost immediately, the issue surfaced.&lt;/p&gt;

&lt;p&gt;We had to roll back the entire release. Pull it back. Fix the line. Redeploy. The feature that was supposed to ship stayed in staging while we cleaned up the mess I had approved.&lt;/p&gt;

&lt;p&gt;When we traced it back, there it was: my approval stamp on a line I hadn't read carefully enough. The dashboard said PRs merged. It didn't say comprehension verified.&lt;/p&gt;

&lt;h2&gt;The Blame Reflex&lt;/h2&gt;

&lt;p&gt;Nothing about the root cause changed.&lt;/p&gt;

&lt;p&gt;This is the part that gets lost in the current conversation about "AI slop." Search Twitter or LinkedIn and you'll find endless threads about how AI is flooding codebases with garbage ... vibe coders shipping hallucinated dependencies, juniors deploying agents they don't understand, repositories becoming "junk drawers with a CI/CD pipeline."&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The AI didn't create that mess. Humans rushing did. The AI just made the rush faster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The frustration is real. The diagnosis is wrong.&lt;/p&gt;

&lt;p&gt;AI isn't creating technical debt. It's amplifying technical debt that already existed. It's exposing the gaps in your process that you could paper over when humans were the bottleneck. When the speed limit was human typing speed, your broken review process had natural constraints. Now that limit is gone, and the cracks are showing.&lt;/p&gt;

&lt;h2&gt;What the Research Actually Shows&lt;/h2&gt;

&lt;p&gt;A 2024 study from GitClear analyzed 153 million lines of changed code and found that code churn ... the percentage of lines that are reverted or updated within two weeks of being committed ... has increased significantly in AI-assisted environments. The code is moving faster, but it's also moving more. The suggestion isn't that AI writes worse code. It's that humans are reviewing it less carefully.&lt;/p&gt;

&lt;p&gt;Which brings us back to that junior with thirty-seven broken commits.&lt;/p&gt;

&lt;p&gt;The problem wasn't the thirty-seventh commit. It was the thirty-six broken ones that came before it, and the system that never stopped them. The system that let broken code reach production wasn't a tooling problem. It was a &lt;em&gt;permission&lt;/em&gt; problem. Someone had permission to ship code they didn't understand, and someone else had permission to approve it without understanding it either.&lt;/p&gt;

&lt;p&gt;AI didn't change that permission structure. It just accelerated it.&lt;/p&gt;

&lt;h2&gt;The Three Accountability Shifts&lt;/h2&gt;

&lt;p&gt;If you want to stop blaming AI for your technical debt, you need to make three shifts in how you think about accountability.&lt;/p&gt;

&lt;h3&gt;Shift 1: From Tool Blame to System Ownership&lt;/h3&gt;

&lt;p&gt;When that production incident hit from my stray &lt;code&gt;=&lt;/code&gt;, my first instinct was defensive. &lt;em&gt;The diff was long. The deadline was tight. Anyone could have missed it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But anyone didn't approve it. I did. And the system I was operating in ... where PR count mattered more than review depth, where velocity was celebrated and rigor was invisible ... made that mistake inevitable.&lt;/p&gt;

&lt;p&gt;AI puts the same pressure on every engineer now. The junior with agent mode is facing the same deadline squeeze I faced. The same dashboard that tracked my PR throughput is tracking theirs. The question isn't whether AI will generate slop. It's whether your system has permission structures that catch slop before it ships.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your AI policy isn't a documentation problem. It's a values problem. What you measure is what you get. What you don't measure is what drifts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Shift 2: From Process Theater to Process Reality&lt;/h3&gt;

&lt;p&gt;I've reviewed pull requests at four companies. The checkbox is always clicked. The approval is given. The JIRA ticket moves to Done. But the actual work of comprehension ... sitting with the code, tracing the edge cases, understanding the system being changed ... that work got traded for velocity long before AI arrived.&lt;/p&gt;

&lt;p&gt;I've sat in retrospectives where we discussed "improving our review process" for thirty minutes and then assigned the action item to "be more careful." As if care was a resource we could budget. As if attention wasn't already competing with Slack notifications, calendar invites, and the next sprint's planning.&lt;/p&gt;

&lt;p&gt;AI makes this worse only because it makes the theater more convincing. The PR looks professional. The code is syntactically clean. The agent followed patterns. It's easy to convince yourself that a quick skim is sufficient when the surface looks polished.&lt;/p&gt;

&lt;p&gt;But technical debt doesn't live on the surface. It lives in the assumptions. The edge cases. The interactions between systems that the author didn't fully model. The thing the junior didn't understand when they asked the AI to generate the fix.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can't review what you don't understand. And you can't understand what you haven't taken the time to sit with.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Shift 3: From Foundation Excuses to Foundation Reality&lt;/h3&gt;

&lt;p&gt;This is the hardest shift because it requires honesty about where your team actually is, not where your onboarding docs say you are.&lt;/p&gt;

&lt;p&gt;I worked with a team once that had extensive coding standards documents. Pages of them. Every new hire got the link. Every PR template referenced them. But when I started reading the actual PRs, I found the same three anti-patterns in every other commit. The standards existed. The enforcement didn't.&lt;/p&gt;

&lt;p&gt;The junior with thirty-seven broken commits didn't fail the system. The system failed him. He was operating exactly within the bounds of what was permitted. Commits were cheap. Approvals were easy. A test counted as done as long as CI showed green. The definition of "good enough" had drifted so far from "correct" that "compiles" became the bar.&lt;/p&gt;

&lt;p&gt;AI will operate within those same bounds. It will generate code that matches your patterns ... your good patterns and your bad ones. It will reproduce the shortcuts you take because those shortcuts are embedded in your codebase, your examples, your implicit standards.&lt;/p&gt;

&lt;p&gt;If your foundation is slop, AI is a slop multiplier. If your foundation is rigor, AI is a rigor multiplier. The tool doesn't care which one you have. It just makes more of it, faster.&lt;/p&gt;

&lt;h2&gt;The Foundation Question&lt;/h2&gt;

&lt;p&gt;So here's the question I'd ask any leader who's tempted to blame AI for their technical debt:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What were you allowing before the AI arrived?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not what was in your documentation. Not what was in your vision deck. What was actually permitted by your dashboard, your incentives, your daily trade-offs?&lt;/p&gt;

&lt;p&gt;If PRs merged without comprehension, AI will merge PRs without comprehension faster. If edge cases were handled by "we'll fix it in production," AI will generate code with that same assumption baked in. If your seniors were too busy to review carefully, your juniors with AI agents will be too busy to verify carefully.&lt;/p&gt;

&lt;p&gt;The junior with thirty-seven broken commits and the senior who approved them are still in your organization. They might be using Cursor now. They might be prompting Claude. But they're playing the same game with the same rules. The only difference is the speed.&lt;/p&gt;

&lt;p&gt;I still think about that rollback. The equal sign I didn't catch. The release we had to pull back. The feature that stayed in staging because I wanted to save five minutes on a Friday afternoon.&lt;/p&gt;

&lt;p&gt;The AI didn't create that scenario. I did. The AI just would have created it faster.&lt;/p&gt;

&lt;p&gt;That's the thing about technical debt. It doesn't care about your tools. It cares about your discipline. And discipline is what erodes when nobody's watching the thing that actually matters ... not how fast the code moves, but how well the humans understand it before it goes.&lt;/p&gt;

&lt;p&gt;Your AI isn't generating slop. It's reflecting the slop you were already permitting. The mirror is just clearer now.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>discuss</category>
      <category>leadership</category>
    </item>
    <item>
      <title>AI Is Becoming Infrastructure</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Sat, 25 Apr 2026 13:09:42 +0000</pubDate>
      <link>https://dev.to/jonoherrington/ai-is-becoming-infrastructure-47pd</link>
      <guid>https://dev.to/jonoherrington/ai-is-becoming-infrastructure-47pd</guid>
      <description>&lt;p&gt;I was talking to a tech lead a couple of months ago, over drinks. Someone who's been shipping production code for over twenty years. I asked him, almost as an afterthought, when was the last time he wrote a line of code without AI assistance.&lt;/p&gt;

&lt;p&gt;He paused. Actually paused. "Over a year," he said. "Maybe longer."&lt;/p&gt;

&lt;p&gt;The number wasn't the surprising part. It was the realization in his voice. He hadn't noticed the transition. It just ... happened.&lt;/p&gt;

&lt;p&gt;I've been thinking about that conversation ever since. Not because it's unusual, but because it's becoming the norm. The principal engineers I talk to, the ones running platforms at scale, are quietly shifting how they work. Not announcing it. Not debating it. Just ... doing it.&lt;/p&gt;

&lt;p&gt;This is how infrastructure arrives. Not with a press release. With a shrug.&lt;/p&gt;


&lt;h2&gt;The Pattern We Keep Repeating&lt;/h2&gt;

&lt;p&gt;I remember when Docker was controversial.&lt;/p&gt;

&lt;p&gt;Two senior engineers I worked with got into an actual argument about it. Virtual machines versus containers. Resource overhead versus isolation guarantees. One of them was convinced Docker was a toy for startups. The other thought VMs were dinosaurs.&lt;/p&gt;

&lt;p&gt;I didn't understand the intensity at the time. I was too busy playing with the new tool to notice the religious war forming around it. But looking back, I see what was happening. People were trying to apply Docker in places where it didn't fit yet. Companies with three engineers treating containerization like they were Google. The tool was right, but the context was wrong.&lt;/p&gt;

&lt;p&gt;Then CI/CD was a "nice to have." Then Kubernetes was "overkill." Every infrastructure shift starts as controversy before it becomes furniture.&lt;/p&gt;

&lt;p&gt;The pattern is consistent. First comes the debate ... should we use this? Then comes the adoption ... how do we use this? Then, eventually, comes the invisibility ... we use this.&lt;/p&gt;

&lt;p&gt;We're somewhere between adoption and invisibility now.&lt;/p&gt;

&lt;h2&gt;The Christmas Tree Problem&lt;/h2&gt;

&lt;p&gt;Don't get me wrong. AI is not fully infrastructure yet.&lt;/p&gt;

&lt;p&gt;Look at Claude's status page on any given Tuesday. Green dots, yellow dots, red dots. Systems failing, recovering, failing again. It's less like reliable plumbing and more like a Christmas tree lighting up in sequence. We're failing, but we're failing fast. We're learning in public at scale.&lt;/p&gt;

&lt;p&gt;The flakiness is part of the transition. Early CI/CD pipelines broke constantly. Early container orchestration ate memory and crashed nodes. Infrastructure doesn't arrive fully formed. It arrives with bruises.&lt;/p&gt;

&lt;p&gt;But here's what I'm noticing. The conversations are changing.&lt;/p&gt;

&lt;p&gt;A VP at a lifestyle brand told me his engineering teams started out asking ... "Should we use AI?" Three months ago, it became ... "How do we govern AI?" Now I'm hearing ... "How do we orchestrate agents through our existing pipeline?"&lt;/p&gt;

&lt;p&gt;The question moved from permission to plumbing. That's the signal.&lt;/p&gt;


&lt;h2&gt;What Infrastructure Actually Looks Like&lt;/h2&gt;

&lt;p&gt;Infrastructure has a specific quality ... you stop talking about it.&lt;/p&gt;

&lt;p&gt;Nobody holds meetings about DNS. Nobody writes strategy documents about load balancers. These things just ... are. They're assumed. They're the medium, not the message.&lt;/p&gt;

&lt;p&gt;AI is getting there. Not because the tools are perfect. Because the workflows are embedding.&lt;/p&gt;

&lt;p&gt;On my team, agents are part of the SDLC now. Not alongside it. Inside it. Code reviews don't separate "AI-assisted" from "manual" anymore. They review the output. Deployment pipelines don't ask who wrote the code. They validate that it meets standards. The distinction between "using AI" and "not using AI" is becoming as irrelevant as "using an IDE" versus "using a text editor."&lt;/p&gt;

&lt;p&gt;It's not a specific workflow. It's all the workflows. Not a specific integration point. All the integration points.&lt;/p&gt;

&lt;p&gt;This is what people miss when they ask ... "Is AI ready for production?" The question assumes AI is a thing you adopt or don't adopt. But infrastructure doesn't work that way. You don't adopt plumbing. You build houses, and the plumbing is just ... there.&lt;/p&gt;

&lt;h2&gt;The FOMO Trap&lt;/h2&gt;

&lt;p&gt;There's a counter-narrative I need to address. Every infrastructure shift creates this tension.&lt;/p&gt;

&lt;p&gt;Large companies need orchestration at scale. They have thousands of services, complex dependency graphs, teams that can't possibly understand every system they touch. Kubernetes makes sense for them.&lt;/p&gt;

&lt;p&gt;Smaller companies see the success stories and want the same. They have twelve engineers and three services, but they're standing up a full Kubernetes cluster because ... "that's what the big players do." They're applying infrastructure to problems that don't need it yet.&lt;/p&gt;

&lt;p&gt;AI has the same trap. I see teams with straightforward CRUD applications trying to build agent orchestration frameworks. I see companies with simple deployment pipelines adding complexity they can't maintain because "AI is the future."&lt;/p&gt;

&lt;p&gt;The future arrives unevenly. Not every team needs agents orchestrated through their pipeline. Not every problem benefits from AI automation. The infrastructure shift is real, but that doesn't mean you should front-run it.&lt;/p&gt;

&lt;p&gt;Know your context. Know your scale. Know what problem you're actually solving.&lt;/p&gt;

&lt;h2&gt;
  
  
  Six Months From Now
&lt;/h2&gt;

&lt;p&gt;I'll make a prediction I'm reasonably confident about.&lt;/p&gt;

&lt;p&gt;Six months from now, asking an engineer if they "use AI" will feel as strange as asking if they "use Git." The question assumes a choice that no longer exists as a meaningful distinction.&lt;/p&gt;

&lt;p&gt;The teams that thrive won't be the ones with the best AI policies. They'll be the ones where AI has become invisible. Where agents run through pipelines without fanfare. Where the conversation shifted from "should we" to ... "how do we make this reliable."&lt;/p&gt;

&lt;p&gt;The revolution isn't coming. It's quietly becoming the status quo.&lt;/p&gt;

&lt;p&gt;Blockchain faded because it never became infrastructure. It stayed specialized, stayed controversial, stayed a solution looking for problems that fit it. Every use case felt forced because the infrastructure never integrated.&lt;/p&gt;

&lt;p&gt;AI is different. It's integrating. Messily, imperfectly, with status pages that look like Christmas trees. But integrating nonetheless.&lt;/p&gt;

&lt;p&gt;We're not fully there. But I can see the transition from here. I've watched it happen enough times to recognize the pattern.&lt;/p&gt;

&lt;p&gt;Infrastructure doesn't make headlines. It just enables everything that does.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
    </item>
    <item>
      <title>I Run My AI Content Pipeline on a $20 VPS (Because My $200 PC Crashed)</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Fri, 24 Apr 2026 21:05:56 +0000</pubDate>
      <link>https://dev.to/jonoherrington/i-run-my-ai-content-pipeline-on-a-20-vps-because-my-200-pc-crashed-4e3m</link>
      <guid>https://dev.to/jonoherrington/i-run-my-ai-content-pipeline-on-a-20-vps-because-my-200-pc-crashed-4e3m</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026-04-16"&gt;OpenClaw Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The PC came from my daughter. She was getting a new one ... this was going in the trash. 16GB of RAM, a GPU as valuable as a potato, and the best part? It sounds like a rocket ship whenever I open Chrome. I'd been a Mac guy forever. The thought of using a Windows machine made my skin crawl. But the PC was free, so I took it. Within hours of setting it up, I downloaded OpenClaw. Within the first fifteen minutes I saw the black screen of death. Not blue ... black. I'd heard of the blue screen. Apparently the new thing is black. Maybe I was right all this time.&lt;/p&gt;

&lt;p&gt;Everyone said to get a Mac Mini ... you for sure want a local model for privacy. So I looked. The M4 Pro starts at $1,399 with 24GB unified memory ... enough to run models. My friends bought them, created LLCs, wrote off the hardware as CapEx. Good for them.&lt;/p&gt;

&lt;p&gt;I wasn't ready to drop fourteen hundred bucks.&lt;/p&gt;

&lt;p&gt;The thing is ... I'm not against spending money on tools. I'm against spending my money.&lt;/p&gt;

&lt;h2&gt;
  
  
  The $40 Discovery
&lt;/h2&gt;

&lt;p&gt;I was determined to find another solution. AWS is my bread and butter. It's my go-to. When I need cloud infrastructure, I don't think about Azure or Google Cloud. I think about AWS.&lt;/p&gt;

&lt;p&gt;I went that route first. Asked other people using OpenClaw what specs I'd need, then looked up AWS pricing based on those specs. Came back with the price ... and wanted to throw up.&lt;/p&gt;

&lt;p&gt;So ... I did what we all do. Asked ChatGPT. Asked Claude. Both acted like they'd never heard of OpenClaw. It must be their distant cousin they pretend not to know.&lt;/p&gt;

&lt;p&gt;So I went to Google. Typed "OpenClaw Cloud Hosting." Found a YouTube video by Hostinger. They made it look easy. I didn't believe it would work based on the specs ... but at the end of the day I was like, tag on Ollama cloud and ... "I can try it for $40."&lt;/p&gt;

&lt;p&gt;Spun up a KVM 1 instance with 1 vCPU and 4GB RAM, and stumbled into something that's now running 24/7 and reachable from my phone, my laptop, my Slack. The Mac Mini I was supposed to buy sits at $1,399 in my browser history, unwatched.&lt;/p&gt;

&lt;p&gt;I guarantee a $20 VPS is not as good as a Mac Mini. The models are obviously not running locally ... but it works for me.&lt;/p&gt;

&lt;h2&gt;
  
  
  The System, Not the Tool
&lt;/h2&gt;

&lt;p&gt;What I have isn't just "OpenClaw on a VPS." That's the headline. The reality is more interesting.&lt;/p&gt;

&lt;p&gt;OpenClaw is running on that $20 VPS 24/7. It's integrated into my daily workflows through Slack and Telegram. I can message it from anywhere ... laptop, phone, doesn't matter. It has access to my content pipeline skills: research, drafting, editing, story management.&lt;/p&gt;

&lt;p&gt;The assistant doesn't write for me. It removes the friction between having a thought and getting it down.&lt;/p&gt;

&lt;p&gt;The gap between "I have an idea" and "that idea is captured" has always been the hard part. Not the thinking. Not the editing. The transfer from brain to document.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The gap between "I have an idea" and "that idea is captured" has always been the hard part. Not the thinking. Not the editing. The transfer from brain to document.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;OpenClaw bridges that gap. I'm still the strategist. I'm still the editor. I'm still the one with the voice. But I don't get stuck on blank pages anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Authenticity Question
&lt;/h2&gt;

&lt;p&gt;Here's what I see happening with AI and content: people are using it to make up shit. They're generating posts about experiences they haven't had, advice they haven't tested, frameworks they haven't built. The AI writes it, they publish it, and it sounds ... off.&lt;/p&gt;

&lt;p&gt;I get why they do it. The content treadmill is brutal. Daily posting is unsustainable without help. But there's a difference between AI-assisted and AI-generated.&lt;/p&gt;

&lt;p&gt;My hundred-story library contains real things that happened to me. OpenClaw helps me structure them, find connections, get unstuck. But the source material is mine. The judgment about what to publish is mine. The voice that lands or doesn't land is mine.&lt;/p&gt;

&lt;p&gt;That's the part people miss when they outsource the whole thing. AI can amplify what you have. It can't create what you don't.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Actually Built
&lt;/h2&gt;

&lt;p&gt;For the curious, here's the stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hostinger KVM 1 VPS&lt;/strong&gt; ($20/month): 1 vCPU, 4GB RAM, 50GB NVMe, 4TB bandwidth, Ubuntu 22.04&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama Pro&lt;/strong&gt; ($20/month)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw&lt;/strong&gt;: Running 24/7 on the VPS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slack + Telegram integration&lt;/strong&gt;: Interface from any device&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom skills&lt;/strong&gt;: Story library, research assistance, drafting support, LinkedIn optimization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama&lt;/strong&gt;: Running models on the VPS for specific tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The VPS runs quietly. I SSH in whenever I need to. The rest of the time, OpenClaw is just ... there. An endpoint I can hit from anywhere. A consistent presence that knows my patterns.&lt;/p&gt;

&lt;p&gt;It's not fancy. It's reliable. It's been running for months without me touching it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's not fancy. It's reliable. It's been running for months without me touching it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What OpenClaw Gets Right
&lt;/h2&gt;

&lt;p&gt;I've tried a lot of AI tools. What OpenClaw gets right that others don't:&lt;/p&gt;

&lt;h3&gt;
  
  
  It's not trying to be a chatbot
&lt;/h3&gt;

&lt;p&gt;It's trying to be an assistant ... something with memory, with skills, with integration into your actual life. The skill system means you can teach it what you need, not just prompt it differently.&lt;/p&gt;

&lt;h3&gt;
  
  
  It lives where you want
&lt;/h3&gt;

&lt;p&gt;Local if you want. VPS if you want. The abstraction is portable. You're not locked into someone else's infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  It's built for builders
&lt;/h3&gt;

&lt;p&gt;The people who made OpenClaw seem to understand that the value isn't in the model ... it's in the system around the model. The orchestration. The memory. The integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means For You
&lt;/h2&gt;

&lt;p&gt;If you're reading this and thinking about personal AI, ask yourself what you're optimizing for.&lt;/p&gt;

&lt;p&gt;Privacy? A VPS on a reputable host is private enough for most workflows. Your threat model may differ.&lt;/p&gt;

&lt;p&gt;Speed? Local wins on latency, but only if your hardware is good. My daughter's old PC with Intel graphics was slower than the VPS.&lt;/p&gt;

&lt;p&gt;Cost? $40/month vs $1,400 upfront is math you can do.&lt;/p&gt;

&lt;p&gt;Control? OpenClaw gives you plenty. It's open. You own your data. You're not locked into anyone's ecosystem.&lt;/p&gt;

&lt;p&gt;The point isn't that my setup is better. The point is that "local AI" became a default answer before people asked what problem they were actually solving.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Win
&lt;/h2&gt;

&lt;p&gt;I was already writing daily. I'd been using Claude for that. But I'm on the Pro plan, and I was running into my weekly limit with the number of things I was asking it to do.&lt;/p&gt;

&lt;p&gt;With OpenClaw and Ollama, I've never hit my rolling window. The assistant is always there. I can message it without counting tokens or watching a progress bar. It removes the friction between having something to say and getting it said.&lt;/p&gt;

&lt;p&gt;My story library has 100+ entries. I add to it weekly. The assistant helps me find patterns, structure arguments, and get unstuck. But the stories are mine. The voice is mine. The judgment about what goes out is mine.&lt;/p&gt;

&lt;p&gt;That's the system. Not the Mac Mini. Not the VPS. The system of having source material, a process, and an assistant that removes friction instead of adding it.&lt;/p&gt;

&lt;p&gt;You don't need $1,400 to get started. You need a clear sense of what you're trying to solve ... and the willingness to build around your constraints instead of someone else's recommendation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Mac Mini I was supposed to buy sits at $1,399 in my browser history, unwatched. I don't need it. OpenClaw on a $20 VPS + Ollama Pro at $20 is the right abstraction for my wiring.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;If you're building something similar with OpenClaw, I'm curious about your setup. Drop a comment or find me on LinkedIn. I'm always interested in how people solve the same problem with different constraints.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>openclawchallenge</category>
    </item>
    <item>
      <title>AI Needs Curriculum, Not Better Prompts</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Fri, 24 Apr 2026 13:28:40 +0000</pubDate>
      <link>https://dev.to/jonoherrington/ai-needs-curriculum-not-better-prompts-599f</link>
      <guid>https://dev.to/jonoherrington/ai-needs-curriculum-not-better-prompts-599f</guid>
      <description>&lt;p&gt;You've been there.&lt;/p&gt;

&lt;p&gt;The prompt that should work. The rephrase that feels clever. The context you're sure will fix it this time. And the AI just keeps barreling forward ... confident, eager, completely lost.&lt;/p&gt;

&lt;p&gt;I was in that spiral over a year and a half ago, trying to get a module refactored. Each exchange felt like I was getting dumber. The AI wasn't being difficult. It was being helpful in exactly the wrong way.&lt;/p&gt;

&lt;p&gt;That's when I wanted to throw my monitor out the window.&lt;/p&gt;

&lt;p&gt;Not because the AI was wrong. Because I was. I was treating a systematic problem like a communication problem.&lt;/p&gt;

&lt;p&gt;I've thought about that moment a lot since. The frustration wasn't the AI's fault. It was mine. I was asking for output without building the system that produces good output.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The frustration wasn't the AI's fault. It was mine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Vending Machine Fallacy
&lt;/h2&gt;

&lt;p&gt;Here's how most engineers use AI.&lt;/p&gt;

&lt;p&gt;Prompt in. Code out. When it fails, prompt harder. Different words. More context. Hoping the next output works.&lt;/p&gt;

&lt;p&gt;We treat it like a vending machine. If the first dollar doesn't work, we try another dollar. Same machine. Same problem. Different input.&lt;/p&gt;

&lt;p&gt;That's nonsensical.&lt;/p&gt;

&lt;p&gt;When my junior breaks something ... and they will ... I don't rephrase the question. I don't keep feeding dollars into a broken machine. I trace their reasoning. Find the gap. Train the pattern. Encode the why so they learn.&lt;/p&gt;

&lt;p&gt;The junior who keeps going down the wrong track isn't being difficult. They're being human. They've lost sight of what they're actually trying to do. They're trying to please the person asking for output. It's a quintessential systems problem. The context is missing. The constraints are unclear. The definition of "good" hasn't been established.&lt;/p&gt;

&lt;p&gt;AI has the same problem. It just doesn't know it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-Determinism Is a People Problem
&lt;/h2&gt;

&lt;p&gt;Engineers love to complain that AI is non-deterministic. Same prompt, different outputs. Unpredictable. Unreliable.&lt;/p&gt;

&lt;p&gt;I've got news. Humans are non-deterministic too.&lt;/p&gt;

&lt;p&gt;Ask an engineer the same question three times ... morning, afternoon, after a bad deploy ... and you'll get different answers based on stress, sleep, what they ate. We've always managed variability with standards and accountability. Same solution applies to AI.&lt;/p&gt;

&lt;p&gt;The problem isn't that the tool is unpredictable. It's that we haven't built the system to make predictions useful.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The problem isn't that the tool is unpredictable. It's that we haven't built the system to make predictions useful.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Operating System
&lt;/h2&gt;

&lt;p&gt;I built something on my team. Not because AI joined the codebase. Because good engineering requires it.&lt;/p&gt;

&lt;p&gt;Lint rules as feedback loops. Not nitpicks in pull requests ... encoded patterns that teach before the mistake ships. Architectural tests as training data. Constraints that define what "good" looks like before the AI starts guessing.&lt;/p&gt;

&lt;p&gt;Here's the thing engineers miss. These aren't AI-specific ideas. We didn't create lint rules because AI is in the codebase. Good engineers already have lint rules to avoid nitpicks in pull requests. We don't have architectural tests because AI is in there. We have them because we want to scale without breaking things.&lt;/p&gt;

&lt;p&gt;The AI doesn't get special treatment. It operates inside the constraints that were already there.&lt;/p&gt;

&lt;p&gt;Now when my AI breaks, the system catches it. Teaches it. Tightens the pattern. The feedback is immediate, consistent, and encoded. Not "try again with better words." Try again inside boundaries that define what right looks like.&lt;/p&gt;
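&lt;p&gt;A minimal sketch of what an architectural test can look like, assuming a hypothetical three-layer convention (ui on top, domain beneath it, data at the bottom). The layer names and the edge list here are made up; a real suite would derive the edges from the actual import graph (tools like dependency-cruiser do exactly this).&lt;/p&gt;

```typescript
// Hypothetical layer convention: a module may import from its own
// layer or any layer beneath it, never back up the stack.
type Layer = "ui" | "domain" | "data";

const layerRank = { ui: 0, domain: 1, data: 2 };

interface ImportEdge {
  from: { file: string; layer: Layer };
  to: { file: string; layer: Layer };
}

// An edge violates the boundary when it points to a layer above
// the importing module (a smaller rank number).
function findViolations(edges: ImportEdge[]): ImportEdge[] {
  return edges.filter(
    (e) => layerRank[e.from.layer] > layerRank[e.to.layer]
  );
}

const edges: ImportEdge[] = [
  // ui importing domain: allowed
  { from: { file: "ui/Button.tsx", layer: "ui" },
    to: { file: "domain/cart.ts", layer: "domain" } },
  // domain importing ui: flagged
  { from: { file: "domain/cart.ts", layer: "domain" },
    to: { file: "ui/Spinner.tsx", layer: "ui" } },
];

// Logs the offending module(s) so the build can fail with a reason.
console.log(findViolations(edges).map((e) => e.from.file));
```

&lt;p&gt;The specific rule matters less than the shape: the constraint is executable, so the feedback is identical whether the import came from a junior, a senior, or an agent.&lt;/p&gt;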

&lt;h2&gt;
  
  
  Garbage In, Systems Out
&lt;/h2&gt;

&lt;p&gt;Real leaders create systems where people thrive. This isn't a new idea. It's the foundation of management.&lt;/p&gt;

&lt;p&gt;Give somebody a task without context, without training, without a definition of success ... even smart people produce garbage. Give them clear constraints, feedback loops, and guardrails ... suddenly they're producing at the level you need.&lt;/p&gt;

&lt;p&gt;AI is no different. "Garbage in, garbage out" applies no matter how smart the tool is.&lt;/p&gt;

&lt;p&gt;The engineers who will win this transition aren't the ones who prompt better. They're the ones who think systematically. Who build infrastructure before they ask for output. Who understand that managing AI isn't different from managing people ... both need context, constraints, and curriculum to produce consistently.&lt;/p&gt;

&lt;p&gt;People leaders who understand technology are going to dominate the next decade. Not because they can write the best prompts. Because they know how to build systems that produce good work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Accountability Is Architecture
&lt;/h2&gt;

&lt;p&gt;Accountability isn't "I prompted better."&lt;/p&gt;

&lt;p&gt;It's "I built the system that trains."&lt;/p&gt;

&lt;p&gt;When I was in that prompting spiral over a year and a half ago, I was trying to solve a people problem with better vocabulary. The AI wasn't failing to understand me. It was failing to operate inside a system that could teach it what I wanted.&lt;/p&gt;

&lt;p&gt;That's the shift. The question isn't how do I get better output from my AI. The question is how do I build the infrastructure that produces good output reliably.&lt;/p&gt;

&lt;p&gt;My junior's mistakes teach because I built the system that turns mistakes into learning. My AI's breaks teach because I built the system that catches breaks before they ship.&lt;/p&gt;

&lt;p&gt;Build the operating system first. Then accelerate.&lt;/p&gt;

&lt;p&gt;The vending machine doesn't need better dollars. It needs better design.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The Math Doesn't Work</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Thu, 23 Apr 2026 14:45:32 +0000</pubDate>
      <link>https://dev.to/jonoherrington/the-math-doesnt-work-4f13</link>
      <guid>https://dev.to/jonoherrington/the-math-doesnt-work-4f13</guid>
      <description>&lt;p&gt;I built a dashboard to track how long PRs sat in review. I wanted to see where velocity was getting stuck. It was a pattern that made me question every approval I'd ever given.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers That Didn't Add Up
&lt;/h2&gt;

&lt;p&gt;The dashboard showed two bars side by side. PRs under two hundred lines averaged fifteen minutes in review. PRs over a thousand lines averaged twenty minutes.&lt;/p&gt;

&lt;p&gt;Five minutes difference. For five times the code.&lt;/p&gt;

&lt;p&gt;I sat with that for a minute. A ten-line change that renames a variable gets nearly the same scrutiny as a five-hundred-line refactor that touches authentication, caching, and the checkout flow. The math doesn't work. More code, less review. Less code, more scrutiny.&lt;/p&gt;

&lt;p&gt;I know why it happens. The thousand-line PR is opaque. You can't evaluate what you can't trace. The dependencies spider across files you haven't opened in months. The test coverage looks comprehensive but you can't tell if the tests are testing the right thing. So you scan for syntax, check that the build passes, and move on. The fifteen-line PR is legible. Every choice visible. Every tradeoff exposed. Fair game for critique.&lt;/p&gt;

&lt;p&gt;We're not being lazy. We're being human. We push back on what we understand and surrender to what we don't.&lt;/p&gt;

&lt;p&gt;But here's the cost: the code we can debug gets rejected. The code we can't gets merged.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I Got It Wrong
&lt;/h2&gt;

&lt;p&gt;I've been the person rubber-stamping those thousand-line PRs.&lt;/p&gt;

&lt;p&gt;There was a Tuesday last quarter. I was running between two meetings, eating lunch at my desk when a notification popped up. A senior engineer had opened a twelve-hundred-line PR that touched our inventory system, our caching layer, and our add-to-cart flow. I had twenty minutes before my next call. I scrolled through the first hundred lines, saw familiar patterns, checked that tests passed, and clicked approve.&lt;/p&gt;

&lt;p&gt;Later that week, a junior engineer opened a fifteen-line PR. Just fifteen lines. It changed how we handled null checks in a single component. I had time. I left seven comments. Three of them were about naming conventions. One was about indentation.&lt;/p&gt;

&lt;p&gt;I didn't realize what I'd done until I saw the dashboard data. I had spent more time debating formatting on fifteen lines than I had spent evaluating a twelve-hundred-line change to core infrastructure. The large PR sailed through because I was busy and it was big and I trusted the engineer who wrote it. The small PR got interrogated because I had the bandwidth and the code was small enough that I could see every detail.&lt;/p&gt;

&lt;p&gt;That's not rigorous review. That's convenient review.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Preference Problem
&lt;/h2&gt;

&lt;p&gt;This is where I think the conversation about PRs usually goes wrong. We talk about review culture like the problem is that people aren't reviewing thoroughly enough. But the deeper issue is that most PR debates aren't about correctness at all. They're about preference.&lt;/p&gt;

&lt;p&gt;I've watched teams spend thirty minutes arguing about whether to use early returns or nested conditionals. I've seen PRs blocked over variable naming that follows every convention but the reviewer's personal taste. I've been in threads that went fifty comments deep about formatting choices that could have been handled by a linter in half a second.&lt;/p&gt;

&lt;p&gt;If you care about consistency that much, automate it. Add it to a git hook. Configure your formatter. Build it into CI. Stop making humans repeat the same debates every PR when a machine can enforce the standard without ego, without fatigue, without mood.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's Not AI Versus People
&lt;/h2&gt;

&lt;p&gt;We often frame these conversations as people versus AI. It's not.&lt;/p&gt;

&lt;p&gt;If an AI is opening a PR that's twelve hundred lines, it's because you're allowing the AI to open a PR that's twelve hundred lines. The AI will mimic your behavior and compound it. It's like having a kid ... you give them guidelines or they do whatever they want. Guardrails turn output into something good.&lt;/p&gt;

&lt;p&gt;The same pattern applies whether the PR comes from a senior engineer you've worked with for five years or an AI assistant that generated the code in thirty seconds. The bias isn't against AI. The bias is toward opacity. We rubber-stamp what we can't hold in working memory. We interrogate what we can see clearly.&lt;/p&gt;

&lt;p&gt;The engineers who complain that AI code gets more scrutiny than human code are missing the point. The AI code gets scrutiny because it's small enough to evaluate. The human code gets a pass because it's too big to comprehend. The problem isn't who's writing it. The problem is the size.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Changed
&lt;/h2&gt;

&lt;p&gt;We made a hard rule on PR size. Anything over two hundred lines got closed immediately. Not rejected ... redirected. Break it into vertical slices. Ship the inventory update end to end ... database, API, and UI together. Then the caching fix. Then the checkout flow. Each slice reviewable in under thirty minutes.&lt;/p&gt;

&lt;p&gt;And the leaders were accountable for holding the team accountable. Not optional. Not "when we have time." If a twelve-hundred-line PR showed up, we closed it and explained why.&lt;/p&gt;

&lt;p&gt;We also automated the preferences. Naming conventions, formatting, import order ... all of it went into git hooks and CI. If a machine can enforce it, humans don't debate it.&lt;/p&gt;
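&lt;p&gt;The size rule is easy to encode. Here's a hedged sketch of the CI gate ... in a real pipeline the per-file counts would come from parsing &lt;code&gt;git diff --numstat&lt;/code&gt;, and the threshold is whatever your team agreed on:&lt;/p&gt;

```typescript
// Sketch of a CI check for the 200-line rule. File paths and counts
// are hypothetical; a real gate would parse them from the diff.
interface FileDiff {
  path: string;
  added: number;
  removed: number;
}

const MAX_CHANGED_LINES = 200; // the team's agreed review budget

// True when the PR's total changed lines exceed the budget,
// which is the signal to close and redirect into vertical slices.
function exceedsReviewBudget(diffs: FileDiff[]): boolean {
  const total = diffs.reduce((sum, d) => sum + d.added + d.removed, 0);
  return total > MAX_CHANGED_LINES;
}

const smallPr: FileDiff[] = [
  { path: "src/null-checks.ts", added: 10, removed: 5 },
];
const bigPr: FileDiff[] = [
  { path: "src/inventory.ts", added: 600, removed: 250 },
  { path: "src/cache.ts", added: 300, removed: 50 },
];

console.log(exceedsReviewBudget(smallPr)); // false
console.log(exceedsReviewBudget(bigPr));   // true
```

&lt;p&gt;Wired into CI, the rule holds without anyone having to be the bad guy in the comment thread.&lt;/p&gt;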

&lt;p&gt;The engineers pushed back at first. They were used to working in branches for weeks, accumulating changes, shipping everything at once. It felt like more work to break things apart. It felt like more overhead.&lt;/p&gt;

&lt;p&gt;But something happened after the first month. The reviews got better. Not longer ... better. People were actually reading the code instead of skimming it. They were catching things: race conditions they'd missed before, edge cases that only showed up when the changes were isolated, assumptions baked into the architecture that didn't hold up under scrutiny.&lt;/p&gt;

&lt;p&gt;And review time shrank. Once the style debates were automated, engineers stopped spending thirty minutes on indentation and naming conventions. They spent that time on the actual logic.&lt;/p&gt;

&lt;p&gt;When the scope is clear and the machines handle the preferences, reviewers can focus on what actually matters: does this change do what it claims to do, and does it do it safely?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We're not being lazy. We're being human. We push back on what we understand and surrender to what we don't.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Real Metric
&lt;/h2&gt;

&lt;p&gt;Most people think more review is better. I think consistent review is better.&lt;/p&gt;

&lt;p&gt;Same standard. Same depth. Same courage to say "this doesn't make sense" or "this needs to be smaller" whether it came from Claude or a colleague.&lt;/p&gt;

&lt;p&gt;The dashboard taught me something I didn't want to learn. I wasn't reviewing code based on risk or importance. I was reviewing based on cognitive load. The code that was easy to hold in my head got my attention. The code that exceeded my working memory got my trust.&lt;/p&gt;

&lt;p&gt;That's a dangerous way to run an engineering organization. Trust should be earned by the code, not assumed because of the author or the size. Attention should be paid to complexity, not convenience.&lt;/p&gt;

&lt;p&gt;I still catch myself doing it. I see a massive PR roll in and I feel the urge to scan and approve, to trust the process, to assume someone else is paying attention. Then I remember the numbers. Fifteen minutes for two hundred lines. Twenty minutes for a thousand.&lt;/p&gt;

&lt;p&gt;The math doesn't work. But it can. If we're willing to close the PRs that are too big to review properly. If we're willing to automate the preferences so humans can focus on the decisions that matter. If we're willing to admit that our attention is a scarce resource and allocate it deliberately instead of conveniently.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The code we can debug gets rejected. The code we can't gets merged.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What This Means for Leaders
&lt;/h2&gt;

&lt;p&gt;Your job isn't to review every line. Your job is to build a culture where the lines that get reviewed are the ones that matter.&lt;/p&gt;

&lt;p&gt;That means you close the PRs that are too big to evaluate. Not because you're being difficult. Because you're being honest about human limits. The twelve-hundred-line PR isn't getting reviewed. It's getting trusted. And trust without verification isn't quality control.&lt;/p&gt;

&lt;p&gt;It also means you automate the noise. If your engineers are debating formatting in 2026, that's a leadership failure. Not theirs. Yours. You had the tools to remove that distraction and you didn't.&lt;/p&gt;

&lt;p&gt;Most importantly, it means you hold the line. When the senior engineer opens a massive PR and expects it to sail through because of their title, you close it. When the deadline pressure mounts and someone says "just approve it this once," you say no. The pattern is only a pattern because someone allowed it.&lt;/p&gt;

&lt;p&gt;The dashboard will show you the problem. But the fix requires you to be the kind of leader who acts on what the data reveals, even when it's inconvenient.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>discuss</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>My Junior Can Explain It. My Senior Can Defend It. The AI Just... Did It.</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Wed, 22 Apr 2026 21:02:35 +0000</pubDate>
      <link>https://dev.to/jonoherrington/my-junior-can-explain-it-my-senior-can-defend-it-the-ai-just-did-it-40ip</link>
      <guid>https://dev.to/jonoherrington/my-junior-can-explain-it-my-senior-can-defend-it-the-ai-just-did-it-40ip</guid>
      <description>&lt;p&gt;Ninety-four percent test coverage on our React component library. Storybook running clean. The kind of coverage that lets you sleep at night when you're responsible for components shipping across four teams.&lt;/p&gt;

&lt;p&gt;Then a designer slides into Slack. "Small change to the button component. Can we add a loading state variant?"&lt;/p&gt;

&lt;p&gt;Small change. Sure. I pulled up the file, highlighted the component, and asked GitHub Copilot to handle the refactor. Forty seconds later it had regenerated the props interface, added the variant logic, handled three edge cases I hadn't even thought about yet. Clean. Efficient. Exactly what I asked for.&lt;/p&gt;

&lt;p&gt;I ran the tests to be safe. Twelve failures. Red wall of text. No explanation, no trail, no "I changed X which broke Y because Z." Just ... done.&lt;/p&gt;

&lt;p&gt;This was over a year ago. Before AI tools started checking their own homework.&lt;/p&gt;

&lt;p&gt;The commit got rejected before it ever touched the repo. Git hooks doing their job, saving me from myself. But there I was, staring at fifty lines of generated code that was somehow both elegant and broken. Clever abstractions. Cleaner syntax than I would've written. And twelve test failures with zero explanation of what went wrong or why.&lt;/p&gt;

&lt;p&gt;I scrolled through the diff looking for the bug. Looking for the logic I could follow, the decision I could understand. There wasn't one. Just ... output.&lt;/p&gt;

&lt;p&gt;My junior engineer can explain it. They'll point to the ticket, the requirements, the conversation in Slack where we debated the approach. There's a thread you can pull.&lt;/p&gt;

&lt;p&gt;My senior engineer can defend it. The pattern they considered and rejected. The coupling they accepted to ship faster, and the technical debt they deliberately took on. There's reasoning you can interrogate.&lt;/p&gt;

&lt;p&gt;The AI just ... did it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My junior can explain it. My senior can defend it. The AI just ... did it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Defining the "Why"&lt;/h2&gt;

&lt;p&gt;The problem wasn't that Copilot broke the tests. The problem was we hadn't told it what passing looks like.&lt;/p&gt;

&lt;p&gt;AI doesn't make exceptions. It compounds exponentially. Whatever your system actually is, AI will expose it faster and louder. Good patterns become good code faster. Missing standards become invisible bugs sooner.&lt;/p&gt;

&lt;p&gt;This is actually a gift for leaders. Before AI, problems could live in the gap between "what we say we do" and "what the codebase shows." They'd go unnoticed, growing bigger until someone tripped over them in production. Now those gaps surface immediately. You know where your issues are because AI keeps walking into them.&lt;/p&gt;

&lt;p&gt;The teams struggling with AI aren't struggling because the tool is unpredictable. They're struggling because their standards aren't defined where AI can see them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI doesn't make exceptions. It compounds exponentially.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Encoding the Boundaries&lt;/h2&gt;

&lt;p&gt;We had to build differently. Not slower. Differently.&lt;/p&gt;

&lt;p&gt;We defined what good engineering looked like first. Error handling patterns. Component structure. When to abstract versus duplicate. How to handle loading states. Where business logic belongs.&lt;/p&gt;

&lt;p&gt;Then we encoded those definitions. Lint rules that catch the patterns. Architectural tests that enforce the boundaries. Automated checks that fail the build before a human ever sees the PR.&lt;/p&gt;
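
&lt;p&gt;As a minimal sketch (the layer paths and the rule are hypothetical illustrations, not any specific team's setup), an "architectural test" can be as small as a script that scans import lists and fails the build on forbidden dependencies:&lt;/p&gt;

```typescript
// Hypothetical architectural test: fail the build when a UI component
// imports directly from the data layer. Paths and rule are illustrative.

type FileImports = { file: string; imports: string[] };

const rules = [
  { from: /^src\/components\//, to: /^src\/db\//, why: "UI must go through the service layer" },
];

function violations(files: FileImports[]): string[] {
  const found: string[] = [];
  for (const { file, imports } of files) {
    for (const rule of rules) {
      if (rule.from.test(file)) {
        for (const imp of imports) {
          if (rule.to.test(imp)) {
            found.push(file + " imports " + imp + ": " + rule.why);
          }
        }
      }
    }
  }
  return found;
}

// One compliant file, one violation: the second entry should fail the build.
console.log(violations([
  { file: "src/components/Button.tsx", imports: ["src/services/api"] },
  { file: "src/components/Cart.tsx", imports: ["src/db/orders"] },
]));
```

&lt;p&gt;Run in CI next to lint, a check like this turns an unwritten boundary into one the build enforces ... and one every generated diff has to pass.&lt;/p&gt;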

&lt;p&gt;The tool doesn't operate in a vacuum anymore. It operates inside explainable boundaries.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We defined what good engineering looked like first. Then we encoded it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Pattern That Scales&lt;/h2&gt;

&lt;p&gt;Every engineering team has implicit standards. The senior engineer who reviews every PR and catches the same three mistakes. The architecture decisions that never got written down but everyone knows. The patterns we "just don't do here."&lt;/p&gt;

&lt;p&gt;AI can't see those standards. It can only see code. If the standards aren't encoded, AI operates in the gap between "what we say we do" and "what the codebase actually shows."&lt;/p&gt;

&lt;p&gt;The teams scaling AI successfully aren't the ones with the best prompts. They're the ones with the clearest constraints. Documentation that lives in the build pipeline, not the wiki. Rules that run before review, not during.&lt;/p&gt;

&lt;p&gt;Build the pattern AI can explain. Then scale it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Build the pattern AI can explain. Then scale it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;What This Means for Leaders&lt;/h2&gt;

&lt;p&gt;Your team is already using AI. They're already hitting this gap.&lt;/p&gt;

&lt;p&gt;The question isn't whether to allow AI. It's whether you've given AI something to be accountable to.&lt;/p&gt;

&lt;p&gt;Look at your code review process. How many comments are about patterns that could be automated? How many debates happen three times before someone writes them down? How much of your "engineering culture" lives in heads instead of systems?&lt;/p&gt;

&lt;p&gt;AI exponentially compounds what you already have. The teams that get this right won't be the ones with the best AI tools. They'll be the ones with the clearest definition of what good looks like.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I Stopped Trying to Focus</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Tue, 21 Apr 2026 13:46:22 +0000</pubDate>
      <link>https://dev.to/jonoherrington/i-stopped-trying-to-focus-4kb1</link>
      <guid>https://dev.to/jonoherrington/i-stopped-trying-to-focus-4kb1</guid>
      <description>&lt;p&gt;The Pomodoro timer was still running when I opened the fourth browser tab about something completely unrelated. Twenty-three minutes left on the clock. Search architecture document still open. We were adding personalized recommendation search through AI ... the kind of deep work that requires holding complex state in your head. The productivity system hadn't failed. I had outgrown the container it assumed I could fit into.&lt;/p&gt;

&lt;p&gt;The productivity industrial complex has a formula. Block your calendar. Close your notifications. Set a timer. Focus.&lt;/p&gt;

&lt;p&gt;I tried it all. The 25-minute intervals. The deep work blocks. The morning routine optimization. Each system promised the same thing: if I just arranged my environment correctly, my brain would cooperate.&lt;/p&gt;

&lt;p&gt;The problem was never the environment. The problem was that these systems were designed for brains that naturally return to tasks.&lt;/p&gt;

&lt;p&gt;ADHD isn't a focus problem. It's a neurological difference in executive function ... planning, prioritization, motivation, impulse control. Research on sustained attention in adults with ADHD shows that performance deteriorates significantly over time (what researchers call "time-on-task effects"). When tested on 20-minute sustained attention tasks, adults with ADHD showed moderate deficits in alertness, selective attention, and divided attention compared to neurotypical controls. The structured work periods that productivity systems promise don't reset attention for neurodivergent brains the same way they do for neurotypical ones. (Source)&lt;/p&gt;

&lt;p&gt;We can hyper-focus on things that interest us, but we struggle to sustain focus on demand. We can start strong, pull focus for maybe a day, but our brains don't naturally come back to the thing we were doing. Not because we lack discipline. Because our wiring doesn't loop back the way neurotypical brains do.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Not because we lack discipline. Because our wiring doesn't loop back.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That search feature was complex. Personalized recommendations through AI required holding vector embeddings, ranking logic, and platform integration patterns in working memory all at once. The kind of work where you need to hold architecture in your head while you build. I'd get started, make progress, then get pulled sideways. Not by distraction ... by the natural pattern of a brain that moves faster than its container allows.&lt;/p&gt;

&lt;p&gt;The productivity advice assumed the bottleneck was external. If I could just eliminate interruptions, I'd focus. But my interruptions were internal. Five solutions generated before my hands could type one. Architecture reconstructed three times while trying to write the first function. The gap between ideation speed and execution speed created a traffic jam of unexpressed thinking.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The gap between ideation speed and execution speed created a traffic jam.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Realization&lt;/h2&gt;

&lt;p&gt;The breakthrough wasn't another productivity system. I just stopped believing I needed to focus harder.&lt;/p&gt;

&lt;p&gt;I could sustain attention. The workflow just assumed attention should be sustained through mechanical execution. Typing one file, one function, one line at a time, while my brain had already moved to the fifth iteration.&lt;/p&gt;

&lt;p&gt;What I needed wasn't better discipline. It was different mechanics.&lt;/p&gt;

&lt;p&gt;I found them building a CLI tool for DevOps automation.&lt;/p&gt;

&lt;h2&gt;The Redesign&lt;/h2&gt;

&lt;p&gt;I was building a CLI tool for DevOps automation. Started with a base command. Next thought: write the tests. Then: sequential calls would slow this down, make it async. Then: the API calls need rate limiting. Then: circuit breakers, retries.&lt;/p&gt;

&lt;p&gt;With the old workflow, I'd have typed one file, gotten pulled into something else, lost the thread, and three hours later had half a command with four better approaches forgotten.&lt;/p&gt;

&lt;p&gt;With AI, I described the base command. Claude wrote the first pass. It used &lt;code&gt;any&lt;/code&gt; for everything. I pushed back. We're using TypeScript, not JavaScript with extra steps. It tightened the types. I described the test pattern. It wrote the tests. I described the async pattern, the rate limiting, the circuit breakers. Each iteration took minutes, not hours.&lt;/p&gt;

&lt;p&gt;The five solutions in my head didn't decay in the queue waiting for my fingers. They became the next five prompts.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The five solutions don't decay waiting for my fingers. They become the next five prompts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is amplification, not delegation. I'm still making every architectural decision. Still reviewing every abstraction. The difference is I'm working at the speed I think instead of the speed I type.&lt;/p&gt;

&lt;p&gt;That's when you see the criticism for what it is. Go to r/ExperiencedDevs. Scroll any AI post. You'll find the mantra: "AI is making engineers lazy."&lt;/p&gt;

&lt;p&gt;The angle they miss is neurodivergence. This isn't about AI making a certain type of person lazy. That person was already lazy.&lt;/p&gt;

&lt;p&gt;Engineers are inherently lazy. That's the job. The best engineers automate. They find cheat codes. They optimize. Now we're classifying which lazy we approve of and which we don't, which is its own irony.&lt;/p&gt;

&lt;p&gt;I'm lazy. When I'm engineering, I take the easiest approach because I have twenty things to do and each one needs my brain. If I can make them faster, I'm not thinking less. I'm thinking more deeply.&lt;/p&gt;

&lt;p&gt;That's something.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The grind was never proof of skill.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I spent fifteen years trying to fix my brain to match the tools. The tools should fit the brain. That's not limitation. That's design.&lt;/p&gt;

&lt;h2&gt;What This Means for Leaders&lt;/h2&gt;

&lt;p&gt;Look at your team. Someone is holding five solutions in their head and losing four of them to the gap between thinking and typing. They're not distracted. They're not undisciplined. They're waiting for a workflow that matches their speed.&lt;/p&gt;

&lt;p&gt;The engineers who think fastest often ship slowest. Not because they lack skill. Because the system is built for hands that move at average speed.&lt;/p&gt;

&lt;p&gt;AI doesn't just change what your team builds. It changes who can build at their natural pace. That's worth noticing.&lt;/p&gt;





&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>mentalhealth</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Writing Code Was Never the Hard Part</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Mon, 20 Apr 2026 13:47:27 +0000</pubDate>
      <link>https://dev.to/jonoherrington/writing-code-was-never-the-hard-part-10e2</link>
      <guid>https://dev.to/jonoherrington/writing-code-was-never-the-hard-part-10e2</guid>
      <description>&lt;p&gt;We were in UAT for a React checkout rebuild when someone said the thing that still haunts me. "Well, that's how it used to work, so we assumed it would continue to work that way." This was a completely new project. New codebase. New architecture. The new specs never defined that behavior, but the assumption felt so reasonable to them that nobody thought to ask.&lt;/p&gt;

&lt;p&gt;The project was greenfield. Fresh repository, blank slate, no inherited patterns to fall back on. The kind of fresh start that feels liberating until you realize you're building without guardrails.&lt;/p&gt;

&lt;p&gt;We'd spent months on the React frontend. Defined the interfaces. Built the components. Handled the edge cases we knew about. Then UAT started, and the gaps revealed themselves one by one. Not technical failures. Specification failures. Behaviors that existed in the old system that nobody thought to document because everyone assumed they'd carry over.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Not technical failures. Specification failures.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The phrase that still scares me: "That's how it used to work, so we assumed..." It wasn't irrational. From their perspective, it was practical. Why wouldn't it work that way? Except it was a completely new project, and that behavior was never in the spec. We never built it because we never knew it was expected.&lt;/p&gt;

&lt;p&gt;Looking back, the problem wasn't communication volume. It was contract clarity. We had meetings. We had tickets. We had documentation. What we didn't have was an explicit agreement about what "done" actually meant before we started building.&lt;/p&gt;

&lt;h2&gt;The Syntax Illusion&lt;/h2&gt;

&lt;p&gt;Every junior engineer thinks the job is the syntax. The semicolons. The function names. The clever abstraction.&lt;/p&gt;

&lt;p&gt;Every senior engineer who's been promoted twice knows better. The job is the conversation in the room three weeks before the syntax. The translation of ambiguity into specification. The pushback when the requirement doesn't match the system.&lt;/p&gt;

&lt;p&gt;I've shipped enterprise commerce platforms for over a decade. The expensive problems were never missing semicolons. They were assumptions that looked like requirements, requirements that shifted between meetings, behaviors that "everyone knew" except the people building the system.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The expensive problems were never missing semicolons. They were assumptions that looked like requirements.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The keyboard work was always the easy part. It just looked hard because we spent so much time doing it.&lt;/p&gt;

&lt;h2&gt;The Learning&lt;/h2&gt;

&lt;p&gt;After that UAT, we had to build differently. Not more documentation. Clearer contracts.&lt;/p&gt;

&lt;p&gt;We started defining contracts at every phase. Interface agreements before implementation. Acceptance criteria that didn't depend on interpretation. What "done" looked like, written down, agreed upon.&lt;/p&gt;

&lt;p&gt;This led to spec-driven development. Not as bureaucracy. As survival. When the spec is clear, the code writes itself. When the spec is ambiguous, every engineer invents their own version of the truth, and every stakeholder assumes the old rules still apply.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When the spec is clear, the code writes itself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The same contracts that helped us communicate with each other turned out to help with AI. The machine can't infer what you meant. It can only build what you specified. Ambiguity that creates human confusion creates machine confusion too. Clarity that prevents human error prevents machine error.&lt;/p&gt;

&lt;h2&gt;What AI Changes&lt;/h2&gt;

&lt;p&gt;Now everyone is panicking that AI writes the code. They're panicking about the wrong thing.&lt;/p&gt;

&lt;p&gt;AI is brilliant at the thing engineers always claimed was hard. It's terrible at the thing engineers always pretended was easy.&lt;/p&gt;

&lt;p&gt;It cannot sit in a meeting with a VP of Merchandising and ask the follow-up question that reveals the assumption nobody voiced. It cannot push back when the requirement contradicts the system the team next door just shipped. It cannot tell you that the acceptance criteria you just wrote contain a gap that will surface in UAT.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI is brilliant at the thing engineers claimed was hard. It's terrible at the thing engineers pretended was easy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The engineers who built their identity around being the smartest person in the room with a keyboard are discovering that wasn't actually the moat. The ones who learned how to translate ambiguity into specification just had their value compounded.&lt;/p&gt;

&lt;h2&gt;The New Divide&lt;/h2&gt;

&lt;p&gt;There are two kinds of engineers now. Those who can write specs that make AI productive. And those who can't.&lt;/p&gt;

&lt;p&gt;The first group is shipping faster than ever. They describe the architecture, the constraints, the edge cases. AI generates the implementation. They review, redirect, refine. The bottleneck moved from their fingers to their judgment.&lt;/p&gt;

&lt;p&gt;The second group is still fighting the same battles. Assumptions that surface in UAT. Requirements that shift between meetings. Code that does what was asked but not what was needed. They have the same problems, just with a faster typing assistant.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The bottleneck moved from their fingers to their judgment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Specification Skill&lt;/h2&gt;

&lt;p&gt;Writing clear requirements is a skill. Like any skill, it can be learned. Like any skill, most people don't bother.&lt;/p&gt;

&lt;p&gt;It starts with explicit contracts. What goes in. What comes out. What happens at the edges. Not "handle errors gracefully." "Return 400 for missing fields, 409 for conflicts, 500 for downstream failures."&lt;/p&gt;
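
&lt;p&gt;A minimal sketch of what that kind of explicit contract can look like in code. The error codes and field names below are hypothetical illustrations, not a real spec:&lt;/p&gt;

```typescript
// Hypothetical explicit error contract: every failure mode is named up
// front with its status code, instead of "handle errors gracefully".

type ApiError =
  | { status: 400; code: "MISSING_FIELD"; field: string }
  | { status: 409; code: "CONFLICT"; resource: string }
  | { status: 500; code: "DOWNSTREAM_FAILURE"; service: string };

function describe(err: ApiError): string {
  switch (err.status) {
    case 400:
      return "400: missing required field '" + err.field + "'";
    case 409:
      return "409: conflict on '" + err.resource + "'";
    case 500:
      return "500: downstream failure in '" + err.service + "'";
  }
}

console.log(describe({ status: 400, code: "MISSING_FIELD", field: "email" }));
```

&lt;p&gt;The point of the discriminated union is that ambiguity has nowhere to hide: a failure mode that isn't in the type isn't in the contract, and everyone finds out before UAT, not during it.&lt;/p&gt;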

&lt;p&gt;It continues with behavioral specifications. Not "fast." "P95 under 200ms, tested at expected peak load."&lt;/p&gt;

&lt;p&gt;It ends with context that doesn't fit in tickets. The dependency on the team next door. The constraint from the legacy system. The stakeholder who needs to sign off but doesn't attend standups.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Writing clear requirements is a skill. Like any skill, it can be learned.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the work that matters now. The translation layer between human intent and machine execution. AI eliminated the transcription tax. It didn't eliminate the thinking tax.&lt;/p&gt;

&lt;h2&gt;The Reveal&lt;/h2&gt;

&lt;p&gt;The engineers who will dominate this transition aren't the ones who code faster. They're the ones who specify clearer.&lt;/p&gt;

&lt;p&gt;They learned something years ago that the industry is just discovering. The keyboard was never the hard part. It was just the part we could see.&lt;/p&gt;

&lt;p&gt;Now everyone gets to find out.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
      <category>discuss</category>
      <category>systemsthinking</category>
    </item>
    <item>
      <title>AI Masters vs Everyone Else</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Sun, 19 Apr 2026 13:06:20 +0000</pubDate>
      <link>https://dev.to/jonoherrington/ai-masters-vs-everyone-else-8j2</link>
      <guid>https://dev.to/jonoherrington/ai-masters-vs-everyone-else-8j2</guid>
      <description>&lt;p&gt;I closed Cursor after ten minutes.&lt;/p&gt;

&lt;p&gt;It was September. Everyone was talking about AI agents and context windows and something called "agent mode" that supposedly changed everything. I downloaded Cursor, opened it, watched it index my codebase ... and felt my chest tighten. The interface was wrong. The shortcuts were wrong. Everything I'd spent fifteen years optimizing was suddenly foreign.&lt;/p&gt;

&lt;p&gt;I went back to VS Code that afternoon. Told myself I'd try again later. Kept maxing out my GitHub Copilot credits instead.&lt;/p&gt;

&lt;p&gt;Three weeks later, I tried again. This time I stayed long enough to build something real. Now I have ChatGPT, Claude, Cursor with multiple models, and OpenClaw + Ollama running in a virtual environment. I use different tools for different problems. I've become the person I couldn't imagine being in that first ten minutes.&lt;/p&gt;

&lt;p&gt;The divide I almost fell into isn't about willingness to try new things. It's deeper than that. The split won't be good developers versus bad developers. It'll be AI masters versus everyone else. And the gap is already widening faster than most people realize.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The split won't be good developers versus bad developers. It'll be AI masters versus everyone else.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The $2,000 Engineer&lt;/h2&gt;

&lt;p&gt;I've been talking to principal engineers across the industry. Large enterprise companies. Household names. People whose scope spans systems most developers never touch. And I'm seeing a pattern that contradicts the mainstream narrative entirely.&lt;/p&gt;

&lt;p&gt;The analysts say AI is a 2-5x multiplier. The principal engineers I'm talking to say that's wrong. They argue AI isn't multiplication ... it's exponentiation.&lt;/p&gt;

&lt;p&gt;One of them spends over $2,000 a month on tokens. Not because he's wasteful. Because he's built systems that compound. He's orchestrating agent loops, chaining workflows, training algorithms on his codebase until they anticipate his architectural decisions. The output isn't faster code. It's different code. Solutions he wouldn't have reached through traditional means.&lt;/p&gt;

&lt;p&gt;These aren't junior engineers copy-pasting Claude outputs. These are masters of their craft who recognized that systematic thinking ... the skill that got them to principal ... is the exact skill that unlocks AI's real potential.&lt;/p&gt;

&lt;p&gt;They're not prompting. They're building operating systems.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;They're not prompting. They're building operating systems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The License Turned In&lt;/h2&gt;

&lt;p&gt;I have another story. An old colleague. Tech lead at a well-known lifestyle brand. Solid engineer. Smart person. Someone who could solve problems other people got stuck on.&lt;/p&gt;

&lt;p&gt;His company doesn't mandate AI usage. Some teams use it, some don't. Leadership treats it like a preference ... Vim versus Emacs, tabs versus spaces. Personal choice.&lt;/p&gt;

&lt;p&gt;He turned in his AI license.&lt;/p&gt;

&lt;p&gt;Said he wanted to stay "pure." Keep his skills sharp the traditional way. Prove he could still code without assistance. And I get the instinct ... I had it in those first ten minutes with Cursor. The discomfort of a new tool feels like a threat to your identity.&lt;/p&gt;

&lt;p&gt;But here's what I can't reconcile. No one can deny the speed. No one can deny the output. The engineers spending $2,000 a month aren't producing marginally better work. They're producing categorically different work. Solutions at different altitudes. Architectures that wouldn't emerge from traditional workflows.&lt;/p&gt;

&lt;p&gt;My old colleague thinks he can catch up when he's ready. That the gap is just familiarity with a new interface. Ten minutes of discomfort, stretched across a few weeks.&lt;/p&gt;

&lt;p&gt;He's wrong.&lt;/p&gt;

&lt;h2&gt;What Masters Actually Do&lt;/h2&gt;

&lt;p&gt;The narrative that AI is "just prompting" is dangerously incomplete. Yes, you can get value from good prompts. But the engineers who are pulling ahead aren't writing better prompts. They're thinking in systems.&lt;/p&gt;

&lt;p&gt;A regular AI user goes to their codebase and says: "I need a function that does X." They write a prompt. They get frustrated when the output is non-deterministic. They try again. They settle for something close enough.&lt;/p&gt;

&lt;p&gt;The master builds differently. They start with guardrails. Error handling patterns. Architectural constraints encoded into the environment. They train their AI on their standards until it knows ... before they ask ... what "good" looks like in their system.&lt;/p&gt;

&lt;p&gt;Principal engineers have an unfair advantage here. They're already thinking in systems. They've spent years building mental models of how components interact, where complexity hides, which abstractions hold and which collapse. AI doesn't replace that thinking. It amplifies it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Principal engineers have an unfair advantage. They're already thinking in systems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The master engineer doesn't fight the non-determinism. They harness it. They know that asking the same question three times yields three valid approaches, and their job is selecting the right one for the context. They're curators, not just consumers.&lt;/p&gt;

&lt;p&gt;This is why "I use ChatGPT sometimes" doesn't mean what people think it means. The tool is available to everyone. The operating system around the tool is what separates the masters from everyone else.&lt;/p&gt;

&lt;h2&gt;The Non-Determinism Problem&lt;/h2&gt;

&lt;p&gt;People complain that AI is unreliable. Same prompt, different outputs. Unpredictable behavior. Hard to trust.&lt;/p&gt;

&lt;p&gt;But humans are equally non-deterministic. Ask an engineer the same architecture question three times ... morning, afternoon, after a bad deploy ... and you'll get three different answers based on sleep, stress, blood sugar, whether their standup ran long. We've always managed this variability with standards and accountability. Code review. Design patterns. Architectural decision records.&lt;/p&gt;

&lt;p&gt;The same solution applies to AI. The problem isn't tool reliability. It's absence of standards.&lt;/p&gt;

&lt;p&gt;The engineers who are thriving aren't luckier or working with better models. They've built constraint systems. Their AI operates inside encoded rules ... lint configs, architectural tests, workflow definitions that keep the creativity pointed in productive directions.&lt;/p&gt;

&lt;p&gt;This is what I missed in my first ten minutes with Cursor. I thought the tool was the unlock. It's not. The unlock is the system you build around the tool.&lt;/p&gt;

&lt;h2&gt;The Two Camps&lt;/h2&gt;

&lt;p&gt;This story has been told before. New technology arrives. Two camps form. One embraces the change, builds expertise, pulls ahead. The other waits for the dust to settle, for best practices to emerge, for the tool to mature.&lt;/p&gt;

&lt;p&gt;Only one camp ever wins.&lt;/p&gt;

&lt;p&gt;The divide isn't about talent. Talent is too distributed, too random. The divide is timing. How early someone decided this was worth getting good at.&lt;/p&gt;

&lt;p&gt;I see it in my own trajectory. Those first ten minutes of discomfort could have stretched into weeks, months, years. I could have kept maxing out Copilot credits in VS Code, telling myself I was staying current, while the real practitioners were building fundamentally different capabilities.&lt;/p&gt;

&lt;p&gt;The engineers spending $2,000 a month on tokens aren't spending money. They're buying optionality. They're compounding advantages that will compound again before the late adopters even recognize the game has changed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The engineers spending $2,000 a month on tokens aren't spending money. They're buying optionality.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;What This Means for Leaders&lt;/h2&gt;

&lt;p&gt;If you're leading engineering teams, you have a decision to make. You can treat AI like a productivity tool ... optional, preference-based, something engineers pick up if they're interested. Or you can recognize it as a fundamental shift in what engineering excellence looks like.&lt;/p&gt;

&lt;p&gt;The "purists" on your team aren't preserving their skills. They're preserving their comfort. And comfort is expensive in a market that's moving this fast.&lt;/p&gt;

&lt;p&gt;The question isn't whether your engineers use AI. It's whether they've built systematic approaches to using it well. Whether they can describe their eval harnesses, their agent loops, their constraint systems. Whether they've thought beyond prompting to orchestration.&lt;/p&gt;

&lt;p&gt;If they can't ... they're not in the first camp. They're in the second.&lt;/p&gt;

&lt;p&gt;The measuring stick has changed. You're not being evaluated against the engineer next to you anymore. You're being evaluated against the one who started practicing eighteen months ago.&lt;/p&gt;

&lt;p&gt;The split is here. The camps are formed. Darwin already explained what happens next.&lt;/p&gt;




&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>leadership</category>
    </item>
    <item>
      <title>AI Isn't a Crutch for Bad Developers ... It's the Unlock for Neurodivergent Ones</title>
      <dc:creator>Jono Herrington</dc:creator>
      <pubDate>Sat, 18 Apr 2026 18:02:40 +0000</pubDate>
      <link>https://dev.to/jonoherrington/ai-isnt-a-crutch-for-bad-developers-its-the-unlock-for-neurodivergent-ones-11ek</link>
      <guid>https://dev.to/jonoherrington/ai-isnt-a-crutch-for-bad-developers-its-the-unlock-for-neurodivergent-ones-11ek</guid>
      <description>&lt;p&gt;I was sitting at my desk at 10:47 PM on a Tuesday, Cursor open in front of me, when I finally understood what had been happening for months. My hands weren't moving. The cursor was blinking. And yet I was in flow ... the kind of deep focus I'd spent my entire career chasing and losing and chasing again. For the first time, my brain and the screen were moving at the same speed.&lt;/p&gt;

&lt;p&gt;A major global study by the Tavistock Institute surveyed over 2,000 tech employees across companies like Colt, Nokia, Samsung, and Vodafone. Of the 2,176 respondents, 562 self-identified as neurodivergent ... roughly 1 in 4. The real finding was about the gap in disclosure: over half hadn't told their employers because they lacked a formal diagnosis or didn't see the value. Many developers haven't had language for what they've been experiencing. (&lt;a href="https://www.change-the-face.com/neurodiversity-in-tech/" rel="noopener noreferrer"&gt;Source&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;The loudest voices in engineering keep framing AI as the great equalizer that will let mediocre developers ship junk code. They're missing what's actually happening.&lt;/p&gt;

&lt;p&gt;AI isn't a crutch for bad developers.&lt;/p&gt;

&lt;p&gt;It's the unlock for neurodivergent ones.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I've built a career on the assumption that the eight hours between "I know what to do" and "I've done it" was a tax I had to pay forever.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;The Tax I Thought Was Permanent&lt;/h2&gt;

&lt;p&gt;I've been coding since I was thirteen. I have ADHD. I went undiagnosed until I was thirty-six.&lt;/p&gt;

&lt;p&gt;Those twenty-three years were ... frustrating is too small a word. I never knew why getting pulled away from things made me so irritable. Why I couldn't get into the zone the same way other people seemed to. Why, once I finally found that zone, it felt like a physical injury when someone interrupted it.&lt;/p&gt;

&lt;p&gt;I built a career on the assumption that the eight hours between "I know what to do" and "I've done it" was a tax I had to pay forever. That the friction was just ... me. That the forty-five minutes of warmup before I could write the line I'd been holding in my head all morning was a character flaw I needed to engineer around.&lt;/p&gt;

&lt;p&gt;The hardest part of writing code with ADHD was never the thinking. It was the typing. The context switching. The rebuilding of state every time I came back to the terminal after lunch.&lt;/p&gt;

&lt;p&gt;My brain moves like a conductor thinking about thirty things at once. The strings need to come in here, but also watch the percussion section, and is that soloist rushing, and what's happening with the dynamics in the back rows? When you're writing code ... one file, one function, one line at a time in a linear text editor ... that multithreaded mind becomes a liability. I would get stalled because I would just be continually thinking and rethinking and rethinking. Five ways to implement something before I even started typing.&lt;/p&gt;

&lt;p&gt;That tax is gone.&lt;/p&gt;

&lt;h2&gt;
  When the Conductor Found an Orchestra
&lt;/h2&gt;

&lt;p&gt;The shift came when I realized that the flow state I get with AI is a benefit of my focus, not a workaround for it. I can hyper-focus on certain things. The ability to get from zero to one and then keep iterating means that, for the first time in my entire life, the screen moves almost as fast as my brain does.&lt;/p&gt;

&lt;p&gt;I've never thought AI was cheating. One of my top five CliftonStrengths is Competition. There's a saying in sports: if you aren't cheating, you aren't trying. The best developers find the cheat codes and the holes. That's what makes them great. They think through edge cases. They approach things differently than people who follow the rules.&lt;/p&gt;

&lt;p&gt;AI doesn't replace my judgment. It doesn't replace my ability to see thirty things at once and hold the architecture in my head. It eats the overhead for breakfast. The transcription tax. The syntax lookup. The "how do I express this thought in code" friction that used to burn forty-five minutes every single session.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI doesn't replace my judgment. It eats the overhead for breakfast.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  The Team I Didn't Design For
&lt;/h2&gt;

&lt;p&gt;I've watched the same thing happen to engineers on my team who learn differently, think differently, work differently.&lt;/p&gt;

&lt;p&gt;We assume everyone thinks like us. Even when we intellectually know better ... our day-to-day frustrations betray us. We build processes for carbon copies. We put people in boxes, and if they don't fit, we tell them to conform to the box, not to expand the box.&lt;/p&gt;

&lt;p&gt;The truth is that we all have different ways of learning. Some learn orally. Some learn better visually. Some learn better by sitting in a classroom talking about theory. Others learn better by getting into the mess and making mistakes.&lt;/p&gt;

&lt;p&gt;AI doesn't force everyone into one mold, and the unlocks land differently for each of us. How we each approach AI is different, and that's actually the beauty of it.&lt;/p&gt;

&lt;p&gt;One person on my team has a pilot's license. They think in checklists and systems. For them, plan mode is the unlock. Being able to spec every last thing out, have the AI ask clarifying questions, keep up with their brain wave ... that's the fit.&lt;/p&gt;

&lt;p&gt;Another has a scholar background. Master's degree, deep researcher. They gravitate toward ask mode. Starting with questions, iterating on understanding before building. The AI keeps up with their need to explore.&lt;/p&gt;

&lt;p&gt;A third just combos their way in and goes straight toward agent mode. No spec. Just velocity and iteration.&lt;/p&gt;

&lt;p&gt;Iron against iron creates sparks. That's the type of sparks we need on our teams. But for years, only one of these thinkers got to operate at full speed. The task-list person got buried in planning paralysis. The scholar got stuck in research loops. The combo-coder got dinged for "moving too fast without thinking."&lt;/p&gt;

&lt;p&gt;The neurotypical engineer who could just sit and grind for six hours straight ... that person set the pace. That person's workflow was "normal." Everyone else was a deviation that needed accommodation.&lt;/p&gt;

&lt;p&gt;Now the ones who used to get edged out are shipping faster than that engineer ever did.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The ones who used to get edged out are shipping faster than that engineer ever did.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  The Tax on Focus
&lt;/h2&gt;

&lt;p&gt;A 2023 controlled experiment posted to arXiv found that developers using GitHub Copilot completed tasks 55.8% faster than those without AI assistance. The researchers didn't segment by neurotype, but the mechanism is clear. AI reduces activation cost ... the cognitive tax of starting. For people with ADHD, that tax is higher. (&lt;a href="https://arxiv.org/abs/2302.06590" rel="noopener noreferrer"&gt;Source&lt;/a&gt;) The distractions of the modern-day world ... Slack, email, calendar notifications ... hit different when your executive function is already a limited resource.&lt;/p&gt;

&lt;p&gt;The AI doesn't just make me faster. It makes the fragments of focus I still have more valuable. When I get twenty minutes between meetings, I don't spend nineteen of them rebuilding context. I spend one. Then I ship.&lt;/p&gt;

&lt;p&gt;This is why the "AI is for bad developers" crowd misses what's actually happening.&lt;/p&gt;

&lt;p&gt;They're so busy defending the keyboard that they're missing who just got let into the room.&lt;/p&gt;

&lt;p&gt;They're not wrong that AI can paper over gaps. A junior engineer can ship senior-looking code without understanding it. A developer can generate features without ever sitting with the system long enough to develop judgment. Those are real risks.&lt;/p&gt;

&lt;p&gt;But they're wrong about who the tool is unlocking. They're wrong about what "good developer" even means when the constraints change.&lt;/p&gt;

&lt;p&gt;Different brains were always doing the work. They just had a worse tax rate.&lt;/p&gt;

&lt;p&gt;The engineers who could sit and grind six hours straight weren't better. They were better suited to a toolset designed by people who could sit and grind six hours straight. The IDE, the terminal, the git workflow, the PR review process ... all of it optimized for a particular cognitive style. The rest of us were paying a productivity tax for the crime of thinking differently.&lt;/p&gt;

&lt;p&gt;AI doesn't level the playing field by lowering the bar. It levels the playing field by removing a barrier that was never about skill in the first place.&lt;/p&gt;

&lt;h2&gt;
  What This Means for Leaders
&lt;/h2&gt;

&lt;p&gt;I keep watching leaders who think their job is policing AI usage. Tracking credits, enforcing policies, making sure people aren't "cheating." They're missing the same thing I had to learn about open floor plans destroying deep work. The same thing I had to recognize about "culture fit" being code for "thinks like me."&lt;/p&gt;

&lt;p&gt;We've gotten better over the years. More aware of different learning styles, sensory processing, the invisible tax of environments that don't fit. But we still default to designing for the median and accommodating the edges.&lt;/p&gt;

&lt;p&gt;AI is flipping that. The edges are finding tools that fit their minds for the first time. The median is discovering that their six-hour grind sessions aren't actually the only way to ship quality code.&lt;/p&gt;

&lt;p&gt;Your job as a leader isn't to police who's using AI and how. It's to notice who's suddenly shipping faster, who's suddenly contributing more, who's suddenly engaged in ways they weren't before ... and ask what changed.&lt;/p&gt;

&lt;p&gt;Sometimes the answer is a new tool. Sometimes the answer is that the tool finally fits the person.&lt;/p&gt;

&lt;p&gt;Different brains were always doing the work. They just had a worse tax rate.&lt;/p&gt;

&lt;p&gt;The tax is gone.&lt;/p&gt;

&lt;p&gt;One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. &lt;a href="https://www.jonoherrington.com/newsletter" rel="noopener noreferrer"&gt;Subscribe for free&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mentalhealth</category>
      <category>productivity</category>
      <category>leadership</category>
    </item>
  </channel>
</rss>
