<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Leonid Bugaev</title>
    <description>The latest articles on DEV Community by Leonid Bugaev (@leonid_bugaev_51880f4aa87).</description>
    <link>https://dev.to/leonid_bugaev_51880f4aa87</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3894592%2F46d1e7e7-f074-4ce0-9117-3b4df83e6863.png</url>
      <title>DEV Community: Leonid Bugaev</title>
      <link>https://dev.to/leonid_bugaev_51880f4aa87</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/leonid_bugaev_51880f4aa87"/>
    <language>en</language>
    <item>
      <title>AI Made Implementation Faster. Verification Is Still the Bottleneck</title>
      <dc:creator>Leonid Bugaev</dc:creator>
      <pubDate>Thu, 23 Apr 2026 15:35:12 +0000</pubDate>
      <link>https://dev.to/leonid_bugaev_51880f4aa87/ai-made-implementation-faster-verification-is-still-the-bottleneck-2o89</link>
      <guid>https://dev.to/leonid_bugaev_51880f4aa87/ai-made-implementation-faster-verification-is-still-the-bottleneck-2o89</guid>
      <description>&lt;p&gt;AI made implementation dramatically faster.&lt;/p&gt;

&lt;p&gt;Trust did not.&lt;/p&gt;

&lt;p&gt;I live in two different worlds now.&lt;/p&gt;

&lt;p&gt;In one, I build my own projects with AI and ship more software than ever. I have written more software in the last two years than across the rest of my career, and I have barely written any code manually in the last year.&lt;/p&gt;

&lt;p&gt;In the other, I lead engineering for software used by banks, governments, and other regulated environments, where mistakes are expensive and confidence matters more than speed.&lt;/p&gt;

&lt;p&gt;In both worlds, I keep hitting the same wall:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation got dramatically faster. Trust did not.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is the part I think the industry keeps smoothing over.&lt;/p&gt;

&lt;h2&gt;Faster code generation is not faster engineering&lt;/h2&gt;

&lt;p&gt;The current AI coding conversation often assumes that if code generation speeds up, engineering speeds up too.&lt;/p&gt;

&lt;p&gt;That is not what I see.&lt;/p&gt;

&lt;p&gt;On my own projects, I can build much faster than before. AI helps me move quickly, clean things up, write tests, refactor, and push ideas further in less time.&lt;/p&gt;

&lt;p&gt;But it also asks me to trust more.&lt;/p&gt;

&lt;p&gt;I am not just delegating typing.&lt;/p&gt;

&lt;p&gt;I am delegating thinking, validation, and judgment too.&lt;/p&gt;

&lt;p&gt;And I am still not sure where the safe line is.&lt;/p&gt;

&lt;p&gt;In enterprise software, the picture is different but the problem is the same.&lt;/p&gt;

&lt;p&gt;AI absolutely helped us in some areas. It reduced noise. It reduced interruption-based work. It helped other teams answer questions about system behavior without constantly pulling senior engineers into ad hoc investigations.&lt;/p&gt;

&lt;p&gt;That mattered.&lt;/p&gt;

&lt;p&gt;People were less interrupted. Context switching went down. Engineers were happier.&lt;/p&gt;

&lt;p&gt;But it did not suddenly make us ship features 2x faster.&lt;/p&gt;

&lt;p&gt;Not even close.&lt;/p&gt;

&lt;p&gt;Because implementation was never the whole job.&lt;/p&gt;

&lt;p&gt;Verification is the bigger slice.&lt;/p&gt;

&lt;h2&gt;The verification gap&lt;/h2&gt;

&lt;p&gt;The phrase I keep coming back to is:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;verification gap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By that I mean the distance between what I intend the software to do and what I can actually prove about its behavior.&lt;/p&gt;

&lt;p&gt;Between intended behavior and demonstrated behavior.&lt;/p&gt;

&lt;p&gt;That gap always existed.&lt;/p&gt;

&lt;p&gt;AI did not invent it.&lt;/p&gt;

&lt;p&gt;It amplified it.&lt;/p&gt;

&lt;h2&gt;Why AI makes this problem worse&lt;/h2&gt;

&lt;p&gt;When humans wrote the code, the same brain often held the intent, the implementation, and the validation loop together.&lt;/p&gt;

&lt;p&gt;Not perfectly. People still shipped bugs. Specs were incomplete. Tests missed things.&lt;/p&gt;

&lt;p&gt;But there was at least one place where the system could be understood as a whole: the person writing it.&lt;/p&gt;

&lt;p&gt;That is no longer the default.&lt;/p&gt;

&lt;p&gt;Now the human writes the prompt.&lt;/p&gt;

&lt;p&gt;The model writes the code.&lt;/p&gt;

&lt;p&gt;The model writes the tests.&lt;/p&gt;

&lt;p&gt;The human skims the diff.&lt;/p&gt;

&lt;p&gt;The model writes the cleanup.&lt;/p&gt;

&lt;p&gt;The CI passes.&lt;/p&gt;

&lt;p&gt;The feature ships.&lt;/p&gt;

&lt;p&gt;And if the original intent was slightly wrong, incomplete, or misunderstood, that mistake does not stay in one place anymore.&lt;/p&gt;

&lt;p&gt;It gets propagated through the whole stack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The plan is based on the wrong assumption.&lt;/li&gt;
&lt;li&gt;The implementation is based on the wrong assumption.&lt;/li&gt;
&lt;li&gt;The tests are based on the wrong assumption.&lt;/li&gt;
&lt;li&gt;The documentation often reflects the same wrong assumption.&lt;/li&gt;
&lt;li&gt;The "manual validation" is often the same model being asked to sanity-check itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, what exactly are we proving?&lt;/p&gt;

&lt;p&gt;Often just that the system is internally consistent with the assumption it invented for itself.&lt;/p&gt;

&lt;p&gt;Not that it matches our intent.&lt;/p&gt;

&lt;h2&gt;Bug-free is not the same as intent-correct&lt;/h2&gt;

&lt;p&gt;This is why I think a lot of AI productivity discourse still misses the real problem.&lt;/p&gt;

&lt;p&gt;People say: just write better tests.&lt;/p&gt;

&lt;p&gt;I do write tests.&lt;/p&gt;

&lt;p&gt;AI writes tests for me too.&lt;/p&gt;

&lt;p&gt;That is not the point.&lt;/p&gt;

&lt;p&gt;Tests verify behavior for cases somebody thought of.&lt;/p&gt;

&lt;p&gt;That somebody used to be a human.&lt;/p&gt;

&lt;p&gt;Now it is often a human plus a model.&lt;/p&gt;

&lt;p&gt;That is still not the same thing as verifying intent.&lt;/p&gt;

&lt;p&gt;You can have 100% line coverage and still miss the thing that matters.&lt;/p&gt;

&lt;p&gt;You can have a green CI run and still not know whether the software behaves the way you intended.&lt;/p&gt;

&lt;p&gt;A green pipeline can still be a polished misunderstanding.&lt;/p&gt;

&lt;p&gt;Bug-free is not the same as intent-correct.&lt;/p&gt;
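&lt;p&gt;A minimal, hypothetical Python sketch of that failure mode (the function name, discount rule, and boundary value are all invented for illustration): the intent was a discount for orders strictly over $100, but the implementation and the tests both encode the same misread boundary, so every line is covered and the suite is green anyway.&lt;/p&gt;

```python
# Hypothetical sketch. Intent: a 10% discount for orders strictly OVER $100.
# The implementation and the tests were generated from the same misread
# assumption ("at least $100"), so coverage is total and everything passes.

def apply_discount(total):
    """Apply a 10% discount to qualifying orders."""
    if total >= 100:  # intent said: strictly greater than 100
        return round(total * 0.9, 2)
    return total

# Every line is executed (100% coverage); every assertion passes.
assert apply_discount(150) == 135.0   # qualifying order: fine
assert apply_discount(50) == 50       # non-qualifying order: fine
assert apply_discount(100) == 90.0    # boundary: passes, but violates intent
```

&lt;p&gt;Nothing in that green run distinguishes a correct system from a consistently wrong one. The tests only prove the code is internally consistent with the assumption it was built on.&lt;/p&gt;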

&lt;h2&gt;Software is not flat. It is layers.&lt;/h2&gt;

&lt;p&gt;This gets worse as software gets bigger.&lt;/p&gt;

&lt;p&gt;Software is not flat.&lt;/p&gt;

&lt;p&gt;It is layers.&lt;/p&gt;

&lt;p&gt;It is wide, deep, and full of interacting components, hidden assumptions, old decisions nobody remembers, backwards compatibility constraints, and behavior that only makes sense if you know four other subsystems.&lt;/p&gt;

&lt;p&gt;Any project that lives long enough eventually reaches a point where one brain is no longer enough.&lt;/p&gt;

&lt;p&gt;That was true before AI.&lt;/p&gt;

&lt;p&gt;It is still true now.&lt;/p&gt;

&lt;p&gt;AI does not remove that limit.&lt;/p&gt;

&lt;p&gt;In some cases it makes you hit it faster, because you can generate change faster than you can understand its consequences.&lt;/p&gt;

&lt;p&gt;A lot of our engineering process exists because of this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD&lt;/li&gt;
&lt;li&gt;QA&lt;/li&gt;
&lt;li&gt;RFCs&lt;/li&gt;
&lt;li&gt;architecture reviews&lt;/li&gt;
&lt;li&gt;team boundaries&lt;/li&gt;
&lt;li&gt;approval workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not random rituals.&lt;/p&gt;

&lt;p&gt;They are patches over the same underlying problem: software complexity grows beyond what one brain can safely manage.&lt;/p&gt;

&lt;h2&gt;Where does intent actually live?&lt;/h2&gt;

&lt;p&gt;I think mainstream software engineering is still missing something fundamental.&lt;/p&gt;

&lt;p&gt;We do not maintain a real source of truth for intent.&lt;/p&gt;

&lt;p&gt;If I ask where the intended behavior of a system lives right now, the honest answer in most teams is:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;all of it combined badly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some of it is in source code.&lt;/p&gt;

&lt;p&gt;Some of it is in tests.&lt;/p&gt;

&lt;p&gt;Some of it is in RFCs.&lt;/p&gt;

&lt;p&gt;Some of it is in Jira tickets.&lt;/p&gt;

&lt;p&gt;Some of it is in Confluence.&lt;/p&gt;

&lt;p&gt;Some of it is in the heads of senior engineers.&lt;/p&gt;

&lt;p&gt;None of those is the place where I can go and see, clearly, how the system is supposed to behave right now.&lt;/p&gt;

&lt;p&gt;That is not a source of truth.&lt;/p&gt;

&lt;p&gt;That is archaeology.&lt;/p&gt;

&lt;p&gt;And that feels like a major difference between mainstream software and more regulated domains like aerospace or automotive, where intended behavior is at least treated as a first-class artifact.&lt;/p&gt;

&lt;p&gt;In mainstream software, especially in large, complex systems, we mostly reconstruct intent after the fact from scattered artifacts.&lt;/p&gt;

&lt;p&gt;And then we act surprised when regressions keep happening.&lt;/p&gt;

&lt;h2&gt;So what is the actual bottleneck now?&lt;/h2&gt;

&lt;p&gt;If a feature can be implemented in hours instead of weeks, why have so many teams not seen the full payoff?&lt;/p&gt;

&lt;p&gt;Because implementation was never the only bottleneck.&lt;/p&gt;

&lt;p&gt;The harder part is deciding what should be built, making that intent explicit enough, and then verifying that the resulting system still matches it after the code, tests, and surrounding context have all changed.&lt;/p&gt;

&lt;p&gt;That is where the time goes.&lt;/p&gt;

&lt;p&gt;That is why I think AI did not remove the hard part of engineering.&lt;/p&gt;

&lt;p&gt;It moved it from writing to verification.&lt;/p&gt;

&lt;p&gt;If you want the next essays on this topic, subscribe on Substack: &lt;a href="https://blog.reqproof.com/p/ai-writes-your-code-nobody-verifies" rel="noopener noreferrer"&gt;https://blog.reqproof.com/p/ai-writes-your-code-nobody-verifies&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
