<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sweekar Koirala</title>
    <description>The latest articles on DEV Community by Sweekar Koirala (@sweekarkoirala).</description>
    <link>https://dev.to/sweekarkoirala</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3885828%2F9662525e-ab7f-412c-9878-bfc8634a5ed9.png</url>
      <title>DEV Community: Sweekar Koirala</title>
      <link>https://dev.to/sweekarkoirala</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sweekarkoirala"/>
    <language>en</language>
    <item>
      <title>I got 2x faster with AI. I also got 2x better at shipping bugs I couldn't catch.</title>
      <dc:creator>Sweekar Koirala</dc:creator>
      <pubDate>Sat, 18 Apr 2026 09:50:20 +0000</pubDate>
      <link>https://dev.to/sweekarkoirala/i-got-2x-faster-with-ai-i-also-got-2x-better-at-shipping-bugs-i-couldnt-catch-hmb</link>
      <guid>https://dev.to/sweekarkoirala/i-got-2x-faster-with-ai-i-also-got-2x-better-at-shipping-bugs-i-couldnt-catch-hmb</guid>
      <description>&lt;p&gt;I stared at a bug for 45 minutes last week that Claude had introduced three days earlier. The function looked fine. It passed my review. It even passed a quick manual test. It broke in production because the AI had confidently used an API method that doesn't exist in the version we were running, and I had no idea because the code &lt;em&gt;read&lt;/em&gt; like it was written by someone who knew what they were doing.&lt;/p&gt;

&lt;p&gt;That's the thing nobody says out loud: AI makes you faster and simultaneously makes you worse at catching mistakes.&lt;/p&gt;

&lt;p&gt;I've been building with AI assistance pretty much daily for the past year. My output velocity is genuinely up, maybe 2x on a good week. Features that used to take me a full day I can now push in a morning. But somewhere around month three I noticed something uncomfortable: my bug rate wasn't going down at the same rate my speed was going up. If anything, the bugs were getting harder to catch, because they were surrounded by good-looking code.&lt;/p&gt;

&lt;p&gt;The failure mode isn't obvious AI slop. It's not hallucinated function names in a language the model clearly doesn't know. It's confident, well-structured, plausible code that breaks on a specific edge case the model didn't think about because it had no reason to think about it. The model doesn't know your database has nulls in that column. It doesn't know that the third-party library you're using has a breaking change in v2.1. It doesn't know that your team has a convention around error handling that isn't written anywhere. It just writes the most statistically reasonable thing and moves on.&lt;/p&gt;

&lt;p&gt;Confidence without context. That's the actual problem.&lt;/p&gt;

&lt;p&gt;The obvious counterargument here is: just review your code better. Write tests. Don't be lazy. And okay, yes, that's true. But also: the whole reason I'm using AI is to move faster, and careful line-by-line review of every generation eats exactly the time I thought I was saving. You're not going 2x; you're going 1x with extra steps and a false sense of security. There's a real tension here, and pretending there isn't doesn't help anyone.&lt;/p&gt;

&lt;p&gt;What actually changed things for me wasn't slowing down the AI. It was giving it better constraints before it started.&lt;/p&gt;

&lt;p&gt;I started writing structured context files for the domains I work in most, things like: how we handle API errors in this codebase, what version of what library we're on and why, which patterns we've explicitly decided not to use and why they're tempting. Not a prompt. Not a system message. An actual file that sits in the project and gets pulled in when I start a session. The model stopped making the same class of mistakes because the mistakes were coming from missing information, not missing intelligence.&lt;/p&gt;
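&lt;p&gt;For concreteness, here's a minimal sketch of what one of these files can look like. Everything below is invented for illustration, not my actual stack; the point is the shape, not the specifics:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# CONTEXT.md — pulled in at the start of every session for this repo

## Stack constraints
- Node 18. Do not use APIs added after that runtime.
- payments-lib pinned to v2.0 — v2.1 has a breaking change
  in the retry options; keep the old signature.

## Error handling convention
- Route handlers throw AppError(code, message); a single global
  middleware maps codes to HTTP statuses. Never return a raw
  error.message to the client.

## Known gotchas
- users.last_login is nullable in prod. Always guard it.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Nothing clever. It's just the tribal knowledge that was previously only in my head, written down where the model can see it.&lt;/p&gt;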

&lt;p&gt;This is the thing I think the whole "vibe coding" conversation misses. The productivity gains from AI are real, but they're fragile. They depend on the model having enough context to be right, not just fluent. Fluency is easy. Context is the hard part, and right now that burden is almost entirely on you. The model will never tell you "I don't have enough information about your codebase to answer this confidently." It will just answer.&lt;/p&gt;

&lt;p&gt;Structured skill files are a partial solution to this. Not a complete one, but a real one. When you encode your conventions, your stack constraints, your known gotchas into a format that travels with the project, you're not making the AI smarter. You're reducing the gap between what it knows and what it needs to know to be useful rather than dangerous. The bugs don't disappear entirely, but the class of "confidently wrong" bugs shrinks noticeably.&lt;/p&gt;

&lt;p&gt;I still ship faster than I did before AI. I'm not going back. But I'm also a lot more honest now about what I gave up to get there, and a lot more deliberate about how I give the model the context it needs to not quietly break things I'll spend 45 minutes debugging on a Thursday afternoon.&lt;/p&gt;

&lt;p&gt;The speed is real. The cost is also real. Both things are true, and pretending otherwise is how you end up trusting code you shouldn't.&lt;/p&gt;

&lt;p&gt;We built &lt;a href="https://npxskills.xyz"&gt;npxskills.xyz&lt;/a&gt; partly because of exactly this problem: structured skill files that install into your agent and travel with your project. If that's the direction you want to go, that's where we put the work.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vibecoding</category>
      <category>claude</category>
      <category>claudecode</category>
    </item>
  </channel>
</rss>
