<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kirill</title>
    <description>The latest articles on DEV Community by Kirill (@klem42).</description>
    <link>https://dev.to/klem42</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3854060%2F9ed05a0b-6ec8-41bf-af78-5dc9a5898f7d.jpg</url>
      <title>DEV Community: Kirill</title>
      <link>https://dev.to/klem42</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/klem42"/>
    <language>en</language>
    <item>
      <title>I asked an LLM to add one button. It rewrote half my repo.</title>
      <dc:creator>Kirill</dc:creator>
      <pubDate>Tue, 21 Apr 2026 16:42:08 +0000</pubDate>
      <link>https://dev.to/klem42/i-asked-an-llm-to-add-one-button-it-rewrote-half-my-repo-1l1f</link>
      <guid>https://dev.to/klem42/i-asked-an-llm-to-add-one-button-it-rewrote-half-my-repo-1l1f</guid>
      <description>&lt;p&gt;Let’s be honest: modern LLMs write amazing code. Sometimes I look at the output and realize I couldn’t have done it better or faster myself. It hits you with this almost addictive rush of speed.&lt;/p&gt;

&lt;p&gt;But that speed comes with a cost: you stop understanding what’s happening.&lt;/p&gt;

&lt;p&gt;Recently, I was working on my pet project — a Telegram bot that turns articles into audio. I wanted to add one simple feature: a "Detailed Summary" mode. I threw a quick prompt at the AI: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Give users a way to get more detailed summaries&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At first glance, everything looked fine. Then I tried to understand the diff.&lt;br&gt;
I couldn't.&lt;/p&gt;

&lt;h2&gt;The Chaos of "Just Prompting"&lt;/h2&gt;

&lt;p&gt;I had a clean setup: all my prompts lived in a &lt;code&gt;prompts.yaml&lt;/code&gt; file, and a &lt;code&gt;PromptBuilder&lt;/code&gt; class would neatly assemble them. It was a predictable, single-source-of-truth system.&lt;/p&gt;
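
&lt;p&gt;For context, that setup looked roughly like this — a hypothetical sketch, since the actual file isn't shown in this post. The keys and prompt text here are illustrative; the point is that every prompt has one home, and the new "detailed" template should have been added right next to the existing one:&lt;/p&gt;

```yaml
# prompts.yaml — illustrative layout, not the real file.
# PromptBuilder reads templates from here; C# code holds no prompt strings.
summary:
  system: |
    You are a narrator that condenses articles for audio playback.
  standard: |
    Summarize the article in 3-4 sentences, plain spoken language.
```
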

&lt;p&gt;The agent ignored all of it. Instead of adding a template to the YAML, it shoved raw strings directly into the C# code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcxwe2z58qy03bh7njjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcxwe2z58qy03bh7njjl.png" alt="Hardcoded strings in BuildSummaryPrompt" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It introduced if/else logic inside the builder and added extra prompt instructions as raw string literals. My architecture just left the building. Now I had two sources of truth.&lt;/p&gt;

&lt;p&gt;But the worst part was the UX. Instead of adding a simple Telegram button to trigger the detailed mode, the AI decided that the user should manually type magic hashtags like &lt;code&gt;#detailed&lt;/code&gt;. It chose the easiest path for the code, not the user.&lt;/p&gt;

&lt;p&gt;The model optimized for its own convenience, not for my system. And it made dozens of decisions I never asked for.&lt;/p&gt;

&lt;h2&gt;The Spec as Anxiety Relief&lt;/h2&gt;

&lt;p&gt;I realized I was tired. Tired of holding my breath every time I hit "Apply," wondering what exactly was about to break.&lt;/p&gt;

&lt;p&gt;At some point, I realized this isn’t just about AI being unpredictable. It’s about me not defining things clearly enough. That’s when I started using a &lt;strong&gt;Spec&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s not just a technical fix; it’s anxiety relief. When I put a Markdown file in Git between my head and the code, I can finally breathe. I’m no longer guessing what’s going to happen.&lt;/p&gt;

&lt;h2&gt;My Workflow&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Architecture Chat: I use ChatGPT to argue about edge cases. I often dictate my thoughts by voice—it's easier to "think out loud" about things like race conditions. We talk until we have a solid &lt;code&gt;feature_spec.md&lt;/code&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Consistency Check: I make the coding agent compare the new spec with my actual repo. If the AI finds a contradiction before writing code, I’ve already won.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Implementation: Only when the spec and architecture are aligned do I let the AI touch the code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
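
&lt;p&gt;The artifact that comes out of step 1 is small and boring on purpose. Here's a hypothetical excerpt of what a &lt;code&gt;feature_spec.md&lt;/code&gt; for the detailed-summary feature might contain — the exact wording is invented, but this is the shape: goal, hard constraints, and explicit non-goals.&lt;/p&gt;

```markdown
# feature_spec.md — hypothetical excerpt

## Goal
Let a user request a more detailed summary of an article.

## Constraints
- All prompt text lives in prompts.yaml; no prompt string literals in C# code.
- UX: a native Telegram inline button, never typed hashtags.
- PromptBuilder remains the single place where prompts are assembled.

## Out of scope
- Any changes to the audio generation pipeline.
```

&lt;p&gt;Each constraint exists because the agent violated it the first time around. The spec is less a design document and more a list of doors I'm locking.&lt;/p&gt;
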

<h2>Same Task. Same AI. Different Outcome.</h2>

&lt;p&gt;When I implemented the same feature using a spec, the result was night and day.&lt;/p&gt;

&lt;p&gt;I explicitly defined the rules: "Use the existing YAML prompt storage. Use Telegram's native buttons. Do not force the user to type hashtags."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1nlrldzbangfv90mnzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1nlrldzbangfv90mnzl.png" alt="Clean button logic vs hashtag mess" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The agent followed the contract. This time, it created a new prompt template in the right place and implemented a clean button-based UX.&lt;/p&gt;
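
&lt;p&gt;For readers who haven't wired up Telegram buttons: the button-based flow is only a few lines. This is a minimal sketch assuming the popular Telegram.Bot library for .NET — the button labels and the &lt;code&gt;summary:detailed&lt;/code&gt; callback data are my invented names, not the actual ones from this project:&lt;/p&gt;

```csharp
// Sketch using the Telegram.Bot library; identifiers here are hypothetical.
// One row of two inline buttons; tapping one sends a callback query
// whose data tells the bot which summary template to load from prompts.yaml.
var keyboard = new InlineKeyboardMarkup(new[]
{
    InlineKeyboardButton.WithCallbackData("Standard", "summary:standard"),
    InlineKeyboardButton.WithCallbackData("Detailed", "summary:detailed"),
});

await botClient.SendTextMessageAsync(
    chatId: chatId,
    text: "How detailed should the summary be?",
    replyMarkup: keyboard);
```

&lt;p&gt;The handler for the callback query then picks the matching YAML template — no magic hashtags, no parsing user text.&lt;/p&gt;
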

&lt;p&gt;The difference wasn’t the model. It was the spec. The code remained clean, the architecture stayed intact, and I actually understood the diff.&lt;/p&gt;

<h2>Bottom Line</h2>

&lt;p&gt;AI is a fast but very average developer.&lt;/p&gt;

&lt;p&gt;Without boundaries, it will pick the easiest, messiest path. It will still "work." And that's the dangerous part. You won't notice the rot until it's too late.&lt;/p&gt;

&lt;p&gt;Stop prompting. Start defining.&lt;/p&gt;

&lt;p&gt;How do you handle the AI's urge to "improve" your architecture without asking? Let’s discuss in the comments.&lt;/p&gt;

&lt;p&gt;Photo: &lt;a href="https://www.flickr.com/photos/infomatique/6281472940" rel="noopener noreferrer"&gt;William Murphy / flickr&lt;/a&gt; — CC BY-SA 2.0&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
