<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Christopher Groß</title>
    <description>The latest articles on DEV Community by Christopher Groß (@grossbyte).</description>
    <link>https://dev.to/grossbyte</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3840073%2Fc9699cdf-354b-46ac-ac83-8ff16853963e.png</url>
      <title>DEV Community: Christopher Groß</title>
      <link>https://dev.to/grossbyte</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/grossbyte"/>
    <language>en</language>
    <item>
      <title>My worst Claude Code sessions – and what they taught me</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Wed, 29 Apr 2026 14:31:49 +0000</pubDate>
      <link>https://dev.to/grossbyte/my-worst-claude-code-sessions-and-what-they-taught-me-e4i</link>
      <guid>https://dev.to/grossbyte/my-worst-claude-code-sessions-and-what-they-taught-me-e4i</guid>
      <description>&lt;p&gt;Everyone talks about how much time AI saves. True. But nobody talks much about the hours you lose to AI mistakes.&lt;/p&gt;

&lt;p&gt;I've been using Claude Code daily for months. My workflow has fundamentally changed: I'm faster, and I ship more consistent code. All of that is true. This article is the counterpart – the moments where I sat at my terminal thinking: "What on earth did that just do?"&lt;/p&gt;

&lt;p&gt;Not to bash AI. But a realistic picture is more useful than endless hype.&lt;/p&gt;

&lt;h2&gt;
  
  
  The aggressive refactor
&lt;/h2&gt;

&lt;p&gt;I had a Vue component spread across three files – historically grown, not pretty, but functional. My request was harmless enough: "Clean this up and consolidate the types."&lt;/p&gt;

&lt;p&gt;Claude Code did that. And then some more. And then some more.&lt;/p&gt;

&lt;p&gt;At the end I had an elegant, clean component – and four features that had quietly disappeared. Including an edge case for loading empty content that wasn't documented anywhere, but was silently expected by two other parts of the app. No test, no comment, no hint. The AI analysed the code, not the behaviour behind it.&lt;/p&gt;

&lt;p&gt;What happened: Claude Code had no context about the overall application behaviour. The refactoring was technically correct – but it broke implicit behaviour that was never written down anywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I do differently now:&lt;/strong&gt; For refactors, I explicitly state which behaviours must be preserved – not just which files should be touched. "Clean this up, but these three cases must still work afterwards: ..." That sounds obvious. But I forgot it regularly at the start.&lt;/p&gt;
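
&lt;p&gt;What such a request can look like in practice – the cases here are invented for illustration; yours come from your own app:&lt;/p&gt;

```text
Clean up the component and consolidate the types.
These behaviours must be preserved:
- Loading empty content renders the fallback, not an error
  (two other parts of the app rely on this)
- The public props of the component stay unchanged
- Do not remove any feature without asking first
```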

&lt;h2&gt;
  
  
  The hallucinated API
&lt;/h2&gt;

&lt;p&gt;This one is uncomfortable to tell, because the mistake was partly mine.&lt;/p&gt;

&lt;p&gt;I asked Claude Code to write an integration with an external service. The response was fluent, confident, well structured. Methods that sounded plausible, parameters that made sense – everything there.&lt;/p&gt;

&lt;p&gt;Only: two of the methods didn't exist. The API never had them. Claude knew an older version of the documentation, or simply made them up – I don't know exactly. What I do know: the code compiled, threw no type errors, and only crashed on the first real API call.&lt;/p&gt;

&lt;p&gt;I could have caught this earlier. I should have checked the official docs before accepting the code. I didn't. I was in flow, the code looked good, and I trusted it.&lt;/p&gt;

&lt;p&gt;That's what makes confidently wrong output dangerous. When AI hesitates or writes "I'm not sure", I'm cautious. When it responds in clear, structured prose, I lower my guard. That's exactly the wrong reflex.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I do differently now:&lt;/strong&gt; Everything touching external APIs or libraries gets verified against the original docs. Not because I fundamentally distrust the AI – but because the risk is high and the check usually takes two minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The missing context in CLAUDE.md
&lt;/h2&gt;

&lt;p&gt;This one is entirely my fault – but it involves Claude Code, so it belongs here.&lt;/p&gt;

&lt;p&gt;I had a new website section implemented. The builder agent did good work, the section looked clean. Except: the section labels – small, monospace-formatted labels above headings – were written in plain text instead of CLI style.&lt;/p&gt;

&lt;p&gt;That rule wasn't documented in my &lt;code&gt;CLAUDE.md&lt;/code&gt;. I had introduced it at some point – all section labels must use terminal style, with a &lt;code&gt;$&lt;/code&gt; prefix or &lt;code&gt;//&lt;/code&gt; comment syntax – but I'd forgotten to write it down.&lt;/p&gt;

&lt;p&gt;The AI oriented itself to what it saw: older sections I had built myself before introducing the style. It derived a pattern from those. The pattern was wrong, because I had changed my own system over time without updating the documentation.&lt;/p&gt;

&lt;p&gt;The result: I had to correct all the labels manually and finally write the rule into the &lt;code&gt;CLAUDE.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I do differently now:&lt;/strong&gt; When I dislike an output and think "the AI should have known that" – first question: is it in the &lt;code&gt;CLAUDE.md&lt;/code&gt;? Usually the answer is: no, not yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  The logically wrong, syntactically correct code
&lt;/h2&gt;

&lt;p&gt;This is the most subtle mistake – and the one I took longest to find.&lt;/p&gt;

&lt;p&gt;I had a filter function that should filter posts by category and tag – combined, with an AND operator. Claude Code wrote the function, it compiled, it passed the obvious test case.&lt;/p&gt;

&lt;p&gt;What I only noticed days later: the function had implemented OR instead of AND. Not because the AI doesn't know the difference – but because my request hadn't defined the logic unambiguously. I had written "filter by category and tag". That can mean AND. But it can also mean OR if you read it as: "give me posts that match the category, and give me posts that match the tag".&lt;/p&gt;

&lt;p&gt;The AI chose an interpretation. It didn't ask me. And I checked the output for syntactic correctness, not logical correctness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I do differently now:&lt;/strong&gt; For logic involving boolean operators, I write the expected results explicitly into the request. "Filtered with AND: only posts that satisfy both conditions should appear." Two extra sentences. Prevents hours of debugging.&lt;/p&gt;
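
&lt;p&gt;Spelled out in code, the difference is a single operator. A minimal TypeScript sketch – the &lt;code&gt;Post&lt;/code&gt; shape here is a stand-in for illustration, not my real model:&lt;/p&gt;

```typescript
// Stand-in shape for illustration; the real Post model lives in the project.
interface Post {
  title: string;
  category: string;
  tags: string[];
}

// AND: a post must match the category and carry the tag.
// Chaining the filters makes the conjunction explicit.
function filterAnd(posts: Post[], category: string, tag: string): Post[] {
  return posts
    .filter((p) => p.category === category)
    .filter((p) => p.tags.includes(tag));
}

// OR: a post may match either condition – the reading my vague request allowed.
function filterOr(posts: Post[], category: string, tag: string): Post[] {
  return posts.filter((p) => p.category === category || p.tags.includes(tag));
}
```

&lt;p&gt;Both compile, both survive a superficial glance – only explicit expected results in the request tell the AI which one you meant.&lt;/p&gt;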

&lt;h2&gt;
  
  
  What these failures have in common
&lt;/h2&gt;

&lt;p&gt;When I look back at my worst Claude Code sessions, I see a pattern:&lt;/p&gt;

&lt;p&gt;The AI didn't fail because it's bad. It failed because context was missing – either in my request, in my documentation, or in my review process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implicit knowledge&lt;/strong&gt; about my project doesn't exist for the AI. What I find "obvious" because I've been working with this codebase daily for months is only visible to Claude Code if it's documented or directly derivable from the code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confident output is not a quality signal.&lt;/strong&gt; That's perhaps the most important insight. The AI sounds roughly equally certain at all times. Whether it's doing a trivial formatting task or inventing an API – the tone stays similar. My skepticism must not depend on the tone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My requests were imprecise.&lt;/strong&gt; At least half of the described failures could have been prevented with a more precise prompt. "Clean this up" is not a specification. "Implement the filter" is not a specification. That's a wish. A specification defines what must be true at the end.&lt;/p&gt;

&lt;h2&gt;
  
  
  The developer stays responsible
&lt;/h2&gt;

&lt;p&gt;I review Claude Code output differently than before. Not more superficially – more thoroughly, but in different dimensions.&lt;/p&gt;

&lt;p&gt;Syntax? Usually correct. Types? Usually correct. Logic in non-trivial cases? I read slower. External dependencies? Cross-check. Implicit behaviour expectations? Formulate explicitly before the AI starts.&lt;/p&gt;

&lt;p&gt;The speed Claude Code gives me is real. The mistakes that can happen along the way are too. Knowing that and working with it saves time. Ignoring it and blindly accepting output will eventually cost you a night with a bug that a two-minute review would have prevented.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;AI is not an autopilot. It's a very capable copilot that believes everything you tell it – including the things you forgot to say.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://grossbyte.io/en/blog/when-ai-gets-it-wrong" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>ai</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>Vibe coding vs. spec-driven – I got the bill</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Wed, 22 Apr 2026 10:23:33 +0000</pubDate>
      <link>https://dev.to/grossbyte/vibe-coding-vs-spec-driven-i-got-the-bill-57od</link>
      <guid>https://dev.to/grossbyte/vibe-coding-vs-spec-driven-i-got-the-bill-57od</guid>
      <description>&lt;p&gt;There is that moment at the beginning of a new feature where everything is still open. No legacy code, no constraints, no unexpected dependencies. Just you, an empty directory and an AI tool that promises to deliver results in minutes.&lt;/p&gt;

&lt;p&gt;In moments like that, vibe coding is incredibly tempting. Just start. Prompt in, code out, next prompt. No planning, no specifying, no thinking about architecture. Just do it.&lt;/p&gt;

&lt;p&gt;I did that. I paid for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What vibe coding really means
&lt;/h2&gt;

&lt;p&gt;The term "vibe coding" – coined by Andrej Karpathy – describes an approach where you use AI tools without having clearly defined what you are actually building. No schema, no design system, no explicit rules. You "vibe" your way through the feature.&lt;/p&gt;

&lt;p&gt;That is not fundamentally wrong. For a quick prototype, a throwaway demo, a first feasibility check it works well. Within an hour you have something running on screen. The feeling is great.&lt;/p&gt;

&lt;p&gt;The problem comes later.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bill
&lt;/h2&gt;

&lt;p&gt;Over the past months I have seen projects – my own and others' – that started with vibe coding. The pattern is always similar: the first sprint runs surprisingly smoothly. Then sprints two and three arrive. And suddenly you are fighting code that technically works but is architecturally a mess.&lt;/p&gt;

&lt;p&gt;Specifically: components that know too much. State management spread across three different approaches because the AI chose a different path with each prompt. Inconsistent naming. Duplicated code, because the AI on the second prompt did not know it had already built something similar on the first.&lt;/p&gt;

&lt;p&gt;And the worst part: when you try to fix it afterwards, you are not fighting the AI – you are fighting the accumulated decisions of twenty previous prompts, each locally sensible but globally inconsistent.&lt;/p&gt;

&lt;p&gt;Those of us working as senior developers or freelancers with real architectural influence notice this particularly fast. You have the experience to see where it is heading. And that experience tells you: this is going to be expensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  The difference that specification makes
&lt;/h2&gt;

&lt;p&gt;The principle is simple: before I let the AI write even a single line of code, I have defined what I am building, how it should be structured, and by which rules the code gets written. Not as a hundred-page requirements document, but as a living document – a &lt;code&gt;CLAUDE.md&lt;/code&gt;, a component library, an explicit design system.&lt;/p&gt;

&lt;p&gt;What changes because of this: the AI no longer makes architectural decisions. I made those beforehand. It just implements them.&lt;/p&gt;

&lt;p&gt;That is the decisive shift. Vibe coding lets the AI decide which path to take. Spec-driven development tells it which path to take.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this looks like in practice
&lt;/h2&gt;

&lt;p&gt;A concrete example from my own website. When I built the blog section, I had two options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option A – vibe:&lt;/strong&gt; Tell Claude Code "build me a blog" and see what comes out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option B – spec:&lt;/strong&gt; First define which frontmatter fields a blog post needs, which components get used, how routing works, how i18n integrates – and then let Claude Code handle the implementation.&lt;/p&gt;

&lt;p&gt;I chose option B. The result was consistent code, fewer revisions, and I never had to explain "no, not like that" – because "like that" was clear from the start.&lt;/p&gt;

&lt;h3&gt;
  
  
  The specification as single source of truth
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# CLAUDE.md&lt;/span&gt;

&lt;span class="gu"&gt;## Frontmatter schema&lt;/span&gt;
title, slug, translationSlug, date, category, description, readingTime, tags, featured, draft

&lt;span class="gu"&gt;## Components&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; GbCard: default / project / highlight variants
&lt;span class="p"&gt;-&lt;/span&gt; GbButton: primary / ghost / cta-inverted variants
&lt;span class="p"&gt;-&lt;/span&gt; No new markup before checking existing components

&lt;span class="gu"&gt;## Coding rules&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Composition API only
&lt;span class="p"&gt;-&lt;/span&gt; Tailwind only
&lt;span class="p"&gt;-&lt;/span&gt; No hardcoded text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That sounds like bureaucracy. But it is the opposite: it is the foundation on which I can work quickly without constantly looking back.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for senior developers and freelancers
&lt;/h2&gt;

&lt;p&gt;I think vibe coding is a particular problem when you – like me – work as a freelancer and carry full architectural responsibility. As a junior developer in a large team there are guardrails: existing codebases, team conventions, architects who make decisions. The structure is already there – you simply work within it.&lt;/p&gt;

&lt;p&gt;As a freelancer, you are the architect. If you do not define the structure, nobody does – and then the AI "defines" it implicitly, prompt by prompt. That sounds more efficient than it is.&lt;/p&gt;

&lt;p&gt;The real strength of AI tools is not that they can write code fast. It is that they work consistently and reliably within clear boundaries. Those boundaries have to be set by a human.&lt;/p&gt;

&lt;h2&gt;
  
  
  The compromise I found
&lt;/h2&gt;

&lt;p&gt;I am not writing this because I think vibe coding is fundamentally wrong. Sometimes a quick prototype is exactly the right thing. A throwaway experiment, a proof of concept for a client, a feature demo – there you can vibe.&lt;/p&gt;

&lt;p&gt;But as soon as something goes to production, as soon as other developers need to work on it, as soon as maintainability matters: then I need the specification first.&lt;/p&gt;

&lt;p&gt;The difference in my day-to-day: I sometimes vibe prototypes. I always spec projects.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Vibe coding gives you something running fast. Spec-driven development gives you something that runs – and that someone will still understand six months from now.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://grossbyte.io/en/blog/vibe-coding-vs-spec-driven" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>ai</category>
      <category>devjournal</category>
    </item>
    <item>
      <title>The ticket as a thinking tool – before the first line of code</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Wed, 15 Apr 2026 14:27:36 +0000</pubDate>
      <link>https://dev.to/grossbyte/the-ticket-as-a-thinking-tool-before-the-first-line-of-code-51a</link>
      <guid>https://dev.to/grossbyte/the-ticket-as-a-thinking-tool-before-the-first-line-of-code-51a</guid>
      <description>&lt;p&gt;It was a Tuesday afternoon. I had a task: a filter component for a product list needed to be faster. I knew roughly where the problem was – so I just started.&lt;/p&gt;

&lt;p&gt;Three hours later I had a significantly more complex component, a refactored state management layer and a pile of open questions. Should this be solved on the frontend, or could it have been addressed on the backend? Which filter options should actually be cached? And what was the difference with the mobile view again?&lt;/p&gt;

&lt;p&gt;I had been coding without really understanding the problem. The result showed it.&lt;/p&gt;

&lt;h2&gt;
  
  
  A ticket is not a bureaucratic act
&lt;/h2&gt;

&lt;p&gt;When I use the word "ticket", I sometimes hear a quiet groan. Fair enough – in many teams, a ticket means: fill out a Jira form, write "N/A" in three fields, add a deadline, assign a sprint. That has about as much to do with what I mean as a doctor's appointment with a spontaneous walk in the park.&lt;/p&gt;

&lt;p&gt;A ticket in my sense is simply: a short, clear description of a problem and the expected result. Two paragraphs. Sometimes three bullet points. Done.&lt;/p&gt;

&lt;p&gt;The crucial thing is not the ticket itself – it's the act of writing. Anyone who has to write a problem down is forced to think it through first. And that is the real gain.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a good ticket contains
&lt;/h2&gt;

&lt;p&gt;I keep it simple. Three things are almost always enough:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goal:&lt;/strong&gt; What should be better after this change? Not "optimize the filter component" – but "the filter component should respond without visible lag at 500+ entries."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acceptance criteria:&lt;/strong&gt; When is the task done? "Filter interaction under 100 ms, tested on a mid-range Android device." No essay – but concrete enough that I know at the end of the day whether I'm finished.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Out of scope:&lt;/strong&gt; What is explicitly not part of this task? That's the underrated part. When I write down "backend caching is a separate ticket", I'm not suddenly deep in a different problem three hours later.&lt;/p&gt;
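
&lt;p&gt;Put together, such a ticket fits on a handful of lines – a sketch using the filter example from this article:&lt;/p&gt;

```markdown
# Filter component responds slowly at 500+ entries

## Goal
The filter component responds without visible lag at 500+ entries.

## Acceptance criteria
- Filter interaction under 100 ms, tested on a mid-range Android device

## Out of scope
- Backend caching (separate ticket)
```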

&lt;h2&gt;
  
  
  Why this matters even more with AI
&lt;/h2&gt;

&lt;p&gt;Anyone who has been following the context engineering discussion already knows the principle: the AI is only as good as the context it receives. A ticket is the application of that principle at the level of the individual task.&lt;/p&gt;

&lt;p&gt;Before, I could start coding with a vague plan and at least trust my own judgment. I know the codebase, I understand the context, I notice when I'm going off track.&lt;/p&gt;

&lt;p&gt;An AI model can't do that. It has no context beyond what I give it. If I write "optimize the filter component", I get code that optimizes something – maybe exactly the right thing, maybe something that improves performance but breaks accessibility, maybe a generic solution that doesn't fit the project at all.&lt;/p&gt;

&lt;p&gt;The ticket forces me to build that context before I hand it off. After that I can give the AI a precise task – and I get a precise result.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean commits as a side effect
&lt;/h2&gt;

&lt;p&gt;Working with tickets comes with a gift: a readable git history.&lt;/p&gt;

&lt;p&gt;Every commit references a ticket. &lt;code&gt;fix: GBW-42 – filter response time under 100ms&lt;/code&gt;. Three months later I know immediately: what was the problem? Why was this changed? What was the scope?&lt;/p&gt;

&lt;p&gt;Without tickets it looks different. &lt;code&gt;fix: performance&lt;/code&gt;. &lt;code&gt;update filter&lt;/code&gt;. &lt;code&gt;changes&lt;/code&gt;. These commits have given me many grey hairs in my career – usually my own.&lt;/p&gt;

&lt;p&gt;The connection is direct: whoever has described the problem clearly can also name the solution clearly. And a clearly named solution is a good commit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The side effect that surprised me most
&lt;/h2&gt;

&lt;p&gt;Sometimes, when I sit down to write the ticket, I realize I can't.&lt;/p&gt;

&lt;p&gt;Not because the problem is too complex – but because I can't clearly formulate the goal. I sit down and realize: I don't actually know what I want. Or: the feature doesn't actually make sense in the current state of the project. Or: this isn't a frontend problem at all, it's a data model problem.&lt;/p&gt;

&lt;p&gt;That's not a failure of the ticket. That is the point. Writing a ticket costs me ten minutes. Coding in the wrong direction for three hours costs me – well – three hours. Plus the time to undo it.&lt;/p&gt;

&lt;p&gt;The ticket is the cheapest mistake I can make. Because when it happens, it happens on paper.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I handle it today
&lt;/h2&gt;

&lt;p&gt;I use YouTrack for my own projects – but the tool doesn't matter. What matters is the moment before the first commit: is there a ticket for this?&lt;/p&gt;

&lt;p&gt;For my own projects I write the ticket myself – often in five minutes. For client projects it's usually already there, or I help shape it. And when someone says "this is just a small thing, does it really need a ticket?"&lt;/p&gt;

&lt;p&gt;My answer: precisely because it's a small thing, the ticket takes two minutes. And if it turns out not to be a small thing, you just figured that out – without having written a single line of code.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Whoever cannot write a ticket has not yet understood their problem. And whoever has not understood their problem should not be writing code yet – neither themselves nor via AI.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://grossbyte.io/en/blog/tickets-before-code" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>ai</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Context Engineering: Why Your Prompt Is the Smallest Problem</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Wed, 08 Apr 2026 09:33:27 +0000</pubDate>
      <link>https://dev.to/grossbyte/context-engineering-why-your-prompt-is-the-smallest-problem-3li</link>
      <guid>https://dev.to/grossbyte/context-engineering-why-your-prompt-is-the-smallest-problem-3li</guid>
      <description>&lt;p&gt;I was at a client project recently, facing a situation I know well by now. New codebase, no AI tooling set up, and a developer on the team asks me: &lt;em&gt;"How do you get the AI to produce such good results?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I looked at his screen. He had Claude open - in the browser, no project context, a fresh chat for every single task.&lt;/p&gt;

&lt;p&gt;That's the problem. Not the tool, not the model, not the prompt. The missing structure.&lt;/p&gt;

&lt;p&gt;After a year of using AI daily in real projects, I'm convinced: the difference between "AI is pretty decent" and "AI has fundamentally changed how I work" isn't the perfect prompt. It's the context you give the AI - &lt;strong&gt;before it answers the first question&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  CLAUDE.md Is Where Everything Starts
&lt;/h2&gt;

&lt;p&gt;When I start working on a new project with Claude Code, the first thing I create is the &lt;code&gt;CLAUDE.md&lt;/code&gt;. Not the first component, not the first feature - the context file.&lt;/p&gt;

&lt;p&gt;What goes in it? Everything a new developer would need to know on day one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tech stack and versions&lt;/li&gt;
&lt;li&gt;Project structure and why it's set up that way&lt;/li&gt;
&lt;li&gt;Design system - colors, typography, spacing - concrete, with actual values&lt;/li&gt;
&lt;li&gt;Coding rules: what always applies, what never does&lt;/li&gt;
&lt;li&gt;External links, API endpoints, important accounts&lt;/li&gt;
&lt;/ul&gt;
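
&lt;p&gt;As a skeleton – the entries are placeholders, the real file carries concrete values:&lt;/p&gt;

```markdown
# CLAUDE.md

## Tech stack
Vue 3 (Composition API), TypeScript strict, Tailwind CSS

## Project structure
What lives where – and why it is set up that way

## Design system
Colors, typography, spacing as concrete tokens, not "something sensible"

## Coding rules
- Tailwind only, no Styled Components
- All texts go through i18n

## Links
Staging URL, API endpoints, important accounts
```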

&lt;p&gt;That sounds like work. It is - &lt;strong&gt;once&lt;/strong&gt;. But after that I save myself from rebuilding that context in every single AI session. I don't explain "we use Tailwind, not Styled Components" or "all texts must go through i18n" anymore. It's in the file. Claude reads it, follows it.&lt;/p&gt;

&lt;p&gt;The practical difference: when I say "create a new section for the services page", I get code that uses my design system, my components, is TypeScript strict and contains no hardcoded strings. Not because Claude is magic - because it &lt;strong&gt;knows the rules&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  PRODUCT_SPEC.md - The Product Vision Written Down
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt; describes &lt;em&gt;how&lt;/em&gt; things are built. &lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt; describes &lt;em&gt;what&lt;/em&gt; and &lt;em&gt;why&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I first created it when I realized: if I haven't touched a project for three weeks and then pick it back up, I need a moment to get back into the right frame of mind myself. The AI needs that even more.&lt;/p&gt;

&lt;p&gt;My &lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt; covers things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the core promise of the product?&lt;/li&gt;
&lt;li&gt;Who are the users and what do they want?&lt;/li&gt;
&lt;li&gt;What features exist, and what design decisions are behind them?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;What was deliberately not built - and why?&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last point is underrated. "What we don't build" is just as important as "what we build". If I don't tell the AI that we've deliberately decided against a certain feature, it will suggest or even implement it the next time it seems relevant.&lt;/p&gt;




&lt;h2&gt;
  
  
  Externalizing Coding Rules - Not Explaining Them in the Prompt
&lt;/h2&gt;

&lt;p&gt;I've noticed that a lot of people include their rules in the prompt every time. "Don't write inline CSS. Don't use &lt;code&gt;any&lt;/code&gt; types in TypeScript. Always create a German and English version."&lt;/p&gt;

&lt;p&gt;And then they write that again for the next task.&lt;/p&gt;

&lt;p&gt;That's exhausting - and error-prone. At some point you forget it, or you phrase it slightly differently, and the AI interprets it slightly differently.&lt;/p&gt;

&lt;p&gt;My solution: a thorough section in the &lt;code&gt;CLAUDE.md&lt;/code&gt;. Everything that always applies goes there.&lt;/p&gt;

&lt;p&gt;A few examples from real projects:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- ESLint must pass without errors before every commit
- No eslint-disable comments without explicit approval
- Every new text must appear in i18n/de.json and i18n/en.json
- Typography: en dash with non-breaking spaces, never the English em dash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That last rule is one you only add after it has burned you. I learned it by spending an hour fixing wrong dashes across an entire project.&lt;/p&gt;




&lt;h2&gt;
  
  
  Recurring Tasks as Skills
&lt;/h2&gt;

&lt;p&gt;This is the part that gets talked about the least - and the one that has helped me the most.&lt;/p&gt;

&lt;p&gt;Every project has tasks that always run the same way. After a new feature: run the linter, take a browser screenshot, comment on the ticket with the result. For a new blog post: validate frontmatter, run the typography check, create the German and English versions.&lt;/p&gt;

&lt;p&gt;I used to describe this in the prompt every time. "Don't forget to run the linter afterwards" - every single time.&lt;/p&gt;

&lt;p&gt;Now I put these workflows into skill files. A Markdown file that describes step by step what to do in a given context. I mention it briefly in the prompt: &lt;em&gt;"Use the &lt;code&gt;publish-blog-post&lt;/code&gt; skill."&lt;/em&gt;&lt;/p&gt;
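
&lt;p&gt;A skill file is nothing exotic – a sketch of what the &lt;code&gt;publish-blog-post&lt;/code&gt; skill could look like:&lt;/p&gt;

```markdown
# Skill: publish-blog-post

1. Validate the frontmatter against the schema in CLAUDE.md
2. Run the typography check (en dash, non-breaking spaces)
3. Make sure the German and English versions both exist
4. Run the linter; fix errors before continuing
5. Comment on the ticket with the result
```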

&lt;p&gt;That sounds like a small thing. In practice, it's what makes the difference between an AI I have to correct and an AI that simply does the right thing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Define Data Models Before Writing Code
&lt;/h2&gt;

&lt;p&gt;This is a lesson I learned the hard way.&lt;/p&gt;

&lt;p&gt;I had given an AI the task of implementing a new feature with several database tables. The result was technically working - but the data model was wrong. Not wrong in the sense of "syntax error", but wrong in the sense of "this will be a problem in three months". Missing constraints, an N:M relation that should have been 1:N, and a field that semantically belonged in a different table.&lt;/p&gt;

&lt;p&gt;Since then I do it differently: before I let the AI loose on database work, I write a short model spec. In prose, no special syntax:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Table: orders
- id: UUID, Primary Key
- user_id: FK → users.id, NOT NULL
- status: ENUM (pending, processing, shipped, cancelled), NOT NULL
- total_cents: INTEGER, NOT NULL (no DECIMAL - we calculate in cents)
- created_at: TIMESTAMP WITH TIME ZONE, NOT NULL, DEFAULT now()

Relations:
- One order has many OrderItems (1:N)
- One order belongs to exactly one user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That takes 15 minutes. It saves an hour of refactoring.&lt;/p&gt;




&lt;h2&gt;
  
  
  Context Engineering Instead of Prompt Engineering
&lt;/h2&gt;

&lt;p&gt;I notice the term "prompt engineering" showing up everywhere right now. Workshops, courses, LinkedIn posts about the perfect phrasing.&lt;/p&gt;

&lt;p&gt;I think that's the wrong focus.&lt;/p&gt;

&lt;p&gt;The prompt is the &lt;strong&gt;last step&lt;/strong&gt;. The system around it - &lt;code&gt;CLAUDE.md&lt;/code&gt;, &lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt;, coding rules, skills, data models - is what actually determines quality. The AI is only as good as the context it receives. And that context can be built systematically, versioned and improved over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A good prompt in a poorly prepared project delivers mediocre results.&lt;br&gt;
A mediocre prompt in a well-prepared project delivers good results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I call it &lt;strong&gt;context engineering&lt;/strong&gt;. Whoever understands this has a real advantage - not just with Claude, but with any AI tool.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tech stack, project structure, design tokens, coding rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PRODUCT_SPEC.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;What we build, why, and what we deliberately don't build&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill files&lt;/td&gt;
&lt;td&gt;Recurring workflows defined once, referenced forever&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data model specs&lt;/td&gt;
&lt;td&gt;Written before code, not discovered through refactoring&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The better the structure around the AI, the better the results. That's not theory - that's the experience from a year of daily use in real projects.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Christopher, a freelance fullstack developer from Hamburg. I write about Vue/Nuxt, TypeScript and working with AI in real projects - at &lt;a href="https://grossbyte.io/en" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
      <category>webdev</category>
    </item>
    <item>
      <title>My Agent Workflow – From Idea to Deployment in Minutes</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Sun, 29 Mar 2026 23:21:50 +0000</pubDate>
      <link>https://dev.to/grossbyte/my-agent-workflow-from-idea-to-deployment-in-minutes-375n</link>
      <guid>https://dev.to/grossbyte/my-agent-workflow-from-idea-to-deployment-in-minutes-375n</guid>
      <description>&lt;h2&gt;
  
  
  A Bug, a Prompt, Done
&lt;/h2&gt;

&lt;p&gt;The other day I noticed that blog posts weren't showing up in my sitemap. Annoying – and bad for SEO.&lt;/p&gt;

&lt;p&gt;In the past I'd have opened the sitemap module, read through the config, found the fix, tested, committed, deployed. Maybe 30 minutes, maybe an hour.&lt;/p&gt;

&lt;p&gt;Instead I typed one sentence into my terminal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Blog posts are not included in the sitemap."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Five minutes later, the fix was live.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup: Three Agents, Clear Roles
&lt;/h2&gt;

&lt;p&gt;I use a multi-agent setup in &lt;strong&gt;Claude Code&lt;/strong&gt;. Three agents, clear responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lead Agent&lt;/strong&gt; – Plans, creates tickets, coordinates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Builder Agent&lt;/strong&gt; – Implements the code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tester Agent&lt;/strong&gt; – Runs real browser tests with screenshots&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The workflow is always the same: I give the lead agent a task – sometimes a sentence, sometimes a paragraph. It analyzes the problem, searches the codebase, creates a &lt;strong&gt;YouTrack ticket&lt;/strong&gt; with a detailed description and proposed solution. I glance at it, say "go" – and the rest happens.&lt;/p&gt;

&lt;p&gt;The lead delegates to the builder, which writes the code following my coding rules from &lt;code&gt;CLAUDE.md&lt;/code&gt;. Then the tester spins up a real browser, loads the page, checks if the problem is solved. Only when everything is green does the lead come back to me: ready for review.&lt;/p&gt;
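
&lt;p&gt;In Claude Code, roles like these can be defined as subagents: markdown files under &lt;code&gt;.claude/agents/&lt;/code&gt; with a short YAML frontmatter. The sketch below shows what a tester agent &lt;em&gt;could&lt;/em&gt; look like - the description, tool list, and instructions are illustrative, not my exact setup:&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: tester
description: Verifies finished tickets in a real browser. Use after the builder reports completion.
tools: Bash, Read
---

You are the tester. For each ticket:
1. Start the dev server and open the affected page in a browser.
2. Take a screenshot and attach it to the ticket.
3. Check that the acceptance criteria from the ticket are met.
4. Report pass/fail back to the lead - never fix code yourself.
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The last line is the important one: giving each agent explicit boundaries is what keeps the roles from blurring into one another.&lt;/p&gt;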




&lt;h2&gt;
  
  
  A More Impressive Example
&lt;/h2&gt;

&lt;p&gt;The sitemap fix was trivial. But the workflow really shines on bigger tasks.&lt;/p&gt;

&lt;p&gt;When I wanted &lt;strong&gt;full WCAG accessibility&lt;/strong&gt; for my website, I essentially told the lead agent:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The website needs WCAG-compliant accessibility."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What happened:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The lead analyzed the scope – every page, every component, every form&lt;/li&gt;
&lt;li&gt;It created a ticket with a prioritized list of all necessary changes&lt;/li&gt;
&lt;li&gt;The builder systematically worked through every component – ARIA labels, focus management, contrast, skip links, focus traps&lt;/li&gt;
&lt;li&gt;The tester took screenshots after each iteration and verified accessibility&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Result: &lt;strong&gt;WCAG 2.1 Level AA in under 2 hours.&lt;/strong&gt; Estimated manual effort: 2–3 days.&lt;/p&gt;

&lt;p&gt;Not because the AI is magic, but because the workflow eliminates context switching and parallelizes the repetitive parts.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Actually Love About It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;No context loss.&lt;/strong&gt; I describe the problem once – the context is preserved from ticket through implementation to testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation happens automatically.&lt;/strong&gt; Every change is documented as a YouTrack ticket with description, proposed solution, and screenshots. No writing tickets after the fact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small fixes actually get done.&lt;/strong&gt; You know how you spot a typo or a small visual bug and think "I'll fix it later"? With this workflow, the barrier is so low I just do it immediately.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Part Nobody Wants to Hear
&lt;/h2&gt;

&lt;p&gt;This is not "AI does everything, I sit back."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In roughly every third or fourth task, I intervene.&lt;/strong&gt; Sometimes the AI interprets a requirement differently than I intended. Sometimes the code is technically correct but stylistically off. Sometimes it's faster to change three lines myself than to explain what I want.&lt;/p&gt;

&lt;p&gt;That's fine. The workflow doesn't save 100% of my work – it saves &lt;strong&gt;70–80%&lt;/strong&gt;. The remaining 20–30% is where human judgment actually matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I review every single change. Every one.&lt;/strong&gt; Not because I don't trust the AI, but because it's my code running in production. Read the diff, check the logic, manually test critical paths. That usually takes 2–5 minutes per change – and those minutes are non-negotiable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Concrete Time Savings (Real Numbers)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task&lt;/th&gt;
&lt;th&gt;With agents&lt;/th&gt;
&lt;th&gt;Manual estimate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Sitemap bug&lt;/td&gt;
&lt;td&gt;5 minutes&lt;/td&gt;
&lt;td&gt;30–60 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Full WCAG accessibility&lt;/td&gt;
&lt;td&gt;~2 hours&lt;/td&gt;
&lt;td&gt;2–3 days&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;i18n text adjustments (DE + EN)&lt;/td&gt;
&lt;td&gt;3 minutes&lt;/td&gt;
&lt;td&gt;~20 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;New blog section&lt;/td&gt;
&lt;td&gt;1 day&lt;/td&gt;
&lt;td&gt;~1 week&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The leverage is greatest for tasks that touch many files, follow clear rules, and are repetitive. It's smallest for creative work, complex business logic, and architecture – that's still on me.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Need to Get Started
&lt;/h2&gt;

&lt;p&gt;Three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A good &lt;code&gt;CLAUDE.md&lt;/code&gt;&lt;/strong&gt; – Your project spec. Design system, coding rules, project structure. The better this file, the better the results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear agent definitions&lt;/strong&gt; – Each agent has a role, tools, and boundaries. This prevents chaos.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A place for tasks&lt;/strong&gt; – Doesn't have to be a ticket system. Markdown files in your repo work fine. But personally, I prefer YouTrack – better history, easy referencing, agents comment directly in tickets.&lt;/li&gt;
&lt;/ol&gt;
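
&lt;p&gt;If you go the markdown route for tasks, a task file can be as simple as this - filename and fields are illustrative, using the sitemap bug from earlier as the example:&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# tasks/sitemap-blog-posts.md

## Problem
Blog posts are missing from sitemap.xml.

## Proposed solution
Include the blog routes in the sitemap module config.

## Status
- [x] Ticket created (lead)
- [x] Implemented (builder)
- [ ] Verified in browser (tester)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Anything more elaborate is a convenience, not a requirement - the agents just need one agreed place to read and update state.&lt;/p&gt;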

&lt;p&gt;The setup takes a few hours. The time savings afterward are multiples of that.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Reality Behind the Hype
&lt;/h2&gt;

&lt;p&gt;AI agents aren't autopilot. They're more like a very fast, very patient junior developer who never gets tired and follows your specs exactly – &lt;strong&gt;as long as you state them clearly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The workflow works because I stay in control. I decide what gets built. I review what was built. I intervene when necessary. The agents accelerate execution – but the responsibility stays with me.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI agents don't replace developers. They replace the parts of the work that keep developers from focusing on what actually matters.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://grossbyte.io/en/blog/agent-workflow-from-idea-to-deployment" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Curious how the agent setup looks in detail? Happy to write a follow-up on the CLAUDE.md structure and agent definitions if there's interest.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Accessibility in 2 hours – with AI instead of days of manual work</title>
      <dc:creator>Christopher Groß</dc:creator>
      <pubDate>Mon, 23 Mar 2026 12:18:27 +0000</pubDate>
      <link>https://dev.to/grossbyte/accessibility-in-2-hours-with-ai-instead-of-days-of-manual-work-28hm</link>
      <guid>https://dev.to/grossbyte/accessibility-in-2-hours-with-ai-instead-of-days-of-manual-work-28hm</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://grossbyte.io/en/blog/accessibility-in-2-hours-with-ai" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Accessibility was on my list for months. I knew what needed to be done. I kept putting it off anyway — because it always felt like days of tedious work.&lt;/p&gt;

&lt;p&gt;Last week I just did it. With Claude Code CLI in under 2 hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  What was built
&lt;/h2&gt;

&lt;p&gt;Not just the basics – the result exceeds WCAG 2.1 Level AA in the areas covered, including support for system preferences that go beyond the standard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic HTML &amp;amp; ARIA on every interactive element&lt;/li&gt;
&lt;li&gt;Skip-to-content link, visible focus styles, focus trap in the mobile menu&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prefers-contrast: more/less&lt;/code&gt;, &lt;code&gt;prefers-reduced-motion&lt;/code&gt;, &lt;code&gt;prefers-reduced-transparency&lt;/code&gt;, &lt;code&gt;forced-colors: active&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Manual &lt;code&gt;.a11y-mode&lt;/code&gt; toggle in the header&lt;/li&gt;
&lt;li&gt;Fully accessible contact form with &lt;code&gt;aria-live&lt;/code&gt;, &lt;code&gt;aria-busy&lt;/code&gt;, &lt;code&gt;aria-describedby&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;iOS zoom fix for inputs below 16px font size&lt;/li&gt;
&lt;/ul&gt;
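
&lt;p&gt;The system-preference support from the list maps to ordinary CSS media queries. A minimal sketch - the selectors and custom property names are illustrative, not my actual stylesheet:&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Respect the user's motion preference */
@media (prefers-reduced-motion: reduce) {
  *, *::before, *::after {
    animation-duration: 0.01ms !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}

/* Stronger contrast when the OS asks for it */
@media (prefers-contrast: more) {
  :root {
    --text-color: #000;
    --bg-color: #fff;
  }
}

/* Avoid the iOS auto-zoom on focus: inputs need at least 16px */
input, select, textarea {
  font-size: 1rem;
}
&lt;/code&gt;&lt;/pre&gt;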

&lt;h2&gt;
  
  
  What surprised me
&lt;/h2&gt;

&lt;p&gt;The quality. I expected to do a lot of manual follow-up. Instead, Claude Code implemented the ARIA patterns correctly – disclosure widgets, live regions, focus traps. These aren't trivial patterns.&lt;/p&gt;

&lt;p&gt;Also: &lt;code&gt;prefers-contrast: less&lt;/code&gt; and &lt;code&gt;prefers-reduced-transparency&lt;/code&gt; are CSS features I had never actively implemented before. Claude Code not only included them but chose sensible color values too.&lt;/p&gt;

&lt;h2&gt;
  
  
  The honest take
&lt;/h2&gt;

&lt;p&gt;AI doesn't replace understanding accessibility. You need to know what &lt;code&gt;aria-live&lt;/code&gt; does, why a focus trap is necessary, and whether the contrast values make sense. But the implementation – writing the labels, adjusting CSS variables, adding role attributes across dozens of files – that's exactly what AI does well.&lt;/p&gt;
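
&lt;p&gt;For anyone who hasn't used these attributes yet, here is the pattern in one picture - an illustrative form skeleton, not the markup from my site: &lt;code&gt;aria-live&lt;/code&gt; marks a region that screen readers announce when its content changes, &lt;code&gt;aria-busy&lt;/code&gt; signals an in-flight submission, and &lt;code&gt;aria-describedby&lt;/code&gt; links an input to its hint text.&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;form aria-busy="false"&amp;gt;
  &amp;lt;label for="email"&amp;gt;Email&amp;lt;/label&amp;gt;
  &amp;lt;input id="email" type="email" aria-describedby="email-hint"&amp;gt;
  &amp;lt;p id="email-hint"&amp;gt;We only use this to reply.&amp;lt;/p&amp;gt;

  &amp;lt;!-- Announced by screen readers when its text changes --&amp;gt;
  &amp;lt;p aria-live="polite" role="status"&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/form&amp;gt;
&lt;/code&gt;&lt;/pre&gt;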

&lt;p&gt;When it takes 2 hours instead of 2 days, there are no more excuses.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Full writeup with all implementation details: &lt;a href="https://grossbyte.io/en/blog/accessibility-in-2-hours-with-ai" rel="noopener noreferrer"&gt;grossbyte.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>a11y</category>
      <category>ai</category>
      <category>claudeai</category>
    </item>
  </channel>
</rss>
