<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harel Coman</title>
    <description>The latest articles on DEV Community by Harel Coman (@haco29).</description>
    <link>https://dev.to/haco29</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F429212%2F22ebad0d-f685-4859-8731-a3953f05155e.jpeg</url>
      <title>DEV Community: Harel Coman</title>
      <link>https://dev.to/haco29</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/haco29"/>
    <language>en</language>
    <item>
      <title>The Red Queen Code Review Pattern — Perpetual Evolution in AI-Powered Development</title>
      <dc:creator>Harel Coman</dc:creator>
      <pubDate>Wed, 05 Nov 2025 08:07:16 +0000</pubDate>
      <link>https://dev.to/haco29/the-red-queen-code-review-pattern-perpetual-evolution-in-ai-powered-development-2ego</link>
      <guid>https://dev.to/haco29/the-red-queen-code-review-pattern-perpetual-evolution-in-ai-powered-development-2ego</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; I built a feedback loop where every AI code review either fixes the code or evolves the rule — creating a self-improving system that keeps our team’s standards alive as both humans and AI adapt.&lt;br&gt;
Like the Red Queen Hypothesis in evolution, the goal isn’t to win — it’s to keep evolving just to stay aligned.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, it's a system that evolves with you, not just for you.&lt;/p&gt;




&lt;p&gt;This article introduces a framework I call &lt;strong&gt;The Red Queen Code Review Pattern&lt;/strong&gt; — a way to make code reviews self-evolving through human-AI collaboration.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Red Queen Effect 🧬
&lt;/h2&gt;

&lt;p&gt;In evolutionary biology, the Red Queen Hypothesis describes a race with no finish line.&lt;br&gt;
It comes from Lewis Carroll’s Through the Looking-Glass, where the Red Queen tells Alice:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“It takes all the running you can do, to keep in the same place.”&lt;br&gt;
— Lewis Carroll, Through the Looking-Glass (1871)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Biologists use this idea to explain how species must constantly evolve just to survive, because their environment — and everything competing within it — is also evolving.&lt;br&gt;
Gazelles run faster, lions hunt faster — neither “wins,” but both must keep adapting or fall behind.&lt;/p&gt;

&lt;p&gt;In AI-powered development, we face the same dynamic.&lt;br&gt;
Our tools learn, our frameworks shift, our patterns change — and our code reviews can’t stay static.&lt;br&gt;
Every time the AI flags an outdated rule, we evolve it — &lt;strong&gt;and the race continues&lt;/strong&gt;.&lt;br&gt;
Every time it enforces a pattern, our team runs a little faster to stay consistent.&lt;/p&gt;

&lt;p&gt;That’s the Red Queen Code Review Pattern:&lt;br&gt;
a system where humans and AI co-evolve through feedback.&lt;br&gt;
Each cycle keeps our rules relevant, our code consistent, and our development process alive.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Fix the code → follow the rule.&lt;br&gt;
Update the rule → evolve the system.&lt;br&gt;
Either way, the race never stops — and that’s exactly the point.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;After years of doing code reviews, I hit a frustrating pattern.&lt;/p&gt;

&lt;p&gt;Same type of issue. Different reviewers. Completely different feedback.&lt;/p&gt;

&lt;p&gt;Developer A would flag a magic number. Developer B would approve the exact same pattern.&lt;/p&gt;

&lt;p&gt;Six months later, our codebase looked like it was written by five different teams.&lt;/p&gt;

&lt;p&gt;Then I started using AI to write code, and it got worse.&lt;/p&gt;

&lt;p&gt;Cursor would generate perfectly valid React code that violated our unwritten patterns. I'd ask for revisions, but there was no source of truth to reference. Just vibes and inconsistent PR comments.&lt;/p&gt;

&lt;p&gt;That's when I realized: if my rules aren't written down, my AI assistant can't follow them. And neither can my team.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Moment It Clicked
&lt;/h2&gt;

&lt;p&gt;Last week, I finished a feature and ran my usual code review command. Cursor analyzed my branch against main and came back with this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;### src/components/UserProfile.tsx&lt;/span&gt;

&lt;span class="gs"&gt;**Issue:**&lt;/span&gt; Using axios directly instead of HttpClient
&lt;span class="gs"&gt;**Rule Violated:**&lt;/span&gt; api-development.mdc - "Always use HttpClient for API requests"
&lt;span class="gs"&gt;**Suggestion:**&lt;/span&gt; Replace axios with HttpClient static methods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Normally, I'd just fix it and move on.&lt;/p&gt;

&lt;p&gt;But this time I stopped.&lt;/p&gt;

&lt;p&gt;The rule was outdated. We'd decided weeks ago that direct axios was fine for one-off requests. But I never updated the documentation.&lt;/p&gt;

&lt;p&gt;So I had two options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option A:&lt;/strong&gt; Fix the code (and get the same comment next time)&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Option B:&lt;/strong&gt; Update the rule (and never see this comment again)&lt;/p&gt;

&lt;p&gt;I updated the rule.&lt;/p&gt;

&lt;p&gt;And that's when it hit me: &lt;strong&gt;every code review comment is either validating a rule or exposing an outdated one.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The system I built — what I now call the &lt;strong&gt;Red Queen Code Review Pattern&lt;/strong&gt; — runs as a simple Cursor command called &lt;code&gt;@code-review.md&lt;/code&gt;.&lt;br&gt;
You can see the implementation and rule examples on &lt;a href="https://github.com/haco29/ai-workflow/blob/main/commands/code-review.md" rel="noopener noreferrer"&gt;GitHub → haco29/ai-workflow&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s not fancy. Just a markdown-driven feedback loop that keeps evolving with every review.&lt;/p&gt;

&lt;p&gt;Here's the workflow I've been using:&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Document Standards as Cursor Rules
&lt;/h3&gt;

&lt;p&gt;I keep all our patterns in &lt;code&gt;.cursor/rules/*.mdc&lt;/code&gt; files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.cursor/rules/
├── react-patterns.mdc          # Component architecture
├── api-development.mdc         # API patterns
├── code-quality.mdc            # Constants, errors
├── testing-accessibility.mdc   # Testing &amp;amp; WCAG
└── python-core-architecture.mdc # Backend patterns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These aren't vague guidelines. They're concrete examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## HTTP Status Codes&lt;/span&gt;

❌ Never use magic numbers:
if (error.response?.status === 401) { redirect('/login') }

✅ Always use constants:
import { AUTHENTICATION_ERROR_STATUS_CODE } from '@/constants'
if (error.response?.status === AUTHENTICATION_ERROR_STATUS_CODE) {
  redirect('/login')
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I write new features, Cursor reads these rules and generates code that follows our patterns automatically.&lt;/p&gt;
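&lt;p&gt;To show how such a rule plays out in application code, here is a minimal sketch; the helper name is hypothetical, and only the constant comes from the rule excerpt above:&lt;/p&gt;

```typescript
// Sketch only: illustrates the "no magic numbers" rule from the excerpt above.
// The constant matches the rule file; redirectPathForStatus is a hypothetical helper.
export const AUTHENTICATION_ERROR_STATUS_CODE = 401

export function redirectPathForStatus(status: number): string | null {
  // Compare against the named constant, never the literal 401.
  if (status === AUTHENTICATION_ERROR_STATUS_CODE) {
    return '/login'
  }
  return null
}
```

&lt;p&gt;Because the comparison goes through a named constant, the review command can flag any literal status code as a rule violation.&lt;/p&gt;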

&lt;h3&gt;
  
  
  2. Run Automated Code Review
&lt;/h3&gt;

&lt;p&gt;I created a Cursor command called &lt;code&gt;@code-review.md&lt;/code&gt; that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Diffs my branch against main&lt;/li&gt;
&lt;li&gt;Reads all rules from &lt;code&gt;.cursor/rules/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Reviews every change against those rules&lt;/li&gt;
&lt;li&gt;Provides structured feedback with explicit rule references&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The output looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## 🔴 Critical Issues (Must Fix)&lt;/span&gt;

&lt;span class="gu"&gt;### app/services/user_service.py&lt;/span&gt;

&lt;span class="gs"&gt;**Issue:**&lt;/span&gt; Missing error handling decorator
&lt;span class="gs"&gt;**Rule Violated:**&lt;/span&gt; python-core-architecture.mdc -
"All service methods must use @handle_service_errors"
&lt;span class="gs"&gt;**Suggestion:**&lt;/span&gt; Add the decorator

&lt;span class="gh"&gt;# Current&lt;/span&gt;

async def get_user(user_id: str) -&amp;gt; User:
    return await db.users.get(user_id)

&lt;span class="gh"&gt;# Suggested&lt;/span&gt;

@handle_service_errors
async def get_user(user_id: str) -&amp;gt; User:
    return await db.users.get(user_id)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every issue points to a specific rule. Every suggestion includes code.&lt;/p&gt;
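&lt;p&gt;Under the hood, a command like this only has to combine two inputs, the branch diff and the rule files, into one review prompt. A minimal sketch (the function and field names are my illustration, not the actual command):&lt;/p&gt;

```typescript
// Illustrative sketch of how a review prompt could be assembled from a branch
// diff and the .cursor/rules files; the real @code-review.md command may differ.
interface RuleFile {
  name: string // e.g. "api-development.mdc"
  text: string
}

function buildReviewPrompt(diff: string, rules: RuleFile[]): string {
  // Concatenate every rule file under a heading so the model can cite it by name.
  const ruleSection = rules
    .map(r => '## ' + r.name + '\n\n' + r.text)
    .join('\n\n')
  return [
    'Review the following diff against these team rules.',
    'For every violation, cite the rule file and suggest a concrete fix.',
    '# Rules',
    ruleSection,
    '# Diff',
    diff,
  ].join('\n\n')
}
```

&lt;p&gt;The structure is what matters: every finding in the output can point back to a named rule file because every rule file was part of the prompt.&lt;/p&gt;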

&lt;h3&gt;
  
  
  3. The Feedback Loop
&lt;/h3&gt;

&lt;p&gt;This is the key part.&lt;/p&gt;

&lt;p&gt;When I get a review comment, I ask myself: &lt;strong&gt;Is the rule right?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If yes → Fix the code.&lt;br&gt;&lt;br&gt;
If no → Update the rule.&lt;/p&gt;

&lt;p&gt;This creates a self-improving system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Code Review → Rule Violation Found →
  Fix Code OR Update Rule →
    Next Review Uses Updated Standards →
      Codebase Stays Consistent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Over time, the rules get better. The AI learns our actual patterns. And new team members can run the same review on their PRs to learn our standards instantly.&lt;/p&gt;
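&lt;p&gt;The loop above is small enough to state as code. A hedged sketch of the triage step (the types and names are my illustration, not part of the actual command):&lt;/p&gt;

```typescript
// Sketch of the "fix code OR update rule" triage at the heart of the pattern.
// Shapes are illustrative; the actual judgment call is made by a human reviewer.
interface Finding {
  file: string
  rule: string  // rule file that was violated, e.g. "api-development.mdc"
  issue: string
}

interface TriageResult {
  codeFixes: Finding[]   // the rule is still right: change the code
  ruleUpdates: Finding[] // the rule is outdated: change the rule
}

function triage(findings: Finding[], outdatedRules: string[]): TriageResult {
  const codeFixes: Finding[] = []
  const ruleUpdates: Finding[] = []
  for (const f of findings) {
    if (outdatedRules.includes(f.rule)) {
      ruleUpdates.push(f) // evolve the system
    } else {
      codeFixes.push(f)   // follow the rule
    }
  }
  return { codeFixes, ruleUpdates }
}
```

&lt;p&gt;Either branch moves the system forward: one keeps the codebase aligned with the rules, the other keeps the rules aligned with reality.&lt;/p&gt;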

&lt;h2&gt;
  
  
  What This Actually Looks Like
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Before: Random Feedback
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;PR #1:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Reviewer: "Can you add error handling here?"
Developer: "Sure... what kind?"
Reviewer: "Just try-catch I guess?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;PR #2 (Same Pattern):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Reviewer: "Looks good ✅"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  After: Consistent Standards
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;PR #1:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gs"&gt;**Rule Violated:**&lt;/span&gt; python-core-architecture.mdc
&lt;span class="gs"&gt;**Suggestion:**&lt;/span&gt; Use @handle_service_errors decorator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;PR #2 (Same Pattern):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gs"&gt;**Rule Violated:**&lt;/span&gt; python-core-architecture.mdc  
&lt;span class="gs"&gt;**Suggestion:**&lt;/span&gt; Use @handle_service_errors decorator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same issue. Same comment. Every time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reviewing Other People's Code
&lt;/h2&gt;

&lt;p&gt;This system shines when I review someone else's work.&lt;/p&gt;

&lt;p&gt;I check out their branch and run &lt;code&gt;@code-review.md&lt;/code&gt;. The AI generates structured feedback I can copy directly into GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gs"&gt;**Rule Violated:**&lt;/span&gt; Security Guidelines - "Never store secrets in code"
&lt;span class="gs"&gt;**Suggestion:**&lt;/span&gt; Replace with environment variable or AWS Secrets Manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of my usual:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"um... maybe don't hardcode that?"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The author immediately knows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What rule was violated&lt;/li&gt;
&lt;li&gt;Where it's documented&lt;/li&gt;
&lt;li&gt;How to fix it&lt;/li&gt;
&lt;li&gt;Whether the rule needs updating&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No more vague feedback. No more "I think we should..." discussions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Actually Works
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. AI Generates Consistent Code
&lt;/h3&gt;

&lt;p&gt;Cursor reads my rules before generating code. Instead of random patterns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Random patterns from AI&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;httpClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I get consistent code that matches our standards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Follows documented pattern&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;HttpClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUsers&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. New Developers Learn Faster
&lt;/h3&gt;

&lt;p&gt;New team members run &lt;code&gt;@code-review.md&lt;/code&gt; on their first PR. They instantly see what patterns we follow and why.&lt;/p&gt;

&lt;p&gt;No more "how do we handle errors here?" questions. The rules show concrete examples.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Less Bikeshedding in PRs
&lt;/h3&gt;

&lt;p&gt;Debates end faster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dev: "Should we use axios here?"
Reviewer: "Check api-development.mdc - we use HttpClient"
Dev: "Got it, fixing"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dev: "The rule says HttpClient, but this is a one-off call"
Reviewer: "Good point. Update the rule to include the exception"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The conversation shifts from opinions to whether the rule is right.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Code Reviews Focus on What Matters
&lt;/h3&gt;

&lt;p&gt;Instead of catching style issues, reviewers focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Business logic correctness&lt;/li&gt;
&lt;li&gt;Edge cases&lt;/li&gt;
&lt;li&gt;Architecture decisions&lt;/li&gt;
&lt;li&gt;Security implications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The boring stuff gets caught automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gut Check
&lt;/h2&gt;

&lt;p&gt;There's something I think about every time I review code now: &lt;strong&gt;Is my expertise actually being used here?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If I'm just checking for formatting or patterns we've agreed on, that's a waste of my time. The AI should catch that.&lt;/p&gt;

&lt;p&gt;If I'm evaluating trade-offs, thinking about failure modes, or spotting subtle bugs, that's where I add value.&lt;/p&gt;

&lt;p&gt;This system automates the first part so I can focus on the second.&lt;/p&gt;

&lt;p&gt;When I feel like I'm just rubber-stamping PRs, that's a signal. Either the rules need updating, or I need to look deeper at the actual logic.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"Isn't this just a linter?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No. Linters catch syntax. This catches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architecture violations (logic in wrong layer)&lt;/li&gt;
&lt;li&gt;Missing patterns (forgot to use our HttpClient)&lt;/li&gt;
&lt;li&gt;Security issues (hardcoded secrets)&lt;/li&gt;
&lt;li&gt;Accessibility problems (missing ARIA labels)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;"Won't this slow down development?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It speeds it up. Less back-and-forth in PRs. Fewer surprises. AI generates correct code the first time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"What if we disagree on a rule?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Perfect. Discuss it once. Document the decision. Never discuss it again.&lt;/p&gt;

&lt;p&gt;The rule can include the &lt;strong&gt;why&lt;/strong&gt; so future developers understand the reasoning.&lt;/p&gt;
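&lt;p&gt;For example, a rule entry with its reasoning might look like this (the wording is illustrative, based on the axios decision described earlier):&lt;/p&gt;

```markdown
## HTTP Requests

Use HttpClient for API requests. Direct axios is acceptable for one-off calls.

**Why:** HttpClient centralizes auth headers and error handling, but wrapping
a single throwaway request adds indirection without real benefit.
```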

&lt;h2&gt;
  
  
  Start Small
&lt;/h2&gt;

&lt;p&gt;You don't need 50 rules to see benefits.&lt;/p&gt;

&lt;p&gt;Start with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;One rule file for your biggest pain point&lt;/li&gt;
&lt;li&gt;The code review command&lt;/li&gt;
&lt;li&gt;The feedback loop (fix code OR update rule)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Run it on your next PR. See what happens.&lt;/p&gt;

&lt;p&gt;You can see my complete code review command and rule examples on GitHub: &lt;a href="https://github.com/haco29/ai-workflow" rel="noopener noreferrer"&gt;github.com/haco29/ai-workflow&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a world where tools evolve faster than teams, the only sustainable strategy is to evolve with them.&lt;br&gt;&lt;br&gt;
That’s the Red Queen race — and it’s how we keep our standards alive.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Note: This article was collaboratively written with AI assistance following a structured workflow.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cursor</category>
      <category>ai</category>
      <category>codereview</category>
    </item>
    <item>
      <title>Stop “Vibe Coding”: What Worked for Me as a Front-End Tech Lead</title>
      <dc:creator>Harel Coman</dc:creator>
      <pubDate>Sun, 12 Oct 2025 06:13:34 +0000</pubDate>
      <link>https://dev.to/haco29/stop-vibe-coding-what-worked-for-me-as-a-front-end-tech-lead-1ljh</link>
      <guid>https://dev.to/haco29/stop-vibe-coding-what-worked-for-me-as-a-front-end-tech-lead-1ljh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: I built a 7-step workflow to pair with AI effectively: align on context, plan, track progress, implement in small steps, reflect, test, and run deterministic quality checks before every commit.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After more than ten years of coding, I hit a weird spot.&lt;br&gt;
The tools kept getting smarter — React was rock solid, TypeScript was everywhere, and AI assistants started feeling like magic.&lt;br&gt;
But my workflow? Total chaos.&lt;/p&gt;

&lt;p&gt;Some days, I’d treat the AI like a teammate and actually collaborate. Other days, I’d just “vibe code,” throw random prompts at it, and hope for the best.&lt;br&gt;
Sometimes it worked. Most of the time, it didn’t.&lt;/p&gt;

&lt;p&gt;That’s when I realized I needed to figure out a better way to actually work with AI — not just use it.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problem with “Vibe Coding”
&lt;/h2&gt;

&lt;p&gt;Let’s be real — AI coding assistants are amazing.&lt;br&gt;&lt;br&gt;
They can spin up components from a single prompt, explain code better than Stack Overflow ever did, and sometimes even save you from your own bugs.  &lt;/p&gt;

&lt;p&gt;But when I first started using them, I completely messed up my approach.&lt;br&gt;&lt;br&gt;
I’d let the AI take the wheel.  &lt;/p&gt;

&lt;p&gt;One day I’d say, “Build me a user dashboard,” and it would spit out 200 lines of working code.&lt;br&gt;&lt;br&gt;
The next day I’d ask for “a better version,” and suddenly everything looked different — styles, logic, even naming.&lt;br&gt;&lt;br&gt;
It was fast, sure. But it was chaos.  &lt;/p&gt;

&lt;p&gt;No consistency.&lt;br&gt;&lt;br&gt;
No real sense of progress.&lt;br&gt;&lt;br&gt;
And no way to actually &lt;em&gt;learn&lt;/em&gt; from what worked.&lt;/p&gt;
&lt;h2&gt;
  
  
  Building a Systematic Approach
&lt;/h2&gt;

&lt;p&gt;After months of experimentation, I developed a workflow that treats AI as a skilled collaborator rather than a code generator. It's structured, repeatable, and gives me complete control while leveraging AI's strengths. Every commit follows the same 7-step process, and every decision is intentional.&lt;/p&gt;

&lt;p&gt;Here's exactly how I work now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jtk2a5x822kc7qsmo0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jtk2a5x822kc7qsmo0p.png" alt="workflow progress" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Context &amp;amp; Planning 🎯
&lt;/h2&gt;

&lt;p&gt;I start every feature by giving my AI assistant complete context. This isn't just about what I want to build - it's about ensuring the AI understands my codebase, patterns, and standards.&lt;/p&gt;

&lt;p&gt;In this phase, I deliberately spell out as much technical knowledge as possible - architecture constraints, data models, invariants, performance budgets, failure modes, and edge cases - so the plan encodes my expertise and trade-offs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;My typical prompt to Cursor:

"I need to add a table view and switcher to my card list component.

Please read the context:
- @src/components/ui/Table/Table.tsx
- @src/components/ui/ViewSwitcher/ViewSwitcher.tsx
- @src/components/Cases/Cases.tsx
- @.cursor/rules/react-development.mdc (React/TS patterns &amp;amp; standards)
- @.cursor/rules/quality-check.mdc (quality requirements)
- @.cursor/rules/api-development.mdc (API interaction patterns)

Now create a detailed implementation plan with:
1. Small, deliverable steps...
2. Testing strategy...
3. Component architecture...
4. Best practices to follow...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key here is my Cursor rules - custom guidelines I maintain that ensure consistent React patterns, accessibility standards, and quality requirements. Without these, I'd be starting from scratch every time.&lt;/p&gt;

&lt;p&gt;Note: these &lt;code&gt;@...md&lt;/code&gt; commands are my custom Cursor actions that encapsulate prompts and validations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Generate Progress Document 📋
&lt;/h2&gt;

&lt;p&gt;Once I have the plan, I run my custom command: &lt;code&gt;@progress-md.md&lt;/code&gt;. This generates a structured progress tracker that breaks the feature into small, testable steps.&lt;/p&gt;

&lt;p&gt;The output looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Table View Feature - Progress Tracker&lt;/span&gt;

&lt;span class="gu"&gt;## Project Overview&lt;/span&gt;

&lt;span class="gs"&gt;**Objective**&lt;/span&gt;: Add table view and switcher to card list
&lt;span class="gs"&gt;**Status**&lt;/span&gt;: In Progress | &lt;span class="gs"&gt;**Phase**&lt;/span&gt;: 1 of 5 | &lt;span class="gs"&gt;**Progress**&lt;/span&gt;: 0%

&lt;span class="gu"&gt;## Pending Steps&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; [ ] &lt;span class="gs"&gt;**STEP-001**&lt;/span&gt;: Create ViewSwitcher component
&lt;span class="p"&gt;-&lt;/span&gt; [ ] &lt;span class="gs"&gt;**STEP-002**&lt;/span&gt;: Integrate Table component with data
&lt;span class="p"&gt;-&lt;/span&gt; [ ] &lt;span class="gs"&gt;**STEP-003**&lt;/span&gt;: Add view switching logic
&lt;span class="p"&gt;-&lt;/span&gt; [ ] &lt;span class="gs"&gt;**STEP-004**&lt;/span&gt;: Add comprehensive tests
&lt;span class="p"&gt;-&lt;/span&gt; [ ] &lt;span class="gs"&gt;**STEP-005**&lt;/span&gt;: Update documentation

Each step is designed to be committed independently.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This document becomes my roadmap. No more vague "implement the feature" - I know exactly what's next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Implement Features ⚡
&lt;/h2&gt;

&lt;p&gt;Here's where the actual coding happens. I work through each step systematically, implementing small chunks that can be committed independently.&lt;/p&gt;

&lt;p&gt;For UI components, I use another custom command: &lt;code&gt;@start-playwright.md&lt;/code&gt; to launch Playwright. This gives the AI browser context, allowing it to see rendered components and validate interactions in real-time.&lt;/p&gt;

&lt;p&gt;I tell Cursor: "Based on our plan, implement STEP-001. Follow the patterns in &lt;code&gt;@src/components/ui/&lt;/code&gt; and ensure accessibility compliance."&lt;/p&gt;

&lt;p&gt;The result? A focused, testable component that fits perfectly into my architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Reflect on Changes 🔍
&lt;/h2&gt;

&lt;p&gt;After implementation, I run &lt;code&gt;@reflect-changes.md&lt;/code&gt; to analyze what was built. This command validates compliance with my development patterns and identifies improvement opportunities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Summary of Changes:
- Added ViewSwitcher component with proper state management
- Integrated Table component with existing data structure
- Used consistent styling patterns across components

React Development Pattern Compliance:
- Component Structure: PascalCase naming, proper organization
- TypeScript Usage: Proper interfaces and type safety
- Accessibility: WCAG 2.1 AA compliance, proper ARIA attributes
- State Management: Correct hooks usage and data flow

Extraction Opportunities:
- Switcher logic could be moved to a custom hook for reuse
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This reflection step catches inconsistencies before they become problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Add Tests 🧪
&lt;/h2&gt;

&lt;p&gt;Testing is non-negotiable. I run &lt;code&gt;@add-tests.md&lt;/code&gt; to generate user-centric tests that focus on real interactions, not implementation details.&lt;/p&gt;

&lt;p&gt;The generated tests look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;View switching&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;switches between card and table views&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Cases&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByRole&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sr"&gt;/table view/i&lt;/span&gt; &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByRole&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;table&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;toBeInTheDocument&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByRole&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sr"&gt;/card view/i&lt;/span&gt; &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getAllByTestId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;case-card&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBeGreaterThan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These tests validate user workflows while ensuring accessibility compliance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Quality Assurance ✅
&lt;/h2&gt;

&lt;p&gt;Before any commit, I run &lt;code&gt;@precommit.md&lt;/code&gt; for deterministic validation. No opinions here - just tools checking my work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Quality Checks:
✅ TypeScript compilation - No type errors
✅ ESLint - Code quality standards met
✅ Prettier - Consistent formatting
✅ Vitest - All 127 tests passing
✅ React patterns - Development standards compliance

📊 Quality metrics:
- 0 linting errors
- 0 type errors
- 98% test coverage
- All accessibility checks passed

🎯 SUGGESTED COMMIT MESSAGE:
feat: add table view and switcher to card list

✨ Key improvements:
- ViewSwitcher component for toggling between views
- Table component integration with existing data
- Accessible view switching with keyboard navigation
- Comprehensive test coverage for user workflows
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the tools pass, the code is ready. No human judgment needed.&lt;/p&gt;
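&lt;p&gt;For reference, here is a minimal sketch of the kind of script a &lt;code&gt;@precommit.md&lt;/code&gt; command can drive. The exact commands are assumptions based on standard npm tooling, not my actual setup; adapt them to the scripts in your &lt;code&gt;package.json&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/usr/bin/env sh
# Hypothetical pre-commit script mirroring the checks above.
set -e  # abort on the first failing check

npx tsc --noEmit                 # TypeScript compilation: no type errors
npx eslint . --max-warnings 0    # ESLint: code quality standards
npx prettier --check .           # Prettier: consistent formatting
npx vitest run --coverage        # Vitest: all tests passing, with coverage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because every check is a deterministic tool, the script either exits cleanly or points at exactly what to fix.&lt;/p&gt;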

&lt;h2&gt;
  
  
  Step 7: Commit &amp;amp; Continue 🚀
&lt;/h2&gt;

&lt;p&gt;With validation complete, I commit using the generated message and update my progress document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## ✅ Completed Steps&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; [x] &lt;span class="gs"&gt;**STEP-001**&lt;/span&gt;: Create ViewSwitcher component ✅
&lt;span class="p"&gt;  -&lt;/span&gt; Commit: feat: add table view and switcher
&lt;span class="p"&gt;  -&lt;/span&gt; Validated: All quality checks passed

&lt;span class="gu"&gt;## 🔄 In Progress&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; [ ] &lt;span class="gs"&gt;**STEP-002**&lt;/span&gt;: Integrate Table component with data 🔄
&lt;span class="p"&gt;  -&lt;/span&gt; Status: Ready to implement
&lt;span class="p"&gt;  -&lt;/span&gt; ETA: Next work session
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each step gets its own commit with clear value and context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Works
&lt;/h2&gt;

&lt;p&gt;After 10 years of coding, I've learned that consistency compounds. Every feature I build now follows this exact process, which means:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Predictable Quality&lt;/strong&gt;: Every commit meets the same high standards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Iteration&lt;/strong&gt;: Clear progress tracking means I can resume work instantly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Accountability&lt;/strong&gt;: The structured approach prevents "vibe coding" while maximizing AI benefits&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measurable Improvement&lt;/strong&gt;: I can optimize each step because the process is consistent&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  AI Should Amplify - Not Obscure - Your Expertise
&lt;/h2&gt;

&lt;p&gt;There's a gut-check I run every day: is my existing knowledge being fully used? If the answer is ever "no," that's a signal I'm slipping into autopilot. The goal of this workflow is not to outsource judgment - it's to amplify it, so that my advantages (taste, domain context, architectural instincts) compound through the AI instead of getting washed out by it.&lt;/p&gt;

&lt;p&gt;Practically, I design each step so my expertise has to show up: I reference my own rules and patterns in prompts, I constrain the options to the trade-offs I care about, and I review diffs with questions only I can answer ("Does this match our failure modes?" "Will this scale with our data shape?"). When that loop is present, I can feel my experience steering the outcome.&lt;/p&gt;

&lt;p&gt;If I ever feel my expertise isn't factoring into day-to-day work, I treat it as a process bug - not a personal one - and adjust the guardrails until my judgment is front and center again.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tools That Make It Possible
&lt;/h2&gt;

&lt;p&gt;My workflow relies on several key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cursor AI&lt;/strong&gt;: The intelligent assistant that understands my codebase and patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Cursor Rules&lt;/strong&gt;: &lt;code&gt;@.cursor/rules/react-development.mdc&lt;/code&gt;, &lt;code&gt;@.cursor/rules/quality-check.mdc&lt;/code&gt;, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Commands&lt;/strong&gt;: &lt;code&gt;@progress-md.md&lt;/code&gt;, &lt;code&gt;@reflect-changes.md&lt;/code&gt;, &lt;code&gt;@add-tests.md&lt;/code&gt;, &lt;code&gt;@precommit.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Playwright MCP&lt;/strong&gt;: For real browser context during UI development&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured Progress Tracking&lt;/strong&gt;: Living documents that guide each feature&lt;/li&gt;
&lt;/ul&gt;
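&lt;p&gt;To make the rules concrete, here is a hedged sketch of what one such file might contain. The frontmatter fields follow Cursor's &lt;code&gt;.mdc&lt;/code&gt; rule format; the rule content itself is a hypothetical example, not my actual rules:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
description: React development standards for this codebase
globs: ["src/**/*.tsx", "src/**/*.ts"]
alwaysApply: false
---

- Prefer function components with hooks; no class components
- Co-locate tests next to the component they cover
- Query the DOM by role in tests (getByRole) before falling back to test IDs
- Every interactive element must be keyboard-accessible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The point is that the standards live in version-controlled files the AI reads on every task, rather than in anyone's head.&lt;/p&gt;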

&lt;h2&gt;
  
  
  Looking Forward
&lt;/h2&gt;

&lt;p&gt;This workflow has transformed how I approach development. What used to take days of scattered effort now happens in focused, high-quality sessions. The AI isn't replacing my judgment - it's amplifying my systematic approach.&lt;/p&gt;

&lt;p&gt;If you're considering AI-assisted development, don't just "try it out." Build the guardrails, establish the patterns, and create the consistency that lets you optimize every iteration.&lt;/p&gt;

&lt;p&gt;The shift from traditional coding to this AI-augmented approach wasn't about the tools - it was about finding a systematic way to collaborate with them effectively.&lt;/p&gt;

&lt;p&gt;You can see my complete workflow presentation and methodology at: &lt;a href="https://github.com/haco29/ai-workflow" rel="noopener noreferrer"&gt;github.com/haco29/ai-workflow&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Am I
&lt;/h2&gt;

&lt;p&gt;Hey, I’m &lt;strong&gt;Harel&lt;/strong&gt; — a &lt;strong&gt;Front-End Tech Lead at &lt;a href="https://www.linkedin.com/company/verbit-ai/posts/?feedView=all" rel="noopener noreferrer"&gt;Verbit&lt;/a&gt;&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
Over the past few years, I’ve built and led React projects that grew from small ideas into large-scale ecosystems used daily by hundreds of people.&lt;/p&gt;

&lt;p&gt;Somewhere along the way, I realized that writing great code isn’t just about knowing React or TypeScript — it’s about &lt;strong&gt;building systems that make other developers faster, calmer, and more confident&lt;/strong&gt;. That’s the craft I’m obsessed with.&lt;/p&gt;

&lt;p&gt;I love exploring tools, patterns, and workflows that bring more intention to coding — from shared UI libraries and micro frontends to AI-assisted development.&lt;br&gt;&lt;br&gt;
The goal is always the same: &lt;strong&gt;make the front end less chaotic, and coding more deliberate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Outside of work, I’m usually chasing my two little kids around the house or over-engineering some side project for fun.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/harel-coman-16703289/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn →&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Note: This article was collaboratively written with AI assistance following the same structured workflow described above.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>react</category>
      <category>cursor</category>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
