<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marlon M</title>
    <description>The latest articles on DEV Community by Marlon M (@nolrm).</description>
    <link>https://dev.to/nolrm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3585054%2Fa9f2ba27-c96d-454b-bf74-3c32626f2711.png</url>
      <title>DEV Community: Marlon M</title>
      <link>https://dev.to/nolrm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nolrm"/>
    <language>en</language>
    <item>
      <title>AI Agents Can Ship Code Faster Than You Can Review It. Here's What Stops Them.</title>
      <dc:creator>Marlon M</dc:creator>
      <pubDate>Wed, 18 Mar 2026 09:58:13 +0000</pubDate>
      <link>https://dev.to/nolrm/ai-agents-can-ship-code-faster-than-you-can-review-it-heres-what-stops-them-6k3</link>
      <guid>https://dev.to/nolrm/ai-agents-can-ship-code-faster-than-you-can-review-it-heres-what-stops-them-6k3</guid>
      <description>&lt;p&gt;&lt;em&gt;Most teams running AI agents have no enforcement at the git layer. Here's what's quietly building in your repo — and the two-line defence that stops it.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;An AI agent just wrote and committed 47 files. Did you review all of them?&lt;/p&gt;

&lt;p&gt;Probably not.&lt;/p&gt;

&lt;p&gt;Nobody does. That's the point — agents move faster than review. And if nothing is enforcing standards at the git layer, bad code reaches the repo at the same speed the agent writes it.&lt;/p&gt;

&lt;p&gt;This is the problem quality gates were built for. It used to be a slow, human-speed problem. Now it's urgent.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Migration That Broke Everything (And What Actually Saved It)
&lt;/h2&gt;

&lt;p&gt;Three years ago I was maintaining a white-label registry platform — a government web app powering multiple clients. We had to migrate from Vue 2 to Vue 3. Vue 3 changed almost everything: the reactivity system, the component model, the entire ecosystem. Some of that pain was inevitable. But the wall we hit in the first hour? That was ours.&lt;/p&gt;

&lt;p&gt;The terminal had thousands of errors. Some components were in TypeScript. Some weren't. Some had proper props with default values. Others had been copy-pasted years ago and never revisited. Slot handling had changed between Vue 2 and Vue 3 — a component would render in isolation, pass unit tests, and then silently break in a parent layout. Every broken slot had to be found by hand, by loading the full app, by navigating to the right screen.&lt;/p&gt;

&lt;p&gt;The framework changed. But the real damage was the absence of guardrails — no enforced patterns, no consistent structure, nothing that would have caught the drift commit by commit before it compounded into a wall.&lt;/p&gt;

&lt;p&gt;That was the human-speed version. Years of unchecked inconsistency, made visible all at once by a forced migration.&lt;/p&gt;




&lt;h2&gt;
  
  
  Now Imagine an Agent Doing That Every Five Minutes
&lt;/h2&gt;

&lt;p&gt;Now imagine the same thing — but an agent is committing code every five minutes.&lt;/p&gt;

&lt;p&gt;No guardrails means the agent generates code in whatever pattern it infers from context. Some files use TypeScript strictly. Others don't. Some follow your component conventions. Others are plausible-looking code that passes a basic check but violates three rules you defined in your style guide six months ago. The agent doesn't know. It wasn't told. And nothing is stopping it.&lt;/p&gt;

&lt;p&gt;By the time a human reviews it, the inconsistency is already in the repo. Multiplied across 47 files. And now it's part of your baseline.&lt;/p&gt;

&lt;p&gt;This isn't a hypothetical risk — it's already the default in agentic workflows without enforcement. The agent is fast. Capable. And completely indifferent to your project's conventions unless those conventions are defined, fed to the agent, and enforced at the git layer.&lt;/p&gt;

&lt;p&gt;Quality gates are that enforcement. When a human pushes, they've at least read the code. When an agent pushes, the gate &lt;em&gt;is&lt;/em&gt; the review.&lt;/p&gt;

&lt;p&gt;Some teams argue that agents are still "junior devs" and humans are still in control. I think that's already outdated.&lt;/p&gt;




&lt;h2&gt;
  
  
  Two Lines of Defence
&lt;/h2&gt;

&lt;p&gt;Whether the code comes from a human or an agent, you need the same two layers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Developer or Agent
        ↓
Line 1: Quality Gates  (pre-push, before code hits the repo)
  → linter (ESLint, ruff, clippy...)
  → formatter (Prettier, black...)
  → type checker (tsc, mypy...)
  → unit tests (jest, pytest, cargo test...)
        ↓
Repository → CI Pipeline
        ↓
Line 2: Integration / UI Tests  (before code hits production)
        ↓
Production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Line 1&lt;/strong&gt; is a pre-push git hook that detects your stack automatically and runs four checks in sequence — linting, formatting, type checking, and unit tests — using only the tools your project actually has installed. If anything fails, the push is blocked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❌ Quality Gates FAILED — push blocked.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Husky, no package manager, no Node.js dependency. Works in any git repo regardless of language. One line sets it up for the whole team: &lt;code&gt;git config core.hooksPath .contextkit/hooks&lt;/code&gt;. See the &lt;a href="https://contextkit-docs.vercel.app/docs/quality-gates" rel="noopener noreferrer"&gt;full stack support in the docs&lt;/a&gt;.&lt;/p&gt;
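&lt;p&gt;The hook's core logic is simple enough to sketch. The version below is a minimal illustrative script, not ContextKit's actual implementation: it assumes an npm-based project, uses config files as markers for which tools the repo actually uses, and exits non-zero to block the push on the first failure:&lt;/p&gt;

```shell
#!/bin/sh
# Illustrative minimal pre-push hook: NOT ContextKit's actual script.
# Each check runs only when its config file exists in the repo, so
# projects that lack a tool skip that check automatically.

run_check() {
  label="$1"; marker="$2"; shift 2
  [ -e "$marker" ] || return 0      # tool not configured here: skip silently
  echo "→ $label"
  "$@" || { echo "❌ Quality Gates FAILED — push blocked."; exit 1; }
}

run_check "lint (ESLint)"     .eslintrc.json  npx eslint .
run_check "format (Prettier)" .prettierrc     npx prettier --check .
run_check "types (tsc)"       tsconfig.json   npx tsc --noEmit
run_check "unit tests (jest)" jest.config.js  npx jest --bail

echo "✅ Quality Gates passed."
```

&lt;p&gt;Dropped into a hooks directory and wired up with &lt;code&gt;git config core.hooksPath&lt;/code&gt;, a script like this runs on every push with no Node-specific hook manager involved.&lt;/p&gt;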

&lt;p&gt;&lt;strong&gt;Line 2&lt;/strong&gt; catches what line 1 structurally can't — things that only appear at runtime: broken user flows, visual regressions, components that pass unit tests in isolation but fail in a real browser context. Tools like Cypress and Playwright handle this.&lt;/p&gt;

&lt;p&gt;The key insight: because unit tests already ran at line 1, your integration suite can focus purely on critical user paths rather than trying to cover everything. An hour-long test suite is expensive. The more you invest in unit coverage at line 1 — fast, cheap, build-time feedback — the leaner and more focused line 2 can be.&lt;/p&gt;

&lt;p&gt;Line 1 stops bad code at the gate. Line 2 stops broken behaviour from reaching users.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Need Before the Gates Can Work
&lt;/h2&gt;

&lt;p&gt;Most teams already have standards. They're in Confluence, a Notion doc, a wiki somewhere. Someone wrote them. Someone approved them. And then a deadline hit, and the push went out anyway — because nothing mechanically stopped it.&lt;/p&gt;

&lt;p&gt;That's the real problem. Not that standards don't exist. It's that documentation you have to remember to follow isn't a guardrail. It's a suggestion. And suggestions don't scale with agents.&lt;/p&gt;

&lt;p&gt;What gates actually enforce are standards that live close to the code — version-controlled, readable by your AI tools, and checked on every push:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.contextkit/standards/
├── glossary.md        ← project terminology
├── code-style.md      ← coding conventions
├── testing.md         ← test patterns
├── architecture.md    ← decisions and constraints
└── ai-guidelines.md   ← rules for AI-generated code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the loop: &lt;strong&gt;standards define what correct looks like. Gates enforce it. Agents read the standards and write to them.&lt;/strong&gt; Without the standards, the agent guesses. Without the gates, the guesses reach the repo unchecked. And the Confluence doc nobody checked before pushing? It doesn't help either.&lt;/p&gt;

&lt;p&gt;This is exactly what I didn't have three years ago. The standards existed — loosely, informally, in people's heads and in docs nobody opened under pressure. If &lt;code&gt;code-style.md&lt;/code&gt; had been enforced from day one, the inconsistency we found during the migration would have been caught year by year, push by push, instead of all at once.&lt;/p&gt;

&lt;p&gt;I ended up building &lt;a href="https://contextkit-docs.vercel.app" rel="noopener noreferrer"&gt;ContextKit&lt;/a&gt; to handle this — standards folder, git hooks, and bridge files for whatever AI tools your team uses. One install.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;With agents writing code at scale, quality gates are no longer optional — they're the only automated review that scales with them. Letting agents push without enforced gates is the fastest way to degrade a codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 1 — Quality Gates (pre-push)&lt;/strong&gt;&lt;br&gt;
Runs linter, formatter, type checker, and unit tests before code hits the repo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 2 — Integration / UI Tests (CI)&lt;/strong&gt;&lt;br&gt;
Catches broken flows, runtime regressions, visual bugs before code hits production.&lt;/p&gt;

&lt;p&gt;Strong unit coverage at line 1 reduces the cost and surface area of line 2.&lt;/p&gt;




&lt;p&gt;The real question isn't whether you trust your agent. It's whether your repo does.&lt;/p&gt;

&lt;p&gt;I'm curious how teams are actually handling this right now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are you letting agents commit directly to your repo?&lt;/li&gt;
&lt;li&gt;Or is every change still gated by human review?&lt;/li&gt;
&lt;li&gt;If you have gates — what are you actually enforcing?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have a strong opinion, but I'd rather hear what your team is doing first.&lt;/p&gt;






&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; @nolrm/contextkit &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;your-project &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ck &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;a href="https://npmjs.com/package/@nolrm/contextkit" rel="noopener noreferrer"&gt;npmjs.com/package/@nolrm/contextkit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/nolrm/contextkit" rel="noopener noreferrer"&gt;github.com/nolrm/contextkit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://contextkit-docs.vercel.app/docs/quality-gates" rel="noopener noreferrer"&gt;contextkit-docs.vercel.app/docs/quality-gates&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Written by Marlon Maniti. I build tools for AI-native development workflows. Follow for more on context engineering, squad pipelines, and shipping with AI at speed.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>devops</category>
    </item>
    <item>
      <title>How I Get Claude to Think Like 5 Specialists (And Never Lose Context)</title>
      <dc:creator>Marlon M</dc:creator>
      <pubDate>Wed, 04 Mar 2026 23:22:33 +0000</pubDate>
      <link>https://dev.to/nolrm/how-i-get-claude-to-think-like-5-specialists-and-never-lose-context-4h4l</link>
      <guid>https://dev.to/nolrm/how-i-get-claude-to-think-like-5-specialists-and-never-lose-context-4h4l</guid>
      <description>&lt;p&gt;&lt;em&gt;Start alone. Scale to a team. The context never dies.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;You open a new session. You tell the AI to be your product manager. Then your architect. Then your developer, tester, and reviewer, all in the same conversation, all at once. And it tries. It really does. But it's juggling five jobs with no structure, and something always slips. An edge case the PO should have caught. A test that never got written. A review that missed the obvious thing.&lt;/p&gt;

&lt;p&gt;This isn't an AI problem. It's a structure problem.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Fix: Five Roles, Not Five People
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.npmjs.com/package/@nolrm/contextkit" rel="noopener noreferrer"&gt;contextkit&lt;/a&gt;&lt;/strong&gt; is built around a simple idea: the best way to build something is to think about it in distinct roles, in sequence, with each role fully informed by the one before it.&lt;/p&gt;

&lt;p&gt;Its Squad workflow gives you five specialized commands, each representing a role:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/squad&lt;/code&gt;: Product Owner&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/squad-architect&lt;/code&gt;: Architect&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/squad-dev&lt;/code&gt;: Developer&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/squad-test&lt;/code&gt;: Tester&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/squad-review&lt;/code&gt;: Reviewer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don't need five people. One person can run all five. The discipline of switching roles is what forces you to catch things you'd miss if you just dove straight into code.&lt;/p&gt;

&lt;p&gt;And everything flows through a single shared file: &lt;code&gt;.contextkit/squad/handoff.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbs0ta9ltm3v6i3vgead.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbs0ta9ltm3v6i3vgead.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Works: One File Carries Everything
&lt;/h2&gt;

&lt;p&gt;When you run &lt;code&gt;/squad "add a user authentication flow"&lt;/code&gt;, contextkit creates the handoff file and the Product Owner role writes into it: the user story, acceptance criteria, edge cases, and what's explicitly out of scope. That's the spec. It stays in the file forever.&lt;/p&gt;

&lt;p&gt;Then &lt;code&gt;/squad-architect&lt;/code&gt; reads that spec and writes its own section: the technical approach, which files to change, the trade-offs, the steps in order.&lt;/p&gt;

&lt;p&gt;Then &lt;code&gt;/squad-dev&lt;/code&gt; reads the architect's plan and implements. Every decision made along the way gets documented in the handoff.&lt;/p&gt;

&lt;p&gt;Then &lt;code&gt;/squad-test&lt;/code&gt; reads the acceptance criteria and the implementation. Writes and runs tests against both.&lt;/p&gt;

&lt;p&gt;Then &lt;code&gt;/squad-review&lt;/code&gt; reads everything and gives a &lt;code&gt;PASS&lt;/code&gt; or &lt;code&gt;NEEDS-WORK&lt;/code&gt; verdict with specific notes.&lt;/p&gt;

&lt;p&gt;No role starts cold. No context is lost. Each role inherits everything the previous one knew.&lt;/p&gt;
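&lt;p&gt;Concretely, the handoff file grows one section per role. A hypothetical skeleton (section names and fields are illustrative, not contextkit's exact template):&lt;/p&gt;

```
# Handoff: add a user authentication flow

## Product Owner (/squad)
User story, acceptance criteria, edge cases, out of scope.

## Architect (/squad-architect)
Technical approach, files to change, trade-offs, ordered steps.

## Developer (/squad-dev)
Implementation notes and every decision made along the way.

## Tester (/squad-test)
Tests written and results against the acceptance criteria.

## Reviewer (/squad-review)
Verdict: PASS or NEEDS-WORK, with specific notes.
```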




&lt;h2&gt;
  
  
  Why This Makes You Better Solo
&lt;/h2&gt;

&lt;p&gt;When you wear all five hats in a single AI session, you tend to skip the uncomfortable parts. The architect in you gets impatient and jumps to code. The developer in you doesn't want to write tests. The reviewer in you is too close to the work to see the gaps.&lt;/p&gt;

&lt;p&gt;Squad forces you to slow down at each stage, think from one perspective at a time, and document what you found before moving on. The result is work that's more complete, more reasoned, and easier to revisit.&lt;/p&gt;

&lt;p&gt;The handoff file becomes your own second brain for the task. When you come back tomorrow, or next week, you don't re-read the codebase. You open the file and know exactly where things stand.&lt;/p&gt;




&lt;h2&gt;
  
  
  Use the Right Model for Each Role
&lt;/h2&gt;

&lt;p&gt;Not every role needs the same horsepower. A reviewer reading a complex spec might warrant your most capable model. A developer grinding through implementation steps can run on something faster and cheaper.&lt;/p&gt;

&lt;p&gt;Squad's &lt;code&gt;config.md&lt;/code&gt; lets you configure this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .contextkit/squad/config.md
checkpoint: po
model_routing: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;model_routing: true&lt;/code&gt;, &lt;code&gt;/squad-auto&lt;/code&gt; automatically routes Dev and Test phases to Claude Haiku, saving roughly 35% on tokens, while keeping Architect and Review on your primary model. You get the right intelligence at each step without overpaying for it.&lt;/p&gt;




&lt;h2&gt;
  
  
  And When You Want to Bring Someone Else In
&lt;/h2&gt;

&lt;p&gt;Here's where it gets powerful beyond solo work.&lt;/p&gt;

&lt;p&gt;Because the handoff file is plain markdown, it's completely portable. Any developer, any AI tool, any teammate can open it and contribute to the next role.&lt;/p&gt;

&lt;p&gt;You could run the PO spec yourself in Claude Code, then send the file to a senior engineer who does the architecture in Cursor. A junior dev picks up the implementation. QA runs the tests. Anyone does the review.&lt;/p&gt;

&lt;p&gt;No re-prompting. No "let me explain the project." No "what did we decide about the database?" It's all in the file. It was always in the file.&lt;/p&gt;

&lt;p&gt;The same structure that makes you better as a solo developer becomes the coordination layer for your entire team, with zero extra setup.&lt;/p&gt;




&lt;h2&gt;
  
  
  Run It Your Way
&lt;/h2&gt;

&lt;p&gt;Run &lt;code&gt;/squad "add user authentication"&lt;/code&gt; and then &lt;code&gt;/squad-auto&lt;/code&gt;. The pipeline writes the spec, designs the architecture, implements the code, runs tests, and delivers a verdict. Hands-free.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/squad &lt;span class="s2"&gt;"add user authentication"&lt;/span&gt;   &lt;span class="c"&gt;# PO writes the spec&lt;/span&gt;
/squad-auto                        &lt;span class="c"&gt;# Architect → Dev → Test → Review, hands-free&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Want to step through manually or run batch tasks across multiple features? The &lt;a href="https://contextkit-docs.vercel.app/docs/squad" rel="noopener noreferrer"&gt;docs cover it&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Something Doesn't Add Up
&lt;/h2&gt;

&lt;p&gt;Real work isn't linear. Any role can raise a question for an upstream role. The pipeline pauses, you answer it, you re-run the command to continue. Nothing gets silently skipped. Every question and answer lives in the handoff file. The trace is complete.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Shift
&lt;/h2&gt;

&lt;p&gt;Most developers using AI work in isolated sessions: private, fragile, starting from scratch every time.&lt;/p&gt;

&lt;p&gt;Squad makes your context durable and portable. It lives in a file that survives session resets, tool switches, and handoffs between people. Start alone, scale to a team. The file travels with the work.&lt;/p&gt;

&lt;p&gt;One developer can think like five specialists. Five developers can work like one team.&lt;/p&gt;

&lt;p&gt;That's the shift.&lt;/p&gt;




&lt;h2&gt;
  
  
  Get Started in 60 Seconds
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Install the CLI&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; @nolrm/contextkit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Set up your project&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;your-project
contextkit &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a &lt;code&gt;.contextkit/&lt;/code&gt; folder with skeleton standards files. After setup, run &lt;code&gt;/analyze&lt;/code&gt; in your AI tool. It scans your codebase and fills those files with your project's actual conventions: naming patterns, architecture decisions, tech stack specifics. From that point on, every AI session starts with full project context already loaded.&lt;/p&gt;
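&lt;p&gt;For example, after &lt;code&gt;/analyze&lt;/code&gt; runs, &lt;code&gt;code-style.md&lt;/code&gt; might contain something like this (the contents here are hypothetical; what you actually get depends on your codebase):&lt;/p&gt;

```
# Code Style

- Language: TypeScript, strict mode; no implicit any
- Components: one per file, PascalCase file names
- Functions: camelCase; constants: SCREAMING_SNAKE_CASE
- Tests: colocated *.test.ts files, Jest
```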

&lt;p&gt;&lt;strong&gt;3. Start your first Squad task&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/squad &lt;span class="s2"&gt;"your task here"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The handoff file is created. The PO spec is written. You're off.&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;npm:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/@nolrm/contextkit" rel="noopener noreferrer"&gt;npmjs.com/package/@nolrm/contextkit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/nolrm/contextkit#readme" rel="noopener noreferrer"&gt;github.com/nolrm/contextkit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docs:&lt;/strong&gt; &lt;a href="https://contextkit-docs.vercel.app/docs/squad" rel="noopener noreferrer"&gt;contextkit-docs.vercel.app/docs/squad&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Written by Marlon Maniti&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I got tired of re-explaining my codebase to AI tools every session. contextkit is what I built to fix that. If you're using AI to ship software, this is for you.&lt;/p&gt;

&lt;p&gt;Follow me for more on AI-native dev workflows. If this saved you from one lost-context session, hit the clap button.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>⚡ Stop Explaining Your Project to AI - Let It Learn with Vibe Kit</title>
      <dc:creator>Marlon M</dc:creator>
      <pubDate>Tue, 28 Oct 2025 02:36:19 +0000</pubDate>
      <link>https://dev.to/nolrm/stop-explaining-your-project-to-ai-let-it-learn-with-vibe-kit-40fh</link>
      <guid>https://dev.to/nolrm/stop-explaining-your-project-to-ai-let-it-learn-with-vibe-kit-40fh</guid>
      <description>&lt;p&gt;You know that feeling when your AI assistant writes decent code… but completely ignores your style, naming, and structure? You spend five minutes explaining your setup before you even get a usable response. Yeah, that’s not productive.&lt;/p&gt;

&lt;p&gt;So I built Vibe Kit, a CLI that helps your AI tools &lt;em&gt;vibe&lt;/em&gt; with your codebase. No more essay-length prompts. Just smarter AI that already understands your project.&lt;/p&gt;

&lt;p&gt;👉 Full docs and setup guide: &lt;a href="https://vibe-kit-docs.vercel.app" rel="noopener noreferrer"&gt;vibe-kit-docs.vercel.app&lt;/a&gt;&lt;br&gt;
📦 npm → &lt;a href="https://www.npmjs.com/package/@nolrm/vibe-kit" rel="noopener noreferrer"&gt;@nolrm/vibe-kit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqodw4rvhwgmnwrldfnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqodw4rvhwgmnwrldfnv.png" alt=" " width="800" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧩 The Problem
&lt;/h2&gt;

&lt;p&gt;AI tools like Cursor, Copilot, Claude, or Gemini are incredible at writing syntax. But they don’t know &lt;em&gt;your&lt;/em&gt; architecture, naming patterns, or test style. They hallucinate stuff. They mix tabs with spaces. They think your “checkout flow” is a PayPal button.&lt;/p&gt;

&lt;p&gt;You end up rewriting half the code they generate. And that kills the flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 The Fix: Context Engineering
&lt;/h2&gt;

&lt;p&gt;Instead of explaining your conventions every time you chat with an AI, teach it once through structured markdown context. That’s what Vibe Kit does.&lt;/p&gt;

&lt;p&gt;It sets up a &lt;code&gt;.vibe-kit/&lt;/code&gt; folder in your repo filled with markdown files that describe your project standards, architecture, and patterns. Every AI tool you use can read those files directly.&lt;/p&gt;
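&lt;p&gt;The layout looks roughly like this (file names are illustrative of the idea, not necessarily the exact set the installer creates):&lt;/p&gt;

```
.vibe-kit/
├── glossary.md        ← project terms and language
├── code-style.md      ← patterns, naming, formatting
├── testing.md         ← rules for consistency
├── architecture.md    ← key design decisions
└── ai-guidelines.md   ← dos and don'ts for your AI tools
```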

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyse60ayrkjjwyng4jft.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyse60ayrkjjwyng4jft.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each file becomes part of your project’s knowledge base. The next time your AI assistant runs, it already knows how your code should look and behave.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⏱️ Less Explaining, More Building
&lt;/h2&gt;

&lt;p&gt;One of the biggest time-sinks in AI-assisted coding is &lt;em&gt;repeating context&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Before you get the perfect output, you end up typing paragraphs like:&lt;/p&gt;

&lt;p&gt;“We use strict TypeScript, React functional components, Jest for testing, and our checkout flow lives in &lt;code&gt;/src/features/payments&lt;/code&gt;…”&lt;/p&gt;

&lt;p&gt;With Vibe Kit, that context already lives in &lt;code&gt;.vibe-kit/&lt;/code&gt; markdown files.&lt;/p&gt;

&lt;p&gt;The AI reads them automatically. So now, your prompts can be short and human:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;“Add checkout flow for customer”&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Boom. It already knows your stack, coding standards, and testing approach.&lt;/p&gt;

&lt;p&gt;Less talking. More shipping.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 Why It Matters
&lt;/h2&gt;

&lt;p&gt;We’re moving past “prompt engineering.” The real unlock is &lt;strong&gt;context engineering&lt;/strong&gt;. It’s not about crafting perfect prompts; it’s about giving your AI the &lt;em&gt;right information&lt;/em&gt; before you even start typing. With &lt;code&gt;.vibe-kit/&lt;/code&gt;, your team defines one shared source of truth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Glossary&lt;/strong&gt; → Project terms &amp;amp; language&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Style&lt;/strong&gt; → Patterns, naming, formatting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt; → Rules for consistency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture&lt;/strong&gt; → Key design decisions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Guidelines&lt;/strong&gt; → Dos and don’ts for your AI tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once defined, it’s reusable across projects, teams, and tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  🤝 Works With All The Cool Tools
&lt;/h2&gt;

&lt;p&gt;Cursor, VS Code (Copilot Chat), &lt;strong&gt;Codex CLI&lt;/strong&gt;, Claude CLI, Aider, Continue.dev, Gemini CLI: Vibe Kit plays nicely with all of them.&lt;/p&gt;

&lt;p&gt;Each integration adds its own setup without overwriting anything.&lt;/p&gt;

&lt;p&gt;Every developer keeps their preferred AI, but shares the same project context.&lt;/p&gt;

&lt;h2&gt;
  
  
  💬 Why Developers Love It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🧠 &lt;strong&gt;Smarter AI&lt;/strong&gt; — Generates code that fits your stack&lt;/li&gt;
&lt;li&gt;🌍 &lt;strong&gt;Cross-platform&lt;/strong&gt; — Works everywhere&lt;/li&gt;
&lt;li&gt;⚡ &lt;strong&gt;Zero config&lt;/strong&gt; — Auto-detects tools and package managers&lt;/li&gt;
&lt;li&gt;🛡️ &lt;strong&gt;Safe install&lt;/strong&gt; — Always keeps backups&lt;/li&gt;
&lt;li&gt;🤝 &lt;strong&gt;Team-friendly&lt;/strong&gt; — One context shared across all AI tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🌐 Links
&lt;/h2&gt;

&lt;p&gt;📦 npm → &lt;a href="https://www.npmjs.com/package/@nolrm/vibe-kit" rel="noopener noreferrer"&gt;@nolrm/vibe-kit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💻 GitHub → &lt;a href="https://github.com/nolrm/vibe-kit" rel="noopener noreferrer"&gt;github.com/nolrm/vibe-kit&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📚 Docs → &lt;a href="https://vibe-kit-docs.vercel.app" rel="noopener noreferrer"&gt;vibe-kit-docs.vercel.app&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ✍️ Made by a Dev, for Devs
&lt;/h2&gt;

&lt;p&gt;Hey, I’m Marlon Maniti. I built Vibe Kit because I got tired of explaining my codebase to AI tools over and over.&lt;/p&gt;

&lt;p&gt;Now, I just give them context once and they remember.&lt;/p&gt;

&lt;p&gt;If you use AI to build, you’ll vibe with this.&lt;/p&gt;

&lt;p&gt;⭐ Star it, share feedback, or open an issue. Let’s make AI coding actually collaborative.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
      <category>rag</category>
    </item>
  </channel>
</rss>
