Edward Burton

The Vibe Coding Delusion

I recently sat in a code review that terrified me.

A junior engineer—bright, enthusiastic, well-meaning—had just "vibe coded" a new feature. He hadn't written the logic himself. He had described the "vibe" of the feature to an LLM, pasted the output into our repository, and opened a PR.

It worked. The pixels were in the right place. The button clicked. The data loaded.

But when I opened the file, I saw the ghost of a future outage.

There were hard-coded timeout values. There was state logic duplicated across three different components. There was a useEffect hook with a dependency array so wild it looked like a lottery ticket. It was a "happy path" masterpiece. If the API responded in 200ms and the user never clicked the button twice, it was perfect.

In the real world, it was a grenade.

I've written a detailed analysis of why "Vibe Coding" is an economic delusion, but today I want to get practical. I want to show you the difference between generating code and engineering software.

We are going to take a piece of AI-generated "slop", dissect why it fails in production, and refactor it using what I call Specification Engineering.

The Trap: "Just Make It Work"

The promise of the current AI wave is that natural language is the new programming language. You tell the computer what you want. It handles the "how".

This is dangerous.

The "how" is where the bugs live. The "how" is where security vulnerabilities hide. When you abdicate the "how" to a probabilistic text generator, you aren't abstracting away complexity. You are hiding it until 3 AM when PagerDuty fires.

Let's look at a concrete example. I asked a popular coding agent to: "Create a React component that fetches user data and displays a profile card with an edit button."

Here is the "Vibe Code".

// UserProfile.jsx (The Vibe Version)
import React, { useState, useEffect } from 'react';

const UserProfile = ({ userId }) => {
  const [user, setUser] = useState(null);
  const [isEditing, setIsEditing] = useState(false);
  const [name, setName] = useState('');
  const [email, setEmail] = useState('');

  useEffect(() => {
    fetch(`https://api.example.com/users/${userId}`)
      .then(res => res.json())
      .then(data => {
        setUser(data);
        setName(data.name);
        setEmail(data.email);
      });
  }, [userId]);

  const handleSave = () => {
    fetch(`https://api.example.com/users/${userId}`, {
      method: 'PUT',
      body: JSON.stringify({ name, email }),
      headers: { 'Content-Type': 'application/json' }
    }).then(() => {
      setIsEditing(false);
      // refetch to update
      fetch(`https://api.example.com/users/${userId}`)
        .then(res => res.json())
        .then(data => setUser(data));
    });
  };

  if (!user) return <div>Loading...</div>;

  return (
    <div className="card">
      {isEditing ? (
        <div>
          <input value={name} onChange={e => setName(e.target.value)} />
          <input value={email} onChange={e => setEmail(e.target.value)} />
          <button onClick={handleSave}>Save</button>
        </div>
      ) : (
        <div>
          <h2>{user.name}</h2>
          <p>{user.email}</p>
          <button onClick={() => setIsEditing(true)}>Edit</button>
        </div>
      )}
    </div>
  );
};

export default UserProfile;

If you are a junior developer, this might look fine. It's clean. It's readable. It works.

If you are a senior engineer, your skin is crawling.

The Anatomy of Slop

Let's break down why this code—which an AI will happily generate for you a thousand times a day—is technically "slop".

  1. The Race Condition: Look at that useEffect. If userId changes rapidly (say, the user clicks through a list of profiles), the requests fire in order, but they can resolve out of order. You can end up on User B's page watching User A's stale response overwrite it a split second later (see the sketch after this list). The AI doesn't know about AbortController.
  2. The State Desync: We have user state, plus name and email duplicated as local state, manually synchronized inside the useEffect. This is the source of a thousand bugs. Look at the post-save refetch: it calls setUser, but never setName or setEmail. The copies have already drifted.
  3. The Error Vacuum: The fetch promise has no .catch(). If the API is down, the user sees... nothing? Or the app crashes when it tries to access user.name on undefined? The AI assumes the happy path because training data is full of tutorials, not production war stories.
  4. The Hardcoded Fragility: URLs are hardcoded. There is no loading state for the save action. If the user clicks "Save" five times because the internet is slow, we fire five PUT requests.
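
Here is a minimal sketch of that first failure mode, stripped of React so the timing is obvious. The latencies are illustrative:

// raceDemo.ts (sketch — the stale-response bug in isolation)
function fakeFetch(userId: number, latencyMs: number): Promise<string> {
  return new Promise(resolve =>
    setTimeout(() => resolve(`user-${userId}`), latencyMs)
  );
}

let rendered = '';

// The user clicks from profile 1 to profile 2. Request 1 is slow, request 2 is fast.
fakeFetch(1, 200).then(data => { rendered = data; }); // resolves LAST
fakeFetch(2, 50).then(data => { rendered = data; });  // resolves first

setTimeout(() => console.log(rendered), 300); // "user-1" — the stale response won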

This is the "Context Gap". The AI sees the file. It does not see the network latency. It does not see the user mashing the mouse button.

Security Note: Notice how the fetch implementation includes no authorization headers? The AI assumed a public API because I didn't explicitly tell it otherwise. In a real app, you just shipped a component that fails silently on a 401.
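
For contrast, here is roughly the minimum a production fetch needs that the vibe version skips. This is a sketch, not a prescribed API — the token helper and error messages are my assumptions:

// fetchUserAuthed.ts (sketch — token source and error shape are assumptions)
const getToken = (): string => localStorage.getItem('token') ?? ''; // hypothetical helper

async function fetchUserAuthed(userId: string): Promise<unknown> {
  const res = await fetch(`https://api.example.com/users/${userId}`, {
    headers: { Authorization: `Bearer ${getToken()}` },
  });
  if (res.status === 401) throw new Error('Unauthorized: session expired');
  if (!res.ok) throw new Error(`Request failed with ${res.status}`);
  return res.json();
}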

The Fix: Specification Engineering

The solution isn't to stop using AI. I use AI every day. The solution is to change how you use it.

You cannot just give the AI the "vibe". You must give it the specification.

We need to shift from "prompting" to "architecting". Before I generate a single line of implementation code, I write the types and the constraints. I force the AI to operate within a box I have defined.

Step 1: Define the Contract (Types)

Don't ask the AI to "make a component". Ask it to "implement this interface".

I start by writing the TypeScript definitions myself. This forces me to think about the data shape before the code exists.

// types.ts
export type User = {
  id: string;
  name: string;
  email: string;
  updatedAt: string;
};

export type UserProfileProps = {
  userId: string;
  onUpdate?: (user: User) => void;
  // Dependency injection for fetching prevents hardcoded URLs
  fetchUser: (id: string, signal: AbortSignal) => Promise<User>;
  updateUser: (id: string, data: Partial<User>) => Promise<User>;
};
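
For completeness, the injected functions might live in a thin adapter like this. The base URL and error messages are illustrative, not prescribed by the interface:

// api.ts (sketch of the injected fetchers — base URL is an assumption)
import { User } from './types';

export async function fetchUser(id: string, signal: AbortSignal): Promise<User> {
  const res = await fetch(`/api/users/${id}`, { signal });
  if (!res.ok) throw new Error(`Failed to load user: ${res.status}`);
  return res.json();
}

export async function updateUser(id: string, data: Partial<User>): Promise<User> {
  const res = await fetch(`/api/users/${id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  if (!res.ok) throw new Error(`Failed to update user: ${res.status}`);
  return res.json();
}

Now the component knows nothing about URLs or auth. Swap the adapter and the UI doesn't change.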

Step 2: Define the Constraints

Now, when I prompt the AI, I don't talk about vibes. I talk about constraints.

Prompt:
Implement the UserProfile component based on the UserProfileProps interface above.

Constraints:

  1. Use useQuery or a similar pattern to handle loading/error states explicitly.
  2. Handle race conditions using AbortSignal.
  3. Show a loading state on the Save button to prevent double-submission.
  4. Do not duplicate state. Use the server data as the source of truth.
  5. Handle error states with a user-friendly UI, not console logs.

Step 3: The Result (Engineered Code)

Here is what the AI produces when you actually treat it like a junior engineer who needs strict instructions rather than a wizard who reads minds.

// UserProfile.tsx (The Engineered Version)
import React, { useState, useEffect } from 'react';
import { User, UserProfileProps } from './types';

// Utility for safe state management
type FetchState<T> = 
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: Error };

export const UserProfile = ({ 
  userId, 
  fetchUser, 
  updateUser,
  onUpdate 
}: UserProfileProps) => {
  const [state, setState] = useState<FetchState<User>>({ status: 'idle' });
  const [isSaving, setIsSaving] = useState(false);
  const [editForm, setEditForm] = useState<Partial<User> | null>(null);

  useEffect(() => {
    const controller = new AbortController();
    setState({ status: 'loading' });

    fetchUser(userId, controller.signal)
      .then(data => setState({ status: 'success', data }))
      .catch(error => {
        if (error.name !== 'AbortError') {
          setState({ status: 'error', error });
        }
      });

    return () => controller.abort();
  }, [userId, fetchUser]);

  const handleSave = async () => {
    if (!editForm || state.status !== 'success') return;

    try {
      setIsSaving(true);
      const updated = await updateUser(userId, editForm);
      setState({ status: 'success', data: updated });
      setEditForm(null); // Exit edit mode
      onUpdate?.(updated);
    } catch (e) {
      alert("Failed to save"); // Ideally use a toast system here
    } finally {
      setIsSaving(false);
    }
  };

  if (state.status === 'loading') return <div className="skeleton">Loading...</div>;
  if (state.status === 'error') return <div className="error">Error loading user</div>;
  if (state.status === 'idle') return null;

  const { data } = state;
  const isEditing = editForm !== null;

  return (
    <div className="card">
      <header>
        {isEditing ? (
          <div className="form-group">
            <input
              value={editForm.name ?? data.name}
              onChange={e => setEditForm(prev => ({ ...prev!, name: e.target.value }))}
              disabled={isSaving}
            />
            <input
              value={editForm.email ?? data.email}
              onChange={e => setEditForm(prev => ({ ...prev!, email: e.target.value }))}
              disabled={isSaving}
            />
            <div className="actions">
              <button 
                onClick={handleSave} 
                disabled={isSaving}
              >
                {isSaving ? 'Saving...' : 'Save Changes'}
              </button>
              <button 
                onClick={() => setEditForm(null)}
                disabled={isSaving}
              >
                Cancel
              </button>
            </div>
          </div>
        ) : (
          <div className="view-mode">
            <h2>{data.name}</h2>
            <p>{data.email}</p>
            <button onClick={() => setEditForm({})}>Edit Profile</button>
          </div>
        )}
      </header>
    </div>
  );
};

This code is longer. It is more complex. But it is software.

  1. AbortController is present. Rapid ID changes won't corrupt the view.
  2. Explicit State Machine. We know exactly what state the UI is in (loading, error, success).
  3. Dependency Injection. The fetch logic is passed in, making this component testable and decoupled from the specific API implementation (proven in the test sketch after this list).
  4. UI Feedback. The user knows when it's saving. They can't double-click.
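
That testability claim is cheap to prove. Because the fetchers are props, a test stubs them with plain functions — no network mocking library required. A sketch, assuming @testing-library/react with jest-dom matchers:

// UserProfile.test.tsx (sketch — test runner and matchers are assumptions)
import { render, screen } from '@testing-library/react';
import { UserProfile } from './UserProfile';

test('renders the fetched user', async () => {
  const user = { id: '1', name: 'Ada', email: 'ada@example.com', updatedAt: '' };

  render(
    <UserProfile
      userId="1"
      fetchUser={async () => user}                             // stub: resolves instantly
      updateUser={async (_id, data) => ({ ...user, ...data })} // stub: echoes the patch
    />
  );

  expect(await screen.findByText('Ada')).toBeInTheDocument();
});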

The Shift: From Writer to Auditor

If you are just copy-pasting code from ChatGPT, you are not a developer. You are a clipboard manager.

The rise of AI coding tools changes the job description of a software engineer. We used to be writers. We spent 80% of our time generating syntax.

Now, we are auditors.

You can generate 1,000 lines of code in seconds. But can you verify them? Can you spot the subtle memory leak in the generated code? Can you see that the AI used a deprecated library because its training cutoff was 2023?
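
To make "subtle memory leak" concrete: the shape I see most often in generated React code is an effect that subscribes but never cleans up. This example is illustrative, not taken from the component above:

// LeakDemo.tsx (illustrative — pollStatus is a hypothetical callback)
import { useEffect } from 'react';

export function LeakDemo({ pollStatus }: { pollStatus: () => void }) {
  useEffect(() => {
    const id = setInterval(pollStatus, 1000);
    // Generated code routinely omits this cleanup; without it, the interval
    // keeps firing after the component unmounts.
    return () => clearInterval(id);
  }, [pollStatus]);

  return null;
}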

How to Audit AI Code

Here is my mental checklist when I review AI-generated PRs (including my own):

  1. The Happy Path Fallacy: Does this code assume the network never fails? Break it. Disconnect your wifi and click the button. What happens?
  2. The Security Scan: Did the AI sanitize inputs? Did it expose secrets? Did it accidentally create an injection vector? AI loves to concatenate SQL strings if you let it (sketched after this list).
  3. The Complexity Creeper: Did the AI create a new utility function that is almost identical to one we already have? AI doesn't know your codebase (yet). It loves to reinvent the wheel.
  4. The Hallucination Check: Check the imports. I once spent 30 minutes debugging a library that didn't exist. The AI had hallucinated a "perfect" npm package name and imported it.
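
On point 2, the concatenation trap looks like this. A sketch using node-postgres-style parameterized queries; the table and client wiring are illustrative:

// findUser.ts (sketch — assumes node-postgres and a users table)
import { Client } from 'pg';

async function findUser(client: Client, email: string) {
  // BAD: user input concatenated straight into SQL — an injection vector.
  // await client.query(`SELECT * FROM users WHERE email = '${email}'`);

  // GOOD: the driver passes the value separately; input stays data, not code.
  return client.query('SELECT * FROM users WHERE email = $1', [email]);
}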

Pro tip on hallucinated packages: when an AI suggests an import like import { heavyCompute } from 'react-heavy-utils', check npm immediately. AI models often combine real library names to create fake ones that sound plausible.

The Bigger Picture

We are seeing a paradox. It has never been easier to create code, yet it has never been harder to build maintainable software.

GitClear recently released data showing that since the explosion of AI coding tools, "code churn" (code written then deleted shortly after) is up, and code duplication is skyrocketing. We are creating legacy code faster than we can document it.

The abstraction is leaking.

When you use "Vibe Coding" tools—the ones that promise you can build an entire SaaS without knowing code—you are accruing debt. You are building on a foundation you do not understand. Eventually, you will need to optimize a query. You will need to integrate a legacy payment gateway. You will need to fix a race condition that only happens on Safari on Tuesdays.

If you don't know how the machine works, you cannot fix it.

TL;DR

  • Vibe Coding is a trap. Generating code based on loose intent creates "slop"—brittle, insecure, happy-path-only code.
  • Context is King. AI lacks the context of your architecture, security requirements, and network constraints.
  • Specify, Don't Prompt. Write types and interfaces first. Force the AI to fill in the implementation details within strict constraints.
  • Audit Everything. Your job is now code verification. Check for race conditions, error handling, and security flaws that AI ignores.
  • Complexity cannot be hidden. Abstractions always leak. You still need to understand the code.

Full analysis with code →


Let's Chat

Built something with AI that looked perfect but exploded in production? Or do you think I'm just an old man yelling at clouds? I'm genuinely curious.

More technical breakdowns at tyingshoelaces.com. I write about what works in production, not what looks good in demos.

