Mr. 0x1

LLMs Aren’t the Problem. Your Prompts Are.

“I don’t care how the code got into your IDE.

I care whether it respects the repo like the rest of us do.”

That’s the argument. Plain and simple.

You can prompt, paste, pair with GPT-4o, or summon code from the divine energy of Vim macros — doesn’t matter.

But if what you’re shipping doesn’t think like the repo thinks?

You’re vibe coding.


🔍 We Know When You’re Prompting

Here’s what it looks like:

🤖 Prompt:

```
Write a function to fetch data from an API with retry logic and timeouts.
```

💀 Output:

```typescript
async function fetchWithRetry(url: string, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error("Request failed");
      return await res.json();
    } catch (err) {
      if (i === retries - 1) throw err;
    }
  }
}
```

Notice what's missing: no backoff, no timeout (despite the prompt asking for one), and zero awareness of whatever retry utilities the repo already has.

✅ What You Should’ve Prompted:

```
Use our `dataClient.get()` wrapper from `shared/http.ts`, apply `ExponentialRetry` from `lib/retry.ts`, and follow the pattern in `getUserById()` from `user-service.ts`.
```

💎 Output:

```typescript
import { dataClient } from 'shared/http';
import { ExponentialRetry } from 'lib/retry';

export async function fetchUserData(endpoint: string) {
  return await ExponentialRetry(() =>
    dataClient.get(endpoint)
  );
}
```
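
Worth noting: `dataClient` and `ExponentialRetry` are names this article made up for the example, and your repo will have its own equivalents. As a rough sketch, the shared utility being referenced might look something like this:

```typescript
// Hypothetical lib/retry.ts: a minimal sketch, not a real package.
export interface RetryOptions {
  retries?: number;     // maximum attempts
  baseDelayMs?: number; // delay before the second attempt
}

export async function ExponentialRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 200 }: RetryOptions = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries - 1) break; // out of attempts
      // Back off exponentially: 200 ms, 400 ms, 800 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

The exact implementation doesn't matter; what matters is that the retry policy lives in one file every caller shares.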

👃 Code Smells Like Vibe

You're not wrong...

You're just not with us.

Here’s what sets off our Spidey-sense in PRs:

| 🤢 What You Did | 😤 What We Expected |
| --- | --- |
| Created a custom retry loop | Used our shared retry utility |
| Defined new logging config | Centralized logging in `core/logger.ts` |
| Wrote a class | Functional composition everywhere |
| Modified `.env` directly | Used `ConfigLoader` overrides |
| Used raw `fetch()` calls | Used Axios with interceptors configured |
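
The "expected" column leans on shared infrastructure actually existing. A sketch of what that might look like, assuming Axios (the file paths, the logger, and the interceptor details are all illustrative):

```typescript
// Hypothetical shared/http.ts: the kind of wrapper the table assumes.
// Axios is real; everything repo-specific here is made up.
import axios from 'axios';
import { logger } from 'core/logger'; // assumed central logger

export const dataClient = axios.create({
  baseURL: process.env.API_BASE_URL,
  timeout: 5_000, // every caller gets the same timeout policy
});

// Auth is attached in one place instead of at every call site.
dataClient.interceptors.request.use((config) => {
  config.headers.set('Authorization', `Bearer ${process.env.API_TOKEN}`);
  return config;
});

// Failures are logged in one place instead of scattered console.errors.
dataClient.interceptors.response.use(
  (response) => response,
  (error) => {
    logger.error('http_request_failed', { url: error.config?.url });
    return Promise.reject(error);
  },
);
```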

🧠 Prompting Is Programming Now

Prompting isn’t “just asking for help.”

It’s authoring system behavior.

Prompting lives in your IDE now. It's your co-author.

If you don’t guide it like a junior engineer, it will happily hallucinate an entire sub-framework into your codebase.


🧵 Your Prompts Shape the Codebase

Think about it like this:

| 🚫 Bad Prompt | ✅ Good Prompt |
| --- | --- |
| "Write a function to fetch X" | "Use `fetchClient`; apply retry from `lib/retry.ts`; match `getFoo()` pattern." |
| "Add logging" | "Use `auditLogger` from `lib/logging` with `eventName`, `userId`." |
| "Save to database" | "Use `dbClient.transaction()` with scoped `tenantId` logic." |
| "Sort results alphabetically" | "Mirror sort logic in `listActiveUsers()` for pagination stability." |
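
To make one of those rows concrete, here's roughly what the "Save to database" prompt could produce. `dbClient`, its `transaction()`/`insert()` API, and the tenant-scoping convention are hypothetical stand-ins for whatever your repo defines:

```typescript
// Hypothetical result of the "Save to database" good prompt.
// dbClient and its API are illustrative, not a real library.
import { dbClient } from 'shared/db';

interface Order {
  id: string;
  total: number;
}

export async function saveOrder(tenantId: string, order: Order) {
  return dbClient.transaction(async (tx) => {
    // Writes are scoped to the caller's tenant, matching the
    // multi-tenancy convention the prompt spelled out.
    await tx.insert('orders', { ...order, tenantId });
  });
}
```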

💡 Prompt Hygiene Checklist

Before you hit Enter, ask yourself:

  • ✅ Did I reference internal APIs or utilities?
  • ✅ Did I give it a pattern or example to follow?
  • ✅ Did I define how the output will be used or tested?
  • ✅ Did I call out any anti-patterns to avoid?
  • ✅ Did I keep the scope small and focused?
  • ✅ Did I actually care about the result?
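
Put together, a prompt that passes every box reads like a mini-spec rather than a wish (the function and file names below reuse this article's examples and are illustrative):

```
Refactor fetchUserData in user-service.ts to call dataClient.get()
from shared/http.ts with ExponentialRetry from lib/retry.ts, following
the getUserById() pattern. Don't introduce a new retry loop or new
logging config. Keep it to one function; existing user-service tests
should still pass.
```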

🎯 The Real Goal

You don’t need to fight LLMs.

You need to prompt like a maintainer.

You need to care like a teammate.

Speed isn’t the problem.

Shipping unshaped, contextless prototypes into prod is.


🖼️ Bonus Meme Break

“When the PR works but violates every convention we’ve ever agreed on”

(image: Krusty the Clown "fire" meme)


🔚 Final Word

LLMs are brilliant.

But they’re pattern mimics, not context stewards.

You are the steward.

You’re not just coding anymore — you’re orchestrating.

Write better prompts.

And remember: your repo deserves more than vibes.
