
Erich

The LLM Imposter

A few weeks ago I finished a project that actually works. Handles real data, solves a real problem, runs well. I'm proud of it. I'm also... something else. Not ashamed exactly. Just aware of a voice I can't shake: You didn't really do this. This doesn't count.

I used LLMs¹ heavily throughout the process. Not vibe coding. I wasn't just prompting "build me a thing" and shipping whatever came out. I made architectural decisions, debugged failures, understood trade-offs. But still. I can't shake that voice.

There's an image of what a "real programmer" looks like. Someone who writes syntax from memory, who suffered through documentation for years, who earned their skills through late nights and cryptic error messages. The suffering was the point. If you didn't struggle, you didn't learn.

I internalized that standard somewhere along the way. And by that standard, using an LLM to accelerate past the friction feels like skipping the exam.

But this isn't the first time the standard changed.

Every abstraction layer in programming history faced the same resistance. Assembly to C: "You're hiding the machine, you'll never understand what's actually happening." C to managed languages: "Garbage collection? Memory management is the job." Using libraries: "You're importing code you've never read." And for a decade straight: "You're not a real engineer, you're just copying from Stack Overflow."

Each time, skeptics said the new way wasn't real programming. Each time, they were defending a standard that was about to become obsolete.

The abstraction didn't eliminate the need for understanding. You didn't need to manage registers anymore, but you still needed to understand performance. You didn't need to manually free memory, but you still needed to know why your program was leaking.

The programmers who insisted assembly was the only "real" programming were guarding a gate nobody needed to pass through anymore. Not because they were wrong about assembly being powerful. Because they were wrong about what mattered.

So what matters this time?

Code became cheap. Producing working syntax is commoditized now. An LLM can generate a function faster than I can type the signature. Maybe I just type slow.

But software is still expensive.

Knowing which components the system needs, how they interact, where it will fail at scale, what trade-offs you're making. None of that got cheaper. The LLM produces parts. It doesn't know which parts matter or where they go.

Think about what it means to be a mechanic. A parts supplier can hand you a carburetor². That's the easy part. Being a mechanic means knowing where it goes, how it connects to everything else, whether this particular carburetor is right for this particular engine. It means looking at a car that won't start and tracing the problem backward through systems you understand. It means knowing that a failing fuel pump will starve the engine. It means finding out the problem wasn't with the carburetor at all.

Anyone can order parts. The mechanic knows why the car runs.

Vibe coding is ordering parts and bolting them on until something happens. Sometimes you get a car. Usually you get an expensive mess that breaks in ways you can't diagnose because you never understood how it was supposed to work in the first place.

The friction didn't disappear when LLMs arrived. It relocated.

The old slog was syntax memorization, Stack Overflow archaeology, decoding documentation written by someone who hated you. The new slog is architecture, system design, evaluating outputs, catching "You're absolutely right!" mistakes, knowing when generated code is subtly wrong in ways that won't surface until production.

Different friction. Still friction. Still earns the outcome.

I've stopped asking myself "did I use AI to build this?"

The better question: if this breaks, can I fix it?

If yes, you built it. The tool you used to get there is irrelevant. If no, you have a pile of parts and a prayer.

I can debug my project. I can explain why the components exist and what they do. I can extend it, refactor it, reason about its failure modes. The LLM accelerated the syntax production. The engineering was mine.

That voice is real. But it's grading me on a standard for a game that already changed.

Besides, I use VS Code. Half the internet already doesn't think I'm a real programmer.


¹ Large Language Model - worth distinguishing from the broader "AI" label.

² Intentionally obsolete car part.
