On the Vibe Coders and Their Lies


Let me tell you what I was doing while Andrej Karpathy was coining the term "vibe coding."

I was writing a single-file, modal text editor in C. For years, my primary editor was vim; not neovim, just regular old vim. I'm not an IDE person, but my intention in writing this isn't to make you a terminal supremacist. Some time back, I found snaptoken's "Build Your Own Text Editor" tutorial, based on antirez's original kilo, and kind of just spun out from there. I finished the tutorial, but I still had an itch to scratch and wanted to dive further into both the C language and editors themselves: just me, a terminal, and a lot of segfaults. Before long, the editor was generally usable, and I kept reaching for Vim motions that weren't there. So I started building them out. Then I wanted split paneling, and built that out. Then I wanted another feature, and another, until the project grew well beyond the time I'd set aside for it. Life and family are busy. By the time I was done, Hako (Japanese for "box") was north of 5,000 lines, and I still had ideas. I was still missing vim functions I needed, and I had begun thinking about integrating AI into both the workflow and the project itself.

[Image: Hako rendered as an SVG]

The idea was to create something like what Codex or Claude Code eventually became, but living inside an editor I had built, with the added ability to use any model you want, local or otherwise. I have actually been working on a Vim plugin with much the same ability, and Hako will have it soon. And I'm not glossing over the "into the workflow" part, either. I started having Claude scaffold out sections I planned to work on, brainstorm paths that diverged from code I'd already written, and so on. Before I knew it, I had something functional, useful, and reached for daily. I used Hako to write code through a large portion of my computer science degree, including the courses on the very systems concepts it was built on.

This isn't a flex. There is a point here. When someone with Karpathy's platform is out there saying he's fully giving in to vibe coding, and forgets that C even exists, I have a fairly concrete frame of reference for what that actually looks like, and what is actually lost when you teach this posture to people who haven't built anything yet.

What Vibe Coding Is(n't)

Karpathy's original framing was fairly honest: you fully embrace the LLM, you stop reading the code, you just describe what you want, and accept what you get. What gets left out of the retelling is that he explicitly framed it as something for "throwaway weekend projects." The guardrail was right there in the original post. Somewhere between his tweet and the posture adopted by half the dev influencer economy, that guardrail quietly vanished.

The technique isn't the problem. For certain things, it works. If you're an experienced engineer spinning up a prototype at 2 a.m. or a researcher who needs a quick data pipeline and doesn't particularly care about internals, vibe coding is genuinely useful.

The problem is the audience it's being sold to, or rather, the problems it solves for an inexperienced developer, and the problems it does not.

When you're a senior engineer telling people to go ahead and vibe it out, you're skipping over the decades of intuition you built that actually allow you to do that: intuition about when to read the code, when the LLM is confidently wrong, and when an abstraction is leaking in a way that will haunt you in three weeks. The junior doesn't have that. You're handing them a methodology that depends entirely on that knowledge while actively telling them they don't need it.

The Debugging Problem People Aren't Talking Enough About

Here's the thing about code you didn't write and don't understand: when it breaks, you're helpless in a very specific way. No matter how far a handful of markdown prompts got you, there is a wall. I promise.

It's not that you can't fix it. It's that you can't locate the problem. You quite literally don't have a mental model of the system, so you can't even reason about where the failure might be. So you just start prompting the LLM again and hoping. Sometimes that works. Often it doesn't, especially when the bug is subtle, stateful, or involves an interaction between components that the LLM generated independently and never had to reason about holistically.

I know this because I've been the person on the other side of that, debugging enterprise systems where I didn't own the original code, where nobody had written down why something worked the way it did, where institutional knowledge had walked out the door. It's miserable. It's slow. It's the kind of technical debt that doesn't show up for months and then eats your entire sprint.

[Image: AI code statistics]

Vibe coding at scale creates that environment by design, then hands it to the next developer like, "Here you go, buddy." The industry is already feeling it. By September 2025, Fast Company was reporting that the "vibe coding hangover" had arrived, with senior software engineers citing "development hell" when maintaining AI-generated codebases. A December 17, 2025 CodeRabbit analysis of 470 open-source GitHub pull requests found that AI co-authored code contained roughly 1.7 times as many issues overall as human-written code, with security vulnerabilities rising 1.5x and cross-site scripting flaws appearing 2.74 times as often. And perhaps the most damning data point for the "but I'm shipping faster" argument: a July 2025 randomized controlled trial by METR found that experienced open-source developers using AI tools took 19% longer to complete tasks, despite believing they were 20% faster. The subjective feeling of velocity was masking a measurable slowdown.

In January 2026, a pre-print titled "Vibe Coding Kills Open Source" argued that the methodology is systematically degrading the open-source ecosystem by breaking the engagement loop between users and maintainers. Simon Willison, co-creator of Django, has warned we're "due a Challenger disaster" in AI coding practices, invoking the Normalization of Deviance to describe how the industry is running coding agents with near-root permissions and getting away with it until they don't.

The only way this environment wouldn't exist is if we could trust the output implicitly, and that isn't feasible in any near future, given the critical nature of some of the environments this code is now reaching.

"But I Don't Actually Code That Way"

Karpathy catches a lot of heat in this article, so I should be frank: I greatly admire the man and what he has done to build a practical understanding of how LLMs think, create, and progress. The same goes for Pieter Levels. Pieter has made a cottage industry out of positioning himself as the indie hacker who just vibes his way to serious monthly revenue, often favoring quantity over quality. Some of what he's built is legitimately impressive. But watch him actually work, or read his older writing, and you'll see the guy has real technical depth. He knows PHP. He understands databases. He's been building on the web for over a decade. The vibe coding is layered on top of a foundation that isn't visible in the tweet.

The audience watching him doesn't see the foundation. They see a guy who says, "I just asked Claude to do it," and conclude that's the whole story. Then they build something, it breaks in production, and they have no idea what to do.

This isn't Pieter's fault exactly. It's a real problem with how vibe coding gets communicated by people who've built real skills before LLMs existed and are now performing, "I don't even need those skills anymore."

They do need them. They just don't notice, because those skills are internalized.

What's Actually Being Created?

The optimistic take on vibe coding for juniors is that it lowers the barrier to entry. More people building things is good. Abstraction has always been the direction software moves.

Sure. All of that is true.

But consider: there's a difference between abstracting over complexity you understand and hiding complexity you've never seen. High-level languages didn't make C knowledge useless; they made it foundational. Engineers who understand memory, systems, and what's actually happening under the abstraction are the ones who can work at every level of the stack, debug anything, and build things that actually hold up.

[Image: core understandings]

What vibe coding-as-a-pedagogy produces isn't that. It produces developers who can prompt their way to a demo but can't explain what their application does or why it sometimes doesn't. That's not a foundation. That's a ceiling.

There's a newer wrinkle here worth flagging: slopsquatting. LLMs hallucinate package names with some regularity, confidently suggesting pip install flask-utils-pro or npm install react-auth-helper for packages that don't exist. Attackers figured this out and started registering the most commonly hallucinated names and filling them with malware. The "Shai-Hulud" supply chain attack in 2025 compromised over 40 npm packages this way. If you're accepting AI-generated code without reading it, you're not just inheriting technical debt, you might be installing a backdoor because an LLM made up a dependency.
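One low-tech defense against slopsquatting is to refuse to install any name an LLM suggests until it appears in a human-reviewed allowlist or lockfile. Here is a minimal sketch of that idea; the function name and every package name in it are hypothetical illustrations, not anything from a real incident:

```python
# Sketch: gate LLM-suggested dependencies behind a human-reviewed allowlist.
# All package names below are hypothetical examples.

def find_unvetted(requested, allowlist):
    """Return requested package names missing from the reviewed allowlist."""
    allowed = {name.lower() for name in allowlist}
    return [pkg for pkg in requested if pkg.lower() not in allowed]

# Packages an LLM suggested, one of which may be hallucinated:
suggested = ["flask", "requests", "flask-utils-pro"]

# Names a human has actually reviewed and pinned (e.g. from a lockfile):
reviewed = ["flask", "requests", "numpy"]

for pkg in find_unvetted(suggested, reviewed):
    print(f"refusing to install unvetted package: {pkg}")
```

In practice the allowlist would come from a pinned lockfile (requirements.txt with hashes, package-lock.json, etc.), so a hallucinated name fails loudly at review time instead of silently pulling whatever an attacker registered under it.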

The industry will figure this out eventually, just as it figured out that knowing jQuery doesn't mean you know JavaScript. The people who will be fine are those who understood the fundamentals before abstraction arrived. The people who learned only the abstraction will be exposed the moment something breaks in a way the LLM can't paper over.

What's Actually Worth Telling New Developers in 2026?

Use AI tools. Use them aggressively. They're legitimately useful, and pretending otherwise is nostalgic gatekeeping.

But also: build something you actually understand. Write in C. Implement the data structure. Read the error message before you paste it into Claude. When the LLM gives you code, read it. Try to explain to yourself what it does and why. If you can't, that's a gap. Fill it before you move on.

For the record, if you want to see what thoughtful use of these tools actually looks like, Karpathy himself has published two artifacts worth reading. His CLAUDE.md is a set of behavioral guidelines for coding agents that opens with "Don't assume. Don't hide confusion. Surface tradeoffs." His llm-wiki gist is a pattern for using LLMs to build compounding personal knowledge bases. Both are genuinely useful. Both are also, notably, abstract and open-ended: Karpathy explicitly says of the wiki gist that "this document is intentionally abstract. It describes the idea, not a specific implementation." The reason he can write like that is the same reason the writing works. He has the expertise to leave the details to the reader, because he trusts his audience will bring the foundation needed to fill them in. That's a feature of material written by someone who built the thing he's describing. It's not the same document a beginner would get value from cold, and it's worth noticing the difference.

The vibe coding crowd will tell you that reading code, understanding fundamentals, actually doing the work is inefficient. Maybe it is, in the short term. The hours I spent segfaulting through Hako bought me the ability to look at a diff and know something's wrong before I run it. They bought me the ability to sit in an interview and explain my architecture without notes. They bought me the ability to debug a system I've never seen before. That's not nostalgia. That's infrastructure.

Vibe coding will always be there. Fundamentals won't teach themselves.

Author

Zachary Blauser is an engineer in Florida, US. He primarily writes C, Python, and Rust, and is currently building systems software while finishing an MS in Computing Systems.

zblauser on GitHub

References

Karpathy, A. (2026, April 4). llm-wiki [Gist]. GitHub. https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f

Karpathy, A. (2026). CLAUDE.md: Behavioral guidelines to reduce common LLM coding mistakes [via forrestchang/andrej-karpathy-skills]. GitHub. https://github.com/forrestchang/andrej-karpathy-skills/blob/main/CLAUDE.md

Karpathy, A. (2025, February 2). "There's a new kind of coding I call 'vibe coding'..." [Post]. X. https://x.com/karpathy/status/1886192184808149383

snaptoken. (n.d.). Build Your Own Text Editor (based on antirez's kilo). https://viewsourcecode.org/snaptoken/kilo/

Levels, P. [@levelsio]. Pieter Levels on X. https://x.com/levelsio

Fast Company. (2025, September 8). The vibe coding hangover is upon us. https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us

CodeRabbit. (2025, December 17). State of AI vs Human Code Generation Report. https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report

METR. (2025, July). Measuring the impact of early-2025 AI on experienced open-source developer productivity. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Koren, M., Békés, G., Hinz, J., & Lohmann, A. (2026, January 21). Vibe Coding Kills Open Source [Pre-print]. arXiv:2601.15494. https://arxiv.org/abs/2601.15494

Willison, S. (2026, January 8). LLM predictions for 2026, shared with Oxide and Friends. https://simonwillison.net/2026/Jan/8/llm-predictions-for-2026/

The New Stack. (2026, January 20). Vibe coding could cause catastrophic 'explosions' in 2026. https://thenewstack.io/vibe-coding-could-cause-catastrophic-explosions-in-2026/
