Three months ago, I said something to my terminal window: "Help me build a bilingual personal website that supports both Chinese and English, with physics engine animations."
Then I hit enter.
I'm not a frontend engineer. I've never properly written React, and Next.js routing configuration is completely foreign to me. But today, three months later, the website you're looking at—75 TypeScript files, 29 React components, a draggable tag wall built with the Matter.js physics engine, and an AI chat assistant connected to a large language model—was "talked" into existence.
This is called vibe coding. A year ago, most people hadn't even heard this term.
A Word, A Movement
On February 2, 2025, Andrej Karpathy posted a casual tweet on X:
"I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."
He called this new way of programming "vibe coding." You no longer write code line by line; instead, you use everyday language to tell the AI what you want, then watch it build things. When you encounter errors, you throw the error messages at it, and it usually fixes them. The code grows to a point where you can't even read through it all, and you simply don't bother.
Karpathy himself said it was just a "shower thought" that he posted casually. But it seemed to articulate something many people were already doing but didn't know what to call.
You may have noticed what happened next—Collins Dictionary selected "vibe coding" as the 2025 Word of the Year, and Google searches for the term surged by 6,700%. A casual tweet became the name of a movement.
Put simply, vibe coding boils down to one sentence: You don't need to understand code; you just need to be able to clearly articulate what you want.
From Completion to Conversation: Three Leaps in AI Programming
AI helping people write code isn't new. But the past few years have seen three completely different leaps. Understanding the distinctions between them is key to grasping what's actually new about vibe coding.
The first step was code completion. When GitHub launched Copilot in 2021, programmers got excited—it could guess what you were going to write next based on your half-finished code. Much like the predictive text on your phone, it made typing faster, but you still had to know what you wanted to type. For non-programmers, Copilot offered no way in; you still had to be writing code in the first place.
The second step was code generation. After ChatGPT took off in 2023, you could describe a requirement in natural language and it would give you a block of code. This was a big step up from Copilot, but you still had to understand that code, know where to put it, and fix bugs yourself. It was like having an intern who works fast but isn't very reliable—helpful, but you have to watch them closely.
The third step is vibe coding. Starting from early 2025, tools like Claude Code and Cursor can read your entire project codebase, create files, modify files, run tests, fix bugs, and even find workarounds when they hit obstacles. You go from being the person writing code to the person making requests. In other words, you become the product manager, and the AI becomes the engineering team.
The difference here isn't one of degree—it's one of nature. Crossing from "AI helps you write code" to "you tell the AI what you want" is a threshold. As I discussed in my previous article, AI isn't the next Copilot; this is a category leap.
The Numbers Don't Lie
Data from Y Combinator's Winter 2025 batch surprised many people. YC CEO Garry Tan said that for 25% of startups in that cohort, 95% of their code was AI-generated.
YC is a top-tier global startup incubator; these are people seriously raising funding and building products.
Around the same time, Stack Overflow's developer survey showed that 84% of programmers use AI tools in their daily work. At LlamaCon, Microsoft CEO Satya Nadella said that 20% to 30% of the code in Microsoft's own repositories is AI-written.
Jensen Huang once said something that I think captures this most accurately:
"Everyone is a programmer. The new programming language is human language."
Two years ago, this sounded like a vision. Looking back now, it reads more like a plain description of what's already happening.
My Personal Experiment
Back to the website I mentioned at the beginning. Honestly, when I decided to start, I had little confidence.
The tech stack looks intimidating: Next.js 16, React 19, TypeScript strict mode, Tailwind CSS v4, Velite + MDX content system, Matter.js physics engine. But throughout the entire process, I didn't systematically learn any of these frameworks.
All I did was talk to Claude Code.
That particle animation on the homepage—the floating light points that follow your mouse when you refresh the page—is backed by 187 lines of Canvas rendering code, with device pixel ratio adaptation, dark mode switching, and detection of user "reduced motion" preferences. I know nothing about Canvas programming. But I could describe the visual effects I wanted to the AI, then iterate round after round.
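If you're curious what "device pixel ratio adaptation" actually means: the canvas buffer is sized at the CSS size times `devicePixelRatio`, so drawings stay crisp on high-density screens. A minimal sketch of that sizing math, with hypothetical names (not the actual code from my site):

```typescript
// Hypothetical sketch: compute the backing-store size for a crisp canvas.
// On a 2x "retina" display, a 400x300 CSS-pixel canvas needs an
// 800x600 pixel buffer, with the drawing context scaled to match.
function canvasBufferSize(cssWidth: number, cssHeight: number, dpr: number) {
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr),
    // pass this to ctx.scale(dpr, dpr) so drawing code keeps using CSS pixels
    scale: dpr,
  };
}
```

In the browser you would read `window.devicePixelRatio` for `dpr`, and check `matchMedia("(prefers-reduced-motion: reduce)")` before starting the animation loop at all.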
The draggable physics tag wall is even more interesting. Under the hood runs the Matter.js physics engine, where each tag has gravity, friction, and restitution coefficients; if they fall off screen, they bounce back automatically. 262 lines of code, not a single one typed by me. Yet every line exists because I said something like "if a tag falls down, it should bounce back."
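The bounce-back behavior is less magical than it sounds. Conceptually it's just a per-frame check; here's a hedged sketch with hypothetical names, not my site's actual code:

```typescript
// Hypothetical sketch: if a tag's physics body falls past the bottom
// of the container, move it back above the top so it drops in again.
interface TagBody {
  x: number;
  y: number;
  vy: number; // vertical velocity
}

function respawnIfFallen(
  body: TagBody,
  containerHeight: number,
  margin = 50
): TagBody {
  if (body.y > containerHeight + margin) {
    return { x: body.x, y: -margin, vy: 0 }; // re-enter from above, at rest
  }
  return body; // still on screen: leave it alone
}
```

With Matter.js you would run a check like this each tick and call `Matter.Body.setPosition` on any escaped body; the engine's gravity does the rest.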
Then there's the AI chat assistant—299 lines of complete chat interface code, connected to a large language model API, supporting streaming output, displaying in WeChat style for Chinese environments and switching to WhatsApp style for English environments.
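Streaming output sounds exotic, but the client side is mostly string handling: many LLM APIs stream Server-Sent-Events-style `data:` lines. A sketch of extracting the text deltas (my illustration with hypothetical names, not the site's actual code):

```typescript
// Hypothetical sketch: pull "data: ..." payloads out of a streamed chunk,
// skipping the "[DONE]" sentinel many LLM streaming APIs send at the end.
function extractDeltas(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: ") && line !== "data: [DONE]")
    .map((line) => line.slice("data: ".length));
}
```

Each delta gets appended to the visible message, which is why the reply appears word by word instead of all at once.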
The whole project took 3 months and 38 commits. If I had to learn these tech stacks from scratch and write it myself, conservative estimates put it at over six months. But the point isn't just speed—many of these things I simply wouldn't have attempted if starting from zero.
I'm not watching this transformation happen from the sidelines. I'm using it to build houses.
But—
If I only wrote about the good side up to this point, this article would become marketing fluff. Things aren't that simple.
In July 2025, METR published a rigorous controlled experiment: they had experienced open-source developers (averaging 5 years of experience, 1,500 commits) use AI tools for development tasks on their own projects. The result: these people were actually 19% slower when using AI. More intriguingly, the developers themselves felt that AI made them 20% faster—perception and reality were completely reversed.
Another data point from CodeRabbit's analysis at the end of 2025: code written with AI involvement had 1.7 times the proportion of serious issues compared to purely human-written code, and security vulnerabilities were 2.74 times higher.
These numbers are real; there's no point in avoiding them.
But I think they're actually describing two different things. The METR experiment measured "experts doing work they were already proficient at"—having a race car driver navigate every turn while explaining it to the passenger in the seat next to them would of course slow them down. But the point of vibe coding was never to make the race car driver faster; it was to let people who couldn't even get on the road before start driving.
As for security vulnerabilities, this shows that AI-written code does need human review before going into production. But consider this: without vibe coding, many of these projects wouldn't exist at all. The question to ask isn't "is AI-written code perfect," but rather "is it good enough to build things that couldn't be built before?" For personal projects, prototype validation, and internal tools, the answer is obvious.
These problems are real. But tools are iterating rapidly; shortcomings from six months ago may already be fixed today. The direction is right; the road is still being paved.
Code Is Just the Beginning
If you still think vibe coding is just "making websites without learning to program," you're underestimating it.
Following the logic from my previous article: When AI can convert capital directly into productivity, what exactly is that conversion mechanism in the middle? I believe the answer is code.
The digital world runs on code. Every app on your phone, every website you use, every online payment—behind all of them is code executing. Code is humanity's universal interface for controlling the digital world.
And vibe coding means AI has mastered this interface.
When AI can reliably write code, it can build software. When it can build software, it can automate almost any digital task—booking flights, managing schedules, analyzing reports, building websites, calling various APIs. These things are essentially all variations of "write a program and run it."
This is why the AI Agents I discussed in the previous article are so worth watching. For an agent to do things for you in the digital world, it needs to be able to operate the digital world. How? By writing code, calling APIs, reading and writing files. Vibe coding gives agents this capability. Or put another way: vibe coding is the interface between you and the AI agent—you state your intent in natural language, and the AI turns it into reality through code.
The previous article said "capital can bypass human labor and convert directly into productivity." Vibe coding is that bypass route.
Your First Step
If you've read this far, you might have two voices battling in your head. One says "this is so cool," the other says "but I don't know how to program."
The good news is that the second voice is precisely describing the problem vibe coding solves. You don't need to know how to program. You just need to be able to speak and type.
I personally use Claude Code, a terminal-based AI coding tool from Anthropic. Installation is a single command.
For macOS or Linux users, open your terminal:

```shell
curl -fsSL https://claude.ai/install.sh | bash
```

For Windows users, open PowerShell:

```powershell
irm https://claude.ai/install.ps1 | iex
```
After installation, open your terminal in any folder, type `claude`, and try saying: "Help me write a small tool to calculate BMI."
See what happens.
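If you're wondering what "happens" looks like, the answer will be something in this spirit (a sketch I wrote for illustration, not actual Claude Code output):

```typescript
// Illustrative sketch of the kind of small tool the AI might produce:
// BMI = weight in kilograms divided by height in meters, squared.
function bmi(weightKg: number, heightM: number): number {
  return weightKg / (heightM * heightM);
}

function bmiCategory(value: number): string {
  if (value < 18.5) return "underweight";
  if (value < 25) return "normal";
  if (value < 30) return "overweight";
  return "obese";
}

// Example: 70 kg at 1.75 m
const value = bmi(70, 1.75);
console.log(`BMI ${value.toFixed(1)} (${bmiCategory(value)})`);
```

The AI will typically also offer to add input prompts, error handling, or a little web interface if you ask.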
What you just did would have required a computer science degree five years ago. Three years ago, you would have had to dig through Stack Overflow for hours. One year ago, you would have had to copy-paste code snippets back and forth from ChatGPT.
Now you just said one sentence.
Tools will continue to get faster and smarter; this direction won't change. Looking back from five years in the future, 2026 might be remembered as the year ordinary people began "writing" software in natural language. Rather than sighing about it then, why not give it a try now?
Originally published at https://guanjiawei.ai/en/blog/vibe-coding