I need to tell you something I've been afraid to admit.
I don't write most of my code anymore.
An LLM does. Or, more accurately, many do.
And the work, at times, has never been better.
This weekend, I built a playable proof-of-concept web game in about ten hours.
Not a toy. Not a "look, it renders a box" prototype. A full vertical slice: multi-panel UI, timed choice system, email and phone pipelines, a script runner, a music state machine, a narrative spine—all working together, all clean, all stable.
I didn't type a single line of code.
And I keep waiting for that to feel wrong.
It doesn't.
If you're reading this and you've felt that same guilt—that nagging sense that maybe you're cheating, that maybe this doesn't count as "real" development, that maybe you should be ashamed of how fast you're building now—this is for you.
Because what I'm about to describe isn't a confession.
It's permission.
What Actually Happened
People keep trying to describe this era in terms of productivity.
"AI helps you code faster."
"AI accelerates development."
"AI boosts output."
That framing misses what's actually happening.
What changed this weekend wasn't how fast I typed.
It was that typing stopped being the bottleneck altogether.
For the first time in my career, the slow, physical act of turning ideas into syntax disappeared. Not because the system was designing anything for me—it wasn't—but because it was serializing my thinking directly.
I held the architecture.
The constraints.
The mechanics.
The emotional beats.
The model just wrote them down.
At a pace the human body was never built for.
LLMs serialize human intention.
That's it. That's the whole thing.
They take structured internal cognition—architecture, reasoning, emotional framing, narrative beats, system constraints—and convert it into external form faster than a human body can type.
Nothing mystical.
Nothing emergent.
Nothing like "intelligence" in the human sense.
Just serialization.
You form a mental model →
You describe it at the right level of abstraction →
The LLM instantiates it into code, prose, diagrams, or docs.
The bottleneck used to be translation—your hands, your typing speed, the mechanical act of expressing what you already knew.
Now the bottleneck is gone.
The LLM is the output layer of your cognition.
You don't work faster because the model is smart.
You work faster because you escaped the I/O limitations of flesh.
You're still doing the cognition.
The model is doing the serialization.
And if you've been working this way and feeling guilty about it—stop.
This is the actual work of software development.
The thinking. The architecture. The design. The constraints. The intent.
The typing was always the mechanical part.
We just didn't have a choice before.
What You’re Actually Giving Up (And Why It’s Worth It)
Here’s what I gave up by working this way:
The ability to say, “I wrote every line myself.”
Here’s what I got:
The ability to build faster than I can type.
Some people will say that’s not a fair trade.
Those people have never felt the difference between working at typing speed and working at thinking speed.
But before you decide how you feel about that trade, we need to be precise about what this workflow actually looks like — because the guilt people feel about this comes from imagining I’m doing something I’m not.
I don’t tell the model “build me a game” and walk away.
I don’t ship code I haven’t approved.
I don’t push things I’m not happy with.
Because that would be reckless.
And unethical.
And ineffective.
What I don’t do anymore is treat typing as the work.
What I do is watch the shape.
- Does this reducer do one job?
- Does this component have a single source of truth?
- Is the state machine coherent?
- Is the audio engine isolated?
- Are scripts declarative?
- Do the boundaries make sense?
I guide direction.
I interrupt when the form is wrong.
I make architectural decisions mid-stream.
It’s not “hands off.”
It’s eyes on the blueprint, not the bricks.
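To make "watching the shape" concrete, here is the kind of structure those checks are looking for. This is a minimal, hypothetical TypeScript sketch, not the game's actual code; the names (ScriptStep, PhonePanelState, phonePanelReducer) are invented for illustration.

```typescript
// Hypothetical sketch, not the game's actual code. Names are invented for illustration.

// "Are scripts declarative?" — a script step is data, not imperative wiring.
type ScriptStep =
  | { kind: "email"; from: string; subject: string; body: string }
  | { kind: "sms"; from: string; text: string }
  | { kind: "choice"; prompt: string; options: string[]; timeoutMs: number };

const intro: ScriptStep[] = [
  { kind: "email", from: "hr@example.com", subject: "Welcome", body: "..." },
  { kind: "choice", prompt: "Reply now?", options: ["Yes", "Later"], timeoutMs: 15_000 },
];

// "Does this reducer do one job?" — it only manages the phone panel's thread.
interface PhonePanelState {
  thread: { from: string; text: string }[];
  unread: number;
}

type PhonePanelAction =
  | { type: "SMS_RECEIVED"; from: string; text: string }
  | { type: "THREAD_OPENED" };

function phonePanelReducer(
  state: PhonePanelState,
  action: PhonePanelAction
): PhonePanelState {
  switch (action.type) {
    case "SMS_RECEIVED":
      // Append the message; no audio, no timers, no other panels touched.
      return {
        thread: [...state.thread, { from: action.from, text: action.text }],
        unread: state.unread + 1,
      };
    case "THREAD_OPENED":
      return { ...state, unread: 0 };
    default:
      return state;
  }
}
```

Whoever typed it, the review question is the same: can I glance at the types and answer the checklist above in seconds? If yes, the shape is right.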
Here’s an actual example from the conversation that built this game. No editing. No cleanup:
“So what story do we actually have narratively so far?
Let’s lay out what we do have and what’s missing.”
That’s it. That’s the “prompt.”
There’s nothing clever about it.
No instruction hierarchy.
No role assignment.
No formatting tricks.
It’s exactly what I would ask myself — or a rubber duck — if I were thinking through the system alone.
And yet, the response that followed was a structured breakdown of narrative beats, gaps, dependencies, and next steps — exactly what I needed at that moment. Exactly what I expected.
Why?
Because that sentence was lossless compression of shared context.
By the time I asked it, the model already knew:
- the project
- the genre
- the emotional spine
- the constraints
- the MVP scope
- what “narrative” meant in this context
- what “missing” implied relative to what we’d already built
That single sentence wasn’t a request for creativity.
It was a request to rehydrate shared context and reflect it back.
That’s not prompting.
That’s collaboration inside a compressed, high-bandwidth conversation.
And when I open the code later, I’m not lost.
Every file exists because I asked for it to exist.
Every abstraction matches a shape I already had in my head.
Every system boundary reflects a decision I made upstream.
It’s unfamiliar in the same way a freshly cleaned room is unfamiliar —
someone else did the sweeping,
but the layout is still yours.
I inherit the same responsibilities as any lead developer on a team:
I trace.
I understand.
I correct.
I refactor.
The difference is efficiency.
Instead of spending hours laying scaffolding,
I spend minutes adjusting intent.
The maintenance cost is normal.
The creation cost collapses.
That’s not cheating.
That’s using the right tool for the job.
Why This Feels Like Cheating (And Why That’s Wrong)
If you feel guilty working this way, I know where that feeling comes from.
We were trained to believe that typing code is the work.
That if you didn’t manually place every semicolon, you didn’t earn the result.
That real developers suffer through boilerplate, because suffering is how you prove you’re serious.
That’s not engineering ethics.
That’s trauma from a world where typing was the only option.
No one says a carpenter is cheating because they use a table saw instead of a hand saw.
No one says an architect is cheating because they use CAD instead of drafting by hand.
No one says a writer is cheating because they use a word processor instead of a typewriter.
But somehow, we’re supposed to believe that using a machine to serialize architectural thinking is lazy, or inauthentic, or “not real development.”
That’s nonsense.
You’re doing the hardest part:
- the thinking
- the architecture
- the design
- the constraints
- the intent
The model is doing the part that was always mechanical.
Look at what the model cannot do:
- It cannot choose the architecture.
- It cannot decide what makes a system elegant.
- It cannot name the right abstraction.
- It cannot feel when a design is wrong.
- It cannot care about user experience.
- It cannot hold ethical weight.
- It cannot generate meaning.
But it can generate ten thousand lines of scaffolding the moment you know what the shape should be.
The developers who pretend this isn’t happening are the ones who will be left behind.
Not because they lack skill —
but because they refuse to let go of a bottleneck that no longer matters.
You already found the new workflow.
You already know it works.
You’re just waiting for permission to stop apologizing.
Here it is: stop apologizing.
You’re not cheating.
You’re architecting at the speed your mind was always capable of —
now that your hands aren’t in the way.
The Evidence
Before you write this off as theory, you should see the thing I'm talking about.
Play the prototype here: 👉 https://oneside.dev (source here)
It's not a finished game. It's a slice: a mock email client, a phone SMS interface, an "LLM" assistant named Argus, a timed-choice system, a narrative pipeline, and an audio engine—all stitched together into a small, playable moment.
If you clicked around for even thirty seconds, you already know the important part: it works.
It looks intentional. The UI is coherent. The interactions feel real. The game loop is there.
Is it perfect? Is it even engaging yet? No. That's my job as the human. But the grunt work of getting to this point? All AI. And that's okay.
The only reason that sounds impossible is that we were trained to measure "building software" by the number of hours spent typing.
But when typing stops being the bottleneck, the math changes.
Once the model took over the part where my hands would normally throttle my thoughts, the actual work became:
- defining panels
- designing state transitions
- naming events
- shaping emotional beats
- describing mechanics
- deciding what Argus does
- deciding what the player feels
- thinking in systems, not syntax
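A couple of items on that list sound abstract, so here is roughly the resolution I was describing things at: the music state machine expressed as a transition table. Again, a hypothetical sketch; MusicState, GameEvent, and the specific transitions are invented for illustration, not lifted from the repo.

```typescript
// Hypothetical sketch of a music state machine described as data.
// States, events, and transitions are illustrative only.
type MusicState = "ambient" | "tension" | "deadline" | "aftermath";
type GameEvent = "TIMER_STARTED" | "CHOICE_LOCKED" | "SCENE_ENDED" | "EMAIL_OPENED";

// The design work is naming these states and events and filling in the table.
const musicTransitions: Record<MusicState, Partial<Record<GameEvent, MusicState>>> = {
  ambient:   { TIMER_STARTED: "tension" },
  tension:   { CHOICE_LOCKED: "deadline", SCENE_ENDED: "ambient" },
  deadline:  { SCENE_ENDED: "aftermath" },
  aftermath: { EMAIL_OPENED: "ambient" },
};

function nextMusicState(current: MusicState, event: GameEvent): MusicState {
  // Stay put if the event doesn't apply to the current state.
  return musicTransitions[current][event] ?? current;
}
```

Describing the system at that resolution was the work. Turning it into the surrounding audio code was the part the model serialized.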
The prototype isn't impressive because it's big.
It's impressive because it's real—built fast, but without shortcuts, hacks, or duct tape.
Clean. Intentional. Expandable.
I'm also including the full transcript of the conversation that produced much of the work, including this article.
Not because it's entertaining—though some parts are—but because it shows you the actual workflow.
When you scroll through it, you'll notice: it doesn't look like "prompting."
It looks like a conversation between two people who understand the same system.
Every message I wrote is short, direct, contextual.
I carry the architecture; the model expands it.
I hold the constraints; the model instantiates them.
I redirect when the shape is off; the model adapts.
You can literally watch the game engine form in the space between intentions.
The artifact is the evidence.
The transcript is the proof.
And both exist to show you what you already suspected:
This way of working is real.
It's legitimate.
It produces quality work.
You're not fooling yourself.
What This Means For You
If you're already working this way in secret, here's what you need to know:
The bottleneck has moved.
It used to be typing, syntax, boilerplate, wiring, refactors, mechanical friction.
Now it's clarity of intent, conceptual precision, architectural foresight, the ability to define constraints, the ability to describe systems at the right resolution.
If you can think cleanly, AI makes you dangerous.
If you can't, AI makes you confused.
And here's the uncomfortable part: if you learned to code by writing syntax before you learned to think in systems, this workflow will feel broken.
Not because the tool is bad, but because you're missing the prerequisite skill.
The model can't teach you architecture.
It can only serialize the architecture you already have.
But if you do have that architecture—if you can hold a system in your head, name its constraints, feel when a boundary is wrong—then you already know what I'm describing.
You've already felt that moment when the model predicted exactly what you were thinking next.
When the code appeared before you finished forming the sentence.
When you realized you weren't prompting anymore—you were just thinking, and it was keeping up.
You thought that was just you.
Or worse—you thought it meant you weren't doing real work anymore.
But here's the truth you already know:
You didn't become less of a developer.
You became the thing the model is predicting.
The architect.
The pattern.
The source of structure.
The model doesn't replace your thinking.
It externalizes it.
It gives your cognition an API.
And once you've felt it—once you've built something real at the speed of intention—it's very hard to go back.
The Prediction Machine
At some point over the weekend—somewhere between the third audio-state revision and the tenth architectural adjustment—I realized something:
I wasn't predicting what the model would output.
The model was predicting what I would think next.
Not in a mystical way. Not in an "AGI is here" way.
In a deeply practical, mechanical way:
I held the architecture.
I shaped the constraints.
I defined the beats.
I set the tone.
I imposed meaning.
And the model simply serialized each conceptual layer as soon as I described it.
I didn't have to prompt it.
I didn't have to coax it.
I didn't have to outsmart it.
I just had to stay upstream—hold the system clearly enough that the next move was inevitable.
And as soon as I formed the idea, the machine expressed it.
That's why the speed feels impossible.
Not because the model is intelligent, but because the human bottleneck finally moved.
We used to be limited by hands.
Then by typing.
Then by syntax.
Then by boilerplate.
Now we're limited by clarity of architecture, precision of thought, and quality of intention.
This isn't the rise of artificial intelligence.
It's the rise of augmented cognition.
LLMs don't make us smarter.
They remove the friction that kept our intelligence compressed inside our skulls.
For one weekend, I got to work at a different resolution—not because the model was thinking for me, but because it was keeping pace with me.
I got to think through a system with nothing slowing me down.
To architect in real time.
To describe a game, and watch it assemble around the outline of my intent.
To hold the whole thing in my head while something else wrote it to disk.
I didn't feel like I was using an AI.
I felt like I was thinking, and the system was keeping up.
You've already felt this, haven't you?
You already know what I'm describing.
You're just waiting for someone to tell you it's okay.
It's okay.
You're not cheating.
You're not lazy.
You're not a fraud.
You found a new way to work, and it's legitimate.
The developers who shame you for it are the ones who haven't learned to think upstream yet.
Let them catch up.
You've got work to do.