I quit AI for 30 days and found something I didn’t expect: my brain still had opinions.
I didn’t plan to quit AI for 30 days. It just sort of… happened after I made a ranty video about why AI coding feels wrong lately. You know the type: the “Copilot just hallucinated an entire database schema again” vibe. At the end of that video, I said I’d take a month-long break from AI tools. Viewers dared me to actually do it. My pride said “yeah, easy.” My brain, on day one, immediately disagreed.
Because for the past couple years, AI has slipped into my workflow the way energy drinks slip into a LAN party: quietly at first, then suddenly you feel like you can’t function without it. Autocomplete becomes a reflex. Explaining bugs to ChatGPT becomes emotional therapy. And when you disable all of it, you start reaching for the AI keybind like a phantom limb.
So I spent a month writing apps, debugging weird Linux machines, reverse-engineering a pizza GraphQL API (yes, really), building a real-time game for a conference booth, and touching raw documentation like it was 2013 again. No copilots. No agents. No “explain this error.” Just me, browser devtools, and the creeping suspicion that my brain had forgotten how to think in full sentences.
TL;DR:
Quitting AI didn’t make me faster. It made me honest. I relearned how to choose tech like a senior dev and how to read docs without crying, and I saw firsthand how dangerous AI-generated search results are becoming. And somewhere in the middle of all that… I remembered I actually enjoy making things.
The cravings and the pizza GraphQL rabbit hole
On day one of this “no AI” experiment, I jumped straight into a task that absolutely should’ve come with a warning label: reverse-engineering a pizza ordering flow. Yes, a real website. Yes, for an actual challenge between friends. And yes, the irony of doing something so repetitive without AI was not lost on me.
The workflow was simple in theory and spiritually exhausting in practice:
open devtools → place an order → inspect every request → copy the payload → decipher the GraphQL query → rebuild the whole thing in TypeScript → repeat until your brain soft-locks.
It was the perfect storm for AI cravings. My head kept whispering, “Dude… just let GPT generate the next function.” And every time I resisted, it felt like I was trying to stop myself from alt-tabbing into a game mid-Zoom call.
The funny thing is, the repetition was the lesson. Somewhere around the third or fourth endpoint (functions like getRestaurants, addToCart, createPaymentIntent, confirmOrder) I realized how much my brain had silently outsourced over the past year. That tiny voice saying “AI would do this faster” wasn’t wrong… but it also missed the point. I was learning the pattern. I was building the mental model that AI normally guesses at.
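For the curious, the shape I kept rebuilding looks roughly like this. A minimal sketch, not the real pizza API: the function name matches the endpoints above, but the URL, query fields, and types are placeholders I made up.

```ts
// The function name comes from the endpoints above; the URL, query fields,
// and types are placeholders, not the real pizza API.
type CartLine = { productId: string; quantity: number };

async function addToCart(cartId: string, line: CartLine) {
  // Every call is the same shape: POST one JSON body with `query` + `variables`.
  const query = `
    mutation AddToCart($cartId: ID!, $line: CartLineInput!) {
      addToCart(cartId: $cartId, line: $line) {
        id
        totalPrice
      }
    }
  `;

  const res = await fetch("https://pizza.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { cartId, line } }),
  });

  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors[0].message);
  return data.addToCart;
}
```

Every endpoint was a variation on that one shape, which is exactly the kind of thing you only notice after typing it out a few times.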
And once that clicked, everything made more sense.
I could give AI the JSON body, the GraphQL query, and a short example, and it would nail the pattern.
But if I asked too early, it would just invent structure, mix fields, or hallucinate a schema like a sleep-deprived intern.
Takeaway:
Don’t ask AI for the recipe before you’ve at least chopped two onions yourself.
Pattern first → AI second.
That order matters way more than we admit.
Rediscovering docs like it’s the pre-AI era
After the pizza API marathon, I switched to something “simpler”: writing a tiny Node.js CLI tool to download background images from Pexels for a Slidev talk I was giving in Japan.
It should’ve been easy. Tiny script. Light logic. Senior-dev territory.
Instead, I got humbled immediately.
I could not remember how to read command-line arguments in Node.
Like… at all.
My fingers hovered on the keyboard waiting for Copilot to auto-complete the answer, and then I remembered: right, I turned all of that off. So I sat there typing “node read cli args” into a search bar like someone booting into safe mode. It felt weird. Like forgetting a childhood friend’s name.
The old instinct kicked in to copy whatever Stack Overflow answer popped up. But here’s the weird twist:
Stack Overflow now feels AI-generated. Even when it isn’t. Something about the phrasing, the formatting, the uncanny “here’s a generic snippet you didn’t ask for” tone. It all blends into the same grey paste.
So I did the thing we all pretend we do more than we actually do: I opened the official docs.
DevDocs became my best friend: offline Node docs with actual examples, actual descriptions, and zero hallucinated flags. I looked up process.argv, fs.writeFile, how to handle a stream, how to save a file, and how to shove it into my clipboard with a little utility. One step at a time, like a dev manually grinding XP.
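Here’s roughly what one of those doc-driven steps looked like. This is a sketch, not my actual Pexels script (no API key handling, no search options), just the process.argv-plus-fs pattern, assuming Node 18+ and an ESM setup:

```ts
// fetch-image.ts — usage: node fetch-image.js <imageUrl> <outputFile>
import { writeFile } from "node:fs/promises";

// process.argv is [nodeBinary, scriptPath, ...userArgs], so user args start at index 2.
const [url, outFile] = process.argv.slice(2);

if (!url || !outFile) {
  console.error("usage: node fetch-image.js <imageUrl> <outputFile>");
  process.exit(1);
}

// Node 18+ ships a global fetch, so no dependencies needed.
const res = await fetch(url);
if (!res.ok) {
  console.error(`download failed: ${res.status} ${res.statusText}`);
  process.exit(1);
}

// Buffer the body and write it out; for huge files you'd pipe a stream
// instead, but for slide backgrounds this is fine.
await writeFile(outFile, Buffer.from(await res.arrayBuffer()));
console.log(`saved ${outFile}`);
```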
The more I used the docs, the more I realized how much I’d let AI be my first responder. Every time I figured something out myself, the answer actually stuck. The logic behind it stuck. I wasn’t pasting; I was understanding again.
Docs felt slow at first. Then they felt clean. Then they felt like the safest thing on the internet.
Takeaway:
AI is great once you already know what you’re doing.
But if you can’t read your own platform’s docs, the AI isn’t helping, it’s babysitting.

Linux chaos and why search results are cooked now
The next thing I built had nothing to do with code and everything to do with self-inflicted suffering: turning an ancient 32-bit netbook into a distraction-free “writer deck.”
No GUI. No browser. No notifications. Just a pure TTY boot and an editor called Micro.
Basically the Elden Ring of writing environments.
This adorable little machine is from a timeline where Flash games still existed and YouTube had a star rating system. Which means modern Linux distros looked at it and said, “Bro… no.”
Most of them are 64-bit only now, so I had to hunt down something that still supported 32-bit. That led me into the deepest corners of the internet: the kind where the answers are either from 2014 or written by someone using AI with the confidence of a sysadmin who hasn’t slept in three days.
And here’s where things got scary.
Multiple search results (Google, DuckDuckGo, even AI-assisted ones) gave me Linux commands that were just wrong. Not “slightly outdated” wrong, but “this will break your boot sequence” wrong. And every time I’d click a link, it led to some website that clearly generated 10,000 Linux articles a week using an LLM fed the exact same 12 Stack Overflow posts.
It’s an ouroboros of AI eating its own tail:
AI writes a tutorial → search engines index it → AI search rewrites that tutorial → humans copy it → no one checks man pages anymore.
The only reliable info I found?
Older articles from before the AI content gold rush… and actual man pages.
If the pre-2021 web ever disappears, we’re toast.
Once I finally got Debian Bookworm running, added custom fonts with fbterm, configured auto-save, wired up a background Git-push service, and tested sleep/wake, the system actually felt magical. Pure, focused writing. Zero distractions. No browser black hole.
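The Git-push service itself is nothing fancy: commit and push whenever something changed, retry quietly when offline. Here’s a sketch of the idea; the repo path, interval, and commit message are placeholders, and a shell script under systemd would do the same job:

```ts
// autopush.ts — background sync loop for the writer deck. The repo path,
// interval, and commit message are invented placeholders.
import { execSync } from "node:child_process";

const REPO = "/home/writer/notes";  // hypothetical notes directory
const INTERVAL_MS = 5 * 60 * 1000;  // every five minutes

function syncOnce(): void {
  try {
    // Only commit when something actually changed.
    const dirty = execSync("git status --porcelain", { cwd: REPO }).toString();
    if (dirty.trim() === "") return;

    execSync("git add -A", { cwd: REPO });
    execSync(`git commit -m "autosave ${new Date().toISOString()}"`, { cwd: REPO });
    execSync("git push", { cwd: REPO });
  } catch (err) {
    // Being offline is normal on a netbook; just retry on the next tick.
    console.error("sync failed, will retry:", (err as Error).message);
  }
}

syncOnce();
setInterval(syncOnce, INTERVAL_MS);
```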
Takeaway:
AI can summarize, explain, and accelerate, but it can’t validate truth.
Sometimes the only fix is touching the real docs, the real man pages, the real system.
The real-time game and choosing tech like a human
Midway through the no-AI month, I had to build something a bit more chaotic: a real-time “Spot the Syntax Error” game for the Sentry + Syntax booth at GitHub Universe. Think: multiple iPads, a countdown timer, backend validation, a leaderboard updating in real time, and a bunch of devs trying to prove they can spot a missing semicolon faster than their friends.
It was one of those projects where the architecture matters more than the actual code. Which meant I had to pick a stack before writing anything serious. And for the first time in a while, I had to pick it without AI whispering “use X, everyone on Reddit says it’s cracked.”
My brain started jumping around like a tabby in a cardboard maze:
Supabase? Real-time built in.
Convex? Trendy, reactive, shiny.
Firebase? Boring but reliable.
Mongo + websockets? Full control, but more plumbing.
Normally I’d feed all of that into an AI prompt and let it spit out trade-offs, benchmarks, maybe even a prototype. But this time it was just me, a notepad, and a couple hours of tiny manual tech spikes.
And honestly? It felt good.
Building a miniature version of the system in each stack made the real trade-offs painfully obvious. Firebase Realtime was the least glamorous option, but it was the only one that didn’t fight me on latency, setup friction, or weird edge cases. So I picked it, wired up backend-only answer validation, synced leaderboard updates, and shipped the whole thing without once asking a bot what to do.
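To give a taste of why Firebase won: the live leaderboard is essentially one listener. This is a sketch, not the actual booth code; the database path, the score shape, and the top-ten cutoff are assumptions.

```ts
import { initializeApp } from "firebase/app";
import {
  getDatabase,
  ref,
  query,
  orderByChild,
  limitToLast,
  onValue,
} from "firebase/database";

// Config comes from your Firebase project settings; databaseURL is the
// minimum the Realtime Database needs.
const app = initializeApp({ databaseURL: "https://your-project.firebaseio.com" });
const db = getDatabase(app);

// Stand-in for the booth UI.
function renderLeaderboard(rows: { name: string; score: number }[]): void {
  console.table(rows);
}

// Top 10 scores, pushed to every connected client on every change.
const topScores = query(ref(db, "leaderboard"), orderByChild("score"), limitToLast(10));

onValue(topScores, (snapshot) => {
  const rows: { name: string; score: number }[] = [];
  snapshot.forEach((child) => {
    rows.push(child.val());
  });
  // limitToLast returns ascending order, so reverse it for display.
  renderLeaderboard(rows.reverse());
});
```

The design choice that mattered: the iPads only submit answers and read the leaderboard, while the scoring itself happens on the backend, so nobody can cheat from devtools.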
The big realization: architecture is where you level up. If you outsource that thinking to an AI, you outsource your ability to reason about systems, which is basically what senior engineering is.
Takeaway:
AI agents can explore the dungeon for you, but you should still choose which door to open.
Creativity in an AI-slop world
Somewhere in the middle of all the debugging and doc-reading, I slipped into a completely different skill tree: taking photos again. I was in Japan for a talk, wandering around with a camera, chasing interesting light instead of interesting stack traces. And the weird part is… it felt just as satisfying as solving a bug.
I’d post a shot on X or Bluesky (a seagull staring straight at the lens, neon reflections on wet pavement, a crooked alleyway) and it reminded me that creativity hits way harder when you make something that isn’t instantly reproducible by a model. In a timeline filled with infinite AI-generated sunsets, the slightly blurry one you actually took means more.
Coding gives me structure. Photography gives me instincts. Both make me a better builder than any autocomplete ever will.
Takeaway:
If everything online is turning into AI paste, be the person who still makes real things.

What I’m keeping (and ditching) about AI going forward
After 30 days without AI, I didn’t come out of it screaming “AI bad!” or planning to live in a cabin writing C programs by candlelight. What actually happened was way more boring and way more useful: I rebuilt my internal compass.
I realized AI is amazing at accelerating things after you understand the thing. It’s basically a power drill: incredible once you’ve measured the wall, terrible if you let it design the house. When I wrote the first few functions myself, AI would’ve been great at filling in the rest. But if I had involved it too early, I wouldn’t have noticed the weird patterns, the subtle data shapes, or the logic errors hiding between the lines.
The same went for docs, Linux installs, and picking a tech stack. Those decisions shape everything that comes after. If you outsource them, you outsource your growth. Seniority isn’t about having the right answer; it’s about knowing why it’s the right answer.
So yeah, I’m using AI again. But it’s staying in its lane. I’ll use it to scaffold, refactor, summarize, generate test cases, and handle the boring parts.
What I won’t do is let it think for me.
If you feel like AI has quietly taken over your brain, try coding without it for a day or even just an afternoon. You’ll be surprised how quickly the gears start turning again.
And honestly? It feels good to know they still do.
Helpful resources
- GraphQL official docs: https://graphql.org/learn/
- Node.js documentation: https://nodejs.org/api/
- DevDocs (offline docs): https://devdocs.io/
- Debian Bookworm: https://www.debian.org/releases/bookworm/
- Firebase Realtime Database: https://firebase.google.com/docs/database