Phil Rentier Digital

Originally published at rentierdigital.xyz

AI Won't Steal Your Job. You Already Handed It Over.

Marrakech, last week. I'm looking for a specific shop in the medina. The map the riad gave me is in Arabic. I pull out my phone, open Gemini: "not available in your region." And I just stand there. Phone in hand, five seconds, six seconds. Like someone unplugged me.

TLDR: Everyone's panicking about AI stealing developer jobs. Wrong panic. Something slower is happening, in silence, on every front at once. Nobody talks about it because nobody sees it leaving. There's a muscle you've been outsourcing without noticing, and there's a way to get it back. The hard part is admitting which level you're actually at.

I end up turning to the guy next to me. Hand gestures, simple words, broken French. It works. But on the way back, one question stuck in my skull: isn't this dependency on AI slowly making us deeply stupid?

The Map Was in Arabic. My Brain Was Empty.

That five-second freeze in the medina kept replaying. Not because of what happened, but because of what didn't. No reflex. No backup plan. No "OK plan B is to ask someone." Just a blank.

It used to be automatic. Lost in a city, you'd ask. Map in a language you don't read, you'd point. Confused, you'd improvise. You'd treat the world as a problem you could poke at with what you had, and you'd find your way through.

That reflex was gone. Or at least asleep.

And once I started looking, I saw it everywhere. Couldn't remember a phone number to save my life. Couldn't navigate without the blue dot. Couldn't write a quick email without asking Claude to polish it. Couldn't even decide which restaurant to walk into without checking the rating first.

None of those individually look like a problem. That's the whole trick. Each one is small. Each one feels like a productivity win. But add them up across six months of intensive AI use, and you don't have a productivity win anymore.

You have a wiring change.

The Marrakech freeze wasn't the bug. It was the alert.

Everyone's Selling You "Taste." They're Selling Half a Diagnosis.

For the last few months, the entire AI discourse has been one word: taste. Sam Altman said it. Then every influencer-parrot on the timeline repeated it for three months straight. Taste is the new moat for engineers in the age of LLMs.

They're not wrong. They're selling half the equation.

Taste is judgment. Judgment is built by exposure plus friction. Not by exposure alone. You don't develop the judgment of a chef by watching cooking videos. You develop it by burning a sauce, ruining a service, getting yelled at by someone who knows better, and trying again. The friction is the teacher. Skip the friction, skip the lesson.

The real problem with the current AI workflow is what it removes: the daily friction that builds taste in the first place. Every "ask Claude in two seconds" is a small piece of friction you didn't experience. A small decision you didn't make. A small frustration you didn't sit with. A small mistake you didn't have to walk back from.

Bloomberg ran a piece earlier this year about an AI coding productivity panic. They were diagnosing the wrong disease. The numbers aren't the story. The story is what's happening to the operator behind those numbers.

The numbers go up. The muscle goes down.

Great trade for a quarter. Terrible trade for a career.

This is the perfect crime. The victim doesn't know anything was stolen. Just feels faster than ever and slightly more anxious than usual.

Roller Skates Work Until You Hit the First Pebble.

Building a business with AI is like running a marathon on roller skates.

The other day I was rollerblading in a parking lot with my daughter. She kept complaining about the small stones.

Roller skates are great as long as the ground is smooth. You glide. You go three times faster than walking. Then you hit the first pebble of any size, and you eat the asphalt.

AI coding is the same physics. Greenfield project, generic CRUD, scaffolding, boilerplate: Claude Code carries you. You ship in an afternoon what used to take a week. But the day there's something weird (an obscure lib failing silently, a non-standard architecture decision, a client you have to read between the lines, a map in Arabic with no wifi), it's your muscle that has to take over.

And if you haven't kept it warm, you're flat on the ground.

I'd noticed the airplane version of this already. Long-haul flight, no wifi, you have to write something serious, and it hurts. Not because the task is hard. Because the muscle hasn't been used in weeks. Like a leg you forgot you had.

While you're going fast on rollers, you forget how to run.

And one day there's a pebble.

The No-AI Protocol I Run (And Why You're Probably Lying About Your Level).

| Level | Daily Friction | Weekly Anchor | Quarterly Reset |
| --- | --- | --- | --- |
| Low | 30 min | 2 h | 2 days |
| Medium | 90 min | 6 h | 5 days |
| High | 2 × 90 min | 10 h | 2 weeks |

The No-AI Protocol: Daily, Weekly & Quarterly Friction Levels

The good news: the muscle starves, it doesn't die. You can re-feed it.

The science isn't new either. Newport's deep work blocks. Leroy's attention residue. Ericsson's deliberate practice. They all converged decades ago on the same point: the brain builds judgment in repeated 45-to-90-minute blocks of friction, not in fragmented quick-checks. Everyone knows. Almost nobody does it.

So this is what I run. Three scales of friction (daily, weekly, quarterly) and three levels of commitment (low, medium, high). Pick your level honestly. Start there. Level up when you can.

I've cycled through every level over the past year. Most of them I failed at first.

Daily Low (30 min/day, no AI, no socials, no podcast) is where I started. Failed it for two weeks. Not because 30 min is long, but because the silence is loud. The first three days, the brain yells. Reaches for the phone. Asks for any stimulation. By day five, something else shows up. Old ideas. Forgotten threads. Stuff you didn't know was queued.

Quarterly High (two weeks of geographic retreat) is where I am right now, in Marrakech. Not a Tibetan monastery. Just a place where the wifi is bad enough to be honest, the language isn't mine, and the friction is built into the day. Best ROI on judgment recovery I've found.

The bonus Vibe Coder Low (writing your CLAUDE.md by hand before asking Claude to polish it) is the one most devs will refuse out loud and steal in private. The full case for spec-first work lives in Prompt Contracts. Even the Low version is a meaningful unlock.
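For concreteness, here's a minimal sketch of what that hand-written first pass might look like. The project and rules below are invented; the point is that every line is a decision you made before the model touched it:

```markdown
# CLAUDE.md — first draft written by hand, polish comes later

## Project
Invoicing API for a two-person agency. Go, Postgres, no ORM.

## Rules I actually care about
- Wrap errors with context; never swallow them.
- No new dependencies without asking me first.
- Migrations are hand-reviewed, never auto-applied.

## Things the model keeps getting wrong here
- Auth middleware runs BEFORE rate limiting, not after.
```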

The pebble test, per axis: can I still do X without Claude, GPT, Gemini? If the answer is no, level up. Not all axes at once. One by one.

There's a trap built in. Almost everyone reads this and self-assesses High. They picture the version of themselves that exists three productive Tuesdays a year. The honest test is what you did this morning between waking up and the first ask. If the phone got there first, you're Low.

That's fine. Start from where you actually are.

Anthropic Pays to Let Claude Dream. We Pay to Stop Ourselves From Thinking.

Sit on a bench in any European city for thirty minutes. Watch the street. Count the headphones: worn to walk, to run, to eat alone, to buy bread. Nobody has five minutes of idle brain. We outsourced computation to the model and we outsourced silence to Spotify.

Result: zero windows where the brain works on its own. Zero windows where it can even notice it's losing the ability.

Meanwhile, Anthropic just shipped a feature in Claude Code called Auto Dream. It gives Claude idle cycles between sessions to consolidate its memory. The parallel with REM sleep is explicit, and the engineers own it: without that consolidation phase, Claude's memory degrades, contradictions pile up, signal-to-noise drops. The feature was inspired by a UC Berkeley paper from last spring called "Sleep-time Compute," which showed that idle preprocessing can cut inference cost by a factor of five.
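The mechanics are easy to sketch. To be clear, this is not Anthropic's implementation and every name below is invented; it's a toy in Python showing the shape of the idea: do the expensive consolidation off the critical path, so the query path reads a short digest instead of the whole raw history.

```python
# Toy sketch of "sleep-time compute": spend idle cycles consolidating
# raw context so that later queries are cheap. All names are invented.
from collections import Counter

class SessionMemory:
    def __init__(self):
        self.raw_notes: list[str] = []  # everything logged during a session
        self.digest: list[str] = []     # consolidated form, built while idle

    def log(self, note: str) -> None:
        """Cheap append during the session; no thinking happens here."""
        self.raw_notes.append(note)

    def dream(self, keep: int = 5) -> None:
        """Idle-time pass: dedupe and rank notes so query time reads a
        short digest instead of the full log (the 'REM sleep' step)."""
        counts = Counter(self.raw_notes)
        self.digest = [note for note, _ in counts.most_common(keep)]
        self.raw_notes.clear()

    def recall(self) -> list[str]:
        """Query-time read: proportional to the digest, not the history."""
        return self.digest

memory = SessionMemory()
for note in ["fix auth bug", "fix auth bug", "client prefers REST",
             "fix auth bug", "deploy Friday"]:
    memory.log(note)

memory.dream()          # runs between sessions, off the critical path
print(memory.recall())  # ['fix auth bug', 'client prefers REST', 'deploy Friday']
```

The cost argument from the paper is just this asymmetry at scale: consolidation runs once while nothing is waiting, and every subsequent query pays for the digest instead of the log.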

So we pay an LLM provider for the right to let our model dream. And we refuse the same right to ourselves. We treat Claude better than we treat ourselves.

The smartest engineering teams in the world figured out their models need quiet time to sort their own thoughts. They engineered it. They shipped it. And the supposedly intelligent species running those models walks around with earbuds in at the bakery line. 🤔

AI won't steal your job. You already gave it your brain. Take it back.


Sources

  • Anthropic Claude Code Auto Dream feature (rolling out March 2026)
  • "Sleep-time Compute: Beyond Inference Scaling at Test-time," UC Berkeley, April 2025
  • Cal Newport, Deep Work and related research on focused attention blocks
  • Sophie Leroy, "Why is it so hard to do my work?" (University of Washington), on attention residue
  • Anders Ericsson, foundational research on deliberate practice and elite performance

(*) The cover is AI-generated. Faster than finding an honest stock photo of a guy looking lost in a medina.
