Programming used to have a speed limit. You wrote code, fought the compiler, debugged, tested, and eventually deployed. The hit of satisfaction came when the feature shipped or the tests went green. That cycle took hours. Sometimes days. The delay regulated behavior.
You couldn't binge on deploy-satisfaction because the loop was too slow. The friction was structural. Nobody pulled compulsive 14-hour days writing Java servlets because the rewards came too fast; if anything, people quit too early because the rewards came too slowly.
AI coding agents removed the speed limit. What replaced it works a lot like a slot machine. I'm using that as a behavioral metaphor here, not a clinical diagnosis.
Thirty seconds
Prompt. Result. Satisfaction. Next prompt. The entire arc of "I had an idea, I built it, I saw it work" now fits inside 30 seconds. And it repeats. Indefinitely.
A 2023 paper in Addictive Behaviors by Clark and Zack lays out why this matters. Two factors make non-drug activities addictive: reward variability (you don't know exactly what you'll get) and frequency (how many reward cycles you can fit into a unit of time). They call the second one temporal compression. Social media has both. Loot boxes have both. Slot machines are the textbook case.
AI coding agents have both. Each prompt returns something slightly different (variability). And you can run dozens of cycles per hour (compression). The paper's conclusion is blunt: "By enabling near limitless diversity and speed of delivery of non-drug rewards, digital technology has permitted engineering of reinforcers with addictive potential that, delivered under natural conditions, would likely never become addictive."
They were writing about gambling and social media. They could have been writing about your terminal.
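The temporal-compression claim can be made concrete with a toy simulation. All numbers here (session length, cycle times, hit rate) are illustrative assumptions, not measurements: holding reward variability fixed, shrinking the cycle time turns a handful of reward events per day into hundreds.

```python
import random

def reward_cycles(session_minutes, cycle_minutes, hit_probability):
    """Model a work session as repeated reward cycles.

    hit_probability stands in for reward variability (each cycle may
    or may not pay off); cycle_minutes stands in for temporal
    compression (how many cycles fit in one session).
    """
    cycles = int(session_minutes / cycle_minutes)
    hits = sum(1 for _ in range(cycles) if random.random() < hit_probability)
    return cycles, hits

random.seed(0)
# Compile-debug-deploy era: one multi-hour loop per feature (illustrative).
print(reward_cycles(session_minutes=8 * 60, cycle_minutes=180, hit_probability=0.7))
# Agent era: one ~30-second prompt/result loop (illustrative).
print(reward_cycles(session_minutes=8 * 60, cycle_minutes=0.5, hit_probability=0.7))
```

Same variability, same eight hours; only the cycle time changes, and the count of reward opportunities goes from single digits to hundreds.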
Eight tabs, eight reward streams
Running multiple agent sessions in parallel looks like multitasking. It's actually multiple concurrent reward streams. A study analyzing over a million social media posts found that people adjust their posting frequency to maximize the rate of likes they receive, the same way animals in a Skinner box adjust lever presses to maximize food pellets. Agent tabs work on the same principle. Every switch carries a chance that something finished. A small hit.
The sessions don't compete for your attention. They take turns feeding it.
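A quick way to see why the tab-switch itself becomes rewarding: if the sessions finish independently, the probability that some tab has finished since you last looked grows fast with the number of tabs. The per-tab figure below is a made-up illustration, not a measurement.

```python
def chance_something_finished(tabs, p_finish_per_check):
    """Probability that at least one of `tabs` parallel sessions has
    finished since the last check, assuming independent streams.
    The per-tab probability is an illustrative assumption."""
    return 1 - (1 - p_finish_per_check) ** tabs

# One tab: a check pays off 5% of the time.
print(round(chance_something_finished(1, 0.05), 2))
# Eight tabs: the same compulsive check pays off about a third of the time.
print(round(chance_something_finished(8, 0.05), 2))
```

That shift, from a check that rarely pays off to one that usually might, is the variable-ratio schedule in miniature.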
I have about twenty terminal tabs open right now. Some are from days ago, left around because I'll probably get back to them. Five or six are active. One is pulling together an infrastructure report, one is waiting on CI after fixes it wrote itself, one is something I opened mid-sentence to chase a bug that came up while I was reviewing something else. While writing this paragraph I switched tabs twice to check if a build passed. It doesn't feel like a problem while it's happening.
I keep seeing people call this "an assembly line of productivity." Notice what that framing admits: an assembly line runs whether you're paying attention or not, and your job shrinks to feeding it. Nobody describes their relationship with a useful tool that way.
Parallel sessions and rapid context switching get marketed as productivity features: wanting novelty, trying things quickly, jumping between five terminals. But the behavior this produces looks a lot like the variable-reward pattern behind checking your phone 80 times a day.
The meta-tool trap
People are building elaborate systems to manage the chaos of their agent-assisted workflow. Productivity hubs with skill trees. XP points. Urgency scores. Daily summaries that rate your day out of 10. Automatic task splitting when you miss a deadline.
They gamified the thing that was already acting like a game.
The meta-work around the compulsive work becomes its own loop. Hit from the agent completing a task. Hit from watching the XP bar move. Hit from the daily score. And then you need a system to manage that system, and at some point you're four layers deep in productivity tooling and haven't shipped anything in a week.
Starting is the drug
The first hour of a new project has the highest novelty density. Agents make that phase unusually cheap. Everything is possible, nothing is broken yet, and the code just keeps coming.
Finishing is edge cases. Tests for the boring paths. The last 20% that takes 80% of the time. Reward density drops off a cliff. So you open a new tab and start something else.
You see the pattern everywhere once you look. Bursts of intense output followed by nothing. Somebody produces a hundred pieces of content in six weeks, then drops to zero. Twenty projects in various stages of beginning, none shipped.
And building the thing is the easy part. Finding users, handling support, keeping the service running at 3am, writing docs nobody reads, negotiating contracts, doing the marketing that actually brings people in. None of that gives you a dopamine hit, and none of it fits in a 30-second prompt cycle. The agent can scaffold an app in an afternoon. It can't make anyone care about it. The work that makes a product real is exactly the work that the reward loop skips over.
The behavior tracks novelty, not value. And since starting costs almost nothing now, you run out of interest before you run out of ideas.
The accidental guardrail
The most revealing thing about these tools is the feature nobody asked for: usage limits.
Usage caps end up acting as a hard stop. That's a strange role for a productivity tool, but it's clearly how some people experience these products. Claude's own usage-limit docs discuss how to work within the caps, and some users describe the cap running out as the thing that makes them stop for the night. In an r/ClaudeAI thread, one person described deliberately downgrading their plan so it would expire before midnight: they recognized the pattern and reintroduced friction on purpose.
A productivity tool where the most effective safety feature is running out of capacity.
Your text editor doesn't need a cooldown timer. Nobody ships an IDE with "take a break" reminders. Productivity tools don't usually come with harm reduction features.
The token limit is a circuit breaker. Most of the discourse around it is people asking how to get rid of it.