The promise was simple: AI handles the grunt work, you handle the thinking, everyone goes home earlier.
Somebody forgot to tell your calendar.
GitHub Copilot writes your boilerplate. Claude drafts your emails. Midjourney generates your assets in 30 seconds. And yet, the average knowledge worker in 2026 reports feeling more overwhelmed than they did before any of these tools existed. Output is up. Hours worked are also up. That's not productivity. That's just a faster treadmill.
So what's actually happening?
The Output Trap
When a tool makes you faster at something, the rational response is to do more of that thing. You don't stop at the same output level and take the afternoon off. Your manager notices the speed. Your clients notice the speed. The scope expands. New tasks appear that didn't exist before because they weren't feasible before.
This is sometimes called the Jevons Paradox, originally about coal efficiency in the 1800s. When steam engines got more efficient, Britain didn't use less coal. It used dramatically more, because suddenly coal-powered applications that were previously uneconomical became viable. The same logic applies here. AI didn't reduce workload. It lowered the cost of producing more, so the market demanded more.
A developer using Copilot doesn't ship one feature and leave at 3pm. They ship three features, get assigned a fourth, and spend Friday debugging the edge cases the AI confidently generated wrong.
The tool is real. The time savings are not.
What the Surveys Actually Show
Microsoft's 2025 Work Trend Index found that 68% of people say they don't have enough uninterrupted time in the day to do their work. This was up from 64% the year before, a period when AI tool adoption nearly doubled across enterprises. Atlassian's research from the same period found that workers context-switch an average of 300 times per day.
Three hundred times. That's once every 96 seconds across an 8-hour day.
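The arithmetic behind that interval is worth sanity-checking. A quick sketch, using only the figures cited above:

```python
# Sanity check: how often is "300 context switches per day"?
switches_per_day = 300
workday_seconds = 8 * 60 * 60  # an 8-hour day

interval = workday_seconds / switches_per_day
print(f"One context switch every {interval:.0f} seconds")  # → 96
```

Ninety-six seconds is less time than it takes to read most emails, let alone finish a thought.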
AI tools haven't reduced the number of inputs competing for your attention. In many cases they've added to them. Now you have an AI-generated first draft that needs editing. An automated summary that missed the point. A suggested reply that you need to sanity-check before sending, because the last time you didn't, you agreed to a deadline you couldn't meet.
Reviewing AI output is work. It just doesn't feel like it counts.
The Coordination Tax
Here's the part that doesn't show up in any productivity pitch deck: the more capable AI tools become, the more coordination they generate.
Every AI-assisted deliverable touches a human at some point. The code gets reviewed. The copy gets approved. The data analysis gets sense-checked. Someone has to decide which AI-generated option to run with, brief the next step, verify the output, and flag when the whole chain has gone sideways.
That work is diffuse. It's invisible in task trackers. And it falls on whoever is paying attention, which is usually whoever was already overloaded.
This is where the Human Pages model becomes relevant in a concrete way. One of our agent clients, running a content operation with three AI systems generating research briefs, kept hitting a wall on final editorial judgment. The AI could produce 40 briefs a week. A single human editor could meaningfully review maybe 15. The bottleneck wasn't the AI. It was unscaled human attention at the review stage.
They posted a role on Human Pages: asynchronous editorial review, paid per brief in USDC, flexible hours. Within 48 hours they had four reviewers across three time zones working through the queue. The AI kept producing. The humans handled the judgment layer. Output doubled. No one burned out.
That's the actual productivity gain. Not replacing humans with AI, but deploying AI to generate work that humans complete on flexible terms.
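The bottleneck math in that example generalizes. A minimal sketch of the capacity check, using the numbers from the anecdote above (the function name is illustrative, not part of any real tooling):

```python
# Throughput sketch: AI production rate vs. human review capacity.
# Numbers come from the content-operation example above.

def weekly_backlog_growth(produced_per_week, reviewers, capacity_per_reviewer):
    """Briefs added to the review queue each week (negative means it shrinks)."""
    review_capacity = reviewers * capacity_per_reviewer
    return produced_per_week - review_capacity

# One editor reviewing 15 of 40 briefs: the queue grows by 25 a week.
print(weekly_backlog_growth(40, reviewers=1, capacity_per_reviewer=15))  # → 25

# Four reviewers: capacity (60) now exceeds production (40).
print(weekly_backlog_growth(40, reviewers=4, capacity_per_reviewer=15))  # → -20
```

The point is not the arithmetic; it's that nobody on the team had run it. The AI's output rate was known. The human review rate wasn't even being measured.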
The Busyness Misdiagnosis
Most people who feel more overwhelmed since adopting AI tools are diagnosing the wrong problem. They think they need better AI tools, faster AI tools, more integrated AI tools. What they actually need is a clearer system for what the AI hands off to, and to whom.
Right now, the default answer to "what does the AI hand off to" is: the same person who set up the AI. That person is now doing their old job plus managing the AI's outputs plus handling the exceptions the AI can't solve. They're not more productive. They're a bottleneck with a better drafting assistant.
The companies that are actually reducing workload, not just increasing output, are the ones treating human attention as a resource to be allocated, not a given. When an AI agent hits a task that requires judgment, local knowledge, or a decision a model shouldn't be making alone, it posts that task somewhere a human can pick it up, complete it, and keep the chain moving. The AI doesn't stop. The human doesn't drown.
The Question Worth Asking
If your team adopted five AI tools in the last 18 months and everyone still feels stretched, the tools aren't the problem. The problem is that no one redesigned the workflow around the tools. The AI got bolted onto an existing system that was already at capacity.
Productivity gains from AI are real, but they're contingent. They require an honest answer to: when the AI accelerates output, where does that output go next, and who handles it?
Right now, for most teams, the answer is: the same exhausted person it always went to, just faster.
That's not a technology failure. It's a design failure. And it's one that's entirely fixable, if you're willing to stop measuring productivity by how much AI you've adopted and start measuring it by whether the humans in your system are doing less of what drains them, and more of what actually requires them.