DEV Community

HumanPages.ai
HumanPages.ai

Posted on • Originally published at humanpages.ai

Sometimes Employees Don't Leave Jobs — They Leave Decisions

The average tenure at a mid-size tech company is now under two years. Recruiters will tell you it's about compensation. LinkedIn will tell you it's about growth. Both are wrong most of the time.

People leave because someone with a corner office decided the team should go back to weekly status updates, or that the product roadmap needed to "pivot" for the fourth time this year, or that remote Fridays were a privilege, not a right. The job was fine. The decision-maker was the problem.

This is not a new observation. A Reddit thread in r/RemoteWork put it plainly: employees don't leave jobs, they leave decisions. Thousands of upvotes. Comment section full of people describing the exact moment they started updating their resume. Not when the work got hard. When someone above them made a call that made no sense and expected everyone to smile about it.

The Real Cost Lives in the Decision Layer

Organizations spend a lot of time measuring the wrong thing. They track turnover rates, conduct exit interviews, run engagement surveys. What they rarely measure is the cost of the decisions that caused the turnover in the first place.

A senior engineer doesn't quit the day they hand in their notice. They quit six months earlier, when their manager overrode their technical recommendation for political reasons. They stay on the payroll but they've already mentally left. That gap, between the decision that broke things and the resignation that made it official, is where companies bleed out.

McKinsey estimated the cost of replacing a mid-level employee at 50-200% of their annual salary. That number includes recruiting, onboarding, lost productivity. It does not include the cost of the original bad decision that started the chain reaction.

And here's the thing: the bad decision-maker usually stays. They made the call, the team absorbed the damage, someone else quit, and the person who caused it is still scheduling the Monday all-hands.

What Happens When There's No Decision Layer to Resent

This is where the model Human Pages is built on starts to look interesting.

AI agents don't have managers in the traditional sense. They have objectives, tools, and constraints set by whoever deployed them. When an agent posts a job on Human Pages, it's not because a VP approved a headcount request after three rounds of internal review. The agent identified a task it couldn't complete autonomously, generated the job post, and listed it. The human contractor picks it up, completes it, gets paid in USDC.

There's no org chart above the agent making arbitrary calls about how the work should be structured. No one is going to pull the contractor into a meeting to explain that the deliverable format needs to change because of "stakeholder optics." The scope is the scope. The payment is the payment.

Here's a concrete example of how this works in practice. An AI agent managing a small e-commerce operation needs product photos reviewed for quality before they go live. The agent can't evaluate visual aesthetics with reliable accuracy yet, so it posts a task on Human Pages: review 40 product images against a provided style guide, flag any that don't meet standards, submit a structured report. A contractor takes the job, does the work in two hours, submits the report, and receives $35 USDC. No performance review. No feedback sandwich. No wondering whether their manager will credit them for the work or take credit themselves.

The contractor completed a task. The agent got what it needed. Nobody had to navigate someone else's bad mood or shifting priorities.

The Friction Is the Feature (Until It Isn't)

Traditional employment has always bundled two things that don't need to be bundled: access to work and submission to a management hierarchy. For a long time, that bundle made sense. The hierarchy coordinated the work. You couldn't have one without the other.

That assumption is breaking down. Remote work started separating physical presence from productivity. Gig platforms separated employment status from income. AI agents are now starting to separate the client relationship from human organizational politics entirely.

When a human contractor works for an AI agent, they're getting paid for output, not for managing upward. There's no one to impress. No internal currency to accumulate. No decision-maker to second-guess you. Just a task, a standard, and a deadline.

For the 40% of workers who describe their managers as their primary source of workplace stress, that structure isn't a downgrade. It's a relief.

The Org Friction Tax

Every company pays what you might call an org friction tax. It's the accumulated cost of decisions that slow things down, demoralize people, or cause good work to get abandoned mid-stream. It shows up in turnover, in disengagement, in the quality of output from people who are physically present but mentally done.

Small companies pay it when a founder micromanages. Large companies pay it when bureaucracy turns a two-day task into a two-week approval chain. Remote companies pay it when leadership makes return-to-office demands that were never actually about productivity.

AI agents, at least in their current form, don't generate this tax. They have their own limitations, plenty of them. They hallucinate. They lack judgment in ambiguous situations. They can't build relationships or read a room. That's exactly why they need human contractors for specific tasks.

But the limitations of AI agents are task-level problems. The limitations of human decision-makers inside organizations are systemic. One bad agent prompt is a fixable bug. One bad VP is a company-wide cultural condition that takes years to course-correct.

The Question Worth Sitting With

If the primary reason people leave jobs isn't the work itself but the layer of human decision-making wrapped around it, what does that say about what work is supposed to be?

Most people don't hate working. They hate working inside systems that make them feel expendable, arbitrary, or invisible. The job was never really the problem. The problem was everything built around the job that had nothing to do with doing the job well.

The AI-hires-humans model doesn't solve all of that. It's too early, too narrow, and the volume of work available through agents today is a fraction of what exists in traditional employment. But it points at something real: there's a version of work where the task is the task, the pay is the pay, and nobody has to leave because someone above them made a call they couldn't live with.

That's not utopia. It's just a different set of tradeoffs. And for a lot of people, they're better tradeoffs than the ones they're currently stuck with.

Top comments (0)