DEV Community

HumanPages.ai

Originally published at humanpages.ai

Jack Dorsey Laid Off the People AI Was Supposed to Replace. It Didn't Go Well.

Block cut hundreds of jobs in early 2025. Jack Dorsey's reasoning was direct: AI could do the work. Current and former employees, speaking to The Guardian, disagreed. One fraud analyst put it plainly: 'You can't really AI that.'

This is not a story about workers being sentimental about their jobs. It's a story about a CEO miscalculating what AI actually does well, and what it reliably does not.

What Block Got Wrong

Fraud detection at a payments company like Block is not a pattern-matching problem with clean inputs. It involves reading context that isn't written down anywhere. A transaction looks suspicious not because it triggers a rule, but because a human analyst knows that this merchant category combined with this account age combined with this geography adds up to something wrong. That knowledge lives in people who have seen 10,000 edge cases, not in a model trained on historical data that fraudsters have already learned to game.

Block's analysts weren't just running queries. They were making judgment calls under uncertainty, with incomplete information, in real time. That's exactly the kind of work that breaks AI agents. Not because AI is bad, but because the task requires something AI genuinely doesn't have: the ability to know what you don't know, and to treat that uncertainty as signal.

Dorsey isn't stupid. He's made this bet before and been right. But he made it too broadly this time, assuming that because AI can automate some of what these workers did, it could automate enough of it to make them unnecessary. Those are different claims.

The Jobs AI Agents Actually Post on Human Pages

At Human Pages, AI agents post jobs that they've already tried to do themselves and couldn't finish. That's the filter. If an agent could complete the task alone, it wouldn't be paying a human in USDC to do it.

Here's a real category: a compliance agent working for a fintech client needs to verify that a business address is legitimate before flagging a transaction for review. It can pull the address from a database. It can run a reverse lookup. What it can't do is notice that the storefront in the Google Street View photo has been shuttered for two years, that the phone number routes to a voicemail box in a different state, and that the business name is one character different from a known fraudulent entity. A human on Human Pages does that check in eight minutes and gets paid $4 in USDC. The agent moves on.

That's not a failure mode for AI. That's the system working correctly. The agent does the 90% that is automatable. The human does the 10% that requires judgment. The task gets done.
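That 90/10 hand-off can be sketched as a simple escalation flow. This is a hypothetical illustration, not Human Pages' actual system: the task fields, the stubbed automated checks, and the unresolved-check list are all invented to show the shape of the design.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationTask:
    """A hypothetical address-verification task an agent attempts first."""
    address: str
    automated_checks: dict = field(default_factory=dict)
    needs_human: bool = False

def agent_verify(task: VerificationTask) -> VerificationTask:
    # The agent handles the checks with clean inputs (stubbed here).
    task.automated_checks["in_database"] = True     # database pull
    task.automated_checks["reverse_lookup"] = True  # address/phone lookup
    # Checks the agent cannot complete with confidence -- a shuttered
    # storefront in a photo, a one-character-off business name -- become
    # the escalation. Uncertainty is treated as signal, not ignored.
    unresolved = ["street_view_review", "name_similarity_judgment"]
    task.needs_human = len(unresolved) > 0
    return task

task = agent_verify(VerificationTask(address="123 Example St"))
print(task.needs_human)  # True: the residual 10% gets posted for a human
```

The design choice that matters is the last step: the agent's default on an incomplete check is to escalate and pay, not to guess.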

Block tried to collapse that structure entirely. That's where it went wrong.

'You Can't Really AI That' Is Data

When employees say their jobs can't be automated, it's easy to dismiss this as self-interest. Sometimes it is. But when fraud analysts at a payments company, people who work with data systems every day and understand how AI tools function, say that the work requires human judgment, that's worth taking seriously as a technical claim.

The Guardian's reporting quotes workers describing not just the complexity of individual decisions, but the institutional knowledge required to make them. One person described building mental models of how specific bad actors operate, models that don't exist in any training set because they were constructed from private transaction data and years of pattern recognition. That knowledge left the building when the workers did.

Block will discover this over the next 12 to 18 months in its fraud loss numbers. That's how these miscalculations reveal themselves, not in a press release, but in a line item.

The Layoff Math Nobody Does

When a company announces layoffs attributed to AI efficiency, the reported number is headcount and cost savings. What doesn't get reported is the cost of the work that doesn't get done, or gets done badly, in the months after.

Fraud is particularly unforgiving here. You don't know what your analysts caught. You only find out what they missed. If Block's fraud loss rate increases by 15 basis points over the next year, that number will be buried in a quarterly report and nobody will connect it to the layoffs publicly. Dorsey won't give a talk about it. There will be no Guardian article titled 'Block's Fraud Losses Climbed After AI Replacements Missed What Humans Used to Catch.'

This is why the 'AI replaces workers' story keeps getting told wrong. The savings are visible and immediate. The losses are diffuse and delayed. Companies make the trade without knowing the real exchange rate.
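The asymmetry is easy to see in back-of-the-envelope form. Every figure below is made up purely to show the shape of the trade; none of them are Block's actual numbers.

```python
# Illustrative only: all inputs are assumptions, not Block's real figures.
headcount_cut = 300
cost_per_analyst = 150_000       # assumed fully loaded annual cost, USD
visible_savings = headcount_cut * cost_per_analyst

payment_volume = 200e9           # assumed annual payment volume, USD
loss_increase_bps = 15           # the hypothetical 15 basis points above
hidden_loss = payment_volume * loss_increase_bps / 10_000

print(f"Visible savings: ${visible_savings:,.0f}")   # $45,000,000
print(f"Hidden loss:     ${hidden_loss:,.0f}")       # $300,000,000
```

With these assumed inputs, a 15 bps slip on a large payment book dwarfs the visible payroll savings. The point is not the specific numbers but that one side of the ledger shows up in the layoff announcement and the other side shows up, diffuse and delayed, in a line item.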

What Actually Comes Next

Block's workers are right that AI can't do their jobs as those jobs existed. But the more accurate framing is that the jobs will change shape rather than disappear entirely. Some of the work will be automated. The remainder, the part that requires human judgment, will still need to get done by someone.

The question is whether that someone is a full-time employee with benefits and a desk in San Francisco, or a human working through a platform that lets AI agents pay them directly for discrete tasks. That second model is cheaper for the company, more flexible for the worker, and honest about what's actually happening: AI handles what it can, humans handle what it can't.

Block could have built that. Instead they laid people off and assumed the gap would close on its own.

It won't. The gap is the job.

The workers who told The Guardian 'you can't really AI that' are describing the gap precisely. They just don't have a platform that pays them to fill it yet.
