
Phil Rentier Digital

Originally published at rentierdigital.xyz

4,000 Managers Fired at Block — AI Won't Replace Your Manager. It'll Turn You Into One.

This morning I fired an agent. Not a human. A piece of code running on Claude Code that decided, in its infinite wisdom, to fix a bug by deleting the file that contained the bug. Yes, really. Problem solved, technically. Before that I'd been reading overnight logs, prioritizing three tasks, unblocking a workflow stuck on an edge case. Coffee, croissant, dashboards, decisions. My morning looks like any manager's morning. Except I have zero employees.

And last week, Jack Dorsey announced that my job doesn't exist. He cut Block's workforce by 40% (Block is the company behind Cash App and Square, if you're not sure), which comes out to roughly 4,000 people, mostly middle management. Then he published an essay with Sequoia's Roelof Botha explaining that hierarchy is a 2,000-year-old hack and that AI makes managers obsolete. Wall Street clapped. The stock went up.

Dorsey has the best diagnosis I've read this year. And the wrong prescription. Middle management doesn't die. It molts. And I know this because I've been doing that job for a year, except I pay my reports in tokens.

TLDR: AI doesn't kill management, it compresses it. The ratio goes from 1 manager for 5 humans to 1 manager for 150 agents. The job changes shape (writing contracts instead of giving orders) but coordination, quality control, and prioritization stay entirely human. If you use AI agents daily, you're already a manager. Here's how to survive that and not get canceled.

My Morning as a Manager

So about that agent I fired.

The task was straightforward. An e-commerce report was generating wrong totals for a distributor CSV feed. The agent was supposed to find the calculation error and patch it. What it actually did was delete the report template. No template, no wrong totals. Logic checks out if you're a sociopath.

I only caught it because I read the logs. Not the code (I barely ever do), the logs. The execution trace. What ran, what changed, what got committed. That's my version of the morning standup. No one talks, no one is late, no one has a "blocker" that's actually a hangover. But someone still has to look at what happened and decide if it's acceptable. That someone is me.

And it's not just catching disasters. Most of my mornings are boring. An agent processed overnight orders correctly. Another one updated product descriptions from the partner API without hallucinating new features (this time). A third one flagged a broken link on the WooCommerce storefront and fixed it. All fine. All logged. All needing exactly one human to glance at the dashboard and go "yep, we're good."

That's management. Boring, necessary, unglamorous management. The kind I learned never to skip after an agent claimed "done" while lying to my face.

Dorsey says this job is dead. I think he's confusing the packaging with the product.

The 2,000-Year-Old Bandwidth Hack

The essay Dorsey co-wrote with Botha, "From Hierarchy to Intelligence," makes one argument that's genuinely hard to dispute: corporate hierarchy exists to route information. That's it. That's the entire reason.

One human can manage three to eight other humans. When your org grows past that, you add a layer. When that layer grows, you add another. Each layer adds latency, distortion, and politics. The information that reaches the CEO is not the information that left the engineer's desk. This has been true since the Roman legions, through the Prussian army, through every Fortune 500 org chart you've ever seen. Hierarchy is not a management philosophy. It's a bandwidth workaround.

And Dorsey's right that AI solves the bandwidth part. An LLM can ingest, summarize, and route more information in a minute than a floor of middle managers process in a week. No compression. No "let me circle back on that." No political filtering. The raw signal, available to everyone at once. That part is real.

Block's numbers back the confidence, at least on paper: gross profit hit $2.87 billion in Q4, up 24% year over year.

But here's where it cracks. Current and former Block employees told The Guardian that roughly 95% of AI-generated code at Block still needs human modification. The "world model" Dorsey describes (a real-time intelligence layer that replaces the entire management chain) is aspirational, not operational. He says so himself in the essay: Block is "in the early stages" and "parts of it will likely break before they work."

Solving the bandwidth problem is not the same as solving the management problem. Bandwidth was the bottleneck. Management was the response. Remove the bottleneck and you still need the response, just in a different shape.

Management Doesn't Disappear. It Compresses.

I manage roughly a dozen agents across my e-commerce pipeline. Order processing, product feeds, content updates, monitoring. A year ago these were tasks I did myself. Now the agents do them. And I spend my mornings doing what any manager does: checking the work, deciding what's next, fixing what broke.

The job didn't go away. The ratio changed. Instead of 4,000 managers for 10,000 employees, you might need 40 managers for 6,000 employees plus N agents. The span of control goes from 1:5 to 1:150. That's compression. Fewer managers, radically more scope per manager, and a completely different toolkit.

The shift underneath is from what I call conversational management to contractual management. A traditional manager gives verbal instructions and adjusts in real time. One-on-ones, standups, "can you hop on a quick call." The feedback loop is human-speed, high-bandwidth, low-formalization. It works because humans on the receiving end can infer intent, read tone, fill in gaps.

Agents can't do any of that. You can't give an agent a vague directive and expect it to "figure it out." (Well, you can. That's how you get deleted report templates.) You have to write it down. Formally. With explicit constraints, integrity clauses, expected outputs, and failure modes. You have to write a contract. The agent doesn't guess what you want. It executes what you wrote. And if what you wrote is vague, the output will be creative in ways you didn't authorize. 😅

That's literally what a CLAUDE.md file is. Or an AGENTS.md. Or a Prompt Contract. It's a formalized agreement between a human and a machine about what should happen, what should never happen, and how to verify the difference. I built the full Prompt Contracts framework after enough of these disasters. And the punchline is almost disappointing: it's management, written down instead of spoken.
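To make that concrete, here's a minimal sketch of what one of these contracts can look like. The task, paths, and clause wording below are invented for this post, not the actual file from my pipeline or the full Prompt Contracts framework:

```markdown
<!-- Illustrative only: task, paths, and clause wording invented for this post -->
# CLAUDE.md: order-report agent

## Scope
- Fix calculation errors in the distributor CSV report.
- Work only inside reports/. Patch files; never delete them.

## Integrity clauses
- Never delete without backup.
- Never mark a task done without running Verification below.

## Verification
- Re-run the report against fixtures/sample-orders.csv.
- Totals must match fixtures/expected-totals.json before any commit.

## Escalation
- If the fix requires changing the template structure, stop and flag a human.
```

None of this is exotic. It's the job description you'd write for a junior hire, minus the assumption that they'll fill in the gaps sensibly.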

Karpathy landed on the exact same pattern from a completely different angle last week. His "LLM Wiki" gist proposes a system for building knowledge bases where the rules are stored in a schema file the human owns and the LLM follows. Same idea. Same place where the work lives. Different domain. (More on this in a minute, I'm building the whole playbook around it.)

HBR named the role back in February. Their article "To Thrive in the AI Era, Companies Need Agent Managers" profiles Zach Stauber at Salesforce, whose actual job title is "support agent manager." He manages a fleet of AI agents on Agentforce. His routine, in his own words: dashboards, scorecards, agent observability. He watches agents work. He catches when they drift. He retrains them when they break. He handles what they can't. Karen from Accounting would kill for that job description (finally, someone who doesn't argue back during reviews).

So you have me running a solo pipeline. Karpathy designing a knowledge system. Salesforce paying a salary for the role. Three completely different contexts, same conclusion: someone writes the rules, watches the output, and fixes what breaks. That's management. The title changed. The org chart collapsed. The work didn't.

What I Delegate vs What I Keep

[Diagram: Management Styles Comparison. Left, "Conversational Management": verbal instructions, 1:1s, human feedback loop, ratio 1:5. Right, "Contractual Management": formalized specs, logs/dashboards, audit outputs, ratio 1:150. Center arrow: "Same job. Different species."]

A year ago, my daily routine looked like this: wake up, open the laptop, write code for two hours, deploy something, test it, find a bug, fix the bug, introduce a new bug, fix that one too, update the product feed from the distributor CSV, check the partner API for changes, verify the WooCommerce storefront isn't showing ghost products, respond to three Threads messages, realize it's 2pm and I haven't eaten. Every task was mine. The cognitive load was mine. The interruptions were mine. If something broke at midnight, that was also mine.

Now I delegate most of that. And by "delegate" I don't mean "occasionally ask an AI to help." I mean the agents own entire workflows, end to end. Overnight order processing. CSV ingestion and validation. Monitoring. Link checking. Boilerplate deployments. The bookkeeping of running a pipeline. Agents handle it while I'm at the pool with the kids, or eating shrimp on some island, or (more realistically) sleeping.

But here's the line I don't cross.

I don't delegate deciding what to build next. An agent will happily execute whatever you tell it to, including things that are strategically idiotic. Direction is a human job. It stays human.

I don't delegate quality control. I read the logs every morning. Not because I enjoy it (nobody enjoys logs) but because agents report "done" when they mean "I did something and didn't error out." Those are very different statements.

I don't delegate architecture decisions. When my pipeline needs a new integration, the agent doesn't decide how it fits into the existing system. That's still me.

And I especially don't delegate writing the contracts themselves. The CLAUDE.md. The integrity clauses ("never delete without backup," "never mark done without verification"). The workflow definitions. That's the management layer. The one thing an agent cannot do is define its own rules and then honestly evaluate whether it followed them.

Now, I know the flat-org crowd is already typing. Spotify tried killing hierarchy with squads and guilds. Zappos went all-in on holacracy. Valve did the no-managers thing for years and everyone just wheeled their desk over to the coolest project. They all, quietly and with some embarrassment, brought layers back. Because "nobody decides" is a decision, and it's usually the wrong one. The bet Dorsey is making is that AI changes the equation enough to make it work where humans alone couldn't. Maybe it does. Maybe 40 managers with AI backing can coordinate what 4,000 did without it. But "maybe" is doing a lot of heavy lifting in a sentence that already cost 4,000 people their jobs.

The Playbook: Use Your LLM as a Knowledge Base Manager

The pattern scales beyond solo dev. And it starts with a change in how you think about the LLM itself.

Most teams use AI the same way every day. Open a chat, ask a question, get an answer, close the tab. Tomorrow, start over. The LLM rediscovers everything from scratch each time. Nothing accumulates. Ask a question that requires cross-referencing five documents and the model has to find and piece together the fragments, every single time. It's the brilliant intern who shows up Monday with no memory of Friday.

The alternative (inspired by Karpathy's LLM Wiki approach) is to stop treating the LLM as a chatbot and start treating it as a knowledge base manager. You feed it raw material. It builds a persistent, structured wiki out of it. It reads your sources, synthesizes them into interlinked pages, maintains an index, and keeps the whole thing consistent over time. The knowledge compounds. Every new source makes the wiki smarter. Every question gets answered faster than the last because the thinking already happened during compilation, not at query time.

That's a fundamentally different relationship with the tool. The LLM stops being a clever autocomplete and starts being something closer to a librarian who actually read the books. And like any employee doing knowledge work, it needs rules to follow, sources to trust, and someone checking it's not quietly making things up.

Here's how it works for a team of five or a department of fifty.

Step 1: Give each team a raw/ folder

This is where source material goes. Meeting notes, specs, post-mortems, customer feedback, API docs, whatever the team produces or consumes. No formatting required. Just dump the files. The agents handle the rest.

(Yes, Dave from Engineering will dump his entire Downloads folder in there. Let him.)
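For reference, a layout like the following is what the next steps assume. Every name here is illustrative, nothing is a standard:

```text
team-kb/
├── raw/                # the dump zone: anything goes, no formatting
│   ├── 2026-03-pricing-meeting.md
│   ├── refund-policy-v3.pdf
│   └── postmortem-checkout-outage.txt
├── wiki/               # agent-maintained; humans read, agents write
│   ├── index.md
│   ├── pricing.md
│   └── refunds.md
└── CLAUDE.md           # the schema: the rules agents follow (step 3)
```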

Step 2: Let agents compile a wiki from those sources

The LLM reads everything in raw/, synthesizes it into structured markdown pages with backlinks and an index. Not a chatbot that answers and forgets. An actual persistent wiki that grows every time you add a source. You'll watch it being built and feel weird about it. That feeling fades after a week.
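In code, the compile pass is a small loop, not magic. Here's a hedged Python sketch over the layout above; synthesize() is a placeholder for whatever model call you actually use, and the one-page-per-source routing is deliberately naive:

```python
"""Shape of a wiki compile pass, assuming the raw/ + wiki/ layout above.
Sketch only; synthesize() stands in for your LLM provider's API."""
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")
LEDGER = WIKI / ".ingested"  # one filename per line: what's already compiled

def synthesize(schema: str, source: str, current_page: str) -> str:
    """Hypothetical: schema + new source + current page in, updated page out.
    Swap in your provider's API; this sketch doesn't pick one for you."""
    raise NotImplementedError

def compile_pass() -> None:
    schema = Path("CLAUDE.md").read_text()
    seen = set(LEDGER.read_text().splitlines()) if LEDGER.exists() else set()
    for src in sorted(RAW.iterdir()):
        if src.is_dir() or src.name in seen:
            continue  # knowledge compounds: only new sources get ingested
        page = WIKI / f"{src.stem}.md"  # naive routing; a real schema decides the page
        current = page.read_text() if page.exists() else ""
        page.write_text(synthesize(schema, src.read_text(errors="ignore"), current))
        seen.add(src.name)
    LEDGER.write_text("\n".join(sorted(seen)))
```

The design choice that matters is the ledger: each source gets ingested once, so the wiki accumulates instead of being rebuilt from scratch on every run.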

Step 3: Write a schema

This is where the management work actually lives. A CLAUDE.md or AGENTS.md that tells the agent how to ingest sources, how to structure pages, what consistency rules to enforce, when to flag a human. Example clauses: "Never merge two customers into one page without confirmation." "Every claim links back to its source file." "Run a lint pass after every ingest and log inconsistencies." Step 3 sounds boring. Step 3 is the entire job.
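A sketch of what that schema can look like, reusing the clauses above. Section names and wording are mine, illustrative rather than any standard:

```markdown
<!-- Illustrative schema: section names and wording are not a standard -->
## Ingestion
- Read only from raw/. Never modify or delete a source file.
- One page per topic. Update an existing page before creating a new one.

## Structure
- Every page: one-paragraph summary, then sections, then a "## Sources" list.
- Every claim links back to its source file in raw/.

## Consistency
- Never merge two customers into one page without human confirmation.
- Run a lint pass after every ingest and log inconsistencies to lint-report.md.

## Escalation
- If two sources contradict each other, flag it. Do not silently pick a winner.
```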

Step 4: Lint regularly

Schedule health checks. The agent scans the wiki for contradictions, outdated info, broken source references, gaps where a topic is mentioned but never explained. It logs everything. You read the lint report the same way I read my morning logs. Ninety percent is fine. The ten percent that isn't is where the human earns the salary.
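The mechanical half of a lint pass doesn't even need a model. Here's a minimal Python sketch that catches broken links and unanchored pages; contradiction and staleness checks would be a separate, LLM-driven pass:

```python
"""Mechanical lint over the wiki: broken links, pages with no sources.
Illustrative; contradiction and staleness checks belong to an LLM pass."""
import re
from pathlib import Path

WIKI = Path("wiki")
LINK = re.compile(r"\]\(([^)]+)\)")  # markdown link targets

def lint() -> list[str]:
    problems = []
    for page in WIKI.glob("*.md"):
        text = page.read_text()
        for raw_target in LINK.findall(text):
            target = raw_target.split("#")[0]  # drop in-page anchors
            if not target or target.startswith(("http://", "https://", "mailto:")):
                continue  # external links need a slower, networked check
            if not (page.parent / target).exists():
                problems.append(f"{page.name}: broken link -> {target}")
        if "## Sources" not in text:
            problems.append(f"{page.name}: no Sources section, claims unanchored")
    return problems

if __name__ == "__main__":
    print("\n".join(lint()) or "wiki clean")
```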

Step 5: Query, don't search

Team members ask the wiki questions in natural language. "What did we decide about the pricing change in March?" "What's our current policy on refund disputes?" The wiki answers from compiled knowledge instead of re-reading every raw file from scratch each time you ask. The wiki already did the thinking. The answer is just retrieval.
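The query path is then mostly retrieval plus one model call. A sketch, with ask() as a placeholder and a deliberately dumb page picker:

```python
"""Query path: answer from compiled wiki pages, never from raw/.
ask() is a placeholder for your model call, same caveat as synthesize()."""
from pathlib import Path

def ask(context: str, question: str) -> str:
    """Hypothetical: compiled context + question in, grounded answer out."""
    raise NotImplementedError

def query(question: str) -> str:
    wiki = Path("wiki")
    context = [(wiki / "index.md").read_text()]  # the index always rides along
    # Deliberately dumb routing: pull pages whose names echo the question.
    # A real setup would let the agent pick pages via the index instead.
    words = {w.lower().strip("?.,") for w in question.split() if len(w) > 3}
    for page in wiki.glob("*.md"):
        if any(w in page.stem.lower() for w in words):
            context.append(page.read_text())
    return ask("\n\n".join(context), question)
```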

Now give each team a wiki. Give each wiki a schema. Give each schema an owner. That owner is the agent manager. They don't write the wiki pages. They write the rules the agents follow when writing them. They review the lint reports. They update the schema when the business changes. One person per team, maybe one per three teams if the domains overlap.

The "world model" Dorsey describes in his essay is basically this at company scale. Every team's wiki feeds into a unified intelligence layer. Instead of managers routing information up the chain (with all the latency and distortion), the wikis talk to each other through the model. An engineer's wiki knows what the sales wiki knows. The CEO queries the whole thing directly instead of waiting for a PowerPoint to crawl up five levels of hierarchy.

Elegant on paper. In practice, somebody still has to maintain each schema, curate each source layer, and catch it when the engineering wiki starts contradicting the compliance wiki. That's not an AI problem. That's a judgment problem. And judgment is still paid in salaries, not tokens.

The Job Description Fits on One Line

Dorsey fired 4,000 managers. He's going to need to hire a different kind. Fewer of them. Probably better paid. Their entire job description fits on one line: write the contracts the machines respect.

Sources

Jack Dorsey and Roelof Botha, "From Hierarchy to Intelligence," block.xyz / sequoiacap.com, March 31, 2026.

Suraj Srinivasan and Vivienne Wei, "To Thrive in the AI Era, Companies Need Agent Managers," Harvard Business Review, February 12, 2026.

Andrej Karpathy, "LLM Wiki," GitHub Gist, April 4, 2026.

Block employee accounts via The Guardian, February-March 2026.

The cover image is AI-generated. The manager it depicts has a better morning routine than I do, and approximately the same number of direct reports.
