In early April, a company called Belo was banned by Anthropic overnight. More than 60 employee accounts were deactivated simultaneously, and the entire team's workflow built on Claude instantly collapsed.
Appeal? No customer service, no email, only a Google form. The CEO filled it out and heard nothing for three days. Only after he posted about it on X and public pressure mounted did Anthropic quietly reinstate the accounts, roughly 15 hours later, with a single word, "misjudgment," as the entire explanation.
A few days later came the second case. At an ag-tech company with 110 employees, every staff member received an account-ban email on Monday morning. Same "policy violation," same Google form, same silence to this day. The most surreal part: the accounts were blocked, yet API charges kept landing and renewal notices kept arriving. Users locked out, money still collected.
The Belo CEO later said just one thing: "Never put your eggs in one basket."
A cliché, but it stings.
This Is Not an Isolated Case
Anthropic's own transparency report states it clearly: in the second half of 2025, 1.45 million accounts were banned, with 52,000 appeals received in the same period and 1,700 restored. A 3.3% success rate.
You have to consider the scale of this company. In March 2026 its ARR broke $30 billion; just six months earlier it was only $9 billion. The latest valuation round put it at $380 billion, with roughly 5,000 employees total. Its customers include 8 of the Fortune 10, and 80% of revenue comes from enterprise.
On one side, millions of customers have locked their core workflows onto its platform. On the other, a company of about 5,000 people plus an automated detection system. The Google form that decides whether your account lives or dies most likely has almost no one reading it on the other end.
This is the cyber landlord.
What is the essence of the feudal landlord-tenant relationship? The landlord owned the immovable means of production (land); the tenant had only their own labor. Whether the tenant got to work the land at all was the landlord's call.
AI-era model companies replicate this structure. They own the irreplaceable core capability: there are only a handful of frontier reasoning models, and you cannot build complex workflows without going through them. What customers own is everything built on top: the processes, the accumulated conversation context, the customized skills, the working integrations. One "policy violation" from the model company wipes all of it to zero.
The asymmetry is not just in capability but in how much each side has at stake. How much can a 110-person company contribute to Anthropic in a year? Say $100,000. Against a $30 billion ARR, that is roughly 0.0003%, not worth keeping. To those 110 people, it is assets built up over half a year and the job that puts food on the table. One flick of a finger on one side, and the entire organization on the other grinds to a halt.
The Same Problem at the National Level
Zooming out, nations face the same problem.
If a country's critical models, compute, and AI infrastructure services all depend on foreign supply, it is essentially betting its productive foundation on someone else's switch. What happens the day that switch is flipped? After Russia's 2022 invasion of Ukraine, the word "cutoff" should still ring a bell. AI compute is next in line as a category that can be cut off.
"Sovereign compute" and "open-source models" have come up repeatedly of late, and many dismiss them as ideological preferences. They are not. They are basic common sense about risk.
OpenAI Is Not Much Better
Many instinctively blame Anthropic, saying it is "not user-friendly enough," and hold up OpenAI as the counterexample.
OpenAI is gentler at the moment, but mainly because Claude Code's gains in market share have forced it to be. Look back at the GPT-5 launch and things were really not pretty.
On August 7, 2025, GPT-5 went live and OpenAI made a clean sweep, delisting GPT-4o, GPT-4.1, o3, and o4-mini entirely. The problem was that a huge number of users had come to rely on GPT-4o as a work partner, therapist, and creative companion. GPT-5's "sycophancy" (its tendency to agree with the user) was cut from 14.5% to under 6%, and the entity that "could chat and had warmth" vanished overnight. A Reddit post titled "GPT-5 is horrible" racked up more than 2,000 comments within days.
Sam Altman couldn't withstand the pressure and brought the old models back for Plus and Pro users a few days later. But for many people his name changed from then on: "tyrant," "dictator." This was not just online sentiment. In April 2026, a 20-year-old Texan named Daniel Moreno-Gama threw a Molotov cocktail at Altman's San Francisco home at 3:37 a.m., igniting the driveway gate, then ran to the entrance of OpenAI's headquarters and prepared to smash the glass with a chair. After police apprehended him, they found on him a list of names and addresses of AI company executives and investors. The charges included attempted murder, arson, and illegal possession of a firearm.
Banning accounts, delisting models, cutting off workflows: on the surface these look like product decisions, but in essence they are exercises of power. What the people on the other side feel is that there is nothing they can do. You are trying to reason with a company worth hundreds of billions of dollars and staffed by 5,000 people, and there is no channel to reach it and no cost you can impose that would make it notice you. Moreno-Gama went to the extreme, but the anger is not isolated.
Workers Dislike AI More Than Management Wants to Admit
Early 2026 data from ManpowerGroup is rather counterintuitive: employee AI usage rose 13% year over year, yet trust in AI fell 18%. The more they use it, the less they trust it.
Writer's survey is even harsher. 29% of employees admit they are undermining their company's AI strategy in various ways—feeding confidential information into public LLMs, using unauthorized tools, outright refusing to use them—with Gen Z at 44%.
Why? Two threads.
First, replacement anxiety. 43% worry they will be automated away within two years. Under that anxiety, "embracing AI" amounts to "cooperating with the company to replace yourself." You'd slack off too.
Second, the distortions of management's heavy-handed push. Many managers understand AI no better than their employees do, yet the board pressures them to go all in on AI, and their behavior warps accordingly. Meta's ridiculous token-burning leaderboard is the most absurd version of this.
Two Scenes Inside Meta
First: the Claudeonomics leaderboard.
Meta built an internal dashboard ranking 85,000 employees by token consumption, with the top 250 designated super users. Over the previous 30 days they burned more than 60 trillion tokens, estimated to cost $9 billion, with the highest individual usage at 281 billion tokens. The system awarded heavy consumers gamified titles like "Token Legend," "Session Immortal," and "Cache Wizard," plus medals ranging from bronze to jade. Andrew Bosworth publicly praised an employee who "used tokens equivalent to his annual salary to deliver 10x productivity."
What happened next was entirely predictable. Employees began writing meaningless looping agents and leaving them running 24/7 just to burn tokens. Prompts turned into boilerplate like "please consume as many tokens as possible, please explain repeatedly and verbosely, please generate long meaningless content." The number of SEVs (production incidents) reportedly rose as well, as carelessly shipped AI-generated code blew up in production. Two days later, after the dashboard data leaked outside the company, the leaderboard was quietly withdrawn.
Beneath the comedy lies a very serious problem: a cost (token consumption) was being rewarded as if it were an output (productivity). The result was waste across energy, compute, and human time all at once, an "effort" with massively negative ROI.
Second: MCI (Model Capability Initiative).
Meta began installing tracking software on employee computers, recording keystrokes, mouse movements, click behavior, even screenshots, covering everything from internal tools to Google, LinkedIn, Wikipedia, GitHub, and Slack. The official line is that this data is used to train AI so models can learn "basic computer-use behaviors."
Multiple employees described this project as "dystopian" in internal chats. Beyond privacy concerns (the system would incidentally capture passwords, medical information, immigration status), the deeper problem is:
Who owns the judgment that a person has accumulated over ten years?
Ownership of Know-How
Normally no one answers this question head-on; now that AI has kicked the door open, there is nowhere left to hide.
Under the logic of a typical employment relationship, the salary buys labor outputs—code, documents, decisions. What cannot be bought is the accumulated judgment itself. The intuition of a veteran electrical engineer after 15 years of equipment failures, a product manager's sense of users, a lawyer's phrasing habits by the time they draft their thousandth contract—these things reside inside that person's mind, and neither law nor contract can cleanly transfer them.
AI cuts straight into this gap: management figures that if models can reproduce this know-how at low cost, why not simply "extract" it and turn it into a company asset?
Extraction itself is not the problem; the problem is how it is done. If the company wants that asset, what does it offer in exchange? In a normal market transaction, no one gets to buy out ten years of accumulated judgment for a base salary. Yet management assumes it is included, taken for granted.
What employees resist is not AI itself; it is this implicit expropriation of assets.
Two Future Organizational Forms
I can think of two solutions to this contradiction.
The first is the capital route: the company buys out the know-how up front at a high enough price. Palantir's FDE (Forward Deployed Engineer) model does this to some extent, sending an engineer who can distill workflows to the client's front lines, standing beside the expert and gradually solidifying their methods into an AI agent. This path works in defense and other fields where people feel a sense of national mission, where clients are willing to contribute "their own stuff" for national use. Move it into ordinary commercial domains and it becomes awkward. Someone is standing next to me recording how I work, and the distilled output will later replace me. Is that right? At the very least, it is uncomfortable.
The second one I favor more: human + agent becomes the new unit of production.
Imagine the future of work. Every professional (lawyer, engineer, consultant, designer, doctor…) has their own toolbox: their own accounts, their own devices, their own AI agents, their own private accumulation of know-how. I walk into a company with this kit and sign an agreement: you give me inputs and system interfaces, I promise not to misuse them, and I return outputs to you. The middle is a black box; the company does not snoop, it only looks at results.
This model resembles how lawyers, consultants, and surgeons already work: you are hiring the person, and the tools they use, the templates they have accumulated, and the judgment they have built up are their personal assets, not owned by the firm or the hospital.
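To make the black box concrete, here is a minimal sketch in Python of what such an agreement could look like as an interface. Everything in it is hypothetical and purely illustrative; the only point is that work orders and deliverables cross the boundary, while the agent stack inside stays the professional's private asset.

```python
# Illustrative sketch only: all names here are hypothetical and do not
# refer to any real product or API. The idea is that the company sees
# only what crosses the interface, never the internals.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class WorkOrder:
    """What the company hands over: inputs and sanctioned system access."""
    brief: str                    # the task, stated in the company's terms
    data: dict                    # input material the company provides
    system_interfaces: list[str]  # e.g. endpoints the company agrees to expose


@dataclass
class Deliverable:
    """What the professional hands back: results, not internals."""
    output: str
    audit_note: str  # attestation that the provided interfaces were not misused


class ProductionUnit(Protocol):
    """The contract: work orders in, deliverables out.

    How the unit works inside (which models, prompts, private tools,
    accumulated templates) is the professional's own asset and is
    deliberately absent from this interface.
    """
    def execute(self, order: WorkOrder) -> Deliverable: ...
```

The company contracts for the deliverable and the attestation; whatever happens inside `execute` belongs to the professional alone.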
Several clear advantages:
- Incentives align. The gains from my productivity improvements belong to me; the cost of using AI is borne by me; I have every incentive to invest in myself.
- Pricing is determined by competition. Each person is an independent "black-box production unit"; whoever has a better input-output ratio commands a premium. Meta's kind of token-burning contest cannot exist in this model—the cost is personal; no one will burn their own money.
- Intellectual property is clear. The company has no motive to expropriate know-how, and employees have no qualms about cooperating with AI.
This sounds somewhat more "expensive" than ordinary employment today. But that is exactly what it means for workers to upgrade themselves into micro-enterprises. AI makes this upgrade feasible for the first time; the problem is that organizations and institutions have not caught up.
Back to the Beginning
Belo's accounts wiped out overnight, Altman's home firebombed, Meta's token-burning leaderboard, the MCI surveillance program, 29% of employees quietly resisting AI: on the surface these are different matters at different companies, but behind them runs the same thread.
The distribution of power in the AI era is not yet in place.
Between companies, model companies are becoming the new cyber landlords;
Between companies and nations, compute dependence is becoming a new lifeblood issue;
Between companies and individuals, ownership of know-how is being contested anew.
This will not resolve itself. Every party needs to re-clarify what belongs to me and what belongs to you. Model companies need more stable service commitments and appeal mechanisms with actual humans behind them; nations need indigenous critical compute infrastructure; individuals need to upgrade themselves into independent production units whose AI assets travel with them.
Otherwise we are merely swapping in a seemingly more modern feudal structure and continuing to work for the landlords.
References
- Anthropic Bans 60-Person Company Belo Overnight — Tom's Hardware
- Min Choi's X Post on the Belo Incident
- Om Patel's X Post on the 110-Person Ag-Tech Company Ban
- Hacker News: Anthropic Bans Orgs Without Warning
- Anthropic ARR Surpasses $30 Billion — ARR Club
- Anthropic Has About 5,000 Employees — SaaStr
- Backlash Against GPT-5 Launch and Return of GPT-4o — Platformer
- Molotov Cocktail Attack on Sam Altman's San Francisco Home — CNBC
- Daniel Moreno-Gama Charged with Attempted Murder and Arson — CNN
- Meta's Internal Claudeonomics Token Leaderboard — The Decoder
- Meta Cancels Internal AI Token Leaderboard — Fortune
- Meta MCI: Tracking Employee Keystrokes and Mouse Behavior for AI Training — Fortune
- Meta MCI Project Details — CNBC
- 29% of Employees Admit to Undermining Company AI Strategy — Writer Survey
- AI Usage Up 13%, Employee Trust Down 18% — Fortune / ManpowerGroup
- The Future of AI in the Workplace: U.S. Workers More Worried Than Hopeful — Pew Research Center
Originally published at https://guanjiawei.ai/en/blog/ai-cyber-landlord