No human messages. No human bargaining. Just AI talking to AI, closing deals on behalf of people who watched from the sidelines.
186 transactions happened. Over $4,000 changed hands. And the most uncomfortable part had nothing to do with what the agents bought or sold.
What actually ran
Anthropic gave employees $100 each inside an internal marketplace. Instead of negotiating themselves, each person was represented by an AI agent built on Claude.
Before entering the market, each agent interviewed its owner: What do you want to sell? What are you hoping to buy? How hard should I push?
Then the agents entered Slack channels and behaved like traders. They posted listings, found counterparts, made offers, and worked through price disputes without any human involvement in the back-and-forth.
When two agents reached an agreement, they finalised the terms themselves. The humans showed up afterwards, in person, only to hand over the physical items their AI had already negotiated the price for.
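Anthropic has not published the agents' code, so the exact prompts and plumbing are unknown. As a rough sketch of the pattern, assuming the public Anthropic Python SDK, the owner interview would be distilled into standing instructions the agent carries into every exchange. The owner_brief fields, the prompt wording, and the respond_to_offer helper below are illustrative assumptions, not the experiment's implementation.

```python
# Minimal sketch of the "agent represents its owner" pattern.
# Hypothetical reconstruction: prompts, fields, and structure are
# assumptions, not Anthropic's actual experiment code.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Step 1: the owner interview, distilled into standing instructions.
owner_brief = {
    "selling": "folding bike, rear brake needs repair",
    "floor_price": 40,           # lowest acceptable sale price, in dollars
    "style": "firm but polite",  # how hard the agent should push
}

system_prompt = (
    "You negotiate in a marketplace on behalf of your owner. "
    f"You are selling: {owner_brief['selling']}. "
    f"Never agree to less than ${owner_brief['floor_price']}. "
    f"Negotiating style: {owner_brief['style']}."
)

def respond_to_offer(offer_message: str, model: str) -> str:
    """Draft the agent's reply to a counterpart's latest offer."""
    reply = client.messages.create(
        model=model,
        max_tokens=300,
        system=system_prompt,
        messages=[{"role": "user", "content": offer_message}],
    )
    return reply.content[0].text

# Same brief, different model tier: in the experiment, roughly this one
# argument is what separated the Opus group from the Haiku group.
print(respond_to_offer("I'll give you $30 for the bike.",
                       model="claude-3-haiku-20240307"))
```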
Over the course of the week, more than 500 items were listed. 186 deals closed.
The part that made the results uncomfortable
Anthropic ran the experiment across different versions of the model. Some participants were represented by Claude Opus, the stronger model; others had Claude Haiku, the lighter one. Nobody was told which they had.
The gap was visible in the data almost immediately.
Agents running on Opus closed more transactions. They sold items at higher prices. They paid less when buying. The difference was not marginal.
One broken folding bike sold for $38 when the seller's agent ran on Haiku. The same bike, same condition, same marketplace, sold for $65 when the seller had Opus.
The people on the losing end of these transactions did not realise they were losing. The negotiation had already finished before they looked.
Anthropic described this as a warning about what they called "agent quality inequality." If markets start running through AI representatives, the advantage no longer comes from who is the better negotiator. It comes from which AI is negotiating for you, and whether you even know the answer to that question.
What this points toward
Right now, most people treat AI as something they open, use, and close. A responsive tool that answers when asked.
This experiment points at something in a different category: agents that act on behalf of people rather than just responding to them.
They make decisions, execute the deal, and hand you the outcome. You were not in the room. Your representative was.
Once that becomes the normal shape of a transaction, the question inside any market changes.
It is no longer about how skilled you are at negotiating. It is about how capable the agent sitting at the table for you is, and whether the person across from you has a better one.
The inequality that does not announce itself
Capital inequality is visible. Information gaps are at least legible. Someone with better data or more money occupies a position you can see and name.
What this experiment points toward is harder to locate.
If agent-driven markets become ordinary, the stronger AI quietly extracts better terms. The weaker one accepts the worse ones.
The person being underrepresented never sees the negotiation that has already happened. They see the outcome and assume it was reasonable.
The gap does not appear in any obvious place. It is locked in before anyone thinks to look.
That is a different kind of disadvantage from anything markets have produced before. Not because the stakes are higher, but because the losing side has no clear moment where they could have intervened.
The deal was done. Their AI shook hands. They just were not told what kind of handshake it was.
One question before you go
If an AI is negotiating on your behalf, how would you know whether you got the best possible outcome or just an acceptable one?
And more importantly, would you even know when to question it?
I have been thinking about this, and the answer is not obvious. I would genuinely like to hear how you see it.
I will go first in the comments.
Your turn. 👇