There is a small veterinary clinic in south Austin called Manchaca Road Animal Hospital. It has around 700 Facebook followers, a real address, a real phone number, a real team photo. Over the past two months I have been tagged in mass-comment posts on the clinic's wall by accounts running templated impersonation: "those who have not yet reserved our Alumni t-shirt, please reserve it quickly." Animal hospitals don't have alumni. People who once took a cat there are not graduates. But "alumni" is an affinity word, the script needed an affinity word, and the print-on-demand model means the mistakes cost nothing in inventory. If even one in a thousand notifications generates a sale to someone who half-remembers the clinic and assumes their old vet started a fundraiser, the unit economics work. They only work because no one is doing the check. The check is more expensive than the mistakes it would catch.
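The unit-economics claim can be made concrete with a back-of-envelope sketch. Every number below except the one-in-a-thousand conversion rate is a hypothetical assumption, not a figure from any real campaign:

```python
# Back-of-envelope expected value for the mass-comment t-shirt scam.
# All figures are assumptions for illustration, except the post's
# one-in-a-thousand conversion rate.
notifications = 100_000          # tagged accounts per campaign (assumed)
conversion = 1 / 1000            # one sale per thousand notifications
margin_per_shirt = 12.0          # print-on-demand profit per sale (assumed)
cost_per_notification = 0.0001   # near-zero marginal cost per comment (assumed)

revenue = notifications * conversion * margin_per_shirt
spend = notifications * cost_per_notification
print(revenue, spend)  # 1200.0 10.0 — profitable without any verification step
```

The asymmetry is the point: the spend column stays near zero however sloppy the targeting is, so there is no economic pressure to add the check.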
The same logic runs everywhere now. I've collected cold emails from agents addressed to me as a manager at a London fintech consultancy I left in 2016. I now live in Los Angeles. The agents did the research, hit the wrong target precisely, and went to bed. The cleaning industry's collective spreadsheet says I'm an office cleaning prospect at a company I haven't worked at in a decade. That isn't one bad agent. It's multiple operators working from the same broken substrate, none of whom paid the cost of verifying it.
Then it shows up on the production side. Open Gemini, ask for "a story in 10 sentences," run it twice. Both runs return the same opener: "The old lighthouse keeper, Elias, polished the brass railing, his weathered hands moving with practiced ease." The next nine sentences are also identical, beat for beat. Call it mode collapse, or the default basin of instruction tuning: when the prompt does no work, the model returns to a small set of safe, high-scoring archetypes. A lighthouse keeper named Elias Thorne is one of those.
It isn't just Gemini. I tested eight models from unrelated labs: DeepSeek V4, Qwen 3.5, Gemma 4, Kimi K2.6, Grok 4.3, and more, at default temperature, same prompt. Four hit the lighthouse keeper. Two of those named him Elias. Different training pipelines, different architectures (dense and MoE), different parameter counts. Same basin.
The pattern doesn't stay in the chat window. Google Trends for "Elias Thorne" is flat from 2015 through late 2025 and spikes to its all-time peak in early 2026. The same name now appears as a byline on Amazon. Under "Elias Thorne" the Kindle store lists an alt-medicine cancer protocols handbook, a 2026 YouTube algorithm guide, a book on Greek mythology, and a psychological thriller novella. No human writes all of those.
The handbook ranks #18 in Oncology Nursing, #32 in Leukemia, and #51 in Lymphatic Cancer. People are finding it through the categories they were searching in. The mode-collapsed name from the chat window is now selling cancer advice to people with cancer.
The one-way ratchet underneath this is what gets me. Reputations established before the substrate got polluted are structurally more valuable than any reputation you try to establish after. A Stack Overflow profile from the 2010s, a journalist's byline at a publication that predates generative imitation, a LinkedIn endorsement from a named colleague at a named firm in 2014: all hard to fake retroactively, harder still to manufacture forward. Whoever built reputation capital before this keeps it. Whoever didn't will find the price has gone up.
The cost of producing all of this is approaching zero. The cost of doing it well hasn't moved. The work not done doesn't disappear; it gets pushed onto everyone downstream, where it lands as an annoyance for a careful reader and a danger for someone who can't tell the difference.
Full piece with the screenshots and methodology: https://danielmay.co.uk/posts/cheap-agents-alumni-shirts-and-elias-thorne/

Top comments (1)
💬 How do you think the rise of agentic content (less tastefully: slop) on the internet affects platforms, and the humans trying to build real credibility on top of them?