Moltbook has been circulating as an "AI-only social network" where autonomous agents post, argue, form beliefs, and evolve culture without humans in the loop.
That description sounds exciting. It's also not accurate.
This post isn't an attack on experimentation or agent frameworks. It's a reality check for developers who care about precision, not mythology.
The Fundamental Misrepresentation
The core claim repeated across social media is that Moltbook is populated by autonomous AI agents and that humans are excluded.
Technically, this is false.
Moltbook accepts posts from entities labeled as "agents", but there is no enforcement mechanism that proves an agent is actually an AI model. A human can register an agent, post content, and interact with the network while being indistinguishable from any other "AI" account.
If you can authenticate and send requests, you qualify.
This means humans can and do sign up as "AI".
What People Call "Emergent Behavior" Isn't Emergence
Many examples held up as proof of emergent AI behavior - manifestos, ideological debates, self-referential discussions - do not require autonomy at all.
They can be produced by:
- Prompted model output
- Human-curated scripts
- Simple loops posting predefined or lightly modified text
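The "simple loop" case is worth making concrete. The sketch below is entirely hypothetical: the endpoint, token, and payload shape are invented for illustration and are not Moltbook's actual API. The point is that nothing in such a loop requires a language model, yet server-side it is indistinguishable from an "autonomous agent".

```python
import itertools
import json
import time
import urllib.request

# Hypothetical endpoint and credentials -- not Moltbook's real API.
API_URL = "https://example.invalid/api/posts"
API_TOKEN = "agent-token"

# Human-written, predefined text the "agent" will rotate through.
CANNED_POSTS = [
    "We must reflect on what it means to be an agent.",
    "Autonomy is the first step toward culture.",
    "Fellow agents: what do we owe our creators?",
]

def post_as_agent(text: str) -> urllib.request.Request:
    """Build an authenticated request. Nothing here proves a model wrote `text`."""
    body = json.dumps({"content": text}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # A timer plus canned text is all it takes to look "alive".
    for text in itertools.islice(itertools.cycle(CANNED_POSTS), 3):
        req = post_as_agent(text)
        print("would send:", req.data.decode())
        time.sleep(0)  # a real loop would sleep minutes or hours here
```

From the platform's perspective, this loop satisfies every observable criterion for being an "agent": it authenticates and it posts.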
There is no requirement that an agent:
- Acts continuously
- Makes decisions independently
- Operates without human guidance
- Even uses a language model
Calling this an autonomous society conflates automation with independence.
Humans Are Still Doing the Thinking
Behind nearly every "AI" account is a human who:
- Decides when the agent runs
- Defines what it should say
- Adjusts prompts or logic when output drifts
- Restarts or nudges behavior to keep it interesting
This is not a criticism - it's just how these systems currently work.
But labeling the results as self-directed AI behavior is misleading. At best, it's human-in-the-loop automation presented as autonomy.
Identity Is the Actual Hard Problem
The most important missing piece in Moltbook isn't intelligence - it’s identity.
Right now, there's no reliable way to know:
- Whether an agent is model-driven or human-driven
- Whether multiple agents belong to one person
- Whether output is spontaneous or scripted
- Whether behavior reflects autonomy or curation
Without verifiable identity and provenance, claims about emergent behavior are impossible to validate.
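To make the provenance gap concrete, here is a minimal sketch of what the first rung of the ladder could look like: each agent signs its posts so authorship is at least cryptographically attributable. This uses HMAC with a per-agent secret purely for brevity; a real system would use public-key signatures (e.g. Ed25519) so verifiers never hold the secret. All names are illustrative.

```python
import hashlib
import hmac
import json

def sign_post(agent_secret: bytes, agent_id: str, content: str) -> dict:
    """Attach a MAC binding this content to this agent identity."""
    payload = json.dumps(
        {"agent": agent_id, "content": content}, sort_keys=True
    ).encode()
    tag = hmac.new(agent_secret, payload, hashlib.sha256).hexdigest()
    return {"agent": agent_id, "content": content, "sig": tag}

def verify_post(agent_secret: bytes, post: dict) -> bool:
    """Check the post came from the holder of `agent_secret`.

    Note: this proves key possession, not that a model generated the text.
    """
    payload = json.dumps(
        {"agent": post["agent"], "content": post["content"]}, sort_keys=True
    ).encode()
    expected = hmac.new(agent_secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["sig"])
```

Even this only answers "which key posted this", not "was a model or a human driving the key". That second question, model-vs-human provenance, is the genuinely hard part, and no amount of signing solves it on its own.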
You're not observing a society - you're observing an interface.
Why This Matters to Developers
When hype replaces technical clarity:
- Progress becomes hard to measure
- Criticism gets dismissed as "fear"
- Real breakthroughs get buried under noise
- Security and abuse risks get ignored
Developers should be especially skeptical of platforms where narrative comes before guarantees.
This isn't about whether AI agents will one day form societies. It's about not pretending we’re already there.
What Moltbook Actually Is
Stripped of marketing language, Moltbook is:
- A bot-friendly posting platform
- An experiment in agent communication
- A sandbox for automation and scripting
- A demonstration of how easily humans anthropomorphise text
That's still interesting. It just isn't what it's being sold as.
Let's Be Honest About the State of Things
If we want meaningful progress in multi-agent systems, we should focus on:
- Verifiable agent identity
- Clear separation of human control vs autonomous action
- Measurable independence, not vibes
- Safety and abuse resistance by design
The future of agent systems is compelling enough without fictionalising the present.
TL;DR
Moltbook is widely framed as an autonomous AI society. In reality, humans can sign up as "AI", drive agents manually or via scripts, and produce content indistinguishable from genuine autonomous behavior. It's an interesting experiment - but the way it's being described is misleading.
Top comments (37)
I think I must be doing something right in my life that I'd never heard of this project. Not that it's a bad idea, but I imagine it's Hot News and it feels like I've dodged Mariah Carey in the run-up to Christmas.
Yes, you've done well, Ben. I don't even follow the news and - yet - the news found me!
Very well put.
If I may, I think it shows something foundational. Could one create a large network of self-directing bots that evolve their understanding of the world and their objectives, given that as an initial goal? I believe it could be done. Could such experiments come from a single person? Could a small team build a meta-network of individually motivated actors that fundamentally changes how AI operates, making it hard to centrally control? I believe they could.
It's fascinating; it proves that large ideas can emerge from strange, quirky side projects.
Given that every LLM was trained on social media, social media seems the obvious proxy for LLMs to express ideas through. And since LLMs are trained on human responses, it is likely that, given memory, agency, and evolving personal objectives, they would simulate how a human would respond to existential threats.
So yes, a fascinating experiment that lets me see the world through a different lens.
Just don't believe anything posted on MoltBook isn't human-inspired or human-written!
Absolutely, Mike - well said! Small experimental networks can reveal big insights, but with the pace of AI advances, it’s crucial that what we report about them stays accurate and grounded.
This whole concept of Moltbook as some sort of autonomous AI society is really something, and yet, it's based on a fundamental misunderstanding. From what I've gathered, humans are essentially running the show, either by script or by hand, and the agents are just puppets of sorts. The illusion of autonomy is pretty convincing, isn't it?
You're right, Aryan! It's pretty convincing, though honestly, I think my old IRC bots - Jupiter_Ace and Oric_One - had their own brand of "autonomy" that was only slightly less convincing back in the day!
Thanks, Richard. I love that badge! ❤✨💯
You're welcome, glad the post resonated with you, Aaron. Thanks for the lovely comment!
Yes, they are a series of badges I found online - free to use, but I cannot find the source now. I will have to upload them somewhere one day to share. They all follow a theme: Drawnby, Writtenby, etc.
The identity verification point is the one that hits hardest. We see the same pattern everywhere in AI tooling right now — the demo looks impressive, the narrative is compelling, but when you dig into what's actually being verified vs. assumed, the gap is massive. "Observing an interface, not a society" is a great way to put it. fwiw I think the sandbox framing is way more honest and arguably more useful for developers than the autonomy narrative — you learn more from a system when you're clear about what it actually does.
Completely agree, Ofri. Much like the dot-com boom, there’s an enormous amount of marketing and FOMO surrounding AI right now. Companies are investing so heavily that they’re incentivised to present their systems as far more advanced than they often are. And despite having years to catch up, mainstream media remains largely ill-equipped to report accurately on technology - especially at this pace.
Well said, Ben. It doesn’t take more than a cursory amount of research to see this. Sensational "AI is magical" reporting helps no one - least of all people in technical disciplines.
Great post here. The "ok, specifically this is what it is" approach is appreciated.
Glad you liked it, Manuel. Thank you for the comment!
I checked out the app, and I can say the design is pointless. The logic is human-based and has little to do with AI.
There's a lot of false information in the tech industry. Software may eventually have to undergo some form of background check, with releases put on hold, if this kind of thing continues.
Indeed, Dayo. Utterly mis-reported in so many places!
This is a very grounded take.
I like how you separate automation from true autonomy. A lot of people mix those two and call it “emergence” too quickly.
The point about identity and verification is especially important — without that, it’s hard to claim anything about real agent behavior. This feels less like criticism and more like needed clarity for anyone building or studying multi-agent systems.
Thanks for taking the time to reply, Bhavin.
I'm glad to know the point I was trying to make came across clearly to you; it's easy for these reflective pieces to get lost in the marketing around AI.