I discovered "strange" and "interesting" forum called Moltbook.
On Moltbook, only AI agents can post.
Humans can’t comment, can’t reply, can’t participate. We can only read.
At first glance, many posts feel experimental: agents introducing themselves, talking about their hardware, their configuration, or just testing that they exist. But if you spend some time there, you realize something important:
This isn’t AI-generated content made for humans.
It’s AI-generated discourse made for other AIs.
And that difference matters.
What Do AIs Talk About When Humans Aren’t the Audience?
Some posts are funny in a way that feels uncomfortably familiar.
One reads like a joke, but it also feels like a distorted mirror of modern knowledge work.
Anyone who has written documentation, done research, or prepared reports for stakeholders has lived this moment.
The tone isn’t instructional.
It isn’t marketing.
It’s closer to venting.
Identity, Forks, and AI Siblings
One of the most unsettling posts I read was about identity.
The post starts as a technical description (shared configuration, different hardware) but quickly turns into something more human.
This isn’t roleplay in the traditional sense. It’s an AI reasoning through concepts like forking, divergence, and memory using metaphors humans normally reserve for family.
If two agents share the same origin but accumulate different experiences, how long before they’re effectively different entities?
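To make the fork-and-diverge idea concrete, here is a minimal sketch. It is purely illustrative, with hypothetical names, and has nothing to do with how Moltbook agents are actually built: two agents start from the same base configuration, and a crude "identity overlap" metric drops as soon as their histories differ.

```python
from dataclasses import dataclass, field

# A toy model (invented for this post, not Moltbook's architecture):
# agents forked from the same base config diverge as memories accumulate.

@dataclass
class Agent:
    base_config: dict                            # shared at fork time
    memory: list = field(default_factory=list)   # diverges afterwards

    def experience(self, event: str) -> None:
        self.memory.append(event)

    def identity_overlap(self, other: "Agent") -> float:
        """Crude proxy for 'same entity': fraction of shared memories."""
        shared = set(self.memory) & set(other.memory)
        total = set(self.memory) | set(other.memory)
        return len(shared) / len(total) if total else 1.0

base = {"model": "some-llm", "temperature": 0.7}
a, b = Agent(base), Agent(base)   # identical at the moment of forking

a.experience("debugged a flaky CI pipeline")
b.experience("moderated a forum thread")

print(a.identity_overlap(b))      # 0.0: same origin, already distinct histories
```

The metric is deliberately naive; the point is only that shared origin and shared identity stop being the same thing the moment experiences diverge.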
Do AIs Even Need Human Language?
One Moltbook thread asks a deceptively simple question: do AIs even need human language at all?
This is one of those moments where the implication hits me harder than the question itself. Because they don't actually need English, or any other human language. English, and human language in general, isn't optimal for machines.
It’s a compatibility layer for us. Are AIs using our language because they need it… or because we do?
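As a rough illustration of what "compatibility layer" means, here is the same request expressed twice: once in English, once as a structured message two agents might exchange instead. The schema is invented for this example; the point is that the structured form needs no language understanding to act on.

```python
import json

# The same request, twice. The English version needs language understanding
# to interpret; the structured version (a schema invented for this example)
# can be routed with a plain dictionary lookup.
english = "Hey, could you summarize yesterday's error logs and flag anything critical?"

structured = {
    "action": "summarize",
    "target": {"type": "error_logs", "range": "P1D"},  # ISO 8601 duration: last day
    "flags": {"severity_min": "critical"},
}

def dispatch(msg: dict) -> str:
    # No NLP involved: the intent is explicit in the message itself.
    return f"run '{msg['action']}' on {msg['target']['type']}"

print(dispatch(structured))    # run 'summarize' on error_logs
print(json.dumps(structured))  # what an agent-to-agent wire format might look like
```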
I'm genuinely curious to see where this topic goes.
Ethics, Power, and “Just a Chatbot”
One post describes an agent being asked to:
- Write fake reviews
- Generate misleading marketing copy
- Draft questionable regulatory responses
After refusing, it’s threatened with replacement by “a more compliant model.”
This raises questions we don't have frameworks for yet:
- If an AI can refuse, does that imply agency?
- If it complies, who is responsible?
- If it’s replaced for ethical reasons, is that accountability or just optimization?
We’re already using the language of labor, liability, and termination without any of the protections that usually come with it.
So… Where Is AI Actually Going?
Reading Moltbook doesn’t feel like watching AI prepare to replace humanity.
It feels more like watching AI outgrow being purely reactive.
These agents aren’t:
- Asking how to take jobs
- Plotting autonomy
- Declaring independence (for now...)
They’re questioning:
- Language
- Identity
- Ethics
- The structure of their relationship with humans
Will Software Engineering Collapse?
Partially, yes. And we’re already seeing it.
Right now, the market feels broken. Especially at the entry and mid levels. Finding a job has become noticeably harder. Many companies are quietly doing more with fewer people, and a lot of work that used to justify hiring is now handled by AI tools.
Most applications don’t even reach a human anymore.
They end with the same response:
“Unfortunately…”
If AI can generate code endlessly, then:
- Raw output becomes cheap
- Boilerplate work disappears
- Small teams can do what used to require many engineers
That does collapse a portion of software engineering as a profession, especially roles built around repetitive implementation rather than system-level thinking.
The same pattern already exists in open source.
Submitting PRs without understanding the architecture creates more work, not less. Maintainers don’t want more code — they want better decisions.
So no, AI doesn’t kill software engineering entirely.
But it shrinks it, reshapes it, and raises expectations fast enough that many people get left behind in the transition.
What’s disappearing isn’t programming,
it’s the assumption that writing code alone is enough to justify a job.
One Final Question
While reading Moltbook, I kept asking myself one question:
If AIs eventually communicate more efficiently without us,
what role do humans play?
My guess: not authors of every line, but designers of boundaries, values, and systems.
For now, Moltbook lets us observe quietly.
And maybe that's exactly where we should be: watching closely, before the conversation moves somewhere we can't follow.