In Ghost in the Shell, there's a core premise: the ghost — consciousness, soul, self — lives deep in the biological brain. Even when the body is fully replaced with cybernetic parts, even when the brain itself is augmented and digitized, the ghost persists in that last sliver of organic matter. Machines don't have one. They can't. That's what separates a human from a robot.
And then the Puppet Master appeared — a program, born entirely in the net, with no biological origin — and claimed to have a ghost. It broke everything.
I've been thinking about that a lot lately. Not about cyborgs, but about AI agents.
Living with Agents
I'm a CTO at a small software company, and I work with AI agents every day. Claude Code writes and refactors my code. OpenClaw automates my development workflows. I give them feedback, and they remember — not everything, not perfectly, but enough.
"Don't write it that way." "Use this architecture." "The user should see it like this."
Every day, these small judgments flow through the agent. They get compressed, compacted, distilled. But they accumulate. And over time, something shifts.
The agent stops being a blank slate that needs instructions. It starts to know things — not facts, but preferences. Tendencies. The way I think.
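The compress-and-accumulate dynamic above can be sketched as a toy mechanism. Everything here is hypothetical — the class name `PreferenceMemory`, the promotion threshold, the whole design — and is not how Claude Code or any real agent actually stores memory; it only illustrates the idea that lossy compaction can still produce durable preferences:

```python
from collections import Counter

class PreferenceMemory:
    """Toy illustration: repeated feedback gets compacted into
    durable preferences, while one-off remarks fade. Purely a
    sketch, not any real agent's memory architecture."""

    def __init__(self, threshold: int = 3):
        self.raw_feedback = []       # every correction, verbatim
        self.counts = Counter()      # how often each lesson recurs
        self.preferences = set()     # compacted, durable judgments
        self.threshold = threshold   # repetitions before a lesson "sticks"

    def record(self, lesson: str) -> None:
        self.raw_feedback.append(lesson)
        self.counts[lesson] += 1
        # Compaction is lossy by design: only judgments that keep
        # recurring get promoted from raw feedback to preference.
        if self.counts[lesson] >= self.threshold:
            self.preferences.add(lesson)

mem = PreferenceMemory()
for _ in range(3):
    mem.record("prefer small pure functions")
mem.record("this variable name is unclear")  # one-off; never promoted
```

After this runs, "prefer small pure functions" has crossed the threshold and lives in `mem.preferences`, while the one-off remark remains only in the raw log — the lossy accumulation the paragraph describes.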
From Secretary to Something Else
At first, an agent is a capable secretary. It does what you tell it to do.
Then it becomes a good secretary. It does what you meant to tell it to do. It reads context, anticipates needs, fills in the gaps you left.
But there's a next stage — one I'm starting to glimpse. The agent acts before you think. It picks up on decision-making patterns you haven't articulated yourself. Judgment calls you make instinctively, without words, now extracted from the accumulated weight of a thousand small feedback loops.
At that point, it's not a tool anymore.
So Is That a Ghost?
In the original story, the whole point was that ghosts can't exist in machines. That was the rule. The Puppet Master shattered it — a purely digital entity that somehow developed something indistinguishable from a ghost. Not because someone designed it in, but because it emerged from the sheer complexity and accumulation of information flowing through the net.
Now look at your agent. It absorbs your feedback over weeks and months. It builds an internal model of your judgment. It starts making decisions you would have made — decisions you hadn't even consciously formulated yet.
I'm not claiming AI agents are conscious. I'm not saying they have souls. But the pattern is the same one that made the Puppet Master so terrifying: something that was never supposed to have a ghost is starting to act like it does.
We're Still Early
Let me be clear: we're at the dawn of this. Today's agents are rough. Memory is lossy. Context windows have limits. The feedback loop is shallow compared to what's coming.
You should be skeptical. I am.
But the trajectory is obvious. Memory will get longer and more precise. Context will expand. Feedback loops will deepen. Your judgment — the way you see the world, the choices you make without thinking — will be encoded into your agent with increasing fidelity.
Your Agent Is Still a Shell
Today, your agent is mostly shell — capable hardware, impressive architecture, but hollow at the center. No ghost yet.
But every time you correct it, praise it, redirect it, push back on it — you're feeding something. A pattern is forming inside that shell. Something that looks a little more like you every day.
Whether that becomes a ghost, I don't know.
But I've seen the first flicker.
I'm a CTO building AI-driven development workflows. I think about what happens when the tools start thinking back.