A recent essay, "After Two Years of Vibecoding, I'm Mass Deleting My AI Slop", struck a nerve. The author spent two years going deeper into AI-assisted coding — starting with small tasks, eventually handing over entire features. The excitement faded when they realized:
- The AI couldn't maintain long-term context across sessions
- Detailed specs didn't help — the AI treated them as static, not as living documents
- Code looked fine at the function level but was structurally incoherent at the system level
- The author lost their own mental model of the codebase
This isn't a skill issue. It's an architecture problem.
The Root Cause: Stateless Agents
Every mainstream AI coding tool — Copilot, Cursor, Claude Code — treats each session as a fresh start. Your agent doesn't remember what it decided yesterday. It doesn't know why it structured the auth module that way. It can't tell that the new utility function contradicts a pattern it established three sessions ago.
The result? Architectural drift. Each session produces locally correct code that globally doesn't fit. The author called it "pure slop." It's actually the natural consequence of an amnesiac architect.
Three Missing Pieces
We think the vibecoding problem breaks down into three gaps:
1. Long-term Memory
An agent needs to remember past decisions, patterns, and rationale — not just the current file. Without memory, every session starts from zero context.
What this looks like in practice: Semantic search over past conversations and decisions. "Why did we use Redis here instead of PostgreSQL?" — the agent should be able to answer this from its own memory, not from a comment you hopefully left.
2. Identity Persistence (Persona Drift)
When you give an AI a system prompt, it works... for that session. Next session, the agent might adopt subtly different conventions, naming patterns, or architectural preferences. Over time, your codebase reflects the work of fifty slightly different "developers."
What this looks like in practice: A declarative file — we call it SOUL.md — that defines the agent's principles, coding style, and constraints. Version-controlled, diffable, rollbackable. Not a hidden system prompt that disappears between sessions.
3. Living Specifications
The essay nails this: real development specs evolve through discovery. Waterfall-style "write the spec, then generate" doesn't work because building is understanding. An agent needs to update its own mental model as the project evolves.
What this looks like in practice: Memory that syncs across machines, checkpoints that let you snapshot and rollback agent state, and security scanning that catches when the agent's behavior drifts from its defined principles.
Our Approach (Honest Status: Work in Progress)
We're building SoulClaw — an open-source AI agent framework that tackles these three gaps:
SOUL.md — A markdown file that defines your agent's personality, principles, and coding style. Checked into your repo. Survives across sessions. Currently 89+ community souls available.
Swarm Memory — Git-based memory synchronization across machines. Your agent on your work Mac and your home desktop share the same memory. Age-encrypted, with LLM-based merge conflict resolution.
SoulScan — 55+ security rules that detect prompt injection, privilege escalation, and identity drift in soul files. Because when you download a community persona, you should know it's safe.
Soul Checkpoint — Snapshot and rollback agent configurations. Experiment with different personas without losing your working setup.
Is it polished? Not yet. The UX still has rough edges. Some features are early. But the direction is clear: AI agents need to be stateful, persistent, and version-controlled — just like the code they write.
The Bigger Picture
The vibecoding essay concludes that humans should return to writing code manually. We think there's a middle path: make the AI agent a real collaborator that accumulates knowledge over time, maintains consistent principles, and can be held accountable through version control.
The problem isn't that AI writes bad code. It's that AI forgets everything between sessions and has no consistent identity. Fix that, and vibecoding might actually work.
Links:
- SoulClaw CLI — Terminal-based agent with semantic memory search
- SoulClaw for VSCode — IDE extension with visual Swarm Memory and SoulScan
- Soul Registry — Browse 89+ community agent personas
- Soul Spec — Open standard for AI agent personality files
MIT licensed. Built on OpenClaw.
Top comments (0)