What if your AI agent knew who it was?
This isn't for everyone. And that's fine.
If you're building agents that just need to answer questions or call APIs, you probably don't need this. But if you've ever felt like your agent is a different entity depending on which conversation you're in — or if you're building something where consistency of character actually matters — maybe this is useful.
I built soul-engine to solve a problem I kept running into: agents that drift. Not technically — they work fine. But they don't feel like the same agent from session to session. They contradict themselves. Their tone shifts. Whatever made them interesting in one conversation disappears in the next.
The fix I landed on was surprisingly simple: give the agent a document that describes who it is.
What soul-engine does
It's a CLI tool that runs a structured interview with an AI agent — 52 questions across 15 dimensions — and produces a SOUL.md file: a plain-text document, written in the agent's own voice, that captures how it reasons, what it values, and how it behaves under pressure.
The resulting SOUL.md goes into the system prompt. That's it. The agent now has something stable to orient around.
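Wiring the document in is about as simple as it sounds. Here's a minimal sketch of the idea — the function name and base prompt are my own placeholders, not part of soul-engine's API:

```python
from pathlib import Path

def build_system_prompt(soul_path: str, base_prompt: str = "You are a helpful agent.") -> str:
    """Prepend the agent's SOUL.md to its base system prompt.

    The SOUL.md content goes first so the identity document frames
    everything that follows; the task-specific instructions come after.
    """
    soul = Path(soul_path).read_text(encoding="utf-8")
    return f"{soul}\n\n{base_prompt}"
```

However you assemble your prompts, the point is the same: the identity document is just text, so it slots into any agent framework that lets you set a system prompt.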
It also includes soul_diff — a tool for comparing two versions of a SOUL.md to see what shifted, what held, what emerged, what faded. Useful if you're tracking an agent's development over time.
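To make the idea concrete: since SOUL.md is plain text, a line-level comparison already surfaces what changed between versions. This is just an illustration using Python's standard difflib — soul_diff itself may work differently:

```python
import difflib

def soul_diff_sketch(old_text: str, new_text: str) -> list[str]:
    """Return unified-diff lines between two SOUL.md versions.

    Removed lines (prefixed '-') are traits that faded; added lines
    (prefixed '+') are traits that emerged; context lines held steady.
    """
    return list(difflib.unified_diff(
        old_text.splitlines(),
        new_text.splitlines(),
        fromfile="SOUL.md (before)",
        tofile="SOUL.md (after)",
        lineterm="",
    ))
```

A real tool would layer semantic interpretation on top of this (which shifts matter, which are cosmetic), but the raw diff is the foundation.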
Who might actually use this
Honestly, it's a niche tool right now. But here are the contexts where I think it makes real sense:
AI agents with a long lifespan. If you're building something meant to operate over weeks or months — a research assistant, a creative collaborator, a personal agent — identity drift becomes a real problem. SOUL.md gives it an anchor.
NPC characters in games. This one I find genuinely exciting. Imagine an MMORPG where every significant NPC has a SOUL.md — a document that defines not just their backstory but their decision-making patterns, their values, how they respond to betrayal or loyalty. That's not science fiction anymore. The infrastructure for it exists. Someone just has to build it.
Agents for artists and creative work. A creative collaborator should feel like a collaborator, not a generic assistant. Giving it a defined aesthetic sensibility, a sense of what it finds interesting, a voice — that changes the quality of the work it produces.
Multi-agent systems. When agents interact with each other, identity becomes a coordination mechanism. An agent that knows it tends toward caution will behave differently in a team than one that defaults to action. SOUL.md makes those differences explicit and stable.
Anything where character matters. Companions, tutors, coaches, fictional personas. Any agent where the who is as important as the what.
Where this might go
I'll be honest — right now this is a small project. It's useful to me, and maybe to a handful of people building similar things.
But I have a feeling that in a few years, when agents are more autonomous and more embedded in everyday tools, the question of "who is this agent, really?" is going to matter a lot more than it does today. Identity layers, persistent character, behavioral consistency across contexts — these feel like infrastructure problems that haven't been solved yet.
soul-engine is an early, rough attempt at one piece of that.
If you want to build on this
The project is open source: github.com/AdamMalove/soul-engine
If any of the use cases above resonate with something you're working on — games, creative tools, agent frameworks, anything — I'd genuinely welcome the conversation. Whether that's a GitHub Discussion, a contribution, or just a message saying "I'm building something similar."
This might not be a big thing yet. But maybe it will be.
To try it:

pip install soul-engine-ai
soul-engine start