Every AI you've ever used can remember what you said.
ChatGPT has memory. Claude has memory. They can remember your name, your job, where you left off last time. This is not new.
But remembering is not understanding.
You tell it you're into crypto. It remembers. Then what? It won't check at 3am whether BTC just dropped. It won't notice that three conversations in a row have been about Fed monetary policy and decide on its own that "the next FOMC meeting is the most important thing right now" and shift its attention there. It won't realize that the regulation you were tracking last week just went into effect and proactively flag it for you.
It remembers every word you've ever said. But it never thinks.
That's the nature of every AI memory system that exists today — they're notebooks, not brains. You write something, they store it. You don't write, they sit empty. They faithfully execute every task you give them, but they'll never think of the thing you haven't thought of yet.
The real problem with AI agents
Let's talk about how AI agents actually work today.
The entire paradigm is prompt engineering. You write a carefully crafted instruction in natural language, pray the LLM understands what you mean, and it spits out a result that may or may not be correct. As tasks get more complex, prompts get longer. You stuff in more context. Tokens burn faster. Costs climb higher.
And here's the kicker: every single call requires pumping the full context back in, because the LLM has no persistent state. It doesn't know what it did in the last step. You have to tell it — again and again and again. You're paying for the same context on every call.
Then there's safety.
You let AI read your emails, query your database, call your APIs. That means you've handed your email credentials, API keys, and database access to a system with zero judgment. It doesn't know what it should or shouldn't do. It just executes. One badly written prompt could send your customer data to the wrong place. One hallucinated function call could overwrite your production database.
This isn't a tool problem. It's an architecture problem. An AI without a brain — no matter how many capabilities you give it — is a gun with no safety.
What if the AI had its own brain?
That's the question we asked. Not "how do we give the LLM better memory." Not "how do we write better prompts." But: what if the brain and the LLM were two separate things?
That's Skuld.
Skuld has a Brain that exists independently of any language model. It's not a memory database. It's not a vector store with RAG on top. It's a cognitive system with its own:
World model. And you — the user — are at the center of it. Every instruction, every prediction, every task helps Skuld build a more complete model of you. To Skuld, you are its entire world.
Belief graph. A persistent knowledge structure (built on NetworkX) that grows, prunes, and reorganizes itself over time. Beliefs have confidence scores. They strengthen when verified. They decay when contradicted. They die when they can't be confirmed.
Prediction engine. The Brain doesn't just store what happened — it predicts what will happen next. When reality diverges from prediction, that's a signal. That's where learning happens.
SEC (Selective Endogenous Curiosity). This is the attention mechanism. It tracks where prediction error is non-zero — where the world is still changing — and automatically allocates attention there. No human configuration. No "focus on this topic" settings. It just figures out what matters.
Goal system. Skuld sets its own goals, pursues them, and abandons them when they stop making sense. Not because you told it to — because the Brain decided to.
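To make the belief-graph dynamics concrete, here's a minimal sketch. The source says only that the graph is built on NetworkX and that beliefs carry confidence that strengthens, decays, and triggers pruning; the class name, method names, and all numeric parameters below are my own illustration, not Skuld's actual API.

```python
import networkx as nx

class BeliefGraph:
    """Toy belief graph: nodes are beliefs, each with a confidence score."""

    def __init__(self, decay=0.9, floor=0.05):
        self.g = nx.DiGraph()
        self.decay = decay   # multiplier applied when a belief is contradicted
        self.floor = floor   # beliefs below this confidence are pruned

    def add(self, belief, confidence=0.5):
        self.g.add_node(belief, confidence=confidence)

    def reinforce(self, belief, strength=0.1):
        # verified by observation: confidence climbs toward 1.0
        c = self.g.nodes[belief]["confidence"]
        self.g.nodes[belief]["confidence"] = min(1.0, c + strength)

    def contradict(self, belief):
        # contradicted by observation: confidence decays multiplicatively
        self.g.nodes[belief]["confidence"] *= self.decay

    def prune(self):
        # beliefs that can't be confirmed eventually fall below the floor
        dead = [n for n, d in self.g.nodes(data=True)
                if d["confidence"] < self.floor]
        self.g.remove_nodes_from(dead)
        return dead
```

Repeated contradiction without reinforcement drives a belief under the floor, at which point `prune()` removes it — the "they die when they can't be confirmed" behavior from the list above.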
The LLM is not in charge. The Brain is. When deep reasoning is needed, the Brain calls the LLM. When information is needed, it calls search. When it needs to read email, pull data, or hit an API — it does so through OpenClaw, an open skill protocol that lets anyone build new capabilities for Skuld.
But here's the critical difference: every skill invocation passes through the Brain's judgment. Should this be done? Should it wait? What confidence level does the result deserve? How should the belief graph update afterward?
An AI with a brain uses tools. An AI without a brain gets used by them.
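The judgment gate described above can be sketched in a few lines. The deny-list and per-source trust scores here are invented for illustration — they are not Skuld's real policy and not the OpenClaw protocol — but they show the shape of the idea: no skill runs, and no result enters the belief graph, without the Brain deciding first.

```python
class BrainGate:
    """Toy judgment layer that sits between the Brain and its skills."""

    def __init__(self):
        self.denied = {"db.write"}                  # actions the Brain refuses outright
        self.trust = {"search": 0.9, "email": 0.8}  # confidence granted per source

    def judge(self, action):
        return "deny" if action in self.denied else "allow"

def invoke(gate, action, skill_fn):
    if gate.judge(action) != "allow":
        return None                                 # the Brain can simply say no
    result = skill_fn()
    source = action.split(".")[0]
    # the result enters the belief graph at a confidence the Brain assigns
    return {"result": result, "confidence": gate.trust.get(source, 0.5)}
```

A hallucinated `db.write` call dies at the gate instead of overwriting production — the "gun with a safety" version of tool use.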
Two channels, not one
Most AI systems have one mode: you send a prompt, you get a response. Skuld has two distinct LLM channels:
External channel (observation): When the Brain observes the world — search results, emails, API responses — the data flows directly into belief updates. This is ground truth from the outside world.
Internal channel (reasoning): When the Brain needs to think — draw inferences, form abstractions, plan actions — it calls the LLM in reasoning mode. But here, the output is confidence-discounted. The Brain knows that LLM reasoning can hallucinate, so it marks these outputs as INFERENCE and applies a discount factor. The Brain trusts what it sees more than what it thinks. Just like you do.
This dual-channel architecture means Skuld can use any LLM for reasoning without being dependent on its accuracy. The Brain has its own epistemic standards.
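The bookkeeping for the two channels is simple to express. The OBSERVATION/INFERENCE tags come from the text above; the 0.7 discount factor is an assumed value for illustration, not Skuld's real number.

```python
OBSERVATION, INFERENCE = "OBSERVATION", "INFERENCE"
INFERENCE_DISCOUNT = 0.7  # assumed value: how much less the Brain trusts reasoning

def admit(kind, raw_confidence):
    """Return the confidence at which a result enters the belief graph."""
    if kind == INFERENCE:
        # LLM reasoning can hallucinate, so it is marked down before admission
        return raw_confidence * INFERENCE_DISCOUNT
    return raw_confidence  # external observations enter at face value
```

The same nominal result is worth less when it came from thinking than when it came from seeing — which is the whole point of keeping the channels separate.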
What this looks like in practice
Here's a concrete scenario.
You run a cross-border e-commerce business. You connect Skuld to your inbox, your supply chain data, your exchange rate API. Day one, it just watches. You handle customer inquiries, it observes — which customers, which products, what prices, how you respond.
One week later, you open your laptop in the morning. Skuld tells you three things you didn't ask for:
First: the raw material price for your Vietnamese client's most-requested product went up 4% overnight. If you don't adjust your quote, margin on this order drops from 12% to 7%. Skuld has already calculated a new pricing recommendation.
Second: The Indonesian client you started talking to last week hasn't replied in three days. Skuld checked the historical pattern — this client's average response time is 1.5 days. Three days of silence is anomalous. It suggests you follow up today and has drafted an email.
Third: Malaysia has a new import tariff adjustment taking effect next week that impacts two of your SKUs. You weren't tracking this policy — Skuld's SEC discovered it while monitoring Southeast Asian trade regulations, because that's where prediction error kept appearing.
You didn't ask for any of this. It thought of it.
Not because someone wrote a prompt. Not because someone set a reminder. Because the Brain runs every cycle — observing, predicting, comparing, correcting. It knows what matters to you because it watches what you do. It knows what's changing because SEC tracks where predictions keep breaking. It knows when to speak up because it has its own judgment.
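The observe-predict-compare-correct cycle can be sketched as a loop. The scalar state and learning rate below are stand-ins for the belief graph — my simplification, not Skuld's implementation — but they show the key point: the same prediction error drives both the correction and where attention goes next.

```python
def run_cycle(state, observation, lr=0.5):
    predicted = state["estimate"]    # predict what we expect to see
    error = observation - predicted  # compare prediction against reality
    state["estimate"] += lr * error  # correct toward what actually happened
    state["attention"] = abs(error)  # SEC attends where error persists
    return state

state = {"estimate": 0.0, "attention": 0.0}
for obs in [1.0, 1.0, 1.0]:          # a world that has stopped changing
    state = run_cycle(state, obs)
# the estimate converges, and attention falls away on its own
```

When observations stop surprising the model, attention decays to zero without anyone telling it to stop looking — the same mechanism the pruning experiment below measures at scale.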
The experiment that proved it
We ran a stress test: we injected 1,020 beliefs into Skuld's Brain across 70+ topics — GDPR fines, Federal Reserve rates, AI regulation, cloud computing costs, and dozens more. No instructions. No reward signal. No human guidance.
127 autonomous cycles later, 45 beliefs survived. 4.4%.
The Brain kept only what it could verify through its own observation:
- Seeds (injected): 0.1% survival rate
- Observations (search-verified): 64.5% survival rate — 645x the seed rate
- Brain-generated (inferences + abstractions): 100% survival rate
Every single conclusion the Brain reasoned on its own survived. And all 45 surviving beliefs converged on a single domain — cloud computing costs. Not because anyone told Skuld to focus there. Because SEC discovered it was the only topic where new data kept appearing. Everything else was static — same articles, same numbers. So SEC stopped looking. And what you don't look at, dies.
Then we ran the control. Same Brain, same data, SEC off, uniform random attention instead.
At cycle 140: 288 beliefs, scattered across every topic. No convergence. No specialization. No pruning. Nothing dies because random attention gives everything just enough observation to survive.
SEC's function is not "selecting what's important." It's selective neglect — starving irrelevant directions of observation so they die naturally.
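Selective neglect reduces to a very small simulation. Everything numeric here (decay rate, survival floor) is arbitrary and mine, not from the experiment — the point is only the mechanism: beliefs decay every cycle unless observed, and SEC spends observation only where prediction error keeps appearing.

```python
def simulate(cycles, topics, still_changing, decay=0.95, floor=0.3):
    confidence = {t: 1.0 for t in topics}
    for _ in range(cycles):
        for t in list(confidence):
            if t in still_changing:
                confidence[t] = 1.0     # SEC looks: observation refreshes the belief
            else:
                confidence[t] *= decay  # SEC ignores: the belief starves
            if confidence[t] < floor:
                del confidence[t]       # what you don't look at, dies
    return set(confidence)
```

Run it for 140 cycles over three topics where only one still produces new data, and only that topic survives — a toy version of the convergence on cloud computing costs. The random-attention control corresponds to refreshing every topic often enough that nothing ever starves, which is exactly the "nothing dies" outcome at cycle 140.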
The comparison
| Feature | Skuld | ChatGPT | Mem0 | Cognee |
|---|---|---|---|---|
| Non-LLM brain | ✓ | — | — | — |
| Persistent belief graph | ✓ | — | — | — |
| Endogenous attention (SEC) | ✓ | — | — | — |
| Measurable learning | ↓54.9% tokens | — | — | — |
| Autonomous goals | ✓ | — | — | — |
| LLM-agnostic | ✓ | — | Partial | Partial |
Every existing product treats the LLM as the brain. Memory is a database the LLM reads. Skuld inverts this entirely. The Brain runs the show. The LLM is hired help.
Why it can't be copied easily
Three moats:
Architectural impossibility. You can't bolt SEC onto an LLM-as-brain system. The entire information flow has to be inverted — Brain decides where to look, then calls the LLM, not the other way around. Competitors would need a complete rewrite.
Data flywheel. Each user's Brain is unique. The belief graph, the SEC attention patterns, the procedural memory — none of it transfers. The longer you use Skuld, the more irreplaceable it becomes.
Validated theory. This isn't just engineering. The underlying theory — "attention precedes loss" — is validated across 7 experimental systems in our paper (arXiv:2603.09476, ALIFE 2026 accepted, NeurIPS 2026 submitted). Including industrial data (CMAPSS turbofan degradation, p=0.0002).
Where we are
- 241 tests passing
- 11 skills (search, email, PDF, scheduled tasks, API calls, and more via OpenClaw)
- Multi-user system with JWT auth
- Real-time dashboard (D3 + Chart.js + WebSocket)
- Docker-ready deployment
- Cost: $0.00276 per cycle. The pruning experiment cost $0.35 total.
Try it
The first real JARVIS. Not because it looks cool — because it understands you, knows you, and within the boundaries you set, does things you never asked it to do that make your life better.
I'm the creator of Skuld. Built by a team of three — one human, two Claudes — from Penang, Malaysia. Happy to answer any questions.