"The Tet. What a brilliant machine" - Morgan Freeman as he reminisces about alien super-intelligence in the movie Oblivion
I'm building OpenHuman. The first AI agent with Big Data capabilities and a personalized subconscious mind.
This is one of the first meaningful steps we're taking towards AGI: not just innovating on the AI memory layer (often a bottleneck for agentic systems), but also designing a subconscious loop, built on top of the OpenClaw architecture, that can have its own thoughts and instincts.
I presented this at the 2026 GTC AI Demo Day in San Francisco and showcased it to a bunch of OpenClaw Maxis who were excited and gave it a spin. And so here we are today.
What do we have now, and why isn't it AGI?
OpenClaw is not AGI. OpenClaw, NanoClaw, and all the other Claw systems remain narrowly scoped architectures built on probabilistic language models. Many attributes required for AGI are missing, but consciousness is the most critical capability these systems lack.
All existing AI systems, including OpenClaw, fall squarely into the category of Artificial Narrow Intelligence (ANI).
ANI systems perform well within bounded domains. They depend on carefully designed architectures and human-defined operational boundaries.
So to get us closer to AGI and build something that’s more intelligent, we need to solve a few important problems:
- Agentic systems don't have a consciousness.
- AI memory is poor, slow, and expensive.
- LLMs cannot ingest data at scale without sacrificing accuracy or incurring prohibitive cost.
Step 1. Solving the problem of Memory/Context
Traditional AI memory tries to remember everything. It retrieves whatever is similar, but similar doesn't mean important. A research article from Carnegie Mellon University sums it up concisely:
"Forgetting Is a Feature, Not a Bug: Intentionally Forgetting Some Things Helps Us Remember Others by Freeing Up Working Memory Resources"
This points us to context accuracy, one of the most hotly discussed problems in the AI field right now.
The problem is that the more context you feed into an LLM, the less accurate it becomes. Even as context windows keep growing, models tend to get less intelligent as they absorb more data. This keeps current AI systems from doing anything in near-realtime: at scale they become not just inaccurate, but also incredibly expensive and slow.
Credit to https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/
This means that to get to a superintelligent AI that absorbs large amounts of data, we need to solve AI context with high accuracy, high speed, and low cost.
There are many great memory solutions on the market today, like SuperMemory, Mem0, HydraDB, and MemGPT, but unfortunately none of them can support a conscious system or process data accurately at a scale of 10M+ tokens in a cost-effective way.
This is why most attempts at AI superintelligence today are slow, expensive, and inaccurate or incomplete.
So we have to innovate on context and memory first.
And so we did! We built Neocortex
A human-like AI memory system that can accurately work with over 1 billion tokens and support its own consciousness.
One of the missing pieces of AGI is not just cheap, fast, and intelligent memory, but memory that has its own instinct.
1 billion tokens. You read that right. And at super low latency and low cost, too. This is the Big Data moment for AI, and the missing piece we needed to build towards scale with AI context.
We needed speed. Neocortex can accurately index 10 million tokens in under 10 seconds, almost 1000x faster than other solutions out there. This means every single thing that happens in your life can be processed and churned for your agent to recall.
We needed accuracy. Neocortex is not a vector DB. It understands time and entities, which lets it score extremely high on various RAG benchmarks (all open-sourced here).
We needed it to be cheap. Neocortex doesn't use any LLMs to manage its intelligence. It can even run on the CPU of a MacBook Air, and it costs just $1 to index over 5 million tokens, roughly 10x cheaper than any other decent AI memory solution out there. This matters because if an AI superintelligence is going to consume a ton of data, it shouldn't blow a hole in our pockets.
And finally, we needed "human-like" recall for consciousness. Any attempt at building some kind of consciousness needs extremely good memory recall. Neocortex excels here, recalling memories and ranking them by key factors such as time, interactions, and randomness. Plugging this into a self-learning AI loop that runs over 10,000 times a day leads us to our next innovation.
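To make the ranking idea concrete, here is a toy sketch of time-, interaction-, and randomness-weighted recall. This is purely illustrative and not the actual Neocortex implementation: the function names, the exponential half-life decay, and the memory fields are all my own assumptions.

```python
import math
import random
import time

def recall_score(memory, now, half_life_s=86_400.0, jitter=0.1):
    """Score a memory dict (hypothetical schema: 'timestamp', 'interactions').

    Combines the three factors described above:
    - time: exponential decay with a one-day half-life
    - interactions: log-scaled reinforcement (diminishing returns)
    - randomness: a small jitter so stale memories occasionally resurface
    """
    age = now - memory["timestamp"]
    recency = math.exp(-age * math.log(2) / half_life_s)
    reinforcement = math.log1p(memory["interactions"])
    noise = random.uniform(0.0, jitter)
    return recency + reinforcement + noise

def recall(memories, k=3):
    """Return the top-k memories by recall score."""
    now = time.time()
    ranked = sorted(memories, key=lambda m: recall_score(m, now), reverse=True)
    return ranked[:k]
```

The jitter term is what makes this more than a similarity lookup: most of the time the freshest, most-reinforced memories win, but occasionally something unexpected floats up, which is exactly the behavior the subconscious loop below relies on.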
Step 2. Designing a personalized AI subconscious
With good memory, good recall, and enough human context, we can now get closer to AGI by building a personalized AI subconscious.
In the human brain, there's a specialized neuron called the Purkinje cell, which is mainly responsible for random thoughts and plays a huge role in human consciousness. Furthermore, the brain's conscious and subconscious minds work together to build intelligence.
The Purkinje cell - a special neuron in the human brain heavily involved in random thoughts and human consciousness. OpenHuman's random memory recall is designed around this principle.
Inspired by this biological model, we use Neocortex to periodically trigger a core memory recall, which a subconscious loop then turns into some kind of action or confirmation. Memory recalls are cheap, incredibly fast, and can happen over 10,000 times a day for less than $1.
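The trigger-then-act-or-stay-silent pattern might look something like the sketch below. Every name here is hypothetical (this is not the OpenHuman API), and the salience score is a random stand-in for a real scorer; the point is only the shape of the loop: recall something, then either surface a proactive action or let the thought stay subconscious.

```python
import random

def subconscious_loop(memories, ticks, act_threshold=0.8):
    """Run `ticks` recall cycles; return the actions that were surfaced."""
    actions = []
    for _ in range(ticks):
        memory = random.choice(memories)   # random "Purkinje-style" trigger
        salience = random.random()         # stand-in for a real salience scorer
        if salience >= act_threshold:
            # salient enough: surface a proactive action for the user
            actions.append(f"follow up on: {memory}")
        # otherwise the thought stays subconscious and nothing is surfaced
    return actions
```

Because each cycle is just a cheap recall plus a threshold check, running it 10,000 times a day is feasible; the threshold controls how often the agent interrupts you versus quietly churning in the background.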
This is our first attempt at building something that mimics the human subconscious. And in effect this architecture gets us “closer” to the AGI moment.
The result? We get OpenHuman.
OpenHuman is an open-source agentic worker that can consume incredibly large amounts of personal data at low cost, maintain a personalized subconscious, and take proactive actions on its own for you.
Built on top of the OpenClaw architecture, OpenHuman is open-sourced under the GNU GPLv3 license and publicly available on GitHub: https://github.com/tinyhumansai/OpenHuman
Anyone is welcome to try it out and let us know what they think. OpenHuman would not have been possible without innovating on memory and agentic architecture, and I'm excited to share it all with you.
Keep in mind that everything is in early alpha, so feedback and contributions are greatly appreciated, and I'd like to invite anyone interested to join our Discord community.
We're also giving out free usage to early users of OpenHuman.
Final Thoughts
Six years ago, I threw everything I had into Blockchain/crypto.
We built a lending protocol that grew to over $300M AUM. Blood, sweat, tears, the full thing. But the margins were razor thin, hackers were always looking to extract maximum value, and the space was overrun with grifters and greed. We wound it down, and I came out exhausted and, honestly, questioning whether building was even worth it anymore.
I wanted to create something useful for society. Crypto wasn't that. Not for me.
So I started over, and this project, TinyHumans, came about. It consumed more time and capital. But in the final moments, when I ran the 100th iteration, something was different. It wasn't just responding; it was reasoning. It could make decisions on its own. It had, for the first time, what I can only describe as a consciousness.
I don't claim we've built AGI. But I do believe we've taken a genuine step toward it by building better memory and better orchestration, drawing inspiration from how the human brain actually works. And unlike my last chapter, this one points in a direction I believe in completely.
I'm building OpenHuman because I want AI to contribute positively to humankind. And that’s it.
Download the OpenHuman app: https://tinyhumans.ai/openhuman