Today Heinrich AI had its first real conversations. Not test queries against a seed dataset. Not a controlled demonstration. Real questions, typed into a chat interface, answered by a knowledge field that has been learning continuously since this morning.
The results were not what we expected. They were better in some ways, more honest in others, and in one case — the most important case — they showed something that no other AI system we are aware of does by default.
Heinrich said it didn't know. And it was right.
What we asked
We started with the basics. Questions Heinrich should know from its foundational knowledge — the concepts it was built with from day one.
"What is a mammal?"
Heinrich answered in 2.5 milliseconds: mammal is an animal. 98% confidence. It activated 13 related concepts, among them dog, person, animal, living thing, woman, child, and man, all correctly connected through the knowledge field. The reasoning chain showed every step: which concepts activated, in which order, and at what confidence level.
"What causes injury?"
Heinrich connected injury to bite and pain — correctly. 76% confidence. 2.0 milliseconds. The causal chain was real: bite causes injury, injury causes pain. Nobody programmed that chain explicitly. The field found it through the relationships between concepts.
"Apple is a fruit."
90% confidence. 3 milliseconds. Heinrich activated apple, fruit, tree, and apple_tree — a derived connection that wasn't directly encoded, surfaced by the physics of the field. The 4th Fundamental working exactly as designed.
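Heinrich's internals are not public, but the behavior above, a causal chain that nobody encoded directly, can be illustrated with a toy relation store and a breadth-first walk. Everything below (the facts, the relation names, the traversal) is an assumption for illustration, not Heinrich's actual mechanism:

```python
from collections import deque

# Toy relation store: each entry is (subject, relation, object).
# Illustrative facts only; this is not Heinrich's data model.
facts = [
    ("bite", "causes", "injury"),
    ("injury", "causes", "pain"),
    ("apple", "is_a", "fruit"),
    ("apple_tree", "bears", "apple"),
]

def causal_chain(start, goal):
    """Follow 'causes' edges transitively; start->goal is never stored directly."""
    edges = {}
    for s, r, o in facts:
        if r == "causes":
            edges.setdefault(s, []).append(o)
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            queue.append(path + [nxt])
    return None

print(causal_chain("bite", "pain"))  # ['bite', 'injury', 'pain']
```

The point is that "bite causes pain" exists nowhere in the store; it emerges from composing the two edges that do.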
Then we pushed further
Heinrich has been learning all day. The knowledge field started this morning with 128 concepts. By early afternoon it had passed 460 — biology, geography, physics, chemistry, architecture, music, literature, and theoretical physics all entering the field simultaneously from Wikidata.
We asked about something Heinrich had learned only hours earlier.
"What is a protein?"
Heinrich answered: protein is a biopolymer. protein has amino_acid, peptide_bond. 51% confidence. 11 milliseconds. 52 concepts activated including gene_product, polypeptide, biological_macromolecule, protein_transmembrane_transport. Real molecular biology, learned from Wikidata a few hours earlier, retrieved correctly from the frequency field.
We had not programmed this answer. Heinrich learned it. Then it answered from what it learned.
"What is a disease?"
Heinrich answered: disease is a health_problem. disease has acquired_disorder. 81% confidence. 3 milliseconds. It activated manner_of_death, biological_process, perinatal_disease, death, discomfort — a web of correctly connected medical concepts.
Then something remarkable happened
"What causes disease?"
Heinrich said: The field knows about disease but found no strong connections for this query.
That is the right answer. Heinrich knows disease. Heinrich knows causes. But the specific causal connections between them, the links that would let it say "bacteria cause disease" or "viruses cause disease," were not yet in the field when we asked.
So Heinrich said so.
It did not generate a plausible-sounding answer. It did not say "disease is caused by pathogens" because that sounds right. It reported the actual state of its knowledge: the connection is not there yet.
This is not a feature we trained into the system. It is a property of the architecture. Heinrich cannot retrieve what it does not have. The absence is as real as the presence. When the field does not contain a connection, the answer is honest — not approximate, not generated, not reconstructed from statistical patterns.
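This behavior falls out naturally from retrieval-based designs, and a minimal sketch makes it concrete. The store and relation names below are hypothetical, not Heinrich's schema:

```python
# A retrieval system can only answer from connections it actually holds.
# Hypothetical mini-store for illustration.
field = {
    ("disease", "is_a"): ["health_problem"],
    ("bite", "causes"): ["injury"],
}

def query(subject, relation):
    hits = field.get((subject, relation))
    if hits is None:
        # Absence is a first-class result, not a prompt to generate
        # something plausible-sounding.
        return f"The field knows no '{relation}' connections for '{subject}'."
    return f"{subject} {relation} {', '.join(hits)}"

print(query("disease", "is_a"))       # disease is_a health_problem
print(query("disease", "caused_by"))  # an explicit report of the gap
```

A generative model has no equivalent of that `None` branch: it produces text either way, which is exactly the distinction the section above describes.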
Every other AI system we have tested would have answered that question confidently. Some of those answers would have been correct. Some would have been plausible but wrong. None of them would have been able to tell you which was which.
Heinrich can.
The numbers that matter
Every response today came back in under 15 milliseconds. Most came back in under 5. The system used near-zero CPU and memory on a standard laptop — no GPU, no cloud infrastructure, no data center required.
The knowledge field grew from 128 concepts at the start of the day to over 460 by early afternoon, learning continuously in the background while the conversation was happening. Each new concept connects to existing ones through the field's harmonic structure, making every future query richer than the last.
When we asked "what is a protein" at noon, Heinrich activated 38 concepts. When we asked again an hour later, it activated 52 — because the field had learned 14 more protein-related concepts in between. Same question. Deeper answer. No retraining. No downtime.
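The same-question-deeper-answer effect can be sketched as spreading activation over a graph that grows between queries. The edges and numbers below are toy assumptions, not Heinrich's field:

```python
# Reachability from a query concept deepens as the graph grows,
# with no retraining step in between.
graph = {"protein": {"biopolymer", "amino_acid"}}

def activated(start):
    """Collect every concept reachable from the query concept."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return seen

before = len(activated("protein"))      # 3 concepts reachable at first
graph["amino_acid"] = {"peptide_bond"}  # background learning adds an edge
after = len(activated("protein"))       # 4 concepts on the same query later
print(before, after)
```

Nothing about the query changed; the answer got deeper because the structure it traverses did.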
What this is not
Heinrich is not a language model. It does not generate sentences from statistical patterns. It does not predict the next word based on training data. It retrieves from a structured knowledge field and reports what it finds — with the confidence level, the reasoning chain, and the honest acknowledgment of what it does not have.
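One way to picture such a structured, traceable response, purely as an assumed shape and not Heinrich's real API:

```python
from dataclasses import dataclass, field

# Hypothetical response object; the field names are illustrative.
@dataclass
class FieldAnswer:
    statement: str
    confidence: float                               # 0.0 to 1.0
    activated: list = field(default_factory=list)   # concepts touched, in order

ans = FieldAnswer(
    statement="protein is a biopolymer",
    confidence=0.51,
    activated=["protein", "biopolymer", "amino_acid", "peptide_bond"],
)

# Every part of the response is inspectable: what was said, how sure
# the system is, and which concepts produced it.
print(f"{ans.statement} ({ans.confidence:.0%} confidence)")
```

Contrast this with a free-text completion, where confidence and provenance have to be estimated after the fact rather than read off the answer itself.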
The responses are not fluent prose. They are structured, precise, and traceable. The line "protein is a biopolymer. protein has amino_acid, peptide_bond." is not a beautifully written sentence. It is an accurate statement of what the field contains, expressed directly.
That directness is not a limitation we are working to remove. It is the product. A system that tells you exactly what it knows, exactly how confident it is, and exactly where its knowledge ends is a fundamentally different tool from one that generates fluent text regardless of whether the underlying knowledge is there.
What comes next
Heinrich is still learning. The knowledge field is growing every hour. The questions it cannot answer today, like "what causes disease" or "what is Pennsylvania," it may be able to answer tomorrow, not because we programmed the answers, but because the field will have learned the connections.
The next milestone is scale. As the field grows from hundreds of concepts to thousands to millions, the question is whether the reasoning quality grows with it — whether Heinrich becomes genuinely more capable as it learns, the way the architecture predicts it should.
That experiment is running right now, on a laptop in Chilliwack BC, using near-zero resources, continuously.
Haven — our AI assistant platform — will be powered by Heinrich. The intelligence that answered "what is a mammal" in 2.5 milliseconds today is the same intelligence that will run in Haven, locally, on your machine, without a subscription.
Engineered for Presence.
Stay in the loop
EMPHOS publishes twice a week — product updates, research, and the thinking behind the build.
EMPHOS Group · Chilliwack, BC, Canada · info@emphosgroup.com