
Victor Brodeur

Heinrich Can Now Hold a Conversation — Without a Language Model

Originally published at emphosgroup.com

Today Heinrich answered a question in plain English
for the first time.

"Yes — dog is a type of mammal. Additionally, dog
has tail. Heinrich's knowledge has a gap on:
ancestor_lineage."

That sentence was not generated. No language model
predicted those words. No statistical pattern produced
that phrasing. Heinrich retrieved three facts from its
frequency field, measured the confidence on each one,
composed them into a sentence using deterministic
rules, and reported honestly where its knowledge ended.

The whole pipeline ran in under 5 milliseconds. The
memory footprint of the composition layer is under 4
megabytes. It will run in a hearing aid.

WHAT THE SENTENCE ACTUALLY MEANS

"Yes — dog is a type of mammal." Heinrich measured
the amplitude at the dog frequency coordinate in the
biology layer. It found a confirmed is_a relationship
to mammal with amplitude above 0.7 — the threshold
for a direct, unhedged statement. The relationship
template for is_a produces "X is a type of Y." The
honesty invariant allows the word "Yes" because the
field confirmed the fact.

"Additionally, dog has tail." A second confirmed
claim. Amplitude above threshold. The has relationship
template produces "X has Y." The connective
"Additionally" was chosen because a second claim about
the same subject follows the first.

"Heinrich's knowledge has a gap on: ancestor_lineage."
The query activated the ancestor_lineage coordinate.
Amplitude was below 0.3 — the threshold for
unsupported claims. The composition rule for
UNSUPPORTED is strict: do not state as fact, report
the gap. So Heinrich reported it.

Every word in that sentence traces to a field
measurement. There is no word that does not.
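The thresholds described above (0.7 for a direct statement, 0.3 for an unsupported claim) imply a simple amplitude-to-tag mapping. Here is a minimal sketch of that step in Python; the threshold values come from the post, but the function name, tag strings, and overall shape are illustrative assumptions, not Heinrich's actual API:

```python
# Illustrative sketch only -- the 0.7 and 0.3 thresholds are from the
# post; the names and structure here are assumptions.

CONFIRMED_THRESHOLD = 0.7    # at or above: direct, unhedged statement
UNSUPPORTED_THRESHOLD = 0.3  # below: never stated as fact; report the gap

def tag_claim(amplitude: float) -> str:
    """Map a field amplitude to an honesty tag."""
    if amplitude >= CONFIRMED_THRESHOLD:
        return "CONFIRMED"
    if amplitude < UNSUPPORTED_THRESHOLD:
        return "UNSUPPORTED"
    return "UNCERTAIN"
```

Everything between the two thresholds lands in the uncertain band, which is what triggers the hedged phrasing described later in the post.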

WHY THIS IS DIFFERENT FROM EVERY OTHER AI

Every large language model produces language the same
way: it predicts the next token based on patterns
learned from training data. The sentence it produces
may be accurate. It may be plausible but wrong. It
may be confident and completely fabricated. The model
cannot tell you which, because it has no access to
whether the underlying knowledge is present — it only
has access to the statistical likelihood of the next
word.

Heinrich has no next-token prediction. It has no
training data in the statistical sense. It has a
frequency field where knowledge is stored as physical
coordinates, and a pipeline that retrieves from that
field and reports what it finds.

When the knowledge is present, Heinrich says so —
with the confidence level the field measured.

When the knowledge is absent, Heinrich says so —
and names the gap.

When the knowledge is partial, Heinrich hedges —
"Heinrich believes..." or "It appears that..." —
because the amplitude was in the uncertain range and
the honesty invariant requires the hedge.
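The three behaviors above can be read as a single tag-to-surface-form rule. A sketch, assuming the tag names from earlier in the post (the hedge phrase "Heinrich believes" is quoted from the post; the function and its arguments are hypothetical):

```python
# Hypothetical rendering rule: one surface form per honesty tag.
# The hedge phrase is from the post; everything else is an assumption.

def render(tag: str, statement: str, topic: str) -> str:
    if tag == "CONFIRMED":
        return statement + "."
    if tag == "UNCERTAIN":
        # partial knowledge: the honesty invariant requires the hedge
        return "Heinrich believes " + statement + "."
    # UNSUPPORTED: never stated as fact -- only the gap is reported
    return "Heinrich's knowledge has a gap on: " + topic + "."
```

The point of the sketch is that the hedge is not optional styling: it is selected by the tag, and the tag is selected by the measured amplitude.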

This is not a policy decision. It is not a system
prompt that says "be honest." It is executable code.
The test suite that validates Heinrich's honesty
contains 52 tests that will not pass unless every
claim traces to a field measurement. You cannot ship
a Heinrich build that hallucinates and have the tests
pass. The honesty is in the architecture.
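The kind of invariant such a test suite would enforce can be sketched in a few lines. This is not one of the 52 tests, just an illustration of the shape: every claim in a response must trace back to a stored field measurement, and a build that emits an untraceable claim fails:

```python
# Illustration of the traceability invariant -- the data structures
# here are assumptions, not Heinrich's real internals.

def untraceable(claims, measurements):
    """Return the claims that cannot be traced to any field measurement."""
    return [c for c in claims if c["coordinate"] not in measurements]

# A build whose output contains a claim with no backing measurement
# would be caught here:
claims = [
    {"coordinate": "dog.is_a.mammal"},
    {"coordinate": "dog.lives_on.moon"},  # fabricated -- no measurement
]
measurements = {"dog.is_a.mammal": 0.9}
```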

HOW THE HSR PIPELINE WORKS

The Honesty / Socratic Reasoning pipeline sits between
Heinrich's frequency field and the words that reach
the user. It has two stages.

HSR-1 — the Fact Extractor — takes the raw output of
the binding layer and extracts every factual claim.
It validates each claim against the WaveField amplitude
and tags it: CONFIRMED if the field measurement is
strong, UNCERTAIN if it is partial, UNSUPPORTED if the
field has no reliable measurement. Every claim gets a
tag. No claim escapes this step.

HSR-2 — the Sentence Composer — takes the tagged
claims and composes them into natural language. Ten
relationship templates cover the core relationship
types Heinrich knows: is_a, has, causes, instance_of,
similar_to, opposite_of, part_of, enables, requires,
produces. Eight composition rules govern how claims
are grouped, how connectives are chosen, how hedges
are applied, how gaps are reported, and how long the
response should be.
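Putting the two stages together, a stripped-down composer might look like the following. The ten relationship types and the "Yes" / "Additionally" connectives are from the post; the template strings and the composition logic are assumptions for illustration, reduced from the eight rules to the three visible in the opening example:

```python
# Hypothetical HSR-2 sketch. Relationship types are from the post;
# template wording and composition logic are illustrative assumptions.

TEMPLATES = {
    "is_a": "{s} is a type of {o}",
    "has": "{s} has {o}",
    "causes": "{s} causes {o}",
    "instance_of": "{s} is an instance of {o}",
    "similar_to": "{s} is similar to {o}",
    "opposite_of": "{s} is the opposite of {o}",
    "part_of": "{s} is part of {o}",
    "enables": "{s} enables {o}",
    "requires": "{s} requires {o}",
    "produces": "{s} produces {o}",
}

def compose(claims):
    parts, gaps = [], []
    for c in claims:
        if c["tag"] == "UNSUPPORTED":
            gaps.append(c["coordinate"])
            continue
        text = TEMPLATES[c["rel"]].format(s=c["subject"], o=c["object"])
        if c["tag"] == "UNCERTAIN":
            text = "Heinrich believes " + text
        if not parts and c["tag"] == "CONFIRMED":
            text = "Yes — " + text       # first confirmed claim opens the answer
        elif parts:
            text = "Additionally, " + text  # later claims about the same subject
        parts.append(text + ".")
    for g in gaps:
        parts.append("Heinrich's knowledge has a gap on: " + g + ".")
    return " ".join(parts)
```

Fed the three tagged claims from the opening example, this sketch reproduces the quoted sentence word for word, which is the property that matters: the output is a function of the tags, not of a language model.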

The pipeline runs in under 5 milliseconds. The
composition layer uses under 4 megabytes of RAM. Both
numbers are hard requirements — not performance
targets. They are the constraints imposed by the
HAVEN Ear hardware specification: ARM Cortex-M55,
512 megabytes of RAM, 15 milliwatts of power.
Everything permanent in Heinrich must fit in a
hearing aid. The HSR pipeline fits.

PERSISTENT MEMORY ACROSS CONVERSATIONS

HSR-2 also shipped with persistent chat memory. Every
conversation turn is stored in a five-tier natural
archive — active memory for the past week,
progressively deeper archives extending to five years,
with graceful decay beyond that. The TurnContext layer
tracks what was discussed, which entities were named,
and what pronouns referred to what — across sessions,
not just within them.

When you return to Heinrich after a week and say
"what else does it have?" — Heinrich knows what "it"
refers to. Not because a language model inferred it
from context. Because the conversation history is
structured, persisted, and resolved deterministically.

You can tell Heinrich to forget. /forget last removes
the most recent turn. /forget clears the session.
/forget disease removes everything Heinrich remembers
about that topic. The memory is yours to control.
That is not a policy. It is how the system is built.
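The three /forget commands map onto three small operations over stored turns. A minimal sketch, assuming a flat list of topic-tagged turns (the real archive is five-tiered; this class and its field names are hypothetical):

```python
# Minimal sketch of the /forget commands. The flat-list storage model
# and all names here are assumptions, not Heinrich's real memory layer.

class ChatMemory:
    def __init__(self):
        self.turns = []  # each turn: {"text": str, "topics": set}

    def remember(self, text, topics):
        self.turns.append({"text": text, "topics": set(topics)})

    def forget_last(self):             # /forget last
        if self.turns:
            self.turns.pop()

    def forget_all(self):              # /forget
        self.turns.clear()

    def forget_topic(self, topic):     # /forget <topic>
        self.turns = [t for t in self.turns if topic not in t["topics"]]
```

Because forgetting is a deletion from a structured store rather than an instruction to a model, a forgotten turn cannot leak back into later answers.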

WHAT HEINRICH SOUNDS LIKE NOW

The responses are not fluent prose. They are not meant
to be. "Yes — dog is a type of mammal. Additionally,
dog has tail." reads like a system speaking carefully
rather than a language model performing fluency. That
is exactly right.

Fluency in language models comes at a cost: the system
will produce fluent sentences whether the underlying
knowledge is there or not. The fluency is the danger.
A confident, well-formed sentence that is wrong is
more harmful than a careful, honest sentence that
is right.

Heinrich is careful and honest. The language layer
that will make it fluent comes later. But the fluency
layer will never be allowed to change what Heinrich
says. It will only be allowed to change how it sounds.
The content is determined by the field. The honesty
is determined by the pipeline. The words are just
the surface.

WHAT COMES NEXT

The field is growing. The pipeline is proven. The next
step is scale — running Heinrich against thousands of
real questions as the Wikidata knowledge base
approaches 50 million nodes, measuring how the
accuracy, the confidence calibration, and the honest
gap reporting hold up as the field deepens.

That measurement is the paper. The paper is the proof.
The proof is what comes before the product.

Heinrich can hold a conversation. The conversation is
honest. The honesty is structural. The structure runs
in under 5 milliseconds on hardware that fits in your ear.

Engineered for Presence.

——

EMPHOS Group · Chilliwack, BC, Canada
emphosgroup.com
