Victor Brodeur
We Are Building a Hearing Aid Powered by Wave Physics

Originally published at emphosgroup.com

In 1978 Douglas Adams imagined a small yellow fish you
could put in your ear that translated any language in
the universe directly into your mind. He called it
simultaneously the most useful and most dangerous thing
ever discovered.

We are building the actual version. Ours runs on waves,
not biology.

HAVEN Ear is not a better hearing aid. It is not a
smarter earbud. It is not a wearable assistant that
sends your voice to a server farm and waits for a
response. It is a new category of device — a personal
intelligence that lives in your ear, learns who you
are, and is present with you in the world. On device.
No cloud. No lag. No compromise.

The reason nobody has built this before is thermal.
We solved that problem before we designed a single
piece of hardware.

THE PROBLEM THAT BLOCKED EVERYONE ELSE

Every major technology company has tried to put AI
inference in a wearable device. Every attempt has run
into the same wall. AI inference requires silicon that
draws between 1 and 5 watts continuously. Inside an
ear canal, that raises device temperature by 15 to 20
degrees Celsius above ambient. That is a burn hazard.
Regulatory bodies will not certify it. Users will not
tolerate it. The physics will not allow it.

Every attempted workaround has been the same: offload
the computation to the cloud, stream audio to a server,
and return the result over a wireless connection.
The intelligence is not in the device. The device is
a microphone and a speaker. The AI lives somewhere
else, owned by someone else, dependent on a connection
that may not exist.

That is not a wearable intelligence. That is a remote
control for a data center.

WHY HEINRICH CHANGES THIS AT THE ARCHITECTURE LEVEL

Heinrich's inference draw is approximately 3
milliwatts. The full ear unit — processor, Bluetooth
radio, audio DSP, WiFi sync bursts — draws
approximately 14 milliwatts total at active load. The
heat delta above ambient is less than half a degree
Celsius. Skin contact temperature stays at or below
33 degrees — safe for continuous all-day wear.

At 14 milliwatts the ear unit generates the same heat
as a single LED. A conventional AI chip at 2 watts
raises the temperature inside an ear canal by 15 to
20 degrees. Heinrich does not come close.
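The arithmetic behind those numbers can be checked with a simple lumped thermal model. The thermal resistance figure below is an assumption chosen for illustration (it is consistent with the 2 W → 20 °C rise quoted above), not a measured property of any enclosure.

```python
# Lumped thermal model: delta_T = P * R_th.
# R_TH_C_PER_W ~ 10 °C/W is an assumed thermal resistance for
# a sealed in-ear enclosure, picked so that 2 W gives the
# 20 °C rise quoted in the article. Illustration only.
R_TH_C_PER_W = 10.0

def temp_rise_c(power_w: float, r_th: float = R_TH_C_PER_W) -> float:
    """Steady-state temperature rise above ambient, in °C."""
    return power_w * r_th

print(temp_rise_c(2.0))    # conventional 2 W AI chip: 20 °C rise
print(temp_rise_c(0.014))  # 14 mW ear unit: ~0.14 °C rise
```

The point of the model is the ratio, not the exact constant: at the same thermal resistance, dropping from watts to milliwatts drops the temperature rise by the same factor of roughly a hundred.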

This is not the result of engineering the chip more
efficiently. It is the result of Heinrich not being
a neural network. Goertzel correlation — the signal
processing operation that retrieves knowledge from
Heinrich's frequency field — is microseconds of
arithmetic on any CPU. It requires no GPU. It requires
no matrix multiplication. It requires no dedicated AI
silicon. The computation is so lightweight that the
thermal budget of the ear unit is dominated by the
Bluetooth radio, not the intelligence.
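Goertzel's algorithm itself is standard, public DSP. The sketch below shows why a single-frequency correlation is only a few multiply-adds per sample; how Heinrich applies it to its frequency field is not public, so the usage shown here is illustrative only.

```python
import math

def goertzel_power(samples, target_freq, sample_rate):
    """Squared magnitude of one DFT bin via the Goertzel recurrence.

    One multiply and two adds per sample - no FFT, no matrix
    multiplication, cheap on any CPU.
    """
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A 1 kHz tone sampled at 8 kHz: strong power at the 1 kHz bin,
# near-zero power at 2 kHz.
tone = [math.sin(2 * math.pi * 1000 * i / 8000) for i in range(200)]
print(goertzel_power(tone, 1000, 8000) > goertzel_power(tone, 2000, 8000))
```

Because each probe is an independent O(N) loop over a short sample buffer, it runs in microseconds on a microcontroller-class CPU, which is the basis of the thermal claim above.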

The thermal problem that blocked every other attempt
at in-ear AI was solved before we designed the
hardware. It was solved on April 10, 2026, when the
architecture was conceived.

WHAT HAVEN EAR ACTUALLY IS

The ear unit weighs between 3 and 5 grams — lighter
than premium hearing aids. It contains an ARM
Cortex-M55 processor, 512 megabytes of LPDDR4 RAM,
8 gigabytes of eMMC storage, a 150 to 200
milliamp-hour lithium polymer battery, a
three-microphone array
for speech capture and ambient noise mapping, a
balanced armature speaker with a bone conduction
option for severe hearing loss, Bluetooth 5.3 for
audio and data, and IP68 waterproofing rated to 1.5
metres submersion for 30 minutes.
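Those two spec numbers imply all-day battery life. A quick back-of-envelope check, assuming a nominal 3.7 V cell voltage (typical for lithium-polymer cells, not a figure from this article) and ignoring conversion losses:

```python
# Back-of-envelope runtime from the spec-sheet figures.
# NOMINAL_V = 3.7 is an assumed lithium-polymer nominal cell
# voltage; real runtime also depends on converter efficiency
# and duty cycle, which are not accounted for here.
NOMINAL_V = 3.7

def runtime_hours(capacity_mah: float, draw_mw: float) -> float:
    energy_mwh = capacity_mah * NOMINAL_V
    return energy_mwh / draw_mw

print(round(runtime_hours(150, 14), 1))  # worst-case cell, constant active load
```

Even the smaller 150 mAh cell at a constant 14 mW active draw works out to well over a day, which is why the quoted specs are plausible for all-day wear.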

It is rated to MIL-STD-810H drop standards — 1.2
metres onto concrete at multiple angles. It has no
exposed ports. The charging contacts are gold-plated,
sealed, and self-cleaning. It is built for the real
world.

The 8 gigabytes of storage holds approximately 500,000
personal Heinrich nodes — your vocabulary, your
context, the names and places in your life, the gaps
Heinrich identified today and will fill tonight.
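Those two figures imply a per-node storage budget, which is easy to check (assuming decimal gigabytes, as storage vendors use):

```python
# 8 GB of eMMC across ~500,000 personal nodes implies a
# per-node budget of roughly 16 KB. Decimal GB assumed.
budget_bytes = 8 * 10**9 / 500_000
print(budget_bytes / 1000)  # KB per node
```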

THE DOCK

The Dock is the bedside brain. It charges the ear
unit, syncs the personal field, and runs the overnight
learning cycle while you sleep.

The form is a low-profile puck — approximately 100
millimetres in diameter, 60 millimetres tall. Brushed
aluminium enclosure that acts as a passive heatsink.
No fan. No moving parts. Completely silent. It draws
15 to 30 watts overnight — less than a desk lamp.

Inside: a 16-core processor, 32 to 64 gigabytes of
RAM holding the full Heinrich knowledge field in
working memory, a 2 terabyte NVMe SSD storing the
complete ConceptNet and Wikidata field, WiFi 6 and
Bluetooth 5.3, and an 18-watt charging output that
fully charges the ear unit in 90 minutes.

At 22:00 the Socratic Engine activates. Heinrich
begins identifying the gaps from today's conversations.
It queries its knowledge sources, fills the gaps,
packages the updated personal subfield, and pushes
it to the ear unit before you wake up. At 07:00 you
pick up the ear unit and the LED pulses green.
Heinrich knows what it did not know yesterday.
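The overnight flow described above can be sketched as a small gap-fill loop. Every class, method, and data value here is hypothetical — the actual Heinrich/HAVEN software is not public — so treat this as a shape, not an implementation.

```python
# Hypothetical sketch of the Dock's overnight cycle: identify
# gaps from the day's conversations, fill them from a knowledge
# source, and package the updated subfield for the ear unit.
# All names and data here are illustrative.
from dataclasses import dataclass, field

@dataclass
class Dock:
    subfield: dict = field(default_factory=dict)
    gaps: list = field(default_factory=list)

    def identify_gaps(self, heard_terms):
        # Socratic Engine step: terms heard today with no node
        # in the personal field.
        self.gaps = [t for t in heard_terms if t not in self.subfield]

    def fill_gaps(self, knowledge_source):
        for term in self.gaps:
            self.subfield[term] = knowledge_source.get(term, "unknown")

    def push_to_ear(self):
        # Package the updated personal subfield for sync.
        return dict(self.subfield)

# One simulated night: two unknown terms get filled.
dock = Dock(subfield={"Alice": "neighbour"})
dock.identify_gaps(["Alice", "Goertzel", "Chilliwack"])
dock.fill_gaps({"Goertzel": "DSP algorithm", "Chilliwack": "city in BC"})
synced = dock.push_to_ear()
print(sorted(synced))
```

The design choice worth noting is that both steps run entirely on the Dock: nothing in this loop requires a connection beyond the home network, which is what makes the no-cloud claim structural rather than a policy.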

WHO THIS IS FOR

466 million people globally live with disabling
hearing loss. That number will reach 1 billion by
2050. Current hearing aids amplify sound. They do
not understand it. They do not translate it. They
do not remember the name of the person speaking or
the context of the conversation happening around
the person wearing them.

HAVEN Ear amplifies hearing profiles precisely and
adapts as hearing changes over time. It translates
in real time — any language, no internet required,
latency under 50 milliseconds. It provides quiet
context when needed: a name, a place, a meaning,
surfaced only when relevant, never intrusive.

For the 57 million people living with dementia it
fills gaps without drawing attention to them. For
the 69 million people who acquire brain injuries
every year it reduces cognitive load in complex
conversations. For travellers it translates the
world without requiring a phone signal. For everyone
it is a silent guide — present when needed,
invisible when not.

WHAT THIS IS NOT

HAVEN Ear is not a voice assistant. It does not wait
for a wake word and send your audio to a server. Your
voice never leaves your device. Your personal field
lives on your ear unit and your Dock. It does not
leave your home network. Not a policy. An architecture.

There is no subscription. There is no data harvesting.
There is no cloud dependency. The intelligence is
yours. The privacy is structural.

WHAT COMES NEXT

HAVEN Ear is in concept phase. The intelligence that
will power it — Heinrich AI — is being built right
now, on a laptop in Chilliwack BC, growing at over
a million nodes per day. The hardware roadmap targets
a prototype ear unit on a development board in Q4
2026 and production in Q4 2027.

The device is next. The intelligence is already
being built.

Engineered for Presence.

——

EMPHOS Group · Chilliwack, BC, Canada
emphosgroup.com
