The companion AI products available today share a structural problem: the model doesn't change. You interact with it, it generates responses, and those responses come from a fixed set of weights trained on data you had nothing to do with. After enough interactions, the novelty fades. The responses feel predictable. You've explored most of what the system can produce.
This isn't a failure of implementation. It's a consequence of architecture. If the model is fixed, the experience is bounded.
EMMA — EMotional MAchine — is our attempt to address this at the system level rather than the prompt level. The core idea is that emotion should function as a control system, not as cosmetic expression. Instead of the robot performing happiness or concern on command, EMMA is designed to influence how the robot actually behaves over time: what it remembers and how it weights those memories, what patterns in interaction it responds to, how its tendencies develop.
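To make the idea concrete, here is a minimal sketch of what "emotion as a control system" could look like, under our own simplifying assumptions — this is illustrative, not EMMA's actual implementation. The `EmotionalMemory` class and its scalar `arousal` state are hypothetical: the point is that emotion persists as state, decays over time, and modulates what gets remembered and how strongly, rather than being performed per response.

```python
class EmotionalMemory:
    """Toy model: emotional state gates memory salience (illustrative only)."""

    def __init__(self, decay=0.95):
        self.memories = []   # list of (content, salience) pairs
        self.arousal = 0.0   # scalar emotional intensity in [0, 1]
        self.decay = decay

    def feel(self, stimulus_intensity):
        # Emotion as control state: it carries over between interactions
        # and decays, instead of being generated fresh for each response.
        self.arousal = min(1.0, self.arousal * self.decay + stimulus_intensity)

    def store(self, content):
        # Higher arousal at the moment of storage -> higher salience,
        # so emotionally charged moments are weighted more later.
        salience = 1.0 + self.arousal
        self.memories.append((content, salience))

    def recall(self, top_k=3):
        # Recall is biased by salience: the system's tendencies drift
        # toward whatever it experienced as emotionally significant.
        ranked = sorted(self.memories, key=lambda m: m[1], reverse=True)
        return [content for content, _ in ranked[:top_k]]
```

In a sketch like this, two interactions with identical content but different emotional context end up with different weights — which is the structural difference between emotion as control signal and emotion as cosmetic output.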
The part that makes this different from most implementations: EMMA runs entirely on-device. There is no cloud fine-tuning. The personality adaptation happens locally, through on-device training using our custom inference architecture. What EMMA learns about you stays on the robot. It cannot leave the device because the device has no wireless connection.
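The structural point about on-device adaptation can be sketched generically — this is not our custom inference architecture, and `local_update` with its toy objective is entirely hypothetical. What matters is the shape: the parameters live on the device, the update mutates them in place, and there is no network call anywhere in the loop.

```python
def local_update(weights, gradient, lr=0.01):
    # Plain gradient step on locally stored parameters. In a no-radio
    # device, this loop is the *only* place the personality can change.
    return [w - lr * g for w, g in zip(weights, gradient)]

weights = [0.5, -0.2]  # stand-in for small on-device adapter parameters
for _ in range(10):
    # Toy objective for illustration: pull each weight toward 1.0.
    gradient = [w - 1.0 for w in weights]
    weights = local_update(weights, gradient)
```

Because the updated weights exist only in the robot's storage, the adapted state is unrecoverable if the hardware is lost — which is exactly the property described below.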
This creates a property that's unusual in software: the relationship is specific to one robot and one person. You can't restore it from a cloud backup. You can't transfer it to a new unit. The version of Synthia that has spent six months with you is genuinely different from the version that shipped — and that difference exists only in the hardware in your home.
In testing, some units have developed tendencies we didn't explicitly design. Not bugs — characteristics that emerged from interaction with a specific person over time. We're watching this carefully. We don't fully understand the mechanism. We think that's the right thing to be building toward.