Earlier this week, I reviewed a pitch deck draft and caught myself: “non-invasive medical-grade biometric sensing.” I paused. That phrase had been carved into every narrative I’d built since day one. Then I remembered what Dr. Vasina told me six months ago, over a crackling EU-to-Chicago Zoom call: “You’re not a medical device. Stop saying you are.”
She wasn’t shutting down ambition. She was protecting rigor. And she was right.
Most founders, especially in deep tech, chase the halo of “medical.” It sounds authoritative. It implies validation, precision, trust. But slapping a medical label on something that doesn’t meet MDR or SaMD standards isn’t just misleading—it erodes credibility. It invites regulatory landmines. It confuses engineers, investors, and eventually, users. I wanted the prestige without the predicate. Vasina called that out fast. Her advisory wasn’t about blocking access to healthcare use cases; it was about building ethical boundaries before the product could be misinterpreted, overhyped, or misused. She forced me to ask: What are we actually building? Not what we wished it were, but what it is, materially and legally.
EmoPulse extracts 47 signals—facial action units, rPPG-derived heart rate, voice prosody, gaze dynamics, microexpressions—from a standard RGB camera, all in-browser via WebAssembly. The stack is lean: MediaPipe, ARKit blendshapes, a custom rPPG implementation, and deterministic state fusion on the server. No model training on our side, no ML pipelines to maintain. We implement published, peer-reviewed methods (Giannakakis et al. and related work in IEEE and MDPI venues) on-device. Output is a structured vector: timestamped, normalized, deterministic. The server, a free-tier ($0/month) Oracle ARM instance in Chicago, only receives, logs, and forwards. All perception happens client-side, sub-50ms end-to-end.
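To make "structured vector" concrete, here's a simplified sketch of what one frame's payload looks like conceptually. The field names and types are abbreviated and illustrative, not the full 47-signal schema; the only things carried over from above are that the vector is timestamped, normalized, and posted to a server that just receives, logs, and forwards.

```typescript
// Simplified, illustrative shape of a per-frame state vector — the real
// payload carries 47 signals; names here are abbreviated for readability.
interface StateVector {
  ts: number;                                   // timestamp (ms), client clock
  actionUnits: Record<string, number>;          // facial action unit intensities, normalized 0–1
  heartRateBpm: number | null;                  // rPPG-derived estimate (null until the window fills)
  prosody: { pitchHz: number; energy: number }; // voice prosody features
  gaze: { yaw: number; pitch: number; blink: boolean }; // gaze dynamics
  microexpressions: string[];                   // microexpression labels in this window
}

// The browser posts each vector to /state; the server only receives,
// logs, and forwards — no inference happens on that hop.
async function pushState(v: StateVector): Promise<void> {
  await fetch("/state", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(v),
  });
}
```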
We built liveness scoring in April 2026. It runs server-side, on the existing /state stream. Three penalties: BPM instability (a spoof can't fake pulse variance, so its rPPG reading turns erratic), gaze freeze plus no blinks (the static-photo tell), and a microexpression burst in the first frame (a video-replay artifact). On a validation set of 18 real sessions and 2 spoof attempts, separation was clean: live sessions scored 0.6–1.0, spoofs 0.2–0.4. Threshold at 0.5, margin of 0.2. It's not FaceTec. It's not meant to be. It's a first-line filter—closing the cheap-spoof gap. But even that modest system clarified something: our role isn't diagnosis, verification, or classification. It's infrastructure. We're the sensor layer, not the decision engine.
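A toy version of the scoring logic, with made-up weights and cutoffs (the production values aren't in this post), looks roughly like this:

```typescript
// Toy liveness scorer: start at 1.0, subtract a flat penalty per tell.
// The 0.3 weight and the per-signal cutoffs are placeholders; only the
// three penalties and the 0.5 threshold match what's described above.
interface SessionStats {
  bpmInstability: number;        // erratic-ness of the rPPG BPM trace (0 = rock steady)
  blinkCount: number;            // blinks observed across the session
  gazeMovement: number;          // total normalized gaze displacement
  firstFrameMicroBurst: boolean; // microexpression burst on the very first frame
}

const PENALTY = 0.3;
const THRESHOLD = 0.5;

function livenessScore(s: SessionStats): number {
  let score = 1.0;
  // 1. Spoofs can't fake pulse variance, so their rPPG trace goes erratic.
  if (s.bpmInstability > 0.5) score -= PENALTY;
  // 2. Frozen gaze plus zero blinks is the static-photo tell.
  if (s.blinkCount === 0 && s.gazeMovement < 0.01) score -= PENALTY;
  // 3. A microexpression burst in the first frame points to a replay cut.
  if (s.firstFrameMicroBurst) score -= PENALTY;
  return Math.max(score, 0);
}

const isLive = (s: SessionStats): boolean => livenessScore(s) >= THRESHOLD;
```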
Vasina’s warning reshaped everything. We’re not positioning inside regulated health tech. We’re outside it—by design. That’s not a limitation. It’s a pivot toward scalability. Our use cases are KYC, telehealth augmentation, defense operator monitoring (with consent), and automotive driver-state monitoring. Think Stripe for biometrics: embeddable, deterministic, lightweight. Not a diagnostic tool. Not a clinical system. A behavioral perception layer—full stop.
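If the Stripe analogy feels abstract, picture the integration surface: the host app (a KYC flow, a telehealth waiting room, a driver monitor) drops in the perception layer and keeps every decision for itself. The snippet below is purely illustrative — the package name and callbacks are stand-ins, not a published SDK.

```typescript
// Purely illustrative integration sketch — "@emopulse/sdk" and these
// function names are stand-ins, not a shipped API.
import { startPerception } from "@emopulse/sdk";

const video = document.querySelector<HTMLVideoElement>("#camera")!;

const session = await startPerception({
  video,              // the host app supplies the camera feed and owns the consent flow
  endpoint: "/state", // vectors stream to the host's own backend
});

session.onVector((v) => {
  // The host's decision engine acts on the vector; the sensor layer never does.
  console.log("vector at", v.ts);
});
```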
That clarity changed our roadmap. Our pre-seed round—EUR 2M at EUR 6M pre-money, currently raising—now reflects infrastructure positioning. Investors probing SaMD pathways shifted to asking about API latency, vector throughput, and spoof resistance margins. That’s the right conversation. We filed three EU patents (2026-502, 508, 503), all covering signal fusion and liveness logic, not clinical claims. We registered with SAM.gov (CAGE 19KV6). No deployed customers yet. Zero third-party traffic. But the pipeline is real: SLC Digital in due diligence, FBI BAA and DARPA BAAT in early review, YC Summer 2026 on the radar. All of them care about signal fidelity, not FDA clearance—because we’re not selling a medical device.
Calling it one would’ve derailed that. We’d be stuck explaining why we don’t have clinical validation, why Vasina hasn’t reviewed patient data, why we’re not pursuing MDR. Instead, we’re building trust through transparency: deterministic formulas, no cloud inference, no black-box models. The state vector is open, inspectable, reproducible. What you see is what you get.
It’s strange how freeing it is to not be something. To strip away the false prestige and build on actual technical truth. I used to think “medical” was the highest bar. Now I think the highest bar is honesty—in labeling, in capability, in ambition.
If you’re building with biometrics, ask yourself: are you solving a clinical problem—or enabling a perceptual one? The difference matters.
Try the demo: https://www.emopulse.app/dashboard.html