
EmoPulse

I Built a Browser-Based Emotion AI With 47 Real-Time Biometric Parameters — Here's the Architecture

TL;DR: EmoPulse runs emotion detection, heart rate extraction (rPPG), micro-expression analysis, and voice sentiment — all in the browser via TensorFlow.js. No backend. No cloud. 47 parameters at 30 FPS.
The Stack

- Frontend: Vanilla JS, WebGL, CSS3 (zero dependencies)
- AI/ML: TensorFlow.js + custom models
- Heart Rate: Remote Photoplethysmography (rPPG) — extracting pulse from skin micro-color changes via webcam
- Face Analysis: 68-point landmark mesh + FACS Action Units
- Audio: Web Audio API for voice emotion/pitch
- Crypto: Web Crypto API (SHA-256 session signatures)
- Deploy: PWA — works offline, air-gapped, installable

No React. No Next.js. No build step. Just a browser.
How rPPG Works (The Coolest Part)
Most people don't know you can extract a heartbeat from a webcam. Here's the principle:
Every heartbeat pushes blood through your capillaries. This causes micro-color changes in your skin — invisible to the naked eye but detectable by a camera sensor.
EmoPulse's PulseSense™ algorithm:

1. Detects the face and isolates forehead/cheek ROIs (regions of interest)
2. Extracts average RGB values per frame
3. Applies a bandpass filter (0.7–4.0 Hz = 42–240 BPM range)
4. Runs an FFT to find the dominant frequency
5. Calculates BPM + HRV (RMSSD) from inter-beat intervals
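The signal-to-BPM core of that pipeline can be sketched in plain JavaScript. This is a simplified sketch under stated assumptions: a fixed 30 FPS sample rate, a moving-average detrend standing in for a proper bandpass filter, and a naive DFT instead of an FFT. All function names are illustrative, not EmoPulse's actual API:

```javascript
const FPS = 30;

// Remove slow lighting drift with a trailing moving-average detrend.
// A real implementation would use a proper Butterworth bandpass here.
function detrend(signal, windowSize = 30) {
  return signal.map((v, i) => {
    const start = Math.max(0, i - windowSize);
    const win = signal.slice(start, i + 1);
    const mean = win.reduce((a, b) => a + b, 0) / win.length;
    return v - mean;
  });
}

// Naive DFT restricted to the 0.7–4.0 Hz band (42–240 BPM).
function dominantFrequencyHz(signal, fps = FPS) {
  const n = signal.length;
  let bestFreq = 0, bestPower = -Infinity;
  for (let k = 1; k < n / 2; k++) {
    const freq = (k * fps) / n;
    if (freq < 0.7 || freq > 4.0) continue;
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const phase = (2 * Math.PI * k * t) / n;
      re += signal[t] * Math.cos(phase);
      im -= signal[t] * Math.sin(phase);
    }
    const power = re * re + im * im;
    if (power > bestPower) { bestPower = power; bestFreq = freq; }
  }
  return bestFreq;
}

// meanGreenPerFrame: average green-channel value of the ROI, one per frame.
function estimateBPM(meanGreenPerFrame, fps = FPS) {
  const filtered = detrend(meanGreenPerFrame);
  return dominantFrequencyHz(filtered, fps) * 60;
}
```

Feeding this 10 seconds of a synthetic 1.2 Hz pulse (a sine wave on a DC offset) recovers roughly 72 BPM, which is a useful sanity check before pointing it at real webcam data.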

All in JavaScript. In real time. At 30 FPS.
The accuracy is surprisingly good in controlled lighting. Not medical-grade — but sufficient for stress monitoring, wellness, and engagement tracking.
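The HRV figure (RMSSD, root mean square of successive differences) is the simplest piece once inter-beat intervals are in hand. A minimal sketch with illustrative names:

```javascript
// RMSSD over inter-beat intervals given in milliseconds.
function rmssd(ibiMs) {
  if (ibiMs.length < 2) return 0;
  let sumSq = 0;
  for (let i = 1; i < ibiMs.length; i++) {
    const d = ibiMs[i] - ibiMs[i - 1]; // successive difference
    sumSq += d * d;
  }
  return Math.sqrt(sumSq / (ibiMs.length - 1));
}
```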
The 47 Parameters
Grouped into 8 channels:
| Channel | Parameters | Method |
| --- | --- | --- |
| Emotions | 7 core + confidence + stability + mood shifts + spectrum | CNN classifier |
| Biometrics | BPM, HRV, breathing rate, blinks | rPPG + eye aspect ratio |
| Cognitive | Stress, energy, focus, cognitive load | Multi-signal fusion |
| Authenticity | TruthLens score, Duchenne smiles, micro-expressions, signal quality | AU6+AU12 analysis |
| Gaze | Tracking, pupil dilation, stability, multi-face | Landmark geometry |
| Voice | Emotion, pitch, level, contagion | Web Audio API + ML |
| Neural Mesh | 68 landmarks, 5+ action units | face-api.js + custom |
| Analytics | Timeline, memory, events, SHA-256 sig | Session state |
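The "eye aspect ratio" behind blink detection is a standard metric (Soukupová & Čech): the ratio of an eye's vertical landmark distances to its horizontal span, which collapses toward zero when the eyelids close. A sketch, assuming the usual six-landmark eye contour from a 68-point mesh:

```javascript
function dist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// p1/p4 are the eye corners; p2,p3 upper lid; p5,p6 lower lid.
// EAR = (|p2-p6| + |p3-p5|) / (2|p1-p4|) — stays roughly constant while
// the eye is open and drops sharply during a blink.
function eyeAspectRatio([p1, p2, p3, p4, p5, p6]) {
  return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4));
}
```

Thresholding the EAR (commonly somewhere around 0.2) over a few consecutive frames is enough to count blinks without any ML at all.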
Why Edge AI Matters
Every emotion AI company I looked at — Affectiva, Hume AI, iMotions — requires either cloud processing or dedicated hardware.
That's a dealbreaker for:

- Defence — can't send soldier biometrics to AWS
- Healthcare — HIPAA/GDPR means data stays on-device
- Education — parents won't consent to face data in the cloud
- Enterprise — security teams will block it

EmoPulse runs 100% in the browser. Your face data never touches a network. It works in airplane mode. It works in a SCIF.
What I Learned Building This Solo

- rPPG is fragile. Lighting changes, movement, and skin tone all affect accuracy. I spent more time on signal filtering than on the actual ML models.
- TensorFlow.js is underrated. People assume browser ML is a toy. It's not. With the WebGL backend, inference is fast enough for real-time multi-model pipelines.
- The hardest part isn't the AI — it's the UX. Showing 47 parameters without overwhelming the user required more design iteration than model tuning.
- Privacy-first architecture is a competitive advantage. Everyone says they care about privacy. Very few actually architect for it.

Try It
The live demo is at emopulse.app/dashboard — allow camera access and watch your biometrics in real time.
The full README with technical details: emopulse.app/readme
Currently raising a seed round to build the API/SDK layer. If you're working on anything that involves human-computer interaction and want to add an emotion layer — let's talk: info@emopulse.app
2 patents filed (EU/US). More at emopulse.app.

What would you build with real-time emotion data from a browser camera? Drop your ideas in the comments.
