<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: EmoPulse</title>
    <description>The latest articles on DEV Community by EmoPulse (@emopulse).</description>
    <link>https://dev.to/emopulse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3837292%2Fb55591b8-82be-41a4-b1b5-cfbf4abb3d13.jpg</url>
      <title>DEV Community: EmoPulse</title>
      <link>https://dev.to/emopulse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/emopulse"/>
    <language>en</language>
    <item>
      <title>Micro-expressions happen in 1/25th of a second — here's how we catch them</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Wed, 08 Apr 2026 10:00:04 +0000</pubDate>
      <link>https://dev.to/emopulse/micro-expressions-happen-in-125th-of-a-second-heres-how-we-catch-them-169a</link>
      <guid>https://dev.to/emopulse/micro-expressions-happen-in-125th-of-a-second-heres-how-we-catch-them-169a</guid>
      <description>&lt;p&gt;Micro-expressions are involuntary facial movements that last between 1/25 and 1/5 of a second. Traditional computer vision models, even state-of-the-art CNNs, blur them out. Why? Because they’re trained on static images or slow video streams where these flickers get averaged into noise. At EmoPulse, we treat temporal resolution as a first-class citizen. Our pipeline starts with a 200 FPS edge capture stack — not for storage, but for real-time optical flow decomposition. We use a lightweight 3D-CNN (based on Tiny-I3D) that operates on micro-video clips of 16 frames at 200 FPS, giving us ~80ms temporal windows. This isn’t about more data — it’s about &lt;em&gt;meaningful&lt;/em&gt; data. The model doesn’t classify emotions. It detects muscle activation patterns (AU25 + AU04, etc.) at 5ms resolution, then applies a temporal attention mask to isolate transient peaks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Pseudo: Temporal attention over optical flow stacks
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;forward&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;flow_stack&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;  &lt;span class="c1"&gt;# shape: (B, C, T=16, H, W)
&lt;/span&gt;    &lt;span class="n"&gt;features&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;i3d_backbone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;flow_stack&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;attention_weights&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;temporal_attention&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;features&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# learned peak sensitivity
&lt;/span&gt;    &lt;span class="n"&gt;attended&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;features&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;attention_weights&lt;/span&gt;
    &lt;span class="n"&gt;au_logits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;au_head&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attended&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;au_logits&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
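&lt;p&gt;For readers who want to poke at the idea without the full stack, the attention step reduces to a softmax over per-frame activation scores that upweights transient peaks inside the 16-frame window. Below is a dependency-free sketch; the scores, the sharpness constant, and the function names are illustrative stand-ins for the learned temporal_attention module, not the production code:&lt;/p&gt;

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def temporal_attention(frame_scores, sharpness=4.0):
    """Softmax-weight each of the T frames by its activation score.
    A brief, strong activation dominates the weights, so the pooled
    value tracks the transient peak instead of the window average."""
    weights = softmax([sharpness * s for s in frame_scores])
    pooled = sum(w * s for w, s in zip(weights, frame_scores))
    return weights, pooled

# A 16-frame window: flat 'neutral' scores with one 2-frame micro-peak.
scores = [0.1] * 16
scores[7], scores[8] = 1.0, 0.9
weights, pooled = temporal_attention(scores)
```

&lt;p&gt;A plain mean of that window sits near 0.2; the attended pool lands near the peak. That gap is the whole argument for attention over averaging at these timescales.&lt;/p&gt;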



&lt;p&gt;We’ve found that even with 99% dropout on spatial features, the model learns to ignore identity and focus on dynamics. The key insight? Micro-expressions aren’t rare — they’re &lt;em&gt;overlooked&lt;/em&gt;. Standard datasets like CASME II and SAMM are gold, but they’re tiny. So we synthetically augment with GAN-generated micro-sequence perturbations (think: simulating an orbicularis oculi twitch on a neutral-to-suppressed smile). This pushes F1 on AU detection from 0.68 to 0.81 in real-world conditions. We run this all on-device using TensorRT-optimized engines, no cloud roundtrip. Because if it takes 200ms to react, you’ve already missed the 40ms truth.&lt;/p&gt;

&lt;p&gt;If you're working with high-speed behavioral signals — what’s your threshold for "real-time," and are you still using 30 FPS as input?  &lt;/p&gt;

&lt;p&gt;Learn more about our approach at &lt;a href="https://emo.city" rel="noopener noreferrer"&gt;emo.city&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>biometrics</category>
      <category>deeptech</category>
    </item>
    <item>
      <title>How We Run AI Inference on $0/month (And Still Ship Fast)</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Tue, 07 Apr 2026 10:00:21 +0000</pubDate>
      <link>https://dev.to/emopulse/how-we-run-ai-inference-on-0month-and-still-ship-fast-5fm4</link>
      <guid>https://dev.to/emopulse/how-we-run-ai-inference-on-0month-and-still-ship-fast-5fm4</guid>
      <description>&lt;p&gt;We run real-time multimodal AI inference for biometric emotion detection—audio, video, and text—and our cloud AI bill is $0/month. Not close to zero. Zero. While most teams burn thousands on GPU instances just to prototype, we’ve architected a system that leverages strategic caching, client-side compute, and model distillation to avoid cloud costs entirely. The key insight? You don’t need GPT-4-level infrastructure to ship impactful AI—especially when you shift inference off the server at the right layers.&lt;/p&gt;

&lt;p&gt;Our stack uses ONNX Runtime in WebAssembly to run distilled versions of our emotion classification models directly in the browser and mobile clients. Raw sensor data (microphone, camera) is processed locally using PyTorch Mobile on-device or WebAssembly-bound models via MediaPipe and TensorFlow.js. Only anonymized, low-dimensional embeddings—think 512-d vectors instead of video streams—get sent to our backend. These are cached aggressively with Redis and used for stateless batch retraining in CI/CD, not real-time inference. We quantize models to FP16 or INT8, and use knowledge distillation to train tiny models (TinyBERT, MobileViT) that match 90%+ of our original model’s performance. For tasks like voice-based valence detection, we even use Web Audio API filters to extract spectral features in-browser, cutting preprocessing costs to zero.&lt;/p&gt;
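&lt;p&gt;To make the quantization step concrete: affine INT8 quantization maps a float range onto 0..255 with a scale and a zero point, and the round trip shows how little precision you give up. This toy sketch is runtime-agnostic and is not our actual ONNX Runtime export path:&lt;/p&gt;

```python
def quantize_int8(weights):
    """Affine (asymmetric) quantization to uint8 and back.
    scale maps the float range onto 0..255; zero_point pins 0.0
    so that zero survives the round trip exactly."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = round(-w_min / scale)
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    dq = [(v - zero_point) * scale for v in q]
    return q, dq

q, dq = quantize_int8([-1.0, -0.5, 0.0, 0.5, 1.0])
```

&lt;p&gt;Real tooling (ONNX Runtime's quantization utilities, PyTorch's quantization APIs) automates this per tensor and picks scales from calibration data, but the arithmetic is the same.&lt;/p&gt;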

&lt;p&gt;This isn’t just cost-saving—it’s better architecture. Lower latency, stronger privacy, and no cold starts. We built EmoPulse (emo.city) this way from day one because funding doesn’t scale engineering rigor. So here’s the challenge: if you can run BERT on a Raspberry Pi, why are we still spinning up $20/hr instances for every AI side project? When does cloud inference actually add value—versus just making engineers lazy?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>biometrics</category>
      <category>deeptech</category>
    </item>
    <item>
      <title>Real-time emotion detection from webcam — no wearables needed</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Mon, 06 Apr 2026 10:00:17 +0000</pubDate>
      <link>https://dev.to/emopulse/real-time-emotion-detection-from-webcam-no-wearables-needed-d9i</link>
      <guid>https://dev.to/emopulse/real-time-emotion-detection-from-webcam-no-wearables-needed-d9i</guid>
      <description>&lt;p&gt;We’ve been running controlled trials with real-time facial affect analysis using nothing but a standard 720p webcam — no IR sensors, no EEG caps, no chest straps. The goal? Detect emotional valence and arousal with enough accuracy to be useful in high-stakes environments: remote proctoring, telehealth triage, UX research. Most open-source pipelines fail here because they treat emotion as a static classification problem. We treat it as a dynamic signal. Our stack uses a lightweight RetinaFace for detection, followed by a pruned EfficientNet-B0 fine-tuned on dynamic expressions from the AFEW and SEED datasets — not just static FER2013 junk. Temporal smoothing via a 1D causal CNN on top of softmax outputs reduces jitter and improves response latency under variable lighting.&lt;/p&gt;
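&lt;p&gt;The smoothing idea is simple to demonstrate: each output frame mixes only the current and past probability frames, never future ones, so it adds no lookahead latency. Here is a minimal causal FIR filter standing in for the learned 1D causal CNN; the kernel weights are made up for illustration:&lt;/p&gt;

```python
def causal_smooth(prob_frames, kernel=(0.5, 0.3, 0.2)):
    """Causal FIR smoothing of per-frame class probabilities.
    kernel[0] weights the current frame and the rest weight past frames,
    so no output ever depends on a future frame (zero added latency)."""
    n_classes = len(prob_frames[0])
    smoothed = []
    for t in range(len(prob_frames)):
        acc = [0.0] * n_classes
        norm = 0.0
        for k, w in enumerate(kernel):
            if t - k >= 0:
                acc = [a + w * p for a, p in zip(acc, prob_frames[t - k])]
                norm += w
        smoothed.append([a / norm for a in acc])
    return smoothed

jitter = [[1.0, 0.0], [0.0, 1.0]] * 3  # worst-case frame-to-frame flicker
steady = causal_smooth(jitter)
```

&lt;p&gt;On an alternating worst-case input, the per-frame swing drops from 1.0 to 0.4, which is what kills the visible jitter without delaying genuine transitions.&lt;/p&gt;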

&lt;p&gt;The real breakthrough wasn’t the model — it was synchronizing inference with gaze vector estimation and head pose to gate confidence. If the user isn’t facing the camera within ±30 degrees, we don’t emit a prediction. This eliminates false spikes during glances away. Inference runs at 22–28 FPS on a consumer laptop GPU using TensorRT-compiled engines. We batch inputs across users in shared sessions (e.g. virtual classrooms) by time-slicing the stream, not frames — critical for maintaining temporal integrity. All processing happens client-side; raw pixels never leave the device. We’re not building surveillance — we’re building situational awareness without intrusion.&lt;/p&gt;
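&lt;p&gt;The pose gate itself is a few lines. In this sketch the hard cutoff matches the ±30 degree rule above, while the cosine confidence falloff near the gate is an illustrative choice rather than the shipped curve:&lt;/p&gt;

```python
import math

def gate_prediction(probs, yaw_deg, pitch_deg, max_angle=30.0):
    """Suppress the emotion estimate when the head is off-axis; otherwise
    scale confidence down as the pose nears the gate. The hard cutoff
    mirrors the +/-30 degree rule; the cosine falloff is illustrative."""
    worst = max(abs(yaw_deg), abs(pitch_deg))
    if worst > max_angle:
        return None  # a glance away must not produce a false spike
    falloff = math.cos(math.radians(90.0 * worst / max_angle))
    return [p * falloff for p in probs]
```

&lt;p&gt;Returning nothing, rather than a low-confidence guess, is the design choice that matters: downstream consumers see a gap in the signal instead of a spurious emotion event.&lt;/p&gt;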

&lt;p&gt;This approach powers EmoPulse (emo.city), where we're deploying it for real-time engagement analytics in online learning. But here’s the unresolved tension: how do you quantify “frustration” without over-interpreting micro-expressions? Are we measuring emotion — or just facial mechanics? The more we scale, the more we question the ontology of what we’re detecting.  &lt;/p&gt;

&lt;p&gt;So — if you're working in affective computing: do you validate against self-report, physiology, or behavior? And which one lies the least?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>biometrics</category>
      <category>deeptech</category>
    </item>
    <item>
      <title>Building the Perception Layer AI Is Missing</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Sun, 05 Apr 2026 13:13:07 +0000</pubDate>
      <link>https://dev.to/emopulse/building-the-perception-layer-ai-is-missing-46j1</link>
      <guid>https://dev.to/emopulse/building-the-perception-layer-ai-is-missing-46j1</guid>
      <description>&lt;p&gt;Most AI today is blind to human context. Models classify images, transcribe speech, and generate text—but they don’t &lt;em&gt;perceive&lt;/em&gt;. They miss the silent cues: hesitation in voice, micro-expressions, posture shifts. That’s the gap I’m hacking on as a solo founder. At EmoPulse (emo.city), we’re building a real-time perception layer that fuses multimodal biometrics—audio prosody, facial dynamics, galvanic skin response—to infer cognitive and emotional states beneath surface behavior. This isn’t sentiment analysis on text. This is low-latency signal processing meeting transformer-based sequence modeling to close the loop between human expression and machine awareness.&lt;/p&gt;

&lt;p&gt;The stack starts at the edge. On-device preprocessing (in C++ with LLVM-compiled kernels) reduces raw video and audio streams into privacy-preserving embeddings before any data leaves the device. We use MediaPipe for facial landmarks and a custom CNN-RNN hybrid to extract temporal affective features—think eyebrow raises over 200ms windows, not static frames. Audio goes through a learned filter bank (think learnable Mel-spectrogram layers in PyTorch) trained end-to-end on paralinguistic tasks. These streams are fused via cross-modal attention in a lightweight transformer (4 layers, 256-dim), optimized using TorchScript and quantized for &amp;lt;50ms inference on mid-tier smartphones. The output? A low-dimensional state vector—focus, confusion, engagement—that apps can react to in real time.&lt;/p&gt;
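&lt;p&gt;The fusion step is ordinary scaled dot-product attention with the query taken from one modality and keys/values from the other. A single-head, dependency-free sketch; the real model is the 4-layer, 256-dim transformer described above, and the toy vectors and names here are assumptions:&lt;/p&gt;

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_modal_attention(audio_q, video_kv):
    """Single-head cross-attention: one audio query attends over
    per-frame video features (keys double as values for brevity)."""
    d = len(audio_q)
    scores = [sum(q * k for q, k in zip(audio_q, frame)) / math.sqrt(d)
              for frame in video_kv]
    weights = softmax(scores)
    fused = [sum(w * frame[i] for w, frame in zip(weights, video_kv))
             for i in range(d)]
    return fused, weights
```

&lt;p&gt;A useful side effect: the attention weights say which video frames the audio query actually drew on, which is a free interpretability signal when debugging fusion.&lt;/p&gt;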

&lt;p&gt;We’re not building another emotion API. We’re building the &lt;em&gt;perception infrastructure&lt;/em&gt; for AI to finally sense human context—without compromising privacy or latency. But here’s the hard part: ground truth. How do you label "cognitive load" at scale? We’re experimenting with implicit signals (mouse dynamics, speech pause frequency) as proxy labels, but it’s messy. If you’re working on sensor fusion, on-device ML, or subjective state modeling—how are you validating what you can’t directly observe?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>biometrics</category>
      <category>deeptech</category>
    </item>
    <item>
      <title>I Built a Browser-Based Emotion AI With 47 Real-Time Biometric Parameters — Here's the Architecture</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Sat, 21 Mar 2026 16:26:57 +0000</pubDate>
      <link>https://dev.to/emopulse/i-built-a-browser-based-emotion-ai-with-47-real-time-biometric-parameters-heres-the-architecture-23id</link>
      <guid>https://dev.to/emopulse/i-built-a-browser-based-emotion-ai-with-47-real-time-biometric-parameters-heres-the-architecture-23id</guid>
      <description>&lt;p&gt;TL;DR: EmoPulse runs emotion detection, heart rate extraction (rPPG), micro-expression analysis, and voice sentiment — all in the browser via TensorFlow.js. No backend. No cloud. 47 parameters at 30 FPS.&lt;br&gt;
The Stack&lt;/p&gt;

&lt;p&gt;Frontend: Vanilla JS, WebGL, CSS3 (zero dependencies)&lt;br&gt;
AI/ML: TensorFlow.js + custom models&lt;br&gt;
Heart Rate: Remote Photoplethysmography (rPPG) — extracting pulse from skin micro-color changes via webcam&lt;br&gt;
Face Analysis: 68-point landmark mesh + FACS Action Units&lt;br&gt;
Audio: Web Audio API for voice emotion/pitch&lt;br&gt;
Crypto: Web Crypto API (SHA-256 session signatures)&lt;br&gt;
Deploy: PWA — works offline, air-gapped, installable&lt;/p&gt;
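&lt;p&gt;The session-signature line deserves a quick illustration. In the browser this would go through Web Crypto's crypto.subtle.digest; here is the same idea in Python, over an assumed event shape:&lt;/p&gt;

```python
import hashlib
import json

def session_signature(events):
    """SHA-256 over a canonical JSON encoding of a session's event log.
    Canonical form (sorted keys, fixed separators) makes the signature
    reproducible regardless of key insertion order."""
    payload = json.dumps(events, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

sig = session_signature([{"t": 0, "emotion": "neutral", "bpm": 72}])
```

&lt;p&gt;Any tampering with a recorded session, even a single field, changes the digest, while re-serializing the same events always reproduces it.&lt;/p&gt;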

&lt;p&gt;No React. No Next.js. No build step. Just a browser.&lt;br&gt;
How rPPG Works (The Coolest Part)&lt;br&gt;
Most people don't know you can extract a heartbeat from a webcam. Here's the principle:&lt;br&gt;
Every heartbeat pushes blood through your capillaries. This causes micro-color changes in your skin — invisible to the naked eye but detectable by a camera sensor.&lt;br&gt;
EmoPulse's PulseSense™ algorithm:&lt;/p&gt;

&lt;p&gt;Detects face and isolates forehead/cheek ROI (regions of interest)&lt;br&gt;
Extracts average RGB values per frame&lt;br&gt;
Applies bandpass filter (0.7–4.0 Hz = 42–240 BPM range)&lt;br&gt;
Runs FFT to find dominant frequency&lt;br&gt;
Calculates BPM + HRV (RMSSD) from inter-beat intervals&lt;/p&gt;
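&lt;p&gt;Those five steps can be sketched end to end in plain Python. The band-limited DFT scan below stands in for the FFT step, and RMSSD falls straight out of successive inter-beat differences; the production path is JavaScript with a real FFT, so treat this as a reference implementation of the math only:&lt;/p&gt;

```python
import math

def estimate_bpm(signal, fps=30.0, lo_hz=0.7, hi_hz=4.0):
    """Scan DFT bins inside the 0.7-4.0 Hz pass band (42-240 BPM)
    and return the dominant frequency as beats per minute."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    k_lo = max(1, round(lo_hz * n / fps))
    k_hi = min(n // 2, round(hi_hz * n / fps))
    best_hz, best_power = 0.0, -1.0
    for k in range(k_lo, k_hi + 1):
        angle = 2.0 * math.pi * k / n
        re = sum(c * math.cos(angle * i) for i, c in enumerate(centered))
        im = sum(c * math.sin(angle * i) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_power, best_hz = power, k * fps / n
    return 60.0 * best_hz

def rmssd(ibi_ms):
    """Root mean square of successive inter-beat-interval differences."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# 10 s of a clean 1.2 Hz synthetic 'pulse' sampled at 30 FPS -> 72 BPM
pulse = [math.sin(2.0 * math.pi * 1.2 * (i / 30.0)) for i in range(300)]
```

&lt;p&gt;Restricting the scan to the pass band is doing the same job as the bandpass filter in step 3: anything outside 42-240 BPM, like lighting flicker or head motion drift, simply cannot win the argmax.&lt;/p&gt;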

&lt;p&gt;All in JavaScript. In real time. At 30 FPS.&lt;br&gt;
The accuracy is surprisingly good in controlled lighting. Not medical-grade — but sufficient for stress monitoring, wellness, and engagement tracking.&lt;br&gt;
The 47 Parameters&lt;br&gt;
Grouped into 8 channels:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Channel&lt;/th&gt;&lt;th&gt;Parameters&lt;/th&gt;&lt;th&gt;Method&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Emotions&lt;/td&gt;&lt;td&gt;7 core + confidence + stability + mood shifts + spectrum&lt;/td&gt;&lt;td&gt;CNN classifier&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Biometrics&lt;/td&gt;&lt;td&gt;BPM, HRV, breathing rate, blinks&lt;/td&gt;&lt;td&gt;rPPG + eye aspect ratio&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Cognitive&lt;/td&gt;&lt;td&gt;Stress, energy, focus, cognitive load&lt;/td&gt;&lt;td&gt;Multi-signal fusion&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Authenticity&lt;/td&gt;&lt;td&gt;TruthLens score, Duchenne smiles, micro-expressions, signal quality&lt;/td&gt;&lt;td&gt;AU6+AU12 analysis&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Gaze&lt;/td&gt;&lt;td&gt;Tracking, pupil dilation, stability, multi-face&lt;/td&gt;&lt;td&gt;Landmark geometry&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Voice&lt;/td&gt;&lt;td&gt;Emotion, pitch, level, contagion&lt;/td&gt;&lt;td&gt;Web Audio API + ML&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Neural Mesh&lt;/td&gt;&lt;td&gt;68 landmarks, 5+ action units&lt;/td&gt;&lt;td&gt;face-api.js + custom&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Analytics&lt;/td&gt;&lt;td&gt;Timeline, memory, events, SHA-256 sig&lt;/td&gt;&lt;td&gt;Session state&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Why Edge AI Matters&lt;br&gt;
Every emotion AI company I looked at — Affectiva, Hume AI, iMotions — requires either cloud processing or dedicated hardware.&lt;br&gt;
That's a dealbreaker for:&lt;/p&gt;

&lt;p&gt;Defence — can't send soldier biometrics to AWS&lt;br&gt;
Healthcare — HIPAA/GDPR means data stays on-device&lt;br&gt;
Education — parents won't consent to face data in the cloud&lt;br&gt;
Enterprise — security teams will block it&lt;/p&gt;

&lt;p&gt;EmoPulse runs 100% in the browser. Your face data never touches a network. It works in airplane mode. It works in a SCIF.&lt;br&gt;
What I Learned Building This Solo&lt;/p&gt;

&lt;p&gt;rPPG is fragile. Lighting changes, movement, and skin tone all affect accuracy. I spent more time on signal filtering than on the actual ML models.&lt;br&gt;
TensorFlow.js is underrated. People assume browser ML is a toy. It's not. With WebGL backend, inference is fast enough for real-time multi-model pipelines.&lt;br&gt;
The hardest part isn't the AI — it's the UX. Showing 47 parameters without overwhelming the user required more design iteration than model tuning.&lt;br&gt;
Privacy-first architecture is a competitive advantage. Everyone says they care about privacy. Very few actually architect for it.&lt;/p&gt;

&lt;p&gt;Try It&lt;br&gt;
The live demo is at emopulse.app/dashboard — allow camera access and watch your biometrics in real time.&lt;br&gt;
The full README with technical details: emopulse.app/readme&lt;br&gt;
Currently raising a seed round to build the API/SDK layer. If you're working on anything that involves human-computer interaction and want to add an emotion layer — let's talk: &lt;a href="mailto:info@emopulse.app"&gt;info@emopulse.app&lt;/a&gt;&lt;br&gt;
2 patents filed (EU/US). More at emopulse.app.&lt;/p&gt;

&lt;p&gt;What would you build with real-time emotion data from a browser camera? Drop your ideas in the comments.&lt;/p&gt;

</description>
      <category>computervision</category>
      <category>startup</category>
      <category>deeptech</category>
      <category>rppg</category>
    </item>
    <item>
      <title>How I Built an Emotion AI That Reads 47 Biometric Parameters From a Browser Camera</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Sat, 21 Mar 2026 16:23:29 +0000</pubDate>
      <link>https://dev.to/emopulse/how-i-built-an-emotion-ai-that-reads-47-biometric-parameters-from-a-browser-camera-25ec</link>
      <guid>https://dev.to/emopulse/how-i-built-an-emotion-ai-that-reads-47-biometric-parameters-from-a-browser-camera-25ec</guid>
      <description>&lt;p&gt;The moment an AI read a human before responding — not from cookies, but from a live biometric stream.&lt;br&gt;
On March 5, 2026, something happened that I hadn't seen anyone do before.&lt;br&gt;
An AI looked at a person through a webcam and understood their emotional state — stress level, heart rate, focus, authenticity — all in real time. Then it adapted. Tone, depth, pace, everything. Before the person typed a single word.&lt;br&gt;
No wearables. No special hardware. Just a standard browser camera.&lt;br&gt;
I'm Arvydas Pakalniskis, and I built EmoPulse — a real-time emotion AI platform that extracts 47 biometric and emotional parameters from any camera. 100% on-device. Zero cloud. I want to share the story of how and why.&lt;/p&gt;

&lt;p&gt;The Problem: AI Knows What You Click, But Not How You Feel&lt;br&gt;
Every major AI platform — ChatGPT, Claude, Gemini — operates blind. They analyze your words, your clicks, your browsing history. But they have zero understanding of your actual emotional state at the moment of interaction.&lt;br&gt;
Think about that. Your doctor doesn't prescribe medication based only on what you tell them. They observe. They read your face, your posture, your voice. They see things you don't say.&lt;br&gt;
AI can't do that. Until now.&lt;/p&gt;

&lt;p&gt;What EmoPulse Actually Measures&lt;br&gt;
EmoPulse isn't just "facial expression detection." That's table stakes. Here's what the platform captures in real time:&lt;br&gt;
Emotion Detection — 7 core emotions (happy, sad, angry, fearful, surprised, disgusted, neutral) with confidence scoring, mood shift tracking, and emotional spectrum analysis.&lt;br&gt;
Biometrics Without Wearables — Heart rate (BPM) and heart rate variability (HRV) extracted through remote photoplethysmography (rPPG) — measuring micro-color changes in your skin. Breathing rate. Blink detection.&lt;br&gt;
Cognitive Metrics — Stress level, energy flow, focus score, cognitive load estimation.&lt;br&gt;
Authenticity Analysis — TruthLens™ technology that distinguishes genuine Duchenne smiles from fake ones, detects micro-expressions lasting less than 500ms, and provides an overall authenticity score.&lt;br&gt;
Eye &amp;amp; Gaze Analytics — Gaze tracking, pupil dilation (arousal indicator), gaze stability mapping, multi-face detection.&lt;br&gt;
Voice Analysis — Voice emotion, pitch detection, energy levels, emotional contagion indexing.&lt;br&gt;
47 parameters. From a webcam. In your browser.&lt;/p&gt;
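&lt;p&gt;Of everything above, blink detection is the easiest to reproduce at home: the standard trick is the eye aspect ratio (EAR) over six eye landmarks, which collapses toward zero when the lid closes. This is the textbook EAR with an assumed landmark ordering, not necessarily what NeuroMesh does internally:&lt;/p&gt;

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six (x, y) landmarks p1..p6: p1/p4 are the eye corners,
    p2/p3 the upper lid, p6/p5 the lower lid. An open eye sits roughly
    around 0.25-0.35; a closed lid drives the ratio toward zero."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = 2.0 * math.dist(p1, p4)
    return vertical / horizontal

def is_blinking(eye, threshold=0.2):
    return threshold > eye_aspect_ratio(eye)

# Toy landmark sets in the p1..p6 ordering assumed above.
open_eye = [(0.0, 0.0), (1.0, 0.6), (3.0, 0.6), (4.0, 0.0), (3.0, -0.6), (1.0, -0.6)]
closed_eye = [(0.0, 0.0), (1.0, 0.05), (3.0, 0.05), (4.0, 0.0), (3.0, -0.05), (1.0, -0.05)]
```

&lt;p&gt;Because EAR is a ratio of distances, it is largely invariant to face scale and in-plane rotation, which is why a fixed threshold works across users.&lt;/p&gt;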

&lt;p&gt;The Tech Behind It&lt;br&gt;
I didn't want EmoPulse to be another cloud-dependent SaaS that ships your face to some server. Privacy isn't a feature — it's the architecture.&lt;br&gt;
100% Edge AI. Everything runs in the browser using TensorFlow.js. No data ever leaves the device. This isn't just a privacy choice — it's what makes EmoPulse viable for defence, healthcare, and any environment where data sovereignty matters.&lt;br&gt;
The core consists of four proprietary algorithms:&lt;/p&gt;

&lt;p&gt;NeuroMesh™ — A 68-point facial landmark tracking system with 5+ FACS action units&lt;br&gt;
PulseSense™ — rPPG-based heart rate and HRV extraction from skin micro-color changes&lt;br&gt;
TruthLens™ — Authenticity scoring via Duchenne marker analysis&lt;br&gt;
MoodCast™ — Predictive emotion timeline with session memory&lt;/p&gt;

&lt;p&gt;The entire system runs at sub-50ms latency, 30 FPS, using WebGL acceleration. It works offline. It works air-gapped. It works on a phone.&lt;/p&gt;

&lt;p&gt;Why I Built This&lt;br&gt;
I don't have a team of 50 engineers. I have obsession and a very clear vision.&lt;br&gt;
Payments got Stripe. Communications got Twilio. Search got Google. AI got OpenAI.&lt;br&gt;
Emotion gets EmoPulse.&lt;br&gt;
This isn't a nice-to-have feature. As AI becomes more embedded in healthcare, education, hiring, security, and daily communication — the ability to understand the human on the other side becomes critical infrastructure.&lt;br&gt;
Imagine an AI tutor that sees a student is confused before they ask a question. A telehealth platform that monitors patient stress in real time. A security system that detects deception without an interrogation. An HR tool that identifies burnout before it becomes a resignation letter.&lt;br&gt;
That's what EmoPulse enables.&lt;/p&gt;

&lt;p&gt;The Numbers&lt;br&gt;
The target markets are massive:&lt;/p&gt;

&lt;p&gt;Defence &amp;amp; Security — $49B addressable market&lt;br&gt;
Healthcare — $15B&lt;br&gt;
Education — $8B&lt;br&gt;
HR &amp;amp; Wellbeing — $120B&lt;br&gt;
Market Research — $80B&lt;br&gt;
AI Platform Integration — $30B&lt;/p&gt;

&lt;p&gt;And the competitive landscape? Affectiva measures about 8 parameters and requires cloud. Hume AI does roughly 12, also cloud-dependent. iMotions needs dedicated hardware costing thousands.&lt;br&gt;
EmoPulse: 47 parameters, zero hardware, zero cloud, API starting at $0.01.&lt;/p&gt;

&lt;p&gt;What's Next&lt;br&gt;
Two patents are filed (EU/US), a third is in preparation. The technology is live and demonstrable at emopulse.app.&lt;br&gt;
I'm currently raising a €500K–€2M seed round to:&lt;/p&gt;

&lt;p&gt;Build the API and SDK for third-party integration&lt;br&gt;
Secure first enterprise contracts in defence and healthcare&lt;br&gt;
Expand the team&lt;/p&gt;

&lt;p&gt;If you're building anything that involves humans interacting with screens — EmoPulse is the layer you're missing.&lt;/p&gt;

&lt;p&gt;Try it live: emopulse.app&lt;br&gt;
Live Dashboard: emopulse.app/dashboard&lt;br&gt;
Contact: &lt;a href="mailto:info@emopulse.app"&gt;info@emopulse.app&lt;/a&gt;&lt;br&gt;
LinkedIn: EmoPulse Official&lt;br&gt;
Product Hunt: EmoPulse on Product Hunt&lt;/p&gt;


</description>
      <category>biometricai</category>
      <category>computervision</category>
      <category>startup</category>
      <category>deeptech</category>
    </item>
  </channel>
</rss>
