<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: EmoPulse</title>
    <description>The latest articles on DEV Community by EmoPulse (@emopulse).</description>
    <link>https://dev.to/emopulse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3837292%2Fb55591b8-82be-41a4-b1b5-cfbf4abb3d13.jpg</url>
      <title>DEV Community: EmoPulse</title>
      <link>https://dev.to/emopulse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/emopulse"/>
    <language>en</language>
    <item>
      <title>As a solo founder, I've experienced burnout firsthand. But what if I told you that our biometric AI</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Mon, 04 May 2026 10:00:02 +0000</pubDate>
      <link>https://dev.to/emopulse/as-a-solo-founder-ive-experienced-burnout-firsthand-but-what-if-i-told-you-that-our-biometric-ai-29p9</link>
      <guid>https://dev.to/emopulse/as-a-solo-founder-ive-experienced-burnout-firsthand-but-what-if-i-told-you-that-our-biometric-ai-29p9</guid>
      <description>&lt;p&gt;As a solo founder, I've experienced burnout firsthand. But what if I told you that our biometric AI can detect burnout signals 2 weeks before it's too late? Follow @emopulseai on Telegram for daily AI insights -&amp;gt; t.me/emopulseai&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Packaging anonymized data taught me what diligence really tests</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:52 +0000</pubDate>
      <link>https://dev.to/emopulse/packaging-anonymized-data-taught-me-what-diligence-really-tests-48jn</link>
      <guid>https://dev.to/emopulse/packaging-anonymized-data-taught-me-what-diligence-really-tests-48jn</guid>
      <description>&lt;p&gt;Earlier this week, I exported 142 session fragments for SLC Digital’s due diligence team. Not raw video. Not biometrics. Just state vectors—5-layer JSON payloads stripped of identifiers, timestamps, and session hashes. I zipped them, encrypted the file with their public key, and uploaded it to a burner Tresorit link. Then I sat back and realized: this wasn’t a technical request. It was a stress test of my honesty.&lt;/p&gt;

&lt;p&gt;What I actually learned isn’t about data formats or anonymization pipelines. It’s that investors don’t trust the demo. They trust the gap between what you &lt;em&gt;could&lt;/em&gt; show and what you &lt;em&gt;choose&lt;/em&gt; not to. Most founders think diligence is about proving sophistication—showing the model card, the accuracy metrics, the pipeline diagrams. But when you’re a solo engineer building browser-based perception with no cloud ML, what they’re really looking for is restraint. They want to see that you know the difference between capability and overclaim.&lt;/p&gt;

&lt;p&gt;I’ve seen other founders dump raw AU intensities, gaze heatmaps, and HRV traces into due diligence folders like they’re scoring points for complexity. We don’t. Our /state endpoint emits 47 signals, yes—ARKit blendshapes, rPPG-derived BPM, FACS units, prosody buckets—but the packet is structured, minimal, deterministic. No model scores. No “engagement” or “trust” logits. Just observables and fixed-weight derived indicators: R_risk, intent_clarity, human_signature. All calculated client-side in WebAssembly using methods from Giannakakis et al. and replications in IEEE 2025. No training. No gradients.  &lt;/p&gt;
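
&lt;p&gt;For reference, one tick of that packet looks roughly like this, as a sketch (keys condensed and values invented; the real payload carries all 47 signals):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;packet = {
    "tick": 1024,
    "observables": {
        "blendshapes": [0.01, 0.12],         # 52 ARKit coefficients in full
        "bpm_rppg": 71.4,                    # rPPG-derived heart rate
        "facs": {"AU01": 0.2, "AU12": 0.7},  # FACS action units
        "prosody_bucket": "neutral",
    },
    "derived": {                             # fixed weights, deterministic
        "R_risk": 0.18,
        "intent_clarity": 0.77,
        "human_signature": 0.93,
    },
}
&lt;/code&gt;&lt;/pre&gt;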

&lt;p&gt;When I packaged those 142 sessions, I didn’t filter for “good data.” I included the shaky webcam feeds, the low-light sessions, the one where the test subject coughed mid-capture and the rPPG spiked. Because our liveness scorer already handles that. It’s been live since April 8, 2026—server-side, running on the Oracle ARM box in Chicago, analyzing the last five ticks of each session. It penalizes BPM volatility above 12, gaze stability above 95 with zero blinks, and micro-expression bursts that suggest video replay. We tested it: 3 real people, 2 spoof attempts (phone photo, screen replay), 100% separation. Threshold at 0.5. Margin of 0.2.&lt;/p&gt;

&lt;p&gt;But I didn’t include the liveness scores in the export. Not because they’re secret—they’re deterministic, built from existing signals—but because the investor didn’t ask for anti-spoof logic. They asked for behavioral data. And if I’d thrown in an extra “liveness” field unprompted, it would’ve looked like I was trying to oversell. Like I needed the data to &lt;em&gt;do more&lt;/em&gt; than it does.&lt;/p&gt;

&lt;p&gt;That’s the trap so many fall into: inflating the narrative to match the funding stage. We’re pre-seed. Raising €2M at €6M pre. No customers. No revenue. Just code, patents pending, and a medical advisor who wrote an ethical positioning paper—not a clinical validation. So when I anonymize data, I don’t hide the emptiness. I highlight it. No user IDs. No geolocation. No device metadata. Just clean vectors ticking at 25Hz, each one a snapshot of face, voice, and motion—nothing more.&lt;/p&gt;

&lt;p&gt;This changes how I frame everything. The next request will probably ask for signal distributions, noise floors, failure modes. I’m preparing those now. Not to impress, but to expose. Because what EmoPulse is building isn’t an AI oracle—it’s infrastructure. Like Stripe for biometrics, but in the browser, on-device, sub-50ms. The dashboard (&lt;a href="https://www.emopulse.app/dashboard.html" rel="noopener noreferrer"&gt;https://www.emopulse.app/dashboard.html&lt;/a&gt;) shows it raw: no smoothing, no storytelling.&lt;/p&gt;

&lt;p&gt;What this forces is a new discipline: shipping truth instead of potential. No more “our model can detect depression with 95% accuracy”—we don’t claim that. Not now, not ever. Because once you start fudging the boundary between literature and implementation, you’re not building a company. You’re building a story. And stories don’t run on WebAssembly.&lt;/p&gt;

&lt;p&gt;What do you actually leave out when you’re trying to be believed?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeptech</category>
      <category>founders</category>
    </item>
    <item>
      <title>We Were Wrong to Call It a Medical Device</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:50 +0000</pubDate>
      <link>https://dev.to/emopulse/we-were-wrong-to-call-it-a-medical-device-pin</link>
      <guid>https://dev.to/emopulse/we-were-wrong-to-call-it-a-medical-device-pin</guid>
      <description>&lt;p&gt;Earlier this week, I reviewed a pitch deck draft and caught myself: “non-invasive medical-grade biometric sensing.” I paused. That phrase had been carved into every narrative I’d built since day one. Then I remembered what Dr. Vasina told me six months ago, over a crackling EU-to-Chicago Zoom call: “You’re not a medical device. Stop saying you are.”&lt;/p&gt;

&lt;p&gt;She wasn’t shutting down ambition. She was protecting rigor. And she was right.&lt;/p&gt;

&lt;p&gt;Most founders, especially in deep tech, chase the halo of “medical.” It sounds authoritative. It implies validation, precision, trust. But slapping a medical label on something that doesn’t meet MDR or SaMD standards isn’t just misleading—it erodes credibility. It lays regulatory landmines. It confuses engineers, investors, and eventually, users. I wanted the prestige without the predicate. Vasina called that out fast. Her advisory wasn’t about blocking access to healthcare use cases; it was about building ethical boundaries before the product could be misinterpreted, overhyped, or misused. She forced me to ask: &lt;em&gt;What are we actually building?&lt;/em&gt; Not what we wished it were, but what it &lt;em&gt;is&lt;/em&gt;, materially and legally.&lt;/p&gt;

&lt;p&gt;EmoPulse extracts 47 signals—facial action units, rPPG-derived heart rate, voice prosody, gaze dynamics, microexpressions—from a standard RGB camera, all in-browser via WebAssembly. The stack is lean: MediaPipe, ARKit blendshapes, a custom rPPG implementation, and deterministic state fusion on the server. No training. No ML pipelines. We implement published, peer-reviewed methods—Giannakakis et al., IEEE, MDPI—on-device. Output is a structured vector: timestamped, normalized, deterministic. The server, a $0/month Oracle ARM instance in Chicago, only receives, logs, and forwards. All perception happens client-side, sub-50ms end-to-end.&lt;/p&gt;
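
&lt;p&gt;To show what “deterministic state fusion” means in practice, here is a sketch (the weights and signal names are placeholders, not our production constants):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A derived indicator is a fixed weighted sum of normalized observables.
# No training, no learned parameters, nothing to drift.
WEIGHTS = {"au_tension": 0.40, "bpm_elevation": 0.35, "gaze_instability": 0.25}

def derived_indicator(signals):
    score = sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
    return max(0.0, min(1.0, score))  # clamp to [0, 1]
&lt;/code&gt;&lt;/pre&gt;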

&lt;p&gt;We built liveness scoring in April 2026. It runs server-side, on the existing /state stream. Three penalties: BPM instability (spoofs can’t fake pulse variance), gaze freeze plus no blinks (static photo tell), and microexpression burst in the first frame (a video replay artifact). On a validation set of 18 real sessions and 2 spoof attempts, separation was clean: live sessions scored 0.6–1.0, spoofs 0.2–0.4. Threshold at 0.5. Margin of 0.2. It’s not FaceTec. It’s not meant to be. It’s a first-line filter—closing the cheap spoof gap. But even that modest system clarified something: our role isn’t diagnosis, verification, or classification. It’s infrastructure. We’re the sensor layer, not the decision engine.&lt;/p&gt;

&lt;p&gt;Vasina’s warning reshaped everything. We’re not positioning inside regulated health tech. We’re outside it—by design. That’s not a limitation. It’s a pivot to scalability. Our use cases are KYC, telehealth augmentation, defense operator monitoring (with consent), and automotive driver-state. Think Stripe for biometrics: embeddable, deterministic, lightweight. Not a diagnostic tool. Not a clinical system. A behavioral perception layer—full stop.&lt;/p&gt;

&lt;p&gt;That clarity changed our roadmap. Our pre-seed round—EUR 2M at EUR 6M pre-money, currently raising—now reflects infrastructure positioning. Investors probing SaMD pathways shifted to asking about API latency, vector throughput, and spoof resistance margins. That’s the right conversation. We filed three EU patents (2026-502, 508, 503), all covering signal fusion and liveness logic, not clinical claims. We registered with SAM.gov (CAGE 19KV6). No deployed customers yet. Zero third-party traffic. But the pipeline is real: SLC Digital in due diligence, FBI BAA and DARPA BAAT in early review, YC Summer 2026 on the radar. All of them care about signal fidelity, not FDA clearance—because we’re not selling a medical device.&lt;/p&gt;

&lt;p&gt;Calling it one would’ve derailed that. We’d be stuck explaining why we don’t have clinical validation, why Vasina hasn’t reviewed patient data, why we’re not pursuing MDR. Instead, we’re building trust through transparency: deterministic formulas, no cloud inference, no black-box models. The state vector is open, inspectable, reproducible. What you see is what you get.&lt;/p&gt;

&lt;p&gt;It’s strange how freeing it is to &lt;em&gt;not&lt;/em&gt; be something. To strip away the false prestige and build on actual technical truth. I used to think “medical” was the highest bar. Now I think the highest bar is &lt;em&gt;honesty&lt;/em&gt;—in labeling, in capability, in ambition.&lt;/p&gt;

&lt;p&gt;If you’re building with biometrics, ask yourself: are you solving a clinical problem—or enabling a perceptual one? The difference matters.&lt;/p&gt;

&lt;p&gt;Try the demo: &lt;a href="https://www.emopulse.app/dashboard.html" rel="noopener noreferrer"&gt;https://www.emopulse.app/dashboard.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeptech</category>
      <category>founders</category>
    </item>
    <item>
      <title>Shipping liveness in four hours changed everything</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:49 +0000</pubDate>
      <link>https://dev.to/emopulse/shipping-liveness-in-four-hours-changed-everything-lki</link>
      <guid>https://dev.to/emopulse/shipping-liveness-in-four-hours-changed-everything-lki</guid>
      <description>&lt;p&gt;Earlier this week, an investor paused mid-call and said, “You’re processing biometrics in the browser with sub-50ms latency. But what stops someone from holding up a photo?” I didn’t have a real-time answer. I had assumptions—about spoofing being low-effort, about enterprise buyers layering in their own liveness—but not a technical one. The call ended. I opened a new tab, pulled the last 18 sessions from our internal logs, and started writing.&lt;/p&gt;

&lt;p&gt;Most people think spoof detection requires machine learning, specialized hardware, or cloud-heavy inference. That’s true if you’re building a turnkey identity platform. But we’re not. EmoPulse is a behavioral perception layer. We don’t own the final decision—we feed signals. So when the gap was flagged, the real question wasn’t “How do we build a liveness model?” It was “What can we derive &lt;em&gt;now&lt;/em&gt; from signals we’re already emitting?” That shift—away from prediction, toward deterministic signal logic—changed everything. It’s not flashy. But it’s fast. And it works.&lt;/p&gt;

&lt;p&gt;Our /state vector already carries 47 biometric and behavioral signals: rPPG-derived heart rate, 52 ARKit blendshapes, blink frequency, gaze vector, facial action units, voice prosody. All extracted on-device via WebAssembly from a standard RGB camera. No cloud inference. No data exfiltration. Sub-50ms round-trip from frame to server. That means the raw material for anti-spoofing was already in flight—we just weren’t using it. The insight: liveness isn’t a new model. It’s a state machine over existing signals.&lt;/p&gt;

&lt;p&gt;So I built a server-side scorer that runs on the Flask endpoint, analyzing a sliding window of the last five ticks per session. Three deterministic penalties:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BPM instability&lt;/strong&gt;: If rPPG shows heart rate standard deviation &amp;gt;12 BPM across five ticks, it fails. Real faces have micro-variations, but not chaos. Synthetic surfaces, especially replays, cause rPPG to hunt—color shifts don’t map to physiology. This penalty catches phone screen replays.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gaze and blink freeze&lt;/strong&gt;: Gaze stability &amp;gt;95% (per MediaPipe normalized vectors) plus zero blinks over five ticks. Humans glance. Humans blink. Photos don’t. This is the printed photo signature.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Micro-expression burst&lt;/strong&gt;: If micro-expression count &amp;gt;8 &lt;em&gt;or&lt;/em&gt; Duchenne smile count &amp;gt;20 in the first tick, it triggers. Why? Because spoofers often start a video replay mid-laugh. Real onboarding ramps up. Replay starts peak.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each penalty scores 0.3. Base liveness = 1.0. Threshold for live: 0.5. The validation set: 18 real human sessions (3 subjects), 2 spoof attempts (1 photo, 1 phone replay). Real sessions scored 0.6–1.0. Spoofs: 0.2–0.4. 100% separation. Margin: 0.2. Not perfect. But enough to close the easy gap.&lt;/p&gt;
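
&lt;p&gt;In code, the whole scorer fits on one screen. A sketch with the thresholds above (field names on the tick dicts are illustrative, not our exact schema):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from statistics import pstdev

def liveness_score(ticks):
    """Score a sliding window of the last five /state ticks."""
    score = 1.0  # base liveness

    # Penalty 1: BPM instability. Replay surfaces make rPPG hunt.
    if pstdev(t["bpm_rppg"] for t in ticks) &amp;gt; 12:
        score -= 0.3

    # Penalty 2: gaze freeze plus zero blinks. The printed-photo tell.
    gaze_frozen = min(t["gaze_stability"] for t in ticks) &amp;gt; 95  # percent
    no_blinks = sum(t["blink_count"] for t in ticks) == 0
    if gaze_frozen and no_blinks:
        score -= 0.3

    # Penalty 3: expression burst on the first tick. Replays start peak.
    first = ticks[0]
    if first["micro_expr_count"] &amp;gt; 8 or first["duchenne_count"] &amp;gt; 20:
        score -= 0.3

    return score

def is_live(ticks):
    return liveness_score(ticks) &amp;gt;= 0.5  # live sessions scored 0.6-1.0
&lt;/code&gt;&lt;/pre&gt;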

&lt;p&gt;We’re not replacing FaceTec or iProov. We’re preventing the lazy attack. And we did it in four hours—no new models, no cloud scaling, no client-side bloat. Just deterministic logic on signals we already compute.&lt;/p&gt;

&lt;p&gt;This is the EmoPulse pattern: extract, derive, deliver. We implement peer-reviewed methods—Giannakakis et al., IEEE, MDPI—not train our own. Our stress classifier? Published path. Our HRV proxy? RMSSD-like, short-window, fixed weights. No ML training. No weights to drift. The stack runs on a $0/month Oracle ARM box in Chicago (4 OCPU, 24 GB RAM). The perception is in the browser. The intelligence is in the composition.&lt;/p&gt;
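
&lt;p&gt;To make the HRV proxy concrete: RMSSD is the root mean square of successive differences between inter-beat intervals. A minimal sketch of the textbook formula (ours is RMSSD-like, not identical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import math

def rmssd(ibi_ms):
    """ibi_ms: inter-beat intervals in milliseconds, oldest first.
    Assumes at least two intervals in the window."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rmssd([820, 845, 810, 860, 835])  # one scalar HRV proxy per short window
&lt;/code&gt;&lt;/pre&gt;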

&lt;p&gt;Shipping this liveness scorer didn’t just patch a gap. It proved the model: lightweight, deterministic, signal-native scoring beats waiting for “perfect” AI. It forces the next step—layering in ethical validation (Dr. Vasina is reviewing the penalty logic for consent implications) and preparing for real integration traffic. SLC Digital is in due diligence. The pre-seed round is open: EUR 2M at EUR 6M pre-money. But we still have zero deployed customers. Zero production partners. This is still ground zero.&lt;/p&gt;

&lt;p&gt;But now, when an investor asks about spoofing, I don’t deflect. I show them the logs. I show them the 0.2 vs 0.7 gap. I say: “It’s not bulletproof. But it’s running. And it’s simple.”&lt;/p&gt;

&lt;p&gt;What’s the simplest thing you’ve shipped that changed the conversation?  &lt;/p&gt;

&lt;p&gt;Demo: &lt;a href="https://www.emopulse.app/dashboard.html" rel="noopener noreferrer"&gt;https://www.emopulse.app/dashboard.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeptech</category>
      <category>founders</category>
    </item>
    <item>
      <title>rPPG chaos: a noisy signal's unexpected role</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:47 +0000</pubDate>
      <link>https://dev.to/emopulse/rppg-chaos-a-noisy-signals-unexpected-role-30ca</link>
      <guid>https://dev.to/emopulse/rppg-chaos-a-noisy-signals-unexpected-role-30ca</guid>
      <description>&lt;p&gt;Earlier this week, I found myself staring at a noisy heart rate signal, wondering how it could possibly be useful. As the sole founder and engineer of EmoPulse, I've grown accustomed to dealing with imperfect data, but this particular signal seemed like a lost cause. And yet, as I delved deeper into the issue, I stumbled upon an unexpected insight: this noisy signal could become our anti-spoof primitive.&lt;/p&gt;

&lt;p&gt;The lesson: what looks like a flaw can become a strength. The noise and variability of our remote photoplethysmography (rPPG) heart rate signal made it seem like a poor candidate for any serious application. But as we explored its properties, we realized that this very noise could be used to detect spoofing attempts. It's counter-intuitive: the thing that makes our signal imperfect is also what makes it secure.&lt;/p&gt;

&lt;p&gt;Our custom rPPG implementation, which extracts heart rate from face color changes, is one part of our broader behavioral perception infrastructure layer. We use MediaPipe's 478 facial landmarks and 52 ARKit blendshapes to extract 47 biometric and behavioral signals from any standard RGB camera, all on-device in the browser via WebAssembly. The output is a structured state vector posted to a Flask /state endpoint on our server, which runs on a $0/month Oracle ARM box in Chicago. While developing our liveness scoring system, we discovered that the rPPG signal's noise could expose anomalies in the data: a BPM standard deviation above 12, combined with other penalty signals like gaze stability and micro-expression count, cleanly separated the spoof attempts from the live sessions in our small validation set.&lt;/p&gt;
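
&lt;p&gt;The check itself is tiny. A sketch (the 12 BPM limit is the one our scorer uses; the example numbers are invented):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from statistics import pstdev

def bpm_unstable(bpm_window, limit=12.0):
    """Rolling std-dev of rPPG-derived BPM over the 5-tick window."""
    return pstdev(bpm_window) &amp;gt; limit

bpm_unstable([72, 74, 71, 73, 72])  # False: physiological micro-variation
bpm_unstable([68, 95, 51, 88, 60])  # True: rPPG hunting on a replay
&lt;/code&gt;&lt;/pre&gt;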

&lt;p&gt;This insight sharpens the trade-offs we face between security, accuracy, and usability. Our liveness scoring system, which runs server-side over a sliding window of the last 5 ticks per session, is one example of balancing those demands. Moving forward, we'll keep probing the properties of our signals and finding creative ways to leverage their imperfections.&lt;/p&gt;

&lt;p&gt;What will be the next unexpected benefit to arise from our imperfect signals, and how will it change the course of our development?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeptech</category>
      <category>founders</category>
    </item>
    <item>
      <title>Building a 5-layer neurosymbolic perception engine alone</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:46 +0000</pubDate>
      <link>https://dev.to/emopulse/building-a-5-layer-neurosymbolic-perception-engine-alone-4pko</link>
      <guid>https://dev.to/emopulse/building-a-5-layer-neurosymbolic-perception-engine-alone-4pko</guid>
      <description>&lt;p&gt;Earlier this week, I found myself staring at a wall of code, wondering how I ended up building a 5-layer neurosymbolic perception engine by myself. It was a moment of exhaustion, but also a moment of clarity. I realized that the hardest part of building EmoPulse wasn't the technology itself, but the solitude of making decisions without a team to bounce ideas off of.&lt;/p&gt;

&lt;p&gt;As I reflected on the past year, I noticed a pattern. Every time I thought I had made a breakthrough, I would soon realize that it was just a small step in a much larger journey. The stress classification path, for example, implements published peer-reviewed methodology from Giannakakis et al., which reports 93 to 96 percent accuracy on cohorts of N=48 to 58. But what does that really mean? It means that I have to trust the research, trust my implementation, and trust that it will work in the real world. It's a heavy burden to carry alone.&lt;/p&gt;

&lt;p&gt;The technical reality of building EmoPulse is daunting. I've had to extract 47 biometric and behavioral signals from any standard RGB camera, all on-device in the browser via WebAssembly. The output is a structured state vector posted to a Flask /state endpoint on the server, with sub-50ms end-to-end latency from frame capture to state vector emission. It's a complex system, and one that demands a deep understanding of the underlying technology. I've leaned on tools like MediaPipe and a custom rPPG implementation to get the job done.&lt;/p&gt;
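
&lt;p&gt;The receiving side is deliberately simple. A minimal sketch of what a /state endpoint like ours looks like in Flask (the route shape and field names here are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from flask import Flask, jsonify, request

app = Flask(__name__)
SESSIONS = {}  # maps each session id to its most recent ticks

@app.post("/state")
def state():
    vec = request.get_json(force=True)
    window = SESSIONS.setdefault(vec["session_id"], [])
    window.append(vec)
    del window[:-5]  # keep only the last five ticks
    return jsonify(ok=True)
&lt;/code&gt;&lt;/pre&gt;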

&lt;p&gt;As I look back on the past year, I realize that building EmoPulse has been a journey of continuous learning. Every decision I make has a ripple effect, and every problem I solve reveals a new set of challenges. The liveness scoring system, for example, was a recent addition, shipped on 2026-04-08. It's a server-side anti-spoof scorer that runs on signals already in the /state payload, with a sliding window of the last 5 ticks per session. Small as it is, it required careful weighing of the potential risks and benefits.&lt;/p&gt;

&lt;p&gt;The journey of building EmoPulse is far from over. In fact, it's just beginning. As I move forward, I'm forced to confront the reality of my own limitations. I have to be careful not to overpromise and underdeliver, and I have to be willing to adapt to changing circumstances. The pre-seed round of EUR 2 million at EUR 6 million pre-money is just the beginning, and I know that the real challenge lies ahead. What will I learn next, and how will it change the course of EmoPulse?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeptech</category>
      <category>founders</category>
    </item>
    <item>
      <title>I just spent a week building a 5-layer neurosymbolic perception engine by myself. The experience was</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:45 +0000</pubDate>
      <link>https://dev.to/emopulse/i-just-spent-a-week-building-a-5-layer-neurosymbolic-perception-engine-by-myself-the-experience-was-3f52</link>
      <guid>https://dev.to/emopulse/i-just-spent-a-week-building-a-5-layer-neurosymbolic-perception-engine-by-myself-the-experience-was-3f52</guid>
      <description>&lt;p&gt;I just spent a week building a 5-layer neurosymbolic perception engine by myself. The experience was exhilarating and challenging. I'll share my journey and key takeaways. Follow @emopulseai on Telegram for daily AI insights -&amp;gt; t.me/emopulseai&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Last week, I dove into building a neurosymbolic perception engine. I experimented with 47 signals fr</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:43 +0000</pubDate>
      <link>https://dev.to/emopulse/last-week-i-dove-into-building-a-neurosymbolic-perception-engine-i-experimented-with-47-signals-fr-26fg</link>
      <guid>https://dev.to/emopulse/last-week-i-dove-into-building-a-neurosymbolic-perception-engine-i-experimented-with-47-signals-fr-26fg</guid>
      <description>&lt;p&gt;Last week, I dove into building a neurosymbolic perception engine. I experimented with 47 signals from an RGB camera, and the results were fascinating. The potential for defense, health, and enterprise applications is vast. Follow @emopulseai on Telegram for daily AI insights -&amp;gt; t.me/emopulseai&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>I dove headfirst into building a neurosymbolic perception engine last week, tackling the challenges</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:42 +0000</pubDate>
      <link>https://dev.to/emopulse/i-dove-headfirst-into-building-a-neurosymbolic-perception-engine-last-week-tackling-the-challenges-1a42</link>
      <guid>https://dev.to/emopulse/i-dove-headfirst-into-building-a-neurosymbolic-perception-engine-last-week-tackling-the-challenges-1a42</guid>
      <description>&lt;p&gt;I dove headfirst into building a neurosymbolic perception engine last week, tackling the challenges of integrating symbolic AI with connectionist AI. My key takeaway? It's all about balance. Follow @emopulseai on Telegram for daily AI insights -&amp;gt; t.me/emopulseai&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Last week, I dove head-first into building a neurosymbolic perception engine. I worked on 3 differen</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:40 +0000</pubDate>
      <link>https://dev.to/emopulse/last-week-i-dove-head-first-into-building-a-neurosymbolic-perception-engine-i-worked-on-3-differen-2ej3</link>
      <guid>https://dev.to/emopulse/last-week-i-dove-head-first-into-building-a-neurosymbolic-perception-engine-i-worked-on-3-differen-2ej3</guid>
      <description>&lt;p&gt;Last week, I dove head-first into building a neurosymbolic perception engine. I worked on 3 different iterations, and I'm excited to share my findings. Follow @emopulseai on Telegram for daily AI insights -&amp;gt; t.me/emopulseai&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Last Tuesday at 3 AM, I trained my first model to detect pupil dilation from my laptop’s webcam. No</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:39 +0000</pubDate>
      <link>https://dev.to/emopulse/last-tuesday-at-3-am-i-trained-my-first-model-to-detect-pupil-dilation-from-my-laptops-webcam-no-450a</link>
      <guid>https://dev.to/emopulse/last-tuesday-at-3-am-i-trained-my-first-model-to-detect-pupil-dilation-from-my-laptops-webcam-no-450a</guid>
      <description>&lt;p&gt;Last Tuesday at 3 AM, I trained my first model to detect pupil dilation from my laptop’s webcam. No IR sensors. No chest straps. Just raw RGB and a PyTorch script. After 72 failed epochs, it caught my stress spike when I spilled coffee. That moment, EmoPulse was real. Now it tracks 47 distinct biometric signals — all in real time, all from one camera. I’m building this solo, no lab, no legacy hardware. Just code, biology, and obsession. Follow @emopulseai on Telegram for daily AI insights -&amp;gt; t.me/emopulseai&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>I dove headfirst into building a neurosymbolic perception engine last week, tackling the challenges</title>
      <dc:creator>EmoPulse</dc:creator>
      <pubDate>Fri, 01 May 2026 11:38:38 +0000</pubDate>
      <link>https://dev.to/emopulse/i-dove-headfirst-into-building-a-neurosymbolic-perception-engine-last-week-tackling-the-challenges-192c</link>
      <guid>https://dev.to/emopulse/i-dove-headfirst-into-building-a-neurosymbolic-perception-engine-last-week-tackling-the-challenges-192c</guid>
      <description>&lt;p&gt;I dove headfirst into building a neurosymbolic perception engine last week, tackling the challenges of integrating symbolic AI with connectionist models. My key takeaway? It's all about finding the right balance between the two. Follow @emopulseai on Telegram for daily AI insights -&amp;gt; t.me/emopulseai&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
