
Varun Dasharadhi

How I built an emotion-reading AI in 24 hours using Claude + Hume EVI

The Idea

What if AI could actually feel what you're feeling —
not just read your words?

That was the spark behind EmpathIQ, which I built
solo in 24 hours for the Replit 10 Buildathon.

What It Does

EmpathIQ combines:

  • 👁️ Facial emotion detection via webcam
  • 🎙️ Vocal emotion analysis via Hume EVI
  • 🤖 Claude API responses calibrated to both signals

The result? An AI that responds to how you actually
feel — not just what you type.
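To make that concrete, here's a minimal sketch of how two emotion readings could be folded into a Claude system prompt. The `EmotionReading` shape, the function name, and the prompt wording are all my own illustration, not code from the EmpathIQ repo.

```typescript
// Hypothetical shape for a single emotion reading (not from the actual repo).
interface EmotionReading {
  emotion: string;     // e.g. "sad", "angry", "happy"
  confidence: number;  // 0..1
}

// Build a system prompt that tells Claude how the user currently seems,
// so the reply tone can be calibrated to both signals.
function buildSystemPrompt(face: EmotionReading, voice: EmotionReading): string {
  // Lean toward whichever channel is more confident when they disagree.
  const dominant = face.confidence >= voice.confidence ? face : voice;
  return [
    "You are an empathetic assistant.",
    `Facial emotion: ${face.emotion} (${Math.round(face.confidence * 100)}% confidence).`,
    `Vocal emotion: ${voice.emotion} (${Math.round(voice.confidence * 100)}% confidence).`,
    `Calibrate your tone for someone who currently seems ${dominant.emotion}.`,
  ].join("\n");
}

const prompt = buildSystemPrompt(
  { emotion: "sad", confidence: 0.8 },
  { emotion: "neutral", confidence: 0.4 },
);
console.log(prompt);
```

This prompt string would then be passed as the `system` parameter in a normal Claude API call.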

The Feature That Surprised Me Most

Smart Glasses mode 🥽

Point the camera at someone ELSE. EmpathIQ reads
THEIR emotion and gives YOU real-time coaching
on what to say.

Angry person in front of you?
→ "Lower your voice and acknowledge their concern
without arguing"

The future vision is Meta Ray-Ban integration —
real-time emotional coaching in every room you
walk into.
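The coaching loop can be sketched as a mapping from the other person's detected emotion to a suggestion. In the real app Claude generates these tips, so the canned strings and the fallback below are placeholders of my own, not EmpathIQ's actual output.

```typescript
// Illustrative emotion-to-coaching lookup; the production path would ask
// Claude for a tip instead of using canned strings.
const COACHING_TIPS: Record<string, string> = {
  angry: "Lower your voice and acknowledge their concern without arguing.",
  sad: "Slow down, soften your tone, and ask an open-ended question.",
  happy: "Match their energy and build on what's going well.",
};

function coachingTip(emotion: string): string {
  // Fall back to a safe default for emotions we have no tip for.
  return COACHING_TIPS[emotion] ?? "Stay neutral and keep listening.";
}

console.log(coachingTip("angry"));
// → "Lower your voice and acknowledge their concern without arguing."
```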

The Tech

  • React + Vite
  • face-api.js — facial emotion detection
  • Hume EVI — vocal emotion AI
  • Claude API — emotionally calibrated responses
  • Recharts — emotion timeline chart
  • Tailwind CSS
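On the face side, face-api.js returns per-frame expression scores (neutral, happy, sad, angry, fearful, disgusted, surprised) that sum to roughly 1. A small helper can pick the dominant one; the `minConfidence` threshold here is my own choice, and the webcam/model-loading code is omitted.

```typescript
type ExpressionScores = Record<string, number>;

// Pick the highest-scoring expression; below the threshold, report
// "uncertain" rather than guessing from noise.
function dominantExpression(
  scores: ExpressionScores,
  minConfidence = 0.5,
): { emotion: string; confidence: number } {
  let emotion = "neutral";
  let confidence = 0;
  for (const [name, score] of Object.entries(scores)) {
    if (score > confidence) {
      emotion = name;
      confidence = score;
    }
  }
  return confidence >= minConfidence
    ? { emotion, confidence }
    : { emotion: "uncertain", confidence };
}

// In the browser this would consume a face-api.js detection like:
//   const det = await faceapi
//     .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
//     .withFaceExpressions();
//   dominantExpression(det.expressions);
console.log(dominantExpression({ neutral: 0.1, happy: 0.7, sad: 0.2 }));
```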

The Hardest Part

Combining two real-time emotion signals (face + voice)
into one coherent reading without lag or conflicts.
The fusion panel took several iterations to get right.
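One common way to tame that problem is a weighted blend plus exponential smoothing, so a single noisy frame can't flip the displayed emotion. The weights, smoothing factor, and conflict rule below are my assumptions for a sketch, not EmpathIQ's actual fusion logic.

```typescript
interface Signal {
  emotion: string;
  confidence: number; // 0..1
}

class EmotionFuser {
  private smoothed = new Map<string, number>();

  constructor(
    private faceWeight = 0.6, // assume face is slightly less noisy than voice
    private alpha = 0.3,      // smoothing factor: higher reacts faster, jitters more
  ) {}

  // Blend both signals into per-emotion evidence, then smooth over time.
  update(face: Signal, voice: Signal): Signal {
    const frame = new Map<string, number>();
    frame.set(face.emotion, (frame.get(face.emotion) ?? 0) + this.faceWeight * face.confidence);
    frame.set(voice.emotion, (frame.get(voice.emotion) ?? 0) + (1 - this.faceWeight) * voice.confidence);

    // Decay every known emotion, then mix in this frame's evidence.
    for (const key of new Set([...this.smoothed.keys(), ...frame.keys()])) {
      const prev = this.smoothed.get(key) ?? 0;
      const now = frame.get(key) ?? 0;
      this.smoothed.set(key, (1 - this.alpha) * prev + this.alpha * now);
    }

    // The fused reading is whichever emotion has the most accumulated evidence.
    let best: Signal = { emotion: "neutral", confidence: 0 };
    for (const [emotion, confidence] of this.smoothed) {
      if (confidence > best.confidence) best = { emotion, confidence };
    }
    return best;
  }
}
```

Called once per frame, this resolves face/voice conflicts toward the more confident channel, while the smoothing keeps a one-frame blip from overriding a sustained reading.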

What I'd Do Differently

Start on voice mode earlier: the EVI integration
took longer than expected and nearly missed the
24-hour deadline.

What's Next

  • Apple Watch pulse + biometric fusion
  • Meta smart glasses integration
  • Clinical/therapy version (HIPAA compliant)
  • Mobile app

Try It

🔗 Live: https://empathiq-studio--varundasharadhi.replit.app
🎬 Demo: https://www.loom.com/share/ee3177d34b40404487115fca5f8366ed
⭐ GitHub: https://github.com/VarunDasharadhi/Empathiq-Studio

Would love your feedback! 🙏
