This is a submission for the Algolia MCP Server Challenge
## What I Built
How’s My Day? is a voice-first emotional wellness app that helps people reflect on their mood — all through a single spoken sentence.
Users simply tap a mic button, speak freely, and the app:
- Transcribes their voice in real time using AssemblyAI
- Detects the emotion behind their voice using AssemblyAI's Speech Understanding
- Searches Algolia MCP Server for a warm, human-written emotional tip
- Reads the tip back using AssemblyAI’s or ElevenLabs’ Text-to-Speech
The result is a calm, empathetic micro-interaction that helps people feel heard — no chatbot, no complexity.
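For the curious, the whole loop fits in a few async calls. Below is a minimal TypeScript sketch of the detection half, using the AssemblyAI Node SDK. The `sentiment_analysis` flag stands in for the Speech Understanding signal, and the mood mapping at the end is an illustrative assumption, not the exact production logic:

```typescript
import { AssemblyAI } from "assemblyai";

const client = new AssemblyAI({ apiKey: process.env.ASSEMBLYAI_API_KEY! });

// Transcribe the recorded clip and request sentiment analysis alongside it.
// sentiment_analysis stands in here for the Speech Understanding emotion
// signal; the production app may use a different feature.
async function detectMood(audioUrl: string): Promise<string> {
  const transcript = await client.transcripts.transcribe({
    audio: audioUrl,
    sentiment_analysis: true,
  });

  // Hypothetical mapping from the dominant sentiment to a mood keyword
  // that the Algolia index understands ("anxious", "grateful", ...).
  const sentiment =
    transcript.sentiment_analysis_results?.[0]?.sentiment ?? "NEUTRAL";
  return sentiment === "NEGATIVE" ? "anxious" : "grateful";
}
```

The mood keyword this returns is what gets handed to Algolia, as described below.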
## Demo
📺 Video Walkthrough
📦 [GitHub Repo](https://github.com/abhishektaneja09/how-is-your-day)
## How I Utilized the Algolia MCP Server
Algolia's MCP Server powers the emotional-intelligence backend of our app.
Here's how we used it:
- We created a static `moods.json` index with handcrafted emotional responses for moods like `anxious`, `burnt out`, `grateful`, `lonely`, etc.
- When the emotion is detected (via AssemblyAI), we query Algolia MCP with that mood keyword
- MCP returns a personalized, human-style tip, an emoji, and a short comforting message (see the sketch after this list)
- This response is instantly delivered to the user, giving the illusion of true emotional understanding
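To make that concrete, here is roughly what the wiring could look like with the v4 `algoliasearch` client, used here as a stand-in for the MCP round-trip. The record shape, index name, and credentials below are assumptions for illustration; the actual `moods.json` fields may differ:

```typescript
import algoliasearch from "algoliasearch";

// Assumed record shape for entries in the moods.json index.
interface MoodTip {
  mood: string;    // "anxious", "burnt out", "grateful", "lonely", ...
  emoji: string;   // a small visual cue shown with the tip
  tip: string;     // the warm, human-written suggestion
  message: string; // short comforting line read back to the user
}

const searchClient = algoliasearch(
  process.env.ALGOLIA_APP_ID!,
  process.env.ALGOLIA_SEARCH_KEY!
);
const moodsIndex = searchClient.initIndex("moods");

// Query the index with the detected mood keyword and take the top hit.
async function fetchTip(mood: string): Promise<MoodTip | undefined> {
  const { hits } = await moodsIndex.search<MoodTip>(mood, { hitsPerPage: 1 });
  return hits[0];
}
```

Because every record is handwritten, the search is really a fast keyword lookup, and Algolia's typo tolerance means near-miss mood labels still land on the right tip.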
The fast response time of MCP and its simple API made it the perfect choice for a real-time emotional UX.
## Key Takeaways
- Voice-first interactions can feel personal if they're designed with empathy
- AssemblyAI’s Speech Understanding feature can infer emotional tone from voice — not just text
- Algolia MCP makes it easy to build “domain brains” — in our case, an emotional advice brain
- Text-to-Speech should be slow, warm, and natural. We used AssemblyAI, with ElevenLabs as a fallback (see the pacing sketch after this list)
- People don’t want a chatbot to solve their feelings — they want to be heard
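On that TTS point: even the browser's built-in Web Speech API can get you most of the way there while a premium voice loads. A tiny sketch, with a rate value we'd expect to tune by ear:

```typescript
// Browser Web Speech API fallback: read the tip slightly slower than
// default so the delivery feels calm rather than robotic.
function speakTip(tip: string): void {
  const utterance = new SpeechSynthesisUtterance(tip);
  utterance.rate = 0.85; // < 1.0 slows the voice; value tuned by ear
  window.speechSynthesis.speak(utterance);
}
```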