This is a submission for the Built with Google Gemini: Writing Challenge
What I Built with Google Gemini
Eldora was the product of my first 24-hour hackathon. Although it has undergone several name changes since its initial creation, its core vision has remained the same.
At its heart, Eldora is a multimodal AI counseling tool designed to provide immediate conversational support. By combining computer vision for facial expression detection, the Gemini API for contextual and empathetic responses, and ElevenLabs for real-time voice playback, we aimed to create a live, responsive experience rather than a static chatbot.
The goal was never to replace professional therapy, but to serve as an interim support system — something accessible in the moments when immediate human help isn’t available.
Demo
Landing page
Agent-Human interaction
Check out Eldora!
What I Learned
This was my first 24-hour MLH event, and as a second-year student, it pushed me technically and professionally.
Soft Skills
The biggest non-technical takeaway was learning how to pitch. Building something is one skill — explaining it clearly, confidently, and with impact is another. I learned to frame the problem first, then the tech. That’s something I want to keep improving.
I also had the chance to speak with Mike Swift and several MLH representatives, which gave me practical insight into AI product development and helped me navigate early friction with the Gemini API.
Technical Growth
This project stretched me far beyond my comfort zone.
It was my first serious build using:
- React
- Tailwind CSS
- Docker
- A layered, production-style Gemini integration
Previously, I had only built a basic chatbot. This time, we engineered a multimodal pipeline:
Webcam → Emotion detection (computer vision) → Context injection → Gemini API → ElevenLabs voice playback
This forced me to think about latency, prompt design, state management, and rate limits across multiple services.
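The context-injection step in that pipeline is the piece that made the responses feel empathetic rather than generic. Our actual code isn't shown here, but as a rough sketch it amounted to prepending the detected emotion to the user's message before sending it to Gemini. The emotion labels, hints, and function names below are illustrative assumptions, not our exact implementation:

```python
# Sketch of the context-injection step: prepend the detected facial
# emotion to the user's message so the model can match its tone.
# The label set and function names are illustrative, not our real code.

EMOTION_HINTS = {
    "sad": "The user appears sad. Respond gently and validate their feelings.",
    "anxious": "The user appears anxious. Keep responses calm and grounding.",
    "neutral": "No strong emotion detected. Respond warmly and openly.",
}

def build_prompt(user_message: str, detected_emotion: str) -> str:
    """Combine the vision model's emotion label with the user's text."""
    hint = EMOTION_HINTS.get(detected_emotion, EMOTION_HINTS["neutral"])
    return f"[Context: {hint}]\nUser: {user_message}"

# The resulting string is what would be passed to the Gemini API call,
# e.g. model.generate_content(build_prompt("I failed my exam", "sad"))
```

Keeping the injection as plain text in the prompt (rather than a separate API field) kept the pipeline simple enough to debug at 3 a.m.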
The biggest lesson? AI agents are powerful — but they’re constrained systems. Token limits and API quotas aren’t edge cases; they’re architectural considerations.
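Treating quotas as an architectural concern in practice means wrapping every model call in retry logic instead of letting a 429 crash the demo. A minimal retry-with-exponential-backoff wrapper might look like the sketch below; the retry counts, delays, and error check are assumptions, not code from Eldora:

```python
import random
import time

def call_with_backoff(fn, max_retries=4, base_delay=1.0, is_rate_limit=None):
    """Retry fn() with exponential backoff when a rate-limit error occurs.

    `is_rate_limit` decides whether an exception is retryable; with a real
    Gemini client this would inspect the error for a 429 / quota condition.
    """
    if is_rate_limit is None:
        is_rate_limit = lambda exc: True  # retry everything by default
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as exc:
            if not is_rate_limit(exc) or attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

In a voice-driven app, even this adds noticeable latency, which is why quota planning belongs in the design phase rather than the debugging phase.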
The Unexpected Lesson
In a 24-hour build, your team is everything.
Under time pressure, small problems escalate quickly. We watched another team disband at midnight. I was fortunate to work with a cohesive team that trusted each other and stayed composed when things broke.
In the end, the tech mattered.
The pitch mattered.
But the team mattered most.
Google Gemini Feedback
Gemini has been great to use as a student developer. It was the first agentic AI I integrated into a real project, and since then it has become my go-to model — even over alternatives like OpenAI and Amazon Nova.
What stood out immediately was the ease of integration. The API felt straightforward, the documentation was clear, and getting a working prototype up and running was fast. Beyond that, the responses felt noticeably more humanistic and context-aware, which mattered a lot for our use case — building an AI counseling tool where tone and empathy are critical.
Where we ran into friction was token exhaustion. Our app, Eldora, was deeply intertwined with Gemini calls. During the hackathon, we hit usage limits faster than expected. As a result, we had to rotate API keys just to keep the demo functioning for judges.
For rapid prototyping, Gemini was excellent. But for production-level or high-frequency multimodal use, managing token limits became a significant operational consideration.
Through experience, I’ve realized this challenge is far more manageable in a production environment. With proper quota planning, usage monitoring, and a premium tier configured from the start, the token limitations become a scaling consideration rather than a blocking issue.
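One concrete form of that quota planning is tracking usage on the client side, so exhaustion surfaces in your own code as a graceful fallback instead of a failed demo. This is an illustrative sketch, not something Eldora shipped, and the numeric cap is made up:

```python
# Illustrative client-side budget tracker: count requests against a daily
# cap *before* calling the API, so the app can degrade gracefully
# (queue, cache, or show a fallback message) when the budget runs out.

class QuotaBudget:
    def __init__(self, daily_request_limit: int):
        self.limit = daily_request_limit
        self.used = 0

    def try_acquire(self) -> bool:
        """Record one request and return True if budget remains."""
        if self.used >= self.limit:
            return False
        self.used += 1
        return True

budget = QuotaBudget(daily_request_limit=1500)  # assumed free-tier-style cap
if budget.try_acquire():
    pass  # safe to make the Gemini call here
else:
    pass  # fall back: cached reply, shorter prompt, or a friendly notice
```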
But for prototyping, hackathons, and a broke student developer's budget, Gemini is the cream of the crop.