
Rashi


Building an AI Event Assistant with Gemini and Genkit


This blog post was written for the purposes of entering the Google AI Hackathon on Devpost.


A few months ago, I attended a Science Olympiad competition as a chaperone for my daughter's team. While the students were running between events, I noticed the coach trying to manage everything with printed schedules, spreadsheets, and a pile of notes. Students kept asking where to go next, attendance had to be tracked manually, and it was difficult to know where everyone was at any moment.

Watching that unfold made me think: coordinating events shouldn't be this complicated. What if there was a single system that could manage schedules, track teams in real time, and even respond to voice commands from the coach? That idea eventually turned into TeamSync, an AI-powered event coordination platform designed to help coaches and organizers run complex events more smoothly.

TeamSync acts as a central command center where organizers can create schedules, manage teams, track attendance, and communicate with participants. But the feature that really changes the experience is the AI Voice Assistant. Instead of clicking through multiple screens, a coach can simply speak to the system and it performs the action directly — hands-free.

Making this kind of interaction work smoothly required more than just connecting a language model to a chatbot interface. The AI needed to understand user commands, interpret them in the context of the current event, and then trigger real actions in the system.

Gemini's function calling made this possible. Rather than just generating text responses, the AI can invoke actual operations within the app. When a coach says something like "start attendance," the assistant doesn't just respond with text — it triggers the real action and updates the dashboard in real time.
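To make that concrete, here is a minimal sketch of the dispatch pattern involved: the model returns a structured function call (a name plus arguments) instead of free text, and the app routes it to a real handler. The names `startAttendance`, `endAttendance`, and `dispatch` are illustrative assumptions, not TeamSync's actual code, and the registry here stands in for the tool definitions a framework like Genkit would manage.

```typescript
// Shape of a function call emitted by the model (illustrative).
type FunctionCall = { name: string; args: Record<string, unknown> };

// Registry of app actions the assistant is allowed to trigger.
// In the real app these handlers would update the dashboard and database.
const actions: Record<string, (args: Record<string, unknown>) => string> = {
  startAttendance: (args) =>
    `Attendance started for event ${args.eventId ?? "current"}`,
  endAttendance: (args) =>
    `Attendance closed for event ${args.eventId ?? "current"}`,
};

// Route a model-emitted function call to the matching handler.
function dispatch(call: FunctionCall): string {
  const handler = actions[call.name];
  if (!handler) return `Unknown action: ${call.name}`;
  return handler(call.args);
}

// Example: the model heard "start attendance" and emitted this call.
const result = dispatch({
  name: "startAttendance",
  args: { eventId: "regional-42" },
});
// result: "Attendance started for event regional-42"
```

The key design point is that the model never mutates state directly; it only names an action from an allow-list, and the app decides what that action does.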

Responsiveness was critical. During live events, the assistant needs to react quickly enough to feel natural. By using Gemini's native audio streaming, I was able to build an assistant that listens and responds in real time with low latency. It also works across multiple languages, making TeamSync accessible to teams worldwide.
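The low-latency feel comes from processing the stream incrementally rather than waiting for a complete utterance. As a sketch of that pattern (assumed for illustration, not TeamSync's actual code), a plain async generator stands in here for the audio session's stream of partial transcript chunks:

```typescript
// Stand-in for a streaming transcript source. In the real app, chunks
// would arrive from the live audio session as the coach speaks.
async function* transcriptStream(): AsyncGenerator<string> {
  for (const chunk of ["start ", "attendance ", "for team A"]) {
    yield chunk;
  }
}

// Consume the stream incrementally: the app can react to partial text
// (e.g. show a live caption) instead of blocking until speech ends.
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk; // act on each partial result as soon as it arrives
  }
  return text;
}
```

The same `for await` loop shape applies whether the chunks are transcript text or streamed model output; the point is that the UI updates per chunk, so perceived latency is one chunk, not one full response.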

Beyond voice, I also used Gemini for other AI features across the platform — a text chatbot, image-based schedule extraction, location intelligence, and post-event analytics that summarize attendance and engagement patterns automatically.

The entire application runs on Google Cloud with automated deployment through GitHub Actions.

Working on TeamSync reinforced an important lesson: AI becomes far more valuable when it's connected to real workflows. Instead of simply answering questions, an AI assistant can manage tasks, automate processes, and provide insights that would otherwise take significant manual effort.

What started as a simple observation at a Science Olympiad competition has turned into an exploration of how AI can assist people in real-world coordination. And in many ways, I'm just getting started.
