I built an AI coaching tool for the hackathon that analyzes esports match data and responds with voice. Most of the work went into prompt engineering and fixing API issues. This post covers what I built, what broke, and how I fixed it.
Why This Matters
Professional esports teams like Cloud9 have analysts reviewing hundreds of hours of footage. They track player tendencies, draft patterns, objective timing. Most of this remains manual - someone watching VODs and taking notes.
GRID provides detailed match data for every professional League of Legends and VALORANT game. The missing piece is turning that data into coaching conversations. A spreadsheet showing 22% gank success rate doesn't tell a jungler what to change. A coach saying "you're forcing top lane ganks when your bot lane setups work twice as often" does.
I built an AI that takes match data and explains it like a coach would.
What I Built
Zenith pulls match data from GRID's esports API, analyzes patterns, and responds to questions with voice. Ask "what went wrong this game?" and it returns a spoken answer referencing specific players and moments.
The hackathon required personalized player insights, automated macro game review, and hypothetical outcome predictions. I prioritized making insights conversational rather than just accurate.
AI Model Issues
I started with AWS Bedrock using Amazon Nova Micro for cost reasons. The first version failed entirely. Nova expects a different request format than Claude on Bedrock - a messages schema with an inferenceConfig block instead of Claude's anthropic_version and max_tokens parameters. I spent hours debugging before finding this.
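Roughly, the two request bodies differ like this (a minimal sketch of the native invoke_model schemas; treat exact fields and values as illustrative):

```python
prompt = "What went wrong this game?"

# Amazon Nova: messages schema with generation settings under inferenceConfig.
nova_body = {
    "schemaVersion": "messages-v1",
    "messages": [{"role": "user", "content": [{"text": prompt}]}],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.7},
}

# Anthropic Claude on Bedrock: anthropic_version and max_tokens at the top level.
claude_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": prompt}],
}
```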
After fixing the format, responses were still generic. Statements that could apply to any match. I rewrote prompts several times. The fix was being specific about tone: "Talk like a coach reviewing film with the team. No bullet points. Reference players by name. Be direct about what went wrong."
Then I hit Bedrock rate limits during testing. With the deadline approaching, I added Anthropic's Claude API as a fallback:
```python
async def get_coaching_response(self, prompt: str, context: dict) -> str:
    # Try Bedrock first; if it is rate limited, retry through the Anthropic API.
    try:
        return await self.bedrock_client.invoke(prompt, context)
    except RateLimitError:
        return await self.anthropic_client.invoke(prompt, context)
```
The system tries Bedrock first, falls back to Anthropic if rate limited, and uses pre-written responses if both fail. I added support for multiple Bedrock models (Claude, Llama, Titan, Nova) with automatic format detection.
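A condensed sketch of how that detection can work - dispatch on the model ID prefix and build the matching request body (the helper below is illustrative, not the exact code):

```python
# Map each Bedrock model family prefix to a native request body builder.
BODY_BUILDERS = {
    "anthropic.": lambda p, n: {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": n,
        "messages": [{"role": "user", "content": p}],
    },
    "amazon.nova": lambda p, n: {
        "messages": [{"role": "user", "content": [{"text": p}]}],
        "inferenceConfig": {"maxTokens": n},
    },
    "meta.llama": lambda p, n: {"prompt": p, "max_gen_len": n},
    "amazon.titan": lambda p, n: {
        "inputText": p,
        "textGenerationConfig": {"maxTokenCount": n},
    },
}

def build_request_body(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    for prefix, build in BODY_BUILDERS.items():
        if model_id.startswith(prefix):
            return build(prompt, max_tokens)
    raise ValueError(f"Unsupported Bedrock model: {model_id}")
```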
Making the Coach Useful
The first version had problems. Ask about a specific player, get a generic team overview. Ask the same question twice, get identical responses with the same intro paragraph.
I fixed these over several iterations:
- Player name recognition. Ask "how did Skuba play?" and it looks up that player's stats instead of summarizing the team.
- Context awareness. It tracks conversation history and skips the intro on follow-up questions.
- Query patterns. Different question types trigger different response structures - mistakes vs MVP vs player comparisons.
- Champion and role data. Responses include each player's champion and role for relevant advice.
Prompt engineering took longer than writing code. Making an AI sound like a coach instead of an encyclopedia requires precise instructions.
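To give a flavor of the query-pattern routing, here is a minimal sketch - the pattern list and function names are hypothetical, not the actual implementation:

```python
import re

# Match a question against simple patterns to pick a response structure,
# and check the roster for a player name to focus on.
QUERY_PATTERNS = [
    (re.compile(r"\bwhat went wrong|mistake", re.I), "mistakes"),
    (re.compile(r"\bmvp\b|best player|carried", re.I), "mvp"),
    (re.compile(r"\b(compare|versus|vs)\b", re.I), "comparison"),
]

def classify_question(question: str, roster: list[str]) -> tuple[str, str | None]:
    """Return (response_type, player_name) for a coaching question."""
    player = next((n for n in roster if n.lower() in question.lower()), None)
    for pattern, response_type in QUERY_PATTERNS:
        if pattern.search(question):
            return response_type, player
    # Default: player-focused if a name matched, otherwise a team overview.
    return ("player_focus" if player else "team_overview"), player

print(classify_question("How did Skuba play?", ["Skuba"]))  # ('player_focus', 'Skuba')
```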
Voice Integration
I added ElevenLabs because coaches multitask and can't always be reading a screen.
The first version was slow: 2-3 seconds for the AI response, then 3-4 seconds for audio generation - around six seconds total.
I switched to ElevenLabs' turbo model, cutting audio generation to under one second. I limited responses to 500 characters with sentence boundary detection to avoid mid-thought cutoffs. Added caching so repeated questions play instantly.
```python
# Cap the text sent to the TTS API at 500 characters, preferring to cut at the
# last sentence boundary if one appears past character 300.
if len(clean_text) > 500:
    truncated = clean_text[:500]
    last_period = truncated.rfind('.')
    if last_period > 300:
        clean_text = truncated[:last_period + 1]
    else:
        clean_text = truncated
```
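The caching is essentially a lookup keyed on the final text - something along these lines (a minimal sketch; the synthesize callable stands in for the actual ElevenLabs call):

```python
import hashlib

# Cache generated audio by a hash of the exact text and voice, so repeated
# questions skip the TTS call entirely.
_audio_cache: dict[str, bytes] = {}

def cache_key(text: str, voice_id: str) -> str:
    return hashlib.sha256(f"{voice_id}:{text}".encode()).hexdigest()

async def get_audio(text: str, voice_id: str, synthesize) -> bytes:
    key = cache_key(text, voice_id)
    if key not in _audio_cache:
        _audio_cache[key] = await synthesize(text, voice_id)
    return _audio_cache[key]
```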
GRID Data Integration
GRID provides official data for professional League of Legends and VALORANT matches - actual game data, not scraped estimates.
The data includes objective timestamps, fight locations, and gold leads at any point. This is what coaches analyze manually.
The API has two parts: GraphQL for tournament metadata, and a File Download API that serves match events as JSONL. Some GraphQL endpoints returned UNAUTHENTICATED errors even with a valid key, so I added a fallback to the File Download API.
Data structure varies between endpoints. Player assists appear at participant.stats.killAssistsGiven or player.assists depending on the source. I wrote a normalization layer to handle this.
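The normalization layer boils down to checking the possible paths for each stat and taking the first one that resolves - roughly like this (the dig and normalize_assists helpers are mine, not the project's API):

```python
def dig(data, path: str):
    """Follow a dotted path like 'participant.stats.killAssistsGiven'."""
    for key in path.split("."):
        if not isinstance(data, dict) or key not in data:
            return None
        data = data[key]
    return data

def normalize_assists(raw_player: dict) -> int:
    # Assists live at different paths depending on which GRID endpoint
    # produced the record; take the first path that exists.
    for path in ("participant.stats.killAssistsGiven", "player.assists"):
        value = dig(raw_player, path)
        if value is not None:
            return int(value)
    return 0
```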
Pattern Detection
The hackathon required insights like "jungler ganks top lane pre-6 with 22% success rate." I built tracking by lane and game phase:
```python
from enum import Enum

# Buckets used to aggregate gank patterns by outcome and game phase.
class GankOutcome(Enum):
    SUCCESS_KILL = "success_kill"
    SUCCESS_FLASH = "success_flash"
    FAILURE_DEATH = "failure_death"
    FAILURE_COUNTER = "failure_counter"

class TimePeriod(Enum):
    PRE_6 = "pre_6"
    MID_GAME = "mid_game"
    LATE_GAME = "late_game"
```
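Turning events into numbers like "22% pre-6 success rate" is then a bucketed count over (lane, phase) pairs. A short sketch reusing the enums above, with a hypothetical GankEvent record and the assumption that both SUCCESS_* outcomes count as successes:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class GankEvent:          # hypothetical record shape
    lane: str             # "top", "mid", "bot"
    period: TimePeriod
    outcome: GankOutcome

def gank_success_rates(events: list[GankEvent]) -> dict[tuple[str, TimePeriod], float]:
    attempts: Counter = Counter()
    successes: Counter = Counter()
    for event in events:
        key = (event.lane, event.period)
        attempts[key] += 1
        if event.outcome in (GankOutcome.SUCCESS_KILL, GankOutcome.SUCCESS_FLASH):
            successes[key] += 1
    return {key: successes[key] / attempts[key] for key in attempts}
```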
I also built isolated death tracking - deaths with no teammates nearby. This identifies patterns like "team loses 85% of games with 5+ isolated deaths in mid-game."
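Detecting an isolated death is a proximity check at the moment of death - a sketch assuming positions arrive as map coordinates (the radius is an illustrative threshold, not the tuned value):

```python
import math

def is_isolated_death(victim_pos: tuple[float, float],
                      teammate_positions: list[tuple[float, float]],
                      radius: float = 1500.0) -> bool:
    """A death counts as isolated if no teammate is within the radius."""
    vx, vy = victim_pos
    return all(math.hypot(tx - vx, ty - vy) > radius
               for tx, ty in teammate_positions)
```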
What-If Analysis
I built a scenario engine for questions like "what if we contested that Baron?" It classifies the situation, finds similar historical cases, and calculates success probability with confidence intervals.
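The probability part is a win rate over the matched historical cases, with an interval that reflects how small those samples usually are. A Wilson score interval is one reasonable way to do that - a generic sketch, not necessarily the exact formula the project uses:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a success proportion."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return (max(0.0, center - margin), min(1.0, center + margin))

# e.g. the team won 9 of 14 similar Baron contests:
print(wilson_interval(9, 14))  # roughly (0.39, 0.84)
```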
VOD Review Agenda
The system generates timestamped review agendas. It identifies objective contests, teamfights with large gold swings, poor rotations, and isolated deaths. Each item includes timestamp, description, optimal play, and priority level.
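Each agenda item is a small record that gets sorted by priority before rendering - the field names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class AgendaItem:
    priority: int                             # 1 = review first
    timestamp_s: int = field(compare=False)   # seconds into the game
    description: str = field(compare=False)   # what happened
    optimal_play: str = field(compare=False)  # what should have happened

def build_agenda(items: list[AgendaItem]) -> list[AgendaItem]:
    # Highest-priority moments first; the stable sort keeps ties in their original order.
    return sorted(items)
```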
Junie Usage
I used Junie in PyCharm throughout the hackathon.
Bedrock debugging: I described the error and Junie identified the Nova API format differences. It pointed me to the inferenceConfig structure I was missing.
GRID API client: I requested an async client with pagination and rate limiting. Junie generated working code with aiohttp, retry logic using tenacity, and error handling for different HTTP status codes. It matched my existing code style.
ElevenLabs integration: Junie built text preprocessing (removing markdown, emojis, URLs), sentence boundary detection, and two API routes for file generation and streaming.
Map bug: My React component broke due to Map from lucide-react shadowing JavaScript's Map. I spent 10 minutes confused. Junie identified the issue and suggested renaming the import to MapIcon.
Pattern detection: Junie generated the enum classes and dataclasses. I wrote the analysis logic.
Problems: the progress output is verbose, my history disappeared when switching to Claude, and there's no way to accept only part of a suggestion. The context awareness still saved time.
React Hydration Errors
The frontend had hydration errors from React rendering different content on server vs client. I fixed this by adding 'use client' to chart components and resolving the Map import collision.
Vercel builds failed initially. I fixed import paths and Turbopack config for path aliases.
Future Development
Zenith currently works with historical match data. Next would be live game integration - coaches asking questions during scrims with immediate analysis.
Pattern detection could expand to draft analysis: comparing team performance across compositions and identifying impactful bans.
For organizations like Cloud9 with multiple teams, this could standardize how coaching insights get documented and shared.
Retrospective
I should have added voice earlier - it changed how the product felt.
I should have written data normalization first. I kept hitting inconsistent data shapes throughout development.
I should have talked to actual coaches. I made assumptions about usefulness that may be wrong.
Stack
- Next.js 16 / React 19 / TypeScript
- Python 3.12 / FastAPI
- AWS Bedrock (Nova, Claude) with Anthropic fallback
- ElevenLabs turbo model
- GRID Esports API
- PyCharm + Junie
Conclusion
The hardest part wasn't the code. It was getting the AI to say something useful instead of generic summaries. Prompt engineering and API debugging took more time than building features.
Voice output made the tool more practical for actual coaching use. GRID's data made real analysis possible. Junie helped with the repetitive parts.
The project works. Whether coaches would actually use it requires testing with real users, which I didn't do. That's the main gap.