NaviSmart in action...
(https://www.loom.com/share/5a2d02541d384313b61bb4ac9219c7c3)
🧭 The Problem No One Talks About
You've been to a large sporting event. You know the drill.
You walk into a stadium holding 50,000 people. The signage is confusing. The crowd is thick. You need to get from the registration desk to Workshop Hall A, but three corridors look identical, every checkpoint has a queue, and your phone's GPS is useless indoors.
You're lost. You're late. You're frustrated.
This isn't a small inconvenience; it's a systemic failure of physical event design. And it's exactly what the PromptWars: Virtual Hackathon challenge asked us to solve:
"Design a solution that improves the physical event experience for attendees at large-scale sporting venues. The system should address challenges such as crowd movement, waiting times, and real-time coordination."
My answer was NaviSmart: a crowd-aware, AI-powered stadium navigation assistant that lives in your browser and speaks plain English.
💡 The Idea: What if the Venue Could Talk to You?
Most navigation apps are built for roads. Google Maps is phenomenal outdoors, but inside a stadium, with gate numbers, workshop halls, food courts, and first-aid stations, it falls flat.
What I wanted to build was something fundamentally different:
A conversational interface layered on top of a live interactive map, where attendees type naturally ("Guide me from Registration to Hall A, avoiding the crowd") and get a real, drawn route with crowd context.
The core insight was combining two things that had never been combined for venues:
Natural language understanding (so anyone can use it, no learning curve)
Live crowd density awareness (so the route isn't just shortest; it's smartest)
That became NaviSmart.
🏗️ Architecture: How It All Fits Together
User types natural language
↓
routeParser.ts (NLP Intent Engine)
↓
Extracts: { from: "Registration Desk", to: "Workshop Hall A" }
↓
Google Maps Directions API (real walking route)
↓
crowdSimulator.ts (wait times + density per location)
↓
formatRouteResponse() → Chat message with steps
↓
StadiumMap draws Polyline route in real time
↓
User sees route on map + step-by-step chat directions
The beauty of this architecture is its modularity. Each piece does one thing:
The parser handles language
The Directions API handles geography
The crowd simulator handles context
The map handles visualization
The chat interface handles communication
No single component is overloaded. Swap any one of them and the rest still works.
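That modular flow can be sketched end to end in a few lines of TypeScript. This is a minimal sketch, not the project's actual code: the stage bodies below are stubs standing in for the real routeParser.ts, crowdSimulator.ts, and Directions API calls, but the data handed between stages mirrors the diagram.

```typescript
// Minimal sketch of the NaviSmart pipeline. Each stage is a stub;
// only the shape of the data flowing between them is the point.

type RouteIntent = { from: string; to: string };
type CrowdInfo = { level: "Low" | "Medium" | "High"; waitMins: number };

// Stage 1: NLP intent engine (the real version has many more patterns)
function parseIntent(text: string): RouteIntent | null {
  const m = text.match(/from (.+?) to (.+)/i);
  return m ? { from: m[1].trim(), to: m[2].trim() } : null;
}

// Stage 2: crowd context (stand-in for crowdSimulator.ts)
function crowdAt(_location: string): CrowdInfo {
  return { level: "Low", waitMins: 1 };
}

// Stage 3: format the chat reply (stand-in for formatRouteResponse)
function formatRouteResponse(intent: RouteIntent, crowd: CrowdInfo): string {
  return `Route: ${intent.from} → ${intent.to} | Crowd at ${intent.to}: ${crowd.level} (~${crowd.waitMins} min wait)`;
}

function handleMessage(text: string): string {
  const intent = parseIntent(text);
  if (!intent) return "Sorry, I couldn't find a start and destination.";
  return formatRouteResponse(intent, crowdAt(intent.to));
}
```

Because each stage is a pure function over plain objects, any one of them can be swapped without touching the others, which is exactly the modularity claim above.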
🛠️ Tech Stack: Every Choice Explained
⚙️ Next.js 14 (App Router) + TypeScript
Next.js with the App Router gives us server components, optimized image loading, and a clean routing model out of the box. TypeScript ensures every function contract is explicit, which is critical when you're building fast and can't afford runtime surprises.
🎨 Tailwind CSS
Dark-themed UI, built in minutes. No custom CSS files. Tailwind's utility classes let you design directly in JSX, perfect for a hackathon where design speed matters.
🗺️ @react-google-maps/api
The best React wrapper for Google Maps. It gives us useJsApiLoader, GoogleMap, Marker, and Polyline components as proper React primitives: no DOM manipulation, no lifecycle hacks.
📍 Google Maps JavaScript API + Directions API
The Maps JS API renders the interactive venue map. The Directions API computes real walking routes between GPS coordinates. Together, they give NaviSmart its geographic intelligence.
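In the app the routes come from the client-side DirectionsService, but the request shape is easiest to see in the Directions API's HTTP form. A minimal sketch (the coordinates and key here are placeholders, not the project's values):

```typescript
// Builds a walking-directions request URL against the Directions API
// web service. NaviSmart itself calls the client-side
// google.maps.DirectionsService, but the parameters are equivalent.

type LatLng = { lat: number; lng: number };

function directionsUrl(origin: LatLng, destination: LatLng, apiKey: string): string {
  const params = new URLSearchParams({
    origin: `${origin.lat},${origin.lng}`,
    destination: `${destination.lat},${destination.lng}`,
    mode: "walking", // walking routes, not driving
    key: apiKey,
  });
  return `https://maps.googleapis.com/maps/api/directions/json?${params}`;
}
```

The response contains an encoded polyline per route, which is what ultimately gets drawn on the map.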
🧠 Custom NLP Intent Parser (routeParser.ts)
No LLM API calls needed: a smart regex + keyword-matching engine that handles the natural-language patterns a user is likely to type:
"Guide me from X to Y"
"How do I reach Y from X"
"Navigate from X to Y"
"Get me to Y"
It maps recognized phrases to canonical location names and returns structured { from, to } objects.
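A condensed sketch of that approach follows. The patterns and the alias table here are illustrative stand-ins, not the full set from routeParser.ts:

```typescript
// Sketch of regex-based intent parsing with canonical location mapping.
// The alias table and pattern list are illustrative, not exhaustive.

const ALIASES: Record<string, string> = {
  "registration": "Registration Desk",
  "hall a": "Workshop Hall A",
  "food court": "Food Court",
};

function canonical(raw: string): string {
  const key = raw.trim().toLowerCase();
  return ALIASES[key] ?? raw.trim();
}

const PATTERNS: RegExp[] = [
  /guide me from (.+?) to (.+)/i,
  /navigate from (.+?) to (.+)/i,
  /how do i reach (.+?) from (.+)/i, // destination comes first in this form
];

function parseRouteIntent(input: string): { from: string; to: string } | null {
  for (const p of PATTERNS) {
    const m = input.match(p);
    if (!m) continue;
    // "reach Y from X" patterns list the destination before the origin
    const reversed = p.source.startsWith("how do i reach");
    const [a, b] = [canonical(m[1]), canonical(m[2])];
    return reversed ? { from: b, to: a } : { from: a, to: b };
  }
  return null;
}
```

Keeping the alias table separate from the patterns means new location names can be added without touching the regexes.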
🚦 Crowd Simulator (crowdSimulator.ts)
A deterministic crowd density model that assigns each venue location a crowd level (Low/Medium/High), estimated wait time in minutes, and a color-coded emoji indicator. In a production system, this would pull from real IoT sensor data or Google Maps Popular Times.
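A deterministic model of that kind can be as simple as a lookup table. The wait times below are assumed values for illustration (the crowd levels match the venue table later in this post):

```typescript
// Sketch of a deterministic crowd model in the spirit of crowdSimulator.ts.
// Levels, wait minutes, and indicators are hardcoded per location; a
// production system would replace this table with live sensor data.

type CrowdLevel = "Low" | "Medium" | "High";
interface CrowdReading { level: CrowdLevel; waitMins: number; indicator: string }

const CROWD_TABLE: Record<string, CrowdReading> = {
  "Registration Desk": { level: "Medium", waitMins: 5,  indicator: "🟡" },
  "Main Gate":         { level: "High",   waitMins: 12, indicator: "🔴" },
  "Food Court":        { level: "High",   waitMins: 10, indicator: "🔴" },
  "First Aid":         { level: "Low",    waitMins: 1,  indicator: "🟢" },
  "Workshop Hall A":   { level: "Low",    waitMins: 1,  indicator: "🟢" },
  "Exit":              { level: "Medium", waitMins: 4,  indicator: "🟡" },
};

function getCrowd(location: string): CrowdReading {
  // Unknown locations default to Low rather than failing the route
  return CROWD_TABLE[location] ?? { level: "Low", waitMins: 0, indicator: "🟢" };
}
```

The default branch matters: an unrecognized location should degrade gracefully instead of breaking navigation.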
🐳 Docker + Google Cloud Run / Vercel
The app is containerized using a multi-stage Dockerfile with Next.js standalone output, keeping the final image lean. Deployed live on Vercel for the hackathon demo.
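A multi-stage Dockerfile for Next.js standalone output typically looks roughly like this (a sketch of the standard pattern, assuming `output: "standalone"` is set in next.config.js; the project's actual file may differ in detail):

```dockerfile
# Stage 1: install dependencies and build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: copy only the standalone server output, keeping the image lean
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["node", "server.js"]
```

The second stage never sees node_modules from the build, which is where most of the size savings come from.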
🧪 Jest Unit Tests
Unit tests cover the core logic: intent parsing, crowd level retrieval, and input sanitization. These aren't afterthoughts; they're what separates a prototype from a production-ready system.
🚀 Feature Walkthrough
🗺️ Interactive Venue Map
The top 60% of the screen is a live Google Map centered on the venue, showing 6 key locations as custom markers:
| Location | Emoji | Crowd Level |
| --- | --- | --- |
| Registration Desk | 📝 | 🟡 Medium |
| Main Gate | 🚪 | 🔴 High |
| Food Court | 🍔 | 🔴 High |
| First Aid | 🏥 | 🟢 Low |
| Workshop Hall A | 🎓 | 🟢 Low |
| Exit | 🚶 | 🟡 Medium |
When a route is computed, a blue Polyline is drawn dynamically on the map, tracing the exact walking path.
💬 AI Chat Interface
The bottom 40% is a clean dark-themed chat window. Users type naturally:
You: Guide me from Registration to Hall A
NaviSmart responds:
🤖 NaviSmart:
🗺️ Route: Registration Desk → Workshop Hall A
⏱️ Est. time: 4 mins
🚦 Crowd at Hall A: 🟢 Low (~1 min wait)
📍 Step-by-step:
- Head north from Registration toward the central concourse
- Turn left at the main corridor junction
- Continue past the media zone
- Workshop Hall A will be on your right
💡 Tip: Registration Desk is currently Medium, so expect a short wait if returning.
The response is informative, actionable, and human. Not robotic output, but a genuine assistant voice.
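A reply like that can be produced by a small formatter. This is a sketch only; the field names and hardcoded step list below are illustrative, while the real formatRouteResponse() derives its steps from the Directions API response:

```typescript
// Sketch of a formatter producing a chat reply like the one above.

interface RouteSummary {
  from: string;
  to: string;
  etaMins: number;
  crowdLevel: string;
  crowdEmoji: string;
  waitMins: number;
  steps: string[];
}

function formatRouteResponse(r: RouteSummary): string {
  const steps = r.steps.map((s) => `- ${s}`).join("\n");
  return [
    `🗺️ Route: ${r.from} → ${r.to}`,
    `⏱️ Est. time: ${r.etaMins} mins`,
    `🚦 Crowd at ${r.to}: ${r.crowdEmoji} ${r.crowdLevel} (~${r.waitMins} min wait)`,
    `📍 Step-by-step:`,
    steps,
  ].join("\n");
}
```

Assembling the reply from an array of lines keeps each section independently editable, which made iterating on the assistant's voice fast.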
🛡️ Security-First Design
A hackathon project that handles user input has real security responsibilities:
XSS Prevention: All user input is sanitized via a regex strip of HTML tags before processing
API Key Safety: The Google Maps API key lives exclusively in .env.local, never committed to git and never hardcoded
.gitignore discipline: node_modules/, .next/, and all .env*.local files are excluded from the repository
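The XSS step above can be sketched as follows. This is a minimal version of a regex tag strip, and the exact pattern in the app may differ; note that for rendered output a vetted sanitizer or output escaping is the more robust defense:

```typescript
// Strip anything that looks like an HTML tag from user text before it
// reaches the parser or the chat feed. A pragmatic hackathon defense.

function sanitizeInput(raw: string): string {
  return raw
    .replace(/<[^>]*>/g, "") // remove <tag ...> and </tag> sequences
    .trim();
}
```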
♿ Accessibility by Default
NaviSmart was built with accessibility as a first-class concern, not a checkbox:
aria-label on every interactive element
aria-live="polite" on the chat message feed (screen reader announcements)
Semantic HTML elements used throughout
High-contrast dark theme with clear visual hierarchy
Keyboard navigation supported for the chat input
⚡ Building with Google Antigravity: Intent-Driven Development
The most unusual part of this project wasn't the technology โ it was how it was built.
PromptWars: Virtual mandates the use of Google Antigravity, an intent-driven development tool. Instead of writing code line by line, you describe what you want in structured natural language prompts, and Antigravity generates the implementation.
Here's what that looked like in practice:
Prompt 1: "Scaffold a Next.js 14 app with TypeScript, Tailwind, @react-google-maps/api, set up .env.local with placeholder API key, configure .gitignore"
Prompt 2: "Build a split-screen layout: top 60% is a Google Map with 6 custom markers and a Polyline component, bottom 40% is a dark-themed chat interface with accessible ARIA labels and semantic HTML"
Prompt 3: "Add Google Directions API integration, crowd simulator with per-location wait times, NLP intent parser with regex matching, and a formatted route response generator"
Prompt 4: "Add Jest unit tests for the parser and crowd simulator, create a multi-stage Dockerfile with Next.js standalone output, write a comprehensive README"
Four prompts. A fully functional, tested, deployed web application.
This is what the future of software development looks like: you architect, the AI implements. The developer's job shifts from syntax to systems thinking.
🌍 Real-World Applications
NaviSmart isn't just a hackathon demo. With modest extension, it becomes genuinely deployable:
🏟️ Sports Stadiums
Real-time crowd routing for NFL, IPL, or Premier League venues. Integrate with turnstile sensor data for live density maps.
🎪 Music Festivals & Concerts
Guide attendees between stages, merchandise stalls, and medical tents. Reduce crush risk at bottleneck corridors.
✈️ Airports & Transit Hubs
Indoor navigation for terminals, gates, customs, and lounges. Layer on flight delay data for proactive rerouting.
🏫 Universities & Campuses
Wayfinding for new students, exam-day crowd management, and accessibility routing for mobility-impaired users.
🏥 Hospitals
Navigate between departments, reduce wait-room overcrowding, direct visitors without staff interruption.
🛍️ Shopping Malls
Promotional routing ("take the scenic path past our featured stores"), parking guidance, event-day crowd management.
🔮 What's Next for NaviSmart
If I were to take this beyond a hackathon prototype:
v2.0 Features:
🎙️ Voice input via the Web Speech API: hands-free navigation
📡 Real IoT crowd data: live sensor integration instead of simulation
🔔 Push notifications: "Your route to Hall A just cleared up!"
🌐 Multi-language support for international venues and global events
🧭 Indoor positioning: Bluetooth beacon integration for precise location where GPS can't reach
📱 PWA support: installable on mobile, works offline with cached venue maps
🤝 Staff coordination mode: a separate interface for event staff to manage crowd flow
🏁 Closing Thoughts
NaviSmart started as a response to a hackathon prompt and became something I genuinely believe in.
The problem it solves is real: people lost, stressed, and stuck in crowds at events they paid to enjoy. The technology to fix it exists. What was missing was the right interface: conversational, ambient, and crowd-aware.
Building this in under 2 hours with Google Antigravity showed me something important: the bottleneck in software development is increasingly not implementation; it's ideation, architecture, and judgment. Those are irreducibly human.
The tools are getting faster. The ideas still need us.