How I went from a broken UI and failing deployments to a fully functional AI-powered stadium navigation system running on Google Cloud.
The Idea
What if a stadium dashboard behaved like Google Maps + Iron Man HUD + AI brain?
Not just static dashboards… but:
Real-time navigation
AI decision engine
Live crowd intelligence
Fully deployed on cloud
That’s exactly what I built.
The System Overview
This is not just a frontend project. It’s a full-stack AI system:
Architecture
User Input → Frontend (React)
→ Backend (Node.js)
→ BigQuery (crowd data)
→ Vertex AI (reasoning)
→ Firebase (logging)
→ Response → UI visualization
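The flow above can be sketched as one async handler. The helper names here (queryCrowd, askGemini, logEvent) are placeholders standing in for the BigQuery, Vertex AI, and Firebase calls, not the project's real code:

```javascript
// Illustrative flow of one request through the stack.
async function handleNavigate(input) {
  const crowd = await queryCrowd(input.destination);  // BigQuery: crowd data
  const plan = await askGemini(input, crowd);         // Vertex AI: reasoning
  await logEvent({ input, plan });                    // Firebase: logging
  return plan;                                        // rendered by the React UI
}

// Stubbed dependencies so the flow is runnable end-to-end
const queryCrowd = async (zone) => ({ zone, density: 0.4 });
const askGemini = async (input, crowd) =>
  ({ route: [input.origin, input.destination], crowd });
const logEvent = async () => {};
```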
Tech Stack
Frontend: React + Vite
Backend: Node.js + Express
AI Layer: Vertex AI (Gemini)
Data Layer: BigQuery
Logging: Firebase
Deployment: Cloud Run
Infra: Docker + Artifact Registry
The UI: Not Just a Dashboard
I rebuilt the UI completely into a spatial AI control hub:
Key Features
Zoomable + pannable SVG map
Stadium zones (Gates, Facilities)
Animated AI route (particle flow)
Heatmap layer (crowd density)
Toggle-based facility layers
AI reasoning panel (typing effect)
The Real Challenge (Not UI… Logic)
At first, the system looked good but failed logically:
“I am at West Gate, where is the medical station?”
→ Returned random output
Root Problem:
No proper intent mapping
No structured routing logic
Fix: Deterministic Routing Engine
I introduced 3 layers:
Intent Normalization
"med station" → "Med Station"
"hungry" → "Food Stall"
Rule-Based Routing
Find nearest valid node
Use graph-based shortest path
Validation Guard
If no location → ask user instead of breaking
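The three layers fit together roughly like this. The aliases, zone names, and stadium graph below are illustrative stand-ins, not the project's real data:

```javascript
// 1) Intent normalization: map free-form phrases to canonical zones
const ALIASES = {
  'med station': 'Med Station',
  'medical': 'Med Station',
  'hungry': 'Food Stall',
  'food': 'Food Stall',
};
const normalizeIntent = (text) => {
  const t = text.toLowerCase();
  for (const [alias, zone] of Object.entries(ALIASES)) {
    if (t.includes(alias)) return zone;
  }
  return null;
};

// 2) Rule-based routing: BFS shortest path on the stadium graph
const GRAPH = {
  'West Gate': ['Concourse A'],
  'Concourse A': ['West Gate', 'Med Station', 'Food Stall'],
  'Med Station': ['Concourse A'],
  'Food Stall': ['Concourse A'],
};
const shortestPath = (start, goal) => {
  const queue = [[start]];
  const seen = new Set([start]);
  while (queue.length) {
    const path = queue.shift();
    const node = path[path.length - 1];
    if (node === goal) return path;
    for (const next of GRAPH[node] || []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null;
};

// 3) Validation guard: ask the user instead of breaking
const route = (userLocation, query) => {
  const target = normalizeIntent(query);
  if (!userLocation || !target) {
    return { ask: 'Which gate are you at?' };
  }
  return { path: shortestPath(userLocation, target) };
};
```

Because routing is deterministic, “I am at West Gate, where is medical station” always resolves to the same path instead of whatever the LLM improvises.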
The REAL Battle: Cloud Run Deployment
This is where things went from “works locally” to “fails miserably.”
Problem #1: Container Not Starting
Error:
“Container failed to start and listen on PORT=8080”
Why?
Cloud Run requires:
A running server
Listening on PORT env variable
Bound to 0.0.0.0
Fix (Backend)
const PORT = process.env.PORT || 8080;
app.listen(PORT, '0.0.0.0', () => {
  console.log(`Server running on ${PORT}`);
});
Problem #2: Frontend Fails on Cloud Run
Frontend = static files
Cloud Run = expects a server
Result → deployment failure
Fix (Frontend Server)
I had to wrap the frontend inside a small server that serves the built static files.
Docker: The Backbone
Backend Dockerfile:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
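For the frontend, a multi-stage Dockerfile matches the Build → Serve → Run shape. The file names and the `server.js` wrapper are assumptions, not the project's exact files:

```dockerfile
# Stage 1: build the Vite frontend
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build          # Vite outputs static files to /app/dist

# Stage 2: serve the build with the small Node wrapper
FROM node:18-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY server.js .           # the wrapper that serves ./dist
EXPOSE 8080
CMD ["node", "server.js"]
```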
Final Result:
Backend running on Cloud Run
Frontend deployed on Cloud Run
AI routing working
UI stable (mostly 😅)
Full Google Cloud integration
Key Learnings
- Cloud Run is simple… but strict
You must follow:
PORT
Server process
Correct container structure
- Frontend ≠ static anymore in Cloud Run
You need:
Build → Serve → Run
- AI ≠ just calling an LLM
Without structure, it fails badly.
- Debugging > Coding
Most of my time went into fixing deployment, not building features.
What I’d Improve Next
Better AI intent detection
Cleaner UI layout (no overlap)
Real-time streaming data
Graph-based routing engine upgrade
Final Thought
This project started as:
“Let’s build a cool UI”
It ended as:
“Let’s build a real production AI system”
Try It Yourself: https://stadium-frontend-986344078772.asia-south1.run.app/
Conclusion
If you’re building AI apps today:
Deployment is NOT optional
Architecture matters more than UI
Cloud knowledge = unfair advantage
Let’s Connect
If you're working on:
AI apps
Cloud deployments
Automation systems
Drop a comment or connect with me.