What happens when you replace rigid, if-then-else rules with large language models (LLMs) in a complex, high-stakes environment? To answer this, we built SkySwarm, a real-time 3D simulation of global air traffic where every flight is an autonomous, reasoning agent.
Instead of following simple waypoints, our planes analyze fuel levels, monitor localized weather systems, and "think" their way through crises. In this post, I'll break down the architecture behind SkySwarm and what we learned by putting Agentic AI in the pilot's seat.
The Premise: Rule-Based vs. Agentic Navigation
In traditional simulations (like our baseline "RULE" mode), logic is deterministic:
- If fuel < 10% and near an airport, then land.
- If the route intersects a storm, then adjust heading by 15 degrees.
This works, but it's fragile. It requires developers to anticipate and code for every possible edge case.
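The RULE-mode checks above boil down to a single deterministic function. Here's a minimal sketch (the `FlightState` fields and thresholds are illustrative, not the project's actual code):

```python
from dataclasses import dataclass

@dataclass
class FlightState:
    fuel_pct: float          # remaining fuel, 0-100
    near_airport: bool       # within landing range of an airport
    route_hits_storm: bool   # current route intersects a storm cell
    heading: float           # degrees, 0-360

def rule_decide(state: FlightState) -> str:
    """Deterministic RULE-mode logic: every edge case must be hand-coded."""
    if state.fuel_pct < 10 and state.near_airport:
        return "LAND"
    if state.route_hits_storm:
        # Fixed 15-degree dodge, regardless of storm size or fuel margin
        state.heading = (state.heading + 15) % 360
        return "ADJUST_HEADING"
    return "PROCEED"
```

The fragility is visible: a storm plus a fuel emergency at the same time falls through to whichever branch happens to come first.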
In "LLM" mode, SkySwarm delegates this logic. We feed the real-time state of the aircraft (location, fuel, destination) and the environment (storms, airport closures) into an Agno agent. This agent queries a local Ollama model (llama3.2) to reason about the best course of action. The result? Emergent behavior. We didn't explicitly tell the agents how to handle a fuel shortage during a storm over an open ocean—they figured it out themselves.
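The LLM-mode handoff looks roughly like this. This is a sketch of the pattern, not SkySwarm's actual code: the prompt wording and field names are assumptions, and the real Agno/Ollama round trip is omitted in favor of the serialization and validation around it:

```python
import json

VALID_ACTIONS = {"PROCEED", "DIVERT", "HOLD"}

def build_agent_prompt(state: dict) -> str:
    # The live aircraft/environment state is serialized straight into the prompt.
    return (
        "You are the pilot agent. Given this state, choose one action "
        "(PROCEED, DIVERT, HOLD) and explain your reasoning.\n"
        + json.dumps(state)
    )

def parse_agent_reply(raw: str) -> dict:
    # LLM output is never trusted blindly: validate before it touches physics.
    try:
        decision = json.loads(raw)
    except json.JSONDecodeError:
        return {"action": "HOLD"}  # safe fallback on unparseable output
    if decision.get("action") not in VALID_ACTIONS:
        return {"action": "HOLD"}  # safe fallback on an invented action
    return decision
```

Defaulting to `HOLD` on malformed output is one reasonable failure mode; a real system might instead fall back to the deterministic RULE logic.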
The Tech Stack: Bridging React and Local LLMs
SkySwarm requires high-performance rendering on the frontend and asynchronous, heavy computation on the backend.
1. The Frontend: React + globe.gl
Rendering thousands of active flights over a 3D Earth requires WebGL. We used globe.gl within a React/Vite environment.
The primary challenge here was state synchronization without blocking the main render thread: if the UI waits for an LLM to generate a response, the globe freezes. We decoupled this by having the frontend act purely as a dumb terminal. It polls a FastAPI backend every 1.5 seconds and receives only the latest coordinates, fuel levels, and current "mode" (RULE or LLM). From that data it interpolates smooth animations, using htmlElementsData for flight markers and dynamic arcsData for routes.
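On the backend side, the poll endpoint only has to serialize the latest tick into that lightweight payload. A sketch of what the snapshot builder might look like (field names are guesses from the description; notably, heavyweight state like agent reasoning is stripped before it goes over the wire):

```python
def snapshot(flights: list[dict]) -> dict:
    """Build the minimal payload the frontend polls every 1.5 s.

    Only coordinates, fuel, and mode are sent; everything else
    (agent thoughts, physics internals) stays server-side.
    """
    return {
        "flights": [
            {
                "id": f["id"],
                "lat": f["lat"],
                "lng": f["lng"],
                "fuel": f["fuel"],
                "mode": f["mode"],  # "RULE" or "LLM"
            }
            for f in flights
        ]
    }
```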
2. The Backend: FastAPI + OpenFlights Data
The simulation engine runs in Python. It loads authentic airport and route data from the openflights dataset to give the simulation real-world grounding.
The backend maintains an internal "tick" loop. Every second, it updates the positions of RULE-based flights instantly. For LLM-based flights, it checks whether the flight is approaching a hazard or a critical fuel level. If so, it dispatches an asynchronous task to the Agno agent to request an updated heading or action.
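The shape of that tick loop can be sketched with asyncio. This is a simplified model, not the project's source: `llm_decide` stands in for the real Agno/Ollama call, and the fuel threshold is an assumption. The key idea is that only flights in trouble pay for an inference call, and those calls run concurrently instead of blocking the loop:

```python
import asyncio

async def llm_decide(flight: dict) -> None:
    # Placeholder for the Agno/Ollama round trip; runs off the tick loop.
    await asyncio.sleep(0)          # stand-in for inference latency
    flight["action"] = "DIVERT"

async def tick(flights: list[dict]) -> None:
    pending = []
    for f in flights:
        if f["mode"] == "RULE":
            f["lat"] += 0.01        # instant deterministic position update
        elif f["fuel"] < 15 or f.get("near_hazard"):
            # Only LLM flights near a hazard or low on fuel trigger
            # an inference call, and the calls run concurrently.
            pending.append(asyncio.create_task(llm_decide(f)))
    if pending:
        await asyncio.gather(*pending)
```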
3. The Brains: Agno + Ollama
Why local LLMs? Latency and cost. If we simulated 50 flights querying GPT-4 every 10 seconds, the API costs would skyrocket, and rate limits would throttle the simulation.
By using Ollama running llama3.2 locally, we achieve rapid, free inference. We wrapped the Ollama interface with Agno, allowing us to define clear personas and output schemas for the agents. The prompt is injected with JSON state data:
```json
{
  "fuel_level": 42.5,
  "distance_to_destination": 1200,
  "hazards": ["Severe Storm at (lat, lon)"],
  "nearest_diversion": "KLAX"
}
```
The agent responds with a structured decision (PROCEED, DIVERT, HOLD), which the backend engine then translates into updated physics vectors.
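The decision-to-physics translation might look like the following sketch (assumed field names; the bearing math is the standard initial great-circle bearing formula, which a DIVERT would plausibly need to point the aircraft at its diversion airport):

```python
import math

def bearing(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Initial great-circle bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def apply_decision(flight: dict, decision: dict, airports: dict) -> None:
    """Translate a structured agent decision into updated physics state."""
    action = decision["action"]
    if action == "HOLD":
        flight["speed"] = 0.0       # loiter in place
    elif action == "DIVERT":
        lat, lon = airports[decision["divert_to"]]
        flight["heading"] = bearing(flight["lat"], flight["lng"], lat, lon)
    # PROCEED leaves the current vector untouched
```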
Injecting Chaos: The True Test of Agents
A simulation is only interesting when things go wrong. We built a "Control Center" UI to inject global constraints:
- Severe Storms: localized zones of high risk where agents must decide if they have the fuel to fly around or if they should risk flying through.
- Airport Shutdowns: Sudden closure of major hubs, forcing agents to recalculate diversions mid-flight.
- Fuel Shortages: Artificial capping of max fuel, forcing aggressive optimization.
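Under the hood, injecting a constraint like a storm amounts to appending a hazard zone and letting the tick loop test flights against it. A minimal sketch, assuming circular storm zones and a haversine distance check (the hazard schema here is hypothetical):

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def inject_storm(hazards: list, lat: float, lon: float, radius_km: float) -> None:
    """What a Control Center button press boils down to."""
    hazards.append({"type": "storm", "lat": lat, "lon": lon, "radius_km": radius_km})

def in_hazard(flight: dict, hazards: list) -> bool:
    """Checked each tick to decide whether a flight needs a new decision."""
    return any(
        haversine_km(flight["lat"], flight["lng"], h["lat"], h["lon"]) <= h["radius_km"]
        for h in hazards
    )
```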
When we hit "Inject Severe Storm," the difference between RULE and LLM agents is stark. RULE agents follow their hardcoded evasion algorithms, creating predictable, identical diversion arcs. LLM agents display variance. Some cautiously divert early. Others mathematically weigh their fuel against the storm's size and push through.
We added a "Live Agent Thoughts" HUD. Clicking on a flight reveals the exact inner monologue generated by the LLM as it makes these life-or-death (simulated) choices.
SkySwarm shows that Agentic AI can handle dynamic, multi-variable environments in real time. While we used planes, this same architecture—a reactive frontend, a stateful backend, and a swappable LLM reasoning layer—can model supply chain logistics, autonomous drone delivery, or even dynamic network packet routing.
The code is meant to be a sandbox. Swap Ollama for OpenAI, add new crisis types, or let the agents communicate with each other to negotiate landing slots. The skies are literally the limit.
GitHub: https://github.com/harishkotra/skyswarm
Watch how this works
