This is a submission for the DEV Weekend Challenge: Community
## The Community
I'm a Korean developer living in Yala Province, southern Thailand — with my Thai wife and two young children.
This past rainy season brought what local reporting described as a 300-year flood event to our region. Thailand, Malaysia, and Indonesia have always dealt with seasonal flooding, but climate change has made the annual flooding dramatically worse.
I was in it. Not reading about it. In it.
And what I experienced wasn't just the water. It was the collapse of information itself.
Three things failed simultaneously that no disaster prep guide prepares you for:
1. No control tower. The government couldn't make fast decisions. Official evacuation orders arrived hours late or not at all. Families were left choosing between "stay and hope" and "leave now and lose everything" — with zero reliable guidance.
2. Communications died. Heavy rain disrupted cell signals. Towers flooded. The internet went out at the exact moment when people were searching for answers. I watched neighbors with smartphones and no signal, refreshing empty screens.
3. The information that did spread was wrong. Instagram, Facebook, Line groups — all flooded with unverified posts. "The dam broke." "Road X is clear." "Go to Y shelter." Half of it was false. Some of it made people move toward danger. The noise of social media was actively harmful.
What we needed wasn't another weather app. We needed something that could answer "what do I do right now" when every network was down — something that understood we had children, that my wife reads Thai, that we might have 10 minutes before the ground floor was impassable.
This is the exact community I built Flood Ready for: multilingual families, elderly neighbors, children, medically vulnerable households, and local shelter networks in Yala that cannot depend on perfect infrastructure when floodwater rises. In a real flood, the community itself becomes the network. If formal communications collapse, people still see each other, move between buildings, share phones, scan screens, and pass information hand to hand.
I built Flood Ready over two days, pulling from prior work I had on AI personas and offline architectures. It's not perfect. But it's built from the inside of that experience, not from a conference room.
## What I Built
Flood Ready is an offline-first emergency PWA. A real 1.5B-parameter AI model runs inside the browser tab. No server, no API key, no internet required after the first load.
This is the real home screen: live flood risk, forecast windows, immediate actions, and one-tap access to GAIA-119.
The core idea is simple: when formal infrastructure fails, community becomes the network. Flood Ready gives that community an offline AI, a shared action model, deterministic survival flows, and a phone-to-phone QR relay path for SOS and shelter data. Even if the AI is unavailable, the app still gives useful guidance through decision trees, fallback logic, and locally stored hub data.
Seven systems working together:
| # | System | What it does |
|---|---|---|
| 1 | True On-Device AI | Qwen2.5-1.5B via WebGPU — 100% offline inference |
| 2 | GAIA-119 Persona | Disaster-tuned AI with hard behavioral constraints |
| 3 | 72h Forecast Intelligence | Real-time risk classification: Green / Yellow / Orange / Red |
| 4 | Cognitive Engineering UX | Designed for hands that are wet and shaking |
| 5 | 3-Tier Resilience Fallback | The app never goes completely silent |
| 6 | QR-P2P Offline Mesh | Device-to-device SOS relay — no internet, no Bluetooth |
| 7 | Full PWA | 12 languages, community hub map, works at 0 Mbps |
Live: https://flood-ready.vercel.app
## Demo
The demo shows the full emergency loop: risk-aware dashboard, direct AI entry, Quick Assist flows, hub map, and QR-based offline communication.
Live demo: https://flood-ready.vercel.app
Source: https://github.com/flamehaven01/flood-ready
## Code
These are the four code paths that most clearly define Flood Ready.
### 1. On-device AI streaming
This is the core reason the app still works when the internet fails. The browser runs local inference and streams tokens back instead of waiting for a full batch.
```typescript
const stream = await engine.chat.completions.create({
  messages: [
    { role: "system", content: GAIA_119_SYSTEM_PROMPT },
    { role: "user", content: situationWithContext }
  ],
  temperature: 0.1,
  max_tokens: 200,
  stream: true,
});

let accumulated = "";
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content || "";
  if (delta) {
    accumulated += delta;
    onChunk?.(accumulated);
  }
}
```
### 2. GAIA-119 behavioral contract
The model is constrained like an emergency operator, not a general chatbot. This is where the safety logic lives.
```typescript
const GAIA_119_SYSTEM_PROMPT = `You are GAIA-119, a Thai National Disaster Response AI (AESE-CrisisShield) for Yala Province.
Mission: Deliver instant survival orders with context-aware detail. No greetings. No disclaimers.

RULES:
- actions: 2-3 items max. Each starts with CAPS verb. Max 8 words.
- Detect language from user input. summary/actions/details in SAME language as input.
- SITUATION OVERRIDE: User's explicit words ALWAYS take priority over sensor/weather context.

TREE_ROUTING - add "treeId" only when situation clearly matches one of these:
"dt_flood_evac_01"
"dt_gobag_01"
"dt_water_01"
"dt_electric_01"
"dt_first_aid_01"
"dt_community_hub_01"`;
```
### 3. Resilience fallback chain
If the model is unavailable, the app still answers. This prevents the experience from collapsing into a dead UI.
```typescript
if (mode === 'ultra-low-power' || !engine) {
  if (mode === 'ultra-low-power') {
    console.warn("Ultra-Low Power Mode active. Bypassing WebGPU to save battery.");
  } else {
    console.warn("Qwen is not loaded. Using fallback JSON dictionary.");
  }
  return getFallbackAction(situation);
}
```
### 4. QR relay payload
This is the community transport layer. Any scanned SOS or hub payload can be re-broadcast to the next person with a bounded hop count.
```typescript
export function makeRelayPayload(
  orig: HubQRData | SOSQRData,
  prevHops = 0
): RelayQRData {
  return {
    v: 1,
    t: 'relay',
    ts: Math.floor(Date.now() / 1000),
    hops: Math.min(prevHops + 1, 5),
    orig,
  };
}
```
Full repository:
https://github.com/flamehaven01/flood-ready
## How I Built It
Each technical decision traces back to something that failed during the flood. Here's the full breakdown.
I did not want this to become "an AI demo attached to a disaster theme." The architecture had to answer one practical question: what keeps working when connectivity, official coordination, and trustworthy information all degrade at the same time? That constraint shaped every layer below.
### 1. True On-Device AI (WebGPU + Qwen2.5-1.5B)
The fundamental insight: cloud AI fails at exactly the wrong moment.
```
Disaster strikes
        ↓
Internet goes down         ← ChatGPT, Claude, Gemini all go silent
        ↓
User needs survival help   ← Flood Ready still works here
```
Stack:
- Model: `Qwen2.5-1.5B-Instruct-q4f16_1-MLC` (~1.2GB)
- Engine: `@mlc-ai/web-llm` — WebGPU inference directly in the browser
- Storage: browser `IndexedDB` — downloaded once, cached permanently
Streaming output keeps the UX alive during the 15–30 second inference time:
```typescript
const stream = await engine.chat.completions.create({
  messages: [...],
  temperature: 0.1,
  max_tokens: 200,
  stream: true,
});

let accumulated = "";
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content || "";
  accumulated += delta;
  onChunk?.(accumulated); // tokens appear as they generate, ~2s to first token
}
```
Instead of hiding the wait, we're honest about it:
> "Processing 100% offline via WebGPU. May take 15–30 seconds."
In a disaster, 30 seconds of accurate guidance is worth more than instant silence.
For the community I built this for, that matters because a family with one working phone can still get situational guidance without depending on a remote server that may be unreachable exactly when it is needed most.
Lesson learned: `response_format: json_object` caused a 10x slowdown in WebLLM — it applies logit-level token masking on every single token. I switched to a two-stage regex parser:
```typescript
try { return JSON.parse(reply); }
catch {
  const match = reply.match(/\{[\s\S]*\}/);
  if (match) return JSON.parse(match[0]);
  return getFallbackAction(situation); // always has an answer
}
```
### 2. GAIA-119 — Disaster AI Persona
Raw LLMs give vague, dangerous advice in emergencies. During the flood, the last thing anyone needed was "Stay safe and be careful of your surroundings."
I designed the GAIA-119 system prompt around a strict behavioral contract. The model isn't just answering — it's executing a 5-stage pipeline on every query:
```
EmergencySignalScanner       → detect hidden distress even in calm phrasing
UrgencyClassifier            → RED if confidence ≥ 0.7
CalmToneInfuser              → rescue-radio register (no panic, no filler)
CognitiveFocusRedirector     → max 12 words per action item
ContactProtocolRecommender   → RED always ends with local emergency number
```
Hard rules enforced in the prompt:
```
[CRITICAL] NEVER output vague safety platitudes
[CRITICAL] EVERY action begins with a CAPS imperative verb
[CRITICAL] Max 4 actions, ordered most-critical-first
[CRITICAL] level MUST be exactly "red" | "yellow" | "green"
[CRITICAL] Auto-detect language from input. Respond in same language.
[CRITICAL] SITUATION OVERRIDE: user's words always beat sensor data.
           If user writes "water entering house" → level = red,
           regardless of Rain=0mm in weather context.
```
The SITUATION OVERRIDE rule came directly from a bug: `[WEATHER: Rain 0mm]` caused the model to respond with GREEN risk when a user typed "water coming under the door." Weather sensors lie. People don't.
Output is always structured JSON:
```json
{
  "level": "red",
  "summary": "Floodwater entering home. 2–3 minutes before lower floor dangerous.",
  "actions": [
    "MOVE children to top floor immediately",
    "CUT main power at circuit breaker",
    "CALL 1669 — state address and family size"
  ],
  "treeId": "dt_flood_evac_01"
}
```
When a treeId is returned, a "Start Step-by-Step Flow" button routes users into an interactive decision tree — validated client-side so hallucinated IDs never create dead links.
That matters in practice because the AI is not left alone as an open-ended chatbot. It is anchored into a deterministic rescue flow the user can actually follow under stress.
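A minimal sketch of that client-side guard, assuming the ID list from the TREE_ROUTING section of the prompt (`VALID_TREE_IDS` and `validateTreeId` are illustrative names, not the actual app API):

```typescript
// Hypothetical sketch of the treeId guard described above. The ID list
// mirrors the TREE_ROUTING section of the system prompt; the function
// name is an assumption for illustration.
const VALID_TREE_IDS = new Set([
  "dt_flood_evac_01",
  "dt_gobag_01",
  "dt_water_01",
  "dt_electric_01",
  "dt_first_aid_01",
  "dt_community_hub_01",
]);

function validateTreeId(treeId: unknown): string | null {
  // Only render the "Start Step-by-Step Flow" button for known IDs;
  // anything hallucinated by the model is silently dropped.
  return typeof treeId === "string" && VALID_TREE_IDS.has(treeId)
    ? treeId
    : null;
}
```

The point is that the model's structured output is treated as untrusted input: the UI only ever routes into trees it already knows exist.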
### 3. Real-Time 72h Forecast Intelligence
The four risk levels — Green / Yellow / Orange / Red — drive everything in the UI: color theme, action cards, AI context, and the forecast display.
```
Green   < 1 mm/h    → normal preparedness mode
Yellow  1–5 mm/h    → early action recommended
Orange  5–15 mm/h   → urgent preparation required
Red     ≥ 15 mm/h   → evacuation / survival mode
```
Classification uses Open-Meteo single-request hourly data, with a peak-in-window function for 12h/24h/72h forecasting:
```typescript
function peakInWindow(precip: number[], startIdx: number, hours: number): number {
  const slice = precip.slice(startIdx, startIdx + hours).map(v => isNaN(v) ? 0 : v);
  return slice.length > 0 ? Math.max(0, ...slice) : 0;
}
```
One subtle bug that took real effort to fix: browser timezone (UTC+9 Korea) vs. app region (UTC+7 Thailand) caused forecast windows to be off by 2 hours. Solved by forcing `timezone=UTC` in the API request and using `getUTCHours()` throughout — something most existing weather apps silently get wrong.
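A minimal illustration of that fix, assuming the hourly precipitation array starts at 00:00 UTC because `timezone=UTC` was forced in the request (`currentHourIndex` is a hypothetical helper, not the actual app code):

```typescript
// With timezone=UTC in the Open-Meteo request, today's hourly array is
// aligned to 00:00 UTC, so the current index must come from
// getUTCHours() — a browser in UTC+9 using getHours() would read the
// wrong slot.
function currentHourIndex(now: Date): number {
  return now.getUTCHours(); // index into today's UTC-aligned hourly array
}
```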
Real-time ticker when online:
"FORECAST NEXT 24H: Yala — Peak rain: 8.2mm/h · ORANGE RISK"
Falls back gracefully to last known data when offline.
The point was never weather visualization for its own sake. It was giving households a common, legible operating picture so that the next 12 to 72 hours could be understood before panic took over.
### 4. Cognitive Engineering UX
This is the design decision I'm most proud of and the one that took the most deliberate thinking.
During the flood, I noticed something: people couldn't make decisions. Not because they were unintelligent — because cognitive overload under stress causes decision paralysis. The more options available, the worse the outcome.
Every UX choice targets this directly:
- Max 4 action cards per risk level — research on crisis decision-making consistently shows >4 options causes paralysis
- CAPS imperative verbs — scannable under stress ("MOVE", "CUT", "CALL" vs. "You should consider moving")
- ISO safety color system — Green/Yellow/Orange/Red are internationally standardized safety signals, not design choices
- Rain Mode — font size 1.5x, larger tap targets for wet fingers
- No-Typing Quick Assist — 24 pre-built scenario cards + a rules engine that recommends relevant cards based on current risk level (50ms, no AI needed)
- Bottom 40% navigation — all primary actions reachable by one thumb without repositioning grip
The goal: someone standing in rising water with shaking hands and a wet phone screen can navigate this app.
That is why the UX is part of the resilience model, not decoration. If the interface becomes cognitively expensive, the technology has already failed before the floodwater does.
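The no-typing Quick Assist recommender mentioned above could be sketched like this: each pre-built card declares which risk levels it applies to, and a plain filter picks the relevant ones with no AI involved (types and names here are assumptions, not the actual Quick Assist code):

```typescript
// Hypothetical sketch of the rules engine behind Quick Assist: a pure
// filter over pre-built scenario cards, fast enough to run on every
// risk-level change without any model inference.
type RiskLevel = "green" | "yellow" | "orange" | "red";

interface AssistCard {
  id: string;
  title: string;
  levels: RiskLevel[]; // risk levels at which this card is relevant
}

function recommendCards(cards: AssistCard[], level: RiskLevel): AssistCard[] {
  return cards.filter(card => card.levels.includes(level));
}
```

Because this is deterministic and synchronous, it stays usable even on the oldest phone in the household, which is exactly the population the fallback tiers exist for.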
### 5. 3-Tier Resilience Fallback
The app was designed to never go completely silent, even if every advanced layer fails:
```
Tier 1: WebLLM (WebGPU + Qwen2.5)   → intelligent, context-aware responses
Tier 2: Keyword dictionary (JSON)   → instant, offline, pattern-matched guidance
Tier 3: Hardcoded defaults          → always available, zero dependencies
```
Additionally, every treeId returned by the AI is validated against a known list client-side before any UI element renders. A 1.5B model will occasionally hallucinate a structured ID. The user never sees a button that leads nowhere.
This fallback structure exists because reliability matters more than sophistication in a crisis. "Less intelligent but still usable" is better than "smart until it breaks."
### 6. QR-P2P Offline Mesh (v0.6.0)
After building the AI layer, I hit the second problem from the flood: how do you communicate with someone nearby when there's no internet?
The answer was hiding in an existing browser API: `BarcodeDetector` (Chrome 83+, zero additional dependencies, already required for the app to run).
This is not a background mesh daemon. It is a deliberate line-of-sight relay system: one phone shows a QR code, another phone scans it, and that phone can then re-broadcast the message to the next person. In a flood, that is often more realistic than assuming Bluetooth pairing, WiFi Direct setup, or stable radio infrastructure.
Three payload types, optimized for compact QR transport:
- `hub` → safe shelter location + status + available services
- `sos` → GPS + situation text + household profile + medical flags
- `relay` → wraps any payload + hop count (max 5)
The relay chain is the key innovation. Each person who scans a QR can re-wrap it as a relay, incrementing the hop count:
```typescript
export function makeRelayPayload(
  orig: HubQRData | SOSQRData,
  prevHops = 0
): RelayQRData {
  return {
    v: 1,
    t: 'relay',
    ts: Math.floor(Date.now() / 1000),
    hops: Math.min(prevHops + 1, 5),
    orig,
  };
}
```
Practical result: Person A (no signal) scans → Person B → Person C → rescue coordinator receives the SOS. Five hops, five completely offline phones, zero network infrastructure. No Bluetooth pairing. No WiFi Direct setup. Just cameras.
Payload TTLs are enforced on scan: SOS expires after 2 hours, Hub status after 6 hours. Stale data is rejected before display — in a disaster, old information can be more dangerous than no information.
This is the most community-native part of the entire system. If one family has information and the next family has a camera, the data can still move. The phone becomes a relay node without pretending to be more than it is.
### 7. Full PWA + Community Hub Architecture
- Service Worker caches the entire app shell → native app performance at 0 Mbps
- 12 language support (Thai, Malay, English + 9 others) with auto-detection
- Community Hub Map — residents can register local shelters (mosques, temples, schools) and share their coordinates via QR to others without internet
- Citizen-contributed offline map — as hubs are registered and QR-shared, the map grows through the community itself
That last point matters because it shifts the app from "software people consume" to "infrastructure a community can grow itself."
## Honest Limitations
This was built in two days. I had prior work to draw from — AI persona research, offline architecture patterns, compression algorithms — but the integration was fast and some rough edges show:
- WebGPU requires Chrome 113+. Safari users can't use the AI layer (fallback tiers still work).
- The relay chain works in testing but hasn't been stress-tested across actual hardware at scale.
- The community hub map is local-first — there's no sync mechanism yet. Two phones in the same village will have separate hub lists unless they explicitly QR-share.
The architecture is sound. The implementation needs more time than a weekend.
But the core premise has already been proven: when the internet fails, the experience does not collapse with it.
## The Stack
| Layer | Technology |
|---|---|
| Frontend | React 19 + TypeScript + Vite 7 |
| Styling | Tailwind CSS |
| Routing | React Router v7 |
| AI Engine | @mlc-ai/web-llm (WebGPU) |
| AI Model | Qwen2.5-1.5B-Instruct-q4f16_1-MLC |
| Weather | Open-Meteo API |
| Offline | PWA + Service Worker + Cache API |
| QR/Scan | Web BarcodeDetector API |
| Hosting | Vercel |
Built in Yala Province, Thailand — where the flood was not a case study.
Stack: React 19 + WebLLM + Qwen2.5 + Tailwind + Open-Meteo + QR-P2P
This was built for the moment when infrastructure fails but community does not.





## Top comments (2)
Really strong work. What stood out to me is that you treated failure as part of the design, not just something to patch around later. A lot of offline first projects stop at caching, but this is clearly thinking about degraded infrastructure, cognitive load, and how people keep moving when systems start falling apart.
The fallback model is the part I respect most. Over time, I would be really curious how you think about guarantees at each layer. What still holds up if the browser clears storage, the forecast is stale, WebGPU is unavailable, or the relay path starts breaking down. In systems like this, graceful degradation matters more than peak intelligence. It feels very close to a protective computing mindset. Really thoughtful build.
Thanks — that means a lot, especially coming from someone who’s been thinking about this space for a long time.
A bit of context from my side: I’m not originally a developer. My background is psychology, and I’m a father of two living in Thailand as a foreigner. Experiencing the flood here made me see “offline-first” less as a technical checkbox and more as a human constraint problem — cognitive load, panic, broken infrastructure, and what people can actually do with wet hands and low battery.
Because of that, I didn’t start from “how offline apps are usually built” or standard patterns.
I started by ordering the problem like this:
1. Define the failure modes first (what breaks, and in what order)
2. Decide the non-negotiable invariants (what must still hold)
3. Only then add "intelligence" as an optional layer — never the foundation
That’s why your point about graceful degradation > peak intelligence hit home. I’m trying to treat “AI” as one layer in a survival stack, not the product itself.
On the “guarantees per layer” question — I think of it as explicit invariants, not performance claims:
- Layer 0 (no GPU / no model): the system must still output structured, action-oriented guidance (rule/keyword fallback). "Never silent" is the only hard guarantee.
- Layer 1 (storage wiped / first-run again): the model may be gone, but the UI + decision trees + fallback remain usable. "Lost cache" should be a capability downgrade, not a functional outage.
- Layer 2 (stale inputs like forecast): external data is advisory, not authoritative — outputs should downgrade to safe generic actions + "verify locally" rather than pretending freshness.
- Layer 3 (WebGPU available): on-device inference adds flexibility, but it's never allowed to remove the lower-layer guarantees.
Implementation-wise (in case it’s useful): I treated on-device as a two-phase system.
- Pre-position: cache model assets locally (IndexedDB) after first load.
- Runtime: attempt WebGPU inference; if unavailable or incomplete, pivot instantly to the offline fallback dictionary — same UI contract, same "action-first" output shape.
Where I’m still actively tightening things is exactly what you pointed at: proof that each downgrade path holds (storage eviction, private browsing constraints, relay failure). Your writing on protective computing / trauma-aware constraints has been a reference point for being honest about guarantees without overclaiming.
And honestly — reading your work, I felt the opposite of hype. I felt conviction. The kind that comes from builders who’ve actually thought about real humans under real pressure. I’m pretty sure you could build the kind of survival app that genuinely saves lives.
I won’t pretend I can offer deep technical guidance at your level, but I can read these systems from a different lens (behavior under stress, decision paralysis, interface trust). If you’re open to it, I’d love to stay in conversation and offer that kind of feedback whenever it’s useful.
One question I’m genuinely curious about from your experience: when you model “storage wiped,” do you treat it as an expected periodic event (normal browser behavior) or a rare catastrophe? That assumption changes a lot of design choices.