
Oleksandr Pliekhov

I extended an open-source BLE mesh messenger with on-device AI for disaster response

Originally published on HackerNoon

GitHub: https://github.com/AleksPlekhov/ai-mesh-emergency-communication-platform

The Problem Nobody Talks About

Every major disaster follows the same pattern. The earthquake hits. The hurricane makes landfall. The wildfire jumps the firebreak. And within minutes - sometimes seconds - the cell towers go down.

Not because they're damaged. Because 50,000 people are all trying to call their families at the same time.

First responders pull out their phones. Nothing. Rescue coordinators open their apps. No signal. Meanwhile, somewhere in the rubble, someone is typing "trapped under debris, 3rd floor, need help" into a mesh messaging app - and that message is sitting in a queue behind a hundred check-ins, status updates, and "is everyone okay?" messages from people who are fine.

This is the information overload problem. And until now, nobody had solved it for offline mesh networks.

Mesh Networking Works. Triage Doesn't.

Apps like Meshtastic and BitChat have proven that device-to-device mesh communication works without internet infrastructure. Your phone talks to my phone via Bluetooth. My phone relays to the next phone. Messages hop across a network of humans - no towers, no servers, no connectivity required.

The problem is that these systems treat every message equally. A cardiac arrest report and a "checking in, all good" message arrive with identical priority. In a network serving hundreds of nodes during an active disaster, a rescue coordinator might receive dozens of messages per minute with zero mechanism to distinguish life-threatening signals from ambient traffic.

The result is cognitive overload at precisely the moment when a human being needs to make fast, accurate decisions.

We needed AI. But we had a constraint: the internet was gone.

On-Device AI: The Only Option That Actually Works

Cloud NLP is off the table the moment infrastructure fails. You cannot call an API when there is no network. You cannot run inference on a remote server when that server is unreachable.

What you can do is run a small, fast, quantized neural model directly on a commodity Android smartphone - no connectivity required, no cloud dependency, no specialized hardware.

This is the core insight behind ResQMesh AI Platform: an open-source Android platform that integrates on-device machine learning directly into the BLE mesh communication stack.

The entire AI layer runs locally. Always. Even when every tower in a 50-mile radius is dark.
[Image: ResQMesh AI chat interface showing emergency message classification]

How the Classification Pipeline Works

The heart of the system is a two-stage composite classifier I call M1.

Stage 1: Keyword Classifier

Before any neural inference runs, every incoming message passes through a deterministic rule engine covering approximately 90 lexical patterns derived from FEMA Incident Command System guidelines. Patterns like "trapped inside," "can't breathe," "water rising," "cardiac arrest."

If a CRITICAL or HIGH pattern matches, the message is tagged and surfaced immediately - no neural model invocation required. This guarantees sub-millisecond classification for the most time-sensitive messages, and it works on any Android device regardless of hardware capability.
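The shape of that rule engine can be sketched as follows. This is an illustrative reduction, not the actual implementation: the real engine covers roughly 90 FEMA-derived patterns, and the names and tier assignments shown here are assumptions.

```kotlin
// Illustrative sketch of the Stage 1 rule engine. The pattern set and
// priority assignments are assumptions; the real engine covers ~90
// lexical patterns derived from FEMA ICS guidelines.
enum class Priority { CRITICAL, HIGH, NORMAL, LOW }

object KeywordClassifier {
    // A handful of example patterns standing in for the full table.
    private val patterns = mapOf(
        "cardiac arrest" to Priority.CRITICAL,
        "can't breathe" to Priority.CRITICAL,
        "trapped inside" to Priority.CRITICAL,
        "water rising" to Priority.HIGH,
    )

    /** Returns a priority on a match, or null to fall through to the neural stage. */
    fun classify(message: String): Priority? {
        val text = message.lowercase()
        return patterns.entries.firstOrNull { text.contains(it.key) }?.value
    }
}
```

Because the match is a simple substring scan over a fixed table, there is no model to load and no warm-up cost, which is what makes the sub-millisecond guarantee possible on any hardware.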

Stage 2: Neural Classifier

Messages that don't match any keyword pattern go to a quantized Conv1D TensorFlow Lite model. The architecture is deliberately lightweight: an embedding layer, a single Conv1D layer with global max pooling, a dense layer with dropout, and a 10-class softmax output.

The model classifies into nine emergency categories - Medical, Collapse, Fire, Flood, Security, Weather, Missing Person, Infrastructure, Resource Request - plus a Non-Emergency class. Each category maps to one of four priority tiers: CRITICAL, HIGH, NORMAL, or LOW.
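The category-to-tier mapping might look like the sketch below. The post names the ten classes and the four tiers but does not publish the exact mapping, so the tier assigned to each category here is an assumption for illustration only.

```kotlin
// Assumed category-to-tier mapping; the real assignments may differ.
enum class Tier { CRITICAL, HIGH, NORMAL, LOW }

enum class Category(val tier: Tier) {
    MEDICAL(Tier.CRITICAL),
    COLLAPSE(Tier.CRITICAL),
    FIRE(Tier.CRITICAL),
    FLOOD(Tier.HIGH),
    SECURITY(Tier.HIGH),
    MISSING_PERSON(Tier.HIGH),
    WEATHER(Tier.NORMAL),
    INFRASTRUCTURE(Tier.NORMAL),
    RESOURCE_REQUEST(Tier.NORMAL),
    NON_EMERGENCY(Tier.LOW),       // the tenth softmax class
}
```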

Total model size: 420KB. Inference time on mid-range hardware: single-digit milliseconds.

A confidence threshold of τ = 0.25 filters out uncertain predictions. Messages shorter than three tokens skip neural classification entirely.
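The gating around the neural stage can be sketched like this. `runModel` is a placeholder for the TensorFlow Lite interpreter call, not the real API; only the token-length gate and the τ = 0.25 threshold come from the post.

```kotlin
// Gating logic around the neural stage. `runModel` stands in for the
// actual TensorFlow Lite interpreter invocation.
val CONFIDENCE_THRESHOLD = 0.25f
val MIN_TOKENS = 3

fun neuralClassify(message: String, runModel: (String) -> FloatArray): Int? {
    // Messages shorter than three tokens skip neural classification entirely.
    val tokens = message.trim().split(Regex("\\s+")).filter { it.isNotEmpty() }
    if (tokens.size < MIN_TOKENS) return null

    val probs = runModel(message)  // 10-class softmax output
    var best = 0
    for (i in probs.indices) if (probs[i] > probs[best]) best = i

    // Predictions below the confidence threshold are treated as "no decision".
    return if (probs[best] >= CONFIDENCE_THRESHOLD) best else null
}
```

Returning null in both the short-message and low-confidence cases keeps the neural stage honest: an uncertain model never outranks the deterministic keyword stage.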

[Image: Emergency Feed showing messages sorted by priority categories]

The Part That Surprised Us: The BLE Priority Queue

Classification is only half the problem. The other half is radio-layer delivery.

I replaced the FIFO broadcaster with a java.util.PriorityQueue protected by a coroutine Mutex, driven by a CONFLATED-channel signal actor. Every outgoing packet gets a priority field assigned synchronously by the classifier before enqueuing. CRITICAL packets preempt everything else at the radio layer.

The benchmark result: a 1,001× reduction in delivery latency for CRITICAL messages under simulated mesh load - 1,000 packets queued at 100 µs intervals.

One thousand and one times faster delivery for the message that says someone is dying.
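Stripped of the concurrency machinery, the ordering behind that preemption looks like the sketch below. The real broadcaster guards the queue with a coroutine Mutex and wakes the radio loop through a CONFLATED channel; this version shows only the comparator, with a sequence number assumed here to keep FIFO order within a tier.

```kotlin
import java.util.PriorityQueue

// Simplified sketch of the priority broadcaster's ordering.
// 0 = CRITICAL; higher values are lower priority. The seq field is an
// assumption to preserve FIFO order among packets of equal priority.
data class Packet(val priority: Int, val seq: Long, val payload: String)

class PriorityBroadcaster {
    private var nextSeq = 0L
    private val queue = PriorityQueue<Packet>(
        compareBy({ it.priority }, { it.seq })  // lowest priority value first, then FIFO
    )

    @Synchronized
    fun enqueue(priority: Int, payload: String) {
        queue.add(Packet(priority, nextSeq++, payload))
    }

    /** The radio loop drains from here: CRITICAL packets always surface first. */
    @Synchronized
    fun nextForRadio(): Packet? = queue.poll()
}
```

The key property: a CRITICAL packet enqueued behind a thousand routine packets is still the next one handed to the radio, which is exactly the effect the benchmark measures.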

What Else the Platform Does

M2 - Offline Speech Recognition via Vosk Android. Voice-to-text that works completely offline, powered by a deep neural acoustic model.

M3 - FEMA ICS-213 Report Generator. Automatically aggregates classified messages into ICS-213-compatible situation reports - rendered as HTML, printable via Android's PrintManager.

M4 - Energy Optimizer. A battery-aware relay policy engine that adapts packet forwarding probability based on device power state. M4 enforces the guarantee that CRITICAL traffic always relays - a property pinned down by 21 unit tests.
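A policy in the spirit of M4 can be sketched as below. The battery thresholds and forwarding probabilities are illustrative assumptions; the only part taken from the post is the invariant that CRITICAL traffic relays unconditionally.

```kotlin
import kotlin.random.Random

// Sketch of a battery-aware relay policy. The thresholds and probabilities
// are assumptions; the CRITICAL invariant matches the stated guarantee.
fun relayProbability(batteryPercent: Int, isCritical: Boolean): Double = when {
    isCritical -> 1.0            // CRITICAL traffic always relays, regardless of power state
    batteryPercent > 50 -> 1.0   // healthy battery: relay everything
    batteryPercent > 20 -> 0.5   // degrade gracefully as power drops
    else -> 0.1                  // near depletion: conserve, relay only occasionally
}

fun shouldRelay(batteryPercent: Int, isCritical: Boolean, rng: Random = Random.Default): Boolean =
    rng.nextDouble() < relayProbability(batteryPercent, isCritical)
```

Probabilistic forwarding like this is a common mesh technique: each dropped relay saves a radio transmission, and the network's redundancy absorbs the loss for non-critical traffic.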

[Image: ICS-213 FEMA report generated from mesh network messages]

The Architecture Decision That Made Everything Possible

All four modules live in a self-contained Gradle library module - :resqmesh-ai - with zero dependency on the application module. The entire AI layer can be unit tested independently of the Android UI.
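In Gradle terms, that split might look like the fragment below - a sketch assuming standard Gradle Kotlin DSL conventions, with only the :resqmesh-ai module name taken from the post.

```kotlin
// settings.gradle.kts - the AI layer lives in its own library module
include(":app", ":resqmesh-ai")

// app/build.gradle.kts - the app depends on the library, never the reverse
dependencies {
    implementation(project(":resqmesh-ai"))
}
```

Keeping the dependency arrow one-directional is what lets the classifier, report generator, and energy policy be unit tested on the JVM without spinning up the Android UI.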

Why This Matters Beyond Disaster Response

The pattern I've built - lightweight on-device neural classification combined with deterministic rule-based fallback, running over an encrypted peer-to-peer mesh, with radio-layer priority preemption - is generalizable.

Anywhere connectivity is unreliable or absent. Rural areas. Underground facilities. Ships at sea.

A 420KB quantized Conv1D model running on a mid-range Android phone can classify emergency messages faster than a human can read them. You don't need a data center. You don't need an API key. You don't need internet.

Open Source and What's Next

ResQMesh AI Platform is released under GPL-3.0:
https://github.com/AleksPlekhov/ai-mesh-emergency-communication-platform

Planned next steps include federated learning for in-field model improvement, a real-time situational awareness map on offline OpenStreetMap tiles, adaptive BLE/Wi-Fi Direct switching, and bridging to FEMA IPAWS when connectivity is restored.

If you're working on offline communication, emergency response systems, or on-device ML for constrained environments - open an issue or start a discussion on GitHub.
