A driver leaves Nagoya at six in the morning. Clear skies. By seven-fifteen she's on a mountain pass east of Toyota City, and the sky turns white. Unexpected snow. Visibility drops to ten meters. The road surface changes from dry asphalt to black ice in the span of three curves.
She reaches for her navigation screen. GPS died in the tunnel two kilometers back. Her phone has no cell signal — the base station on the ridge can't cut through the storm. Google Maps shows a loading spinner. Apple CarPlay says "Route Unavailable."
What should a navigation system do in this moment?
Most navigation architectures have a simple answer: nothing. They assume connectivity. They assume GPS. They assume the cloud is always there. And when those assumptions fail — in a tunnel, in a snowstorm, in a rural dead zone — the driver is on her own at the exact moment she needs help most.
I spent the last several months building an architecture that takes the opposite approach. SNGNav is an open-source Flutter navigation system for embedded Linux that makes a deliberate bet: offline-first, deterministic, privacy-by-design. No AI. No cloud dependency. Eleven packages on pub.dev, 1,005 tests, 91% line coverage.
This article explains the architecture and three arguments behind it: why deterministic beats AI for regulatory positioning, why consent-by-default beats bolt-on privacy, and why the moments that matter most are the moments when everything else fails.
The Architecture
SNGNav is structured around a concept I call Five Guardians — five independent subsystems, each protecting the driver against a different failure mode:
| Guardian | Protects Against |
|---|---|
| Dead reckoning | GPS loss (tunnel, canyon, interference) |
| Offline tiles | Network failure (rural, congestion) |
| Local routing | Cloud unavailability (no server reachable) |
| Kalman filter | Sensor degradation (cold, old hardware) |
| Config system | Target variation (different deployments) |
No single component's failure abandons the driver. If GPS dies, the Kalman filter continues estimating position from speed and heading. If the network drops, offline MBTiles render the map. If the routing server is unreachable, a cached route persists. Five failure modes. Five guardians. The system degrades gracefully instead of collapsing.
That is the architectural difference I care about most. A lot of navigation demos are really feature demos: look, a pretty map; look, a route line; look, a weather overlay. SNGNav is not organized around features. It is organized around failure boundaries. Every major package exists because something important can disappear at the wrong time: signal, network, backend reachability, sensor quality, or deployment assumptions. The architecture starts from the question, "What breaks first in real driving conditions?" and only then moves to API design.
At a high level, the stack looks like this:
Provider / Engine Layer
  Valhalla | OSRM | Mock routing | MBTiles | TTS backend
                    |
                    v
Application State Layer
  RoutingBloc -> NavigationBloc -> VoiceGuidanceBloc
       |                  |
       v                  v
    MapBloc         SafetyOverlay
                    |
                    v
Presentation Layer
  Flutter widgets, route progress, map camera, alerts, controls
The important point is that the UI is not talking directly to infrastructure. The UI never calls Valhalla. It never reads MBTiles. It never decides whether speech comes from a Flutter plugin or Linux Speech Dispatcher. Those concerns stay behind interfaces and BLoCs. That separation is what lets the system keep working when one concrete implementation changes.
Eleven Packages, One Architecture
The codebase is a Flutter monorepo with eleven extracted packages, all published to pub.dev at version 0.3.0:
| Package | What it does |
|---|---|
| kalman_dr | 4D Extended Kalman Filter for dead reckoning when GPS fails |
| routing_engine | Abstract routing interface — swap Valhalla, OSRM, or mock without changing app code |
| routing_bloc | BLoC state machine for route lifecycle (idle → loading → active → error) |
| driving_weather | Weather condition model: precipitation type, intensity, visibility, ice risk |
| driving_conditions | Road surface classification: six states (dry → black ice), grip factors, Monte Carlo simulation |
| driving_consent | Privacy consent lifecycle: per-purpose, per-jurisdiction, deny-by-default |
| fleet_hazard | Crowd-sourced hazard scoring with Haversine clustering and temporal decay |
| navigation_safety | Guardian-based safety boundary enforcement with BLoC integration |
| map_viewport_bloc | Map camera state machine: follow, free-look, overview modes |
| offline_tiles | MBTiles-based offline tile management with multi-tier coverage |
| voice_guidance | Engine-agnostic TTS for turn announcements, hazard warnings, and deviation alerts |
Seven of these are pure Dart — no Flutter dependency. You can use kalman_dr in a Raspberry Pi CLI tool, driving_weather in a server-side fleet manager, or driving_consent in a backend service. The remaining four provide Flutter BLoC integration while exposing _core libraries for pure Dart reuse.
That split matters more than it first appears. One of the traps in automotive Flutter work is to let everything drift upward into widget code because Flutter makes UI composition easy. SNGNav does the opposite. Domain logic gets pushed downward into packages that can be tested without a UI and, in many cases, without Flutter at all. That is why the repo can carry 1,005 tests at 91% line coverage without turning every test into a widget harness.
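Two of those package one-liners hide real algorithms, so it is worth making one concrete. The fleet_hazard row mentions Haversine clustering with temporal decay; here is a language-agnostic sketch of that idea in Python (the function names, the 150 m clustering radius, and the six-hour half-life are illustrative assumptions, not fleet_hazard's actual API):

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def decayed_weight(age_hours, half_life_hours=6.0):
    """A report's influence halves every half_life_hours."""
    return 0.5 ** (age_hours / half_life_hours)

def hazard_score(reports, center_lat, center_lon, radius_m=150.0):
    """Sum decayed weights of reports clustered within radius_m of a center.

    reports: iterable of (lat, lon, age_hours) tuples.
    """
    return sum(
        decayed_weight(age)
        for lat, lon, age in reports
        if haversine_m(lat, lon, center_lat, center_lon) <= radius_m
    )
```

With this shape, three fresh reports near the same curve outweigh one stale report, and a report on the other side of the prefecture contributes nothing to the score.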
The current evidence base is strong enough to talk about this as an architecture, not just a prototype:
- 11 published packages
- 6,904 lines of library code
- 122 source files
- 1,005 automated tests
- 91% line coverage
- 0 analyzer issues at the release gate
Those numbers do not prove the architecture is correct. But they do prove it has shape. They prove the boundaries are stable enough to extract, version, publish, and test repeatedly.
How They Compose
The example app demonstrates four-BLoC composition in a single MultiBlocProvider:
MultiBlocProvider(
providers: [
BlocProvider(
create: (_) => MapBloc()
..add(const MapInitialized(center: _origin, zoom: 9.8)),
),
BlocProvider(
create: (_) => RoutingBloc(engine: _HybridRoutingEngine())
..add(const RoutingEngineCheckRequested()),
),
BlocProvider(create: (_) => NavigationBloc()),
BlocProvider(
create: (context) => VoiceGuidanceBloc(
ttsEngine: _ttsEngine,
navigationStateStream: context.read<NavigationBloc>().stream,
config: VoiceGuidanceConfig(
enabled: _voiceGuidanceEnabled,
languageTag: _voiceLanguageTag, // defaults to 'en-US'
),
),
),
],
child: const ExampleHomePage(),
)
Each BLoC owns one domain. RoutingBloc manages the route lifecycle. NavigationBloc tracks progress through maneuvers and arrival state. VoiceGuidanceBloc subscribes to NavigationBloc's state stream and turns those transitions into speech. MapBloc manages the camera and layer visibility. No BLoC reaches into another's private internals. They communicate through streams and events, which is exactly what you want in a system that needs to be observable, replayable, and testable.
That composition is doing more work than it seems. RoutingBloc doesn't know whether the route came from local Valhalla, public OSRM, or the deterministic mock engine. VoiceGuidanceBloc doesn't know whether speech is coming from a mobile plugin, Linux spd-say, or a no-audio fallback used in tests. MapBloc doesn't care whether tiles came from disk or network. Each layer receives a contract, not a concrete dependency. That is what keeps the app from becoming a knot of platform-specific conditionals.
The routing engine itself is a fallback chain: try Valhalla first, fall back to OSRM, fall back to a mock engine for offline demos. The tile pipeline has the same philosophy: try local MBTiles first, fall back to network tiles only when needed. Voice guidance now follows the same pattern: choose a platform-safe engine, expose diagnostics, and keep the rest of the app unaware of the implementation detail. The pattern repeats because the system is trying to solve the same problem over and over: continue serving the driver when the preferred path disappears.
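That repeated pattern is simple enough to sketch in a few lines. A Python approximation of the fallback chain (the exception and function names are illustrative; SNGNav defines its own Dart RoutingEngine interface):

```python
class RouteUnavailable(Exception):
    """Raised when an engine cannot produce a route."""

def first_working_engine(engines, origin, destination):
    """Try each (name, engine) pair in preference order.

    Returns the first successful (name, route); degrades one tier at a
    time instead of failing the driver on the first error.
    """
    for name, engine in engines:
        try:
            return name, engine(origin, destination)
        except RouteUnavailable:
            continue  # fall through to the next engine in the chain
    raise RouteUnavailable("all engines exhausted")
```

The consumer gets a route and, at most, a diagnostic label saying which tier answered; nothing upstream has to change when the preferred engine disappears.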
This is the core architectural claim of SNGNav: Flutter is not just the UI toolkit here. It is the shell around a layered, failure-aware navigation system. The packages exist so an edge developer can take only the pieces they need, replace the pieces they don't trust, and still preserve the overall shape. That is why the package count matters. It is not eleven for the sake of eleven. It is eleven because the boundaries are real.
The consumer doesn't know which engine answered. That's the point.
Argument 1: The Regulatory Bet
The EU AI Act enters enforcement in August 2026. Articles 9 through 15 impose specific obligations on high-risk AI systems: conformity assessment, technical documentation, human oversight, accuracy monitoring, and robustness testing. Transportation is explicitly listed as a high-risk domain.
Navigation systems that use AI — predictive routing, neural scene interpretation, ML-based traffic estimation — will need to demonstrate compliance. The cost isn't trivial: conformity assessment, ongoing monitoring documentation, and the legal exposure if a driver relies on an AI prediction that turns out to be wrong.
That matters strategically even before a lawyer gets involved. If you are an OEM, a Tier 1 supplier, or an embedded platform team, every "smart" feature now comes with a second question: what regulatory class does this put us in? The industry spent years treating AI as a product differentiator. The EU AI Act turns part of that differentiation into an ongoing compliance burden. Once a feature moves from deterministic logic into model-driven inference, you do not just inherit its capabilities. You inherit its documentation requirements, traceability requirements, testing obligations, and review surface.
SNGNav makes a different bet. It doesn't use AI at all.
That is not because AI is useless. It is because this particular problem rewards predictability more than cleverness. If the system is advising a driver in snow, fog, or partial sensor loss, I would rather be able to explain every branch than celebrate that a model produced a plausible answer. In this architecture, the interesting engineering question is not "How do we infer more?" It is "How do we remain legible under stress?"
Road surface classification is a deterministic decision tree. Given a weather condition — temperature, precipitation type, intensity, ice risk — the system returns one of six states:
enum RoadSurfaceState {
dry(gripFactor: 1.0),
wet(gripFactor: 0.7),
slush(gripFactor: 0.5),
compactedSnow(gripFactor: 0.3),
blackIce(gripFactor: 0.15),
standingWater(gripFactor: 0.6);
final double gripFactor;
const RoadSurfaceState({required this.gripFactor});
static RoadSurfaceState fromCondition(WeatherCondition condition) {
if (condition.iceRisk) return blackIce;
final temp = condition.temperatureCelsius;
if (condition.precipType == PrecipitationType.none) {
return temp <= -3 ? blackIce : dry;
}
return switch (condition.precipType) {
PrecipitationType.rain when temp <= 0 => blackIce,
PrecipitationType.rain when condition.intensity ==
PrecipitationIntensity.heavy && temp > 3 => standingWater,
PrecipitationType.rain => wet,
PrecipitationType.snow when temp > 2 => slush,
PrecipitationType.snow when temp < -2 &&
(condition.intensity == PrecipitationIntensity.moderate ||
condition.intensity == PrecipitationIntensity.heavy) =>
compactedSnow,
PrecipitationType.snow => slush,
PrecipitationType.sleet => slush,
PrecipitationType.hail when condition.intensity ==
PrecipitationIntensity.heavy => standingWater,
PrecipitationType.hail => wet,
_ => dry,
};
}
}
No neural network. No training data. No probabilistic inference. A developer can read this function in two minutes and predict exactly what it will return for any input. An auditor can verify it in an afternoon.
That sounds almost too simple until you compare it to the alternative. A model-based road classifier would need a training corpus, labeling rules, evaluation metrics, drift monitoring, and an answer for edge cases that do not resemble the training distribution. What happens when freezing rain is underrepresented in the dataset? What happens when a regional climate pattern shifts? What happens when the model is technically accurate in aggregate but wrong on the one mountain pass the driver is actually climbing? Those are valid research questions. They are expensive product questions.
The deterministic approach changes the economics. If iceRisk is true, return blackIce. If there is heavy rain above a threshold temperature, return standingWater. If snowfall is cold and sustained, return compactedSnow. These are not hidden weights. They are declared assumptions. You can argue with the assumptions, tune them, or replace them entirely, but you do not have to reverse-engineer them from a model output.
That is the regulatory bet in one sentence: make the logic explicit enough that compliance collapses into normal software engineering.
Now, to be precise, a deterministic architecture does not exempt you from safety work. You still need testing. You still need documentation. You still need to be honest about system limits. But it changes the category of the problem. Instead of proving that a high-risk AI system is controlled, you are proving that ordinary software behaves as specified. That is a dramatically narrower claim.
There is also a product-side benefit. Deterministic systems are easier to explain internally. A platform lead can walk a reviewer through RoadSurfaceState.fromCondition() branch by branch. A safety engineer can reason about it. A customer engineer can map it to a field condition. That shared legibility lowers friction across teams that usually do not trust one another's abstractions.
This isn't a limitation — it's a positioning choice. For OEMs calculating AI liability costs under the EU AI Act, a deterministic system that's provably correct for its domain can be worth more than an AI system that needs continuous compliance monitoring. In a consumer mobile app, "smart" may win the marketing slide. In embedded navigation, "auditable" may be the stronger product.
That is why SNGNav's answer to the AI wave is not to compete on AI at all. It is to occupy the opposite corner: offline-first, explainable, and cheap to reason about under regulation. If the law makes opaque systems more expensive to ship, then simple systems become more strategically attractive. I think that asymmetry is real, and this architecture is built around it.
Argument 2: Privacy by Architecture
Most navigation systems bolt privacy on after the fact. The architecture collects everything, then a consent dialog asks the driver to agree to terms no one reads. If the driver declines, features break.
SNGNav inverts this. The consent model follows Jidoka (自働化) — the Toyota principle where the machine stops itself when something is wrong. In SNGNav, "something wrong" means "consent hasn't been explicitly granted."
abstract class ConsentService {
Future<ConsentRecord> getConsent(ConsentPurpose purpose);
Future<List<ConsentRecord>> getAllConsents();
Future<ConsentRecord> grant(
ConsentPurpose purpose,
Jurisdiction jurisdiction,
);
Future<ConsentRecord> revoke(ConsentPurpose purpose);
}
enum ConsentStatus { granted, denied, unknown }
class ConsentRecord extends Equatable {
final ConsentPurpose purpose;
final ConsentStatus status;
final Jurisdiction jurisdiction;
final DateTime updatedAt;
bool get isEffectivelyGranted => status == ConsentStatus.granted;
}
Three design choices make this architecturally strong:
1. **Per-purpose consent.** The driver grants fleet location sharing without granting diagnostics. Each `ConsentPurpose` has its own state. No blanket "I agree" toggle.
2. **UNKNOWN = DENIED.** When the system starts with no consent records, every `isEffectivelyGranted` check returns `false`. Data doesn't flow until the driver explicitly says yes. The machine stops itself — Jidoka.
3. **Jurisdiction-aware.** Each `ConsentRecord` carries a `Jurisdiction` (GDPR, CCPA, APPI). The architecture is designed for GDPR, the strictest standard; a design that survives EDPB-level scrutiny goes a long way toward satisfying APPI and CCPA as well.
The ConsentService is abstract — swap InMemoryConsentService for a SQLite implementation without touching any consumer code. The pattern is identical to RoutingEngine: interface first, implementation second.
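As an illustration of those semantics, here is a minimal Python sketch of the same deny-by-default gate (names are illustrative; the real package is the Dart ConsentService shown above):

```python
from enum import Enum

class ConsentStatus(Enum):
    GRANTED = "granted"
    DENIED = "denied"
    UNKNOWN = "unknown"

class InMemoryConsent:
    """Per-purpose consent store; anything unrecorded behaves as denied."""

    def __init__(self):
        self._records = {}

    def grant(self, purpose):
        self._records[purpose] = ConsentStatus.GRANTED

    def revoke(self, purpose):
        self._records[purpose] = ConsentStatus.DENIED

    def is_effectively_granted(self, purpose):
        # UNKNOWN collapses to the same operational answer as DENIED: stop.
        return self._records.get(purpose, ConsentStatus.UNKNOWN) is ConsentStatus.GRANTED

def send_telemetry(consent, purpose, payload, transmit):
    """Data leaves the device only through this gate."""
    if not consent.is_effectively_granted(purpose):
        return False  # Jidoka: the machine stops itself
    transmit(payload)
    return True
```

Note that the startup state needs no special-casing: a purpose with no record is `UNKNOWN`, and `UNKNOWN` already gates to `False`.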
This is where most privacy writing gets weak. It focuses on policy language instead of execution semantics. "We respect your privacy" is not an architectural property. "No data leaves the device unless isEffectivelyGranted is true" is an architectural property. One is copy. The other is a gate.
That distinction matters because navigation data is unusually intimate. A fleet location stream is not just telemetry. It is movement history, work pattern, home pattern, and potentially behavior pattern. Weather telemetry is easier to rationalize, but even that can become location-adjacent when tied to repeated reports from a specific vehicle. Diagnostics data can reveal maintenance state, hardware problems, and usage cadence. These are not abstract privacy categories. They are directly monetizable data surfaces.
So SNGNav takes the stricter route: do not model consent as a checkbox attached to a product flow. Model it as a storage-backed gate attached to each purpose.
That is why ConsentPurpose exists. fleetLocation, weatherTelemetry, and diagnostics are separate lanes, not one umbrella approval. A driver can allow hazard contribution without allowing diagnostics export. A fleet operator can build a feature on top of one purpose without smuggling in another. The architecture makes the separation explicit.
Jurisdiction matters for the same reason. The code carries GDPR, CCPA, and APPI as first-class concepts because privacy is not only about whether the user said yes. It is also about which legal regime the system claims to satisfy. The design principle in the package is simple: design for GDPR, then deploy everywhere else from that stricter baseline. That is a much healthier default than building to the weakest common denominator and hoping policy text closes the gap.
The most important line in the whole package is still the smallest one:
bool get isEffectivelyGranted => status == ConsentStatus.granted;
That line does two things. First, it collapses unknown and denied into the same operational result: stop. Second, it removes ambiguity from startup state. There is no grace period. There is no "collect now, ask later." There is no temporary data flow justified by good intentions. If consent has never been recorded, the answer is functionally no.
That is Jidoka translated into software architecture. The machine stops itself and waits for the human.
There is also a practical engineering advantage here. Because ConsentService is abstract, storage is replaceable without changing consumers. The demo can use an in-memory implementation. A real embedded deployment can use SQLite. A fleet-integrated version could persist encrypted records in a more opinionated local store. None of those choices alter the business rule. The call site still asks the same question: is consent effectively granted?
Why does architecture matter more than contracts? Because contracts can be violated in code. Architecture can't. If isEffectivelyGranted returns false, the data pipeline physically stops. No amount of business logic can override it without changing the consent abstraction itself. That's auditable.
And auditable is the keyword. A regulator, customer, or internal reviewer can inspect this design and answer concrete questions. What happens before first consent? Denied. What happens after revocation? Denied. Can purpose A imply purpose B? No. Can storage be swapped without changing policy semantics? Yes. That is a much more robust privacy story than any modal dialog could ever be.
Argument 3: When Everything Else Fails
Back to the mountain pass. GPS is gone. Network is gone. What does SNGNav do?
The kalman_dr package implements a 4D Extended Kalman Filter. The state vector is [latitude, longitude, speed, heading]. When GPS is available, the filter fuses prediction with measurement, producing a smoothed estimate. When GPS is lost — in a tunnel, in a canyon, in a blizzard — the filter predicts only.
final kf = KalmanFilter();
// GPS fix arrives:
kf.update(
lat: 35.17, lon: 136.88,
speed: 11.0, heading: 90.0,
accuracy: 5.0, timestamp: DateTime.now(),
);
// GPS lost — predict forward:
final predicted = kf.predict(const Duration(seconds: 1));
// predicted.accuracy grows over time (honest uncertainty).
The key insight is in the last comment: accuracy grows over time. The covariance matrix tracks uncertainty. After 10 seconds without GPS, the accuracy radius is noticeably larger. After 60 seconds, it is larger still. The system is not hiding the degradation. It is modeling it.
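The mechanism behind that growth is standard Kalman bookkeeping: every predict step adds process noise to the covariance, and nothing shrinks it until a measurement arrives. A stripped-down 1D constant-velocity sketch in Python (pedagogical only, not the 4D kalman_dr filter):

```python
import math

class DeadReckon1D:
    """Minimal 1D constant-velocity filter: state = [position, speed]."""

    def __init__(self, q=0.5):
        self.x = [0.0, 10.0]                # position (m), speed (m/s)
        self.p = [[1.0, 0.0], [0.0, 1.0]]   # covariance matrix P
        self.q = q                          # process noise added per step

    def predict(self, dt=1.0):
        """Predict-only step: no measurement, so uncertainty can only grow."""
        # State transition x' = F x with F = [[1, dt], [0, 1]]
        self.x = [self.x[0] + self.x[1] * dt, self.x[1]]
        (p00, p01), (p10, p11) = self.p
        # Covariance update P' = F P F^T + Q (here Q = q * I)
        self.p = [
            [p00 + dt * (p01 + p10) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0], math.sqrt(self.p[0][0])  # estimate and 1-sigma radius
```

Run ten predict-only steps and the 1-sigma radius grows every step; a single measurement update would shrink it again. The growing radius is the filter telling the truth.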
That is what I mean by honest uncertainty. The system doesn't pretend to know where the driver is. It says: "I estimate you're here, and here's how confident I am." That distinction matters for ASIL-QM classification. This is advisory information, not vehicle control, and the software has to communicate that difference through behavior, not slogans.
This is also where a lot of navigation products fail philosophically. They treat loss of certainty as a UI problem. Hide the error. Smooth the dot. Keep the animation looking confident. SNGNav takes the opposite approach. If certainty is dropping, the architecture should say so explicitly. A growing covariance matrix is not a bug to conceal. It is the system telling the truth about its own limits.
Meanwhile, the map stays visible. offline_tiles renders from a local MBTiles archive — an SQLite database of pre-downloaded tiles. Its runtime resolution order is explicit:
RAM cache -> MBTiles -> lower-zoom fallback -> online -> placeholder
That ordering matters. It means the runtime does not care which coverage tier originally produced the tile. It cares about serving the best local tile available for the requested coordinate and zoom. If the exact tile is not there, the system can still step down gracefully before it ever touches the network. If the network is gone, the first three layers still work. The driver sees a map, sees her estimated position, sees the route she was following.
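The resolution chain is easy to state precisely. A Python sketch of the policy (the tier order follows the list above; the dict-based stores and function names are illustrative stand-ins for the real caches):

```python
def resolve_tile(z, x, y, ram_cache, mbtiles, fetch_online=None):
    """Serve the best tile: RAM -> MBTiles -> lower zoom -> online -> placeholder."""
    if (z, x, y) in ram_cache:
        return "ram", ram_cache[(z, x, y)]
    if (z, x, y) in mbtiles:
        return "mbtiles", mbtiles[(z, x, y)]
    # Step down zoom levels: each parent tile covers this one at coarser detail.
    pz, px, py = z, x, y
    while pz > 0:
        pz, px, py = pz - 1, px // 2, py // 2
        if (pz, px, py) in mbtiles:
            return "lower-zoom", mbtiles[(pz, px, py)]
    if fetch_online is not None:
        try:
            return "online", fetch_online(z, x, y)
        except OSError:
            pass  # network gone; fall through to placeholder
    return "placeholder", b""
```

The first three tiers never touch the network, which is exactly why the map survives the storm.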
The routing server might be unreachable, but the route was already calculated and cached in the RoutingBloc state. The NavigationBloc continues tracking progress along the cached route shape. The driver still sees her next turn. That is a subtle but important point: routing and navigation are not the same thing. Losing the ability to compute a fresh route is bad. Losing the ability to continue along the current route is worse. The architecture separates those concerns so the second does not automatically fail with the first.
This is where the pieces start to reinforce one another. The filter keeps estimating. The tile stack keeps rendering. The route state keeps the current path alive. The voice layer can still announce the next maneuver. None of those pieces is magical alone. Together, they create continuity under failure.
Local Valhalla deployment makes this even stronger. Based on benchmarking, a local Valhalla instance processes Nagoya-region routes in 0.05 seconds versus 1.17 seconds for public servers, a more than 20× improvement. On embedded hardware with a preloaded regional graph, routing does not need the internet at all. That turns "offline mode" from a degraded backup into a first-class operating mode.
And that is really the thesis of this section: the moments that matter most are not the happy-path moments. They are the tunnel, the snowstorm, the rural dead zone, the overloaded network, the cheap GNSS antenna, the half-broken sensor stack. If your architecture only shines when everything upstream is healthy, it is not really a driver-assisting architecture. It is a demo.
SNGNav is built around the opposite assumption. Something important will fail. The job is to fail one layer at a time, keep the driver oriented, and never bluff certainty you no longer have.
The Co-Driver Speaks
The newest addition to the architecture is voice_guidance — an engine-agnostic TTS package that turns maneuvers and hazard warnings into speech.
The design follows the same abstraction pattern as every other SNGNav package:
abstract class TtsEngine {
Future<bool> isAvailable();
Future<void> setLanguage(String languageTag);
Future<void> setVolume(double volume);
Future<void> speak(String text);
Future<void> stop();
Future<void> dispose();
}
That interface is deliberately small. It does not try to be a general speech platform. It exists to answer one narrow question well: how does a navigation system ask some audio backend to speak, stop, configure language, and release resources without learning any of that backend's details?
That turns out to matter a lot in embedded Linux. On mobile, you can often get away with assuming the platform speech layer exists. On Linux, you cannot. In the current implementation, the package chooses a default backend per platform. On supported Flutter platforms it can use FlutterTtsEngine. On Linux it can use a real spd-say-backed engine through Speech Dispatcher. In test or headless environments it can fall back to NoOpTtsEngine. The rest of the app does not care which one was selected.
That is not just convenience. It is architectural hygiene. Voice guidance is one of the easiest places for platform conditionals to leak upward into the UI layer. Once that happens, every demo app, test harness, and deployment target grows its own special-case audio logic. The TtsEngine boundary stops that spread.
The ManeuverSpeechFormatter handles bilingual output in Japanese and English. The English maneuver-type fallback looks like this:
return switch (normalizedType) {
'left' => 'Turn left.',
'right' => 'Turn right.',
'arrive' => 'You will arrive at your destination.',
'depart' => 'Start driving.',
_ => 'Proceed to the next maneuver.',
};
If the route already provides a literal instruction string, the formatter uses it. If not, it falls back to maneuver-type patterns like left, right, arrive, and depart. That makes the speech layer flexible in two directions at once: it can respect route-engine wording when available, but it can also generate stable, localizable phrases when the route source only supplies structural data.
VoiceGuidanceBloc is where the speech layer connects to navigation state. It subscribes to the NavigationBloc stream, watches for maneuver index changes, arrival transitions, route deviation, and hazard alerts, and turns those state transitions into announcements. Hazard announcements interrupt maneuver speech because safety alerts outrank convenience instructions. Arrival speaks once on transition. Re-route speaks once on deviation. This is not a chatbot. It is a rule-driven co-driver.
That distinction is important. A lot of voice interfaces aim for personality. This one aims for timing and priority. The best turn instruction is not charming. It is short, clear, and spoken at the right moment. The best hazard warning is not expressive. It is interruptive in exactly the cases that justify interruption.
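The interruption rule (hazards outrank maneuvers, and each event speaks at most once) can be captured by a small arbiter. This is a Python sketch of the rule, not VoiceGuidanceBloc's actual code; the priority values and class name are illustrative:

```python
PRIORITY = {"hazard": 2, "maneuver": 1, "info": 0}

class SpeechArbiter:
    """Decides whether a new announcement may interrupt the current one."""

    def __init__(self, speak):
        self._speak = speak      # backend callable, e.g. a TTS engine's speak()
        self._current = None     # (kind, text) currently being spoken
        self._spoken = set()     # dedupe: each exact event announces once

    def announce(self, kind, text):
        if (kind, text) in self._spoken:
            return False         # already announced this exact event
        if self._current is not None:
            cur_kind, _ = self._current
            if PRIORITY[kind] <= PRIORITY[cur_kind]:
                return False     # equal or lower priority does not interrupt
        self._current = (kind, text)
        self._spoken.add((kind, text))
        self._speak(text)
        return True

    def finished(self):
        self._current = None     # backend reports speech completed
```

A maneuver instruction can be cut off by a black-ice warning, but a second maneuver instruction cannot cut off the warning, and neither repeats itself.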
The demo app makes that behavior visible. Voice guidance can be muted, unmuted, tested directly from the diagnostics panel, and observed through the latest spoken message in the UI. On Linux, the example can route speech through spd-say; in CI, it can run completely silent while still exercising the BLoC and formatter. That is another benefit of the abstraction: the behavior remains testable even when actual audio output is absent.
The driver controls this with one toggle: mute or unmute. That matters philosophically as much as technically. SNGNav is not trying to become the driver's authority. It is trying to become a calm secondary channel that reduces glance load when the visual surface is already busy.
The whole feature stays within an ASIL-QM advisory boundary: display and audio, no vehicle control, no actuator path, no attempt to make the driver's decision for them. The co-driver speaks; the driver drives. That separation is why voice guidance belongs in this architecture at all. It extends continuity under stress without pretending to own the vehicle.
Honest Limitations
SNGNav is not a production navigation product. It is an architecture with real code, published packages, strong test coverage, and a clear point of view. That distinction matters. If I blur it, the article becomes marketing, and marketing is the fastest way to make technical work untrustworthy.
So here are the limits plainly.
No 3D visualization. The current UI is a 2D flutter_map display. I think there is a credible 3D upgrade path later, especially if public rendering APIs mature in the direction I expect, but that is not what exists today. Today it is a 2D navigation architecture.
Linux desktop first, not deployed embedded hardware. The system runs on standard Linux with Flutter 3.11.0. There is no finished ARM deployment story yet, no polished Raspberry Pi image, and no hardware qualification narrative. The package boundaries are meant to make that step easier, but easier is not the same thing as done.
Routing still depends on a real engine. The architecture abstracts routing cleanly, and the example can fall back to a mock engine for demos, but real turn-by-turn guidance still needs Valhalla or OSRM somewhere. Local deployment is viable and benchmarked; zero-routing infrastructure is not the claim.
The maintainer count is one. Eleven packages and 1,005 tests do not change the bus factor. A healthy architecture can still be a fragile project if too much context lives in one person's head.
The hard scenario is still mostly simulated. Unexpected snow is the design anchor, but most of the evidence so far is software evidence: simulated weather, simulated GPS loss, deterministic route scenarios, CI coverage, benchmark numbers. That is useful. It is not the same as field validation on a dashboard in winter.
I do not list these limitations as a hedge. I list them because open-source credibility depends on saying exactly what the work is and exactly what it is not. If the architecture is good, it can survive that honesty.
Try It
SNGNav is BSD-3-Clause. Everything is open.
git clone https://github.com/aki1770-del/SNGNav.git
cd SNGNav
flutter run -d linux
Or install individual packages:
dependencies:
kalman_dr: ^0.3.0
routing_engine: ^0.3.0
voice_guidance: ^0.3.0
driving_consent: ^0.3.0
# ... any combination of 11 packages
The numbers as of v0.6.0 are straightforward: 11 packages on pub.dev, 1,005 tests, 91% line coverage, 6,904 lines of library code, and zero analyzer warnings at the release gate. Every package has a README with install instructions and usage examples.
Architecture documentation: ARCHITECTURE.md. Safety boundaries: SAFETY.md. Contributing: CONTRIBUTING.md.
If you are an edge developer working on Flutter for embedded Linux, my hope is not that you copy this repo wholesale. My hope is that you steal the boundaries. Take the routing abstraction. Take the consent gate. Take the dead-reckoning model. Take the idea that uncertainty should be shown honestly and that offline should be treated as a first-class mode, not a fallback apology.
The driver on the mountain pass does not care whether the architecture is elegant. She cares whether the map is still there, whether the next maneuver is still understandable, and whether the system is truthful when certainty falls apart.
If this project helps another developer build for that moment, then it has already justified itself.
