Picture this: Three warehouse workers standing less than 400 feet apart, each holding a smartphone thousands of times more powerful than the computers that landed astronauts on the moon. They need to share inventory updates with each other to coordinate shipments. But the WiFi access point just went down, and there's no cellular signal inside the metal building. They can't share a single piece of data.
In 1969, NASA transmitted voice and telemetry data across 238,900 miles of space between Earth and astronauts on the lunar surface. The communications equipment was primitive by today's standards—the entire Apollo Guidance Computer had less processing power than a modern calculator. Yet we solved that problem.
So why, in 2026, can't three people standing in the same building share data between devices that are literally millions of times more capable?
The answer isn't technological capability—it's architecture. We built cloud-dependent systems that require internet connectivity for everything. When that connectivity fails, our applications stop working, even though the devices themselves are perfectly functional.
After 30 years of building software systems, I've learned that every complex problem is solvable. But here's the catch: every solution introduces new complexity. The offline data synchronization problem isn't a single challenge—it's a cascade of interconnected problems where each solution reveals the next layer of difficulty.
This is the story of that cascade, and what it takes to truly solve it.
The Solution Cascade: How Each Fix Creates New Problems
Problem #1: The Cloud Dependency Trap
Modern mobile applications are built on a simple assumption: the cloud is always reachable. When a user updates data, it goes to a server. When another user needs that data, they fetch it from the server. The server is the single source of truth, the arbiter of conflicts, the coordinator of everything.
This architecture works beautifully—until it doesn't.
Airlines ground flights when connectivity fails. Healthcare workers can't access patient records in basements or rural clinics. Construction managers lose hours of productivity when they move between sites. Retail workers can't process transactions when the network goes down.
The problem isn't just complete outages. It's the "janky" networks—connections that drop every few minutes, bandwidth that fluctuates wildly, or latency that makes applications feel broken even when technically connected.
The Solution: Hardware First... Then Reality Sets In
The obvious answer is to throw hardware at the problem: deploy local servers at each site, add redundant network infrastructure, install cellular backup connections. You're essentially replicating your cloud infrastructure on-premises.
But this "solution" creates its own cascade of problems:
- Capital costs: Servers, networking equipment, and redundant systems for every location
- Operational burden: Each site needs monitoring, maintenance, updates, and eventual hardware replacement
- Expertise requirement: Local IT staff or expensive managed services contracts
- Single points of failure: The local server becomes the new bottleneck—if it fails, you're back to square one
- Synchronization complexity: Now you need to sync between local servers and the cloud, managing conflicts at a new architectural layer
You've just traded one dependency for another, more expensive one.
The Better Solution: Offline-First Databases
Instead of fighting the hardware battle, move the database to where your users already are: on their devices. Let users read and write data locally with zero latency, regardless of network conditions. Sync changes peer-to-peer or through the cloud when connectivity is available.
This isn't just about adding a local cache—it's about architectural inversion. The device becomes the source of truth, not a mirror of the cloud. Your application works the same whether you're connected or not.
The New Problem: Sync Conflicts
But now you've introduced a new fundamental challenge: what happens when two devices edit the same data while offline, then try to sync?
Consider our warehouse scenario. Worker A marks a pallet as "shipped" on their device. Worker B, unable to reach the server, marks the same pallet as "damaged" on their device. Two hours later, when WiFi comes back online, which change wins?
Traditional database replication uses locks and transactions to prevent this—but locks require coordination, and coordination requires connectivity. You can't lock what you can't reach.
Problem #2: The Conflict Resolution Challenge
This isn't a theoretical problem. I've seen production systems lose data, duplicate orders, and corrupt inventory counts because they didn't properly handle offline conflicts. The naive solution—"last write wins"—silently loses data. And prompting users to manually resolve conflicts creates terrible user experiences. As developers, we all know how painful it is to merge a complex conflict in git. Now imagine how an average end user feels facing the same problem—and imagine the interface a UI designer would have to build to make that merge usable. Sigh. It's a bad experience all around.
The Solution: CRDTs (Conflict-Free Replicated Data Types)
CRDTs are mathematically sound data structures designed for distributed systems. They have a beautiful property: replicas can be updated independently and merged automatically, with guaranteed convergence to the same state.
The key is that CRDTs use deterministic merge rules based on data type semantics and causal ordering, not arbitrary timestamps. A counter CRDT knows how to merge increments. A set CRDT knows how to merge additions and removals.
Typically, developers layer business logic on top of CRDTs—perhaps an LWW register with vector clocks, or a workflow state machine that respects causal dependencies. The CRDT guarantees the underlying data merges correctly; your application logic defines what "correct" means for your business case.
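To make that concrete, here's a minimal sketch of an LWW register in TypeScript. The shapes are illustrative only—not any particular library's API—and the timestamp-plus-device-ID stamp is a stand-in for a real causal clock:

// Minimal last-write-wins register sketch (illustrative, not a library API).
// Each write carries a logical timestamp plus a device ID tiebreaker, so
// every replica deterministically picks the same winner on merge.
type Stamp = { time: number; deviceId: string }

interface LWWRegister<T> {
  value: T
  stamp: Stamp
}

// True if a causally dominates b: later time wins; device ID breaks ties.
function dominates(a: Stamp, b: Stamp): boolean {
  return a.time > b.time || (a.time === b.time && a.deviceId > b.deviceId)
}

// Merge is commutative, associative, and idempotent—the CRDT properties
// that guarantee replicas converge regardless of merge order.
function merge<T>(a: LWWRegister<T>, b: LWWRegister<T>): LWWRegister<T> {
  return dominates(a.stamp, b.stamp) ? a : b
}

// Two offline edits to the same pallet: both replicas converge on "damaged".
const workerA: LWWRegister<string> = { value: "shipped", stamp: { time: 101, deviceId: "a" } }
const workerB: LWWRegister<string> = { value: "damaged", stamp: { time: 102, deviceId: "b" } }
console.log(merge(workerA, workerB).value) // "damaged" on both devices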
The New Problem: Getting Data to Sync More Often
Here's where most teams discover that solving conflict resolution doesn't mean you've solved sync. Your CRDTs can perfectly resolve conflicts—but only when devices actually connect and exchange data.
In our warehouse scenario, workers might spend 15 to 20 minutes offline before WiFi comes back. By that time, they've made dozens of decisions based on stale data. The CRDT will eventually make everything consistent, but "eventual" can mean "too late for business needs."
Problem #3: The Single Transport Limitation
The typical approach is to wait for a stable WiFi or cellular connection. But what if we could make devices sync more frequently by using whatever connectivity is available?
Modern smartphones have multiple radios: WiFi, cellular, and Bluetooth Low Energy. Tablets and IoT devices have similar capabilities. Why not use them all?
The Solution: Multi-Transport Mesh Networking
Instead of depending on a single transport, build a mesh where devices communicate directly with each other using whatever works: Bluetooth LE when devices are close, peer-to-peer WiFi when available, LAN when on the same access point, or cellular/internet when connected to infrastructure.
I wrote about this extensively in my article on transport multiplexing, but the key insight is simple: decentralized mesh networks offer multiple pathways for communication, eliminating single points of failure.
In the warehouse, even with WiFi down, workers can sync directly device-to-device via Bluetooth LE or peer-to-peer WiFi. Data flows through the mesh, hopping from device to device until it reaches everyone who needs it.
The New Problem: Intelligent Transport Selection
But now you face a new challenge: how do you decide which transport to use, and when?
Each transport has different characteristics:
- WiFi: 50-100+ Mbps, but it requires access points, and real-world range and throughput vary with the WiFi generation each device supports (WiFi 5, 6, 6E, 7)
- Bluetooth LE: ~2 Mbps, works peer-to-peer, lower power
- Peer-to-Peer WiFi: High bandwidth, no infrastructure needed, but platform-specific (Apple's AWDL vs. Android's WiFi Direct)
- Cellular: Variable bandwidth, requires subscription, costs money
Your mesh networking solution needs to constantly evaluate: Which transport is available? Which offers the best performance right now? How do you handle transitions when a transport fails mid-sync?
Problem #4: Network Instability and Transport Switching
Networks in the real world aren't clean. Connections don't just work or fail—they degrade. You get the "janky" networks I mentioned earlier: a Bluetooth connection that drops packets, a WiFi signal that fluctuates, a cellular connection that's technically up but practically unusable.
The Solution: A Transport Multiplexer
You need intelligence sitting between your sync engine and the various transports—a multiplexer that can:
- Monitor the health of each transport in real-time
- Dynamically switch between transports based on conditions
- Fragment data across multiple transports simultaneously
- Reassemble packets that arrived via different paths
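Here's a bare-bones sketch of the transport-selection piece in TypeScript. The transport names, health fields, and scoring weights are all invented for illustration—real multiplexers weigh far more signals:

// Illustrative transport scoring sketch — not drawn from any real SDK.
interface Transport {
  name: string
  available: boolean
  bandwidthMbps: number   // measured throughput, not nominal
  recentLossRate: number  // 0..1, from link health probes
}

// Prefer bandwidth, penalize packet loss.
function score(t: Transport): number {
  return t.bandwidthMbps * (1 - t.recentLossRate)
}

// Pick the healthiest available transport right now.
function pickTransport(transports: Transport[]): Transport | undefined {
  return transports
    .filter((t) => t.available)
    .sort((a, b) => score(b) - score(a))[0]
}

const mesh: Transport[] = [
  { name: "bluetooth-le", available: true, bandwidthMbps: 2, recentLossRate: 0.05 },
  { name: "p2p-wifi", available: true, bandwidthMbps: 80, recentLossRate: 0.4 },
  { name: "lan", available: false, bandwidthMbps: 100, recentLossRate: 0 },
]
console.log(pickTransport(mesh)?.name) // "p2p-wifi" (80 × 0.6 = 48 beats 1.9)

In practice this evaluation runs continuously, because the answer changes as devices move and links degrade.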
Ditto calls this the "Rainbow Connection" because each transport is like a different color of the rainbow, all working together to move data.
The New Problem: Bandwidth Constraints
Now you can use multiple transports and switch intelligently between them. But you've just surfaced another problem: Bluetooth LE operates at roughly 2 Mbps, while WiFi can handle 50-100+ Mbps. That's a 25-50x difference.
If your sync protocol tries to send too much data, Bluetooth becomes a bottleneck. The mesh might technically work, but performance becomes unacceptable for business operations.
Problem #5: The Document Flooding Disaster
Here's where most CRDT implementations hit a wall. The typical approach is to sync entire documents whenever any field changes.
Let's do the math. Say you have:
- Documents averaging 100KB each
- 50 devices in the mesh
- 10 updates per minute across the mesh
If every field change triggers a full document broadcast, you're looking at:
100KB × 50 devices × 10 updates/min = 50MB/minute = 3GB/hour
Over Bluetooth LE at 2 Mbps, 50MB takes over 3 minutes to transfer. Your mesh can't keep up. Updates queue behind each other, lag compounds, and the system grinds to a halt.
This isn't a theoretical problem. I've seen this exact scenario kill production deployments. The CRDT works perfectly in the lab with a few devices, but collapses under real-world load.
The Solution: Property-Level Diffs (Delta Sync)
The answer is to sync only what changed. If a document has 20 fields and only the status field changed, send only that one property update—not the entire 100KB document.
This is harder than it sounds. You need to:
- Track changes at the property level within your CRDT
- Calculate diffs efficiently
- Maintain causality across partial updates
- Handle cases where different properties on the same document change on different devices
State-based CRDTs traditionally sync full state. Some research papers discuss "delta-state CRDTs" that send incremental changes, but implementing them correctly is complex. You need version vectors, hybrid logical clocks, or similar mechanisms to track causality at the property level.
When implemented well, property-level sync reduces bandwidth by orders of magnitude. That 100KB document? Maybe you're now sending 200 bytes. Suddenly Bluetooth LE looks much more viable.
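Here's a simplified sketch of a property-level diff in TypeScript. It's illustrative only—a real delta-state CRDT tracks per-field version metadata rather than comparing JSON snapshots:

// Compare two versions of a document and emit only the changed fields,
// each stamped for causality. (Field removals omitted for brevity.)
type Doc = Record<string, unknown>
type Delta = { field: string; value: unknown; hlc: string }[]

function diff(before: Doc, after: Doc, hlc: string): Delta {
  const delta: Delta = []
  for (const field of Object.keys(after)) {
    if (JSON.stringify(before[field]) !== JSON.stringify(after[field])) {
      delta.push({ field, value: after[field], hlc })
    }
  }
  return delta
}

// A 20-field pallet document where only `status` changed produces a
// one-entry delta — a few hundred bytes instead of the whole document.
const before = { id: "pallet-7", status: "active", weightKg: 410 /* ...17 more fields */ }
const after = { ...before, status: "shipped" }
console.log(diff(before, after, "0005:device-a"))
// [{ field: "status", value: "shipped", hlc: "0005:device-a" }]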
The New Problem: Broadcast Storm Inefficiency
But even with property-level diffs, you're still broadcasting. Every update goes to every device in the mesh, whether they need it or not.
In an enterprise deployment, this gets expensive fast. A retail chain might have thousands of stores, each with dozens of devices. Not every device needs every piece of data.
Problem #6: Indiscriminate Data Propagation
Broadcasting everything creates several problems:
- Bandwidth waste: Sending data that recipients don't need
- Storage waste: Devices store data they'll never use
- Security concerns: Devices see data they shouldn't access
- Battery drain: Radios constantly active processing irrelevant updates
The Solution: Query-Based Subscriptions
Instead of broadcasting everything, let devices declare what they want using queries:
SELECT * FROM inventory
WHERE storeId = 'store-seattle-42'
AND status IN ('active', 'pending')
AND lastUpdated >= '2025-01-01'
Now your sync engine only sends documents matching the subscription. When a document stops matching (because status changed to archived, for example), it's automatically removed from the device.
This is a fundamental architectural shift. You're moving from "sync everything and let the application filter" to "sync only what's needed." The database itself becomes aware of what data should exist on which devices.
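A rough sketch of that idea: after each merged update, re-evaluate the document against the device's subscription and evict it when it stops matching. The predicate below hard-codes the example query above and is purely illustrative:

// Incremental subscription matching sketch (illustrative only).
interface InventoryDoc { storeId: string; status: string; lastUpdated: string }

// The subscription query above, expressed as a predicate.
const subscribed = (doc: InventoryDoc): boolean =>
  doc.storeId === "store-seattle-42" &&
  ["active", "pending"].includes(doc.status) &&
  doc.lastUpdated >= "2025-01-01"

// Called whenever a merged update lands on this device.
function onMergedUpdate(local: Map<string, InventoryDoc>, id: string, doc: InventoryDoc) {
  if (subscribed(doc)) {
    local.set(id, doc)  // still matches — keep it synced locally
  } else {
    local.delete(id)    // stopped matching — evict from this device
  }
}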
The New Problem: Data Lifecycle and Eviction
Query-based subscriptions solve propagation, but introduce yet another challenge: how do you safely delete data?
In a distributed system with CRDTs, deletion is complex. You can't just remove a document—other devices might still have operations referencing it. If Device A deletes a document while Device B is offline, then Device B comes back online with an update to that "deleted" document, what happens?
The standard CRDT solution is "tombstones"—you mark documents as deleted but keep metadata around so you can resolve conflicts. But tombstones accumulate forever, consuming storage.
Problem #7: Safe Data Eviction
You need to safely remove data—both documents that no longer match queries and deleted documents—without breaking causality or losing conflict resolution capabilities.
The Solution: Query-Driven Eviction
The elegant solution is to make eviction a natural consequence of subscriptions rather than a separate garbage collection problem:
Subscription-Based Lifecycle: Documents are automatically removed from a device when they no longer match any active subscription query
Automatic Cleanup: Once a deleted document stops matching subscription queries (because it's marked deleted), it's automatically evicted from devices
For example, if your subscription is:
SELECT * FROM orders
WHERE status != 'completed' AND storeId = 'store42'
When an order's status changes to completed, it automatically evicts from your device—no separate garbage collection needed. The same principle works for deletions: mark the document as deleted, and if "deleted" documents don't match your queries, they evict automatically.
This approach requires:
- Hybrid Logical Clocks (HLC) to track causality even when device clocks drift
- Incremental query evaluation to detect when documents stop matching
- Grace periods to account for devices that might be offline temporarily
The key insight: instead of building a separate eviction protocol, let the query subscription system handle it. Data lifecycle becomes declarative—you specify what you want, and the system manages what stays and what goes.
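The hybrid logical clock mentioned above deserves a quick sketch. This is a minimal TypeScript rendering of the core HLC rules—production implementations also bound clock drift and encode node identifiers:

// Minimal hybrid logical clock: a physical timestamp plus a logical
// counter, so causal ordering survives device clock drift.
interface HLC { physical: number; logical: number }

// Advance the local clock for a local event or send.
function tick(local: HLC, wallClockMs: number): HLC {
  if (wallClockMs > local.physical) return { physical: wallClockMs, logical: 0 }
  return { physical: local.physical, logical: local.logical + 1 }
}

// Merge a remote timestamp on receive: the result is causally after both.
function recv(local: HLC, remote: HLC, wallClockMs: number): HLC {
  const physical = Math.max(local.physical, remote.physical, wallClockMs)
  let logical = 0
  if (physical === local.physical && physical === remote.physical) {
    logical = Math.max(local.logical, remote.logical) + 1
  } else if (physical === local.physical) {
    logical = local.logical + 1
  } else if (physical === remote.physical) {
    logical = remote.logical + 1
  }
  return { physical, logical }
}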
The New Problem: Implementation Complexity
And here's where we arrive at the final challenge: building all of this correctly is incredibly hard.
Why This Is Hard to Build
Let me be direct: after three decades of building distributed systems, I can confidently say that implementing this stack correctly is a multi-year engineering effort.
Here's what you're really signing up for:
Multi-Transport Networking
Each transport has platform-specific APIs and quirks:
- iOS uses Apple Wireless Direct Link (AWDL) for peer-to-peer WiFi
- Android uses WiFi Aware or WiFi Direct
- Bluetooth LE behaves differently across platforms and OS versions
- Discovery protocols (mDNS, BLE advertising) need tuning for battery life vs responsiveness
You need to handle graceful degradation, automatic reconnection, and intelligent fallback between transports.
CRDT Implementation and Optimization
State-based CRDTs are conceptually simple but have serious practical challenges:
- Metadata overhead: Version vectors grow with the number of devices
- Merge complexity: Combining states efficiently at scale
- Property-level granularity: Most academic papers describe document-level CRDTs, not field-level
- Tombstone accumulation: Growing garbage that impacts performance
You need a production-grade CRDT implementation optimized for mobile constraints.
Sync Protocol Design
Your protocol needs to:
- Calculate diffs efficiently between device states
- Compress payloads for bandwidth-constrained transports
- Handle partial sync over unreliable connections
- Resume interrupted transfers
- Avoid thundering herd problems when many devices sync simultaneously
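To illustrate just one of those requirements—resuming interrupted transfers—here's a bare-bones chunking sketch in TypeScript. The framing and chunk size are invented for this example:

// Split a delta payload into chunks and resume from the last acknowledged
// sequence number, instead of restarting the transfer from zero.
const CHUNK_BYTES = 512 // sized for a Bluetooth LE-friendly MTU

function* chunks(payload: Uint8Array, fromChunk: number) {
  for (let i = fromChunk; i * CHUNK_BYTES < payload.length; i++) {
    yield { seq: i, data: payload.slice(i * CHUNK_BYTES, (i + 1) * CHUNK_BYTES) }
  }
}

// On reconnect, the receiver reports its highest contiguous sequence
// number and the sender resumes from the next chunk.
function resumeFrom(lastAckedSeq: number): number {
  return lastAckedSeq + 1
}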
Query Engine Integration
Query-based subscriptions require:
- A query parser and optimizer
- Incremental view maintenance algorithms
- Indexing for efficient query evaluation
- Integration with the CRDT layer so updates trigger query re-evaluation
Testing at Scale
You can't just unit test this. You need:
- Network simulation with realistic latency, packet loss, and bandwidth constraints
- Large-scale mesh simulations with hundreds of devices
- Chaos testing—randomly disconnecting devices, corrupting data, simulating clock drift
- Platform-specific testing across iOS, Android, and other targets
- Performance benchmarking under load
I've seen teams underestimate this by 10x or more. What looks like a few months of work turns into years.
The NASA Analogy: Systematic Problem Solving
The Apollo program faced similar cascading complexity. Getting to the moon wasn't a single problem—it was a stack of interconnected challenges where each solution revealed new obstacles.
They needed rockets powerful enough to escape Earth's gravity.
Solution: staged rockets. But now they needed to figure out how to rendezvous in orbit.
Solution: the Lunar Orbit Rendezvous approach. But now they needed a way for astronauts to survive re-entry heat.
Solution: heat shields. But now they needed guidance systems accurate enough to navigate in space. And so on.
NASA succeeded not by finding one brilliant solution, but by systematically working through each layer of complexity. They built subsystems, tested them independently, integrated them carefully, and refined relentlessly.
Offline-first data sync requires the same systematic approach. You can't just drop in a CRDT library and call it done. You need the full stack:
- Offline-first database
- CRDT-based conflict resolution
- Multi-transport mesh networking
- Intelligent transport multiplexing
- Property-level delta sync
- Query-based subscriptions
- Causality tracking and safe eviction
Each component must work correctly in isolation and integrate seamlessly with the others.
What Complete Solutions Look Like
This is why comprehensive platforms have emerged. Building this stack from scratch takes years and specialized expertise. But the good news is that the problem is solved—you can use platforms that have done the engineering work.
Let me show you what a complete solution looks like, using Ditto as a concrete example (since it's the platform I work with daily and know inside-out).
Architecture: Edge P2P Query-Correct Sync
The fundamental shift is from "server-controlled sync" to "edge P2P query-correct sync."
Traditional architecture:
Device -> WiFi/Cellular -> Server -> WiFi/Cellular -> Device
Edge-first architecture:
Device -> Direct P2P -> Device
Device -> Mesh Network -> Device (with optional server sync)
The database lives on the device. Sync happens peer-to-peer using whatever transport works. The server (if present) is just another peer, not a required coordinator.
Multi-Transport Mesh with Multiplexer
Ditto's mesh automatically uses:
- Bluetooth LE: For close-range, low-power connectivity
- Local Area Network (LAN): When devices share a WiFi access point
- Peer-to-peer WiFi: Platform-specific (AWDL on iOS, WiFi Aware on Android)
- WebSockets/Internet: For connecting to servers or remote peers
The multiplexer (Rainbow Connection) intelligently switches between transports and can even fragment data across multiple simultaneous connections.
Property-Level CRDT Sync
Instead of syncing 100KB documents, Ditto syncs field-level changes. A document with 20 fields where one field changed? You're syncing just that field's update—typically hundreds of bytes instead of kilobytes.
The CRDT implementation uses:
- MAP types for objects (add-wins semantics)
- REGISTER types for simple values (last-write-wins)
- Counters and other specialized types as needed
Each field is independently tracked, so concurrent updates to different fields on the same document merge cleanly.
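Here's what that looks like in a concurrent-edit scenario, assuming the DQL-based store.execute API from current Ditto SDKs; dittoA and dittoB stand in for Ditto instances on two different devices:

// Device A (offline): a warehouse worker updates the status field.
await dittoA.store.execute(
  "UPDATE orders SET status = 'staged' WHERE _id = :id",
  { id: "order-123" }
)

// Device B (offline, concurrently): a supervisor updates the assignee field.
await dittoB.store.execute(
  "UPDATE orders SET assignee = 'team-4' WHERE _id = :id",
  { id: "order-123" }
)

// After the devices sync, both replicas converge on a document with
// status = 'staged' AND assignee = 'team-4' — neither edit is lost,
// because each field is an independently tracked REGISTER.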
Query-Based Subscriptions with Incremental View Maintenance
Here's what it looks like in practice:
// Register a subscription - determines what data syncs to this device
const subscription = ditto.sync.registerSubscription(
`SELECT * FROM orders
WHERE storeId = 'store-seattle-42'
AND status != 'completed'`
)
When an order's status changes to completed, it automatically evicts from devices that have this subscription. The sync engine evaluates queries incrementally—updates trigger query re-evaluation without full table scans.
This solves multiple problems at once:
- Bandwidth efficiency: Only sync relevant data
- Storage management: Devices don't accumulate irrelevant documents
- Security: Devices only receive data matching their authorization scope
- Automatic cleanup: Documents evict when they stop matching
Query-Driven Data Lifecycle
Ditto takes a unique approach to eviction by making it a natural consequence of query subscriptions rather than a separate garbage collection process:
Automatic Eviction: When a document no longer matches any active subscription on a device, it's automatically removed. This works for both state changes (like an order moving to "completed" status) and deletions.
Deletion Handling: When you delete a document in Ditto, it's marked as deleted in the CRDT. If your subscription queries exclude deleted documents (which is typical), the deleted document automatically evicts from devices that don't need it.
Causality Tracking: Ditto uses hybrid logical clocks (HLC) to maintain causality, ensuring that late-arriving updates are handled correctly even after eviction. The system tracks enough metadata to resolve conflicts without keeping full document tombstones forever.
This happens automatically—you don't manually manage data lifecycle. You just define what data each device needs via subscriptions, and Ditto handles the rest.
The Path Forward
Here's what I tell teams evaluating offline-first solutions:
Don't underestimate the complexity. This isn't just "add a CRDT library and you're done." Each layer of the problem is harder than it first appears, and the integration between layers is where most of the difficulty lies.
Decide: Build or Buy. If you have a multi-year timeline, deep expertise in distributed systems, and can dedicate a team to this problem, building from scratch is possible. Most teams should use a comprehensive platform—the engineering effort saved is enormous.
Think Edge-Native. The future of computing is increasingly decentralized. IoT devices, mobile workers, autonomous systems—these all need to operate independently and sync opportunistically. Architecting for edge-native scenarios now positions you better for what's coming.
Test Realistically. Whatever solution you choose, test it under realistic conditions: flaky networks, limited bandwidth, devices coming online and offline at random, simultaneous updates across many peers. Lab tests don't surface the hard problems.
Conclusion: Engineering Is About Tradeoffs
The warehouse problem I started with—three workers 400 feet apart who can't share data—is completely solvable. So is the broader challenge of offline-first data sync. We have the technology, we understand the mathematics, and we've built production systems that work at scale.
Ditto has solved these problems comprehensively with its offline-first architecture that keeps applications working regardless of network conditions, peer-to-peer mesh networking that intelligently multiplexes across multiple transports, and CRDT-powered conflict resolution that automatically handles concurrent updates. The platform combines property-level delta sync, query-based subscriptions, and automatic data lifecycle management into a production-ready system that developers can integrate in days rather than years.
But it's not simple. Like NASA's moon missions, success requires systematically solving each layer of cascading complexity. You need offline-first databases, CRDT-based conflict resolution, multi-transport mesh networking, intelligent multiplexing, property-level sync, query-based subscriptions, and safe eviction protocols—all working together.
Every solution creates new problems. That's not a bug; it's the nature of complex engineering. The question is whether you're prepared to solve all the problems, not just the first few.
After 30 years of building distributed systems, I've learned that the teams who succeed are the ones who understand this complexity upfront, respect the engineering effort required, and choose their battles wisely. Sometimes that means building from scratch. More often, it means leveraging platforms that have already done the hard work.
The offline-first future is here. The question is whether your architecture is ready for it.
Frequently Asked Questions
Why can't devices sync data without WiFi or cellular?
Devices absolutely can sync without WiFi or cellular—using technologies like Bluetooth LE and peer-to-peer WiFi. The limitation isn't hardware; it's architecture. Most applications are built assuming cloud connectivity and don't include peer-to-peer mesh networking capabilities. Modern smartphones have multiple radios (Bluetooth, WiFi Direct, etc.) that enable direct device-to-device communication, but applications need to be specifically designed to use these capabilities.
What are CRDTs and why aren't they enough for offline sync?
CRDTs (Conflict-Free Replicated Data Types) are data structures that automatically resolve conflicts when multiple devices edit the same data offline. They guarantee eventual consistency using mathematical properties. However, CRDTs alone don't solve the complete sync problem because they don't address: when and how data syncs (transport layer), bandwidth efficiency (most implementations send full documents), what data each device should receive (subscription management), or when to safely delete data (eviction protocols). CRDTs are the foundation, not the complete solution.
How does mesh networking solve connectivity problems?
Mesh networking creates multiple pathways for data to flow between devices. Instead of all devices connecting to a central hub (like a WiFi access point), devices connect directly to nearby peers. Data can "hop" from device to device until it reaches its destination. If one connection fails, data routes through alternative paths. This eliminates single points of failure and enables operation even when infrastructure (WiFi, cellular) is unavailable. Modern mesh implementations use multiple transports simultaneously—Bluetooth LE, peer-to-peer WiFi, and LAN connections—maximizing resilience.
What is a transport multiplexer?
A transport multiplexer is an intelligent switching layer that manages multiple network transports (WiFi, Bluetooth LE, cellular, etc.) simultaneously. It monitors the health and performance of each available transport in real-time, dynamically routes data through the best available option, and can even fragment data across multiple transports at once. Think of it like a smart router at the device level—it ensures data flows using whatever connectivity is available, automatically failing over when transports become unavailable or degraded.
Why do most CRDT implementations have bandwidth problems?
Traditional CRDT implementations sync entire document states when any field changes. If you have a 100KB document with 20 fields and one field update, most systems broadcast the full 100KB to all peers. In bandwidth-constrained environments (Bluetooth LE operates at ~2 Mbps), this creates bottlenecks. With many devices and frequent updates, the math breaks down: 100KB × 50 devices × 10 updates/minute = 50MB/minute, which exceeds the capacity of low-bandwidth transports. This "document flooding" problem makes naive CRDT implementations impractical at scale.
What are property-level diffs and why do they matter?
Property-level diffs (also called delta sync) means sending only the changed fields rather than entire documents. If a document has 20 fields and one field changes, you transmit just that field's new value—typically a few hundred bytes instead of kilobytes. This reduces bandwidth by 10-100x or more, making sync practical over constrained transports like Bluetooth LE. Implementing this correctly requires tracking causality at the field level, calculating diffs efficiently, and maintaining CRDT semantics for partial updates.
How does query-based sync differ from full replication?
Full replication broadcasts all data to all devices, letting applications filter what they need. Query-based sync inverts this: devices declaratively specify what data they want using queries (e.g., SELECT * FROM orders WHERE storeId = 'seattle'), and the sync engine only sends matching documents. When a document stops matching (because a field changed), it's automatically removed from that device. This reduces bandwidth (only send relevant data), manages storage automatically (devices don't accumulate irrelevant documents), improves security (devices only receive authorized data), and enables automatic cleanup (documents evict when they stop matching).
What is causality tracking in distributed systems?
Causality tracking means recording the order of operations across distributed devices so you can determine which updates happened before others, even when clocks aren't synchronized. This is critical for conflict resolution and safe data deletion. Common techniques include vector clocks (each device maintains a counter of operations from all known peers) and hybrid logical clocks (combining physical timestamps with logical counters). Causality tracking lets you say "Device A's update happened before Device B's update" even if Device A's clock was wrong, ensuring correct conflict resolution and safe eviction of old data.
What makes offline-first sync implementation difficult?
The difficulty comes from the intersection of multiple complex problems: distributed systems theory (CRDTs, causality, consistency), networking (multi-transport protocols, discovery, NAT traversal), platform-specific APIs (iOS vs Android vs web), performance optimization (minimizing bandwidth, battery, storage), and security (distributed authorization without central authority). Each problem alone is manageable, but they interact in complex ways. Additionally, testing requires simulating realistic network conditions at scale, which most teams aren't equipped to do. The full implementation is typically a multi-year engineering effort requiring specialized expertise.
How does Ditto's approach differ from other CRDT implementations?
Ditto provides a complete integrated stack rather than just CRDT primitives. Key differentiators include: property-level delta sync (most implementations sync full documents), query-based subscriptions with automatic eviction (most broadcast all data and require manual data lifecycle management), multi-transport mesh with intelligent multiplexing (most use single transports), and cross-platform device discovery and connection management. The integration between layers—how the query engine triggers sync, how subscriptions control data lifecycle, how the multiplexer optimizes for bandwidth constraints—is where much of the value lies.
When should you use an offline-first database?
Use offline-first databases when:
1. Users need to work in environments with unreliable or absent connectivity (field work, healthcare, aviation, retail, construction)
2. Your application requires low-latency interactions even when cloud-connected
3. Data must sync peer-to-peer without routing through servers
4. High availability is critical—the app must keep working regardless of network conditions
5. You need eventual consistency across distributed devices
Don't use offline-first when you have no offline use case, or when you can tolerate application unavailability during network outages.
About Me
I am a Developer Advocate at Ditto with over 30 years of experience building distributed systems, mobile applications, and developer tools.
Previously I was an Associate Director/Tech Lead of Client Technologies - Mobile Technologies @ EY and Senior Developer Advocate/Principal Software Engineer @ Couchbase.