The maturation of WebAssembly (WASM) and emerging browser storage primitives have shifted the web application paradigm from traditional stateless models toward Local-First distributed systems. This technical analysis explores the implementation of a high-performance stack engineered to remove network latency from the interaction path (sub-millisecond UI updates) by running a full relational engine directly on the client.
The Technical Ecosystem
Throughout this analysis, we will deconstruct the integration of three core layers that facilitate bidirectional data persistence and synchronization:
- Client Runtime: The application of WASM-compiled SQLite interfacing with the Origin Private File System (OPFS). We will explore how this architecture overcomes the IndexedDB I/O bottleneck by offloading complex query execution to a dedicated Web Worker.
- Data Infrastructure: The configuration of PostgreSQL as the canonical source of truth, leveraging Logical Replication and the Write-Ahead Log (WAL). We will detail why data flows must be driven by database-level events (CDC) rather than imperative REST API calls.
- Transport and Synchronization Layer: The deployment of PowerSync or ElectricSQL for the orchestration of Data Buckets. We will analyze the segmentation of global state into per-user local shards via JWT Claims and WebSocket streams, ensuring Optimistic UI consistency.
The goal is to demonstrate the transformation of the browser into a resilient database node, capable of handling local transactions and reconciling state with the server in a transparent, asynchronous manner.
1. The Client-Side Engine: SQLite + WASM + OPFS
1.1. The Rationale for WASM-based SQLite
Historically, web storage has been the primary bottleneck for complex applications:
- LocalStorage: Synchronous by nature, it blocks the UI thread during write operations. It is typically capped at around 5MB per origin and limited to string-based key-value pairs, making it infeasible for sophisticated data structures.
- IndexedDB: While asynchronous, its event-based API is notoriously difficult to manage (callback hell) and lacks a relational engine. The absence of native support for JOINs, aggregations, or advanced filtering forces developers to process data manually in JavaScript, incurring heavy CPU and memory overhead.
WebAssembly (WASM) redefines these constraints by allowing SQLite's original C source to be compiled to WASM bytecode and executed directly within the browser. This is not an emulation; the browser engine runs the compiled module, with the following advantages:
- Near-Native Performance: WASM operates on linear memory—a contiguous block of bytes—managed directly by SQLite. This allows the Query Optimizer to analyze execution plans and access indexes with microsecond latency, a feat impossible via JavaScript interpretation.
- Main Thread Isolation: By compiling SQLite to WASM, the entire database engine can be offloaded to a Web Worker. The main thread (UI) remains completely unburdened, communicating with the database only to dispatch queries and receive result sets, thereby maintaining a consistent 60fps.
- Full Relational Capabilities: Porting the complete engine provides access to ACID transactions, triggers, views, full-text search (FTS5), and, crucially, referential integrity directly on the client.
1.2. Storage: The OPFS Revolution
Running SQLite solely in-memory is volatile; data is lost upon page refresh. To transform SQLite into an industrial-grade persistent database, we leverage the Origin Private File System (OPFS). OPFS is a private storage ecosystem within the File System Access API that enables the browser to manage files with an efficiency previously unattainable by legacy web APIs.
- High-Performance Synchronous Access: The cornerstone of OPFS is the FileSystemSyncAccessHandle. Unlike standard asynchronous web APIs—which introduce latency via the event loop—this interface allows for synchronous read and write operations. This is critical for SQLite, as the engine was architected under the assumption that the file system responds immediately to its low-level calls (a sketch appears after this list).
- Optimization for WAL (Write-Ahead Logging): SQLite employs a journaling mechanism to prevent data corruption during failures. OPFS provides direct, exclusive access to data blocks (offsets), enabling SQLite's WAL mode to operate at peak velocity. In this mode, writes do not block reads, facilitating true concurrency that IndexedDB cannot emulate.
- Specialized VFS (Virtual File System): SQLite interfaces with hardware through an abstraction layer known as the VFS. In this stack, we implement a JS/WASM VFS that serves as a bridge, translating SQLite's C-based I/O requests into direct OPFS calls. By executing this within a Web Worker, we ensure these synchronous operations occur on a separate thread, preventing any impact on the responsiveness of the user interface.
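To make the VFS-to-OPFS bridge concrete, here is a minimal sketch of how a Web Worker can obtain a synchronous access handle; the file name and surrounding function are illustrative and not tied to any particular SQLite WASM build.

// Runs inside a Web Worker: createSyncAccessHandle() is unavailable on the Main Thread.
async function openDatabaseFile() {
  const root = await navigator.storage.getDirectory(); // OPFS root for this origin
  const fileHandle = await root.getFileHandle("app.sqlite3", { create: true });
  const handle = await fileHandle.createSyncAccessHandle(); // exclusive, synchronous I/O
  // Synchronous reads and writes at explicit byte offsets, the access pattern a SQLite VFS expects.
  const header = new Uint8Array(16);
  handle.read(header, { at: 0 });
  // ...hand the handle over to the WASM VFS layer...
  handle.flush();
  handle.close();
}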
1.3. Implementation Strategy: Worker Threading
Executing SQLite on the Main Thread is non-viable, as intensive queries would cause the UI to hang. The standard architectural pattern for mitigating this is:
- Main Thread: The UI layer (React/Vue/Svelte) dispatches commands via a messaging channel (postMessage).
- Web Worker: Encapsulates the SQLite WASM binary. It receives commands, executes SQL against the OPFS, and returns the result set asynchronously (a minimal sketch follows this list).
- Shared Workers (Optional but Recommended): In scenarios where a user has multiple tabs of the application open, a Shared Worker ensures all tabs interface with the same database instance. This is a critical design choice to prevent file contention and potential data corruption within the OPFS.
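The sketch below illustrates this messaging pattern under some assumptions: the file names, the message shape, and the sqlite.exec binding are placeholders rather than a specific SDK's protocol.

// main.ts (UI thread): dispatch a query to the database worker and await the rows.
const worker = new Worker(new URL("./db.worker.ts", import.meta.url), { type: "module" });

function query(sql: string, params: unknown[] = []): Promise<unknown[]> {
  return new Promise((resolve) => {
    const id = crypto.randomUUID();
    const onMessage = (event: MessageEvent) => {
      if (event.data.id !== id) return;
      worker.removeEventListener("message", onMessage);
      resolve(event.data.rows);
    };
    worker.addEventListener("message", onMessage);
    worker.postMessage({ id, sql, params });
  });
}

// db.worker.ts (Web Worker): owns the SQLite WASM instance and the OPFS file.
// self.onmessage = async (event) => {
//   const { id, sql, params } = event.data;
//   const rows = await sqlite.exec(sql, params); // hypothetical WASM binding
//   self.postMessage({ id, rows });
// };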
1.4. Frontend <-> SQLite Interaction: The Observer Pattern
In a Local-First architecture, the frontend does not "request" data in the traditional sense; instead, it observes the local state. To ensure this interaction remains efficient, we implement a reactivity system centered on table-level tracking.
- Query Subscriptions (Live Queries): The frontend registers persistent SQL queries. Rather than a one-off execution, the integration SDK establishes a Stream or Observable. Because the SQLite WASM engine operates within a dedicated Worker, the system can maintain hundreds of active subscriptions without degrading user input latency.
- Reactive Loop & Table-Level Invalidation: To optimize performance, the system avoids re-executing every query upon every change. The observability engine tracks which tables are touched by each SELECT statement. For instance, if an INSERT occurs in the tasks table, the system detects the mutation and only notifies the hooks or components specifically dependent on tasks.
- Zero-Latency Feedback: With a local database, the "Write → Notification → Re-render" cycle executes in microseconds. This renders complex loading states for local operations obsolete, as the UI stays in near-instantaneous synchronization with the relational engine.
// High-level reactive hook example
const { data, isLoading } = useQuery(
"SELECT * FROM projects WHERE id = ?",
[projectId]
);
// This mutation triggers a table-level invalidation in the Worker.
// The UI updates automatically in <1ms without a network round-trip.
const addTask = async (task: Task) => {
await db.execute(
"INSERT INTO tasks (id, content, status) VALUES (?, ?, ?)",
[task.id, task.content, task.status]
);
};
1.5. The Technical Challenge: Type Parity and Marshalling
The primary architectural friction between client and server stems from type system discrepancies. While PostgreSQL is a strictly typed system supporting complex data structures, SQLite utilizes Manifest Typing—where the data type is associated with the value itself rather than the column—and supports only five native storage classes: NULL, INTEGER, REAL, TEXT, and BLOB.
To ensure system-wide integrity, a robust Type Mapping layer must be implemented:
- UUIDs and Strings: PostgreSQL handles UUIDs natively, whereas SQLite must store them as TEXT or BLOB. The technical challenge lies in ensuring that the "bridging" between the WASM binary and the JavaScript environment does not introduce significant overhead during string conversions in high-frequency transactions.
- Timestamps (ISO 8601 vs. Unix Epoch): SQLite lacks a native date/time type. To maintain the precision and timezone accuracy of Postgres's TIMESTAMPTZ, we standardize local storage as ISO 8601 strings or BigInts (Unix epoch). This ensures that SQLite's built-in date functions (datetime(), strftime()) remain fully functional for filtering and sorting operations.
- JSONB to Text: The JSONB type in PostgreSQL is binary and highly efficient. On the client, SQLite stores this data as TEXT. Consequently, the synchronization layer must perform selective parsing: deserializing into JavaScript objects only when required by the UI to avoid unnecessary CPU penalties at the storage layer.
/**
* Type Mapping Utility (Conceptual)
* Ensures Postgres-compatible types are correctly handled in SQLite WASM
*/
interface SyncPayload {
id: string; // UUID from Postgres
data: Record<string, unknown>; // JSONB mapped to TEXT
updated_at: string; // TIMESTAMPTZ mapped to ISO-8601
}
const mapToSQLite = (payload: SyncPayload) => {
return {
...payload,
// SQLite doesn't have native JSONB, we must stringify
data: JSON.stringify(payload.data),
// Ensure consistent timestamp format for SQLite date functions
updated_at: new Date(payload.updated_at).toISOString()
};
};
2. The Source of Truth (Postgres + Logical Replication + WAL)
In a Local-First architecture, the backend does not originate data—that process has already occurred on the client. Instead, the server functions as the authoritative entity responsible for validation, long-term persistence, and global redistribution.
2.1. The WAL (Write-Ahead Log) as a Messaging System
Traditional synchronization architectures typically rely on polling or manual application-level event triggers, both of which are resource-intensive and prone to race conditions. This stack transforms PostgreSQL into a reactive system by tapping directly into its internal transaction engine.
- Understanding the WAL: The Write-Ahead Log is the backbone of Postgres data integrity. Every state change is recorded in this sequential log before it is committed to the permanent tables, ensuring the database can reconstruct its state following a crash.
- Logical Replication & Decoding: By configuring the server with wal_level = logical, we enable Postgres to decode physical disk changes into logical row-level operations. This allows for the extraction of a continuous stream of INSERT, UPDATE, and DELETE events in structured formats like JSON or Protobuf.
- CDC (Change Data Capture) Efficiency: Rather than repeatedly querying the database for changes, the Sync Layer subscribes to the Postgres replication slot. Changes are "pushed" to the synchronization orchestration layer immediately upon transaction commit. This architecture drastically reduces database overhead and ensures minimal propagation latency across all connected clients.
Critical Server-side Configuration
For the infrastructure to emit these granular deltas, the PostgreSQL instance must be tuned specifically for logical decoding. This involves modifying core server parameters to support persistent replication slots and high-volume log streaming.
-- Required configuration in postgresql.conf
-- wal_level must be 'logical' to enable logical decoding
wal_level = logical
-- Increase the number of replication slots based on expected load
max_replication_slots = 10
max_wal_senders = 10
2.2. Schema Strategy: The Postgres-SQLite Mirror
At the architectural level, we treat PostgreSQL as the Canonical Source of Truth and SQLite as an Optimized Projection.
- Postgres Schema (The Canonical Source): This layer houses the core business complexity. We utilize native types—such as JSONB for semi-structured documents, TIMESTAMPTZ for absolute temporal precision, and geospatial types (via PostGIS)—alongside aggressive constraints. Postgres ensures data integrity across the entire organization.
- SQLite Schema (The Client Projection): This is not a direct clone but rather a "flattened," lightweight version.
  - Bypassing Constraints: While a foreign key violation in Postgres would trigger an integrity error, we occasionally relax these constraints in SQLite. This allows the Optimistic UI to function even if related data has not yet propagated through the synchronization stream.
  - On-the-fly Type Mapping: Since SQLite lacks native support for types like UUID or TIMESTAMPTZ, the synchronization layer must perform real-time transformations. When the WAL emits a change for a TIMESTAMPTZ field, the sync layer normalizes it (typically to ISO 8601 or Unix Epoch) so that SQLite's date functions remain performant on the client side.
2.2.1. Shadow Columns: Managing Synchronization State
To enable the system to be aware of its own synchronization state without polluting the business domain model, the SQLite schema incorporates technical metadata columns. These act as control headers for every record:
- _status: Defines the local data lifecycle (synced, pending_insert, pending_update). This is the engine behind the Optimistic UI, allowing the interface to visually distinguish between confirmed data and data currently in transit.
- _version: A sequence identifier or hash used for Conflict Detection. It prevents "stale" server updates from overwriting more recent local changes.
- _last_synced_at: A timestamp of the last validation against the source of truth. It facilitates cache eviction policies and ensures the client knows the "freshness" of its local projection.
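As a minimal sketch, a client-side projection of a tasks table could look as follows; the column names are illustrative and assume the type mappings from section 1.5.

// Conceptual SQLite projection of a Postgres "tasks" table (names illustrative).
// UUIDs and TIMESTAMPTZ become TEXT, JSONB becomes serialized TEXT, and the
// underscore-prefixed columns carry synchronization metadata only.
await db.execute(`
  CREATE TABLE IF NOT EXISTS tasks (
    id              TEXT PRIMARY KEY,                   -- UUID from Postgres
    project_id      TEXT,                               -- FK relaxed locally (no REFERENCES)
    content         TEXT,
    metadata        TEXT,                               -- JSONB stored as JSON text
    updated_at      TEXT,                               -- TIMESTAMPTZ as ISO 8601
    _status         TEXT NOT NULL DEFAULT 'synced',
    _version        INTEGER NOT NULL DEFAULT 0,
    _last_synced_at TEXT
  );
`);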
Atomic Reconciliation Flow
When a local operation is performed, the engine updates both the domain data and the synchronization metadata within a single, atomic transaction:
- The local state transitions to pending.
- The Sync Layer detects the flagged row and initiates the upstream transmission.
- Upon backend confirmation (verified via the WAL), the state is updated to synced, closing the consistency loop.
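A minimal sketch of that atomic write, assuming a hypothetical db.transaction helper and the shadow columns from section 2.2.1:

// Conceptual: mutate the domain row and its sync metadata in one SQLite transaction.
async function updateTaskContent(taskId: string, content: string) {
  await db.transaction(async (tx) => {
    await tx.execute(
      `UPDATE tasks
          SET content = ?,
              updated_at = ?,
              _status = 'pending_update',
              _version = _version + 1
        WHERE id = ?`,
      [content, new Date().toISOString(), taskId]
    );
  });
  // The sync layer later flips _status back to 'synced' once the change
  // is observed returning through the WAL stream.
}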
2.3. Multi-tenancy and Isolation (Buckets)
The objective is to ensure that each SQLite instance contains only the data authorized for the active user, optimizing both bandwidth consumption and security.
- Bucket-Based Segmentation: Rather than replicating entire tables, we define logical subsets. A bucket is a unit of synchronization that aggregates records based on membership criteria (e.g., user_id, team_id, or project_id).
- Postgres Publication: This serves as the egress point. We define which tables participate in the logical replication stream.
-- Defining the publication for the synchronization engine
CREATE PUBLICATION my_app_sync FOR TABLE tasks, projects, comments;
- Resync and Invalidation Dynamics: The primary challenge lies in permission changes. If a user's access to a project is revoked, their local bucket becomes "orphaned." The synchronization layer must detect this state change on the backend—via triggers or updates to permission tables—and force a bucket invalidation on the client to maintain isolation integrity.
Edge Filtering
Unlike traditional query patterns, filtering does not occur on the client side; instead, it is handled within the Sync Layer before deltas are dispatched. The server evaluates access rules against the WAL and forwards only the rows that match the claims within the user's JWT.
2.4. The Write Flow: The "Async Bridge"
In this model, reads are part of a passive stream (via the WAL), whereas writes are imperative actions that require server-side validation. The "Async Bridge" ensures that local mutations are promoted to global canonical truth.
- Local Ingestion and Mutation: The client writes directly to its local SQLite instance. The change is reflected instantaneously in the UI, but the record is flagged with a pending status.
- Transport: The synchronization SDK aggregates these mutations and dispatches them to the backend, either via a standard API or a persistent tunnel provided by the Sync Service.
- Business Validation: The server receives the mutation and enforces integrity rules, verifying user permissions and validating data schemas.
- Commit and Cycle Closure: Upon successful validation, the change is committed to PostgreSQL. This generates a new entry in the WAL, which the Sync Layer detects and broadcasts back to the client as an acknowledgment (ACK). Only then does the client transition the local record status to synced.
Conflict Resolution: The Concurrency Challenge
In an offline-first system, write conflicts are inevitable. The architecture must define how the server arbitrates between competing updates:
- Last Write Wins (LWW): This is the industry standard due to its simplicity. PostgreSQL utilizes the transaction commit timestamp to arbitrate; the final change to reach the disk persists as the state of truth. It is ideal for applications with low collision probability on individual fields.
- Causal Integrity (Versioning): For mission-critical systems, each record includes a version column or a high-precision updated_at timestamp. If a client attempts to push a mutation with a stale version—typically caused by offline drift while another user updated the same record—the backend rejects the change or initiates a merge process.
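A minimal sketch of such a version check on the backend, assuming a generic Postgres client and illustrative table and column names:

// Conceptual server-side optimistic-concurrency check (not a specific framework's API).
interface TaskMutation { id: string; content: string; baseVersion: number; }

async function applyTaskMutation(
  // Any Postgres client exposing query(sql, params) -> { rowCount }, e.g. node-postgres.
  client: { query: (sql: string, params: unknown[]) => Promise<{ rowCount: number | null }> },
  mutation: TaskMutation
): Promise<void> {
  const result = await client.query(
    `UPDATE tasks
        SET content = $1, version = version + 1, updated_at = now()
      WHERE id = $2 AND version = $3`,
    [mutation.content, mutation.id, mutation.baseVersion]
  );
  // Zero affected rows means the client's base version is stale: reject or merge.
  if (!result.rowCount) {
    throw new Error(`Conflict: task ${mutation.id} was modified concurrently`);
  }
}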
3. The Synchronization Layer (Sync Layer & Orchestration)
In traditional models, the backend serves as a passive gatekeeper. In this architecture, the synchronization layer—utilizing PowerSync—functions as an active orchestrator, maintaining a consistent data graph between the server and thousands of local clients.
3.1. The Data Tunnel: WebSockets and Delta Streaming
Unlike atomic REST requests that terminate upon response, this architecture establishes a persistent binary tunnel via WebSockets.
- PowerSync Streaming: PowerSync interfaces with the PostgreSQL logical replication slot to consume the WAL in real-time. It does not wait for client polling; it "pushes" changes immediately following a commit in the central database.
- Binary Protocol (Deltas): To minimize bandwidth and battery consumption, the system employs efficient serialization formats like Protobuf. Rather than transferring the entire row, it sends Deltas—the exact diff of the changed fields.
- Sync State Persistence (LSN & Cursors): The sync layer tracks the Log Sequence Number (LSN) for every client. In a reconnection scenario—such as a user emerging from a tunnel—the client transmits its last known LSN. The Sync Layer then calculates the precise delta from that point in the WAL, avoiding expensive full re-synchronizations.
3.2. PowerSync as a Filtering Engine (Sync Rules)
PowerSync's primary strength lies in its ability to execute server-side Sync Rules. These rules function as a dynamic firewall:
- Permission Evaluation: For every change detected in the WAL, PowerSync determines the intended recipients based on SQL logic defined at the server level.
- Dynamic Partitioning: If a user's access to a project is revoked, the sync rule detects this state change and dispatches a "cleanup" instruction to the local SQLite instance, ensuring data security even within offline storage.
-- Conceptual Server-Side Sync Rule
-- Tasks flow only if the user is a member of the project
SELECT * FROM tasks
WHERE project_id IN (
SELECT id FROM projects WHERE user_id = request.auth.user_id
)
3.3. Data Buckets and Sync Rules
This is a critical architectural concept for security. Data is never filtered on the client; filtering occurs within the Sync Service via SQL rules.
- Bucket Definition: Each user possesses a virtual "bucket". Upon login, the system calculates the user's permission graph and begins "filling" the local SQLite database with these specific fragments of the global database.
3.4. Authentication and Security (JWT + Claims)
In this stack, the JWT (JSON Web Token) is more than an access token; it is the primary key for data filtering.
- Auth Provider: Systems like Supabase Auth, Clerk, or Auth0 issue a JWT containing Custom Claims (such as user_id and roles).
- Handshake: The WASM client transmits the JWT upon establishing a connection to the Sync Service (a conceptual connector sketch follows this list).
- Validation: The Sync Service validates the JWT signature and utilizes the embedded user_id to execute the bucket and sync rules described in sections 3.2 and 3.3.
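The connector sketch below is conceptual; the method names and endpoint are illustrative rather than the exact surface of any particular sync SDK.

interface SyncCredentials { endpoint: string; token: string; }
interface AuthClient { getAccessToken(): Promise<string>; } // hypothetical auth-provider wrapper

class AppConnector {
  constructor(private auth: AuthClient) {}

  // Called by the sync SDK when it (re)connects or when the token expires.
  async fetchCredentials(): Promise<SyncCredentials> {
    return {
      endpoint: "https://sync.example.com",     // placeholder Sync Service URL
      token: await this.auth.getAccessToken(),  // JWT carrying user_id and role claims
    };
  }
}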
3.5. The "Upload Path": The Transactional Outbox Pattern
While WAL streaming handles the downstream "pulse," the Upload Path is the engine that pushes mutations upstream. To ensure no changes are lost during network partitions, we implement the Transactional Outbox pattern directly within the local engine.
- SQLite Atomicity: When the application executes a mutation (e.g., UPDATE tasks ...), the SDK performs a dual operation within a single SQLite transaction: it modifies the business table and inserts the change representation into a technical outbox table. This ensures that either the change and its pending upload are saved together, or nothing is saved at all (see the sketch after this list).
- Background Sync & Retry Logic: A background process monitors the outbox table. Upon detecting connectivity, it attempts to dispatch changes to the backend (typically via POST/PATCH APIs), using exponential backoff if the server is unreachable.
- The Loopback Confirmation: This is the final step in the consistency cycle. The client does not purge the outbox record simply upon receiving a 200 OK from the server. Instead, it waits for the processed change to arrive via the WAL stream. Receiving its own update back from the server serves as definitive proof of persistence in Postgres. At this point, the local record transitions from pending to synced, and the outbox is cleared.
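Here is a minimal sketch of the pattern, reusing the hypothetical db.transaction helper from earlier; the _outbox table and its columns are illustrative, not a specific SDK's schema.

async function updateTaskWithOutbox(taskId: string, content: string) {
  await db.transaction(async (tx) => {
    // 1. Mutate the business table.
    await tx.execute(
      "UPDATE tasks SET content = ?, _status = 'pending_update' WHERE id = ?",
      [content, taskId]
    );
    // 2. Record the same change in the outbox within the SAME transaction.
    await tx.execute(
      "INSERT INTO _outbox (table_name, row_id, op, payload, created_at) VALUES (?, ?, ?, ?, ?)",
      ["tasks", taskId, "update", JSON.stringify({ content }), new Date().toISOString()]
    );
  });
  // A background process drains _outbox when connectivity returns and clears each
  // entry only after the change is observed coming back on the WAL stream.
}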
Backend Integrity
The backend serves as the ultimate validator. If an upstream mutation violates business rules—such as updating a task already closed by another user—the backend rejects the change. The sync system then notifies the client to revert the local change or flag it for manual resolution, preventing offline conflicts from corrupting the central source of truth.
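How the client reacts to such a rejection is application-specific; a hedged sketch, assuming the outbox and shadow columns introduced earlier, might look like this:

// Conceptual handling of a mutation the backend rejected (names illustrative).
async function handleRejectedMutation(outboxId: number, rowId: string) {
  await db.transaction(async (tx) => {
    // Drop the failed entry so it is not retried blindly.
    await tx.execute("DELETE FROM _outbox WHERE id = ?", [outboxId]);
    // Flag the row for manual resolution; an alternative policy is to revert it
    // to the last server-confirmed state received via the sync stream.
    await tx.execute("UPDATE tasks SET _status = 'conflict' WHERE id = ?", [rowId]);
  });
}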
4. The Alternative: Turso in the Browser (LibSQL WASM)
Turso's entry into the browser via WebAssembly demonstrates that the local database model is no longer merely a trend, but the de facto standard. However, its underlying implementation architecture reflects distinct design decisions compared to PowerSync that merit technical analysis:
- Compute: Main Thread vs. Workers: In contrast to the standard recommendation of offloading SQLite to a Web Worker to prevent UI blocking, Turso—in its current implementation via napi-rs—executes computation on the Main Thread. It delegates only file I/O to a Worker through a SharedArrayBuffer. The rationale is that for lightweight queries, the overhead of cross-thread communication can exceed the execution time of the query itself.
- Replication Protocol (The LibSQL Way): While PowerSync relies on a "Sync Rules" system (server-side SQL) to generate buckets, Turso utilizes native LibSQL replication. This facilitates the creation of Embedded Replicas in the browser that are virtually identical to the cloud-hosted database, significantly simplifying type parity across the stack.
- Memory Security Requirements: To enable efficient communication between the Main Thread and the OPFS Worker, Turso requires the server to deliver the application with COOP/COEP (Cross-Origin Isolation) headers. This is a critical technical requirement to mitigate side-channel attacks, though it introduces an additional layer of deployment configuration.
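For reference, these are the two cross-origin isolation headers in question. The sketch uses Node's built-in http module purely as an example; the same values can be set by any web server or CDN.

import { createServer } from "node:http";

// Headers required for cross-origin isolation (a prerequisite for SharedArrayBuffer).
const ISOLATION_HEADERS = {
  "Cross-Origin-Opener-Policy": "same-origin",
  "Cross-Origin-Embedder-Policy": "require-corp",
};

createServer((req, res) => {
  for (const [name, value] of Object.entries(ISOLATION_HEADERS)) {
    res.setHeader(name, value);
  }
  // ...serve the application assets here...
  res.end();
}).listen(3000);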
Conclusion: Which Stack to Choose?
Choosing the right architecture depends on your specific data requirements:
- Choose PowerSync/ElectricSQL if: You require granular control over which data subsets are downloaded (Buckets), manage highly complex permission logic within PostgreSQL, and need an Optimistic UI with conflict resolution handled out-of-the-box.
- Choose Turso/LibSQL if: You seek deeper integration with the LibSQL ecosystem, prefer an API that mirrors better-sqlite3 within a browser environment, and are comfortable managing COOP/COEP security headers.
What this architecture is not
This stack is powerful, but it is not universally applicable. It is intentionally opinionated and comes with real trade-offs.
Not for simple CRUD applications
If your app is a basic form-over-API system with minimal offline requirements, a traditional REST or GraphQL backend will be simpler, cheaper, and easier to maintain.
Not beginner-friendly
This architecture assumes solid knowledge of relational databases, WAL semantics, concurrency, and distributed systems. Debugging synchronization issues requires backend and database expertise—not just frontend tooling.
Not free of operational complexity
Logical replication, replication slots, sync rules, and client reconciliation introduce operational overhead. You are trading API simplicity for correctness, performance, and offline guarantees.
Not a replacement for domain-specific conflict resolution
While patterns like Last-Write-Wins work for many use cases, collaborative or high-contention domains often require explicit merge strategies or user-assisted conflict resolution.
Not necessary unless latency or offline capability truly matter
The benefits of this architecture only justify themselves when instant local feedback, offline operation, and data ownership are core product requirements—not nice-to-haves.
Originally published at bdovenbird.com