My fitness app needs to log sets in under 5 seconds—even in a gym basement with zero signal. I assumed “offline-first” was just “store it locally and sync later”… until I hit duplicates, lost updates, and weird race conditions. Here’s the SQLite sync pattern I landed on in Expo that keeps the UI fast and the data consistent.
Last updated: 2026-01-28 (Builder Day 30)
TL;DR
- I use SQLite as the source of truth and treat the network as an async side-effect.
- Every local change becomes an outbox event with an idempotency key.
- Sync is a small state machine: push outbox → pull changes → resolve conflicts.
- This kept set logging snappy (<100ms writes) while remaining 100% offline-capable.
Context (why this matters)
I’m building a mobile workout tracker (React Native + Expo) where the whole point is speed: you should be able to log a set in ~5 seconds.
Gyms are the worst networking environment. Spotty LTE, no Wi‑Fi, airplane mode… you name it. So I went local-first: every tap writes to SQLite immediately. The hard part wasn’t storing data locally—it was syncing without creating duplicates or overwriting newer edits.
In this post I’ll show the exact pattern I’m using: schema, outbox events, a “single sync loop”, and how I avoid the classic “sync storms” when the app wakes up.
1) Design the schema: data + outbox + meta
I keep three pieces:
1) Your actual domain tables (workouts, sets, PRs, etc.)
2) An outbox table for pending mutations
3) A sync_meta table to track the last server cursor
The outbox is the key. Instead of “syncing tables”, I sync a sequence of events.
// db/schema.ts
import * as SQLite from 'expo-sqlite';
export const db = SQLite.openDatabaseSync('gym.db');
export function initSchema() {
db.execSync(`
PRAGMA journal_mode = WAL;
PRAGMA foreign_keys = ON;
CREATE TABLE IF NOT EXISTS workout_sets (
id TEXT PRIMARY KEY NOT NULL,
workout_id TEXT NOT NULL,
exercise_id TEXT NOT NULL,
reps INTEGER NOT NULL,
weight REAL NOT NULL,
performed_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);
-- Outbox of local changes to push to server
CREATE TABLE IF NOT EXISTS outbox (
id TEXT PRIMARY KEY NOT NULL,
entity TEXT NOT NULL, -- e.g. 'workout_sets'
entity_id TEXT NOT NULL, -- row id
op TEXT NOT NULL, -- 'upsert' | 'delete'
payload TEXT NOT NULL, -- JSON string
idempotency_key TEXT NOT NULL, -- stable unique key
created_at INTEGER NOT NULL,
attempts INTEGER NOT NULL DEFAULT 0,
last_error TEXT
);
CREATE UNIQUE INDEX IF NOT EXISTS idx_outbox_idem
ON outbox(idempotency_key);
-- Store last pull cursor from server
CREATE TABLE IF NOT EXISTS sync_meta (
key TEXT PRIMARY KEY NOT NULL,
value TEXT NOT NULL
);
`);
}
A couple notes:
- WAL mode helps concurrency (readers don’t block the writer, and vice versa).
- The unique index on idempotency_key prevents accidental duplicate events.
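One more practical note: I call initSchema() once at app startup, before anything can query the database. Exactly where depends on your entry point; here’s a minimal sketch:
// App entry (sketch): run schema setup before the first render can query.
import { initSchema } from './db/schema';

// openDatabaseSync/execSync are synchronous, so the tables exist
// before the first component renders and reads from SQLite.
initSchema();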
Next I’ll show how I write data locally + enqueue an outbox event in one transaction.
2) Write locally first, then enqueue an outbox event
My rule: UI actions never wait for the network.
When you log a set, the app does two SQLite writes inside a single transaction:
- Upsert the set row
- Insert an outbox event that describes the mutation
This gives me “instant UI” plus a reliable queue for sync.
// db/mutations.ts
import { db } from './schema';
import { randomUUID } from 'expo-crypto';
type WorkoutSet = {
id: string;
workout_id: string;
exercise_id: string;
reps: number;
weight: number;
performed_at: number;
updated_at: number;
};
export function upsertWorkoutSet(input: Omit<WorkoutSet, 'updated_at'>) {
const now = Date.now();
const row: WorkoutSet = { ...input, updated_at: now };
// Stable idempotency key per logical write (row id + version).
// If this exact write is retried, INSERT OR IGNORE plus the unique
// index keeps it to a single outbox event.
const idempotencyKey = `workout_sets:upsert:${row.id}:${row.updated_at}`;
db.withTransactionSync(() => {
db.runSync(
`INSERT INTO workout_sets (id, workout_id, exercise_id, reps, weight, performed_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
reps=excluded.reps,
weight=excluded.weight,
performed_at=excluded.performed_at,
updated_at=excluded.updated_at;`,
[row.id, row.workout_id, row.exercise_id, row.reps, row.weight, row.performed_at, row.updated_at]
);
db.runSync(
`INSERT OR IGNORE INTO outbox (id, entity, entity_id, op, payload, idempotency_key, created_at)
VALUES (?, ?, ?, ?, ?, ?, ?);`,
[
randomUUID(),
'workout_sets',
row.id,
'upsert',
JSON.stringify(row),
idempotencyKey,
now
]
);
});
return row;
}
Pitfalls I hit here:
- If you don’t use a transaction, you can end up with a set row written but no outbox event (or vice versa).
- If you don’t have an idempotency strategy, you’ll create duplicates when the app retries.
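For completeness, deletes go through the exact same funnel: one transaction that changes the row and enqueues an outbox event. This helper isn’t shown above, so treat it as a sketch of the pattern rather than my exact code:
// db/mutations.ts (continued, sketch): deletes mirror the upsert shape.
export function deleteWorkoutSet(id: string) {
  const now = Date.now();
  const idempotencyKey = `workout_sets:delete:${id}:${now}`;
  db.withTransactionSync(() => {
    db.runSync(`DELETE FROM workout_sets WHERE id = ?;`, [id]);
    db.runSync(
      `INSERT OR IGNORE INTO outbox (id, entity, entity_id, op, payload, idempotency_key, created_at)
       VALUES (?, ?, ?, ?, ?, ?, ?);`,
      // Payload keeps updated_at so last-write-wins applies to deletes too.
      [randomUUID(), 'workout_sets', id, 'delete', JSON.stringify({ id, updated_at: now }), idempotencyKey, now]
    );
  });
}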
Next we need a sync engine that’s boring and predictable.
3) Build a single sync loop (and prevent sync storms)
My first attempt was… embarrassing. I triggered sync from everywhere: app resume, screen focus, button press, network change. It caused concurrent sync runs, duplicated pushes, and random “database is locked” errors.
Now I do one thing:
- A single SyncManager that runs at most one sync at a time.
- Any “sync requested” events just call requestSync().
// sync/SyncManager.ts
import NetInfo from '@react-native-community/netinfo';
type SyncFn = () => Promise<void>;
export class SyncManager {
private running = false;
private queued = false;
constructor(private syncOnce: SyncFn) {}
async requestSync() {
// Coalesce multiple requests into one extra run
if (this.running) {
this.queued = true;
return;
}
const net = await NetInfo.fetch();
if (!net.isConnected) return;
this.running = true;
try {
await this.syncOnce();
} finally {
this.running = false;
if (this.queued) {
this.queued = false;
// Run one more time to catch anything new
await this.requestSync();
}
}
}
}
This one change killed a whole class of bugs.
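For reference, here’s how all those former trigger points now funnel into one call. The listener wiring below is a sketch (where you register it depends on your app shell), and syncManager is the singleton I construct after section 5:
// sync/triggers.ts (sketch): every “maybe sync now” signal calls requestSync().
import { AppState } from 'react-native';
import NetInfo from '@react-native-community/netinfo';
import { syncManager } from './index'; // singleton, construction shown later

export function registerSyncTriggers() {
  // App returns to the foreground: request a sync; duplicates coalesce.
  AppState.addEventListener('change', (state) => {
    if (state === 'active') void syncManager.requestSync();
  });
  // Connectivity comes back: same single entry point.
  NetInfo.addEventListener((state) => {
    if (state.isConnected) void syncManager.requestSync();
  });
}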
Next I’ll show what syncOnce() actually does: push outbox, then pull server changes.
4) Push outbox with idempotency + retries
Pushing outbox events is straightforward, but you need two safety rails:
- Idempotency keys sent to the server so retries don’t duplicate mutations
- Backoff / attempts so one bad payload doesn’t block the entire queue forever
Below is a minimal “push N events” implementation. The API shape is intentionally simple.
// sync/pushOutbox.ts
import { db } from '../db/schema';
type OutboxRow = {
id: string;
entity: string;
entity_id: string;
op: 'upsert' | 'delete';
payload: string;
idempotency_key: string;
attempts: number;
};
export async function pushOutbox(apiBaseUrl: string, batchSize = 25) {
const rows = db.getAllSync<OutboxRow>(
`SELECT id, entity, entity_id, op, payload, idempotency_key, attempts
FROM outbox
ORDER BY created_at ASC
LIMIT ?;`,
[batchSize]
);
for (const row of rows) {
try {
const res = await fetch(`${apiBaseUrl}/sync/push`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
// Server should dedupe based on this key
'Idempotency-Key': row.idempotency_key
},
body: JSON.stringify({
entity: row.entity,
op: row.op,
payload: JSON.parse(row.payload)
})
});
if (!res.ok) throw new Error(`Push failed: ${res.status}`);
// Success: remove from outbox
db.runSync(`DELETE FROM outbox WHERE id = ?;`, [row.id]);
} catch (err: any) {
// Failure: increment attempts and record error
db.runSync(
`UPDATE outbox
SET attempts = attempts + 1,
last_error = ?
WHERE id = ?;`,
[String(err?.message ?? err), row.id]
);
// If you're batching, you can either continue or break.
// I continue so one bad event doesn't block everything.
}
}
}
Common pitfalls:
- Don’t delete the outbox row until the server confirms.
- If you break on error, one bad event can block the queue forever.
- If your server doesn’t implement idempotency, retries will create duplicates.
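One gap in the snippet above: attempts is recorded but never acted on. A minimal way to use it is a cutoff in the SELECT, so a poison event eventually parks instead of retrying forever. The threshold below is illustrative, and parked rows need some recovery path (even just a debug screen):
// sync/pushOutbox.ts (sketch): skip events that have failed too many times.
const MAX_ATTEMPTS = 10; // illustrative cutoff

const rows = db.getAllSync<OutboxRow>(
  `SELECT id, entity, entity_id, op, payload, idempotency_key, attempts
   FROM outbox
   WHERE attempts < ?
   ORDER BY created_at ASC
   LIMIT ?;`,
  [MAX_ATTEMPTS, batchSize]
);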
Now we need the other half: pulling server changes without clobbering local edits.
5) Pull changes with a cursor (and handle conflicts simply)
For pulling, I use a cursor stored in sync_meta.
The server response looks like:
- nextCursor
- changes[] (each change has entity, op, payload, updated_at)
Conflict strategy (simple but effective for my use case):
- Last-write-wins by updated_at
- If the local row has a newer updated_at, I keep local and let it push later
// sync/pullChanges.ts
import { db } from '../db/schema';
type Change = {
entity: 'workout_sets';
op: 'upsert' | 'delete';
payload: any;
updated_at: number;
};
function getCursor(): string {
const row = db.getFirstSync<{ value: string }>(
`SELECT value FROM sync_meta WHERE key = 'cursor';`
);
return row?.value ?? '0';
}
function setCursor(cursor: string) {
db.runSync(
`INSERT INTO sync_meta (key, value)
VALUES ('cursor', ?)
ON CONFLICT(key) DO UPDATE SET value=excluded.value;`,
[cursor]
);
}
export async function pullChanges(apiBaseUrl: string) {
const cursor = getCursor();
const res = await fetch(`${apiBaseUrl}/sync/pull?cursor=${encodeURIComponent(cursor)}`);
if (!res.ok) throw new Error(`Pull failed: ${res.status}`);
const data: { nextCursor: string; changes: Change[] } = await res.json();
db.withTransactionSync(() => {
for (const change of data.changes) {
if (change.entity !== 'workout_sets') continue;
const incoming = change.payload as {
id: string;
reps: number;
weight: number;
performed_at: number;
updated_at: number;
workout_id: string;
exercise_id: string;
};
const local = db.getFirstSync<{ updated_at: number }>(
`SELECT updated_at FROM workout_sets WHERE id = ?;`,
[incoming.id]
);
// Conflict rule: keep the newest updated_at
if (local?.updated_at && local.updated_at > incoming.updated_at) {
continue; // local will be pushed later via outbox
}
if (change.op === 'delete') {
db.runSync(`DELETE FROM workout_sets WHERE id = ?;`, [incoming.id]);
} else {
db.runSync(
`INSERT INTO workout_sets (id, workout_id, exercise_id, reps, weight, performed_at, updated_at)
VALUES (?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
reps=excluded.reps,
weight=excluded.weight,
performed_at=excluded.performed_at,
updated_at=excluded.updated_at;`,
[
incoming.id,
incoming.workout_id,
incoming.exercise_id,
incoming.reps,
incoming.weight,
incoming.performed_at,
incoming.updated_at
]
);
}
}
setCursor(data.nextCursor);
});
}
What I like about this:
- It’s deterministic.
- It doesn’t require fancy CRDTs.
- It matches my product reality: most data is edited on one device at a time.
If you truly need multi-device collaborative editing, you’ll want a stronger strategy. But for a workout tracker, this is “good enough” and keeps the implementation small.
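Putting the halves together, syncOnce() is just push-then-pull, handed to the SyncManager from section 3. A minimal sketch; the base URL is a placeholder:
// sync/index.ts (sketch): the whole state machine is push outbox, then pull.
import { SyncManager } from './SyncManager';
import { pushOutbox } from './pushOutbox';
import { pullChanges } from './pullChanges';

const API_BASE_URL = 'https://api.example.com'; // placeholder

async function syncOnce() {
  await pushOutbox(API_BASE_URL); // local mutations go out first
  await pullChanges(API_BASE_URL); // then advance the server cursor
}

export const syncManager = new SyncManager(syncOnce);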
Results / outcome
After switching to this outbox + single sync loop approach, my app stopped doing weird stuff under bad connectivity.
- Local set logging is always immediate (SQLite write + outbox insert), and it consistently feels sub‑100ms.
- I can go fully offline for an entire workout and everything syncs later without duplicates.
- I reduced sync-related bugs a lot by ensuring only one sync runs at a time.
On my waitlist build (10 people so far), the main qualitative win is: nobody has reported “my set disappeared” since I moved to this pattern.
Key takeaways
- Make SQLite the source of truth; network sync should never block the UI.
- Use an outbox table and push events, not “table diffs”.
- Add idempotency keys or you’ll ship duplicates the moment retries happen.
- Guard against concurrent sync runs (one sync loop only).
- Start with a simple conflict rule (like last-write-wins) and upgrade only if you truly need more.
Closing (question for you)
If you’ve built offline-first in React Native, what sync strategy did you end up with—outbox events, full table replication, something else? Drop your approach (and your worst bug) in the comments.
If there’s interest, I’ll write a follow-up showing how I wire this into React state so the UI updates instantly without re-querying SQLite on every render.