- I store edits in SQLite as an outbox, not in memory.
- I use per-row `updated_at` + `deleted_at` for merges.
- I ship a single `pull -> push -> ack` sync loop.
- I fixed a brutal “ghost reappearing sets” bug.
## Context
I’m building a React Native fitness app. Fast logging. Like 5 seconds per set.
Offline isn’t optional. Gyms have dead zones. Airplane mode happens. And the app still needs to work.
I started with “just write to SQLite locally, sync later.” Sounds fine. Then I hit conflicts.
Same workout edited on two devices. A set deleted on my phone, then it came back after sync. Ghost data. Brutal.
What finally worked: treat every local change as a durable event (outbox), make server merges deterministic, and never “delete” without a tombstone.
This post is the exact pattern I use in Expo + SQLite.
## 1) I model my tables for merges. Not for vibes.
I spent 4 hours trying to merge based on “latest workout updated.” Most of it was wrong.
Merges need row-level metadata. Every row.
So I added:
- `id` as a UUID from the client
- `updated_at` as an integer ms timestamp
- `deleted_at` as a nullable integer ms timestamp (tombstone)
That’s it. No fancy CRDT.
Here’s the schema. Copy/paste.
```ts
// db/schema.ts
import * as SQLite from "expo-sqlite";

export async function initDb(db: SQLite.SQLiteDatabase) {
  // WAL helps with concurrent reads while writing.
  await db.execAsync(`PRAGMA journal_mode = WAL;`);
  await db.execAsync(`PRAGMA foreign_keys = ON;`);

  await db.execAsync(`
    CREATE TABLE IF NOT EXISTS sets (
      id TEXT PRIMARY KEY,
      workout_id TEXT NOT NULL,
      exercise_id TEXT NOT NULL,
      reps INTEGER NOT NULL,
      weight REAL NOT NULL,
      updated_at INTEGER NOT NULL,
      deleted_at INTEGER
    );
    CREATE INDEX IF NOT EXISTS idx_sets_workout ON sets(workout_id);
    CREATE INDEX IF NOT EXISTS idx_sets_updated ON sets(updated_at);

    CREATE TABLE IF NOT EXISTS outbox (
      id TEXT PRIMARY KEY,
      entity TEXT NOT NULL,      -- e.g. 'sets'
      entity_id TEXT NOT NULL,   -- row id
      op TEXT NOT NULL,          -- 'upsert' | 'delete'
      payload TEXT NOT NULL,     -- JSON string
      created_at INTEGER NOT NULL,
      attempts INTEGER NOT NULL DEFAULT 0,
      last_error TEXT
    );
    CREATE INDEX IF NOT EXISTS idx_outbox_created ON outbox(created_at);
  `);
}
```
The tombstone is the whole trick.
If I hard-delete locally, I’ve got nothing to sync. Then the server sends the row back on the next pull. That’s how I got ghost sets.
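Tombstones also change the read path: every query that feeds the UI has to filter them out (in SQL, a `WHERE deleted_at IS NULL` clause). A minimal sketch of that rule as a pure helper — `visibleSets` is a hypothetical name for illustration, not a function from the post's codebase:

```ts
// Hypothetical read-side helper: tombstoned rows stay in the table
// (so the delete can sync), but they never reach the UI.
type Tombstoned = { id: string; deleted_at: number | null };

function visibleSets<T extends Tombstoned>(rows: T[]): T[] {
  return rows.filter((r) => r.deleted_at === null);
}

// Example: "b" was deleted locally, so it is stored but hidden.
const rows = [
  { id: "a", deleted_at: null },
  { id: "b", deleted_at: 1_700_000_000_000 },
];
const shown = visibleSets(rows); // only "a" remains visible
```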
## 2) I write locally first. Then I enqueue.
My UX rule: the UI never waits for the network.
So every action does:
1) mutate SQLite
2) enqueue an outbox item
3) update the UI from SQLite (or optimistic state that’s backed by SQLite)
I keep it boring.
```ts
// db/sets.ts
import * as SQLite from "expo-sqlite";
import { randomUUID } from "expo-crypto";

type SetRow = {
  id: string;
  workout_id: string;
  exercise_id: string;
  reps: number;
  weight: number;
  updated_at: number;
  deleted_at: number | null;
};

export async function upsertSet(
  db: SQLite.SQLiteDatabase,
  input: Omit<SetRow, "updated_at" | "deleted_at"> & { updated_at?: number }
) {
  const now = input.updated_at ?? Date.now();
  const row: SetRow = { ...input, updated_at: now, deleted_at: null };

  await db.runAsync(
    `INSERT INTO sets (id, workout_id, exercise_id, reps, weight, updated_at, deleted_at)
     VALUES (?, ?, ?, ?, ?, ?, NULL)
     ON CONFLICT(id) DO UPDATE SET
       workout_id=excluded.workout_id,
       exercise_id=excluded.exercise_id,
       reps=excluded.reps,
       weight=excluded.weight,
       updated_at=excluded.updated_at,
       deleted_at=NULL`,
    [row.id, row.workout_id, row.exercise_id, row.reps, row.weight, row.updated_at]
  );

  await enqueueOutbox(db, {
    entity: "sets",
    entity_id: row.id,
    op: "upsert",
    payload: row,
  });

  return row;
}

export async function deleteSet(db: SQLite.SQLiteDatabase, setId: string) {
  const now = Date.now();

  // Tombstone. Don't hard delete.
  await db.runAsync(
    `UPDATE sets SET deleted_at = ?, updated_at = ? WHERE id = ?`,
    [now, now, setId]
  );

  const payload = { id: setId, deleted_at: now, updated_at: now };

  await enqueueOutbox(db, {
    entity: "sets",
    entity_id: setId,
    op: "delete",
    payload,
  });
}

async function enqueueOutbox(
  db: SQLite.SQLiteDatabase,
  item: { entity: string; entity_id: string; op: "upsert" | "delete"; payload: unknown }
) {
  const id = randomUUID();
  await db.runAsync(
    `INSERT INTO outbox (id, entity, entity_id, op, payload, created_at)
     VALUES (?, ?, ?, ?, ?, ?)`,
    [id, item.entity, item.entity_id, item.op, JSON.stringify(item.payload), Date.now()]
  );
}
```
One thing that bit me — I originally enqueued first, then wrote the row.
Bad idea.
If the app crashes between those steps, I’m syncing a payload for a row that doesn’t exist locally. Now I always mutate first, then enqueue.
## 3) I sync in one direction at a time
I tried “push whenever online” plus “pull on app start” plus “pull every 30 seconds.”
It worked. Until it didn’t.
I got interleaving.
- A pull overwrote a local edit.
- Then a push sent the overwritten value back.
So I made a single loop. One mutex.
Order matters:
1) pull remote changes since cursor
2) apply them locally (merge rules)
3) push outbox in order
4) ack and delete outbox items
Here’s the sync loop. It’s small.
```ts
// sync/syncNow.ts
import * as SQLite from "expo-sqlite";

let syncInFlight: Promise<void> | null = null;

export function syncNow(db: SQLite.SQLiteDatabase) {
  if (!syncInFlight) {
    syncInFlight = (async () => {
      try {
        await pull(db);
        await push(db);
      } finally {
        syncInFlight = null;
      }
    })();
  }
  return syncInFlight;
}

async function pull(db: SQLite.SQLiteDatabase) {
  const cursor = await getCursor(db);

  const res = await fetch("https://example.com/api/sync/pull", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ cursor }),
  });
  if (!res.ok) throw new Error(`pull failed: ${res.status}`);

  const data: { nextCursor: string; sets: any[] } = await res.json();

  await db.withTransactionAsync(async () => {
    for (const remote of data.sets) {
      await mergeRemoteSet(db, remote);
    }
    await setCursor(db, data.nextCursor);
  });
}

async function push(db: SQLite.SQLiteDatabase) {
  const items = await db.getAllAsync<{
    id: string;
    entity: string;
    entity_id: string;
    op: string;
    payload: string;
  }>(
    `SELECT id, entity, entity_id, op, payload
     FROM outbox
     ORDER BY created_at ASC
     LIMIT 100`
  );
  if (items.length === 0) return;

  const res = await fetch("https://example.com/api/sync/push", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ items: items.map((i) => ({ ...i, payload: JSON.parse(i.payload) })) }),
  });
  if (!res.ok) throw new Error(`push failed: ${res.status}`);

  const data: { ackIds: string[] } = await res.json();

  await db.withTransactionAsync(async () => {
    for (const id of data.ackIds) {
      await db.runAsync(`DELETE FROM outbox WHERE id = ?`, [id]);
    }
  });
}

async function getCursor(db: SQLite.SQLiteDatabase) {
  await db.execAsync(`CREATE TABLE IF NOT EXISTS sync_state (k TEXT PRIMARY KEY, v TEXT NOT NULL);`);
  const row = await db.getFirstAsync<{ v: string }>(`SELECT v FROM sync_state WHERE k = 'cursor'`);
  return row?.v ?? "0";
}

async function setCursor(db: SQLite.SQLiteDatabase, cursor: string) {
  await db.runAsync(
    `INSERT INTO sync_state (k, v) VALUES ('cursor', ?)
     ON CONFLICT(k) DO UPDATE SET v=excluded.v`,
    [cursor]
  );
}

async function mergeRemoteSet(db: SQLite.SQLiteDatabase, remote: any) {
  // Implemented in the next section.
  // Keeping this file readable.
  await import("./merge").then((m) => m.mergeRemoteSet(db, remote));
}
```
Yep, the dynamic `import("./merge")` is a little weird.
But it kept my sync file from turning into a 300-line blob while iterating.
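One thing the push loop above doesn't use yet: the `attempts` and `last_error` columns the outbox schema already defines. A retry delay could be derived from `attempts` with capped exponential backoff — a sketch, with base and cap values that are arbitrary assumptions, not tuned numbers:

```ts
// Capped exponential backoff derived from the outbox `attempts` counter.
// base 1s and cap 60s are arbitrary starting points, not tuned values.
function retryDelayMs(attempts: number, baseMs = 1_000, capMs = 60_000): number {
  return Math.min(capMs, baseMs * 2 ** attempts);
}

// retryDelayMs(0) -> 1000, retryDelayMs(3) -> 8000, retryDelayMs(10) -> 60000 (capped)
```

On a failed push, the loop would bump `attempts`, record `last_error`, and skip the item until its delay has elapsed instead of hammering the server.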
## 4) My merge rule is dumb. That’s why it works.
I don’t try to merge reps/weight field-by-field.
I do last-write-wins at the row level.
Rule:
- compare `updated_at`
- apply the newer row
- but keep tombstones
The only nuance: don’t resurrect deleted rows.
If remote has `deleted_at != null`, it wins even if local was edited afterward. That sounds harsh, but it matches what users expect: delete means delete.
Here’s the merge.
```ts
// sync/merge.ts
import * as SQLite from "expo-sqlite";

type RemoteSet = {
  id: string;
  workout_id: string;
  exercise_id: string;
  reps: number;
  weight: number;
  updated_at: number;
  deleted_at: number | null;
};

export async function mergeRemoteSet(db: SQLite.SQLiteDatabase, remote: RemoteSet) {
  const local = await db.getFirstAsync<{
    updated_at: number;
    deleted_at: number | null;
  }>(`SELECT updated_at, deleted_at FROM sets WHERE id = ?`, [remote.id]);

  if (!local) {
    await db.runAsync(
      `INSERT INTO sets (id, workout_id, exercise_id, reps, weight, updated_at, deleted_at)
       VALUES (?, ?, ?, ?, ?, ?, ?)`,
      [
        remote.id,
        remote.workout_id,
        remote.exercise_id,
        remote.reps,
        remote.weight,
        remote.updated_at,
        remote.deleted_at,
      ]
    );
    return;
  }

  // If remote is a tombstone, apply it.
  if (remote.deleted_at !== null) {
    // Avoid resurrecting by accident.
    if (local.deleted_at === null || remote.updated_at >= local.updated_at) {
      await db.runAsync(
        `UPDATE sets
         SET deleted_at = ?, updated_at = ?
         WHERE id = ?`,
        [remote.deleted_at, remote.updated_at, remote.id]
      );
    }
    return;
  }

  // Local tombstone wins over remote upsert.
  if (local.deleted_at !== null) return;

  // Last write wins.
  if (remote.updated_at > local.updated_at) {
    await db.runAsync(
      `UPDATE sets SET
         workout_id = ?,
         exercise_id = ?,
         reps = ?,
         weight = ?,
         updated_at = ?,
         deleted_at = NULL
       WHERE id = ?`,
      [
        remote.workout_id,
        remote.exercise_id,
        remote.reps,
        remote.weight,
        remote.updated_at,
        remote.id,
      ]
    );
  }
}
```
The bug I had: I only compared `updated_at`.
So a local delete (`deleted_at` set) could lose to an older remote upsert, because the remote row still existed and my pull code “helpfully” inserted it back.
Once I treated tombstones as first-class data, the ghost stopped.
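The fix is easy to unit-test if the merge rules are expressed as a pure decision function over row metadata — the same rules as `mergeRemoteSet` above, just without the database. A sketch; `decide` is a hypothetical name for illustration:

```ts
type Meta = { updated_at: number; deleted_at: number | null };

// Mirrors the merge rules: tombstones are checked first, then last-write-wins.
function decide(local: Meta | null, remote: Meta): "apply_remote" | "keep_local" {
  if (!local) return "apply_remote"; // unknown row: insert it
  if (remote.deleted_at !== null) {
    // Remote tombstone applies unless local holds a strictly newer state.
    return local.deleted_at === null || remote.updated_at >= local.updated_at
      ? "apply_remote"
      : "keep_local";
  }
  if (local.deleted_at !== null) return "keep_local"; // local delete beats remote upsert
  return remote.updated_at > local.updated_at ? "apply_remote" : "keep_local";
}

// The ghost scenario: local tombstone at t=200 vs a stale remote upsert from t=100.
// Under these rules the stale upsert loses and the set stays deleted.
const ghost = decide(
  { updated_at: 200, deleted_at: 200 },
  { updated_at: 100, deleted_at: null }
); // -> "keep_local"
```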
## Results
I ran this on two physical devices (Pixel 7 + iPhone 14). Same account. Same workout. Airplane mode toggles.
Before:
- Deleting a set on device A could reappear after a pull on device B.
- I reproduced it 7 times in 20 minutes.
- Logs showed the same `id` getting inserted again.
After:
- 0 ghost sets across 42 delete/edit scenarios.
- Outbox stayed small: 0 to 18 rows during a long session.
- Sync time for a typical workout stayed under 1 second on Wi‑Fi (pull + push).
## Key takeaways
- Don’t hard-delete in offline apps. Use `deleted_at` tombstones.
- Store every change in a SQLite outbox. Durable beats “I’ll retry later.”
- Sync in one loop with a mutex: `pull -> push -> ack`.
- Merge at the row level first. Field-level merges can wait.
- If you only fix one thing: make deletes conflict-safe.
## Closing
If you’re doing offline-first in React Native right now — what merge rule do you ship for deletes?
Do you let tombstones always win, or do you allow a later edit to resurrect a row?