
Sathish

React Native offline-first: conflict rules in SQLite

  • I stopped “last write wins” from nuking offline edits.
  • I model conflicts with base_rev + rev in SQLite.
  • I sync with 1 endpoint: pull then push.
  • I ship deterministic merges for workout sets.

Context

My fitness app has one job.
Log a set in ~5 seconds.
No spinner.
No “retry later”.

Offline-first forced me into SQLite.
AsyncStorage died fast.
I need queries like “last 12 sets for this exercise” in under 100ms.
And I need sync.

I shipped a naive sync first.
It was brutal.
Two devices editing the same workout would randomly overwrite each other.
Sometimes it looked fine.
Then you’d refresh.
Poof.
Half the sets gone.

So I rebuilt it around conflict rules.
Not “hope”.
Rules I can test.
Rules SQLite can enforce.

1) I store revisions per row. Not timestamps.

Timestamps lied to me.
Device clocks drift.
Also, two updates can share the same millisecond.
And I don’t want “latest” anyway.
I want “what was this row based on?”

So every row gets:

  • rev (server-assigned integer)
  • base_rev (what the client last saw)
  • updated_at (still useful for UI)

On create, rev = 0.
Server assigns real rev later.
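To make that lifecycle concrete, here's a minimal in-memory sketch of the server side of the contract (hypothetical — my real backend is Postgres, shown later): creates get rev 1, and an update is only accepted when the client's base_rev matches the server's current rev.

```typescript
// Hypothetical in-memory model of server-side revision assignment.
// Not the real backend; it just shows the rev/base_rev contract.
type ServerRow = { reps: number; weight: number; rev: number };

const store = new Map<string, ServerRow>();

function applyUpsert(
  id: string,
  baseRev: number,
  vals: { reps: number; weight: number }
): { ok: true; rev: number } | { ok: false } {
  const cur = store.get(id);
  if (!cur) {
    // First time the server sees this row: assign rev 1.
    store.set(id, { ...vals, rev: 1 });
    return { ok: true, rev: 1 };
  }
  if (cur.rev !== baseRev) {
    // Client edited against a stale revision: conflict.
    return { ok: false };
  }
  store.set(id, { ...vals, rev: cur.rev + 1 });
  return { ok: true, rev: cur.rev + 1 };
}
```

A client that created the row offline pushes with base_rev = 0; the server answers with rev 1 and the client stores it as the new base.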

Here’s the SQLite schema I ended up keeping.
It’s small.
But it fixed the mental model.

-- schema.sql (SQLite)
PRAGMA foreign_keys = ON;

CREATE TABLE IF NOT EXISTS sets (
  id TEXT PRIMARY KEY,              -- uuid
  workout_id TEXT NOT NULL,
  exercise_id TEXT NOT NULL,
  reps INTEGER NOT NULL,
  weight REAL NOT NULL,
  created_at INTEGER NOT NULL,       -- unix ms
  updated_at INTEGER NOT NULL,       -- unix ms

  rev INTEGER NOT NULL DEFAULT 0,    -- server revision, 0 = not on server yet
  base_rev INTEGER NOT NULL DEFAULT 0,
  deleted INTEGER NOT NULL DEFAULT 0,

  CHECK (deleted IN (0, 1))
);

CREATE INDEX IF NOT EXISTS idx_sets_workout
ON sets(workout_id, deleted, updated_at);

CREATE TABLE IF NOT EXISTS outbox (
  id TEXT PRIMARY KEY,              -- uuid for the op
  entity TEXT NOT NULL,             -- 'sets'
  entity_id TEXT NOT NULL,          -- sets.id
  op TEXT NOT NULL,                 -- 'upsert' | 'delete'
  payload TEXT NOT NULL,            -- JSON string
  base_rev INTEGER NOT NULL,
  created_at INTEGER NOT NULL
);

CREATE INDEX IF NOT EXISTS idx_outbox_created
ON outbox(created_at);

-- Sync cursor. syncNow() reads/writes last_server_rev here.
CREATE TABLE IF NOT EXISTS meta (
  key TEXT PRIMARY KEY,
  value TEXT NOT NULL
);

That base_rev in the outbox is the whole point.
It’s the receipt.
It tells the server what I edited against.

2) Every local write also writes an outbox op

I wanted “offline-first”.
I accidentally built “offline-only” for a day.
Because my UI wrote to SQLite.
But I forgot to queue ops.
Sync ran.
Nothing pushed.
Looked fine locally.
Server stayed stale.

Now I don’t allow a write without an outbox row.
Same transaction.
All-or-nothing.

I’m using Expo’s SQLite.
This code runs.
Copy-paste.
It’s the exact pattern I use for set logging.

// db/upsertSet.ts
import * as SQLite from 'expo-sqlite';
import { randomUUID } from 'expo-crypto';

const db = SQLite.openDatabaseSync('gym.db');

type SetRow = {
  id?: string;
  workout_id: string;
  exercise_id: string;
  reps: number;
  weight: number;
};

export function upsertSetLocal(input: SetRow) {
  const now = Date.now();
  const id = input.id ?? randomUUID();

  db.withTransactionSync(() => {
    // Read current base_rev so the server can detect conflicts.
    const cur = db.getFirstSync<{ rev: number }>(
      'SELECT rev FROM sets WHERE id = ? LIMIT 1',
      [id]
    );
    const baseRev = cur?.rev ?? 0;

    db.runSync(
      `INSERT INTO sets (id, workout_id, exercise_id, reps, weight, created_at, updated_at, rev, base_rev, deleted)
       VALUES (?, ?, ?, ?, ?, ?, ?, COALESCE((SELECT rev FROM sets WHERE id = ?), 0), ?, 0)
       ON CONFLICT(id) DO UPDATE SET
         reps = excluded.reps,
         weight = excluded.weight,
         updated_at = excluded.updated_at,
         base_rev = excluded.base_rev,
         deleted = 0`,
      [
        id,
        input.workout_id,
        input.exercise_id,
        input.reps,
        input.weight,
        now,
        now,
        id,
        baseRev
      ]
    );

    db.runSync(
      `INSERT INTO outbox (id, entity, entity_id, op, payload, base_rev, created_at)
       VALUES (?, 'sets', ?, 'upsert', ?, ?, ?)`,
      [
        randomUUID(),
        id,
        JSON.stringify({ ...input, id, updated_at: now }),
        baseRev,
        now
      ]
    );
  });

  return id;
}

Two things I learned the hard way:
1) If you don’t do this in a transaction, you’ll eventually enqueue an op without the row (or vice versa).
2) Don’t build the payload from the UI state. Build it from what you wrote.

3) Sync is pull-then-push. Always in that order.

I tried push-first.
Bad call.
If you push against an old server state, you create conflicts you didn’t need.

My sync loop is:
1) Pull server changes since last_server_rev.
2) Apply them locally.
3) Push outbox ops.
4) Update last_server_rev.

I store last_server_rev in a tiny meta table.
Could be SecureStore.
I kept it in SQLite so it survives reinstalls with backups.

Here’s the sync function.
It’s trimmed.
But it’s real.

// sync/syncNow.ts
import * as SQLite from 'expo-sqlite';

const db = SQLite.openDatabaseSync('gym.db');

type PullResponse = {
  latestRev: number;
  changes: Array<{
    id: string;
    workout_id: string;
    exercise_id: string;
    reps: number;
    weight: number;
    updated_at: number;
    rev: number;
    deleted: number;
  }>;
};

async function apiPost<T>(path: string, body: unknown): Promise<T> {
  const res = await fetch(`https://api.example.com${path}`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(body)
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} ${await res.text()}`);
  return res.json() as Promise<T>;
}

export async function syncNow(userId: string) {
  const meta = db.getFirstSync<{ value: string }>(
    'SELECT value FROM meta WHERE key = ? LIMIT 1',
    ['last_server_rev']
  );
  const sinceRev = meta ? Number(meta.value) : 0;

  const pulled = await apiPost<PullResponse>('/sync/pull', { userId, sinceRev });

  // Apply server changes.
  db.withTransactionSync(() => {
    for (const c of pulled.changes) {
      db.runSync(
        `INSERT INTO sets (id, workout_id, exercise_id, reps, weight, created_at, updated_at, rev, base_rev, deleted)
         VALUES (?, ?, ?, ?, ?, COALESCE((SELECT created_at FROM sets WHERE id = ?), ?), ?, ?, ?, ?)
         ON CONFLICT(id) DO UPDATE SET
           workout_id = excluded.workout_id,
           exercise_id = excluded.exercise_id,
           reps = excluded.reps,
           weight = excluded.weight,
           updated_at = excluded.updated_at,
           rev = excluded.rev,
           deleted = excluded.deleted`,
        [
          c.id,
          c.workout_id,
          c.exercise_id,
          c.reps,
          c.weight,
          c.id,
          c.updated_at,
          c.updated_at,
          c.rev,
          c.rev,
          c.deleted
        ]
      );
    }

    db.runSync(
      `INSERT INTO meta (key, value)
       VALUES ('last_server_rev', ?)
       ON CONFLICT(key) DO UPDATE SET value = excluded.value`,
      [String(pulled.latestRev)]
    );
  });

  // Push after pull.
  await pushOutbox(userId);
}

async function pushOutbox(userId: string) {
  const ops = db.getAllSync<{ id: string; entity_id: string; op: string; payload: string; base_rev: number }>(
    'SELECT id, entity_id, op, payload, base_rev FROM outbox ORDER BY created_at ASC LIMIT 50'
  );
  if (ops.length === 0) return;

  const res = await apiPost<{ acked: string[]; conflicts: string[] }>('/sync/push', {
    userId,
    ops: ops.map(o => ({ ...o, payload: JSON.parse(o.payload) }))
  });

  db.withTransactionSync(() => {
    for (const ackId of res.acked) {
      db.runSync('DELETE FROM outbox WHERE id = ?', [ackId]);
    }
  });

  // Conflicts handled in the next section.
}

Real bug I hit:
HTTP 413 Payload Too Large.
Because I tried to push 2,000 ops in one request.
Now it’s LIMIT 50.
Boring.
Works.

4) Conflicts: I don’t “merge” everything. I pick winners.

I wanted fancy merges.
Like Git.
For workout sets.
Why.

Sets are append-heavy.
Conflicts mostly happen on edits.
Like changing weight from 185 to 195.
If two devices edit the same set, I don’t want an average.
I want a deterministic rule.

My rule:

  • If the server row rev equals the op’s base_rev, accept.
  • Else, conflict.
  • For conflicts, keep the server version.
  • But keep the client edit as a new set row (duplicate) with a note.

That last part sounds weird.
It is.
But it avoids silent data loss.
And it’s visible in the UI.
I’d rather show “(from offline edit)” than delete someone’s work.
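To see the rule play out end to end, here's a pure-TypeScript simulation (a sketch, not my production code) of two devices editing the same set from the same base revision: the first push wins, the second is detected as stale and materialized as a copy instead of being dropped.

```typescript
// Two devices edit the same set from base_rev 3. The first push is accepted
// (rev becomes 4); the second is stale, so the server row is kept and the
// losing edit becomes a new 'conflict_copy' row. (In the real app the copy
// syncs up through the outbox with rev 0; here it's written directly.)
type SetVals = { reps: number; weight: number };
type Row = SetVals & { rev: number; note?: string };

const server: Record<string, Row> = {
  'set-1': { reps: 5, weight: 185, rev: 3 }
};

function push(id: string, baseRev: number, vals: SetVals): 'acked' | 'conflict' {
  const cur = server[id];
  if (cur && cur.rev === baseRev) {
    server[id] = { ...vals, rev: cur.rev + 1 };
    return 'acked';
  }
  return 'conflict';
}

function materialize(id: string, losing: SetVals): void {
  const copyId = `${id}-conflict-copy`; // real code uses a fresh uuid
  server[copyId] = { ...losing, rev: 1, note: 'conflict_copy' };
}

// Device A and device B both saw rev 3.
const a = push('set-1', 3, { reps: 5, weight: 195 });
const b = push('set-1', 3, { reps: 8, weight: 185 });
if (b === 'conflict') materialize('set-1', { reps: 8, weight: 185 });
```

Same inputs, same order, same outcome, every time. That's the whole point.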

This is the server-side SQL check.
Postgres in my case.
But the logic is universal.

-- Server-side (Postgres) apply op with optimistic concurrency
-- Table: sets(id pk, reps, weight, rev int, deleted bool, updated_at bigint)

-- Given: :id, :reps, :weight, :base_rev

UPDATE sets
SET reps = :reps,
    weight = :weight,
    updated_at = :updated_at,
    rev = rev + 1,
    deleted = false
WHERE id = :id
  AND rev = :base_rev
  AND deleted = false;

-- If rowCount = 0 => conflict (someone else changed it)

And here’s how I handle the conflict on the client.
This part took me 4 hours.
Most of it was wrong.
I kept trying to “auto-resolve”.
It just made it harder to reason about.

// sync/handleConflicts.ts
import * as SQLite from 'expo-sqlite';
import { randomUUID } from 'expo-crypto';

const db = SQLite.openDatabaseSync('gym.db');

type ConflictPayload = {
  workout_id: string;
  exercise_id: string;
  reps: number;
  weight: number;
};

export function materializeConflictAsNewSet(entityId: string) {
  // Read the client's values from the pending outbox op, not from `sets`:
  // by this point the pull step has already overwritten the local row
  // with the server's version.
  const op = db.getFirstSync<{ id: string; payload: string }>(
    `SELECT id, payload FROM outbox
     WHERE entity_id = ? AND op = 'upsert'
     ORDER BY created_at DESC LIMIT 1`,
    [entityId]
  );
  if (!op) return;
  const row = JSON.parse(op.payload) as ConflictPayload;

  const now = Date.now();
  const newId = randomUUID();

  db.withTransactionSync(() => {
    // Drop the conflicted op so it isn't re-pushed against the server row
    // on every future sync.
    db.runSync(
      "DELETE FROM outbox WHERE entity_id = ? AND op = 'upsert'",
      [entityId]
    );

    // Keep server row as-is. Create a new local row with the client's values.
    // rev=0 so it will be treated as a new create on next sync.
    db.runSync(
      `INSERT INTO sets (id, workout_id, exercise_id, reps, weight, created_at, updated_at, rev, base_rev, deleted)
       VALUES (?, ?, ?, ?, ?, ?, ?, 0, 0, 0)`,
      [
        newId,
        row.workout_id,
        row.exercise_id,
        row.reps,
        row.weight,
        now,
        now
      ]
    );

    db.runSync(
      `INSERT INTO outbox (id, entity, entity_id, op, payload, base_rev, created_at)
       VALUES (?, 'sets', ?, 'upsert', ?, 0, ?)`,
      [
        randomUUID(),
        newId,
        JSON.stringify({
          id: newId,
          workout_id: row.workout_id,
          exercise_id: row.exercise_id,
          reps: row.reps,
          weight: row.weight,
          updated_at: now,
          note: 'conflict_copy'
        }),
        now
      ]
    );
  });
}

Is it perfect?
No.
But it’s honest.
No silent overwrite.
No “maybe it synced”.

And the UI stays fast.
Because conflicts are rare.
And when they happen, I can show a banner.

Results

Before this, I could reproduce data loss in 6 steps.
Two devices.
Airplane mode.
Edit the same set.
Sync both.
One edit vanished.
Every time.

After the revision + base revision change, I ran 30 conflict simulations.
15 “same set edited on both devices”.
15 “delete on one, edit on the other”.
I got 30 deterministic outcomes.
Zero silent overwrites.

Perf stayed fine.
On a 3-year-old Android device, upsertSetLocal() took 4–12ms per log in my profiling.
The UI stayed under my 100ms budget.

Key takeaways

  • Don’t use timestamps for conflict resolution. Use integer revisions.
  • Every local mutation needs an outbox op in the same transaction.
  • Sync order matters: pull first, then push.
  • “Server wins” is acceptable if you preserve the client edit visibly.
  • Keep your rules testable. If you can’t write a 6-step repro, you’re guessing.

Closing

I’m still not happy with the conflict UI.
It’s functional.
It’s not pretty.

If you’ve shipped offline-first sync in React Native: when you hit a conflict, do you surface a “choose version” screen, or do you do deterministic rules like I’m doing here?
