- I stopped blocking the UI on SQLite writes.
- I log sets in-memory first, then flush.
- I use a tiny write queue + one transaction.
- I ship a “pending” state that never lies.
Context
My fitness app has one job during a workout.
Log a set in under 5 seconds.
I started with “just write to SQLite on button tap”.
Brutal.
Every tap waited on IO. Then React re-rendered. Then the list jumped.
Also offline-first means I can’t punt to the server.
SQLite is the source of truth.
But the UI can’t feel like a database client.
So I split it.
UI writes to memory instantly.
SQLite catches up in the background.
If SQLite fails, I show it.
No silent drops.
1) I treat SQLite like a sink, not my UI state
My first version did this:
Tap “+”. Await insert. Then query. Then render.
It “worked”.
It also felt like a 2012 Android app.
Now I keep a local in-memory log.
It’s the thing the list renders.
SQLite is the durable sink.
I model it with a clientId.
Always.
That ID exists before SQLite.
That’s the trick.
```typescript
// types.ts
export type SetRow = {
  id?: number;       // SQLite row id (after flush)
  clientId: string;  // stable id generated in UI
  workoutId: string;
  exerciseId: string;
  reps: number;
  weight: number;
  createdAt: number; // ms epoch
  pending: 0 | 1;    // UI + sync status
  error?: string;    // only in memory
};

export const makeClientId = () => {
  // No deps. Works in Expo.
  return `${Date.now()}-${Math.random().toString(16).slice(2)}`;
};
```
One thing that bit me — I tried to use SQLite autoincrement id as the list key.
That forces you to wait for SQLite.
So you lose.
clientId becomes my React key.
Stable. Instant.
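The list only ever reads that key. A minimal sketch of the extractor (the `KeyedSet` subset type is mine, inlined so this stands alone; in the screen it plugs into a `FlatList` as `keyExtractor`):

```typescript
// Hypothetical keyExtractor: rows are keyed by clientId, never by the
// SQLite rowid, so every row has a stable key before the flush ever runs.
type KeyedSet = { clientId: string; id?: number }; // subset of SetRow from types.ts

export const keyExtractor = (item: KeyedSet) => item.clientId;

// In the screen, roughly:
// <FlatList data={sets} keyExtractor={keyExtractor} renderItem={...} />
```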
2) I enqueue writes. I don’t await them on tap
I keep a queue in memory.
Each tap pushes a “write intent”.
Then a flusher drains the queue.
I spent 4 hours building a generic job queue.
Most of it was wrong.
I don’t need retries with backoff and fancy priorities.
I need “don’t block the UI” and “don’t lose data”.
So it’s small.
A single in-flight promise.
```typescript
// writeQueue.ts
import * as SQLite from "expo-sqlite";
import type { SetRow } from "./types";

const db = SQLite.openDatabaseSync("gym.db");

type WriteJob = { type: "insertSet"; set: SetRow };

let queue: WriteJob[] = [];
let flushing = false;

export function enqueue(job: WriteJob) {
  queue.push(job);
  void flushSoon();
}

async function flushSoon() {
  if (flushing) return;
  flushing = true;
  try {
    // Yield one tick first. Without this, the sync transaction runs inside
    // the tap handler's tick and the flush isn't actually decoupled.
    await new Promise((resolve) => setTimeout(resolve, 0));
    // Drain everything currently queued in one transaction.
    const batch = queue;
    queue = [];
    db.withTransactionSync(() => {
      for (const job of batch) {
        if (job.type === "insertSet") {
          const s = job.set;
          db.runSync(
            `INSERT INTO sets (client_id, workout_id, exercise_id, reps, weight, created_at, pending)
             VALUES (?, ?, ?, ?, ?, ?, 1)`,
            [s.clientId, s.workoutId, s.exerciseId, s.reps, s.weight, s.createdAt]
          );
        }
      }
    });
  } finally {
    flushing = false;
    // If new jobs arrived during the transaction, flush again.
    if (queue.length) void flushSoon();
  }
}
```
Yeah, I’m using openDatabaseSync + runSync.
On purpose.
In Expo’s SQLite, the async API still posts work to the native module.
If you await it in the press handler, you’re back to “button feels sticky”.
The win here is: enqueue is instant, flush is decoupled.
Also: one transaction.
That’s the difference between “fine” and “why does it stutter on older phones?”.
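The queue inserts into a `sets` table the post never defines. A plausible schema sketch — the `UNIQUE` on `client_id` is my addition, so an accidental double-enqueue becomes a loud constraint error instead of a silent duplicate row (pair it with `INSERT OR IGNORE` if you ever replay the queue):

```typescript
// Hypothetical schema for the sets table the queue writes to.
// Run once at startup, e.g. db.execSync(SETS_SCHEMA).
export const SETS_SCHEMA = `
  CREATE TABLE IF NOT EXISTS sets (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    client_id TEXT NOT NULL UNIQUE,
    workout_id TEXT NOT NULL,
    exercise_id TEXT NOT NULL,
    reps INTEGER NOT NULL,
    weight REAL NOT NULL,
    created_at INTEGER NOT NULL,
    pending INTEGER NOT NULL DEFAULT 1
  );
`;
```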
3) I render “pending” rows so the UI never lies
Offline-first apps love lying.
They show the new row.
But it never hit disk.
Then you reopen the app. It’s gone.
I make pending visible.
Not with a spinner everywhere.
Just a subtle state.
On tap:
- add row to memory with `pending: 1`
- enqueue insert
- later, refresh from SQLite and clear pending
This hook is the glue.
It keeps memory fast.
And it reconciles with SQLite.
```typescript
// useSets.ts
import { useCallback, useMemo, useState } from "react";
import * as SQLite from "expo-sqlite";
import type { SetRow } from "./types";
import { enqueue } from "./writeQueue";
import { makeClientId } from "./types";

const db = SQLite.openDatabaseSync("gym.db");

export function useSets(workoutId: string) {
  const [sets, setSets] = useState<SetRow[]>([]);

  const reloadFromDb = useCallback(() => {
    const rows = db.getAllSync<{
      id: number;
      client_id: string;
      workout_id: string;
      exercise_id: string;
      reps: number;
      weight: number;
      created_at: number;
      pending: 0 | 1;
    }>(
      `SELECT id, client_id, workout_id, exercise_id, reps, weight, created_at, pending
       FROM sets
       WHERE workout_id = ?
       ORDER BY created_at ASC`,
      [workoutId]
    );
    setSets(
      rows.map((r) => ({
        id: r.id,
        clientId: r.client_id,
        workoutId: r.workout_id,
        exerciseId: r.exercise_id,
        reps: r.reps,
        weight: r.weight,
        createdAt: r.created_at,
        pending: r.pending,
      }))
    );
  }, [workoutId]);

  const addSet = useCallback(
    (exerciseId: string, reps: number, weight: number) => {
      const newSet: SetRow = {
        clientId: makeClientId(),
        workoutId,
        exerciseId,
        reps,
        weight,
        createdAt: Date.now(),
        pending: 1,
      };
      // UI first. Always.
      setSets((prev) => [...prev, newSet]);
      // SQLite later.
      enqueue({ type: "insertSet", set: newSet });
      // Keep it simple: reload after the flush tick.
      // This avoids needing "returning id" support.
      setTimeout(reloadFromDb, 0);
    },
    [reloadFromDb, workoutId]
  );

  return useMemo(() => ({ sets, addSet, reloadFromDb }), [sets, addSet, reloadFromDb]);
}
```
That `setTimeout(reloadFromDb, 0)` looks dumb.
It is.
It also works.
I originally tried to grab the inserted row id immediately.
Then I hit `SQLiteError: cannot commit - no transaction is active` while mixing async and sync calls.
I stopped being clever.
Reloading is cheap for one workout.
If you have 2,000 rows in the same screen, you’ll need a smarter reconcile.
But for a workout detail page?
This stays fast.
4) I measure tap-to-render time. Not “feels faster”
My brain lies.
Especially after midnight.
So I put timing directly in the handler.
I log “tap -> next frame”.
If that number spikes, I know I regressed.
```typescript
// perf.ts
import { InteractionManager } from "react-native";

export function measureTapToFrame(label: string) {
  const t0 = global.performance?.now?.() ?? Date.now();
  // After interactions + next frame.
  InteractionManager.runAfterInteractions(() => {
    requestAnimationFrame(() => {
      const t1 = global.performance?.now?.() ?? Date.now();
      const ms = Math.round(t1 - t0);
      console.log(`[perf] ${label}: ${ms}ms`);
    });
  });
}
```
Then in my button:

- call `measureTapToFrame("addSet")`
- call `addSet(...)`
If I see 180ms, I go hunting.
If I see 22ms, I leave it alone.
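Wired together, the handler stays fully synchronous. A sketch with the two dependencies injected so it can be exercised without React Native — `makeAddSetHandler` is my name, not the article's:

```typescript
// Hypothetical press-handler factory: start the timer, then do the
// in-memory add. Note there is no await anywhere in the returned function.
export function makeAddSetHandler(
  measure: (label: string) => void,
  addSet: (exerciseId: string, reps: number, weight: number) => void
) {
  return (exerciseId: string, reps: number, weight: number) => {
    measure("addSet");                // start the tap-to-frame timer
    addSet(exerciseId, reps, weight); // synchronous: enqueue only, no IO on this tick
  };
}
```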
The biggest regression I caused?
I added an await in the press handler to “ensure durability”.
Tap-to-frame jumped from 28ms to 146ms on my Pixel 6a.
I reverted it.
Durability moved to the background flush.
Results
The set logging screen stopped stuttering.
That’s the real result.
Before: I logged 10 sets and the UI froze 2–3 times per workout.
My console showed tap-to-frame spikes between 120ms and 210ms.
After: the same flow stays between 18ms and 45ms for tap-to-frame on my Pixel 6a.
I also stopped losing sets when I toggled airplane mode mid-workout, because the UI always marks rows as `pending: 1` until SQLite confirms them.
No magic.
Just not blocking the tap.
Key takeaways
- Generate a `clientId` in the UI. Use it as the list key.
- Don't `await` SQLite in press handlers. Enqueue instead.
- Flush in one transaction. Batches beat chatty writes.
- Render pending state. Offline-first apps can't pretend.
- Measure tap-to-frame with a real timer. Your gut's unreliable.
Closing
I'm still not fully happy with the `setTimeout(reloadFromDb, 0)` reconcile.
It’s simple. It’s also a little sloppy.
If you’re shipping offline-first React Native screens: do you reconcile UI-to-SQLite by requerying, or do you propagate inserted row ids back into memory without a full reload?