Sathish

How to Build a Fast Offline SQLite Write Queue in Expo

My workout app felt “offline-first”… until I walked into a gym with zero signal and my UI started stuttering when I logged sets. I wasn’t even syncing yet—SQLite writes alone were blocking the main thread. Here’s the queue + batching pattern I shipped to keep set logging under 5 seconds (and usually under 300ms).

Last week I hit an annoying reality check: offline-first isn’t just “store it locally”. It’s also “don’t make the UI feel like it’s writing to a database.”

In my fitness app (React Native + Expo + SQLite), my goal is super specific: log a set in under 5 seconds, ideally in a couple taps. That means every interaction needs to feel instant—even when I’m hammering SQLite.

TL;DR

  • I keep the UI snappy by queueing writes instead of writing to SQLite on every tap.
  • I batch inserts in a single transaction to avoid “death by 20 tiny writes”.
  • I use an optimistic UI: update state immediately, persist in the background.
  • I add a tiny retry + dead-letter strategy so I don’t lose sets.

Context (why this matters)

Gyms are the perfect stress test: bad Wi‑Fi, spotty LTE, and you’re trying to move fast between sets. I originally wrote to SQLite directly inside my “Save set” handler. It worked… until I started logging quickly (like 10–20 sets in a few minutes).

The symptom wasn’t a crash. It was worse: micro-freezes. Haptics felt delayed. Animations dropped frames. And the whole “5-second set logging” promise started to feel fake.

So in this post I’ll show the pattern I landed on:

1) model a local write queue
2) add a single background flusher
3) batch writes with transactions
4) keep UI optimistic and safe

This is copy-pasteable for Expo + SQLite.

1) Define a “write job” table (durable queue)

I learned this the hard way: if you keep your queue only in memory, you lose it on app kill. I want set logging to be 100% offline-capable, so my queue needs to be durable.

I keep a small write_jobs table in SQLite. Each job stores an operation name and a JSON payload.

// db/schema.ts
import * as SQLite from "expo-sqlite";

export const db = SQLite.openDatabaseSync("gymtracker.db");

export function initDb() {
  // Workout tables (simplified)
  db.execSync(`
    PRAGMA journal_mode = WAL;

    CREATE TABLE IF NOT EXISTS sets (
      id TEXT PRIMARY KEY NOT NULL,
      workout_id TEXT NOT NULL,
      exercise_id TEXT NOT NULL,
      reps INTEGER NOT NULL,
      weight_kg REAL NOT NULL,
      created_at INTEGER NOT NULL
    );

    -- Durable local queue
    CREATE TABLE IF NOT EXISTS write_jobs (
      id INTEGER PRIMARY KEY AUTOINCREMENT,
      type TEXT NOT NULL,
      payload TEXT NOT NULL,
      attempts INTEGER NOT NULL DEFAULT 0,
      status TEXT NOT NULL DEFAULT 'pending',
      created_at INTEGER NOT NULL
    );

    CREATE INDEX IF NOT EXISTS idx_write_jobs_status
      ON write_jobs(status, created_at);
  `);
}

What this does:

  • WAL mode improves concurrency: readers and the single writer no longer block each other.
  • write_jobs is a mini message queue.

Pitfall: don’t store huge payloads. For sets I store just the fields I need.

Next I needed a safe way to enqueue jobs without making the UI wait.

2) Enqueue writes fast (and return immediately)

My “log set” button should do almost no work. I update UI state optimistically and just enqueue a job.

// db/writeQueue.ts
import { db } from "./schema";

type InsertSetPayload = {
  id: string;
  workoutId: string;
  exerciseId: string;
  reps: number;
  weightKg: number;
  createdAt: number;
};

export function enqueueInsertSet(payload: InsertSetPayload) {
  // Keep this function tiny: it runs during a tap.
  const payloadJson = JSON.stringify(payload);

  db.runSync(
    `INSERT INTO write_jobs (type, payload, created_at)
     VALUES (?, ?, ?)`,
    ["insert_set", payloadJson, Date.now()]
  );
}

What this does:

  • The tap handler performs a single insert into write_jobs.
  • We avoid doing a full insert into sets (and any related work) on the hot path.

Pitfall: don’t enqueue duplicates accidentally. I made this mistake when a user double-tapped. In my app I also disable the button for ~300ms, but you can also do idempotency (more on that later).

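Here's a minimal sketch of that ~300ms guard (useTapGuard is a hypothetical helper of my own, not part of the app code above):

// hooks/useTapGuard.ts (hypothetical helper for the ~300ms double-tap guard)
import { useCallback, useRef } from "react";

export function useTapGuard(fn: () => void, windowMs = 300) {
  const lastTapRef = useRef(0);

  return useCallback(() => {
    const now = Date.now();
    if (now - lastTapRef.current < windowMs) return; // swallow the double tap
    lastTapRef.current = now;
    fn();
  }, [fn, windowMs]);
}

Wrap the handler once (const onLogSet = useTapGuard(logSet);) and pass onLogSet to the button.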
Now we need a background flusher that drains jobs in batches.

3) Flush jobs in a single transaction (batching)

This is the biggest win: instead of 20 separate transactions, I flush 20 jobs in one transaction. SQLite loves that.

I keep a single flusher loop running while the app is active. (If you want background tasks, that’s a separate rabbit hole.)

// db/flusher.ts
import { db } from "./schema";

type WriteJobRow = {
  id: number;
  type: string;
  payload: string;
  attempts: number;
};

function claimPendingJobs(limit = 25): WriteJobRow[] {
  // Mark jobs as 'processing' so we don't double-flush.
  // SQLite doesn't support SELECT ... FOR UPDATE, so we do it in a transaction.
  db.execSync("BEGIN");
  try {
    const rows = db.getAllSync<WriteJobRow>(
      `SELECT id, type, payload, attempts
       FROM write_jobs
       WHERE status = 'pending'
       ORDER BY created_at ASC
       LIMIT ?`,
      [limit]
    );

    if (rows.length > 0) {
      const ids = rows.map(r => r.id);
      const placeholders = ids.map(() => "?").join(",");
      db.runSync(
        `UPDATE write_jobs
         SET status = 'processing'
         WHERE id IN (${placeholders})`,
        ids
      );
    }

    db.execSync("COMMIT");
    return rows;
  } catch (e) {
    db.execSync("ROLLBACK");
    throw e;
  }
}

export function flushOnce() {
  const jobs = claimPendingJobs(25);
  if (jobs.length === 0) return 0;

  db.execSync("BEGIN");
  try {
    for (const job of jobs) {
      if (job.type === "insert_set") {
        const p = JSON.parse(job.payload) as {
          id: string;
          workoutId: string;
          exerciseId: string;
          reps: number;
          weightKg: number;
          createdAt: number;
        };

        // Idempotent insert: ignore if already written.
        db.runSync(
          `INSERT OR IGNORE INTO sets
           (id, workout_id, exercise_id, reps, weight_kg, created_at)
           VALUES (?, ?, ?, ?, ?, ?)`,
          [p.id, p.workoutId, p.exerciseId, p.reps, p.weightKg, p.createdAt]
        );

        db.runSync(
          `UPDATE write_jobs SET status = 'done' WHERE id = ?`,
          [job.id]
        );
      } else {
        // Unknown job type: mark as failed so it doesn't block the queue.
        db.runSync(
          `UPDATE write_jobs SET status = 'failed' WHERE id = ?`,
          [job.id]
        );
      }
    }

    db.execSync("COMMIT");
    return jobs.length;
  } catch (e) {
    db.execSync("ROLLBACK");

    // If the batch fails, requeue jobs with attempts++
    // (simple strategy; you can make this smarter)
    db.execSync("BEGIN");
    try {
      for (const job of jobs) {
        db.runSync(
          `UPDATE write_jobs
           SET status = 'pending', attempts = attempts + 1
           WHERE id = ?`,
          [job.id]
        );
      }
      db.execSync("COMMIT");
    } catch {
      db.execSync("ROLLBACK");
    }

    throw e;
  }
}

What this does:

  • Claims jobs first (so we don’t process the same job twice).
  • Flushes them inside one transaction.
  • Uses INSERT OR IGNORE so double taps don’t create duplicate sets.

Pitfalls I hit:

  • If you don’t “claim” jobs, two flush calls can race.
  • If you do one transaction per job, you lose most of the performance benefit.

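One more gap worth closing: the catch block above requeues jobs with attempts + 1, but nothing ever gives up, so a poison job could bounce forever. Here's a minimal sketch of the "tiny dead-letter strategy" from the TL;DR (demoteExhaustedJobs and MAX_ATTEMPTS are my own, hypothetical additions):

// db/deadLetter.ts (sketch; MAX_ATTEMPTS is a threshold I picked, tune it)
import { db } from "./schema";

const MAX_ATTEMPTS = 5;

export function demoteExhaustedJobs() {
  // Park jobs that keep bouncing back to 'pending' so they stop
  // blocking the queue; 'failed' rows can be inspected later.
  db.runSync(
    `UPDATE write_jobs
     SET status = 'failed'
     WHERE status = 'pending' AND attempts >= ?`,
    [MAX_ATTEMPTS]
  );
}

Calling it at the top of flushOnce() is enough to keep the queue moving.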
Next I needed a safe way to run this flusher regularly without creating multiple timers.

4) Run a single flusher loop (and don’t spam it)

I originally called flushOnce() after every enqueue. That defeated the whole point because it created a flush storm.

Now I run a single interval while the app is active, and I guard against concurrent flushes.

// db/useWriteQueueFlusher.ts
import { useEffect, useRef } from "react";
import { flushOnce } from "./flusher";

export function useWriteQueueFlusher() {
  const isFlushingRef = useRef(false);

  useEffect(() => {
    const timer = setInterval(() => {
      if (isFlushingRef.current) return;

      isFlushingRef.current = true;
      try {
        // Drain multiple batches per tick if needed.
        // This keeps backlog small after a long offline session.
        for (let i = 0; i < 5; i++) {
          const flushed = flushOnce();
          if (flushed === 0) break;
        }
      } catch (e) {
        // Don't crash the app; log and keep going.
        console.warn("flushOnce failed", e);
      } finally {
        isFlushingRef.current = false;
      }
    }, 750); // tune this based on your app

    return () => clearInterval(timer);
  }, []);
}

What this does:

  • Only one flusher runs.
  • It drains up to 5 batches per tick so it can catch up.
  • It doesn’t block taps; it runs on a schedule.

Pitfall: don’t set the interval too low. If you flush constantly, you’ll burn battery and still risk UI contention.

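A refinement worth considering (my addition, using React Native's AppState API): drain the queue once more when the app is backgrounded, so the sets table is caught up before the OS suspends the JS thread. The queue itself is already durable on disk, so this is tidiness, not correctness:

// db/useFlushOnBackground.ts (sketch)
import { useEffect } from "react";
import { AppState } from "react-native";
import { flushOnce } from "./flusher";

export function useFlushOnBackground() {
  useEffect(() => {
    const sub = AppState.addEventListener("change", (state) => {
      if (state !== "background") return;
      try {
        // Each call flushes one batched transaction; loop until drained.
        while (flushOnce() > 0) {}
      } catch (e) {
        console.warn("background flush failed", e);
      }
    });
    return () => sub.remove();
  }, []);
}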
Now we can wire it into a screen and keep the UI optimistic.

5) Optimistic UI: show the set immediately, persist later

For set logging, I update React state first, then enqueue.

// screens/WorkoutScreen.tsx
import React, { useMemo, useState } from "react";
import { View, Button, FlatList, Text } from "react-native";
import { enqueueInsertSet } from "../db/writeQueue";
import { useWriteQueueFlusher } from "../db/useWriteQueueFlusher";

type SetItem = {
  id: string;
  reps: number;
  weightKg: number;
  createdAt: number;
};

function generateId() {
  // Good enough for local IDs; swap for uuid if you prefer.
  return `${Date.now()}-${Math.random().toString(16).slice(2)}`;
}

export function WorkoutScreen() {
  useWriteQueueFlusher();

  const workoutId = "w_123";
  const exerciseId = "e_456";

  const [sets, setSets] = useState<SetItem[]>([]);

  const lastSet = useMemo(() => sets[0], [sets]);

  const logSet = () => {
    const id = generateId();
    const createdAt = Date.now();

    const reps = lastSet?.reps ?? 8;
    const weightKg = lastSet?.weightKg ?? 60;

    // 1) Optimistic UI update (instant)
    setSets(prev => [{ id, reps, weightKg, createdAt }, ...prev]);

    // 2) Queue the durable write (fast)
    enqueueInsertSet({
      id,
      workoutId,
      exerciseId,
      reps,
      weightKg,
      createdAt,
    });
  };

  return (
    <View style={{ flex: 1, padding: 16 }}>
      <Button title="Log set" onPress={logSet} />
      <FlatList
        data={sets}
        keyExtractor={(item) => item.id}
        renderItem={({ item }) => (
          <Text>
            {item.reps} reps @ {item.weightKg}kg
          </Text>
        )}
      />
    </View>
  );
}

What this does:

  • UI updates immediately.
  • SQLite work is deferred and batched.

Pitfall: if you also read from sets table to render the list, you can get weird duplication (optimistic state + DB state). I keep one source of truth per screen: either state-only until refresh, or render from DB and mark optimistic rows separately.

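If you go the render-from-DB route, a hypothetical useHydratedSets hook is roughly what I mean: hydrate once from SQLite, then let optimistic updates layer on top (the INSERT OR IGNORE in the flusher means a later re-read can't duplicate rows by id):

// screens/useHydratedSets.ts (hypothetical hook, sketching the render-from-DB variant)
import { useEffect, useState } from "react";
import { db } from "../db/schema";

type HydratedSet = {
  id: string;
  reps: number;
  weight_kg: number;
  created_at: number;
};

export function useHydratedSets(workoutId: string, exerciseId: string) {
  const [sets, setSets] = useState<HydratedSet[]>([]);

  useEffect(() => {
    // One synchronous read on mount; optimistic rows are added via setSets.
    const rows = db.getAllSync<HydratedSet>(
      `SELECT id, reps, weight_kg, created_at
       FROM sets
       WHERE workout_id = ? AND exercise_id = ?
       ORDER BY created_at DESC`,
      [workoutId, exerciseId]
    );
    setSets(rows);
  }, [workoutId, exerciseId]);

  return [sets, setSets] as const;
}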
The part people skip is what happens when jobs keep failing. My minimal answer is the attempts counter plus the dead-letter demotion sketched earlier; exponential backoff and an in-app screen for failed jobs are follow-up material.

Results / outcome

After switching from “write on tap” to “enqueue + batch flush”, the micro-freezes disappeared.

In my Gym Tracker project, I can now log sets as fast as I can tap and the UI stays smooth (haptics and animations feel immediate). Cold start is still under ~2 seconds, and long offline sessions don't build up scary lag.

Also: debugging got easier. When something goes wrong, I can inspect write_jobs and see exactly what’s pending/failed instead of guessing.

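For reference, this is the kind of one-liner I mean (run from a dev-only screen or a debugger console):

// Quick queue-health check against the write_jobs table:
const counts = db.getAllSync<{ status: string; n: number }>(
  `SELECT status, COUNT(*) AS n FROM write_jobs GROUP BY status`
);
console.log(counts); // e.g. [{ status: "pending", n: 3 }, { status: "done", n: 120 }]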
Key takeaways

  • Treat SQLite writes like a network request: don’t do heavy work on the tap path.
  • A durable queue (write_jobs) makes offline behavior reliable (even after app kills).
  • Batched transactions are the difference between “works” and “feels fast”.
  • Make writes idempotent (INSERT OR IGNORE) to survive double taps and retries.
  • Run exactly one flusher and guard against overlapping flushes.

Your turn

If you’re building an offline-first Expo app: are you writing to SQLite directly inside button handlers, or are you queueing/batching?

Drop your current approach in the comments and I’ll help you pressure-test it. If there’s interest, I’ll write a follow-up on adding exponential backoff + a dead-letter screen to inspect failed jobs inside the app.
