Alex Aslam

React Native + Rails synchronization with WatermelonDB

I still remember the Slack message that changed my entire approach to mobile development.

It came from our lead iOS engineer at 11:47 PM: “The app crashes when the train goes into the tunnel. Every. Single. Time.”

We had built a beautiful React Native app for field technicians. The Rails backend was solid. The API was RESTful. The UI was pixel‑perfect. But the moment the network got spotty—on the subway, in a basement, in the middle of nowhere—the app fell apart. Spinners that never stopped. Forms that failed to submit. Users who wanted to throw their phones into the nearest river.

We tried caching. We tried Redux persist. We tried local storage hacks. Nothing worked reliably. The app was a house of cards, and every network hiccup was a gust of wind.

That’s when I stumbled on a GitHub repository with a strange name: WatermelonDB. I read the README, and my heart started racing. This wasn’t another “just store some JSON in AsyncStorage” library. This was a full‑blown, reactive database for React Native, built for offline‑first apps with massive data sets.

The tagline said: “Build powerful React Native apps that work offline, with lightning-fast performance.”

I was skeptical. I’d been burned before. But three months later, after a journey of late nights, whiteboard arguments, and one unforgettable production deployment, I became a believer. This is the story of how we synchronized React Native with Rails using WatermelonDB—and how I learned that synchronization is less about code and more about art.

The Problem: Offline Isn’t Optional

Our use case was brutal. Field technicians in industrial sites needed to:

  • View thousands of work orders, even with zero connectivity.
  • Fill out detailed forms with photos, signatures, and checklists.
  • Sync everything automatically when they returned to the office or found a Wi‑Fi hotspot.

We tried the obvious: store API responses in AsyncStorage, show a cached version when offline, and queue mutations with a custom sync manager. It worked… for about a week. Then we hit the walls.

Performance – AsyncStorage stores everything as serialized JSON strings. Parsing 5,000 work orders blocked the JS thread and froze the UI for seconds.
Consistency – Redux persisted state could get out of sync with the backend. We had no way to know if the data was fresh.
Conflicts – When two technicians edited the same work order offline, the last one to sync won. We lost data.

We needed a database that was:

  • Fast – Queries in milliseconds, even with tens of thousands of records.
  • Reactive – The UI should update automatically when data changes, without manual refetching.
  • Sync‑aware – It needed a built‑in way to handle pull and push synchronization with a backend.

WatermelonDB checked every box.

WatermelonDB: The Database That Woke Up

WatermelonDB is not your typical mobile database. It’s built on top of SQLite (through its native SQLiteAdapter), but it adds a reactive layer that feels like magic. You define models with decorators, query with .observe(), and the UI re‑renders automatically when data changes.

The learning curve was steeper than I expected. It requires a different mental model: you’re working with observables and collections, not traditional imperative queries. But the payoff is immense.

Here’s a snippet of what a model looked like for us:

```javascript
import { Model } from '@nozbe/watermelondb';
import { field, date, relation } from '@nozbe/watermelondb/decorators';

export default class WorkOrder extends Model {
  static table = 'work_orders';

  @field('work_order_number') workOrderNumber;
  @field('title') title;
  @field('status') status;
  @date('scheduled_date') scheduledDate;
  @relation('users', 'assigned_to') assignedTo;
}
```

Simple, declarative, and reactive. But the real magic came when we added the sync engine.
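
Each model is backed by a table schema. Here’s a minimal sketch of ours; the column names mirror the decorators above, and date fields are stored as numeric timestamps per WatermelonDB’s convention:

```javascript
import { appSchema, tableSchema } from '@nozbe/watermelondb';

export const schema = appSchema({
  version: 1,
  tables: [
    tableSchema({
      name: 'work_orders',
      columns: [
        { name: 'work_order_number', type: 'string' },
        { name: 'title', type: 'string' },
        { name: 'status', type: 'string' },
        { name: 'scheduled_date', type: 'number' }, // @date fields are stored as timestamps
        { name: 'assigned_to', type: 'string', isIndexed: true }, // indexed: we query by technician
      ],
    }),
  ],
});
```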

The Sync Art: Bridging Rails and Watermelon

Synchronizing a WatermelonDB database with a Rails backend is an art form. It’s not a plug‑and‑play solution; you have to design both sides to speak the same language.

We spent a week sketching on a whiteboard, mapping out the synchronization lifecycle. We ended up with a two‑way sync strategy:

1. Pull: Getting the Initial Data and Updates

WatermelonDB’s synchronize method expects a pull function that fetches changes since a given timestamp. On the Rails side, we built an endpoint that accepted a last_synced_at parameter and returned:

  • A list of created/updated records (in a compact JSON format)
  • A list of deleted record IDs
  • A new timestamp for the next sync
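
On the client, WatermelonDB’s `pullChanges` callback must return this payload reshaped into its `{ changes, timestamp }` structure. We kept that reshaping in a pure function so it could be tested without a network. The field names (`changes`, `deleted`, `timestamp`) match our endpoint, not a standard; lumping creates and updates together is an assumption our server made:

```javascript
// Hypothetical mapper from our Rails pull payload to the shape
// WatermelonDB's pullChanges callback must return. Assumes the server
// does not distinguish creates from updates, as our endpoint does.
function toWatermelonChanges(body) {
  return {
    changes: {
      work_orders: {
        created: [],
        updated: body.changes,   // created/updated records, raw attributes
        deleted: body.deleted,   // IDs of soft-deleted records
      },
    },
    timestamp: body.timestamp,   // echoed back as last_synced_at on the next pull
  };
}
```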

We used updated_at columns to track changes. But we quickly realized that relying solely on timestamps could miss updates that happened in the same second. So we added a sync_version integer that increments on every change—a classic optimistic locking approach.

The Rails endpoint looked something like:

```ruby
# /api/v1/sync/pull
def pull
  since = params[:last_synced_at].present? ? Time.iso8601(params[:last_synced_at]) : Time.at(0)

  records = WorkOrder.where('updated_at > ?', since)
  # `.deleted` is our soft-delete scope (where.not(deleted_at: nil))
  deleted = WorkOrder.deleted.where('deleted_at > ?', since).pluck(:id)

  render json: {
    changes: records.map { |r| WorkOrderSerializer.new(r).as_json },
    deleted: deleted,
    timestamp: Time.current.iso8601
  }
end
```

But we didn’t stop there. WatermelonDB allows you to send the entire dataset in chunks, so we implemented pagination for the initial sync to avoid loading 50,000 records at once.
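
A sketch of that chunked initial pull, with the page fetcher injected so the loop can be exercised without a network. The `{ records, hasMore }` pagination envelope is our assumption, not part of WatermelonDB:

```javascript
// Hypothetical chunked initial sync. fetchPage(page) returns
// { records, hasMore }; pages of 500 kept each batch small enough
// that applying it never blocked the UI.
async function pullAllPages(fetchPage) {
  const all = [];
  let page = 1;
  for (;;) {
    const { records, hasMore } = await fetchPage(page);
    all.push(...records);
    if (!hasMore) break;
    page += 1;
  }
  return all;
}
```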

2. Push: Sending Local Changes to Rails

Push was harder. WatermelonDB expects a push function that sends a batch of created, updated, and deleted records. On the Rails side, we had to process them in order, handle conflicts, and respond with success or failure for each record.

We created a POST /api/v1/sync/push endpoint that accepted an array of changes. Each change included:

  • id (local WatermelonDB ID)
  • table
  • action (create, update, delete)
  • data (the raw attributes)
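
Flattening WatermelonDB’s push `changes` object, which arrives as `{ table: { created, updated, deleted } }`, into that array is mechanical. A sketch (the row shape is our endpoint’s contract, not WatermelonDB’s):

```javascript
// Hypothetical flattener: WatermelonDB hands pushChanges a nested
// per-table object; our Rails endpoint wanted a flat array of
// { id, table, action, data } rows, processed in order.
function flattenPushChanges(changes) {
  const rows = [];
  for (const [table, ops] of Object.entries(changes)) {
    for (const rec of ops.created) rows.push({ id: rec.id, table, action: 'create', data: rec });
    for (const rec of ops.updated) rows.push({ id: rec.id, table, action: 'update', data: rec });
    for (const id of ops.deleted) rows.push({ id, table, action: 'delete', data: null });
  }
  return rows;
}
```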

The Rails controller had to:

  • Validate each change (permissions, data integrity)
  • Apply it to the database
  • Handle conflicts (if the server version was newer, we returned a “conflict” response so the client could resolve it)

This was the most complex part. We introduced a last_synced_at on each record to detect conflicts. If the server’s updated_at was newer than the client’s version, we rejected the push and sent the server version back for the client to merge.
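
The rule itself is small; here it is sketched in JavaScript for illustration (our Rails controller did the equivalent, and `serverRecord` is null when the push is a create):

```javascript
// Hypothetical conflict check mirroring our server-side logic: reject a
// pushed change when the server copy changed after the client last synced
// it, handing back the server version so the client can merge.
function applyPush(serverRecord, change) {
  if (serverRecord && serverRecord.updated_at > change.last_synced_at) {
    return { status: 'conflict', serverVersion: serverRecord };
  }
  return { status: 'applied', record: { ...(serverRecord || {}), ...change.data } };
}
```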

The Art of Conflict Resolution

Conflicts are inevitable in offline‑first apps. You can’t avoid them; you can only manage them gracefully.

We implemented a three‑tier strategy:

  1. Last‑write‑wins (LWW) – For non‑critical fields like notes or comments, we simply let the latest write (by timestamp) win. We stored a client_updated_at field on the client and used that to determine precedence.

  2. Merge – For more complex data, like checklist items, we merged changes. If the technician added a new item offline and the office changed the description of another item, we combined both.

  3. Manual resolution – In rare cases (e.g., conflicting signatures), we flagged the record and asked the user to resolve it during sync. This was a last resort.

WatermelonDB’s sync adapter made it possible to implement these strategies cleanly. We wrote custom resolvers that ran on the client after a conflict was detected, merging data or showing a modal.
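
Simplified versions of the first two tiers, with field names like `client_updated_at` being our own convention (the real resolvers also handled photos and signatures):

```javascript
// Tier 1, last-write-wins: the copy with the newer client timestamp wins.
function resolveLww(local, server) {
  return local.client_updated_at > server.client_updated_at ? local : server;
}

// Tier 2, checklist merge: key items by id so an item added offline and a
// description edited on the server both survive. When both sides touched
// the same item, the technician's local fields take precedence.
function mergeChecklists(localItems, serverItems) {
  const byId = new Map(serverItems.map((item) => [item.id, item]));
  for (const item of localItems) {
    byId.set(item.id, { ...(byId.get(item.id) || {}), ...item });
  }
  return [...byId.values()];
}
```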

The Performance Revelation

Once we had the sync working, we tested it with our production data set: 20,000 work orders, 500 technicians, and thousands of photos. The initial sync took about 90 seconds over a slow 3G connection—unacceptable.

We optimized in several ways:

  • Chunked sync – We broke the initial sync into pages of 500 records. WatermelonDB processes them in batches, so the UI stayed responsive.
  • Selective sync – We didn’t sync all work orders. Only those assigned to the technician or related to their location. We added WHERE assigned_to = ? on the pull endpoint.
  • Binary data – Photos were synced separately with a background upload queue, not through WatermelonDB. The database only stored local file references.
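
The photo queue, reduced to its essentials. This sketch is synchronous for clarity; the real one uploaded in a background task and retried with backoff:

```javascript
// Hypothetical upload queue: local file references wait here until
// flush() is called on a good connection. Failed uploads stay queued
// for the next attempt, so nothing is lost on a flaky network.
function createUploadQueue(uploadFn) {
  let pending = [];
  return {
    enqueue(fileRef) { pending.push(fileRef); },
    flush() {
      const failed = [];
      for (const ref of pending) {
        try { uploadFn(ref); } catch (e) { failed.push(ref); }
      }
      pending = failed;
      return failed.length; // how many remain for the next flush
    },
    size() { return pending.length; },
  };
}
```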

The final result: first sync in ~15 seconds, incremental syncs in under 2 seconds, and the UI holding a steady 60 fps.

Lessons from the Journey

Looking back, I realize that building this sync layer was less about coding and more about understanding the shape of our data and the reality of our users. We had to make trade‑offs:

  • Consistency vs. availability – We chose availability (the app works offline) and accepted eventual consistency. Users could see stale data for a few minutes, but they could always work.
  • Complexity vs. user experience – The sync engine added 30% more code to our codebase. But it eliminated 90% of the support tickets related to network issues. Worth it.

We also learned to respect WatermelonDB’s constraints. It’s not a relational database in the traditional sense—it’s a reactive object store with SQLite underneath. You have to design your models to match your access patterns, not the other way around.

The Art of Sync

If I had to distill this journey into one piece of advice for senior full‑stack developers, it would be this:

Synchronization is not a technical feature; it’s a product experience.

When you build an offline‑first app, you’re promising your users that their work will be safe, that the app will be fast, and that the data will eventually be where it needs to be. WatermelonDB gives you the tools—but you, the artist, must paint the picture.

You have to decide:

  • How fresh does the data need to be?
  • What happens when two people edit the same thing?
  • How do you communicate sync status without annoying the user?
  • How do you recover from sync failures?

These are design questions, not just engineering ones. And the best solutions come from walking in your users’ shoes—or, in our case, riding in their trucks, watching them work in basements and barns, and understanding that a spinning spinner is a betrayal of trust.

We deployed the new WatermelonDB‑powered app six months after that frantic Slack message. The first week, we held our breath. Support tickets dropped by 80%. The lead iOS engineer sent a new message: “The train tunnel test passed. It didn’t even blink.”

That’s the art. That’s the journey.
