The Production Problem with Async Dual Writes

Many "zero-downtime" data migration strategies involving dual writes promise seamless transitions, but often hide insidious data consistency traps. Without careful handling, you're not just moving data; you're silently corrupting or losing it, only to discover the issue months after cutover.

Imagine you're an engineer at a rapidly growing SaaS company. Your users table needs to be sharded or migrated to a new database technology. To avoid downtime, you implement a dual-write strategy: all new writes go to both the old and new users tables. Reads initially come from the old table, then eventually switch to the new one. This sounds solid.

Now picture this: a user updates their profile. Your application issues two write requests: one to OldDB.users and one to NewDB.users. The write to OldDB succeeds, and your API returns HTTP 200 to the client. But the write to NewDB fails due to a network timeout, a transient database hiccup, or a schema validation error specific to the new system. What does your application do? If it reports success because the OldDB write worked, you now have an inconsistency: the user's profile is updated in the old system but stale in the new. Over days or weeks, these small, non-atomic failures accumulate into widespread data divergence. When you finally cut over to reading solely from NewDB, users start seeing outdated profiles, missing orders, or incorrect balances. Your "zero-downtime" migration just became a "zero-consistency" disaster.
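
To make that failure window concrete, here's a minimal sketch of the naive dual-write path. The old_db and new_db clients and their save() method are hypothetical stand-ins for whatever database drivers you actually use:

    class TransientDBError(Exception):
        """Stand-in for a network timeout or transient database failure."""

    def update_profile(old_db, new_db, user_id, fields):
        # Commit #1: the old, authoritative database. Succeeds.
        old_db.save("users", user_id, fields)
        try:
            # Commit #2: the new database. Can fail independently of commit #1.
            new_db.save("users", user_id, fields)
        except TransientDBError:
            # The trap: swallowing the failure and reporting success means
            # OldDB has the update, NewDB doesn't, and nothing records that
            # the two stores just diverged.
            pass
        return {"status": 200}  # the caller sees success either way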

The Expand-Contract Pattern and Dual Writes

The Expand-Contract pattern is a common strategy for zero-downtime schema migrations. It proceeds in four phases (a phase-flag sketch follows the list):

  1. Expand: Modify your application to read from the old schema and write to both the old and new schemas.
  2. Migrate Data: Backfill historical data from the old schema to the new.
  3. Validate: Continuously compare data between old and new.
  4. Contract: Switch reads to the new schema, then remove the old schema and dual-write logic.
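
As a rough illustration, an application can gate its read and write paths on a phase flag. This sketch assumes the same hypothetical old_db / new_db clients as above, with save() and get() methods; the enum and routing logic are illustrative, not a prescribed API:

    from enum import Enum

    class MigrationPhase(Enum):
        EXPAND = "expand"      # write both, read old
        VALIDATE = "validate"  # write both, read old, reconcile in background
        CONTRACT = "contract"  # write new only, read new

    DUAL_WRITE_PHASES = (MigrationPhase.EXPAND, MigrationPhase.VALIDATE)

    def write_user(phase, old_db, new_db, user_id, fields):
        if phase in DUAL_WRITE_PHASES:
            old_db.save("users", user_id, fields)
            new_db.save("users", user_id, fields)  # NOT atomic with the line above
        else:
            new_db.save("users", user_id, fields)

    def read_user(phase, old_db, new_db, user_id):
        # Reads stay on the old store until the final cutover.
        if phase is MigrationPhase.CONTRACT:
            return new_db.get("users", user_id)
        return old_db.get("users", user_id)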

Here's how the dual-write phase typically works, and where consistency issues arise:

          +-----------------------------------+
          |            Application            |
          |   (v1.1 - Dual-Write / Read Old)  |
          +-----------------------------------+
            |  ^                           |
      Write |  | Read                Write |
            v  |                           v
  +---------------------+     +---------------------+
  | Old Database (v1.0) |     | New Database (v1.1) |
  | (e.g., MySQL)       |     | (e.g., PostgreSQL)  |
  +---------------------+     +---------------------+
            |                             ^
            |    Backfill / Sync Job      |
            +-----------------------------+
             (e.g., Debezium, custom scripts)

In this setup:

  • Reads: Go to the Old Database (or read from both and merge, with old as authoritative).
  • Writes: Go to both Old Database and New Database.
  • Backfill: A separate job continuously copies existing data from Old to New.
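
A backfill job is typically a chunked copy that upserts rather than inserts, so it stays idempotent and can be restarted after a crash. A rough sketch, assuming hypothetical keyset-paginated scan() and insert-or-update upsert() helpers:

    def backfill_users(old_db, new_db, batch_size=1000):
        """Copy existing rows from the old store to the new one in batches."""
        last_id = None
        while True:
            # scan() is assumed to return rows ordered by id, after a cursor.
            rows = old_db.scan("users", after_id=last_id, limit=batch_size)
            if not rows:
                break
            for row in rows:
                # Upserting keyed on id makes re-runs safe.
                new_db.upsert("users", row["id"], row)
            last_id = rows[-1]["id"]

One subtlety: while dual writes are live, a naive backfill can clobber a newer dual-written row with an older snapshot, so real jobs usually compare an updated_at timestamp or version number before overwriting.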

The fundamental challenge is that writing to two separate databases (or even two different tables in the same database) is not an atomic operation. Without a distributed transaction across both write operations, there's always a window where one succeeds and the other fails, leading to divergence.

How Stripe Maintains Sanity at Scale

Stripe, processing billions in transactions, performs hundreds of schema changes monthly. Their approach to zero-downtime data migration heavily relies on dual writes but is backed by extensive reconciliation. When migrating critical financial data, they recognize that non-atomic dual writes are a reality.

Instead of assuming perfect consistency, Stripe engineers build systems that detect and fix discrepancies. Their strategy often includes:

  1. Shadow Writes: Before dual-writing, they might "shadow write" to the new schema. The new system receives a copy of write traffic, but these writes aren't considered authoritative and are often discarded. This allows testing the performance and correctness of the new schema under production load without impacting the old system or risking data integrity.
  2. Idempotency and Retries: Application logic ensures that write operations are idempotent, meaning they can be safely retried. When a dual write occurs, if one database write fails, the application logs the failure and often retries later or enqueues it for asynchronous processing.
  3. Continuous Reconciliation: This is the most crucial part. After dual writes are enabled, Stripe runs continuous, automated reconciliation jobs. These jobs scan both the old and new databases, compare records based on a unique identifier, and identify discrepancies. If a difference is found (e.g., a record exists in OldDB but not NewDB, or attributes differ), the reconciliation job logs it, potentially attempts to fix it (e.g., by re-applying the change to NewDB), or flags it for manual review. For example, a reconciliation job might compare 100 million customer records daily, flagging any divergence beyond a 0.0001% threshold. This background process ensures eventual consistency and acts as a safety net against non-atomic dual-write failures.
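
A stripped-down version of such a reconciliation pass might look like this. The scan()/get()/upsert() helpers are the same hypothetical ones as above, and the repair policy (re-apply from the old, authoritative store) is just one option:

    import logging

    log = logging.getLogger("reconciler")

    def reconcile(old_db, new_db, table="users", batch_size=1000,
                  alert_threshold=0.000001):  # 0.0001% as a fraction
        """Compare records across both stores; repair drift, alert past threshold."""
        checked = divergent = 0
        last_id = None
        while True:
            rows = old_db.scan(table, after_id=last_id, limit=batch_size)
            if not rows:
                break
            for old_row in rows:
                checked += 1
                new_row = new_db.get(table, old_row["id"])
                if new_row != old_row:  # missing row or attribute-level drift
                    divergent += 1
                    log.warning("divergence in %s id=%s", table, old_row["id"])
                    # Repair by re-applying the authoritative (old) record.
                    new_db.upsert(table, old_row["id"], old_row)
            last_id = rows[-1]["id"]
        if checked and divergent / checked > alert_threshold:
            # At 100M records, 0.0001% tolerates ~100 divergent rows per pass.
            raise RuntimeError(f"{divergent}/{checked} rows diverged; halt cutover")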

This rigorous validation and reconciliation process is what turns a risky dual-write strategy into a production-grade, zero-downtime migration.

Common Mistakes When Implementing Dual Writes

  1. Assuming Atomicity Across Databases: Many engineers treat a dual-write operation (e.g., db1.save() and db2.save()) as a single atomic unit. It's not. If your application code just calls two database clients, success from one and failure from the other leads to data divergence. You need explicit error handling, retries, and compensation logic, or rely on eventual consistency with strong reconciliation.
  2. Inadequate Read Strategy During Transition: During the dual-write phase, how do you read? (See the sketch after this list.)
    • Read-Old: Reading only from the old system is safest for consistency during the transition, but data written to the new system isn't immediately visible, and reads eventually need a hard cutover.
    • Read-New-Fallback-Old: Reading from the new system and falling back to the old when a record isn't found can surface inconsistencies if the new system is incomplete or subtly different.
    • Read-Both-Merge: Reading from both and merging requires complex conflict resolution and can be slow.
    Most teams get this wrong by not clearly defining the source of truth for reads at each stage.
  3. Neglecting Reconciliation and Observability: Simply setting up dual writes and a backfill job isn't enough. Without robust monitoring to track dual-write success rates, latency for each write, and, critically, continuous data validation (reconciliation) between the old and new systems, you're flying blind. Silent data loss is guaranteed without it. Many engineers skip this crucial, complex step, leading to post-cutover data integrity nightmares.
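
To make mistake #2 concrete, here's a sketch of the first two read strategies with an explicit source-of-truth switch; the strategy names and client methods are illustrative assumptions:

    import logging

    log = logging.getLogger("reads")

    def read_user(old_db, new_db, user_id, strategy="read_old"):
        if strategy == "read_old":
            # Safest during transition: the old store is the single source
            # of truth, at the cost of a hard cutover later.
            return old_db.get("users", user_id)
        if strategy == "read_new_fallback_old":
            row = new_db.get("users", user_id)
            if row is not None:
                return row  # may be stale if a dual write to NewDB failed
            # Log every fallback: a rising fallback rate means the backfill
            # or the dual-write path is falling behind.
            log.info("fallback to old store for user %s", user_id)
            return old_db.get("users", user_id)
        raise ValueError(f"unknown read strategy: {strategy}")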

Interview Angle: What Interviewers Ask

Interviewers will probe your understanding beyond the basic concept. Expect questions like:

  • "How do you ensure data consistency during a dual-write phase if one database write succeeds and the other fails?"

    • Strong Answer: "Since distributed transactions are rarely feasible or desirable, I wouldn't assume atomicity. Instead, I'd implement a compensation mechanism. For writes, I'd typically wrap the dual-write logic in a transaction within the application or use an idempotent message queue. The application would first publish the data change to a reliable queue (e.g., Kafka). A consumer would then attempt to write to both databases. If one write fails, the message could be retried with backoff. If persistent failures occur, it lands in a dead-letter queue for manual intervention or triggers an alert. Ultimately, even with retries, you need a continuous, asynchronous reconciliation job that scans both databases for discrepancies and fixes them, ensuring eventual consistency. This shifts the complexity from transactional guarantees to robust error handling and eventual repair."
  • "When would you use a 'shadow write' versus a 'dual write'?"

    • Strong Answer: "Shadow writes are primarily for testing the new system with production-like load and data, without letting it impact the live system. You write to both the old authoritative system and the new system, but the new system's writes are often ignored or merely logged for validation. This is low-risk. Dual writes, however, mean both systems are authoritative for writes during a transitional period, with the intent to eventually cut over reads to the new system. It's a higher-risk strategy because data consistency is paramount. I'd use shadow writes for initial performance testing or schema validation of the new system, and dual writes when I'm confident in the new system's write path and am preparing for a full cutover, backed by strong reconciliation."

Moving critical data without disruption is hard. Do it right, and your systems evolve gracefully. Cut corners, and you'll spend weeks on data recovery.


Need to refine your system design skills for your next interview? Book a 1:1 session with me to discuss real-world system challenges and effective design patterns.


Want to Go Deeper?

I do 1:1 sessions on system design, backend architecture, and interview prep.
If you're preparing for a Staff/Senior role or cracking FAANG rounds — book a session here.
