Most data migration strategies focus on getting to the new state. But your actual success metric isn't "migration complete," it's "can we revert this change without data loss?" A robust rollback mechanism isn't a luxury; it's the only way to guarantee business continuity when migrations inevitably hit a snag.
Why "No Rollback" Breaks Production
Imagine your team deploys a new feature requiring a crucial schema change, say, adding a `user_preferences` JSONB column with a `NOT NULL` constraint. You run the migration, deploy the new application code, and for the first 10 minutes, everything looks green. Then an edge case surfaces: existing users with implicit empty preference data (handled by old app logic) start seeing 500 errors because the new application expects a specific, non-null JSON structure. Revenue instantly drops by 15%, and PagerDuty is screaming.
Without a safe rollback strategy, you're in a nightmare scenario:
- Roll forward with a hotfix: Rushing a fix under pressure is a recipe for more bugs, especially if the underlying data is already corrupted or partially transformed.
- Restore from backup: This means hours of downtime and guaranteed data loss: everything written since the last backup is gone.
- Manual data repair: An error-prone, slow process for critical data, often involving direct database manipulation, leading to further inconsistency.
All options are unacceptable in a production system handling high traffic or sensitive data.
Designing for Zero-Data-Loss Rollback: The Phased Migration
The core idea behind safe rollbacks is to keep the old system fully operational throughout the migration, especially its write path, even as you transition to a new schema or database. If something breaks, you can then revert to the old application version without data loss.
This typically involves a phased approach often called "dual write" or "shadow write."
+--------------------+
| |
| Application v1 |
| (Reads/Writes Old) |
| |
+----------+---------+
|
| Reads/Writes (Old Schema)
v
+-------------------+
| |
| Old Database |
| (Old Schema) |
| |
+-------------------+
Phase 1: Dual Write Introduction (No Read Change)
Your new application version (v2) is deployed alongside v1. Critically, v2 writes to both the old schema and the new schema, while both v1 and v2 continue to read from the old schema. This ensures the old path is always kept up to date and valid.
+--------------------+ +--------------------+
| Application v1 | | Application v2 |
| (Reads/Writes Old) | | (Writes Old & New) |
| | | (Reads Old) |
+----------+---------+ +----------+---------+
| |
| Reads/Writes (Old Schema) | Writes (New Schema)
v v
+-------------------+ +-------------------+
| | | |
| Old Database |<----------| New Database |
| (Old Schema) | | (New Schema) |
| | | |
+-------------------+ +-------------------+
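The Phase 1 write path can be sketched with in-memory stand-ins for the two schemas. This is a minimal illustration, not a real framework: `OldUserStore`, `NewUserStore`, and `DualWriteRepository` are hypothetical names, and the key property shown is that a failed shadow write never breaks the old path.

```python
class OldUserStore:
    """Stand-in for the old-schema table."""
    def __init__(self):
        self.rows = {}

    def save(self, user_id, prefs):
        self.rows[user_id] = prefs

    def load(self, user_id):
        return self.rows.get(user_id)


class NewUserStore:
    """Stand-in for the new-schema table (non-null JSON structure)."""
    def __init__(self):
        self.rows = {}

    def save(self, user_id, prefs):
        self.rows[user_id] = {"preferences": prefs or {}}


class DualWriteRepository:
    """v2 write path: the old store stays the source of truth.
    A failed shadow write is logged, never surfaced, so rolling
    back to v1 is always safe."""
    def __init__(self, old, new):
        self.old = old
        self.new = new

    def save(self, user_id, prefs):
        self.old.save(user_id, prefs)      # must succeed first
        try:
            self.new.save(user_id, prefs)  # best-effort shadow write
        except Exception as exc:
            print(f"shadow write failed for {user_id}: {exc}")

    def load(self, user_id):
        # Phase 1: all reads still come from the old schema.
        return self.old.load(user_id)
```

The ordering matters: the old-schema write happens first and unconditionally, so even a total failure of the new store leaves v1's view of the world intact.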
Phase 2: Backfill Historical Data
While dual writes ensure new data is captured in both places, existing historical data only lives in the old schema. An asynchronous job is run to backfill and transform this data from the old schema into the new schema. This must be idempotent and carefully handle concurrent writes from Phase 1.
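The backfill's two requirements, idempotence and safe interleaving with Phase 1 dual writes, can be sketched as follows. Plain dicts stand in for the tables here; a real job would page through rows in primary-key order and use an upsert that never overwrites rows the dual writer has already touched (e.g. `INSERT ... ON CONFLICT DO NOTHING` in PostgreSQL).

```python
def backfill(old_rows, new_rows, batch_size=100):
    """Copy old-schema rows into the new schema in batches.
    Returns the number of rows migrated in this run."""
    ids = sorted(old_rows)
    migrated = 0
    for start in range(0, len(ids), batch_size):
        for row_id in ids[start:start + batch_size]:
            # Idempotence + concurrency safety: skip rows already in
            # the new store. A concurrent Phase 1 dual write is newer
            # than anything this job would copy, so it must win.
            if row_id in new_rows:
                continue
            new_rows[row_id] = {"preferences": old_rows[row_id] or {}}
            migrated += 1
    return migrated
```

Because already-present rows are skipped, rerunning the job after a crash is a no-op for completed rows, and a dual write that lands mid-backfill is never clobbered by stale data.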
Phase 3: Read Switchover (Still Dual Writing)
Once the backfill is complete and verified, you update Application v2 to read primarily from the new schema. Application v1 continues to read and write to the old schema. Dual writes from v2 continue, ensuring both databases remain synchronized.
+--------------------+ +--------------------+
| Application v1 | | Application v2 |
| (Reads/Writes Old) | | (Writes Old & New) |
| | | (Reads New) |
+----------+---------+ +----------+---------+
| |
| Reads/Writes (Old Schema) | Writes (New Schema)
v v
+-------------------+ +-------------------+
| | | |
| Old Database |<----------| New Database |
| (Old Schema) | | (New Schema) |
| | | |
+-------------------+ +-------------------+
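The read switchover is usually rolled out gradually rather than flipped at once. One way to sketch that, assuming a percentage-based feature flag, is sticky hash bucketing; the bucketing scheme below is illustrative, and any stable hash would do.

```python
import zlib


def reads_from_new(user_id: str, rollout_pct: int) -> bool:
    """Return True if this user's reads should hit the new schema.
    Deterministic bucketing keeps each user on one read path between
    requests and deploys, which makes discrepancies reproducible."""
    bucket = zlib.crc32(user_id.encode("utf-8")) % 100
    return bucket < rollout_pct
```

Dialing `rollout_pct` from 1 to 100 over days gives you a canary for the new read path while dual writes keep both databases synchronized underneath.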
Rollback Point: If an issue arises at any point during Phases 1-3, you can instantly roll back from Application v2 to Application v1. Since v1 was always writing to the old schema, and v2 was also writing to it, the critical data for your production system remains intact and consistent in the old schema. The new schema might contain inconsistent or orphaned data, but your core business operations are unaffected.
Phase 4: Cutover and Cleanup
Once confidence is high (e.g., after weeks of monitoring with no issues), you can remove the dual writes from v2 and eventually deprecate/drop the old schema or database.
Real-world Application: Stripe's Data Migrations
Stripe, processing billions of API calls daily, cannot afford data loss or significant downtime. Their approach to critical data migrations (e.g., changing how PaymentIntent objects are stored, or migrating customer data between sharded databases) heavily relies on phased strategies for zero-downtime, zero-data-loss transitions.
When migrating to new data models or infrastructure, Stripe often employs a variation of the dual-write pattern, sometimes extended with a "shadow-read" phase. For instance, if migrating a service to a new database or schema, they might:
- Replicate data: Stream existing data from the old system to the new, ensuring eventual consistency.
- Dual-write: All new writes go to both the old and new systems. This is critical for rollback: the old system always has the latest state.
- Shadow-read/Verify: New application code starts reading from the new system but compares the result with the old system. If there's a discrepancy, it logs an error but serves the response from the old system. This acts as a "dark launch" validation, catching data inconsistencies before they impact users.
- Phased Read Cutover: Once shadow-reads are validated (e.g., 99.999% consistency over days), reads are progressively switched to the new system, starting with a small percentage of traffic (canary deployment) and gradually increasing.
- Remove Dual-write: Once all traffic is routed to the new system and it's stable, the dual-write logic is removed.
- Decommission: The old system is eventually decommissioned.
This process can take weeks or even months for critical systems, providing an extremely long window for verification and instant rollback at any stage before the old system is retired. The overhead of writing twice (or reading twice) is a recognized trade-off for business continuity.
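The shadow-read step described above can be sketched as a small comparator: it always serves the old system's answer while recording discrepancies for offline review. Function and variable names here are illustrative, not Stripe's actual implementation.

```python
def shadow_read(read_old, read_new, key, mismatches):
    """Serve from the old system; compare against the new system and
    log any discrepancy without ever failing the user's request."""
    old_val = read_old(key)  # authoritative response
    try:
        new_val = read_new(key)
        if new_val != old_val:
            # Log, never fail: new-path bugs stay invisible to users.
            mismatches.append((key, old_val, new_val))
    except Exception as exc:
        mismatches.append((key, old_val, f"error: {exc}"))
    return old_val
```

The mismatch log is what drives the cutover decision: reads only move to the new system once the discrepancy rate stays below the agreed threshold for long enough.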
Common Mistakes Engineers Make
- Forgetting Data Integrity Constraints: Focusing only on changing column types while neglecting `NOT NULL` constraints or unique indexes. If you add `NOT NULL` to a column that has existing `NULL` values, the migration will fail unless you backfill defaults before applying the constraint. This seems basic, but it's a frequent cause of production failures.
- Prematurely Dropping Old Data or Indices: Convinced the migration is "done" after a few hours, engineers drop old columns, tables, or indices. If a hidden bug emerges days later, a rollback becomes a partial data restoration from backup (data loss) or a manual, complex data reconstruction task. Keep old structures around for weeks or months if possible, even if unused, until full confidence is achieved.
- Inadequate Monitoring on the Old Path: During dual-write, the focus often shifts entirely to the new path. If the old path's writes (which are critical for rollback) start failing due to unexpected application interactions or database load, and you don't monitor it, your safety net is silently compromised. Monitor both paths comprehensively, especially write success rates and latencies.
Interview Angle
Interviewers love to probe into data migration because it exposes your understanding of trade-offs and production resilience.
Question: "You need to add a new `status` column (enum type) to a critical `orders` table that processes thousands of transactions per second. Describe a zero-downtime, zero-data-loss migration strategy and how you'd handle a rollback."
Strong Answer Breakdown:
- Phase 1: Safe Schema Evolution. Start by adding the new `status` column as `NULLABLE` with no default. This ensures existing rows remain valid. Deploy this schema change without application code changes.
- Phase 2: Dual Write with Backfill.
  - Deploy a new version of your application (v2) that, when writing or updating an order, writes to both the old and new `status` columns. For existing orders, backfill the `status` column based on existing logic or a reasonable default value using an asynchronous, idempotent job.
  - Application v1 continues to operate as normal, reading and writing only the old columns.
  - Rollback Safety: At this stage, if v2 has issues, you can roll back to v1. All critical data (including the old status representation) is preserved in the original format. The new `status` column might become stale or inconsistent, but it doesn't impact v1.
- Phase 3: Phased Read Switchover.
  - Once the backfill is complete and the dual-write period has passed without issues, deploy an updated v2 that reads `status` from the new column first. If it's `NULL` (indicating an un-migrated row or a write from an old version), fall back to inferring status from the old logic. Continue dual-writing.
  - Use feature flags to gradually roll out this read change to a small percentage of users, carefully monitoring for errors and data discrepancies.
- Phase 4: Enforce Constraint and Cleanup.
  - Once confident, add a `NOT NULL` constraint to the `status` column.
  - Finally, remove the old status logic and column, typically after a significant soak period (weeks).
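The `NULL`-fallback read from Phase 3 of this answer can be sketched as a single function. The legacy fields used here (`is_shipped`, `is_cancelled`) are hypothetical stand-ins for whatever the old logic actually inferred status from.

```python
def effective_status(row):
    """Prefer the new `status` column; fall back to legacy inference
    for rows the backfill has not reached yet (status is None)."""
    if row.get("status") is not None:
        return row["status"]
    # Hypothetical legacy inference from the old boolean columns.
    if row.get("is_cancelled"):
        return "cancelled"
    if row.get("is_shipped"):
        return "shipped"
    return "pending"
```

Once the backfill is verified and the `NOT NULL` constraint is in place, the fallback branch becomes dead code and can be deleted along with the old columns.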
- Key Mitigations and Trade-offs:
- Data Inconsistency: Validate data written to the new column against the old. Use eventual consistency patterns.
- Performance Overhead: Dual writes add latency and database load. Monitor this closely.
- Complexity: More application code paths, more deployment steps. Mitigate with automated testing and clear operational runbooks.
- Rollback: Emphasize that the existence of the old, valid data and the ability for the old application version to function means you can always revert to a known good state without data loss.
Need help designing robust migration strategies or preparing for your next system design interview?
Book a 1:1 session with me on Topmate to discuss your challenges and level up your skills.