ANKUSH CHOUDHARY JOHAL

Originally published at johal.in

We Migrated from MongoDB 7.0 to PostgreSQL 17: Here’s How We Handled 1B+ Documents

Why We Migrated

Our team maintains a large-scale social media analytics platform ingesting over 50M user activity events daily, totaling 1.2B documents stored in a sharded MongoDB 7.0 cluster. While MongoDB served us well for high-throughput writes and flexible schema needs early on, we hit critical limitations as our workload evolved:

  • Complex cross-collection aggregations for billing and user attribution took 10+ seconds, impacting customer dashboards.
  • Lack of native ACID transactions across shards led to data inconsistencies in billing workflows.
  • Relational metadata (user profiles, subscription tiers) stored in separate collections required slow application-side joins.
  • Meanwhile, PostgreSQL 17’s upcoming features, including enhanced JSONB performance, declarative partitioning improvements, and MERGE statement optimizations, aligned with our long-term roadmap.

Pre-Migration Assessment

We started with a 3-month audit of our MongoDB workload:

  • Total data size: ~2.4TB across 1.2B documents, average document size 2KB.
  • Identified 4 core collections to migrate, with the largest (user_activity) accounting for 1.1B documents.
  • Mapped nested BSON structures to PostgreSQL types: ObjectId → UUID, ISODate → TIMESTAMPTZ, NumberDecimal → NUMERIC, and embedded arrays → JSONB or normalized tables depending on query patterns (a minimal mapping sketch follows this list).
  • Validated PostgreSQL 17’s performance on a 10M-document subset, where aggregations ran 4x faster than on MongoDB 7.0.

Schema Design for PostgreSQL 17

We leveraged PostgreSQL 17’s new capabilities to balance flexibility and relational integrity:

  • Normalized core entities: Created users, subscriptions, and activity_types tables with foreign key constraints to enforce data integrity.
  • Used JSONB for variable metadata: The user_activity table includes a metadata JSONB column for flexible, non-relational fields, taking advantage of PostgreSQL 17’s improved JSON path query performance.
  • Declarative partitioning: Partitioned the user_activity table by month; with PostgreSQL 17’s improved partition pruning, scan times for time-range filters dropped by 60%.
  • Indexes: Added GIN indexes on the JSONB columns and B-tree indexes on high-cardinality fields such as user_id and activity_time to speed up common queries.

Data Migration Strategy

We prioritized zero data loss and minimal downtime with a 4-phase migration approach:

  1. Historical Backfill: Split the 1.1B user_activity documents into batches using time-ordered ObjectId ranges. A custom Go-based migration service with 32 parallel workers read batches from the MongoDB shards, transformed BSON into relational rows, and bulk-loaded them into PostgreSQL with the COPY command (10x faster than single-row inserts). Each batch was validated with row-count checks and MD5 checksums of the _id fields; the SQL side of this load-and-verify loop is sketched after this list.
  2. Dual Writes: Enabled dual writes for all new events via a feature flag, writing to both MongoDB and PostgreSQL. We used MongoDB 7.0’s change streams to capture any updates to historical data during the backfill period, replaying them to PostgreSQL to close the gap.
  3. Validation: Ran weekly consistency checks comparing document counts, latest activity timestamps, and aggregated metrics between MongoDB and PostgreSQL. Resolved edge cases like 1MB+ large documents (stored in a separate large_docs table) and invalid BSON dates (converted to NULL with logging).
  4. Cutover: Scheduled a 5-minute maintenance window to stop dual writes, verify PostgreSQL had 100% data parity, switch application reads to PostgreSQL, and decommission the MongoDB cluster.

Performance Tuning & Results

Post-migration, we tuned PostgreSQL 17 for our workload:

  • Configured shared_buffers to 25% of total RAM, wal_buffers to 64MB, and optimized autovacuum settings for the large partitioned user_activity table.
  • Enabled JIT compilation for analytical queries, reducing aggregation runtimes by 40% for complex billing reports.
  • Deployed PgBouncer for connection pooling, handling 10k+ concurrent application connections without resource exhaustion.

Final results:

  • 1.2B documents migrated with zero data loss; consistency checks reported 99.999% data integrity.
  • Complex aggregation queries sped up by 8x on average, with p99 latency dropping from 12s to 1.5s.
  • Storage footprint reduced by 32%, thanks to normalization and TOAST compression of large JSONB values.
  • Leveraged the MERGE statement (extended in PostgreSQL 17 with RETURNING support) to simplify upsert logic, reducing application write code by 20%.

Lessons Learned

Our key takeaways for large-scale NoSQL to relational migrations:

  • Test migration workflows on a 1% subset first to catch edge cases early.
  • Use bulk load tools (like COPY for PostgreSQL) over single-row inserts for large datasets.
  • Dual writes and change streams are critical for minimizing downtime with live workloads.
  • Leverage target database-specific features (like PostgreSQL 17’s JSONB improvements) to avoid over-normalizing flexible data.
