<devtips/>
SQL was fine… until it wasn’t. Why we went NoSQL and never looked back

We swapped our trusty relational DB for a document store and suddenly stopped firefighting at 2 AM. Here’s the messy truth: benchmarks, trade-offs, and facepalm moments included.

Our Postgres setup had been rock-solid for years until one traffic spike turned it into a laggy, overheating mess. Queries that used to run in milliseconds started dragging into seconds. Users were refreshing pages like they were trying to win a sneaker drop, and our CPU graphs looked like a SpaceX launch.

We scaled vertically. We optimized indexes. We sacrificed a rubber duck to the debugging gods. But the truth hit us: for our workload, SQL was the wrong tool. We were storing massive JSON blobs, doing more reads than writes, and changing schemas faster than a startup changes logos.

So we went for it: we migrated everything to NoSQL. Three months later, we could handle 5× the traffic and queries were 10× faster. But it wasn’t all sunshine: we traded some relational comforts for raw speed and flexibility.

TLDR:

  • SQL started choking under our JSON-heavy, read-heavy load.
  • We moved to NoSQL (MongoDB) for horizontal scaling + flexible schemas.
  • Got huge performance gains… plus a few “eventual consistency” scares.

Table of Contents:

  • When SQL becomes the bottleneck
  • Why we chose NoSQL
  • The migration war story
  • Benchmarks & real-world gains
  • Trade-offs & gotchas
  • Lessons learned & advice
  • Conclusion + resources

When SQL becomes the bottleneck

For the first two years, Postgres was a champ. We could run multi-table JOINs, wrap everything in ACID transactions, and sleep well at night. Then traffic started climbing fast.

The cracks showed up like this:

  • Lock contention: heavy writes blocking reads, making requests queue like a checkout line on Black Friday.
  • JOIN overload: our prettiest queries turned into lag machines once we hit millions of rows.
  • Scaling pain: vertical scaling (more CPU/RAM) got expensive and still couldn’t keep up at peak load.

One of our problem queries looked like:

SELECT u.name, COUNT(o.id) AS orders
FROM users u
JOIN orders o ON o.user_id = u.id
JOIN payments p ON p.order_id = o.id
WHERE p.status = 'completed'
GROUP BY u.name
ORDER BY orders DESC;

It flew at 10k rows, crawled at 10M.

The killer detail? Our app didn’t really need relational gymnastics. Most requests fetched whole JSON-like objects, and schema changes were frequent. Forcing that into relational tables was like shoving a beanbag into a filing cabinet: technically possible, but a bad idea.

At some point, we had to decide:

Keep hacking indexes until we lose our minds, or move to a DB that wants to store data the way we actually use it.

Why we chose NoSQL

This wasn’t a “SQL bad, NoSQL good” moment. It was more like, “Our database is choking, what fits our data better?”

Our reality check

  • JSON-heavy data: most of our core entities were already blobs of nested data.
  • Read-heavy load: ~80% of queries pulled entire objects, not stitched-together relations.
  • Fast-changing schema: new features meant frequent migrations, which slowed us down in SQL.

Relational databases shine with structured, interconnected data. But our workload screamed for:

  • Document storage
  • Horizontal scaling
  • Flexible schema

The contenders

We looked at:

  • MongoDB: mature ecosystem, solid sharding, easy JSON storage.
  • Couchbase: built-in caching and strong replication.
  • DynamoDB: AWS-managed infinite scaling (and AWS bills).

Why we picked MongoDB

  • Document model fit perfectly.
  • Query syntax wasn’t alien to the team.
  • Huge community + docs.


The migration war story

Moving from SQL to NoSQL isn’t a port; it’s a remodel. You don’t just swap the engine; you redesign the whole car.

Rethink the data model

In Postgres, we kept things normalized: separate tables, foreign keys, and JOINs doing the heavy lifting.
In Mongo, we denormalized, storing related data together so reads were self-contained.

Example shift:

  • SQL: users → orders → payments
  • NoSQL: One user document with an orders array, each order embedding its payment details.
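To make the shift concrete, here’s a sketch of what such an embedded document might look like (field names are illustrative, not our exact schema):

```python
# Hypothetical denormalized user document: orders are embedded in the user,
# and each order embeds its payment details, so one read fetches everything.
user_doc = {
    "_id": "u123",
    "name": "Ada",
    "orders": [
        {"order_id": "o1", "total": 42.50,
         "payment": {"status": "completed", "method": "card"}},
        {"order_id": "o2", "total": 9.99,
         "payment": {"status": "pending", "method": "paypal"}},
    ],
}

# What used to be a three-table JOIN is now a list comprehension over one doc.
completed = [o for o in user_doc["orders"]
             if o["payment"]["status"] == "completed"]
```

The trade-off is baked in: reads are self-contained, but any data duplicated across documents is now yours to keep in sync.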

Build the migration pipeline

We wrote a script to:

  1. Pull batches from Postgres.
  2. Transform rows into JSON docs.
  3. Insert into Mongo.

for user in sql_users:
    orders = get_orders(user.id)                 # related rows from Postgres
    doc = {"name": user.name, "orders": orders}  # denormalized document
    mongo.users.insert_one(doc)

Not fancy but safe, chunked, and retryable.

Index or die

In SQL, some indexes come almost for free (PKs, foreign keys). In Mongo, if you don’t index what you query, you’ll feel it instantly. We profiled slow queries and added targeted indexes until everything was under ~50ms.
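For a flavor of what “index what you query” looked like, here’s a sketch in pymongo-style index specs; the field names are hypothetical, and the integer sort directions match pymongo’s `ASCENDING`/`DESCENDING` constants:

```python
# Sort directions, same integer values pymongo's ASCENDING/DESCENDING use.
ASCENDING, DESCENDING = 1, -1

# Index specs matching our hottest read paths (illustrative field names).
INDEX_SPECS = [
    # Filter by embedded payment status, newest documents first.
    [("orders.payment.status", ASCENDING), ("created_at", DESCENDING)],
    # Point lookups of a single order inside a user document.
    [("orders.order_id", ASCENDING)],
]

def ensure_indexes(collection):
    """Run at deploy time; create_index is a no-op if the index exists."""
    for spec in INDEX_SPECS:
        collection.create_index(spec)
```

Running this at every deploy is cheap because index creation is idempotent; the expensive part is the first build on a large collection.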

Benchmarks & real-world gains

Once the migration dust settled, we put both databases through the same set of tests using real production queries. The difference was night and day.

Before (Postgres)

  • Avg query latency (peak hours): 450–800ms
  • Max concurrent users before errors: ~2,500
  • CPU spikes: 85–95% during traffic surges
  • Scaling cost: $$$ for bigger instances

After (MongoDB)

  • Avg query latency (peak hours): 50–90ms
  • Max concurrent users before errors: ~12,000
  • CPU spikes: rarely above 60%
  • Scaling cost: spread across smaller nodes → cheaper overall

We also ran a stress test where we doubled simulated traffic every minute until something cracked. Postgres tapped out first: queries piled up in connection waits. Mongo kept up until we hit a network bottleneck instead of a DB bottleneck.

Unexpected wins

  • Cheaper infra: horizontal scaling with commodity nodes cost less than one beefy SQL box.
  • Simpler reads: most endpoints now hit one collection, one query.

Surprise headaches

  • Eventual consistency weirdness: occasionally, a just-updated record wouldn’t show up instantly on another node.
  • Learning curve: getting the aggregation framework right took practice.

Or, as one dev on our team put it:

“We traded slow JOINs for a fast DB that sometimes gaslights us for 200ms.”
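For a flavor of that learning curve: against the embedded model, the earlier “completed orders per user” SQL query roughly becomes an aggregation pipeline like this (a sketch, assuming the illustrative embedded orders/payment field names):

```python
# Rough pipeline equivalent of the SQL JOIN + GROUP BY, executed as
# db.users.aggregate(pipeline). Stages run top to bottom.
pipeline = [
    {"$unwind": "$orders"},                               # one doc per order
    {"$match": {"orders.payment.status": "completed"}},   # WHERE p.status = ...
    {"$group": {"_id": "$name", "orders": {"$sum": 1}}},  # GROUP BY u.name
    {"$sort": {"orders": -1}},                            # ORDER BY orders DESC
]
```

Same shape of question, very different mental model: you describe a sequence of transformations over documents instead of a declarative join.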

Trade-offs & gotchas

Switching to NoSQL gave us speed and scale, but it wasn’t a free lunch.

1. You lose relational guarantees

No foreign keys, no automatic cascades. If you delete a user, you’d better remember to clean up their orders yourself or you’ll be haunted by orphaned data.
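A sketch of what that manual cleanup looks like (pymongo-style calls; the `order_archive` collection is hypothetical):

```python
def delete_user(db, user_id):
    """Delete a user and everything that references them, by hand.
    There is no ON DELETE CASCADE: forget the second call and you
    get orphaned documents."""
    # Embedded orders vanish with the parent document (one atomic delete)...
    db.users.delete_one({"_id": user_id})
    # ...but anything referencing the user from other collections must be
    # cleaned up explicitly.
    db.order_archive.delete_many({"user_id": user_id})
```

We ended up wrapping every cross-collection delete in a helper like this so nobody had to remember the cleanup steps by heart.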

2. Eventual consistency is… a vibe

With replication across shards, sometimes fresh data takes a beat to appear everywhere. For most use cases it’s fine. For a financial transaction? That’s a “nope.”
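What helped us was deciding consistency per endpoint instead of globally. A sketch (endpoint names and settings are illustrative; in pymongo these map to `ReadConcern`/`WriteConcern` arguments on `get_collection`):

```python
# Per-endpoint consistency choices (illustrative). "majority" waits for
# replication and won't show stale data; "local" is faster but can lag.
CONSISTENCY = {
    "/checkout": {"read": "majority", "write": "majority"},  # money: no lag
    "/profile":  {"read": "local",    "write": "majority"},  # stale is fine
}

def read_concern_for(endpoint):
    # Unknown endpoints default to the strict setting.
    return CONSISTENCY.get(endpoint, {"read": "majority"})["read"]
```

The point is to pay the latency cost of strong reads only where staleness actually hurts.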

3. Query limitations

Ad-hoc joins? Nope. Multi-collection transactions? Supported, but slower and less common. You design your data model around queries up-front or you’ll regret it later.
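When we do need atomicity across collections, Mongo’s sessions API covers it, at a latency cost. A sketch (pymongo; collection and field names are hypothetical):

```python
def archive_order(client, user_id, order_id):
    """Move one embedded order into an archive collection, atomically.
    Requires a replica set; noticeably slower than single-document writes."""
    db = client.appdb
    with client.start_session() as session:
        with session.start_transaction():
            # find_one_and_update returns the pre-update document by default,
            # so we can still read the order we just pulled from the array.
            user = db.users.find_one_and_update(
                {"_id": user_id},
                {"$pull": {"orders": {"order_id": order_id}}},
                session=session,
            )
            order = next(o for o in user["orders"]
                         if o["order_id"] == order_id)
            db.order_archive.insert_one({"user_id": user_id, **order},
                                        session=session)
```

Single-document updates stay the fast path; transactions like this are the exception, not the default.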

4. Different scaling challenges

You’re not “done” when you go NoSQL; now you’re managing shard keys, balancing data distribution, and keeping indexes sane.

5. When SQL still wins

If your app:

  • Requires complex relational integrity
  • Runs heavy analytical queries
  • Needs rock-solid consistency

…SQL is still your best friend.

In short: NoSQL isn’t a silver bullet. It’s a tool. If your workload fits, it’s glorious. If it doesn’t, you’ve just made your life harder.

Lessons learned & advice

If I had to do it over, here’s what I’d tell Past Me before touching a single line of migration code:

Profile before you panic

Don’t assume SQL is your bottleneck; sometimes it’s bad queries, missing indexes, or a misconfigured pool. Prove it with metrics first.
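“Prove it with metrics” can be as simple as wrapping your data-access functions and looking at tail latency before blaming the engine. A minimal stdlib sketch (the `fetch_user` function is a stand-in):

```python
import time
from functools import wraps

LATENCIES = {}  # function name -> list of observed durations in seconds

def timed(fn):
    """Record how long each call actually takes, including retries."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            LATENCIES.setdefault(fn.__name__, []).append(
                time.perf_counter() - start)
    return wrapper

def p95(name):
    """95th-percentile latency for one wrapped function."""
    samples = sorted(LATENCIES[name])
    return samples[int(0.95 * (len(samples) - 1))]

@timed
def fetch_user(user_id):
    time.sleep(0.001)  # stand-in for the real database query
    return {"id": user_id}

for i in range(20):
    fetch_user(i)
```

Averages hide pain; it’s the p95/p99 numbers that tell you whether the database, the query, or the pool is the real problem.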

Design for your access patterns

In NoSQL, you don’t normalize for elegance; you model for how your app reads and writes data. The wrong structure will tank performance faster than in SQL.

Expect new problems

We killed our JOIN latency issues, but we picked up shard balancing, manual consistency checks, and new query quirks.

Go hybrid if it makes sense

Polyglot persistence is real: plenty of apps use SQL for relational data and NoSQL for flexible storage. It’s not heresy; it’s pragmatic.

Learn your new DB deeply

Mongo’s aggregation framework, Dynamo’s partition key design, Couchbase’s N1QL: whatever you choose, mastering it pays dividends.


Conclusion

Would we do it again?
Absolutely. For our workload, the shift was worth every late-night migration script and every “why is this shard unbalanced” moment. We went from constant firefighting to comfortably absorbing traffic spikes, and we bought ourselves room to grow without upgrading to absurdly expensive DB hardware.

But here’s the part some folks gloss over: NoSQL isn’t the hero in every story. If your data is deeply relational, your queries are complex, and consistency is life-or-death, stick with SQL. It’s battle-tested for a reason.

For us, the trade-off was clear: speed and scalability > perfect relational integrity. We accepted the quirks, learned the new tools, and stopped waking up to pager alerts at 3 AM.

The next step for us? Multi-region replication and maybe layering in a caching tier like Redis, because if dev life has taught me anything, it’s that “done” is a myth.


Top comments (2)

Franck Pachot

Great feedback on lessons learned.

You said:

If you delete a user, you’d better remember to clean up their orders

Does that mean that not all orders are embedded in users? It would make sense to avoid unbounded arrays - keep recent orders embedded and archive older ones into other documents (same or different collection). Is that what you do?

David Au Yeung

Why not just create a summary instead of relying on heavy joins? I think there are many solutions that avoid the risks of a data migration, especially if your business is critical.

CREATE TABLE order_summary (
    user_id INT PRIMARY KEY,
    name VARCHAR(255),
    completed_orders_count INT
);