Felipe Malaquias for AWS Community Builders

Originally published at Medium

So long DocumentDB, hello MongoDB Atlas

Why did we replace DocumentDB with MongoDB Atlas?

CPU IO Waits

TL;DR: write IOPS scaling capabilities (sharding), plus a few minor perks.

Disclaimer: This article does not cover application optimization but rather focuses on comparing these two database services.

Before we start, why did we even use DocumentDB?

While moving our on-premises workload to AWS, DocumentDB seemed like a natural replacement for our in-house-maintained MongoDB cluster.

DocumentDB is a robust and reliable database service for MongoDB-based applications and can make your lift-and-shift process easier, but there is probably a better fit for your needs.

Why? The reasons vary according to your needs, but they will probably boil down to one or more of the ones described below.

Did you write your code using MongoDB drivers and expect it to behave like MongoDB? It won’t.

Amazon DocumentDB is built on top of AWS’s custom Aurora platform, which has historically been used to host relational databases.

This is the first statement in the architectural comparison on the MongoDB website. Amazon DocumentDB supports the MongoDB v4.0 and v5.0 APIs, but it does not support all features from those versions, let alone from newer ones (e.g., the latest MongoDB v8.0). DocumentDB is currently only about 34% compatible with MongoDB.

Main differences that were relevant for us:

  • no support for index prefix compression

  • no support for network compression (see the driver sketch after this list)

  • limited support for data compression (it must be enabled manually per collection, with a compression threshold configured per document)

  • additional index type limitations in the elastic (sharded) setup

  • built on top of Aurora, which makes it hard for the service to keep up with MongoDB updates and to support the same capabilities
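
Regarding the network compression point above: MongoDB drivers can negotiate wire compression with the server, which matters when your application and cluster exchange large result sets. Below is a minimal pymongo sketch; the connection string is a placeholder, and against DocumentDB the setting is simply not negotiated because the server does not advertise compression support.

```python
from pymongo import MongoClient

# Ask the driver to negotiate wire compression with the server.
# MongoDB/Atlas will honor zstd/snappy/zlib if the corresponding Python
# packages are installed and the server supports them; a server that does
# not advertise compression (e.g., DocumentDB) just ignores the request.
client = MongoClient(
    "mongodb+srv://user:pass@example-cluster.mongodb.net/",  # placeholder URI
    compressors="zstd,snappy,zlib",  # ordered by preference
)

print(client.admin.command("ping"))
```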

Low capabilities for scaling write operations

While Aurora’s storage layer is distributed, its compute layer is not, limiting scaling options.

This is the second statement on the architectural comparison page, and it is true.

While DocumentDB is great for scaling read-heavy applications (currently up to 15 instances; see the limits described here), it provides only a few options for scaling writes, and even those were mostly introduced in mid-2024.

That means that if your cluster spends too much time waiting for I/O (check your cluster's performance insights and disk queue depth) and you need to scale write IOPS/throughput, you can, for example, enable the I/O-optimized storage flag, which reduces costs and uses SSDs to increase performance. However, I found no clear documentation on how many IOPS this translates to; from tests, it appears to use EBS gp3 volumes, which would mean a 3,000 IOPS baseline.
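
To check whether a DocumentDB cluster is actually I/O-bound before buying more hardware, the DiskQueueDepth metric in the AWS/DocDB CloudWatch namespace is a good starting point. A hedged boto3 sketch, with placeholder instance and region names:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")  # assumed region

# Average and maximum disk queue depth of the primary instance over the
# last 24 hours, in 5-minute buckets. Sustained high values suggest the
# storage layer, not the CPU, is the write bottleneck.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/DocDB",
    MetricName="DiskQueueDepth",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-docdb-primary"}],  # placeholder
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), round(point["Maximum"], 2))
```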

In addition to that, the only remaining options are:

  • Scaling vertically: very expensive, with no clear documentation on how it translates into improved IOPS. It increases CPU, allowing more incoming requests to be processed, but storage remains the bottleneck for writes.

  • Sharding: sharding in DocumentDB is a brand-new feature released in January 2024, and it is still not supported in all regions. In addition, it currently has some serious limitations: for example, last I checked, one could only set two nodes per shard, which means very low resilience.

And that’s it. Those are the only options for scaling write operations in DocumentDB at the moment. Once financial cost, resilience, and scalability are taken into account, they are unlikely to be a fit for write-heavy applications that require availability close to 100% and high write IOPS.

Better cloud-native technologies

The last reason could be that you are building a new application and have the flexibility to choose some other cloud-native technology that better fits your use case, without having to refactor your application’s data layer or introduce breaking changes.

Document-based DBs offer great flexibility for storing data, but nowadays several technologies, like DynamoDB, can provide better scalability and throughput with serverless offerings. However, each requires a different approach in your application, and each comes with its own pitfalls.

If you are writing a brand new application, think about how your data changes over time, how it is structured, and how often it will be inserted, updated, deleted, and queried. Consider the required availability of your DB cluster, its resilience, and its costs versus the risk of financial loss from downtime. Then choose a technology and sizing that fit your specific use case. You will very likely end up with something other than DocumentDB.

What are the benefits of migrating from DocumentDB to MongoDB Atlas?

In short, besides vertical scaling (usually a much more expensive solution), MongoDB Atlas has better support for sharding, a wider choice of instance types (low-CPU, general, and NVMe), and provisioned IOPS, which gives you more fine-grained control over your cluster's capabilities and costs.

The table below shows a brief comparison between both solutions and their current functionality as of the writing of this article.

DocumentDB vs MongoDB Atlas brief comparison

One can also set up an Atlas cluster using private links, which might improve latency (especially for queries that require getMore), but I haven’t tested this yet.

In addition, one can write code to implement some compute autoscaling for DocumentDB, but it’s not supported out of the box. For more info, see the recommendations for DocumentDB scaling here.
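
To give an idea of what “writing code” means here: you would watch your own metrics and call the DocumentDB API to add or remove replicas yourself. A rough boto3 sketch, assuming the identifiers below are yours to define; note that this only scales read capacity, not the single writer.

```python
import boto3

docdb = boto3.client("docdb", region_name="eu-central-1")  # assumed region

def add_read_replica(cluster_id: str, instance_id: str,
                     instance_class: str = "db.r6g.large") -> None:
    """Attach one more replica instance to an existing DocumentDB cluster.

    Something like this would be triggered by your own CloudWatch alarm or
    scheduler, since DocumentDB has no built-in compute autoscaler.
    """
    docdb.create_db_instance(
        DBInstanceIdentifier=instance_id,
        DBInstanceClass=instance_class,
        Engine="docdb",
        DBClusterIdentifier=cluster_id,
    )

# Hypothetical identifiers
add_read_replica("my-docdb-cluster", "my-docdb-cluster-replica-3")
```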

NVMe instances

At first, I thought NVMe instances would solve all our problems. Don’t be fooled!

MongoDB Atlas M40 Local NVMe SSD current offering

Yes, they provide enormous amounts of IOPS, which may result in excellent throughput, and they are relatively cheap compared with the much poorer results you would get from something like provisioned IOPS (capped at a maximum of 6,000 due to AWS storage block restrictions). But they come with a significant cost of their own: very long recovery times.

In general, NVMe clusters are very robust and performant, but to achieve such high storage throughput they use locally attached ephemeral NVMe (non-volatile memory express) SSDs. As a consequence, a file-copy-based initial sync is always used to sync the nodes of an NVMe cluster whenever an initial sync is required. Because of that, scaling the cluster up or down, or restoring backups, takes a very long time, and if you need to scale fast, you will find yourself in a terrible situation.

My advice? Avoid them as much as you can: optimize your application and scale the DB cluster horizontally, adding shards if you need write IOPS, simply adding instances if you only need read IOPS, or both in the worst case. You might even end up paying less for similar or better performance.

Low-CPU instances

Those instances are great if your application doesn’t require much processing on the DB side (which is good practice anyway).

CPU and memory are much more valuable resources on the DB side than in your application container. Scaling application containers and distributing parallel processing is a much cheaper and less time-sensitive operation than scaling DB clusters, especially with easy-to-use tooling such as Kubernetes and Karpenter in combination with spot instances.

By using low-CPU instances, you can benefit from better pricing when choosing instances with more memory. Memory is very important for caching, which speeds up your queries by reducing the need to load data from disk, which is slow and can quickly degrade the performance of your cluster.

If you have questions about sizing, I recommend reading the official MongoDB documentation.

General instances

Those instances have double the CPU of the low-CPU tier, but they also come at about a 20% price increase. So, if you need to handle processing peaks and can afford the price, go for it.

Atlas Console

The Atlas console offers great features for executing queries and aggregation pipelines against your databases, as well as intelligent detection of inefficient indexes and much more.

MongoDB Atlas Collection View

Because of the features offered within the console and the ease of connecting to the cluster through the Mongo Shell, we no longer need licenses for third-party tools such as Studio 3T.

In addition, it offers much more in-depth metrics than DocumentDB for analyzing your cluster, such as how much data is being compressed on disk.

Normalized system CPU metrics of sharded cluster

You might want to ship these metrics somewhere else, such as Grafana, though: if you want to analyze past peaks, MongoDB Atlas metrics are averaged over 1-hour windows to save processing, which makes them not very useful for that purpose.
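
One way to do that is to scrape the Atlas Administration API’s process measurements endpoint on a schedule and push the datapoints into your own stack. A hedged sketch using HTTP digest auth; the group ID, host, and measurement names are placeholders, and the exact endpoint and measurement names should be checked against the current Atlas API documentation.

```python
import requests
from requests.auth import HTTPDigestAuth

ATLAS_PUBLIC_KEY = "xxxx"   # Atlas API key pair (placeholders)
ATLAS_PRIVATE_KEY = "yyyy"
GROUP_ID = "5f1a0000000000000000abcd"            # a.k.a. project ID (placeholder)
PROCESS = "shard-00-00.abcde.mongodb.net:27017"  # hypothetical mongod host:port

# 1-minute granularity over the last hour. Atlas keeps this resolution only
# for a limited retention window, which is exactly why shipping the data
# somewhere else (Prometheus, Grafana, ...) is useful.
url = (
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
    f"/processes/{PROCESS}/measurements"
)
resp = requests.get(
    url,
    auth=HTTPDigestAuth(ATLAS_PUBLIC_KEY, ATLAS_PRIVATE_KEY),
    params={
        "granularity": "PT1M",
        "period": "PT1H",
        "m": ["OPCOUNTER_INSERT", "SYSTEM_NORMALIZED_CPU_USER"],  # assumed metric names
    },
)
resp.raise_for_status()

for measurement in resp.json()["measurements"]:
    print(measurement["name"], measurement["dataPoints"][-1])
```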

Query Insights

The main reason we opted to migrate from DocumentDB to MongoDB Atlas was the ability to scale write throughput. Still, I have to confess that the metrics and tools offered by Atlas make developers' lives much easier: they provide an excellent overview of overall DB performance, point out slow queries that can very likely be optimized on the application side (making applications faster and more reliable for end users), and offer opportunities to reduce costs by fine-tuning the DB cluster to your needs.

P99 latency of read and write operations

Query Profiler

The query profiler clusters queries so that you can analyze how the engine processed them in great detail. When you click on one of those clusters below, you will find information such as how many keys were examined, how many documents were read, how long the query planner took to process the query, how long it took to read the documents from disk, and much more.

The coloring also makes it very easy to identify the slowest collections in your DB, which may help you spot strange access patterns, inefficient data structures and/or indexing, among other possible problems.
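
The same numbers the profiler surfaces (keys examined, documents examined, execution time) can also be pulled ad hoc with the explain command while you iterate on an index. A small pymongo sketch against a hypothetical orders collection:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@example-cluster.mongodb.net/")  # placeholder URI
db = client["shop"]  # hypothetical database

# Run the query through the planner with executionStats verbosity instead
# of paging through the results in the application.
plan = db.command(
    {
        "explain": {"find": "orders", "filter": {"customerId": 42, "status": "OPEN"}},
        "verbosity": "executionStats",
    }
)

stats = plan["executionStats"]
print("keys examined:     ", stats["totalKeysExamined"])
print("documents examined:", stats["totalDocsExamined"])
print("documents returned:", stats["nReturned"])
print("execution time ms: ", stats["executionTimeMillis"])
# A keys/documents count far above the number of documents returned usually
# means the index does not cover the filter well.
```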

Slow queries overview

Support

I believe nobody can provide better support for a service, tool, or framework than its source. So, if you have a MongoDB-based application, MongoDB experts may be the ones best placed to help you :)

We had a good experience with tailored support for evaluating and identifying bottlenecks, exchanging solutions, and sizing our cluster. Although AWS also offers good support, in my personal experience DocumentDB experts will only analyze the health of your cluster itself; they will not dive deep into your needs or make recommendations based on your application's implementation.

As we have an enterprise contract with MongoDB (no, they don’t sponsor this article in any way, and all the content here expresses my own opinion and experience), we benefited from an in-depth analysis of our needs, from before we migrated the data until after go-live.

Drawbacks

If you migrate things as they are, without identifying issues in your application and solving them beforehand, you might find yourself paying a lot to overprovision your cluster, as Atlas clusters are not the cheapest thing around.

In addition, it might add complexity to your setup and require developers to gain more in-depth knowledge of DB setup, sharding, and query optimization. Still, I see this as a benefit: knowing that things work without knowing why is dangerous.

On the other hand, more complexity and more fine-tuning opportunities also pose a greater risk of messing things up, so you will need to pay extra attention to detail when setting the DB up.

Compute Auto-Scaling

As odd as it sounds, I decided to list auto-scaling under the drawbacks section. The reason is that, as good as auto-scaling sounds and as well as it is portrayed in MongoDB Atlas’ documentation, it may cause more harm than good to your applications.

The reason is that autoscaling happens on a rolling basis, which is fine in principle. However, nodes are taken down one by one before being updated, which degrades the performance of your cluster even further, because the load shifts to the remaining nodes. This may lead to longer downtime than if autoscaling had been disabled and your application had been allowed to stabilize through caching and other mechanisms. Therefore, if your application needs to handle such peaks without any unavailability, you might need to disable autoscaling and overprovision beforehand, for example when you know when your application expects peaks.

If this scenario is not your concern, auto-scaling might be a handy tool for optimizing costs while dealing with extra load when necessary.

Hatchet

Well, this has nothing to do with MongoDB Atlas itself, but I learned about this tool from a MongoDB consultant during one of our sessions and thought it would be helpful to share it here.

Hatchet log summary — example extracted from the GitHub repository

Hatchet is a MongoDB JSON log analyzer and viewer, implemented by someone from MongoDB, that provides great support for query analysis. It also has a text search that makes it quicker to find issues. You just need to export the logs from the MongoDB console and import them into Hatchet, which will give you a summary of the insights plus details about each of them.

Check it out if you ever need to go through MongoDB logs.

Performance and Costs

Finally, let’s talk about what really matters.

Before discussing performance and cost comparisons, let’s discuss our use case so we can better understand the problem.

This specific database serves two backend services. One backend service (let’s call it the Writer application) listens to several Kafka topics, aggregates the data in a way that is optimal for reads by the other service, and writes it to the DB. It connects to primaries only (primary read preference) and is write-heavy, with few parallel connections to the DB.

In this Writer application, we want to keep the consumer lag always close to 0 in order to provide real-time, up-to-date data to the other application (let’s call it the Reader application). If this application lags, the data in the Reader application becomes outdated, which should not happen (or at least should stay as close to real time as possible).

The Reader application connects preferentially to secondaries (secondaryPreferred read preference) and is a read-only application that performs thousands of queries per second and provides output to other applications. It is read-heavy, and latency is critical, in addition to high availability: it must run 24 hours a day, 365 days a year, with an overall average latency per processed request under 100 ms, which ideally translates to less than 10 ms per DB query on average.
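
In driver terms, the split between the two services boils down to the read preference (and write concern) configured on each connection. A minimal pymongo sketch; the URIs and option values are illustrative assumptions, not our exact configuration:

```python
from pymongo import MongoClient

# Writer: aggregates Kafka events and writes them; it reads and writes on
# the primary only, so it never operates on stale documents.
writer_client = MongoClient(
    "mongodb+srv://writer:pass@example-cluster.mongodb.net/",  # placeholder URI
    readPreference="primary",
    w="majority",  # acknowledge writes only once a majority of nodes has them
)

# Reader: thousands of latency-sensitive queries per second, happy to read
# from secondaries and spread the load across them.
reader_client = MongoClient(
    "mongodb+srv://reader:pass@example-cluster.mongodb.net/",  # placeholder URI
    readPreference="secondaryPreferred",
    maxStalenessSeconds=90,  # optional guard against badly lagging secondaries
)
```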

Scaling read operations in DocumentDB is not a problem and is not very expensive. One scales the number of replicas, distributes the load among them, and is done.

Scaling write operations in DocumentDB is, however, the challenge.

Kafka consumer lag caused by CPU IO wait leading to outdated data in DB reader nodes

As you can see in the example above, there were times when peaks of updates on some of those Kafka topics took a long time to be processed and stored in the DB. This was mainly because of CPU I/O waits on the single primary node of our DocumentDB cluster, which already had overprovisioned CPU as a failed attempt to scale IOPS (more CPU won’t scale IOPS beyond the storage’s capacity).

That is how sharding solves the problem. By distributing the data across multiple primary nodes, each in its own shard, we can scale write operations horizontally, similar to how we scale read operations by increasing the number of nodes. So, let’s say one primary can handle 3,000 IOPS: by distributing the data over three shards, we triple the capacity to 9,000 IOPS, provided the data is distributed evenly.

Unfortunately, DocumentDB has very limited support for sharding (elastic clusters), offering only two nodes per shard, which means low resilience for critical workloads.

Be careful when dealing with shards, though: sharding collections may make them far less efficient. You’ll need to dig into optimizations, access patterns, index efficiency, and many other aspects before deciding to shard your collections. Also, be extra careful when choosing your shard keys, to avoid hot shards and inefficient queries.
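
For completeness, this is roughly what sharding a collection looks like from the driver side once the cluster is deployed as a sharded cluster. The database, collection, and shard key below are illustrative assumptions; a hashed key is used here only as one way to avoid a hot shard with monotonically increasing keys:

```python
from pymongo import MongoClient

# Connect through the mongos routers of the sharded cluster.
client = MongoClient("mongodb+srv://admin:pass@example-sharded.mongodb.net/")  # placeholder URI

# Enable sharding for the database, then shard the hot collection on a
# hashed key so writes are spread across the shards instead of all landing
# on a single "hot" one.
client.admin.command("enableSharding", "catalog")
client.admin.command(
    "shardCollection",
    "catalog.products",
    key={"productId": "hashed"},
)
```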

So, what does it mean in terms of performance and costs?

In DocumentDB, we operated an I/O-optimized db.r6g.8xlarge cluster, which cost us about EUR 11k/month. In MongoDB Atlas, we used an M40 cluster with 4 shards of 3 nodes each for initial tests and comparison, which would cost us about half the price (EUR 4.7k/month).

The best thing is that in Atlas, you are not restricted to only two nodes per shard, which significantly helps resilience and read load distribution.

In our tests, we used one specific collection that has a high frequency of updates and is very large, making it a very good candidate for sharding. We basically reset the offset of the consumer writing to this collection, waited for it to process all messages against both the MongoDB Atlas and DocumentDB clusters, and obtained the following results:

DocumentDB updated documents per minute count

MongoDB Atlas M40, 4 shards, documents updated per second count

If you convert the DocumentDB metrics to updated documents per second, the throughput of the sharded MongoDB Atlas cluster is about 5 times higher than that of DocumentDB with no shards. Not to mention that in DocumentDB the CPU was blocked most of the time waiting for I/O, which made it very slow at processing other data and, as a consequence, led to multiple outdated collections and slow processing of all writes on the single primary node.

The difference can also be seen on the client side, in the Writer application, by looking at its Kafka consumer lag:

Kafka lag while resetting offsets with DocumentDB cluster with single primary node

Kafka lag while resetting offsets with MongoDB Atlas M40 4 Shards

While DocumentDB processed all messages in about two hours, the sharded cluster in MongoDB Atlas took about 20 minutes.

Note that the tests were performed on different dates and at different times, so the lag won’t precisely match the 5-to-7-times-higher throughput: the MongoDB Atlas test was performed earlier, at a point in time when the Kafka topic had fewer messages than when we tested against DocumentDB. Therefore, the primary metric for comparison is updated documents per second, but the Kafka lag metric still gives you a sense of what the difference means for keeping data up to date.

In summary, we were able to achieve about five times the throughput while spending half the money. In reality, our setup is slightly different at the moment, and we end up paying about the same as we used to pay for DocumentDB, but that has to do with current autoscaling capabilities and load shifting during the scaling process.

Conclusion

It is not the technology that is good or bad; perhaps it’s being misused, or it’s not the best fit for your needs. No single size fits all.

Don't change anything if you have no problems (cost, maintenance, performance, availability, etc.). Spend your time somewhere else.

If you do have a real problem to solve, get to know your data and your write and read patterns. Dig into query and index optimizations before you even think about any migration. Invest time in understanding what is really happening in your application that causes latency. Do not simply throw more money at cloud providers to scale things up indefinitely, postponing the unavoidable review of your own code and choices.

If you are writing a new application, evaluate which database solutions fit your needs and see if there is a better match. It could be an SQL database, a serverless database, or both. Perhaps you expect a lot of changes in your data structure and want a document-based DB, perhaps DynamoDB or even MongoDB Atlas.

Thankfully, the number of choices nowadays is vast, and some technologies will better suit your use case than others.

If you need to scale writes, consider sharding or some of the newer serverless options with provisioned IOPS and the like (be careful with provisioned IOPS, though, as they tend to be very expensive).

And, very importantly, make decisions backed by data and facts. Run benchmark tests and try different technologies and scenarios. How good is the monitoring? Check their recovery and scaling capabilities. Get to know their support, and be well prepared to avoid unexpected costs.

Good luck with your decisions!
