Haripriya Veluchamy

The Day I Learned Why DynamoDB Costs More

Introduction

A year ago, I was working with a team that planned to use Amazon DynamoDB for storing application data. As a newcomer, I was excited about the idea of using a serverless, scalable NoSQL database.

But then something happened that I didn't fully understand at the time: our senior engineer asked a simple question:

“What will be the read and write capacity of this system?”

The answer was: very high. And just like that, he suggested we should not go with DynamoDB.

I asked him “Why?” and he replied:

“Because in DynamoDB, you’re charged more for reads and writes, not just storage. For a heavy read/write system, costs will explode.”

Back then, this didn’t click for me. Storage = cost was my entire mental model of databases.

Recently, I finally got the answer.


DynamoDB’s Pricing Model (The Real Eye-Opener)

Unlike traditional databases where storage dominates cost, DynamoDB flips the model:

  • Storage: Quite cheap (~$0.25/GB-month).
  • RCUs (Read Capacity Units): You pay per read request (based on item size & consistency).
  • WCUs (Write Capacity Units): You pay per write request (based on item size).
  • Indexes (LSI/GSI): Every write updates indexes too → multiplying your WCU usage.

So, if you have millions of reads and writes per second, your bill won't be about how many GBs you store; it'll be about how many operations you perform.
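
To see why operations dominate, here's a back-of-the-envelope sketch in Python. The capacity-unit rules (1 RCU = one strongly consistent read/sec of up to 4 KB; 1 WCU = one write/sec of up to 1 KB) come from the DynamoDB docs, but the per-unit prices and the workload numbers are illustrative assumptions — check current AWS pricing for your region:

```python
import math

# Prices below are assumed provisioned-mode rates for illustration,
# not current AWS list prices.
RCU_PRICE_HOUR = 0.00013   # assumed $/RCU-hour
WCU_PRICE_HOUR = 0.00065   # assumed $/WCU-hour
STORAGE_PRICE_GB = 0.25    # ~$/GB-month, as noted above
HOURS_PER_MONTH = 730

def rcus_needed(item_kb: float, reads_per_sec: float,
                strongly_consistent: bool = True) -> float:
    """1 RCU = one strongly consistent read/sec of up to 4 KB;
    eventually consistent reads cost half as much."""
    units = math.ceil(item_kb / 4) * reads_per_sec
    return units if strongly_consistent else units / 2

def wcus_needed(item_kb: float, writes_per_sec: float) -> float:
    """1 WCU = one write/sec of up to 1 KB."""
    return math.ceil(item_kb) * writes_per_sec

# Hypothetical heavy workload: 2 KB items, 50k reads/s, 20k writes/s, 500 GB stored.
rcu = rcus_needed(2, 50_000)   # 50,000 RCUs (a 2 KB read fits in one 4 KB unit)
wcu = wcus_needed(2, 20_000)   # 40,000 WCUs (a 2 KB write needs two 1 KB units)

throughput = (rcu * RCU_PRICE_HOUR + wcu * WCU_PRICE_HOUR) * HOURS_PER_MONTH
storage = 500 * STORAGE_PRICE_GB

print(f"throughput: ${throughput:,.0f}/month")  # ~$23,725
print(f"storage:    ${storage:,.0f}/month")     # ~$125
```

Even at this scale, storage is a rounding error next to throughput.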


DynamoDB System Architecture + Burst Capacity + Caching

Here’s a high-level view of how DynamoDB processes requests and optimizes throughput:

🔹 Key Concepts

  1. Partitions & Capacity
  • Each partition has its own RCUs/WCUs and a token bucket for burst capacity.
  • Unused capacity can accumulate, up to ~5 minutes' worth.
  • Sudden spikes are absorbed from the token bucket before throttling occurs (see the sketch after this list).
  2. Caching
  • DynamoDB can use DynamoDB Accelerator (DAX) or internal caching for hot items.
  • Reads served from cache don't consume RCUs, dramatically reducing cost for read-heavy workloads.
  • Even without DAX, DynamoDB uses internal metadata and caching strategies to maintain high hit rates (~99.7%), reducing latency.
  3. Replicas
  • Each partition is replicated across multiple AZs for durability and availability.
  • Replicas do not add extra throughput; RCUs/WCUs are still per partition.
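
To make the burst-capacity and caching ideas concrete, here's a toy model of a single partition. Everything here is hypothetical — the class, the dict-as-cache, and the numbers stand in for internals we can't see — but it captures the mechanics: tokens refill at the provisioned rate, up to ~5 minutes' worth can accumulate, and cache hits bypass the bucket entirely:

```python
import time

class PartitionModel:
    """Toy model of one partition: a token bucket for burst capacity
    plus a tiny hot-item cache. For intuition only."""

    BURST_WINDOW_SEC = 300  # ~5 minutes of unused capacity can accrue

    def __init__(self, provisioned_rcus: float):
        self.rate = provisioned_rcus                 # tokens refilled per second
        self.capacity = provisioned_rcus * self.BURST_WINDOW_SEC
        self.tokens = self.capacity                  # start with a full bucket
        self.last_refill = time.monotonic()
        self.cache = {}                              # stand-in for DAX / hot-item cache

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def read(self, key, cost_rcus=1.0):
        if key in self.cache:                        # cache hit: no RCUs consumed
            return "cache hit (0 RCUs)"
        self._refill()
        if self.tokens >= cost_rcus:                 # burst tokens absorb spikes
            self.tokens -= cost_rcus
            return "served by partition"
        return "throttled"   # the real service raises ProvisionedThroughputExceededException

p = PartitionModel(provisioned_rcus=100)  # bucket holds up to 30,000 tokens
p.cache["hot-item"] = b"cached value"
print(p.read("hot-item"))    # cache hit (0 RCUs)
print(p.read("cold-item"))   # served by partition
```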

What My Senior Really Meant

Now I finally understand what my senior meant when he said:

“We are charged not for storing, but for what we read and write.”

He wasn’t just talking about cost. He was talking about choosing the right tool for the workload.

  • For moderate read/write workloads → DynamoDB is amazing: scalable, fully managed, predictable.
  • For heavy read/write workloads → the costs might not make sense compared to other options (Aurora, Cassandra, or S3 + cache layers).

Key Takeaways

  • DynamoDB’s biggest cost factor = RCUs & WCUs, not storage.
  • Burst capacity + caching are crucial for predictable performance and cost savings.
  • Always ask: “How read/write heavy is this system?” before picking DynamoDB.
  • Good mentors push you to think about workload patterns, not just technology.

Conclusion

That day, I didn’t understand why DynamoDB wasn’t chosen. Today, after seeing how the architecture, burst capacity, and caching work together, I do.

And honestly, this lesson goes beyond DynamoDB: every system design decision is tied not just to technology, but also to cost models, workload patterns, and performance strategies.

Sometimes, you only connect the dots years later.


Top comments (1)

Suvrajeet Banerjee

Echoes hiring chat's workload analysis, but underplays alternatives' trade-offs—solid for moderate systems, questionable for heavy ops. 💸