Darian Vance

Posted on • Originally published at wp.me

Solved: Turns out our DynamoDB costs could be 70% lower if we just… changed a setting. I’m a senior engineer btw

🚀 Executive Summary

TL;DR: Many organizations overpay for DynamoDB, often because of a misconfigured capacity mode. Switching over-provisioned tables to On-Demand for variable workloads, enabling Auto Scaling on Provisioned tables with predictable traffic, and using Time-to-Live (TTL) for data lifecycle management can cut costs significantly.

🎯 Key Takeaways

  • The choice between DynamoDB’s On-Demand and Provisioned capacity modes is the most impactful setting for throughput cost, with On-Demand being ideal for unpredictable workloads.
  • DynamoDB Auto Scaling for Provisioned Capacity dynamically adjusts RCUs/WCUs based on actual traffic, preventing throttling during spikes and reducing over-provisioning during lulls.
  • Time-to-Live (TTL) is crucial for data lifecycle management, automatically deleting expired items to reduce storage costs and potentially improve query performance.

Discover how a single configuration change in Amazon DynamoDB can drastically cut your database costs by up to 70%. This post details common cost pitfalls, explains the critical difference between On-Demand and Provisioned capacity, and provides actionable steps to optimize your DynamoDB spend.

Understanding and Reducing Excessive DynamoDB Costs

Symptoms of Overspending on DynamoDB

As senior engineers, we often encounter situations where cloud costs spiral out of control, even for services we rely heavily upon. DynamoDB, while incredibly powerful and scalable, is no exception. If you’re experiencing any of the following, it might be time to investigate your DynamoDB configurations:

  • Unexpectedly high monthly AWS bills directly attributed to DynamoDB, often appearing as “Write Capacity Unit (WCU)” or “Read Capacity Unit (RCU)” charges.
  • Significant fluctuations in your DynamoDB costs that don’t directly correlate with application usage patterns or business growth.
  • Consistently low utilization rates for provisioned throughput in your DynamoDB metrics (e.g., CloudWatch showing ConsumedReadCapacityUnits or ConsumedWriteCapacityUnits far below ProvisionedReadCapacityUnits or ProvisionedWriteCapacityUnits); see the CLI sketch after this list.
  • Application performance issues during peak times, even with seemingly high provisioned capacity, or conversely, paying for unused capacity during off-peak hours.
  • Hesitation to scale up throughput for fear of incurring even higher costs.
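
A quick way to confirm the low-utilization symptom is to pull consumed capacity from CloudWatch and compare it against what you provision. A minimal sketch, assuming a table named YourTableName (the table name, time range, and region defaults are placeholders):

# Example: Hourly consumed read capacity (divide each Sum by 3600 for average RCUs/sec)
aws cloudwatch get-metric-statistics \
    --namespace AWS/DynamoDB \
    --metric-name ConsumedReadCapacityUnits \
    --dimensions Name=TableName,Value=YourTableName \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-02T00:00:00Z \
    --period 3600 \
    --statistics Sum

If the resulting RCUs/sec sit far below your provisioned value for most hours, you are paying for capacity you never use.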

The good news is that for many organizations, a substantial portion of these costs can be eliminated by simply adjusting a core setting.

Solution 1: Choosing the Right Capacity Mode – On-Demand vs. Provisioned

The single most impactful setting determining your DynamoDB throughput costs is the capacity mode. DynamoDB offers two distinct modes: Provisioned and On-Demand. Choosing the incorrect mode for your workload is a primary driver of excessive spend.

Provisioned Capacity Mode

In Provisioned Capacity mode, you specify the number of read capacity units (RCUs) and write capacity units (WCUs) that your application requires. You pay for this provisioned capacity, whether you use it or not. It’s like renting a fixed amount of server resources upfront.

  • When to Use: Ideal for applications with predictable, consistent traffic patterns, or those with very high, sustained throughput where reserving capacity upfront is more cost-effective.
  • Cost Implications: Potentially lower cost per request for high, steady workloads. However, over-provisioning leads to paying for unused capacity, and under-provisioning leads to throttled requests and application errors.
# Example: Updating Provisioned Capacity for an existing table via AWS CLI
aws dynamodb update-table \
    --table-name YourTableName \
    --provisioned-throughput ReadCapacityUnits=500,WriteCapacityUnits=100

On-Demand Capacity Mode

On-Demand Capacity mode offers a pay-per-request pricing model. DynamoDB automatically scales throughput up or down based on your application’s actual traffic. You pay only for the read and write requests your application performs.

  • When to Use: Best for new applications with unknown traffic patterns, intermittent workloads, unpredictable spikes, or those with infrequent but intense bursts of activity. It’s also great for development and testing environments.
  • Cost Implications: Higher cost per request compared to a well-tuned Provisioned mode. However, it eliminates the risk of over-provisioning and throttling, providing ultimate flexibility and often lower overall costs for variable workloads.
# Example: Switching a table to On-Demand Capacity mode via AWS CLI
aws dynamodb update-table \
    --table-name YourTableName \
    --billing-mode PAY_PER_REQUEST
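If you later need to switch back, the same command accepts PROVISIONED together with an explicit throughput. Note that AWS allows switching a table’s billing mode only once per 24 hours; the capacity values below are placeholders:

# Example: Switching a table back to Provisioned Capacity mode via AWS CLI
aws dynamodb update-table \
    --table-name YourTableName \
    --billing-mode PROVISIONED \
    --provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=50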

On-Demand vs. Provisioned Capacity: A Comparison

Feature                     | Provisioned Capacity                                    | On-Demand Capacity
----------------------------|---------------------------------------------------------|--------------------------------------------------------------
Pricing Model               | Pay for specified RCUs/WCUs (whether used or not)       | Pay per read/write request
Cost Predictability         | High (fixed cost if not auto-scaled)                    | Variable (depends on actual usage)
Workload Suitability        | Predictable, consistent, high-volume traffic            | Unpredictable, spiky, intermittent traffic; new applications
Scaling Management          | Manual or Auto Scaling configuration required           | Fully managed by DynamoDB
Risk of Throttling          | High, if under-provisioned                              | Very low (DynamoDB handles scaling)
Cost Optimization Potential | Excellent for stable, high-baseline workloads if tuned  | Excellent for variable workloads; eliminates over-provisioning waste

Actionable Step: Review your DynamoDB tables in the AWS console. For tables with highly variable traffic or low utilization, switch to On-Demand capacity. For tables with consistently high and predictable usage, ensure your provisioned capacity closely matches your average usage plus a small buffer, and consider Solution 2. You can check a table’s current billing mode from the CLI, as shown below.
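
A minimal sketch for checking the current mode. Note that BillingModeSummary can be absent for tables that have never switched modes; those default to Provisioned:

# Example: Inspect a table's current billing mode
aws dynamodb describe-table \
    --table-name YourTableName \
    --query 'Table.BillingModeSummary'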

Solution 2: Implementing DynamoDB Auto Scaling for Provisioned Capacity

If your workload has a predictable baseline but experiences some variability, and you’ve determined that Provisioned Capacity is still the most cost-effective option, then DynamoDB Auto Scaling is your next best friend. Auto Scaling dynamically adjusts your table’s provisioned throughput capacity in response to actual traffic patterns.

  • How it Works: You define minimum and maximum capacity units, along with a target utilization percentage. DynamoDB Auto Scaling uses AWS Application Auto Scaling to monitor your table’s consumed capacity and adjusts provisioned capacity accordingly.
  • Benefits: Prevents throttling during spikes, reduces over-provisioning during lulls, and optimizes costs for variable-but-predictable workloads without manual intervention.

Configuring Auto Scaling (AWS CLI Example)

To enable Auto Scaling, you typically need to perform a few steps:

  1. Register the scalable target (the table’s read/write capacity):
# For Write Capacity Units (WCU)
aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --scalable-dimension dynamodb:table:WriteCapacityUnits \
    --resource-id table/YourTableName \
    --min-capacity 100 \
    --max-capacity 1000
# Repeat for Read Capacity Units (RCU)
aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --scalable-dimension dynamodb:table:ReadCapacityUnits \
    --resource-id table/YourTableName \
    --min-capacity 100 \
    --max-capacity 1000

Note: If you omit --role-arn, Application Auto Scaling uses the service-linked role AWSServiceRoleForApplicationAutoScaling_DynamoDBTable, creating it automatically the first time you register a DynamoDB target. A custom role with equivalent permissions also works.

  2. Put a scaling policy for the target:
# For Write Capacity Units (WCU)
aws application-autoscaling put-scaling-policy \
    --service-namespace dynamodb \
    --scalable-dimension dynamodb:table:WriteCapacityUnits \
    --resource-id table/YourTableName \
    --policy-name DynamoDBWriteCapacityUtilization \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue":70.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBWriteCapacityUtilization"}}'
# Repeat for Read Capacity Units (RCU)
aws application-autoscaling put-scaling-policy \
    --service-namespace dynamodb \
    --scalable-dimension dynamodb:table:ReadCapacityUnits \
    --resource-id table/YourTableName \
    --policy-name DynamoDBReadCapacityUtilization \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration '{"TargetValue":70.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBReadCapacityUtilization"}}'

Actionable Step: For tables remaining in Provisioned Capacity mode, implement DynamoDB Auto Scaling with appropriate min/max capacity and target utilization values. A common target value is 70%.
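
To confirm the configuration took effect, you can read the targets and policies back (a minimal sketch; the table name is a placeholder):

# Example: Verify registered scalable targets and attached scaling policies
aws application-autoscaling describe-scalable-targets \
    --service-namespace dynamodb \
    --resource-ids table/YourTableName
aws application-autoscaling describe-scaling-policies \
    --service-namespace dynamodb \
    --resource-id table/YourTableName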

Solution 3: Leveraging Time-to-Live (TTL) for Data Lifecycle Management

While not directly a “capacity mode” setting, DynamoDB’s Time-to-Live (TTL) feature is a crucial setting that can significantly reduce storage costs and indirectly improve throughput efficiency by keeping your table lean. TTL lets you designate an attribute that holds an expiration timestamp for each item. Once that timestamp passes, DynamoDB deletes the item in the background, typically within a few days of expiration, and these deletions consume no write capacity.

  • Benefits:
    • Reduced Storage Costs: Automatically removes data that is no longer needed, decreasing the total storage footprint.
    • Improved Query Performance (in some cases): Smaller tables can mean faster scans and queries, as there is less stale data to process.
    • Simplified Data Lifecycle: Automates the clean-up process, reducing operational overhead.
    • Compliance: Helps meet data retention policies by ensuring data is purged after a certain period.
  • When to Use: For logs, session data, old notifications, temporary caches, or any data that loses its relevance after a defined period.

Enabling TTL (AWS CLI Example)

First, ensure your table has an attribute that stores a Unix epoch timestamp (in seconds). This will be your TTL attribute.

# Example: Enabling TTL on an existing table
aws dynamodb update-time-to-live \
    --table-name YourTableName \
    --time-to-live-specification "Enabled=true,AttributeName=expiryDate"

In this example, expiryDate is the name of the attribute in your table that holds the Unix epoch timestamp for when the item should expire. When inserting/updating items, you would set this attribute like so (e.g., for an item to expire in 7 days):

# Example: Item structure with TTL attribute (JSON for put-item)
# expiryDate = creationDate + 7 days (604,800 seconds), as a Unix epoch timestamp
{
    "id": {"S": "item123"},
    "data": {"S": "some important data"},
    "creationDate": {"N": "1678886400"},
    "expiryDate": {"N": "1679491200"}
}
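In practice you would compute the expiration timestamp at write time. A minimal shell sketch, reusing the placeholder names from the example above (YourTableName, id, data, and expiryDate are all assumptions):

# Example: Write an item that expires in 7 days (TTL timestamp computed at write time)
EXPIRY=$(( $(date +%s) + 7 * 24 * 60 * 60 ))
aws dynamodb put-item \
    --table-name YourTableName \
    --item '{"id": {"S": "item123"}, "data": {"S": "some important data"}, "expiryDate": {"N": "'"$EXPIRY"'"}}'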

Actionable Step: Identify data in your DynamoDB tables that has a finite lifespan. Add a TTL attribute to these items and enable the TTL feature for the respective table. Regularly audit your data retention policies to fine-tune TTL settings.

Conclusion: Proactive Optimization is Key

The anecdote behind this post’s title highlights a common oversight: powerful cloud services often come with complex pricing models in which a single configuration choice can have massive cost implications. As senior engineers, it’s our responsibility not just to build, but to build cost-effectively and sustainably.

By regularly reviewing your DynamoDB capacity modes, leveraging Auto Scaling for provisioned tables, and implementing data lifecycle management with TTL, you can ensure your DynamoDB usage aligns with both your operational needs and your budget. Don’t let your “senior engineer btw” status keep you from making the simple setting change that could save your company a fortune.



👉 Read the original article on TechResolve.blog
