Anushka B

Three silent AWS cost patterns I keep finding in Series A-C SaaS bills

I run cost audits for Indian and US-based SaaS companies at AICloudStrategist. In the last six months I have read the line-item bills of 23 Series A-C companies. The median waste was $3,400 per month. The mean was higher because two outliers were burning over $11,000.

I want to share the three patterns that account for roughly 80% of that number, because none of them are clever or architectural. They are the kind of thing a founder-CTO deprioritises for a year because shipping features pays more than reading bills.

Pattern 1: Savings Plan coverage drift

The typical story: a team buys a 1-year Compute Savings Plan in month 3 of their AWS life, sized to roughly match current baseline EC2 spend. Six months later, auto-scaling and new services push sustained usage 30-40% above that baseline. Everything above the commit runs at on-demand rates.

Pull this from Cost Explorer to see it:

aws ce get-savings-plans-coverage \
  --time-period Start=2026-03-01,End=2026-04-01 \
  --granularity MONTHLY \
  --metrics SpendCoveredBySavingsPlans OnDemandCost

If CoveragePercentage is below 70% and your usage is stable, you are paying a 15-20% premium on the uncovered portion. A typical fix is a second 1-year Compute SP sized to the p50 of the last 90 days of on-demand hours. Not the peak. The p50.

One client held a $4,800/month Compute Savings Plan and still ran 62% of their EC2 hours on-demand because nobody revisited sizing after two new services launched.
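Getting that p50 is a one-liner once you have the daily numbers. A sketch, assuming you have already exported roughly 90 days of daily on-demand compute spend from Cost Explorer — the ten sample values below are illustrative stand-ins, not real data:

```shell
# Illustrative stand-in for daily on-demand compute spend ($/day),
# e.g. pulled from `aws ce get-cost-and-usage` at DAILY granularity.
printf '%s\n' 118 97 104 131 99 102 125 110 101 108 |
  sort -n |
  awk '{ a[NR] = $1 } END {
    # p50 = median of the sorted values
    if (NR % 2) print a[(NR + 1) / 2]
    else printf "%.1f\n", (a[NR / 2] + a[NR / 2 + 1]) / 2
  }'
```

Divide that daily figure by 24 to get the hourly commit for the second Savings Plan, and let the spikes above it run on-demand.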

Pattern 2: Orphaned EBS plus cross-region egress

These two are separate leaks, but they share a root cause: nobody owns account-wide cleanup.

Orphaned EBS

Detached gp3 volumes keep billing at $0.08/GB-month. A 2TB volume left behind after an instance termination is $160/month, forever, until someone deletes it.

Find them:

aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].[VolumeId,Size,CreateTime,Tags]' \
  --output table

Anything in available state for more than 30 days with no tag owner is a candidate. I typically find 200-600GB of these per audit. At one client it was 4.1TB across three regions, $330/month of pure waste.
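To turn that table into a dollar figure, a rough sketch at gp3's $0.08/GB-month — the volume IDs and sizes below are made up; in practice the rows come from re-running the query above with `--output text`:

```shell
# Hypothetical rows (VolumeId, Size in GB) standing in for real query output
orphans='vol-0a1 500
vol-0b2 2000
vol-0c3 120'

# Total the detached capacity and price it at $0.08/GB-month
echo "$orphans" | awk '{ gb += $2 } END { printf "%d GB -> $%.2f/month\n", gb, gb * 0.08 }'
```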

Cross-region egress

This one hides inside the DataTransfer-Regional-Bytes line item. The price is $0.02/GB for traffic between regions. If one of your services in eu-west-1 is calling a DynamoDB table or S3 bucket that lives in us-east-1, and the call pattern is chatty, you bleed.

Check it with:

aws ce get-cost-and-usage \
  --time-period Start=2026-03-01,End=2026-04-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=USAGE_TYPE \
  --filter '{"Dimensions":{"Key":"USAGE_TYPE","Values":["DataTransfer-Regional-Bytes"]}}'

One client was paying $900/month because a single microservice was reading user session data from a DynamoDB table in the wrong region. The fix was a 2-line CloudFormation change. Nobody had looked.
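A back-of-envelope check for whether a chatty cross-region caller can explain a line item like that, at $0.02/GB — the request rate and payload size here are placeholders; plug in your own:

```shell
# Assumed workload: 300 req/s, 4 KB average payload, $0.02/GB inter-region
awk 'BEGIN {
  rps = 300; kb = 4; price = 0.02
  gb = rps * kb * 86400 * 30 / (1024 * 1024)   # KB/s -> GB/month
  printf "%.0f GB/month -> $%.2f/month\n", gb, gb * price
}'
```

If the estimate lands in the same order of magnitude as the DataTransfer-Regional-Bytes line, you have found the caller.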

Pattern 3: Observability over-spend

This is the fastest-growing line item I see: CloudWatch Logs, Datadog, New Relic, and X-Ray traces at full sampling on every environment, including dev and staging.

The specific sub-patterns:

  • CloudWatch Logs ingestion at $0.50/GB with 30-day retention on dev environments nobody has queried in 90 days.
  • Datadog APM at 100% trace sampling in staging.
  • VPC Flow Logs written to S3 without lifecycle rules. I have seen 400GB of Flow Logs from 2024 still sitting in Standard storage.

A CloudWatch Logs audit query that surfaces the largest log groups:

aws logs describe-log-groups \
  --query 'logGroups[?storedBytes>`10000000000`].[logGroupName,storedBytes,retentionInDays]' \
  --output table

Set retention to 7 days on non-production log groups. Use a Lambda subscription filter to route production logs to S3 with Glacier lifecycle rules after 30 days. Median saving: $1,100/month.
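The retention change itself is one `aws logs put-retention-policy` call per log group. A sketch that emits the calls rather than executing them, so you can review before piping to `sh` — the group names below are placeholders; in practice, feed it `describe-log-groups` output filtered to your own non-production naming convention:

```shell
# Placeholder non-production log group names (stand-in for real query output)
printf '%s\n' /ecs/dev-api /ecs/staging-worker /lambda/dev-cron |
while read -r lg; do
  # Emit, don't execute: review the commands before running them
  echo "aws logs put-retention-policy --log-group-name $lg --retention-in-days 7"
done
```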

Why these persist

Every CTO I talk to knows at least one of these exists in their account. The reason they stay is not laziness. It is that reading an AWS bill line by line, correlating it against actual usage, and writing the fix requires 6 focused hours, and those 6 hours compete with shipping.

That gap is the entire reason our service exists. Upload your last AWS bill, we send a written report within 24 hours with dollar figures per pattern and the exact config changes. Priority tier is Rs 2,000 (~$25).

If you want the long-form writeup with more config examples: https://aicloudstrategist.com/blog/three-silent-cloud-patterns.html

Or submit a bill for audit: https://aicloudstrategist.com/audit
