This article was originally written in Japanese and published on Qiita. It has been translated with the help of AI.
Original article: https://qiita.com/sassssan68/items/da2aa98bba12748daca7
Have you ever calculated how much it actually costs to keep CloudWatch Logs long-term?
## TL;DR
Keeping logs in CloudWatch Logs long-term is expensive.
Subscription → Firehose → S3 (Deep Archive) is more stable and cost-effective.
I recently had an audit requirement to retain logs for 18 months. When I estimated the CloudWatch Logs cost…
18 months = $1,069.20
— That's way too much!
So I followed the AWS-recommended architecture — Subscription → Firehose → S3 — and combined it with a lifecycle policy to transition to Deep Archive. The result:
~85% cost reduction
Plus fully automated, stable operations
This article covers:
- Why CreateExportTask is not recommended (per AWS)
- How much cost you can actually save (with formulas)
- How to set up Subscription → Firehose → S3 (Deep Archive)
## Why You Shouldn't Keep Logs in CloudWatch Logs Long-Term
Audit and regulatory requirements often mandate log retention for years. However, storing large volumes of logs in CloudWatch Logs gets expensive fast:
- High storage cost — CloudWatch Logs storage pricing is heavy
- Scales linearly — The more data you store, the worse it gets
- Not designed for long-term archival — It's a monitoring tool, not a storage solution
This raises the question: What's the right way to handle long-term log retention?
## AWS Says Export Task Is "Not Recommended" — Here's Why
From the CloudWatch console, you can manually export logs to S3. To automate this, you'd use the CreateExportTask API:
https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateExportTask.html
You could call this periodically from Lambda or EventBridge Scheduler. However, the AWS documentation explicitly discourages this:
> **Note**
> We recommend that you don't regularly export to Amazon S3 as a way to continuously archive your logs. For that use case, we instead recommend that you use subscriptions. For more information about subscriptions, see Real-time processing of log data with subscriptions.
On top of that, there is a concurrency limit: each account can have only one active (running or pending) export task at a time. If you're exporting from multiple log groups or across multiple time ranges, tasks will queue up, causing failures and delays.
Given these limitations, CreateExportTask is unreliable for audit-grade long-term retention. As the AWS docs say, subscriptions are the way to go.
## The AWS-Recommended Architecture: Subscription → Firehose → S3
AWS recommends using CloudWatch Logs subscription filters:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
With a subscription filter delivering to Firehose, logs are transferred to S3 in near real time with no manual operations, making the pipeline stable and suitable for long-term archival.
But when I first saw this architecture, I thought: "This looks expensive. Is it actually cheaper than just leaving logs in CloudWatch?"
So I ran the numbers.
## Cost Comparison: CloudWatch Logs vs. Subscription → Firehose → S3 (Deep Archive)
Bottom line:
Over 18 months, Subscription → Firehose → S3 (Deep Archive) is approximately 85% cheaper.

⚠️ Note: if your retention period is 2 months or less, CloudWatch Logs may actually be cheaper. The conclusion in this article assumes long-term retention.
### Assumptions
- Retention period: 18 months
- Monthly log volume: 100 GB
- Tokyo region pricing (as of April 2026):
- CloudWatch Logs storage: $0.033/GB/month
- Firehose delivery: $0.036/GB
- S3 Standard: $0.025/GB/month
- Glacier Deep Archive: $0.002/GB/month
- Case 1: Keep all logs in CloudWatch Logs for the full 18 months
- Case 2: Keep logs in CloudWatch for 2 weeks (for analysis), simultaneously stream via Firehose → S3 Standard → Glacier Deep Archive after 1 day
### Comparison Table
| Item | Case 1 | Formula | Case 2 | Formula |
|---|---|---|---|---|
| CloudWatch Logs storage | $1,069.20 | 0.033 × (100 × 18) × 18 | $27.72 | 0.033 × (100 × 14/30) × 18 |
| Firehose delivery | — | — | $64.80 | 0.036 × 100 × 18 |
| S3 Standard (1 day) | — | — | $1.50 | 0.025 × (100 × 1/30) × 18 |
| Glacier Deep Archive | — | — | $64.80 | 0.002 × (100 × 18) × 18 |
| Total | $1,069.20 | — | $158.82 | — |
| Savings | — | — | ~$910.38 (~85% reduction) | — |
Note: the average stored volume for 2-week retention is the monthly log volume × (14/30).
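The formulas in the table above can be reproduced with a few lines of Python. This is a sketch of the article's simplified cost model (total volume × full retention period), using the Tokyo-region prices listed in the assumptions:

```python
# Unit prices from the article's assumptions (Tokyo region)
CW_STORAGE = 0.033    # CloudWatch Logs storage, USD/GB/month
FIREHOSE = 0.036      # Firehose delivery, USD/GB
S3_STANDARD = 0.025   # S3 Standard, USD/GB/month
DEEP_ARCHIVE = 0.002  # Glacier Deep Archive, USD/GB/month

MONTHLY_GB = 100
MONTHS = 18

# Case 1: keep all logs in CloudWatch Logs for the full 18 months
case1 = CW_STORAGE * (MONTHLY_GB * MONTHS) * MONTHS

# Case 2: 2 weeks in CloudWatch, plus Firehose -> S3 (1 day) -> Deep Archive
cw_short = CW_STORAGE * (MONTHLY_GB * 14 / 30) * MONTHS    # 2-week retention
firehose = FIREHOSE * MONTHLY_GB * MONTHS                  # delivery cost
s3_one_day = S3_STANDARD * (MONTHLY_GB * 1 / 30) * MONTHS  # 1 day in Standard
archive = DEEP_ARCHIVE * (MONTHLY_GB * MONTHS) * MONTHS    # long-term storage
case2 = cw_short + firehose + s3_one_day + archive

print(f"Case 1: ${case1:.2f}")    # $1069.20
print(f"Case 2: ${case2:.2f}")    # $158.82
print(f"Savings: {100 * (1 - case2 / case1):.0f}%")    # 85%
```

Swap in your own monthly volume and retention period to find the break-even point for your workload.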
### Conclusion
By keeping logs in CloudWatch for just 2 weeks (for analysis) and archiving older logs in Deep Archive, you can achieve approximately 85% cost savings over 18 months.
## How to Set It Up
The setup involves five steps:
1. Create an S3 bucket with a lifecycle rule (transition to Glacier Deep Archive after 1 day)
2. Create a Firehose stream (source: Direct PUT, destination: S3)
3. Create an IAM role for the subscription filter
4. Create a CloudWatch Logs subscription filter
5. Verify logs are flowing to S3
For steps 3 and 4, the AWS documentation provides a complete walkthrough including the IAM policy:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample
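For step 1, the lifecycle rule can be expressed as an S3 lifecycle configuration like the following sketch (the rule ID is a placeholder; the empty prefix applies the rule to every object in the bucket):

```json
{
  "Rules": [
    {
      "ID": "archive-logs-to-deep-archive",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 1, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

You can apply it from the console or with `aws s3api put-bucket-lifecycle-configuration --bucket <your-bucket> --lifecycle-configuration file://lifecycle.json`.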
## Summary
- Export Task is not recommended (per AWS documentation)
- Firehose is the most operationally practical solution
- S3 lifecycle rules enable cost-optimized long-term archival
For short-term log retention, CloudWatch Logs works just fine. But if you need to retain logs for months to years, Subscription → Firehose → S3 (Deep Archive) is the practical solution.
When long-term retention becomes a requirement, it's worth revisiting your architecture.
I hope this helps anyone else dealing with the same log retention cost challenges.