Muhammad Zeeshan
Automating Your Log Retention Strategy on AWS

Step 1: Set 1-Month Retention in CloudWatch Logs

By default, CloudWatch Logs keeps data forever (“Never expire”), which quietly inflates storage costs as your infrastructure grows.

To change the retention period:

1. Go to the CloudWatch Logs console.
2. Select your Log Group.
3. Click “Actions” → “Edit retention”.
4. Set it to 30 days.

Now, logs older than 30 days will be deleted automatically.
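If you'd rather script the change than click through the console, here's a minimal boto3 sketch of the same setting (the log group name is a placeholder):

```python
import boto3

logs = boto3.client("logs")

# Keep events for 30 days; CloudWatch deletes anything older automatically.
logs.put_retention_policy(
    logGroupName="/aws/lambda/my-app",  # placeholder: your log group name
    retentionInDays=30,
)
```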

Step 2: Export Logs to S3 with Kinesis Firehose

Next, we want to stream logs to S3 continuously. The best way to do this is via Kinesis Data Firehose.

Create a Firehose Delivery Stream

1. Go to the Kinesis > Firehose console.
2. Click Create delivery stream.
3. Choose:
   - Source: Direct PUT
   - Destination: Amazon S3
4. Create or select an S3 bucket (we’ll handle lifecycle in the next step).
5. Choose an IAM role, or let AWS create one for you.
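If you prefer to create the stream programmatically, here's a minimal boto3 sketch. The stream name, role ARN, and bucket ARN are placeholders, and the role must already allow Firehose to write to the bucket:

```python
import boto3

firehose = boto3.client("firehose")

# Direct PUT source + S3 destination, matching the console choices above.
firehose.create_delivery_stream(
    DeliveryStreamName="log-archive-stream",  # placeholder name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",  # placeholder
        "BucketARN": "arn:aws:s3:::my-log-archive-bucket",           # placeholder
        "Prefix": "logs/",  # keep archived logs under one prefix for the lifecycle rule later
    },
)
```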

Connect CloudWatch Logs to Firehose

1. Go to CloudWatch Logs > Log Group > Actions > Create subscription filter.
2. Choose:
   - Destination: Kinesis Firehose
   - Stream: your newly created stream
   - IAM Role: must allow firehose:PutRecord (and firehose:PutRecordBatch)

From now on, all new logs are delivered to S3 in near real time.
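The same wiring in boto3, assuming the stream and a role that CloudWatch Logs can assume already exist (both ARNs are placeholders):

```python
import boto3

logs = boto3.client("logs")

# An empty filter pattern forwards every log event in the group.
logs.put_subscription_filter(
    logGroupName="/aws/lambda/my-app",  # placeholder: your log group
    filterName="to-firehose",
    filterPattern="",  # match all events
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/log-archive-stream",  # placeholder
    roleArn="arn:aws:iam::123456789012:role/cwl-to-firehose",  # placeholder; needs firehose:PutRecord
)
```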

Step 3: Move Logs from S3 to Glacier After 30 Days

Now that logs land in S3, we can apply S3 Lifecycle Rules to move them to Glacier Deep Archive after 30 days.

Here’s how:

Add Lifecycle Rule to the S3 Bucket

1. Go to your S3 bucket.
2. Select Management > Lifecycle rules.
3. Click Create lifecycle rule.
4. Name it: ArchiveToGlacier.
5. Scope: apply to all objects (or limit it to the logs/ prefix if needed).
6. Add a transition: move objects to Glacier Deep Archive 30 days after creation.
7. Add an expiration: expire objects 6 years after creation.
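The scripted equivalent with boto3 (the bucket name is a placeholder, and 6 years is approximated as 2,190 days):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-log-archive-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "ArchiveToGlacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # scope to the Firehose prefix
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
                "Expiration": {"Days": 2190},  # ~6 years (6 x 365)
            }
        ]
    },
)
```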
That gives you:

- 30 days in CloudWatch (running in parallel with S3, since Firehose delivers continuously)
- 30 days in S3 Standard
- roughly 5 years and 11 months in Glacier Deep Archive before expiration

Every log is retained for a total of 6 years, with almost all of that time in the cheapest storage class. This minimizes cost while keeping you compliant with typical audit requirements.
