If you are a DevOps engineer running containerized applications on a managed Docker runtime on AWS such as Elastic Container Service (ECS), the easiest way to view logs is CloudWatch Logs.
AWS CloudWatch costs for ECS stem from data ingestion (logs and metrics), storage, analysis (Logs Insights), and extra features such as Container Insights, with ingestion at roughly $0.50/GB (standard log class) and Logs Insights at about $0.005/GB scanned. Free tiers cover basic metrics and a small amount of data, but high-volume container logs and detailed monitoring can drive the bill up fast; for example, ingesting 100 GB of logs per day at $0.50/GB is already about $1,500 per month before storage or queries. Container Insights provides enhanced metrics for ECS/EKS, adding to costs while offering deep visibility. The usual advice is to optimize with filtering, shorter retention, and the AWS Pricing Calculator, yet on busy clusters the CloudWatch line item can still spike into thousands of dollars a month, which is what makes you look for alternative ways of streaming logs from your applications.
FireLens is an AWS-provided log router for Amazon ECS and Fargate that runs Fluentd or Fluent Bit as a sidecar container and flexibly sends container logs to destinations such as CloudWatch, S3, or third-party tools, simplifying complex log routing without changing application code. It works by adding a special logging configuration to your ECS task definition, so logs from your main app container are routed through the FireLens sidecar for processing and forwarding. Our goal here is to stream the application logs to an S3 bucket without changing the application code and cut CloudWatch Logs costs.
Here is how it works:
⦁ ECS Task Definition: You configure your ECS task definition to use the FireLens log driver for your application container.
⦁ Sidecar Container: FireLens adds a sidecar container (running Fluent Bit or Fluentd) to your task.
⦁ Log Routing: Your application container sends logs to standard output (stdout/stderr), and FireLens intercepts these, processing them based on your configuration.
⦁ Pluggable Architecture: It uses plugins (like AWS for Fluent Bit) to send logs to destinations like CloudWatch, S3, or other endpoints that support JSON over HTTP, Fluentd Forward, or TCP.
The key benefits are:
⦁ Route logs to multiple destinations for storage, analysis, or monitoring.
⦁ Efficiently handle log management at scale within ECS environments.
⦁ Easily manage logs without modifying application code or manually installing agents.
Steps for the configuration
- Create an S3 Bucket
First, ensure you have an S3 bucket ready for your logs.
- Update the IAM Task Role
Your ECS task role (the role referenced by taskRoleArn) needs permission to write to S3, because the Fluent Bit sidecar uses the task role's credentials to upload the logs (the task execution role is used for pulling images and for the sidecar's own awslogs output, not for the S3 destination):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::your-log-bucket/*"
    }
  ]
}
- Configure Your ECS Task Definition
Here's an example task definition with FireLens configured for S3:
{
  "family": "your-task-family",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::account-id:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::account-id:role/ecsTaskRole",
  "containerDefinitions": [
    {
      "name": "log_router",
      "image": "amazon/aws-for-fluent-bit:latest",
      "essential": true,
      "firelensConfiguration": {
        "type": "fluentbit",
        "options": {
          "enable-ecs-log-metadata": "true"
        }
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/firelens-container",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "firelens"
        }
      }
    },
    {
      "name": "app",
      "image": "your-app-image",
      "essential": true,
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "s3",
          "region": "us-east-1",
          "bucket": "your-log-bucket",
          "total_file_size": "10M",
          "upload_timeout": "1m",
          "s3_key_format": "/logs/%Y/%m/%d/%H_%M_%S",
          "store_dir": "/tmp/fluent-bit/s3"
        }
      }
    }
  ]
}
Key Configuration Options
S3 Output Plugin Options:
bucket: Your S3 bucket name
region: AWS region
total_file_size: Size of file before uploading (e.g., "10M")
upload_timeout: Maximum time before a chunk is uploaded, even if total_file_size has not been reached (e.g., "1m")
s3_key_format: Path structure in S3 (supports time formatting and tags)
store_dir: Temporary storage location
Useful s3_key_format variables:
%Y/%m/%d: Date formatting
%H_%M_%S: Time formatting
For example, with the key format above, a chunk uploaded at 14:30:00 on 24 December 2025 is written to /logs/2025/12/24/14_30_00.
- Deploy Your Task
Deploy the updated task definition to your ECS Fargate service. FireLens will automatically:
⦁ Capture logs from the application container
⦁ Buffer them in the log router container
⦁ Stream them to S3 based on the configuration
With the logs now streaming to the S3 bucket, we run into another issue: how do we view the logs to check for errors, in particular 4xx and 5xx errors, in the applications?
AWS gives us another service for this: Amazon Athena. Amazon Athena is a serverless, interactive query service that lets you analyze large datasets directly in Amazon S3 using standard SQL, without loading the data into a database. It is known for its simplicity (point at the data, define a schema, and query), its pay-per-query cost model (you only pay for data scanned), and its speed, making it ideal for ad-hoc analysis, log analysis, and exploring data lakes. We will use Athena to query the logs in the S3 bucket with plain SQL.
Steps to use Athena to view the logs
Step 1: Create the Database and Table
Point the LOCATION at the bucket and prefix your FireLens output writes to; ecs-logs-container is used as the example bucket below.
-- Create database
CREATE DATABASE IF NOT EXISTS ecs_logs_db;
-- Create table for JSON logs
CREATE EXTERNAL TABLE IF NOT EXISTS ecs_logs_db.log_table (
log STRING,
container_id STRING,
container_name STRING,
ecs_cluster STRING,
ecs_task_arn STRING,
ecs_task_definition STRING,
source STRING,
time STRING
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
'ignore.malformed.json' = 'true'
)
LOCATION 's3://ecs-logs-container/logs/'
TBLPROPERTIES ('has_encrypted_data'='false');
If your logs are plain text (not JSON):
-- Create database
CREATE DATABASE IF NOT EXISTS ecs_logs_db;
-- Create table for plain text logs
CREATE EXTERNAL TABLE IF NOT EXISTS ecs_logs_db.log_table (
log_line STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3://ecs-logs-container/logs/';
Step 2: Query the Logs
Basic Search for Errors:
SELECT *
FROM ecs_logs_db.log_table
WHERE log LIKE '%error%'
OR log LIKE '%ERROR%'
OR log LIKE '%exception%'
LIMIT 100;
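The LIKE patterns above are case sensitive, so the query has to list every spelling it cares about. A small variant on the same table lowercases each line first, so one pattern per keyword is enough:
-- lower() makes the match case insensitive: ERROR, Error and error all hit
SELECT *
FROM ecs_logs_db.log_table
WHERE lower(log) LIKE '%error%'
OR lower(log) LIKE '%exception%'
LIMIT 100;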
Search with Time Filter (using S3 path partitions):
SELECT *
FROM ecs_logs_db.log_table
WHERE log LIKE '%error%'
AND "$path" LIKE '%2025/12/24%'
LIMIT 100;
Count Errors by Pattern:
SELECT
CASE
WHEN log LIKE '%404%' THEN '404 Error'
WHEN log LIKE '%500%' THEN '500 Error'
WHEN log LIKE '%exception%' THEN 'Exception'
ELSE 'Other Error'
END AS error_type,
COUNT(*) AS error_count
FROM ecs_logs_db.log_table
WHERE log LIKE '%error%' OR log LIKE '%exception%'
GROUP BY 1
ORDER BY error_count DESC;
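The LIKE buckets above will also match timestamps or IDs that happen to contain 404 or 500. Since the original goal was to spot 4xx and 5xx responses, a sketch that extracts an HTTP status code explicitly is more precise; it assumes your application writes access-log style lines with the status code surrounded by spaces, which depends entirely on your log format:
-- regexp_extract pulls the first space-delimited 4xx/5xx code out of each line
SELECT
regexp_extract(log, ' (4\d{2}|5\d{2}) ', 1) AS status_code,
COUNT(*) AS request_count
FROM ecs_logs_db.log_table
WHERE regexp_like(log, ' (4\d{2}|5\d{2}) ')
GROUP BY 1
ORDER BY request_count DESC;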
Search for Specific Text:
SELECT *
FROM ecs_logs_db.log_table
WHERE log LIKE '%AccessDenied%'
LIMIT 50;
Get Recent Logs:
SELECT *
FROM ecs_logs_db.log_table
WHERE "$path" LIKE '%2025/12/24%'
ORDER BY "$path" DESC
LIMIT 100;
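If you created the JSON table, the ECS metadata that FireLens attaches to every record (container_name, ecs_task_arn, ecs_task_definition, and so on) is also queryable, which makes it easy to see which container or task revision is producing the most log output:
SELECT
container_name,
ecs_task_definition,
COUNT(*) AS line_count
FROM ecs_logs_db.log_table
GROUP BY container_name, ecs_task_definition
ORDER BY line_count DESC
LIMIT 20;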
For better performance, partition the table by date:
-- Drop the old table
DROP TABLE IF EXISTS ecs_logs_db.log_table;
-- Create partitioned table
CREATE EXTERNAL TABLE IF NOT EXISTS ecs_logs_db.log_table (
log_line STRING
)
PARTITIONED BY (
year STRING,
month STRING,
day STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3://ecs-logs-container/logs/';
-- Add partitions
ALTER TABLE ecs_logs_db.log_table ADD
PARTITION (year='2025', month='12', day='24')
LOCATION 's3://ecs-logs-container/logs/2025/12/24/';
Then query with partitions:
SELECT *
FROM ecs_logs_db.log_table
WHERE year='2025'
AND month='12'
AND day='24'
AND log_line LIKE '%error%'
LIMIT 100;
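Adding a new partition by hand for every day quickly becomes tedious. Athena's partition projection feature can derive the partition values from the S3 path at query time, so no ALTER TABLE statements are needed at all. A minimal sketch, assuming the same /logs/%Y/%m/%d/ layout and using a hypothetical table name log_table_projected:
-- Partition projection: Athena computes year/month/day from the S3 prefix
-- at query time, so no ALTER TABLE ADD PARTITION is ever required.
CREATE EXTERNAL TABLE IF NOT EXISTS ecs_logs_db.log_table_projected (
log_line STRING
)
PARTITIONED BY (
year STRING,
month STRING,
day STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3://ecs-logs-container/logs/'
TBLPROPERTIES (
'projection.enabled' = 'true',
'projection.year.type' = 'integer',
'projection.year.range' = '2024,2030',
'projection.month.type' = 'integer',
'projection.month.range' = '1,12',
'projection.month.digits' = '2',
'projection.day.type' = 'integer',
'projection.day.range' = '1,31',
'projection.day.digits' = '2',
'storage.location.template' = 's3://ecs-logs-container/logs/${year}/${month}/${day}/'
);
Queries then look exactly like the partitioned example above, just against log_table_projected; the digits settings match the zero-padded month and day that the %m/%d key format produces.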
I wrote this article because CloudWatch costs caught me off guard on a previous project. What started as a few dollars quickly grew into a noticeable line item on our AWS bill. After digging into the details and testing different approaches, I uncovered several strategies that significantly reduced our logging costs without sacrificing visibility into our ECS services.
I hope that sharing these lessons helps you save both time and money, whether you’re just setting up ECS monitoring or optimizing an existing setup.
I would love to hear about your experience with CloudWatch costs on ECS. Have you found other optimization strategies that worked well? Feel free to drop your questions or insights in the comments. I am happy to discuss specific scenarios, and your input may help others facing similar challenges.