<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kadiri George</title>
    <description>The latest articles on DEV Community by Kadiri George (@khadree12).</description>
    <link>https://dev.to/khadree12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2147420%2F1e47b8a1-1254-49eb-95df-f96cbbe774d5.jpg</url>
      <title>DEV Community: Kadiri George</title>
      <link>https://dev.to/khadree12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/khadree12"/>
    <language>en</language>
    <item>
      <title>Saving AWS ECS CloudWatch Cost.</title>
      <dc:creator>Kadiri George</dc:creator>
      <pubDate>Sat, 03 Jan 2026 14:02:22 +0000</pubDate>
      <link>https://dev.to/khadree12/saving-aws-ecs-cloudwatch-cost-5g41</link>
      <guid>https://dev.to/khadree12/saving-aws-ecs-cloudwatch-cost-5g41</guid>
      <description>&lt;p&gt;As a DevOps engineer running containerized applications on a managed Docker runtime on AWS such as Elastic Container Service (ECS), the easiest way to view logs is CloudWatch Logs.&lt;br&gt;
CloudWatch costs for ECS come from data ingestion (logs/metrics), storage, analysis (Logs Insights), and extra features such as Container Insights. Ingestion runs at ~$0.50/GB (standard) and Logs Insights at ~$0.005/GB scanned. Free tiers cover basic metrics and limited data, but high-volume container logs and detailed monitoring (like Container Insights) can significantly increase bills, which calls for cost optimization via filtering, retention policies, and the AWS Pricing Calculator. Container Insights provides enhanced metrics for ECS/EKS, adding to costs but offering deep visibility. Together, these charges can spike to thousands of dollars a month, which makes one look for alternative ways of streaming logs from applications.&lt;br&gt;
FireLens is an AWS-provided log router for Amazon ECS/Fargate that runs Fluentd or Fluent Bit as a sidecar container to flexibly send container logs to destinations like CloudWatch, S3, or third-party tools, simplifying complex log routing without changing application code. It works by adding a special logging configuration to your ECS task definition, routing logs from your main app container through the FireLens sidecar for processing and forwarding. Our goal here is to stream application logs to an S3 bucket without changing the application code and save on CloudWatch Logs costs.&lt;/p&gt;
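&lt;p&gt;To put rough numbers on the potential savings, here is a minimal back-of-envelope sketch. The per-GB prices are the approximate public rates quoted above (they vary by region and tier), and the 500 GB/month log volume is a made-up example:&lt;/p&gt;

```python
# Rough monthly cost comparison: CloudWatch Logs ingestion vs. S3 Standard storage.
# Prices are approximate us-east-1 rates; always confirm with the AWS Pricing Calculator.
CLOUDWATCH_INGEST_PER_GB = 0.50   # standard log ingestion, USD/GB
S3_STANDARD_PER_GB = 0.023        # S3 Standard storage, USD/GB-month

def monthly_cost(gb_per_month, price_per_gb):
    """Return the monthly cost in USD for a given log volume."""
    return gb_per_month * price_per_gb

volume_gb = 500  # hypothetical container log volume per month
cloudwatch = monthly_cost(volume_gb, CLOUDWATCH_INGEST_PER_GB)  # 250.0
s3 = monthly_cost(volume_gb, S3_STANDARD_PER_GB)                # 11.5
print(f"CloudWatch: ${cloudwatch:.2f}/mo, S3: ${s3:.2f}/mo")
```

&lt;p&gt;Even before S3 request charges and lifecycle transitions, the ingestion line item alone dominates, which is what makes routing logs to S3 attractive.&lt;/p&gt;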

&lt;p&gt;Here is how it works:&lt;br&gt;
⦁ ECS Task Definition: You configure your ECS task definition to use the FireLens log driver for your application container.&lt;br&gt;
⦁ Sidecar Container: FireLens adds a sidecar container (running Fluent Bit or Fluentd) to your task.&lt;br&gt;
⦁ Log Routing: Your application container sends logs to standard output (stdout/stderr), and FireLens intercepts these, processing them based on your configuration.&lt;br&gt;
⦁ Pluggable Architecture: It uses plugins (like AWS for Fluent Bit) to send logs to destinations like CloudWatch, S3, or other endpoints that support JSON over HTTP, Fluentd Forward, or TCP. &lt;br&gt;
The key benefits are:&lt;br&gt;
⦁ Route logs to multiple destinations for storage, analysis, or monitoring.&lt;br&gt;
⦁ Efficiently handle log management at scale within ECS environments.&lt;br&gt;
⦁ Easily manage logs without modifying application code or manually installing agents.&lt;br&gt;
Configuration steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an S3 Bucket
First, ensure you have an S3 bucket ready for your logs.&lt;/li&gt;
&lt;li&gt;Update IAM Task Execution Role
Your ECS task execution role needs permissions to write to S3:
{
"Version": "2012-10-17",
"Statement": [
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::your-log-bucket/*"
}
]
}&lt;/li&gt;
&lt;li&gt;Configure Your ECS Task Definition
Here's an example task definition with FireLens configured for S3:
{
"family": "your-task-family",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "256",
"memory": "512",
"executionRoleArn": "arn:aws:iam::account-id:role/ecsTaskExecutionRole",
"taskRoleArn": "arn:aws:iam::account-id:role/ecsTaskRole",
"containerDefinitions": [
{
  "name": "log_router",
  "image": "amazon/aws-for-fluent-bit:latest",
  "essential": true,
  "firelensConfiguration": {
    "type": "fluentbit",
    "options": {
      "enable-ecs-log-metadata": "true"
    }
  },
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/firelens-container",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "firelens"
    }
  }
},
{
  "name": "app",
  "image": "your-app-image",
  "essential": true,
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "s3",
      "region": "us-east-1",
      "bucket": "your-log-bucket",
      "total_file_size": "10M",
      "upload_timeout": "1m",
      "s3_key_format": "/logs/%Y/%m/%d/%H_%M_%S",
      "store_dir": "/tmp/fluent-bit/s3"
    }
  }
}
]
}&lt;/li&gt;
&lt;/ol&gt;
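&lt;p&gt;If you template your task definitions in code, the app container's awsfirelens block from the example above can be generated programmatically. A minimal sketch (bucket name and region are the placeholders used above):&lt;/p&gt;

```python
import json

def firelens_s3_log_config(bucket, region):
    """Build the awsfirelens logConfiguration block used in the task definition above."""
    return {
        "logDriver": "awsfirelens",
        "options": {
            "Name": "s3",                 # Fluent Bit S3 output plugin
            "region": region,
            "bucket": bucket,
            "total_file_size": "10M",
            "upload_timeout": "1m",
            "s3_key_format": "/logs/%Y/%m/%d/%H_%M_%S",
            "store_dir": "/tmp/fluent-bit/s3",
        },
    }

print(json.dumps(firelens_s3_log_config("your-log-bucket", "us-east-1"), indent=2))
```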

&lt;p&gt;Key Configuration Options&lt;br&gt;
S3 Output Plugin Options:&lt;/p&gt;

&lt;p&gt;bucket: Your S3 bucket name&lt;br&gt;
region: AWS region&lt;br&gt;
total_file_size: Size of file before uploading (e.g., "10M")&lt;br&gt;
upload_timeout: How often to upload (e.g., "1m")&lt;br&gt;
s3_key_format: Path structure in S3 (supports time formatting and tags)&lt;br&gt;
store_dir: Temporary storage location&lt;/p&gt;

&lt;p&gt;Useful s3_key_format variables:&lt;br&gt;
%Y/%m/%d: Date formatting&lt;br&gt;
%H_%M_%S: Time formatting&lt;/p&gt;
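&lt;p&gt;These are strftime-style directives, which Fluent Bit expands at upload time. Python's datetime uses the same directives, so you can preview the object key a given configuration will produce:&lt;/p&gt;

```python
from datetime import datetime, timezone

KEY_FORMAT = "/logs/%Y/%m/%d/%H_%M_%S"  # same s3_key_format as in the task definition

def preview_s3_key(fmt, when):
    """Render the S3 object key Fluent Bit would produce for a given upload time."""
    return when.strftime(fmt)

ts = datetime(2025, 12, 24, 14, 30, 5, tzinfo=timezone.utc)
print(preview_s3_key(KEY_FORMAT, ts))  # /logs/2025/12/24/14_30_05
```

&lt;p&gt;The date-based prefixes matter later: they are what let Athena prune data by S3 path or partition instead of scanning the whole bucket.&lt;/p&gt;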

&lt;ol&gt;
&lt;li&gt;Deploy Your Task
Deploy the updated task definition to your ECS Fargate service. FireLens will automatically:&lt;br&gt;
⦁ Capture logs from the application container&lt;br&gt;
⦁ Buffer them in the log router container&lt;br&gt;
⦁ Stream them to S3 based on the configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After streaming the logs to the S3 bucket, we run into another issue: how to view the logs and check for application errors, including 4xx and 5xx responses.&lt;/p&gt;

&lt;p&gt;AWS gives us another service for this: Amazon Athena. Amazon Athena is a serverless, interactive query service in AWS that lets you analyze large datasets directly in Amazon S3 using standard SQL, without needing to load data into a database. It's known for its simplicity (just point to data, define a schema, and query), pay-per-query cost model (you only pay for data scanned), and speed, making it ideal for ad-hoc analysis, log analysis, and exploring data lakes. We will use Amazon Athena to query the logs in the S3 bucket using standard SQL.&lt;br&gt;
Steps to use Athena to view the logs&lt;br&gt;
Step 1&lt;br&gt;
-- Create database&lt;br&gt;
CREATE DATABASE IF NOT EXISTS ecs_logs_db;&lt;/p&gt;

&lt;p&gt;-- Create table for JSON logs&lt;br&gt;
CREATE EXTERNAL TABLE IF NOT EXISTS ecs_logs_db.log_table (&lt;br&gt;
  log STRING,&lt;br&gt;
  container_id STRING,&lt;br&gt;
  container_name STRING,&lt;br&gt;
  ecs_cluster STRING,&lt;br&gt;
  ecs_task_arn STRING,&lt;br&gt;
  ecs_task_definition STRING,&lt;br&gt;
  source STRING,&lt;br&gt;
  time STRING&lt;br&gt;
)&lt;br&gt;
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'&lt;br&gt;
WITH SERDEPROPERTIES (&lt;br&gt;
  'ignore.malformed.json' = 'true'&lt;br&gt;
)&lt;br&gt;
LOCATION 's3://ecs-logs-container/logs/'&lt;br&gt;
TBLPROPERTIES ('has_encrypted_data'='false');&lt;/p&gt;

&lt;p&gt;If your logs are plain text (not JSON):&lt;br&gt;
-- Create database&lt;br&gt;
CREATE DATABASE IF NOT EXISTS ecs_logs_db;&lt;/p&gt;

&lt;p&gt;-- Create table for plain text logs&lt;br&gt;
CREATE EXTERNAL TABLE IF NOT EXISTS ecs_logs_db.log_table (&lt;br&gt;
  log_line STRING&lt;br&gt;
)&lt;br&gt;
ROW FORMAT DELIMITED&lt;br&gt;
FIELDS TERMINATED BY '\n'&lt;br&gt;
STORED AS TEXTFILE&lt;br&gt;
LOCATION 's3://ecs-logs-container/logs/';&lt;/p&gt;

&lt;p&gt;Step 2: Query the Logs&lt;br&gt;
Basic Search for Errors:&lt;/p&gt;

&lt;p&gt;SELECT * &lt;br&gt;
FROM ecs_logs_db.log_table&lt;br&gt;
WHERE log LIKE '%error%' &lt;br&gt;
   OR log LIKE '%ERROR%'&lt;br&gt;
   OR log LIKE '%exception%'&lt;br&gt;
LIMIT 100;&lt;/p&gt;
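&lt;p&gt;Since Athena charges per GB scanned, it can be worth sanity-checking your substring patterns locally before running them at scale. The LIKE predicate above is a plain substring match, which is easy to reproduce over sample lines (the log lines below are made up):&lt;/p&gt;

```python
PATTERNS = ("error", "ERROR", "exception")  # same substrings as the Athena query

def matches_error(line):
    """Mimic the LIKE '%error%' OR LIKE '%ERROR%' OR LIKE '%exception%' predicate."""
    return any(p in line for p in PATTERNS)

sample = [
    "GET /health 200 OK",
    "Unhandled exception in worker",
    "ERROR: connection refused",
]
hits = [line for line in sample if matches_error(line)]
print(hits)  # ['Unhandled exception in worker', 'ERROR: connection refused']
```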

&lt;p&gt;Search with Time Filter (using S3 path partitions):&lt;/p&gt;

&lt;p&gt;SELECT * &lt;br&gt;
FROM ecs_logs_db.log_table&lt;br&gt;
WHERE log LIKE '%error%'&lt;br&gt;
  AND "$path" LIKE '%2025/12/24%'&lt;br&gt;
LIMIT 100;&lt;/p&gt;

&lt;p&gt;Count Errors by Pattern:&lt;/p&gt;

&lt;p&gt;SELECT &lt;br&gt;
  CASE &lt;br&gt;
    WHEN log LIKE '%404%' THEN '404 Error'&lt;br&gt;
    WHEN log LIKE '%500%' THEN '500 Error'&lt;br&gt;
    WHEN log LIKE '%exception%' THEN 'Exception'&lt;br&gt;
    ELSE 'Other Error'&lt;br&gt;
  END AS error_type,&lt;br&gt;
  COUNT(*) AS error_count&lt;br&gt;
FROM ecs_logs_db.log_table&lt;br&gt;
WHERE log LIKE '%error%' OR log LIKE '%exception%'&lt;br&gt;
GROUP BY 1&lt;br&gt;
ORDER BY error_count DESC;&lt;/p&gt;

&lt;p&gt;Search for Specific Text:&lt;/p&gt;

&lt;p&gt;SELECT * &lt;br&gt;
FROM ecs_logs_db.log_table &lt;br&gt;
WHERE log LIKE '%AccessDenied%'&lt;br&gt;
LIMIT 50;&lt;/p&gt;

&lt;p&gt;Get Recent Logs:&lt;br&gt;
SELECT * &lt;br&gt;
FROM ecs_logs_db.log_table &lt;br&gt;
WHERE "$path" LIKE '%2025/12/24%'&lt;br&gt;
ORDER BY "$path" DESC&lt;br&gt;
LIMIT 100;&lt;/p&gt;

&lt;p&gt;For better performance, partition the table by date:&lt;br&gt;
-- Drop the old table&lt;br&gt;
DROP TABLE IF EXISTS ecs_logs_db.log_table;&lt;/p&gt;

&lt;p&gt;-- Create partitioned table&lt;br&gt;
CREATE EXTERNAL TABLE IF NOT EXISTS ecs_logs_db.log_table (&lt;br&gt;
  log_line STRING&lt;br&gt;
)&lt;br&gt;
PARTITIONED BY (&lt;br&gt;
  year STRING,&lt;br&gt;
  month STRING,&lt;br&gt;
  day STRING&lt;br&gt;
)&lt;br&gt;
ROW FORMAT DELIMITED&lt;br&gt;
FIELDS TERMINATED BY '\n'&lt;br&gt;
STORED AS TEXTFILE&lt;br&gt;
LOCATION 's3://ecs-logs-container/logs/';&lt;/p&gt;

&lt;p&gt;-- Add partitions&lt;br&gt;
ALTER TABLE ecs_logs_db.log_table ADD&lt;br&gt;
  PARTITION (year='2025', month='12', day='24')&lt;br&gt;
  LOCATION 's3://ecs-logs-container/logs/2025/12/24/';&lt;/p&gt;
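&lt;p&gt;Adding partitions one day at a time gets tedious. A small helper can generate the ALTER TABLE statements for a run of days (the table and bucket names are the example ones used above):&lt;/p&gt;

```python
from datetime import date, timedelta

def partition_ddl(start, days, table="ecs_logs_db.log_table",
                  base="s3://ecs-logs-container/logs"):
    """Yield one ALTER TABLE ... ADD PARTITION statement per day, starting at start."""
    for offset in range(days):
        day = start + timedelta(days=offset)
        y, m, d = f"{day.year:04d}", f"{day.month:02d}", f"{day.day:02d}"
        yield (
            f"ALTER TABLE {table} ADD "
            f"PARTITION (year='{y}', month='{m}', day='{d}') "
            f"LOCATION '{base}/{y}/{m}/{d}/';"
        )

for stmt in partition_ddl(date(2025, 12, 24), 3):
    print(stmt)
```

&lt;p&gt;You would then paste the generated statements into the Athena query editor, or run them via the Athena API from a scheduled job.&lt;/p&gt;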

&lt;p&gt;Then query with partitions:&lt;/p&gt;

&lt;p&gt;SELECT * &lt;br&gt;
FROM ecs_logs_db.log_table&lt;br&gt;
WHERE year='2025' &lt;br&gt;
  AND month='12' &lt;br&gt;
  AND day='24'&lt;br&gt;
  AND log_line LIKE '%error%'&lt;br&gt;
LIMIT 100;&lt;/p&gt;

&lt;p&gt;I wrote this article because CloudWatch costs caught me off guard on a previous project. What started as a few dollars quickly grew into a noticeable line item on our AWS bill. After digging into the details and testing different approaches, I uncovered several strategies that significantly reduced our logging costs without sacrificing visibility into our ECS services.&lt;/p&gt;

&lt;p&gt;I hope that sharing these lessons helps you save both time and money, whether you’re just setting up ECS monitoring or optimizing an existing setup.&lt;/p&gt;

&lt;p&gt;I would love to hear your experience with CloudWatch costs on ECS. Have you found other optimization strategies that worked well? Feel free to drop your questions or insights in the comments— I am happy to discuss specific scenarios, and your input may help others facing similar challenges.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Jenkins: The Legacy CI that shaped Modern DevOps</title>
      <dc:creator>Kadiri George</dc:creator>
      <pubDate>Sat, 06 Dec 2025 13:40:58 +0000</pubDate>
      <link>https://dev.to/khadree12/jenkins-the-legacy-ci-that-shaped-modern-devops-44ao</link>
      <guid>https://dev.to/khadree12/jenkins-the-legacy-ci-that-shaped-modern-devops-44ao</guid>
      <description>&lt;p&gt;If you’ve been in DevOps for a while, chances are your CI/CD journey started with Jenkins. For me, it started with AWS CodePipeline, which is proprietary to AWS. It is very easy to use and set up, but it lacks customization and integrations with third-party tools.&lt;/p&gt;

&lt;p&gt;When I started using Jenkins and was able to customize my pipeline and integrate with other tools just by downloading their plugins - it felt like magic.&lt;/p&gt;

&lt;p&gt;Jenkins earned its reputation as the go-to CI tool because it was open-source, flexible, and had a plugin for almost everything. Entire engineering teams built their automation practices on top of it, and for years, it was the backbone of software delivery pipelines. Jenkins might be a legacy CI tool, but it still shapes today’s modern DevOps world.&lt;/p&gt;

&lt;p&gt;But with that flexibility came challenges. Managing plugins, maintaining servers, and scaling Jenkins across teams often felt like a job of its own. Jenkins was designed before containers and cloud-native patterns became the norm, which makes it feel more “legacy” in today’s fast-moving DevOps world.&lt;/p&gt;

&lt;p&gt;Now, with tools like GitHub Actions, GitLab CI/CD, and cloud-native solutions like ArgoCD, teams can build pipelines that are easier to manage, faster to scale, and better aligned with modern workflows.&lt;/p&gt;

&lt;p&gt;That said, Jenkins still has a place in many organizations — especially where deep customization and legacy workloads exist. It’s hard to ignore the impact it’s had on DevOps as a practice.&lt;/p&gt;

&lt;p&gt;I’m curious: is Jenkins still part of your CI/CD setup, or have you fully transitioned to newer tools?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS CLOUDWATCH VS UNIFIED CLOUDWATCH AGENT</title>
      <dc:creator>Kadiri George</dc:creator>
      <pubDate>Sun, 05 Jan 2025 21:22:58 +0000</pubDate>
      <link>https://dev.to/khadree12/aws-cloudwatch-vs-unified-cloudwatch-agent-4clm</link>
      <guid>https://dev.to/khadree12/aws-cloudwatch-vs-unified-cloudwatch-agent-4clm</guid>
      <description>&lt;p&gt;&lt;strong&gt;Amazon CloudWatch&lt;/strong&gt; provides a reliable, scalable, and flexible monitoring solution that you can start using within minutes. You no longer need to set up, manage, and scale your own monitoring systems and infrastructure.&lt;/p&gt;

&lt;p&gt;CloudWatch Metrics for EC2&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;     AWS-provided metrics (AWS pushes them)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;·        Basic monitoring (default): metrics are collected at 5-minute intervals.&lt;/p&gt;

&lt;p&gt;·        Detailed monitoring (paid): metrics are collected at 1-minute intervals.&lt;/p&gt;

&lt;p&gt;Metrics include CPU, network, disk, and status checks.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;     Custom metrics (yours to push)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;·        Basic resolution: 1-minute resolution.&lt;/p&gt;

&lt;p&gt;·        High resolution: down to 1-second resolution.&lt;/p&gt;

&lt;p&gt;·        Includes RAM and application-level metrics.&lt;/p&gt;

&lt;p&gt;·        Make sure the IAM permissions on the EC2 instance role are correct.&lt;/p&gt;

&lt;p&gt;EC2 Included Metrics&lt;/p&gt;

&lt;p&gt;· CPU: includes CPU utilization, credit usage, and balance.&lt;/p&gt;

&lt;p&gt;· Network: includes network in/out of the instance.&lt;/p&gt;

&lt;p&gt;· Status check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Instance status: checks the health of the EC2 virtual machine itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;System status: checks the underlying hardware.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;· Disk: includes read/write Ops/Bytes (instance store only).&lt;/p&gt;

&lt;p&gt;Note: RAM is not included in the AWS-provided EC2 metrics. To view EC2 RAM metrics, you need the Unified CloudWatch Agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unified CloudWatch Agent&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is used to collect additional system-level metrics such as RAM, processes, and used disk space from EC2 instances and on-premises servers. The collected logs are sent to CloudWatch Logs; by default, no logs inside your EC2 instance are sent to CloudWatch Logs without an agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unified CloudWatch Agent – procstat plugin&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It collects metrics and monitors system utilization of individual processes, and supports both Linux and Windows servers. Examples: the amount of time a process uses the CPU, the amount of memory a process uses, and so on.&lt;/p&gt;

&lt;p&gt;Steps to collect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent.&lt;/p&gt;

&lt;p&gt;A.     Attach the appropriate policies to your EC2 instance role:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   CloudWatchAgentAdminPolicy
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   CloudWatchAgentServerPolicy
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;B.     Connect to your EC2 instance and run the following commands.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the agent with sudo yum install amazon-cloudwatch-agent, then run the configuration wizard with sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard. Follow the prompts and accept the defaults wherever you are unsure what to select. Then specify your log file paths, e.g. /var/log/httpd/access_log and /var/log/httpd/error_log. I specify these paths because I am running httpd on my server; yours may differ if you are running another web server such as Nginx. You can also save the config file in AWS SSM Parameter Store for future use; the CloudWatchAgentAdminPolicy on your instance role allows writing to Parameter Store.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use this command to apply the config stored in SSM Parameter Store on a new instance.&lt;/p&gt;

&lt;p&gt;sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s. Then check the CloudWatch log groups to find access_log and error_log and start monitoring, and check CloudWatch metrics under the CWAgent namespace for the other metrics you can monitor.&lt;/p&gt;

&lt;p&gt;Thanks for reading.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>WORKING WITH AWS CLOUD DEVELOPMENT KIT (CDK)</title>
      <dc:creator>Kadiri George</dc:creator>
      <pubDate>Sun, 03 Nov 2024 13:43:54 +0000</pubDate>
      <link>https://dev.to/khadree12/working-with-aws-cloud-development-kit-cdk-13cd</link>
      <guid>https://dev.to/khadree12/working-with-aws-cloud-development-kit-cdk-13cd</guid>
      <description>&lt;p&gt;&lt;strong&gt;AWS CDK&lt;/strong&gt; is an open-source software development framework used to model and provision your cloud application resources with familiar programming languages such as Python, Java, .NET, Go, and TypeScript.&lt;br&gt;
Working with AWS CDK lets you deploy infrastructure as code in your favorite programming language instead of writing CloudFormation directly, which requires either JSON or YAML.&lt;br&gt;
When using TypeScript for the CDK, you need to install Node.js on your machine.&lt;br&gt;
CDK Building Blocks&lt;br&gt;
&lt;strong&gt;cdk init&lt;/strong&gt; – e.g. cdk init app --language typescript. It initializes the project with the preferred programming language, in this case TypeScript.&lt;br&gt;
&lt;strong&gt;cdk bootstrap&lt;/strong&gt; – Run once per AWS account (and region) to prepare it for CDK deployments.&lt;br&gt;
&lt;strong&gt;cdk deploy&lt;/strong&gt; – Deploy this stack to your default AWS account.&lt;br&gt;
&lt;strong&gt;cdk destroy&lt;/strong&gt; – Destroy this stack in your default AWS account.&lt;br&gt;
&lt;strong&gt;cdk diff&lt;/strong&gt; – Compare deployed stack with current stack.&lt;br&gt;
&lt;strong&gt;cdk synth&lt;/strong&gt; - Synthesizes and prints the CloudFormation template for one or more specified stacks.&lt;br&gt;
With just a line of code I was able to deploy an S3 bucket into my account.&lt;br&gt;
const bucket = new s3.Bucket(this, 'MyFirstBucket', { bucketName: 'myserverless-kadiri' });&lt;br&gt;
CDK brings your infrastructure and business logic together in one codebase. It automates the provisioning of AWS services with constructs.&lt;br&gt;
I was able to use AWS CDK to deploy an EC2 instance running a Python application, connect it to my GitHub repository, and deploy with CodePipeline, CodeBuild, and CodeDeploy. With all the resources I created, I was not worried about the cost on AWS, because with cdk destroy I can delete every resource created. It can also be helpful for a staging environment that is torn down every night and recreated every morning to save cost.&lt;br&gt;
Useful links to get started with AWS CDK:&lt;br&gt;
&lt;a href="https://lnkd.in/gTDaGcBi" rel="noopener noreferrer"&gt;https://lnkd.in/gTDaGcBi&lt;/a&gt;&lt;br&gt;
&lt;a href="https://lnkd.in/gCU23YgC" rel="noopener noreferrer"&gt;https://lnkd.in/gCU23YgC&lt;/a&gt;&lt;br&gt;
Thanks for reading.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
