<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniele Baggio</title>
    <description>The latest articles on DEV Community by Daniele Baggio (@dbanieles).</description>
    <link>https://dev.to/dbanieles</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1252715%2Faa5cd0b8-dc17-4cb9-aa58-a804e4b1de6c.jpg</url>
      <title>DEV Community: Daniele Baggio</title>
      <link>https://dev.to/dbanieles</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dbanieles"/>
    <language>en</language>
    <item>
      <title>ECS Orchestration Part 4: Monitoring</title>
      <dc:creator>Daniele Baggio</dc:creator>
      <pubDate>Wed, 13 Nov 2024 10:49:18 +0000</pubDate>
      <link>https://dev.to/dbanieles/ecs-orchestration-part-4-monitoring-3f67</link>
      <guid>https://dev.to/dbanieles/ecs-orchestration-part-4-monitoring-3f67</guid>
      <description>&lt;p&gt;This post is about monitoring an ECS cluster, if you want to learn more about container orchestration with ECS you can see &lt;a href="https://dev.to/dbanieles/ecs-orchestration-part-1-choosing-a-network-mode-47ba"&gt;Part 1&lt;/a&gt;, &lt;a href="https://dev.to/dbanieles/ecs-orchestration-part-2-service-to-service-comunication-576k"&gt;Part 2&lt;/a&gt;, &lt;a href="https://dev.to/dbanieles/ecs-orchestration-part-3-autoscaling-2am6"&gt;Part 3&lt;/a&gt;. Let's start by saying the monitoring an Amazon ECS (Elastic Container Service) cluster is essential for tracking resource utilization, performance, and health of your containerized applications. In ECS, monitoring focuses on aspects like CPU and memory utilization, task and container statuses, and network traffic. Amazon CloudWatch is commonly used to monitor ECS clusters by providing metrics, logs, and alarms for observability.&lt;/p&gt;

&lt;p&gt;Key ECS Monitoring Components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Container Insights: A feature in CloudWatch that provides more granular metrics and analysis on ECS performance.&lt;/li&gt;
&lt;li&gt;CloudWatch Logs: Captures logs from ECS tasks and containers, essential for debugging and tracking application behavior.&lt;/li&gt;
&lt;li&gt;CloudWatch Metrics: These are built-in metrics for CPU, memory, and other resources.&lt;/li&gt;
&lt;li&gt;CloudWatch Alarms: Alerts based on metrics, allowing proactive responses to scaling or failures.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Setting Up Monitoring for ECS Using Terraform&lt;/strong&gt;&lt;br&gt;
Now let us see how to configure monitoring for an ECS cluster using Terraform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
AWS EC2 and AWS Auto Scaling do not natively publish memory metrics (such as Memory Utilization); the built-in CloudWatch metrics cover only CPU utilization, network in/out, and similar. To collect memory metrics, you'll need to install and configure the CloudWatch Agent on your EC2 instances. If you're using an Amazon Machine Image (AMI) that doesn't have the agent pre-installed, you can add it via a user data script in your Auto Scaling Group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Install the CloudWatch Agent
sudo yum install -y amazon-cloudwatch-agent

# Update package list and install CloudWatch Agent on Ubuntu
sudo apt-get update
sudo apt-get install -y amazon-cloudwatch-agent

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;1. Enable ECS Container Insights in Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Container Insights in ECS provides metrics such as memory and CPU utilization at both the cluster and service levels. You can enable Container Insights directly when creating the ECS cluster in Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_cluster" "ecs_cluste" {
  name = "my-cluster"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once enabled, you can view memory usage per container/task and set CloudWatch Alarms based on Container Insights metrics. This can provide insights into container resource usage and help set thresholds for scaling policies.&lt;/p&gt;
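
&lt;p&gt;As a minimal sketch (assuming Container Insights is enabled and using the placeholder cluster and service names below), an alarm on a Container Insights metric might look like this. Note that Container Insights metrics live in the ECS/ContainerInsights namespace and MemoryUtilized is reported in MiB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_metric_alarm" "service_memory_alarm" {
  alarm_name          = "service-memory-utilized"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "MemoryUtilized"
  namespace           = "ECS/ContainerInsights"
  period              = 60
  statistic           = "Average"
  threshold           = 400  # MiB, adjust to your task size

  dimensions = {
    ClusterName = "my-cluster"
    ServiceName = "my-service"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;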

&lt;p&gt;&lt;strong&gt;2. Configure CloudWatch Logs for ECS Tasks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To capture logs from ECS tasks, create a CloudWatch log group to which each container writes its logs. Then configure the ECS task definition to send container logs to this group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_log_group" "ecs_task_logs" {
  name              = "/ecs/my-task"
  retention_in_days = 7
}

resource "aws_ecs_task_definition" "task_definition" {
  family                   = "my-task"
  network_mode             = "awsvpc"
  # Note: on Fargate, also set requires_compatibilities, task-level cpu/memory,
  # and an execution_role_arn that can write to CloudWatch Logs
  container_definitions    = jsonencode([
    {
      name      = "app-container",
      image     = "nginx:latest",
      cpu       = 256,
      memory    = 512,
      essential = true,
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.ecs_task_logs.name
          "awslogs-region"        = "eu-west-1"
          "awslogs-stream-prefix" = "ecs"
        }
      }
    }
  ])
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup creates a log group and configures each ECS task container to send logs to CloudWatch. The log retention period is set to 7 days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Create CloudWatch Alarms for ECS Metrics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can configure CloudWatch alarms on key ECS metrics to trigger notifications or actions based on thresholds. For example, you might set up alarms for high CPU or memory usage in your ECS service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_metric_alarm" "cpu_alarm" {
  alarm_name          = "high_cpu_alarm"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = 60
  statistic           = "Average"
  threshold           = 80
  alarm_description   = "Triggered when CPU utilization exceeds 80%"

  dimensions = {
    ClusterName = aws_ecs_cluster.ecs_cluster.name
  }

  alarm_actions = [aws_sns_topic.alerts.arn]
}

resource "aws_cloudwatch_metric_alarm" "memory_alarm" {
  alarm_name          = "high_memory_alarm"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "MemoryUtilization"
  namespace           = "AWS/ECS"
  period              = 60
  statistic           = "Average"
  threshold           = 80
  alarm_description   = "Triggered when memory utilization exceeds 80%"

  dimensions = {
    ClusterName = aws_ecs_cluster.ecs_cluster.name
  }

  alarm_actions = [aws_sns_topic.alerts.arn]
}

resource "aws_sns_topic" "alerts" {
  name = "ecs_alerts"
}

resource "aws_sns_topic_subscription" "alert_subscription" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = "your-email@example.com"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the CloudWatch alarms monitor CPU and memory utilization on the ECS cluster and fire when either metric stays above 80% for two consecutive 60-second periods. Each alarm sends a notification to an SNS topic configured to deliver email alerts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Set Up Detailed ECS Monitoring with CloudWatch Dashboards&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can use CloudWatch Dashboards to visualize metrics for ECS services and clusters. With Terraform, you can define custom dashboards that show CPU and memory metrics for quick, real-time monitoring.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_dashboard" "ecs_dashboard" {
  dashboard_name = "ECS-Dashboard"
  dashboard_body = jsonencode({
    widgets = [
      {
        type = "metric",
        x    = 0,
        y    = 0,
        width = 6,
        height = 6,
        properties = {
          metrics = [
            ["AWS/ECS", "CPUUtilization", "ClusterName", aws_ecs_cluster.ecs_cluster.name],
            ["AWS/ECS", "MemoryUtilization", "ClusterName", aws_ecs_cluster.ecs_cluster.name]
          ]
          title = "ECS Cluster CPU and Memory Utilization"
          view = "timeSeries"
          stacked = false
          region = "us-west-2"
          period = 300
          stat = "Average"
        }
      }
    ]
  })
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This dashboard contains a widget showing CPU and memory utilization for the ECS cluster. You can customize the dashboard to display metrics for specific services, tasks, or additional resources in your ECS cluster.&lt;/p&gt;
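
&lt;p&gt;For instance, to chart a single service rather than the whole cluster, a widget's metrics entries can add the ServiceName dimension (the names below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;metrics = [
  ["AWS/ECS", "CPUUtilization",    "ClusterName", "my-cluster", "ServiceName", "my-service"],
  ["AWS/ECS", "MemoryUtilization", "ClusterName", "my-cluster", "ServiceName", "my-service"]
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;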

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable Container Insights to get granular metrics on your ECS cluster and services.&lt;/li&gt;
&lt;li&gt;Set Up CloudWatch Logs to capture ECS task logs and make debugging easier.&lt;/li&gt;
&lt;li&gt;Create CloudWatch Alarms for proactive alerts on resource usage, task health, and other custom metrics.&lt;/li&gt;
&lt;li&gt;Use CloudWatch Dashboards for real-time visual monitoring of ECS cluster and service performance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By setting up these components with Terraform, you achieve consistent and automated monitoring, giving you insight into the performance and health of your ECS cluster and services. This configuration is especially useful in production environments where proactive monitoring is essential for maintaining application uptime and resource efficiency.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecs</category>
      <category>monitoring</category>
      <category>metrics</category>
    </item>
    <item>
      <title>ECS Orchestration Part 3: Autoscaling</title>
      <dc:creator>Daniele Baggio</dc:creator>
      <pubDate>Wed, 13 Nov 2024 10:48:44 +0000</pubDate>
      <link>https://dev.to/dbanieles/ecs-orchestration-part-3-autoscaling-2am6</link>
      <guid>https://dev.to/dbanieles/ecs-orchestration-part-3-autoscaling-2am6</guid>
      <description>&lt;p&gt;This post describes the main types of autoscaling ECS and how to configure them via terraform , providing some examples from which to take inspiration. If you want to learn more about ECS container orchestration you can look at previous articles (&lt;a href="https://dev.to/dbanieles/ecs-orchestration-part-1-choosing-a-network-mode-47ba"&gt;Part 1&lt;/a&gt;, &lt;a href="https://dev.to/dbanieles/ecs-orchestration-part-2-service-to-service-comunication-576k"&gt;Part 2&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Amazon ECS offers several autoscaling mechanisms to handle varying workloads for containerized applications. Each type of scaling targets different aspects of the infrastructure to ensure your application remains responsive under load. The primary types of autoscaling in ECS include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service Autoscaling: Adjusts the number of task instances within a specific ECS service.&lt;/li&gt;
&lt;li&gt;Cluster Autoscaling (Managed Scaling): Modifies the number of EC2 instances (hosts) in an ECS cluster when running EC2-backed clusters.&lt;/li&gt;
&lt;li&gt;Target Tracking and Step Scaling Policies: Offers two main policy types to control scaling behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Service Autoscaling&lt;/strong&gt;&lt;br&gt;
Service Autoscaling automatically scales the number of tasks in an ECS service to meet demand. This is helpful for applications with variable workloads where you want the service to scale automatically based on CPU, memory, or custom CloudWatch metrics.&lt;/p&gt;

&lt;p&gt;Terraform Configuration for Service Autoscaling&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define the ECS Service: First, define your ECS service with its properties.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create Autoscaling Policies: Use Terraform to define the target tracking or step scaling policies for your ECS service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach Autoscaling to ECS Service: Link the ECS service to the autoscaling policy.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_cluster" "ecs_cluster" {
  name = "example-cluster"
}

resource "aws_ecs_service" "ecs_service" {
  name            = "my-service"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.ecs_service_task.arn
  desired_count   = 1
}

# Min/max capacity lives on the App Auto Scaling target,
# not on the ECS service itself
resource "aws_appautoscaling_target" "app_scaling" {
  max_capacity       = 10
  min_capacity       = 1
  resource_id        = "service/${aws_ecs_cluster.ecs_cluster.name}/${aws_ecs_service.ecs_service.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  service_namespace  = "ecs"
}

resource "aws_appautoscaling_policy" "cpu_scaling_policy" {
  name               = "cpu-policy"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.app_scaling.resource_id
  scalable_dimension = aws_appautoscaling_target.app_scaling.scalable_dimension
  service_namespace  = aws_appautoscaling_target.app_scaling.service_namespace

  target_tracking_scaling_policy_configuration {
        target_value = 50
        customized_metric_specification {
            metric_name = "CPUUtilization"
            namespace   = "AWS/ECS"
            statistic   = "Average"
            unit        = "Percent"

            dimensions {
                name  = "ClusterName"
                value = aws_ecs_cluster.ecs_cluster.name
            }

            dimensions {
                name  = "ServiceName"
                value = aws_ecs_service.ecs_service.name
            }
        }
    }
}

resource "aws_appautoscaling_policy" "memory_scaling_policy" {
  name               = "memory-policy"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.app_scaling.resource_id
  scalable_dimension = aws_appautoscaling_target.app_scaling.scalable_dimension
  service_namespace  = aws_appautoscaling_target.app_scaling.service_namespace

 target_tracking_scaling_policy_configuration {
        target_value       = 50
        scale_in_cooldown  = 240
        scale_out_cooldown = 240

        customized_metric_specification {
            metric_name = "MemoryUtilization"
            namespace   = "AWS/ECS"
            statistic   = "Average"
            unit        = "Percent"

            dimensions {
                name  = "ClusterName"
                value = aws_ecs_cluster.ecs_cluster.name
            }

            dimensions {
                name  = "ServiceName"
                value = aws_ecs_service.ecs_service.name
            }
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the ECS service scales on CPU and memory usage, keeping both around 50% utilization.&lt;br&gt;
To scale on memory metrics, first see this &lt;a href="https://dev.to/dbanieles/ecs-orchestration-part-4-monitoring-3f67"&gt;post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster Autoscaling&lt;/strong&gt;&lt;br&gt;
Cluster Autoscaling automatically manages the number of EC2 instances within an ECS cluster. This is essential for EC2-backed clusters where additional hosts may be required based on task placement and resource needs.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Define an Auto Scaling Group (ASG): Specify an ASG for your EC2 instances. The ASG handles the scaling of the ECS cluster itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable Managed Scaling for the ECS Cluster: Use the aws_ecs_cluster resource to define the cluster with managed scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure CloudWatch Alarms: CloudWatch alarms are necessary for scaling based on memory or CPU usage thresholds.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_cluster" "my_cluster" {
  name = "example-ecs-cluster"
}

data "aws_ami" "ecs_optimized" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-ecs-hvm-*-x86_64-ebs"]
  }
}

resource "aws_iam_role" "ecs_instance_role" {
  name = "ecsInstanceRole"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "Service" : "ec2.amazonaws.com"
        },
        "Action" : "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_instance_policy" {
  role       = aws_iam_role.ecs_instance_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance_profile" {
  name = "ecsInstanceProfile"
  role = aws_iam_role.ecs_instance_role.name
}

resource "aws_launch_template" "ecs_launch_template" {
  name_prefix   = "ecs-instance-"
  image_id      = data.aws_ami.ecs_optimized.id
  instance_type = "t3.small"


  user_data = base64encode(&lt;&lt;EOF
#!/bin/bash
echo "ECS_CLUSTER=${aws_ecs_cluster.my_cluster.name}" &gt;&gt; /etc/ecs/ecs.config
EOF
  )

  iam_instance_profile {
    name = aws_iam_instance_profile.ecs_instance_profile.name
  }
}

resource "aws_autoscaling_group" "ecs_asg" {
  desired_capacity     = 1
  min_size             = 1
  max_size             = 5
  # launch_template is a block, not an assignment
  launch_template {
    id      = aws_launch_template.ecs_launch_template.id
    version = "$Latest"
  }
  vpc_zone_identifier  = ["subnet-1", "subnet-2"]

  tag {
    key                 = "Name"
    value               = "ECS Instance"
    propagate_at_launch = true
  }

  tag {
    key                 = "AmazonECSManaged"
    value               = "true"
    propagate_at_launch = true
  }
}

resource "aws_ecs_capacity_provider" "ecs_capacity_provider" {
  name = "example-ecs-capacity-provider"

  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.ecs_asg.arn
    managed_scaling {
      status                    = "ENABLED"
      target_capacity           = 75
      minimum_scaling_step_size = 1
      maximum_scaling_step_size = 4
    }
    managed_termination_protection = "ENABLED"  # Protects tasks from ASG scale-in; requires protect_from_scale_in = true on the ASG
  }
}

resource "aws_ecs_cluster_capacity_providers" "ecs_cluster_capacity_providers" {
  cluster_name = aws_ecs_cluster.my_cluster.name
  capacity_providers = [
    aws_ecs_capacity_provider.ecs_capacity_provider.name
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration sets up an EC2-backed ECS cluster with cluster autoscaling. Modify the desired_capacity, max_size, and min_size to define how the ASG scales based on resource demands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Target Tracking vs Step Scaling Policies&lt;/strong&gt;&lt;br&gt;
Both Target Tracking and Step Scaling Policies control how scaling occurs based on specific metrics.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Target Tracking tries to keep a specified metric at a defined target level. AWS will automatically adjust resources up or down to maintain this target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Step Scaling adjusts the capacity in steps based on specified thresholds. For example, you can set multiple alarms that trigger different scaling actions based on how far the current value is from the threshold.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_appautoscaling_target" "autoscaling_target" {
    max_capacity       = 1
    min_capacity       = 5
    resource_id        = "service/my-service/${aws_ecs_service.service.name}"
    scalable_dimension = "ecs:service:DesiredCount"
    service_namespace = "ecs"
}

# Use this configuration for Step scaling
resource "aws_appautoscaling_policy" "step_scaling_policy" {
  name               = "step-scaling-policy"
  policy_type        = "StepScaling"
  resource_id        = aws_appautoscaling_target.autoscaling_target.resource_id
  scalable_dimension = aws_appautoscaling_target.autoscaling_target.scalable_dimension
  service_namespace  = aws_appautoscaling_target.autoscaling_target.service_namespace

  step_scaling_policy_configuration {
    adjustment_type = "ChangeInCapacity"
    cooldown        = 60

    step_adjustment {
      metric_interval_lower_bound = 0
      scaling_adjustment          = 1
    }

    step_adjustment {
      metric_interval_upper_bound = 0
      scaling_adjustment          = -1
    }
  }
}

# Use this configuration for Target tracking scaling
resource "aws_appautoscaling_policy" "autoscaling_policy" {
    name               = "my-service-policy"
    policy_type        = "TargetTrackingScaling"
    resource_id        = aws_appautoscaling_target.autoscaling_target.resource_id
    scalable_dimension = aws_appautoscaling_target.autoscaling_target.scalable_dimension
    service_namespace  = aws_appautoscaling_target.autoscaling_target.service_namespace

    target_tracking_scaling_policy_configuration {
        target_value = 50
        customized_metric_specification {
            metric_name = "CPUUtilization"
            namespace   = "AWS/ECS"
            statistic   = "Average"
            unit        = "Percent"

            dimensions {
                name  = "ClusterName"
                value = "my-cluster"
            }

            dimensions {
                name  = "ServiceName"
                value = "my-service"
            }
        }
    }

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
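
&lt;p&gt;A step scaling policy only acts when a CloudWatch alarm invokes it. As a sketch (the alarm name, threshold, and dimension values below are placeholders), the step scaling policy above could be wired to a CPU alarm like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_metric_alarm" "scale_out_cpu" {
  alarm_name          = "ecs-service-cpu-high"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = 60
  statistic           = "Average"
  threshold           = 70

  dimensions = {
    ClusterName = "my-cluster"
    ServiceName = "my-service"
  }

  # Invoke the step scaling policy when the alarm fires
  alarm_actions = [aws_appautoscaling_policy.step_scaling_policy.arn]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;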



&lt;p&gt;&lt;strong&gt;ECS Capacity Provider&lt;/strong&gt;&lt;br&gt;
It is worth noting that ECS provides a feature called Capacity Providers, which simplifies and automates the management of compute capacity (EC2 instances or AWS Fargate tasks) for running containerized workloads. It provides a way to dynamically scale and manage compute resources based on the needs of your ECS tasks and services.&lt;/p&gt;

&lt;p&gt;How do ECS Capacity Providers work?&lt;/p&gt;

&lt;p&gt;If you use EC2 Capacity Providers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed Scaling: Automatically adjusts the number of EC2 instances in the Auto Scaling group based on the workload.&lt;/li&gt;
&lt;li&gt;Managed Termination Protection: Ensures that ECS tasks running on EC2 instances are not interrupted when scaling in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you use Fargate Capacity Providers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use AWS Fargate to launch tasks without managing servers.&lt;/li&gt;
&lt;li&gt;ECS tasks are scheduled directly onto Fargate.&lt;/li&gt;
&lt;/ul&gt;
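
&lt;p&gt;For Fargate, the built-in FARGATE and FARGATE_SPOT capacity providers can be attached directly to the cluster. A minimal sketch (reusing the cluster defined above; the weights are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_cluster_capacity_providers" "fargate" {
  cluster_name       = aws_ecs_cluster.my_cluster.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  # Keep a baseline of one task on on-demand Fargate,
  # and send most additional tasks to Fargate Spot
  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    base              = 1
    weight            = 1
  }

  default_capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 4
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;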

&lt;p&gt;ECS Capacity Providers are designed to simplify the management of compute resources for ECS clusters, providing flexibility, cost optimization, and scalability for EC2 and Fargate workloads. They are an essential part of modern containerized deployments on AWS ECS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service Autoscaling: Manages tasks within an ECS service. Use Terraform's aws_appautoscaling_target to set up target tracking and policy definitions.&lt;/li&gt;
&lt;li&gt;Cluster Autoscaling: Expands or shrinks the number of EC2 instances. Set up an Auto Scaling Group and link it to the ECS cluster.&lt;/li&gt;
&lt;li&gt;Scaling Policies: TargetTrackingScaling for maintaining a target metric or StepScaling for responding to metric thresholds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using these configurations, you can control and automate your ECS services' behavior based on resource needs, ensuring efficient, cost-effective scaling across various AWS resources.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecs</category>
      <category>containers</category>
      <category>autoscaling</category>
    </item>
    <item>
      <title>ECS Orchestration Part 2: Service-to-Service Communication</title>
      <dc:creator>Daniele Baggio</dc:creator>
      <pubDate>Fri, 27 Sep 2024 15:08:03 +0000</pubDate>
      <link>https://dev.to/dbanieles/ecs-orchestration-part-2-service-to-service-comunication-576k</link>
      <guid>https://dev.to/dbanieles/ecs-orchestration-part-2-service-to-service-comunication-576k</guid>
      <description>&lt;p&gt;The first post in the series was about choosing the correct network type for running containers on ECS (&lt;a href="https://dev.to/dbanieles/ecs-orchestration-part-1-choosing-a-network-mode-47ba"&gt;Part 1&lt;/a&gt;), and in this post we continue with a network-related topic, answering at the following question: &lt;br&gt;
"How can I make the services within the ECS cluster communicate with each other?" &lt;br&gt;
Service-to-service communication refers to the exchange of data and messages between different microservices or containerized applications. In a microservices architecture, different parts of an application are broken down into smaller, more modular services that communicate with each other via APIs. This allows for greater flexibility, scalability, and agility when developing and deploying applications.&lt;br&gt;
For microservices to function properly, they need to be able to communicate with each other reliably and securely. This is where ECS comes in.&lt;br&gt;
ECS offers several ways to enable service-to-service communication in microservices, let us explore the main ways.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon ECS Service Connect&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg4wrg990zj0bocwvuun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg4wrg990zj0bocwvuun.png" alt="Image description" width="681" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ECS Service Connect runs a proxy container as a sidecar alongside each task in your service. The proxy container intercepts outgoing connections and redirects them to the IP address of the requested service.&lt;/p&gt;

&lt;p&gt;Key Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated Service Discovery: Like ECS Service Discovery, it allows services to discover each other via DNS. However, Service Connect also adds load balancing and traffic management.&lt;/li&gt;
&lt;li&gt;Simplified Networking: Abstracts the complexity of service-to-service communication by allowing services to use DNS names like service-name.namespace without managing individual IPs or load balancers.&lt;/li&gt;
&lt;li&gt;Built-in Load Balancing: Automatically balances traffic between service tasks, eliminating the need for manually configuring AWS Load Balancers.&lt;/li&gt;
&lt;li&gt;Automatic Encryption: Traffic between services is encrypted by default, providing a secure communication layer.&lt;/li&gt;
&lt;li&gt;Cross-Cluster Communication: Services can communicate across different ECS clusters.&lt;/li&gt;
&lt;li&gt;No Additional Infrastructure: Unlike ECS Service Discovery, you don’t need to configure and manage load balancers manually. Service Connect handles this behind the scenes.&lt;/li&gt;
&lt;li&gt;Service Mesh: ECS Service Connect introduces service mesh-like capabilities by enabling traffic management, retries, timeouts, and load balancing automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want automated load balancing and traffic management for ECS services.&lt;/li&gt;
&lt;li&gt;You need simplified service communication and don’t want to manage AWS Load Balancers or Route 53 configurations manually.&lt;/li&gt;
&lt;li&gt;Encryption of service-to-service communication is required out of the box.&lt;/li&gt;
&lt;li&gt;Cross-cluster communication between services is needed, and you want to avoid manual configuration.&lt;/li&gt;
&lt;li&gt;Ideal for microservices architectures where services need dynamic communication, load balancing, and secure interconnections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pricing Model&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No additional charges for Service Connect itself: As of now, ECS Service Connect does not incur extra costs beyond the standard ECS service and task pricing. You pay for the underlying ECS resources (like EC2 instances or Fargate) that you are using.&lt;/li&gt;
&lt;li&gt;AWS Data Transfer Charges: You may incur data transfer charges if your services are communicating across different AWS regions or accounts, but intra-region communication is typically free.&lt;/li&gt;
&lt;li&gt;CloudWatch Costs: If you enable CloudWatch metrics for your ECS tasks/services, there may be additional charges based on your monitoring needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key Points&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No separate pricing for Service Connect; it is included with the ECS service.&lt;/li&gt;
&lt;li&gt;Simplified billing as it integrates seamlessly into your existing ECS pricing model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to test Service Connect with Terraform, see the example below:&lt;/p&gt;

&lt;p&gt;Update your ECS service configuration as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecs_service" "nginx_service" {
  name            = "nginx-service"
  cluster         = aws_ecs_cluster.example_cluster.id
  task_definition = aws_ecs_task_definition.nginx_task.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.public[*].id
    security_groups  = [aws_security_group.ecs_service_sg.id]
    assign_public_ip = true
  }

  service_connect_configuration {
    enabled = true

    namespace = "example.local"  # This is the Cloud Map namespace for service discovery

    service {
      port_name      = "http"  # Must match a named portMappings entry in the task definition
      discovery_name = "nginx-service"
      client_alias {
        port     = 80
        dns_name = "nginx-service.example.local"  # DNS name other services use
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify Service Connect, resolve the alias from another task in the same namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nslookup nginx-service.example.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Amazon ECS Service Discovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gmn9ftxgdccjwefuyjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gmn9ftxgdccjwefuyjc.png" alt="Image description" width="641" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Service Discovery in Amazon ECS allows services within a cluster to communicate with each other by name, without needing to know the IP addresses or ports of the services. This is especially useful in dynamic environments where containers can come and go, and their IP addresses may change.&lt;br&gt;
Amazon ECS integrates with AWS Cloud Map and Route 53 for service discovery, allowing services to register and look up DNS names dynamically. &lt;/p&gt;

&lt;p&gt;Key Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Cloud Map Integration: ECS Service Discovery integrates with AWS Cloud Map, registering ECS services with a DNS-based service registry.&lt;/li&gt;
&lt;li&gt;DNS-based Discovery: Services discover other services by querying DNS records. ECS tasks are registered with A or SRV DNS records.&lt;/li&gt;
&lt;li&gt;Internal Communication: Primarily used for communication within the VPC (via private DNS namespaces), though it can also support public namespaces.&lt;/li&gt;
&lt;li&gt;Health Checks: Can use Route 53 health checks to ensure traffic is routed only to healthy instances.&lt;/li&gt;
&lt;li&gt;Manual Load Balancing: Requires configuring your own load balancers, or you can use AWS Application/Network Load Balancers with ECS tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple service discovery when you have multiple services that need to find each other based on DNS names.&lt;/li&gt;
&lt;li&gt;If you are already using AWS Cloud Map for service discovery across AWS services.&lt;/li&gt;
&lt;li&gt;For scenarios where manual control over load balancing and routing policies is needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pricing Model&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service Discovery Charges: You are charged based on the number of API calls made and the number of service instances registered in Cloud Map.&lt;/li&gt;
&lt;li&gt;Service Instances: You incur costs for each service instance registered in Cloud Map (a small monthly fee per instance).&lt;/li&gt;
&lt;li&gt;API Calls: Charges are applied for API calls made to AWS Cloud Map for operations like registering, deregistering, and discovering services.&lt;/li&gt;
&lt;li&gt;Health Checks: If you configure health checks for services, there may be additional costs based on the frequency of checks and the number of checks performed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key Points&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Variable costs based on the number of services, instances, and API calls made to Cloud Map.&lt;/li&gt;
&lt;li&gt;More granular control and customization options compared to Service Connect, but at an additional cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following example shows how to set up Service Discovery with ECS and Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_service_discovery_private_dns_namespace" "cloudmap_namespace" {
  name        = "myapp.local"
  description = "Private namespace for my ECS services"
  vpc         = aws_vpc.main.id
}

resource "aws_service_discovery_service" "myapp_service" {
  name = "nginx"

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.cloudmap_namespace.id
    dns_records {
      type = "A"
      ttl  = 60
    }
    routing_policy = "MULTIVALUE"
  }

  health_check_custom_config {
    failure_threshold = 1
  }
}

resource "aws_ecs_service" "ecs_service" {
  name            = "nginx-service"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.nginx_task.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = aws_subnet.subnet[*].id
    security_groups  = [aws_security_group.ecs_service_sg.id]
    assign_public_ip = true
  }

  service_registries {
    registry_arn = aws_service_discovery_service.myapp_service.arn
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify Service Discovery&lt;br&gt;
Once your ECS service is up and running, you can verify the service discovery by resolving the DNS name of the service within the same VPC. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nslookup nginx.myapp.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
Service Discovery enables services to locate each other without hardcoding their locations. This is crucial in dynamic environments, like containers or cloud deployments, where services may change their IP addresses or locations frequently.&lt;br&gt;
Service Connect is a layer that abstracts the complexities of service communication, often integrating Service Discovery, security, and traffic management into a unified solution.&lt;br&gt;
In short, Service Discovery focuses on finding services, while Service Connect is a more comprehensive solution that also handles security, routing, and observability for service-to-service communication.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>ECS Orchestration Part 1: Network</title>
      <dc:creator>Daniele Baggio</dc:creator>
      <pubDate>Thu, 29 Feb 2024 13:22:16 +0000</pubDate>
      <link>https://dev.to/dbanieles/ecs-orchestration-part-1-choosing-a-network-mode-47ba</link>
      <guid>https://dev.to/dbanieles/ecs-orchestration-part-1-choosing-a-network-mode-47ba</guid>
      <description>&lt;p&gt;This is the first post in the ECS Orchestration series. In this part we begin by discussing the ECS network, which is a crucial topic when it comes to containerised applications. &lt;br&gt;
An orchestrator such as ECS is typically used to manage microservices or other systems consisting of several applications using Docker containers. One of the main advantages of using Docker is the possibility of hosting multiple containers on a single server. &lt;br&gt;
When networking containers on the same server, it is important to choose the appropriate network type to effectively manage the containers according to specific requirements.&lt;br&gt;
This article examines the main network modes available with ECS, along with their advantages and disadvantages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Host mode
&lt;/h3&gt;

&lt;p&gt;Using host mode, the networking of the container is tied directly to the underlying host that's running the container. This approach may seem simple, but it is important to consider the following:&lt;br&gt;
When the host network mode is used, the container receives traffic on the specified port using the IP address of the underlying host Amazon EC2 instance.&lt;br&gt;
There are significant drawbacks to using this network mode. You can’t run more than a single instantiation of a task on each host. This is because only the first task can bind to its required port on the Amazon EC2 instance. There's also no way to remap a container port when it's using host network mode.&lt;/p&gt;
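&lt;p&gt;The "one task per port" constraint is ordinary socket semantics, not something specific to ECS. A minimal Python sketch shows why a second process cannot bind a host port that is already taken:&lt;/p&gt;

```python
import socket

# Demonstrates the host-mode constraint: once one socket owns a port,
# a second bind to the same address/port fails with "address in use".
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))        # let the OS pick a free port
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port)) # same port as the first socket
    conflict = False
except OSError:
    conflict = True
finally:
    b.close()
a.close()

print(conflict)  # True
```

&lt;p&gt;In host mode the container binds the instance's port directly, so a second copy of the same task hits exactly this error.&lt;/p&gt;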

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgxqhc1vcy37tfr3ua7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgxqhc1vcy37tfr3ua7o.png" alt="Host port mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example of task definition with host network:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  {
    "essential": true,
    "networkMode": "host",
    "name": "myapp",
    "image": "myapp:latest",
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8080,
        "protocol": "tcp"
      }
    ],
    "environment": [],
     ....
  }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Bridge mode
&lt;/h3&gt;

&lt;p&gt;With bridge mode, you're using a virtual network bridge to create a layer between the host and the networking of the container. This way, you can create port mappings that remap a host port to a container port. The mappings can be either static or dynamic. &lt;/p&gt;
&lt;h4&gt;
  
  
  1. Static port mapping
&lt;/h4&gt;

&lt;p&gt;With a static port mapping, you can explicitly define which host port you want to map to a container port.&lt;br&gt;
If you wish to manage only the traffic port on the host, static mapping might be a proper solution. However, this still has the same disadvantage as using the host network mode. You can't run more than a single instantiation of a task on each host. &lt;br&gt;
This is a problem when an application needs to auto scale, because a static port mapping allows only a single container to be mapped to a specific host port. To solve this problem, consider using the bridge network mode with a dynamic port mapping.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgxqhc1vcy37tfr3ua7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgxqhc1vcy37tfr3ua7o.png" alt="Static port mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example of task definition with bridge network and static port mapping:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  {
    "essential": true,
    "networkMode": "bridge",
    "name": "myapp",
    "image": "myapp:latest",
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8080,
        "protocol": "tcp"
      }
    ],
    "environment": [],
     ....
  }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  2. Dynamic port mapping
&lt;/h4&gt;

&lt;p&gt;You can specify a dynamic port binding by omitting the host port in the port mapping of a task definition. Docker then picks an unused port from the ephemeral port range and assigns it as the public host port for the container. This means you can run multiple copies of a container on the same host, each with its own host port. Every copy receives traffic on the same container port, while clients reach each copy through its randomly assigned host port.&lt;/p&gt;
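&lt;p&gt;The underlying mechanism is the same one any OS provides: binding to port 0 asks the kernel for an unused ephemeral port, which is what Docker does on your behalf for a dynamic port mapping. A small Python sketch:&lt;/p&gt;

```python
import socket

# Binding to port 0 asks the OS for any unused ephemeral port --
# the same idea as "hostPort": 0 in a bridge-mode task definition.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.bind(("127.0.0.1", 0))

# Two "copies" coexist because each one got a different host port.
ports = (s1.getsockname()[1], s2.getsockname()[1])
print(ports)

s1.close()
s2.close()
```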

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxpkonkwymuq2zunkzo2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxpkonkwymuq2zunkzo2.png" alt="Dynamic port mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example of task definition with bridge network and dynamic port mapping:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  {
    "essential": true,
    "networkMode": "bridge",
    "name": "myapp",
    "image": "myapp:latest",
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 0, &amp;lt;-- Dynamic port allocation by Docker
        "protocol": "tcp"
      }
    ],
    "environment": [],
     ....
  }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So far so good, but one disadvantage of using the bridge network with dynamic port mapping is the difficulty in establishing communication between services. Since services can be assigned to any port, it is necessary to open wide port ranges between hosts. It is not easy to create specific rules so that a particular service can only communicate with another specific service. Services do not have specific ports that can be used for security group network rules.&lt;/p&gt;

&lt;h3&gt;
  
  
  Awsvpc mode
&lt;/h3&gt;

&lt;p&gt;With the awsvpc network mode, Amazon ECS creates and manages an Elastic Network Interface (ENI) for each task, and each task receives its own private IP address within the VPC. This ENI is separate from the underlying host's ENI. If an Amazon EC2 instance is running multiple tasks, then each task’s ENI is separate as well.&lt;br&gt;
The advantage of using awsvpc network mode is that each task can have a separate security group to allow or deny traffic. This means you have greater flexibility to control communications between tasks and services at a more granular level.&lt;br&gt;
This means that if there are services that need to communicate with each other using HTTP or RPC protocols, we can manage the connection more easily and flexibly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2ty10dgwso0acd5ig6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2ty10dgwso0acd5ig6u.png" alt="Awsvpc port mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example of task definition with awsvpc network:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  {
    "essential": true,
    "networkMode": "awsvpc",
    "name": "myapp",
    "image": "myapp:latest",
    "portMappings": [
      {
        // The container gets its own ENI, so it acts like a host:
        // the port you expose is the port you serve on.
        "containerPort": 8080,
        "protocol": "tcp"
      }
    ],
    "environment": [],
     ....
  }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;But when using the awsvpc network mode there are a few challenges you should be mindful of. Each EC2 instance type can allocate only a limited number of ENIs, so you cannot run more awsvpc tasks on an instance than its ENI limit allows. This has an impact when an application needs to auto scale: auto scaling may launch a new EC2 host instance just to place additional tasks, which potentially increases costs and wastes computational power.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can one avoid this behavior?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;When you choose a network mode like awsvpc and need to increase the number of allocatable ENIs on the EC2 instances managed by the cluster, you can enable awsvpcTrunking.&lt;br&gt;
Amazon ECS supports launching container instances with increased ENI density using supported Amazon EC2 instance types. When you use these instance types, additional ENIs are available on newly launched container instances. This configuration allows you to place more tasks using the awsvpc network mode on each container instance.&lt;br&gt;
You can enable awsvpcTrunking as an account setting with the AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws ecs put-account-setting-default \
      --name awsvpcTrunking \
      --value enabled \
      --profile &amp;lt;YOUR_PROFILE_NAME&amp;gt; \
      --region &amp;lt;YOUR_REGION&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To view your container instances with increased ENI limits, use the AWS CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

aws ecs list-attributes \
      --target-type container-instance \
      --attribute-name ecs.awsvpc-trunk-id \
      --cluster &amp;lt;YOUR_CLUSTER_NAME&amp;gt; \
      --region &amp;lt;YOUR_REGION&amp;gt; \
      --profile &amp;lt;YOUR_PROFILE_NAME&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It is important to know that not all EC2 instance types support awsvpcTrunking and certain prerequisites must be met to utilize this feature. &lt;br&gt;
Please refer to the &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; for further information.&lt;br&gt;
Another thing to keep in mind when using ENI trunking is that each Amazon EC2 instance requires two IP addresses: one for the primary ENI and another for the trunk ENI. In addition, ECS activities on the instance also require IP addresses. &lt;br&gt;
If you need very large scaling, there is a risk of running out of available IP addresses, which can cause Amazon EC2 launch errors or task startup errors. These errors occur because ENIs cannot be assigned IP addresses when none are available in the VPC.&lt;br&gt;
To avoid this problem, make sure that the CIDR ranges of your subnets meet the requirements.&lt;/p&gt;
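&lt;p&gt;As a back-of-the-envelope check of subnet sizing (the instance count below is purely illustrative): AWS reserves five addresses in every subnet, and each trunked instance consumes two IPs before any task ENIs are attached.&lt;/p&gt;

```python
import ipaddress

# Rough IP budget for a subnet hosting trunked container instances.
# AWS reserves 5 addresses per subnet (network, router, DNS, future
# use, broadcast); each trunked instance needs 2 IPs (primary ENI +
# trunk ENI). The instance count is illustrative, not a sizing rule.
subnet = ipaddress.ip_network("10.0.1.0/24")
usable = subnet.num_addresses - 5        # 256 - 5 = 251
instances = 10
instance_overhead = instances * 2        # primary + trunk ENI per instance
ips_left_for_tasks = usable - instance_overhead
print(usable, ips_left_for_tasks)        # 251 231
```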

&lt;p&gt;If you use the Fargate launch type, awsvpc is the only supported network mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusions&lt;/strong&gt;&lt;br&gt;
We have seen how the choice of network type for container orchestration on ECS affects the scalability and connectivity of the various services within the cluster. Depending on the type of network chosen, there are different behaviours that can bring advantages or disadvantages depending on the use case.&lt;br&gt;
For a microservices application managed by ECS, awsvpc is probably the best network to choose because it allows you to easily scale your application and easily implement service-to-service communications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>network</category>
      <category>ecs</category>
      <category>containers</category>
    </item>
    <item>
      <title>Secure your application with Aws Secrets Manager 🔒</title>
      <dc:creator>Daniele Baggio</dc:creator>
      <pubDate>Fri, 19 Jan 2024 16:28:25 +0000</pubDate>
      <link>https://dev.to/dbanieles/secure-your-application-with-aws-secret-manager-24l0</link>
      <guid>https://dev.to/dbanieles/secure-your-application-with-aws-secret-manager-24l0</guid>
      <description>&lt;p&gt;One of the main tasks of a software developer is to make code as secure as possible, and to avoid using sensitive data such as database connection strings, passwords, or any other type of data directly in code.&lt;br&gt;
AWS Secrets Manager makes it very easy to improve the security of your application, in this post I will show you an example of how you can use this service in a dotnet application to avoid hard-coded data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Aws Secrets Manager ?&lt;/strong&gt;&lt;br&gt;
Secrets Manager helps us improve the security of applications by eliminating the need for hard-coded credentials in the application source code. Hard-coded credentials are replaced with a runtime call to Secret Manager (or other mechanisms) to dynamically retrieve credentials when you need them.&lt;br&gt;
With this service you can manage secrets such as database credentials, on-premises resource credentials, SaaS application credentials, third-party API keys, and Secure Shell (SSH) keys.&lt;br&gt;
Aws Secrets Manager also allows for automatic secret rotation, which means that it is possible to schedule the update of the credentials contained in the secrets without having to touch the code (topic not covered in this post, but maybe it will be a topic for a future post).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdaeqepsur3sdhszb1acg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdaeqepsur3sdhszb1acg.png" alt="Image description" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Secret on Aws Secrets Manager&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first step is to create a secret in your Aws account.&lt;br&gt;
To do this, go to the Aws Secrets Manager section in your console and click Save New Secret.&lt;br&gt;
There are different types of stores you can choose from, but what is right for us is Other type of secret, because with this type of secret you can store many key/value properties.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1kd6y7f2m970a754mx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1kd6y7f2m970a754mx3.png" alt="Image description" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can enter, for example, the connection string or any other value you want to store in this secret.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe4308hzvmk0b0gg5c0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxe4308hzvmk0b0gg5c0i.png" alt="Image description" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure to apply an AWS KMS (Key Management Service) key. You can apply a predefined key or specify your own key, if one exists.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80039qklkflm1a68p510.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80039qklkflm1a68p510.png" alt="Image description" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the next step and enter the secret name and description, you can also add a tag if you wish, before proceeding to the last step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxk65ye0467msdiqpqhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxk65ye0467msdiqpqhi.png" alt="Image description" width="800" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the last step you can set up a secret rotation, but that's not a topic of this post. Then leave this option unchecked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlq4yyfj0l5ayvxiccj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlq4yyfj0l5ayvxiccj7.png" alt="Image description" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check that you have filled in all the fields correctly and then click Store.&lt;br&gt;
Ok, now you have a new secret configured on Aws. &lt;/p&gt;

&lt;p&gt;If you prefer to use Terraform to create these resources in Aws, you can follow this example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_secretsmanager_secret" "my_secret" {
  name                    = "secrets-${lower(var.name)}-${lower(var.environment)}"
  recovery_window_in_days = 7

  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Project     = lower(var.name)
    Name        = "secrets-${lower(var.name)}-rds-${lower(var.environment)}"
    Environment = upper(var.environment)
  }
}

resource "aws_secretsmanager_secret_version" "my_secrets_version" {
  secret_id = aws_secretsmanager_secret.my_secret.id

  lifecycle {
    prevent_destroy = true
  }

  secret_string = &amp;lt;&amp;lt;-EOF
  {
    "ConnectionString": "Server=${aws_rds_cluster.rds_cluster.endpoint};Database=${lower(var.name)};Uid=${aws_rds_cluster.rds_cluster.master_username};Pwd=${aws_rds_cluster.rds_cluster.master_password};"
  }
  EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use a Secret in a dotnet application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second step is to retrieve these values from Aws Secrets Manager. A dotnet application is required, so you can start by creating a new ASP.NET Core web application.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;dotnet new webapi --name AwsSecretManager&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Add the necessary dependencies:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Install-Package AWSSDK.Extensions.NETCore.Setup&lt;br&gt;
Install-Package AWSSDK.SecretsManager&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The first thing you need to do when you open the project is edit the appsettings.json file to add the aws section with the Secrets Manager configuration properties.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "Aws": {
    "Region": "eu-west-1", // region where the secrets are present
    "SecretManagerName": "MySecret" // The name of the secret
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you need to add the following classes to your project, to create a custom ConfigurationProvider to handle the values ​​retrieved from Secrets Manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; public class SecretManagerProvider : ConfigurationProvider
 {
     private readonly string _region;
     private readonly string _secretName;

     public SecretManagerProvider(string region, string secretName)
     {
         _region = region;
         _secretName = secretName;
     }

     public override void Load()
     {
         // Block here: Load() is synchronous, and the configuration data
         // must be populated before the provider is first read.
         var secret = GetSecret().GetAwaiter().GetResult();
         Data = JsonSerializer.Deserialize&amp;lt;Dictionary&amp;lt;string, string&amp;gt;&amp;gt;(secret);
     }

     public async Task&amp;lt;string&amp;gt; GetSecret()
     {
         IAmazonSecretsManager client = new AmazonSecretsManagerClient(RegionEndpoint.GetBySystemName(_region));

         var request = new GetSecretValueRequest()
         {
             SecretId = _secretName,
             VersionStage = "AWSCURRENT",
         };

         var response = await client.GetSecretValueAsync(request);

         return response.SecretString;
     }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class SecretManagerSource : IConfigurationSource
{
    private readonly string _region;
    private readonly string _secretName;

    public SecretManagerSource(string region, string secretName)
    {
        _region = region;
        _secretName = secretName;
    }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new SecretManagerProvider(_region, _secretName);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    public static class Secret
    {
        public static void AddAmazonSecretsManager(
          this IConfigurationBuilder configurationBuilder,
          string region,
          string secretName)
        {
            var configurationSource = new SecretManagerSource(region, secretName);
            configurationBuilder.Add(configurationSource);
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class SecretCredentials
{
    public string ConnectionString { get; set; }
    public string EncryptionKey { get; set; }
    public string BucketBasePath { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The SecretManagerProvider class is a custom configuration provider that retrieves values from the Secrets Manager service. The SecretManagerSource class implements IConfigurationSource, which essentially exposes a key/value store containing configuration values.&lt;br&gt;
The GetSecretValueRequest object uses a property called VersionStage, which specifies a type of version of the secret you want to use. There are 3 types of versions: AWSCURRENT, AWSPREVIOUS, AWSPENDING (during rotation). A secret always has a version labelled AWSCURRENT, and Secrets Manager returns this version by default when you retrieve the secret value.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;How can you retrieve these values from within code?&lt;/strong&gt;&lt;br&gt;
We need to use the IOptions interface, which is used to access and manage configuration options for an application. The generic T parameter specifies the type of options to be managed.&lt;br&gt;
To use IOptions, you must first register the options with the dependency injection container, then you can add the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;builder.Configuration.AddAmazonSecretsManager(builder.Configuration["Aws:Region"], builder.Configuration["Aws:SecretManagerName"]);
builder.Services.Configure&amp;lt;SecretCredentials&amp;gt;(builder.Configuration);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to access the secret credentials in a particular class, inject the IOptions value into the class constructor. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public readonly IOptions&amp;lt;SecretCredentials&amp;gt; _secret;

public MySpecificClass(IOptions&amp;lt;SecretCredentials&amp;gt; secrets)
{
   _secrets = secrets;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The complete source code example is available on &lt;a href="https://github.com/dbanieles/dotnet-aws-secrets-manager"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical uses&lt;/strong&gt;&lt;br&gt;
In your on-premises environment, it is possible to use Aws Secrets Manager with the AWS profile configured on the PC, but the user using the service must have the correct policy to manage that service. If you are running this application in the AWS cloud, for example on the EC2 or ECS service, make sure that these services have the correct policy associated with the role used to retrieve values from Aws Secrets Manager.&lt;/p&gt;

&lt;p&gt;An example of Aws policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version":"2012-10-17",
    "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "secretsmanager:GetSecretValue",
              "secretsmanager:GetRandomPassword",
              "secretsmanager:DescribeSecret",
              "secretsmanager:PutSecretValue",
              "secretsmanager:UpdateSecretVersionStage"
          ],
          "Resource": "*"
      }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html"&gt;Aws Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret"&gt;Terraform Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/custom-configuration-provider"&gt;Dotnet configuration provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>dotnet</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
