<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alok-Saraswat</title>
    <description>The latest articles on DEV Community by Alok-Saraswat (@alok-saraswat).</description>
    <link>https://dev.to/alok-saraswat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1112643%2F0570de3f-40e4-4aed-8245-947bb1b123f9.png</url>
      <title>DEV Community: Alok-Saraswat</title>
      <link>https://dev.to/alok-saraswat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alok-saraswat"/>
    <language>en</language>
    <item>
      <title>Exploring DNS for Multi-Account Hybrid Cloud Setup</title>
      <dc:creator>Alok-Saraswat</dc:creator>
      <pubDate>Thu, 02 Nov 2023 16:50:57 +0000</pubDate>
      <link>https://dev.to/alok-saraswat/exploring-dns-for-multi-account-hybrid-cloud-setup-1n5c</link>
      <guid>https://dev.to/alok-saraswat/exploring-dns-for-multi-account-hybrid-cloud-setup-1n5c</guid>
      <description>&lt;p&gt;When migrating to cloud, enterprises often have internal business applications spread across on-premise and multiple AWS accounts belonging to various business units. Such scenarios require consistent DNS records and domain names between different AWS accounts and on-premises.&lt;/p&gt;

&lt;p&gt;In this article we will explore AWS service and architecture to achieve this use case.&lt;/p&gt;

&lt;p&gt;We know AWS has native DNS service called Route53. This DNS service has always been associated with VPC and often also known by several other names like:-&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon Provided DNS&lt;/li&gt;
&lt;li&gt;VPC Resolver&lt;/li&gt;
&lt;li&gt;+2 Resolver&lt;/li&gt;
&lt;li&gt;.2 Resolver&lt;/li&gt;
&lt;li&gt;EC2 DNS Resolver&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, the problem is that Route 53 does not respond to queries that do not originate from within a VPC. This means that even when connectivity between on-premises and the cloud is established via VPN or Direct Connect, the on-premises DNS servers are unable to talk to Route 53.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AXruOqon--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dsnbl734yede48pinr6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AXruOqon--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dsnbl734yede48pinr6j.png" alt="Image description" width="626" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One solution is to host a custom DNS server on EC2 within the VPC, which can then connect to the on-premises DNS and resolve queries. However, this brings setup and management overhead: EC2 provisioning, high-availability considerations, vulnerability management and so on.&lt;/p&gt;

&lt;p&gt;AWS has another managed capability, Route 53 Resolver inbound/outbound endpoints, which can be configured to handle everything required to connect on-premises DNS to the AWS cloud. The advantage is that it is a managed service without operational overhead; only endpoints and forwarding rules need to be configured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Eq1fbp9F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zml7r4nk4gbbisqkhek6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Eq1fbp9F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zml7r4nk4gbbisqkhek6.png" alt="Image description" width="626" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture is explained below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instances within a VPC will use the Route 53 Resolver (Amazon Provided DNS).&lt;/li&gt;
&lt;li&gt;Private hosted zones will be associated with shared services VPC.&lt;/li&gt;
&lt;li&gt;These Private hosted zones will also be associated with other VPCs in the environment if required.&lt;/li&gt;
&lt;li&gt;Conditional forward rule(s) from the on-premises DNS servers will have an inbound Route 53 Resolver endpoint as their destination.&lt;/li&gt;
&lt;li&gt;Rule(s) for on-premises domain names are created that leverage an outbound Route 53 Resolver endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Double-clicking on the inbound resolver endpoint:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The inbound endpoints should be deployed in 2 availability zones for high availability.&lt;/li&gt;
&lt;li&gt;The inbound resolver endpoint creates routable ENIs in the VPC, reachable over AWS Direct Connect or VPN.&lt;/li&gt;
&lt;li&gt;All internal domains will be associated with the DNS VPC in the shared services account.&lt;/li&gt;
&lt;li&gt;Nomenclature: one “endpoint” consists of multiple ENIs. Limit: 10,000 queries per second (QPS) per ENI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a9LzCh5X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nlhmh7s5c7y7e45kvend.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a9LzCh5X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nlhmh7s5c7y7e45kvend.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-prem DNS needs to be configured to forward queries to the Route 53 inbound endpoints.&lt;/strong&gt; These forwarding rules should point to the inbound endpoint IP addresses.&lt;/p&gt;

&lt;p&gt;The following inputs are required to create inbound endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2TpFBZZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erqan2l8qwj38nh580gu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2TpFBZZV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erqan2l8qwj38nh580gu.png" alt="Image description" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Double-clicking on the outbound resolver endpoint:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To resolve domains hosted on-premises, Route 53 outbound endpoints need to be deployed.&lt;/p&gt;

&lt;p&gt;These endpoints will serve as the path through which all queries will be forwarded out of the VPC (towards on-prem DNS server).&lt;/p&gt;

&lt;p&gt;Outbound endpoints are directly attached to the DNS VPC in the shared services account and indirectly associated with other VPCs via rules. That is, if a forwarding rule is shared with a VPC that does not own the outbound endpoint, all queries matching the rule pass through the DNS VPC and are then forwarded out.&lt;/p&gt;

&lt;p&gt;The outbound endpoint may reside in an entirely different Availability Zone than the VPC that originally sent the query, and there is potential for an Availability Zone outage in the DNS VPC to impact query resolution in the VPC using the forwarding rule. Therefore, it is recommended to deploy outbound endpoints in multiple Availability Zones to avoid any impact due to AZ outage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QUv23J4q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ktn4thollfxtl672xjm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QUv23J4q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ktn4thollfxtl672xjm2.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Outbound resolver endpoints are deployed using three separate resources as mentioned below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An endpoint, similar to the inbound resolver with two or more IP addresses and a security group. Restrict access to only designated target resolvers. Update network ACLs and routing tables.&lt;/li&gt;
&lt;li&gt;A rule, which specifies the domain to conditionally forward to name servers.&lt;/li&gt;
&lt;li&gt;An association, which links the rule to one or more VPCs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Forwarding Rules.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System and forward rules define the resolution path.&lt;/li&gt;
&lt;li&gt;Forward all public DNS resolution via the Route 53 Resolver.&lt;/li&gt;
&lt;li&gt;Route 53 Resolver should answer: amazonaws.com.&lt;/li&gt;
&lt;li&gt;Private Hosted Zone: abc.mycloud.com.&lt;/li&gt;
&lt;li&gt;Corp office namespace: corp.enterprise.com&lt;/li&gt;
&lt;li&gt;VPC CIDR: 10.10.0.0/23&lt;/li&gt;
&lt;li&gt;On-premises CIDR range: 10.20.0.0/23&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rHQlI-I8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ofgyzuy1ifwityh9or58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rHQlI-I8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ofgyzuy1ifwityh9or58.png" alt="Image description" width="800" height="663"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resolver Rule summary&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most specific rule wins&lt;/li&gt;
&lt;li&gt;Private DNS, PrivateLink endpoints, and VPC DNS get auto-defined rules.&lt;/li&gt;
&lt;li&gt;These can be overridden by forward rules.&lt;/li&gt;
&lt;li&gt;It is best practice to let the SYSTEM rule resolve amazonaws.com.&lt;/li&gt;
&lt;li&gt;Reverse records need to be created explicitly, e.g., for Kerberos.&lt;/li&gt;
&lt;li&gt;VPC CIDR ranges get /24 rules (e.g., x.y.10.in-addr.arpa) auto-defined.&lt;/li&gt;
&lt;/ul&gt;
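
&lt;p&gt;For the reverse-lookup case above, a forwarding rule for the on-premises in-addr.arpa namespace can be sketched as below. This is a hedged example reusing this article's variable names; the zone &lt;code&gt;20.10.in-addr.arpa&lt;/code&gt;, covering the 10.20.0.0/23 on-premises range, is an assumption for illustration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Forward reverse-DNS queries for the on-prem 10.20.0.0/23 range to on-prem DNS
resource "aws_route53_resolver_rule" "fwd-rule-reverse-onprem" {
  domain_name          = "20.10.in-addr.arpa"
  name                 = "fwd-rule-reverse-onprem"
  rule_type            = "FORWARD"
  resolver_endpoint_id = aws_route53_resolver_endpoint.Outbound-EP.id

  target_ip {
    ip = var.onpremdns-firstip
  }

  target_ip {
    ip = var.onpremdns-secondip
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;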

&lt;p&gt;Let's look at sample Terraform configuration files to configure these inbound and outbound resolvers with forwarding rules.&lt;/p&gt;

&lt;p&gt;The Terraform scripts below will create all the required resources in the central DNS account (often also referred to as the shared services account).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H4vjYeNG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebkbzn7z03nqyeweqse1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H4vjYeNG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebkbzn7z03nqyeweqse1.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform script to setup Central-DNS-VPC&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "CentralDNS" {
  cidr_block       = "172.219.244.128/27"
  instance_tenancy = "default"
  enable_dns_support = true
  enable_dns_hostnames = true

  tags = {
    Name = "CentralDNS"
  }
}


output "aws_vpc" {
  value = aws_vpc.CentralDNS.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for Subnet-1&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "DNSPvtSub1" {
  vpc_id     = aws_vpc.CentralDNS.id
  cidr_block = "172.219.244.128/28"


  tags = {
    Name = "DNSPvtSub1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for Subnet-2&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_subnet" "DNSPvtSub2" {
  vpc_id     = aws_vpc.CentralDNS.id
  cidr_block = "172.219.244.144/28"


  tags = {
    Name = "DNSPvtSub2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for route table&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "RT-PVTSubnets" {
  vpc_id = aws_vpc.CentralDNS.id


  tags = {
    Name = "RT-PVTSubnets"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for Route-Table-Association&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table_association" "RT-Association-PVTSubnet1" {
  subnet_id      = aws_subnet.DNSPvtSub1.id
  route_table_id = aws_route_table.RT-PVTSubnets.id
}


resource "aws_route_table_association" "RT-Association-PVTSubnet2" {
  subnet_id      = aws_subnet.DNSPvtSub2.id
  route_table_id = aws_route_table.RT-PVTSubnets.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for Security-Group&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "SG-DNS" {
  name        = "SG-DNS"
  description = "Allow DNS traffic"
  vpc_id      = aws_vpc.CentralDNS.id



  # DNS uses port 53 over both UDP and TCP
  ingress {
    from_port   = 53
    to_port     = 53
    protocol    = "udp"
    cidr_blocks = [var.CIDR]
  }

  ingress {
    from_port   = 53
    to_port     = 53
    protocol    = "tcp"
    cidr_blocks = [var.CIDR]
  }


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [var.allowall]
  }


  tags = {
    Name = "DNS_SG"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for DNS Inbound Endpoint&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_resolver_endpoint" "Inbound-EP" {
  name      = "Inbound-EP"
  direction = "INBOUND"


  security_group_ids = [
    aws_security_group.SG-DNS.id        
  ]


  ip_address {
    subnet_id = aws_subnet.DNSPvtSub1.id
    ip        = var.InboundEP1["ap-northeast-1"][0]
  }


  ip_address {
    subnet_id = aws_subnet.DNSPvtSub2.id
    ip        = var.InboundEP2
  }


  tags = {
    Name = "Inbound-EP"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform Script for DNS outbound endpoint&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_resolver_endpoint" "Outbound-EP" {
  name      = "Outbound-EP"
  direction = "OUTBOUND"


  security_group_ids = [
    aws_security_group.SG-DNS.id        
  ]


  ip_address {
    subnet_id = aws_subnet.DNSPvtSub1.id
    ip = var.OutboundEP1
  }


  ip_address {
    subnet_id = aws_subnet.DNSPvtSub2.id
    ip        = var.OutboundEP2
  }


  tags = {
    Name = "Outbound-EP"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for forwarding Rule-For-On-prem&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_resolver_rule" "fwd-rule-to-on-Prem" {
  domain_name          = "exampleintra.net"
  name                 = "fwd-rule-to-on-Prem"
  rule_type            = "FORWARD"
  resolver_endpoint_id = aws_route53_resolver_endpoint.Outbound-EP.id


  target_ip {
    ip = var.onpremdns-firstip
  }


  target_ip {
    ip = var.onpremdns-secondip
  }


  tags = {
    Name = "fwd-rule-to-on-Prem"
  }
}




#FW-Rule-For-Cloud 
resource "aws_route53_resolver_rule" "fwd-rule-to-cloud" {
  domain_name          = "ec1-aws.cloud.exampleintra.net"
  name                 = "fwd-rule-to-cloud"
  rule_type            = "FORWARD"
  resolver_endpoint_id = aws_route53_resolver_endpoint.Outbound-EP.id


  target_ip {
    ip = var.InboundEP1["eu-central-1"][0]
  }


  target_ip {
    ip = var.InboundEP2   
  }


  tags = {
    Name = "fwd-rule-to-cloud"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for Rule-Association&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_resolver_rule_association" "rule-association-onprem" {
  resolver_rule_id = aws_route53_resolver_rule.fwd-rule-to-on-Prem.id
  vpc_id           = aws_vpc.CentralDNS.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform script for Rule-Sharing&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ram_resource_share" "FWD-rule-share" {  
  name = "FWD-rule-share"
  allow_external_principals = false 
}


resource "aws_ram_principal_association" "FWD-rule-principal-association1" {
  principal          = var.WLacc1  
  resource_share_arn = aws_ram_resource_share.FWD-rule-share.arn
}


resource "aws_ram_principal_association" "FWD-rule-principal-association2" {
  principal          = var.WLacc2  
  resource_share_arn = aws_ram_resource_share.FWD-rule-share.arn
}


resource "aws_ram_resource_association" "FWD-resource-association-onprem" {
  resource_arn       = aws_route53_resolver_rule.fwd-rule-to-on-Prem.arn
  resource_share_arn = aws_ram_resource_share.FWD-rule-share.arn
}


resource "aws_ram_resource_association" "FWD-resource-association-cloud" {  
  resource_arn       = aws_route53_resolver_rule.fwd-rule-to-cloud.arn
  resource_share_arn = aws_ram_resource_share.FWD-rule-share.arn
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
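
&lt;p&gt;The RAM share above makes the rules visible to the workload accounts, but each rule still has to be associated with the spoke VPCs. A minimal sketch of the spoke-account side follows; the variable names here are hypothetical, and within an AWS Organization with resource sharing enabled the share is accepted automatically, so the accepter resource is not needed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run in each workload (spoke) account
resource "aws_ram_resource_share_accepter" "accept-fwd-rules" {
  share_arn = var.fwd_rule_share_arn  # hypothetical: ARN of the shared rule set
}


resource "aws_route53_resolver_rule_association" "spoke-onprem" {
  resolver_rule_id = var.shared_onprem_rule_id  # hypothetical: ID of the shared rule
  vpc_id           = var.spoke_vpc_id           # hypothetical: the spoke VPC
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;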



&lt;p&gt;&lt;strong&gt;Hosted-Zone&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route53_zone" "Frankfurt-PHZ" {
  name = var.phz


  vpc {
    vpc_id = aws_vpc.CentralDNS.id
  }
}




#Authorization


data "aws_region" "current" {}




resource "null_resource" "create_remote_zone_auth" {
  count = length(var.accounts_to_auth)
  triggers = {
    zone_id = aws_route53_zone.Frankfurt-PHZ.zone_id
  }


  provisioner "local-exec" {
    command = "aws route53 create-vpc-association-authorization --hosted-zone-id ${aws_route53_zone.Frankfurt-PHZ.zone_id} --vpc VPCRegion=${data.aws_region.current.name},VPCId=${element(var.accounts_to_auth, count.index)}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
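
&lt;p&gt;As an aside, the local-exec workaround above can also be replaced with native Terraform resources. A hedged sketch is shown below; the spoke-side resource must run under a provider configured for the other account, and the variable names are hypothetical.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In the central DNS account: authorize each remote VPC to associate with the zone
resource "aws_route53_vpc_association_authorization" "auth" {
  count   = length(var.accounts_to_auth)
  zone_id = aws_route53_zone.Frankfurt-PHZ.zone_id
  vpc_id  = var.accounts_to_auth[count.index]
}


# In the workload account: complete the association
resource "aws_route53_zone_association" "assoc" {
  zone_id = var.phz_zone_id   # hypothetical: zone ID exported from the DNS account
  vpc_id  = var.spoke_vpc_id  # hypothetical: the workload VPC
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;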



&lt;p&gt;Below is the Terraform variables file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Variable-File&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "CIDR" {}
variable "allowall" {}


variable "InboundEP1" {
  type = map
  default = {
    eu-central-1  = ["172.219.40.133"]
    eu-west-1 = ["172.219.44.133"]
    us-west-2 = ["172.219.140.133"]
    us-east-1 = ["172.219.144.133"]
    sa-east-1 = ["172.219.156.133"]
    ap-southeast-1  = ["172.219.244.133"]
    ap-northeast-1  = ["172.219.248.133"]
  }
}



variable "InboundEP2" {}
variable "OutboundEP1" {}
variable "OutboundEP2" {}
variable "onpremdns-firstip" {}
variable "onpremdns-secondip" {}
variable "WLacc1" {}
variable "WLacc2" {}
variable "phz" {}
variable "accounts_to_auth" {
  default = [
    "vpc-0b13f9b7a557f0568"
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform.tfvar file&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CIDR = "172.0.0.0/8"
allowall  = "0.0.0.0/0"


InboundEP2 = "172.219.244.150"
OutboundEP1 = "172.219.244.134"
OutboundEP2 = "172.219.244.151"
onpremdns-firstip = "172.0.172.6"
onpremdns-secondip = "172.0.172.4"
WLacc1 = "111111111111"
WLacc2 = "222222222222"
phz = "ec1-tafaws.cloud.exampleintra.net"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt; - In this article we explored the scenario and implementation of hybrid DNS on AWS, where the requirement is to have consistent DNS records and domain names across multiple AWS accounts and on-premises.&lt;/p&gt;

&lt;p&gt;Reference: Amazon Web Services documentation&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Intelligent Guard Who Detects Threat in Cloud- AWS GuardDuty</title>
      <dc:creator>Alok-Saraswat</dc:creator>
      <pubDate>Thu, 02 Nov 2023 16:21:57 +0000</pubDate>
      <link>https://dev.to/alok-saraswat/the-intelligent-guard-who-detects-threat-in-cloud-aws-guardduty-53d9</link>
      <guid>https://dev.to/alok-saraswat/the-intelligent-guard-who-detects-threat-in-cloud-aws-guardduty-53d9</guid>
      <description>&lt;p&gt;With surge in globally connected systems and cloud computing, lot of sensitive data is stored and processed which makes it more important than ever for organizations to focus on protecting it from increasingly sophisticated cyber-attacks.&lt;/p&gt;

&lt;p&gt;To detect threats and protect infrastructure as well as workloads, one has to deploy additional software and infrastructure with appliances, sensors and agents, set them up across all accounts, and then continuously monitor and protect those accounts. This means collecting and analyzing a tremendous amount of data, accurately detecting threats based on that analysis, and prioritizing and responding to alerts. And when all of this is required at scale, we need to ensure that business functions and environments are not disrupted and that flexibility in the cloud is not impeded.&lt;/p&gt;

&lt;p&gt;This requires a lot of expertise, time and upfront cost. Third-party managed tools are available, such as Check Point CloudGuard Dome9 and Palo Alto Prisma Cloud, but they can be costly for small to medium scale environments and require specific skill sets to deploy and manage.&lt;/p&gt;

&lt;p&gt;AWS GuardDuty is a cloud-scale, easy-to-use, smart and cost-effective managed intelligent threat detection and notification service for protecting AWS environments and workloads.&lt;/p&gt;

&lt;p&gt;It is a managed service that constantly monitors the AWS environment to find unusual or malicious behaviour, filter out noise and prioritize critical findings. In other words, it helps find the needle in the haystack so that the security team can focus on hardening the AWS environment and quickly respond to suspicious or malicious activity.&lt;/p&gt;

&lt;p&gt;In the context of the NIST framework for cloud security, it fits under “Detect”, as it is AWS’s primary threat detection tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rINoUwig--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ompc498ssewvntouxy7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rINoUwig--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ompc498ssewvntouxy7r.png" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most threat detection services focus on network traffic to identify malicious activity. GuardDuty, however, also analyses unusual API calls and potential unauthorized deployments, which may indicate compromised accounts and instances within AWS.&lt;/p&gt;

&lt;p&gt;AWS GuardDuty analyses three primary data sources to detect threats: VPC Flow Logs, DNS logs and AWS CloudTrail. It then applies machine learning, anomaly detection and integrated threat intelligence across these sources to identify, prioritize and notify about potential threats.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--02S6hTlT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4dc06i8aikm9pbxe1ds2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--02S6hTlT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4dc06i8aikm9pbxe1ds2.png" alt="Image description" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enabling VPC Flow Logs for a large environment can be very expensive. The good news is that GuardDuty does not require any of these services to be enabled; the data and logs it needs are gathered through an independent backend channel, directly from these services. So as soon as GuardDuty is enabled, a parallel stream of data feeds into the GuardDuty backend.&lt;/p&gt;

&lt;p&gt;The following are a few characteristics of GuardDuty:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity - There is no architectural or performance impact of enabling GuardDuty on existing environment.&lt;/li&gt;
&lt;li&gt;Continuous monitoring of AWS account and resources - Since there is no agent to be installed, as soon as any resource is created in a region protected by GuardDuty it is automatically covered.&lt;/li&gt;
&lt;li&gt;GuardDuty detects known threats, such as API calls coming from known malicious IP addresses, based on threat intelligence from various up-to-date sources like AWS security intelligence, CrowdStrike and Proofpoint. It also detects unknown threats, such as unusual data access or cryptocurrency mining, based on machine learning and the behaviour of users and instances.&lt;/li&gt;
&lt;/ul&gt;
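
&lt;p&gt;Enabling GuardDuty itself is a single resource. As a minimal Terraform sketch (the publishing frequency is one of the allowed values, chosen here for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Enable GuardDuty in the current account and region
resource "aws_guardduty_detector" "main" {
  enable = true

  # How often updated findings are exported: FIFTEEN_MINUTES, ONE_HOUR or SIX_HOURS
  finding_publishing_frequency = "SIX_HOURS"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;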

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cINwfkqa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6xnsz4mf81cixstxusk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cINwfkqa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6xnsz4mf81cixstxusk.png" alt="Image description" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GuardDuty findings are classified as either stateless, which are independent of server or service state (such as a match against a known malicious IP address), or stateful, which are behavioural detections that require the state of an EC2 instance, IAM user or role to be tracked in order to analyse deviations from usual behaviour.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GA0aqErC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6evtzngb6xxurx020bp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GA0aqErC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6evtzngb6xxurx020bp.png" alt="Image description" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These findings are segregated into high, medium and low severity levels based on the threat severity value associated with them. This value, defined by AWS, reflects the potential risk of each finding. Severity values fall between 0.1 and 8.9; the higher the value, the greater the risk. AWS has reserved the values 0 and 9.0 to 10.0 for future use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bWf0X7EG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c53das2h50mfncm2gacz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bWf0X7EG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c53das2h50mfncm2gacz.png" alt="Image description" width="703" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GuardDuty supports a master and member account structure. Many member accounts can be associated with a master account, enabling enterprise-wide consolidation and management; in a large environment, hundreds of member accounts can be associated with a single master account. While individual teams or account owners can look at the findings within their own accounts, the centralized security team only needs to look at the master account to get a holistic view. The team can also create policies applicable across accounts, such as IP whitelisting and suppression filtering of certain findings, and prevent individual accounts from applying independent policies. This way, control stays in the hands of the centralized security team.&lt;/p&gt;
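
&lt;p&gt;The master/member structure can be sketched in Terraform as below. This is a hedged example: the account ID and email are placeholders, and it assumes a detector named &lt;code&gt;aws_guardduty_detector.main&lt;/code&gt; already exists in the master account.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In the master (administrator) account: invite a member account
resource "aws_guardduty_member" "member" {
  detector_id = aws_guardduty_detector.main.id
  account_id  = "111111111111"       # placeholder member account ID
  email       = "owner@example.com"  # placeholder member account email
  invite      = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;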

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mJ11zzoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8ai3rzshjm6gp203bw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mJ11zzoT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8ai3rzshjm6gp203bw7.png" alt="Image description" width="749" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filtering noise&lt;/strong&gt;: While GuardDuty provides important insights, it is equally important to separate genuine alerts from insignificant ones and prioritize accordingly. Some alerts require an immediate response, but many are worth ignoring, as they only create unnecessary panic and overhead. One example is a false positive alarm, although these are very rare. Another is an alert generated for an expected activity: a vulnerability scanning tool deployed for port scanning will trigger a “port scanning” finding when it performs a scan. The alert is genuine, but the activity is expected. The same applies to a port scan from a non-malicious IP against the intentionally opened port of a web server, or SSH on the bastion. In such scenarios either not much can be done or, if the risk is accepted, the user may want to avoid the notifications.&lt;/p&gt;

&lt;p&gt;The solution is to create automatic filters by defining “suppression rules”. When a suppression rule is created, matching findings are still listed in the GuardDuty console but are not sent to CloudWatch Events, avoiding any downstream action.&lt;/p&gt;

&lt;p&gt;While only the master GuardDuty account can create suppression filters, they are automatically applied to all member accounts. This allows the centralized security team to control suppression across the enterprise and reduces the effort of applying filters in individual accounts.&lt;/p&gt;
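
&lt;p&gt;A suppression rule for the port-scanning example might be sketched as below. This is a hedged example: the finding type is a real GuardDuty type, the scanner IP is a placeholder, it assumes an existing detector resource, and the "ARCHIVE" action is what suppresses matching findings.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Archive (suppress) port-scan findings coming from the known scanner IP
resource "aws_guardduty_filter" "suppress-scanner" {
  name        = "suppress-known-scanner"
  detector_id = aws_guardduty_detector.main.id  # assumes an existing detector
  action      = "ARCHIVE"
  rank        = 1

  finding_criteria {
    criterion {
      field  = "type"
      equals = ["Recon:EC2/Portscan"]
    }

    criterion {
      field  = "service.action.networkConnectionAction.remoteIpDetails.ipAddressV4"
      equals = ["10.0.0.10"]  # placeholder: the scanner's IP
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;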

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt; - The GuardDuty service is very quick and easy to deploy. Although it only takes AWS service logs into account, it still generates a lot of valuable information to help prevent possible attacks and to zero in on compromised servers during cyber-forensic activities.&lt;/p&gt;

&lt;p&gt;Reference: -&lt;br&gt;
&lt;a href="https://aws.amazon.com/guardduty/"&gt;https://aws.amazon.com/guardduty/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://d1.awsstatic.com/whitepapers/compliance/NIST_Cybersecurity_Framework_CSF.pdf"&gt;https://d1.awsstatic.com/whitepapers/compliance/NIST_Cybersecurity_Framework_CSF.pdf&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html"&gt;https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.nist.gov/cyberframework/online-learning/five-functions"&gt;https://www.nist.gov/cyberframework/online-learning/five-functions&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Demystifying Cross-Account S3 Access</title>
      <dc:creator>Alok-Saraswat</dc:creator>
      <pubDate>Thu, 02 Nov 2023 16:10:29 +0000</pubDate>
      <link>https://dev.to/alok-saraswat/demystifying-cross-account-s3-access-13jm</link>
      <guid>https://dev.to/alok-saraswat/demystifying-cross-account-s3-access-13jm</guid>
      <description>&lt;p&gt;Multi-account setup in cloud offers several advantages like grouping of workloads based on business purpose and ownership, applying distinct security controls by environment, constraining access to sensitive data, reducing blast radius, supporting multiple IT operating models, managing costs and distributing AWS service quotas and API request rate limits. However, setting up multi-account environment requires understanding of services and its access across account boundaries.&lt;/p&gt;

&lt;p&gt;For example, organizations are often required to grant external users, or users from other accounts within their organization, access to certain Amazon S3 buckets. In this article let's explore the possible options to enable cross-account access to Amazon S3 buckets and their objects.&lt;/p&gt;

&lt;p&gt;Depending on the use case there are a couple of ways to grant cross-account access to objects.&lt;/p&gt;

&lt;p&gt;Let's look at the scenario where IAM user “developer” is in account “A” and S3 bucket “test-dev-app-bucket-1” is in account “B”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--herfzYK5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mw0h9b7t5mashlkfvsef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--herfzYK5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mw0h9b7t5mashlkfvsef.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For better understanding, and to observe this scenario in practice, let's use a CloudFormation script to create the user with the required permissions in account “A”.&lt;/p&gt;

&lt;p&gt;In this setup we'll define the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Who can access the objects inside the bucket (the Principal element).&lt;/li&gt;
&lt;li&gt;Which objects they can access (the Resource element).&lt;/li&gt;
&lt;li&gt;How they can access the objects inside the bucket (the Action element).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;CloudFormation template in account "A"&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Description: CloudFormation Template for developer IAM user creation in account"A"
Parameters:
  ForEnvironment:
    Type: String
    Default: dev
    Description: Environment name for which this developer user is created
    AllowedValues:
    - sit
    - uat
    - prod
    - dev 
    - test
Resources:
#IAM user in account A#
  AccAdeveloperIAMUser:
    Type: 'AWS::IAM::User'
    Properties:
      UserName: !Join  # give a name to this user      
            - '-'
            - - 'developer'
              - !Ref ForEnvironment
              - "iam-user-01"


      Policies: # list of inline policy documents embedded in the user
        - PolicyName: !Join  # give a name to this policy
            - '-'
            - - 'developer'
              - !Ref ForEnvironment
              - "iam-policy-01"
          PolicyDocument: # JSON policy document
            Version: '2012-10-17'
            Statement:
            - Effect: Allow
              Action:
              - s3:PutObject
              - s3:GetObject
              - s3:ListBucketMultipartUploads
              - s3:DeleteObjectVersion
              - s3:ListBucketVersions
              - s3:ListBucket
              - s3:DeleteObject
              - s3:GetObjectVersion
              - s3:ListAccessPoints
              Resource:
              - "arn:aws:s3:::test-dev-app-bucket-1"      # bucket ARN, needed for bucket-level actions like s3:ListBucket
              - "arn:aws:s3:::test-dev-app-bucket-1/*"    # object ARN, for object-level actions


Outputs:
  AccAdeveloperIAMUser:
    Description: ARN to be supplied in developer template for environments
    Value: !GetAtt AccAdeveloperIAMUser.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's create an S3 bucket in account "B" with the required bucket policy to grant access to the user "developer" from account "A".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09
Description: CloudFormation Template for S3 bucket in account "B"
Parameters:
  EnvName:
    Type: String
    Default: "dev"
    Description: Environment name used in the S3 bucket name
    AllowedValues:
    - sit
    - uat
    - prod
    - dev     


Resources:
  S3Bucket1:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Join
            - '-'
            - - 'test'
              - !Ref EnvName
              - "app-bucket-1"
      VersioningConfiguration:
        Status: Enabled
      AccessControl: "Private"
      PublicAccessBlockConfiguration:
          BlockPublicAcls: "true"
          BlockPublicPolicy: "true"
          IgnorePublicAcls: "true"
          RestrictPublicBuckets: "true"
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'AES256'                


  NameBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref S3Bucket1
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            AWS:
            # ARN of the "developer" IAM user created in account "A"
            # (replace 111111111111 with the account "A" account ID)
            - 'arn:aws:iam::111111111111:user/developer-dev-iam-user-01'
          Action:
          - s3:GetObject
          - s3:PutObject
          - s3:ListBucket
          - s3:DeleteObject
          Resource:
          - !Join
            - ''
            - - 'arn:aws:s3:::'
              - !Ref S3Bucket1                  
          - !Join
            - ''
            - - 'arn:aws:s3:::'
              - !Ref S3Bucket1
              - /*


Outputs:
  S3Bucket1:
    Description: Name of the bucket created using this template.
    Value: !Ref S3Bucket1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration will allow user "Developer" from Account "A" to securely access an S3 bucket in account "B".&lt;/p&gt;
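&lt;p&gt;To verify the setup, the "developer" user can test access from account "A" with the AWS CLI. The sketch below assumes that user's credentials are stored in a profile named "developer":&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# list the bucket in account "B" using the developer user's credentials
aws s3 ls s3://test-dev-app-bucket-1 --profile developer

# copy an object to and from the bucket across the account boundary
aws s3 cp ./build.log s3://test-dev-app-bucket-1/logs/build.log --profile developer
aws s3 cp s3://test-dev-app-bucket-1/logs/build.log ./build-copy.log --profile developer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;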

&lt;p&gt;Now let's look at another scenario, where an EC2 instance called "Jenkins-server" in account "A" needs to access the S3 bucket "test-dev-app-bucket-1" in account "B". Here the EC2 instance will have the required permissions to access S3 across accounts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1o5QwsSb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfnbvnyyqtdlvg5r5ocr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1o5QwsSb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfnbvnyyqtdlvg5r5ocr.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This use case is suitable for centralizing permission management when providing cross-account access to multiple services. Cross-account IAM roles simplify provisioning cross-account access to S3 objects stored in multiple S3 buckets, so we don't have to manage a separate bucket policy for each bucket. This role-based approach also allows cross-account access to objects owned or uploaded by another AWS account or by AWS services.&lt;/p&gt;

&lt;p&gt;The following script will create an EC2 instance with the required role association in account "A".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: '2010-09-09
Description: CloudFormation Template for ec2 creation in Account A
Parameters:
  EnvName:
    Type: String
    Default: "dev"
    Description: Environment name for the EC2 instance
    AllowedValues:
    - sit
    - uat
    - prod
    - dev         


  KeyPairName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: must be the name of an existing EC2 KeyPair. 


  SubnetInstance:
    Type: AWS::EC2::Subnet::Id
    Description: Subnet ID for  Instance    


  VPCIdEC2:
    Type: AWS::EC2::VPC::Id
    Description: VPC ID for  Instance   



Resources:
  EC2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: "instance-sg-01"
      GroupDescription: "security group attached to  ec2 instance"
      VpcId: !Ref VPCIdEC2
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp:  172.16.0.0/16       


  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: "ami-0015c3ec5dfe53255"         
      InstanceType: "t3.small"        
      KeyName: !Ref KeyPairName
      SubnetId: !Ref SubnetInstance
      SecurityGroupIds:
        - Ref: EC2SecurityGroup
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            VolumeSize: 50
            Encrypted: 'true'
            VolumeType: "gp3"
      IamInstanceProfile: !Ref InstanceProfile
      Tags:
        - Key: Name
          Value: !Join
              - '-'
              - - 'kbc-be'
                - !Ref EnvName
                - "-Inst-01"       
        - Key: Schedule
          Value: -server
        - Key: environment
          Value: !Ref EnvName          



  InstanceSSMRole: 
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore  


  CrossAccountAssumePolicy:
    Type: 'AWS::IAM::Policy'
    Properties:
      PolicyName: cross-account-s3-assume
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: 'sts:AssumeRole'
            # ARN of the role to be created in account "B"
            # (replace 222222222222 with the account "B" account ID)
            Resource: 'arn:aws:iam::222222222222:role/RoleAccountB'
      Roles:
        - !Ref InstanceSSMRole


  InstanceProfile: 
    Type: 'AWS::IAM::InstanceProfile'
    Properties:
      Path: /
      # an instance profile can contain only one role
      Roles:
        - !Ref InstanceSSMRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's create the role which will be assumed by the EC2 instance in account "A". The following script will create the required role in account "B" along with an S3 bucket for testing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RoleAccountB:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Action:
          - s3:ListAllMyBuckets
          Effect: Allow
          Resource:
          - arn:aws:s3:::*
        - Action:
          - s3:ListBucket
          - s3:GetBucketLocation
          Effect: Allow
          Resource: arn:aws:s3:::AccountBBucketName
        - Effect: Allow
          Action:
          - s3:GetObject
          - s3:PutObject
          Resource: arn:aws:s3:::AccountBBucketName/* 


  AccountBS3Bucket1:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: !Join
            - '-'
            - - 'test'
              - !Ref EnvName
              - "app-bucket-1"
      VersioningConfiguration:
        Status: Enabled
      AccessControl: "Private"
      PublicAccessBlockConfiguration:
          BlockPublicAcls: "true"
          BlockPublicPolicy: "true"
          IgnorePublicAcls: "true"
          RestrictPublicBuckets: "true"
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'AES256'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the role in account "A" assumes the role in account "B", so that the EC2 instance "Jenkins-server" in account "A" can perform the required S3 operations.&lt;/p&gt;
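&lt;p&gt;The cross-account call from the "Jenkins-server" instance can be sketched as follows (the account ID is a placeholder; the temporary credentials come from the assume-role response):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# from the EC2 instance in account "A", assume the role in account "B"
aws sts assume-role \
    --role-arn arn:aws:iam::222222222222:role/RoleAccountB \
    --role-session-name jenkins-s3-session

# export the temporary credentials returned by the call above
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...

# S3 operations now run under the account "B" role
aws s3 ls s3://test-dev-app-bucket-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;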

&lt;p&gt;This configuration covers most of the use cases; however, architectures are evolving every day. For example, many organizations prefer to put restrictive rules around editing bucket policies. AWS has another solution called S3 Access Points, which enforces granular access to S3 objects with flexible cross-account access without having to edit bucket policies. I'll cover this topic in my subsequent articles.&lt;/p&gt;

&lt;p&gt;Reference: Amazon Web Services&lt;/p&gt;

</description>
    </item>
    <item>
      <title>eksctl- Smart Deployment Of AWS EKS</title>
      <dc:creator>Alok-Saraswat</dc:creator>
      <pubDate>Tue, 11 Jul 2023 17:59:56 +0000</pubDate>
      <link>https://dev.to/alok-saraswat/eksctl-smart-deployment-of-aws-eks-3odc</link>
      <guid>https://dev.to/alok-saraswat/eksctl-smart-deployment-of-aws-eks-3odc</guid>
      <description>&lt;p&gt;AWS Elastic Kubernetes service is a very powerful managed AWS service which allows to run upstream Kubernetes on AWS without having to manage underlying Kubernetes control plane.&lt;br&gt;
K8s was developed by Google based on its experience of running containers and was released open source in 2014 to cloud native computing foundation. It is now an open source container orchestration platform which like any other container management platform allows scheduling, scaling, distributing load across containers, replacing failed containers etc.&lt;/p&gt;

&lt;p&gt;EKS is natively integrated with AWS services like IAM, VPC and CloudWatch, and provides a secure and highly scalable way to run applications in a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;However, when it comes to implementation there are several things to take care of: the VPC, subnets, route tables, VPC endpoints, EKS cluster creation, node groups, security groups for the cluster as well as the node groups, cluster roles, node group roles, and the list goes on.&lt;/p&gt;

&lt;p&gt;But what if I told you that a fully working cluster with nodes can be deployed with a single simple command?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, welcome to the world of &lt;strong&gt;&lt;em&gt;eksctl&lt;/em&gt;&lt;/strong&gt;, a &lt;em&gt;&lt;strong&gt;simple CLI tool for creating and managing clusters on Elastic Kubernetes Service&lt;/strong&gt;&lt;/em&gt;. It was developed by Weaveworks in Go, and it uses CloudFormation templates in the back end for deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w_ooXM6c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttfb3agovlbd3f5rsrhq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w_ooXM6c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttfb3agovlbd3f5rsrhq.jpeg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above simple command # eksctl create cluster will create a cluster with the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An auto-generated name, e.g. wonderfull-pizza-1527688624&lt;/li&gt;
&lt;li&gt;Two m5.large worker nodes&lt;/li&gt;
&lt;li&gt;The official AWS EKS AMI&lt;/li&gt;
&lt;li&gt;AWS Region = us-west-2&lt;/li&gt;
&lt;li&gt;A dedicated VPC with public and private subnets and route tables&lt;/li&gt;
&lt;li&gt;Managed node groups and security groups for the cluster&lt;/li&gt;
&lt;/ol&gt;
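&lt;p&gt;These defaults can be overridden with command-line flags; for example, the following sketch (the cluster name and region are chosen just for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster \
  --name demo-cluster \
  --region eu-central-1 \
  --nodes 2 \
  --node-type t3.medium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;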

&lt;p&gt;The above example is just the tip of the iceberg. The Kubernetes cluster can be further customized using a declarative &lt;em&gt;cluster.yaml&lt;/em&gt; file. But before diving deep, let's look at the prerequisites for using eksctl.&lt;/p&gt;

&lt;p&gt;Install &lt;em&gt;kubectl&lt;/em&gt; – the command line tool for working with Kubernetes clusters.&lt;br&gt;
Following are the steps to install kubectl for Kubernetes v1.23.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download the SHA-256 checksum file for the binary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -o kubectl.sha256 https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.7/2022-06-29/bin/linux/amd64/kubectl.sha256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the SHA-256 sum of the downloaded binary against the checksum file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl sha1 -sha256 kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply execute permissions to the binary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x ./kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the binary to a folder in your PATH.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p $HOME/bin &amp;amp;&amp;amp; cp ./kubectl $HOME/bin/kubectl &amp;amp;&amp;amp; export PATH=$PATH:$HOME/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the $HOME/bin path to your shell initialization file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo 'export PATH=$PATH:$HOME/bin' &amp;gt;&amp;gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify kubectl version with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version --short --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output should be as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-27-189 ~]# kubectl version --short --client
Client Version: v1.23.7-eks-4721010
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install &lt;strong&gt;eksctl&lt;/strong&gt; on a Linux server.&lt;br&gt;
Download and extract the latest release of eksctl with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Move the extracted binary to /usr/local/bin.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv /tmp/eksctl /usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the installation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output should be as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-27-189 ~]$ eksctl versio
0.112.0n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have kubectl and eksctl installed, let's review a cluster.yaml file which, when executed, will create:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An IAM role for setup of the control plane.&lt;/li&gt;
&lt;li&gt;An EKS cluster in the desired subnets/VPC.&lt;/li&gt;
&lt;li&gt;A managed node group of EC2 instances.&lt;/li&gt;
&lt;li&gt;A launch template and auto-scaling group with the following parameters:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Instance size = c5.large&lt;/li&gt;
&lt;li&gt;Instance type = spot&lt;/li&gt;
&lt;li&gt;Min size = 2&lt;/li&gt;
&lt;li&gt;Max size = 4&lt;/li&gt;
&lt;li&gt;EBS volume size = 50GB&lt;/li&gt;
&lt;li&gt;EBS volume type = gp2&lt;/li&gt;
&lt;li&gt;EBS volume encryption&lt;/li&gt;
&lt;li&gt;Corresponding kubeconfig and aws-auth ConfigMap entries provisioned.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This cluster.yaml file can be deployed using the following simple command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster -f cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;&lt;strong&gt;cluster.yaml&lt;/strong&gt; file&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# An example of ClusterConfig object using an existing VPC
--- 
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: kbc-be-kh-sit-eks-cluster-01
  region: eu-central-1
privateCluster:
  enabled: true
  skipEndpointCreation: true
vpc:
  id: "vpc-7435khjhdtrff472d"  
  cidr: "172.16.24.0/21"       
  extraCIDRs: ["172.16.34.0/23","172.16.36.0/24"]  
  subnets:
    private:
      eu-central-1a:
        id: "subnet-78werdfjyu7874gd7"
        cidr: "172.16.24.0/24"
      eu-central-1b:
        id: "subnet-bdhghghdhfdfhuduf"
        cidr: "172.16.25.0/24"
managedNodeGroups:  
  - name: kbc-be-kh-sit-eks-managed-nodegrp-spot-01
    instanceTypes: ["c5.large"]
    spot: true
    privateNetworking: true
    minSize: 2
    maxSize: 4
    desiredCapacity: 2
    volumeSize: 50
    volumeType: gp2
    volumeEncrypted: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
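&lt;p&gt;Once the cluster is up, eksctl writes the cluster credentials to your kubeconfig automatically; they can also be rewritten on demand (the cluster name below matches the one in cluster.yaml above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl utils write-kubeconfig --cluster kbc-be-kh-sit-eks-cluster-01 --region eu-central-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;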



&lt;p&gt;Verify cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-27-189 ~]$ kubectl get sv
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   &amp;lt;none&amp;gt;        443/TCP   12mc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify Nodes&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-27-189 ~]$ kubectl get node
NAME                                             STATUS   ROLES    AGE    VERSION
ip-192-168-15-29.eu-central-1.compute.internal   Ready    &amp;lt;none&amp;gt;   6m4s   v1.22.12-eks-ba74326
ip-192-168-37-52.eu-central-1.compute.internal   Ready    &amp;lt;none&amp;gt;   6m     v1.22.12-eks-ba74326s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Likewise, the whole setup can be terminated with the following simple command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl delete cluster -f cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Well, after experiencing this personally, I can say this is just a small teaser of an exceedingly advanced tool with blazing possibilities.&lt;/p&gt;

&lt;p&gt;Reference: Amazon Web Services&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Demystifying Containers | Containers on AWS</title>
      <dc:creator>Alok-Saraswat</dc:creator>
      <pubDate>Wed, 05 Jul 2023 18:52:08 +0000</pubDate>
      <link>https://dev.to/alok-saraswat/demystifying-containers-exploring-containers-on-aws-20fe</link>
      <guid>https://dev.to/alok-saraswat/demystifying-containers-exploring-containers-on-aws-20fe</guid>
      <description>&lt;p&gt;Are you daunted by horrifying terms like docker, image registry, container orchestration, docker swarm, kubernatics, kubectl? then this article is for you where I'll try to simplify and explore containers.&lt;br&gt;
The concept of containerization dates back prior to 1956 of shipment industry. During those times all the shipping was done using loose boxes and barrels. It was a back breaking and time consuming work to get all those units loaded and offloaded in ships, boats and trucks. Soon, Malcolm McLean a United States businessman and entrepreneur revolutionized international trade and transport industry by developing modern days shipping containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--juV8yPAH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z1qvnq2cb9jvxwva7uu.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--juV8yPAH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z1qvnq2cb9jvxwva7uu.jpeg" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The shipping container provided a standardized unit of goods transportation which could be moved by truck, train and giant ships. Tools were developed to handle containers, and everyone knew how to use and store them, resulting in a tremendous improvement in the shipping industry's efficiency and margins. &lt;br&gt;
This is effectively what containers are doing for the software industry today. So let's look at what containers are in IT terms.&lt;br&gt;&lt;br&gt;
A container is a sandboxed environment for an application to execute in. It encompasses everything the application needs: code libraries, a runtime, dependencies, files, images, compiled application code ready to run, and configuration. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fl7mT_60--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r98te4bnblp61ma1xqww.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fl7mT_60--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r98te4bnblp61ma1xqww.jpeg" alt="Image description" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Containers first came into the picture in 1979 and have come a long way since. In 2013 Docker came onto the scene and mobilized widespread organizational adoption of containers. As of now there are several container providers: Docker, the Linux containers LXC and LXD, CoreOS, Hyper-V containers, Windows Server containers and so on.&lt;br&gt;
In the early days, before virtualization, the operating system installed on a physical server had the entire system's resources available, and the application and its dependent libraries were installed directly on that system. This worked well but lacked optimal resource utilization, which acted as a key driver for the concept of virtualization. &lt;br&gt;
With virtualization, a hypervisor is installed on the physical server and has access to all of the server's resources. It allocates the required resources in the form of virtual machines, and a guest operating system installed on each virtual machine controls the resources allocated to it. This approach made a larger pool of computing resources available to a large group of people without compromising security and confidentiality, and since resources are utilized more efficiently, multitenancy brought cost-effectiveness.&lt;br&gt;
Containers are one step ahead of virtual machines. At their core, containers are very similar to virtual machines, except that only the operating system is virtualized rather than the entire machine.&lt;br&gt;
Thus, with container technology a single operating system on a host machine can run many isolated applications. This makes containers much more efficient, fast and lightweight in comparison to virtual machines. In simple terms, think of a container as a running virtual machine without the overhead of spinning up an entire operating system; instead it shares the host operating system's kernel. Containers are exceptionally lightweight (often only a few megabytes) because they do not need to reproduce an entire operating system, and hence they take just a few seconds to boot up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lj5I_ncs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/941dx9o0rw1lr86vp8iz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lj5I_ncs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/941dx9o0rw1lr86vp8iz.jpeg" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6kibm96O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p446mv9rikfatrau8l6f.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6kibm96O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p446mv9rikfatrau8l6f.jpeg" alt="Image description" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;
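&lt;p&gt;The lightweight nature of containers is easy to observe in practice. With Docker installed, a minimal container starts in well under a second (the alpine image is used here only as a small illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# pull a tiny Linux image, run one command in it, then remove the container
docker run --rm alpine echo "hello from a container"

# compare image sizes: alpine is only a few megabytes
docker images alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;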

&lt;p&gt;Before going further into the advantages of containers, let's first look at some of the problem statements which contributed to the rise of containers. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;u&gt;Portability- One container for all environments &lt;/u&gt;&lt;/strong&gt;– Think of a scenario where an application worked well in the development environment but breaks in the staging or production environments. This happens due to drift in runtime, dependencies or configuration across environments; the larger the number of environments, the greater the chance of drift. Here a container engine such as Docker solves the problem by packaging the runtime, code, configuration and dependencies as a consistent unit for deployment. This artifact, created as part of the build process, can then be delivered to any machine or environment confidently, and it will run correctly because it brings along everything it needs to run.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZZNhX1R9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll5v1tjqy8sbh105q3cq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZZNhX1R9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll5v1tjqy8sbh105q3cq.jpeg" alt="Image description" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;
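&lt;p&gt;As a sketch of this packaging idea, a minimal Dockerfile might look like the following (the base image, file names and start command are assumptions for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# start from a known base image that includes the runtime
FROM python:3.11-slim

# copy the application code and its dependency list into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# the same image now runs identically in dev, staging and production
CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;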

&lt;p&gt;2.&lt;strong&gt;&lt;u&gt;Evolution of microservices&lt;/u&gt;&lt;/strong&gt; - Microservice architecture is gaining a lot of popularity these days, and giants like Facebook, Amazon and others are adopting microservices at scale. There are three primary advantages to adopting it: &lt;br&gt;
     a. Certain applications are easier to build and maintain when they are broken into small pieces or services instead of being maintained as a monolithic application.&lt;br&gt;
     b. If any module or service needs an upgrade, it can be done without impacting the whole application.&lt;br&gt;
     c. If any module or service goes down, the whole application remains largely unaffected because it is loosely coupled. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wMfHYVnJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/onb9vo2zboci47b6hab9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wMfHYVnJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/onb9vo2zboci47b6hab9.jpeg" alt="Image description" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When microservices are deployed on virtual machines, resources like RAM, CPU and disk space are wasted, because each service gets its own instance and rarely uses those resources fully. So it is not an ideal way to deploy a microservice architecture. Also, think of a large application where 50+ microservices need to be deployed: using one instance per service would not be feasible due to the resource wastage. Containers, on the contrary, allow much higher density of applications on the same hardware, resulting in better resource utilization and cost optimization. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Container Orchestration&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers, thanks to their lightweight and portable nature, have made building and scaling cloud-native applications easier. It is easy to deploy and manage a few of them, but complexity grows exponentially as the environment grows. Think of a large application with hundreds of containers and services: in such cases automation is required to place the containers appropriately and to start and stop them as needed, or in other words to manage the container lifecycle. This is where container orchestration tools come into the picture. Some examples are AWS Elastic Container Service, AWS Elastic Kubernetes Service, Apache Mesos, Docker Swarm, Azure Container Service etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RswFw0CE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yltd3tidx4v09q7lapwr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RswFw0CE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yltd3tidx4v09q7lapwr.jpeg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A container orchestrator is a tool that can manage and automate tasks such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provisioning hosts and deploying containers&lt;/li&gt;
&lt;li&gt;Configuration and scheduling&lt;/li&gt;
&lt;li&gt;Allocating host server resources to containers&lt;/li&gt;
&lt;li&gt;Managing container availability&lt;/li&gt;
&lt;li&gt;Spinning up or removing containers according to workload across the infrastructure&lt;/li&gt;
&lt;li&gt;Routing traffic and load balancing&lt;/li&gt;
&lt;li&gt;Monitoring container health and replacing unhealthy containers&lt;/li&gt;
&lt;li&gt;Managing and securing communication between containers&lt;/li&gt;
&lt;/ul&gt;
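&lt;p&gt;The essence of several of these tasks is a reconciliation loop: compare the desired state with the observed state and act on the difference. The toy sketch below illustrates the idea for the "spin up or remove containers" task; all names are illustrative, and real orchestrators are of course far richer:&lt;/p&gt;

```python
# Toy reconciliation sketch: converge the running containers toward a
# desired count. Illustrative only; not how ECS or Kubernetes is built.

def reconcile(desired_count, running):
    """Return (action, container_id) pairs that move the list of
    running container ids toward the desired count."""
    actions = []
    deficit = desired_count - len(running)
    if deficit > 0:
        # Too few containers: start more.
        for i in range(deficit):
            actions.append(("start", f"container-{len(running) + i}"))
    elif deficit != 0:
        # Too many containers: stop the surplus.
        for cid in running[desired_count:]:
            actions.append(("stop", cid))
    return actions

if __name__ == "__main__":
    print(reconcile(3, ["c0"]))        # scale out by two
    print(reconcile(1, ["c0", "c1"]))  # scale in by one
```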

&lt;p&gt;&lt;strong&gt;&lt;u&gt;AWS Services for Containers&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS has several offerings to help accelerate container deployment. Let's look at them one by one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p8CEl8Bv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vk2oc9beqgwz414psq0x.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p8CEl8Bv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vk2oc9beqgwz414psq0x.jpeg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Amazon Elastic Container Registry&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Elastic Container Registry (ECR) is a fully managed AWS service that eases the management, storage and deployment of Docker images. It is the place where container images are stored and distributed to be instantiated as new containers. Think of AMIs for EC2 instances: similarly, a container image is a base file that defines what should be created when it is executed.&lt;br&gt;
ECR is natively integrated with Elastic Container Service and Elastic Kubernetes Service, which makes the deployment workflow simple, and with AWS IAM, which controls access to each repository. It uses S3 in the backend to store images durably.&lt;br&gt;
It also eliminates the need to manage infrastructure to host a private repository, along with its availability and scaling, by offering a highly available and scalable platform to reliably host container images. &lt;/p&gt;
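&lt;p&gt;Private ECR repositories follow a predictable URI scheme (account ID, region and repository name), and an image must be tagged with that URI before it can be pushed. The small helper below builds such a URI; the account ID and repository name are illustrative:&lt;/p&gt;

```python
# Build the registry URI an image is tagged with before pushing to a
# private ECR repository. Account id and repository are illustrative.

def ecr_image_uri(account_id, region, repository, tag="latest"):
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

if __name__ == "__main__":
    print(ecr_image_uri("123456789012", "us-east-1", "my-app", "1.0"))
    # 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0
```

&lt;p&gt;In the typical workflow you authenticate Docker with "aws ecr get-login-password", then "docker tag" the local image with this URI and "docker push" it.&lt;/p&gt;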

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8-oBlboI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n15n9n1q2qgyy9a4ni6y.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8-oBlboI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n15n9n1q2qgyy9a4ni6y.jpeg" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Amazon Elastic Container Service (ECS)&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As containers gained popularity and enterprises started using them at scale, AWS introduced its container management service for Docker containers, called Elastic Container Service (ECS).&lt;br&gt;
This AWS-managed Docker container orchestration service eliminates the need to manage containers and clusters yourself and offers a managed cluster of EC2 instances to run applications in Docker containers.&lt;br&gt;
ECS is widely used within Amazon to power services such as AWS Batch, Amazon Polly, Amazon Lex and Amazon SageMaker, which means it is tried and tested for availability, security, reliability and scale.&lt;br&gt;
It is a regional service with native integration with several AWS services like IAM, Secrets Manager, CloudFormation, Elastic Load Balancing, Route 53 and AWS App Mesh, which bring rich observability, security and traffic controls to applications.&lt;br&gt;
Since launch, ECS has grown so fast that every hour five times more containers are launched than EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qthyexVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9i0v8dl6609w74murkq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qthyexVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9i0v8dl6609w74murkq.jpeg" alt="Image description" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS ECS offers two container hosting models, based on whether the user wants to manage infrastructure or not:&lt;br&gt;
      •   EC2 launch type &lt;br&gt;
      •   Fargate – the serverless offering&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;EC2 launch type- Infrastructure-first approach&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
In this model ECS lets the user define and manage a cluster, or group of instances, over which ECS deploys and manages containers. These container instances are the same as any other EC2 instances and can be on-demand or spot instances. The only difference is that they have an ECS agent installed, which controls their lifecycle and handles communication between the ECS service and the instance, reporting the status of running containers and launching new ones. The agent can be installed manually on the instances or via pre-baked AMIs. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nssQ1SUK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1kfxvazo1ydiee0vah8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nssQ1SUK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1kfxvazo1ydiee0vah8.jpeg" alt="Image description" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The instances on which containers are running appear in the EC2 instance list, and the user can SSH into them. Here the user is responsible for monitoring, patching, scaling and security. Below is the responsibility matrix.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IEA3Rfdj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cyn03ai9spapnhdqvviw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IEA3Rfdj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cyn03ai9spapnhdqvviw.jpeg" alt="Image description" width="592" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this approach ECS runs tasks based on the available infrastructure, as the following steps explain:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OTbH_x3r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1vaikd6iikx2t25cnpa.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTbH_x3r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1vaikd6iikx2t25cnpa.jpeg" alt="Image description" width="732" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a task needs to be deployed via the ECS service, a list of all available instances is fetched in the backend. Instances that are already running tasks and have their resources consumed, or that are of an unsuitable size or configuration, are filtered out. &lt;/li&gt;
&lt;li&gt;Instances are further filtered based on placement constraints and strategy, for example spreading containers across Availability Zones. &lt;/li&gt;
&lt;li&gt;Finally, tasks are deployed on selected instances. &lt;/li&gt;
&lt;/ul&gt;
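&lt;p&gt;The filtering steps above can be sketched as a small function: drop instances without enough free resources, then prefer the least-loaded Availability Zone (a "spread" strategy). The field names and the scoring below are illustrative simplifications, not ECS internals:&lt;/p&gt;

```python
# Toy sketch of EC2 launch type placement: filter by free resources,
# then spread across AZs. Field names are illustrative.

def place_task(instances, cpu_needed, mem_needed):
    """instances: dicts with 'id', 'az', 'free_cpu', 'free_mem', 'tasks'.
    Returns the chosen instance id, or None if placement fails."""
    # Step 1: drop instances whose free resources are insufficient
    # or whose configuration cannot fit the task.
    candidates = [i for i in instances
                  if i["free_cpu"] >= cpu_needed and i["free_mem"] >= mem_needed]
    if not candidates:
        return None  # e.g. the constraint demands r5 and none are available

    # Step 2: spread strategy -- prefer the AZ currently running
    # the fewest tasks.
    az_load = {}
    for inst in instances:
        az_load[inst["az"]] = az_load.get(inst["az"], 0) + inst["tasks"]
    candidates.sort(key=lambda inst: az_load[inst["az"]])
    return candidates[0]["id"]
```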

&lt;p&gt;Likewise, if a placement constraint defines the instance family as r5 and none are available, the deployment fails.&lt;br&gt;
Needless to say, infrastructure is at the center of this deployment model: the application is deployed based on infrastructure availability and suitability. &lt;br&gt;
Let's now look at the deployment steps in the AWS console for the EC2 launch type. First, an ECS cluster needs to be provisioned to run tasks on. The following inputs are taken from the user:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster name&lt;/li&gt;
&lt;li&gt;Instance provisioning model: on-demand or spot instances&lt;/li&gt;
&lt;li&gt;EC2 instance type, like m5.large, r5.xlarge or t3.large&lt;/li&gt;
&lt;li&gt;Instance count: how many instances will be part of the cluster&lt;/li&gt;
&lt;li&gt;EC2 AMI ID: the user can choose from a list of available AMIs that already include the ECS agent&lt;/li&gt;
&lt;li&gt;EBS volume size: in the EC2 launch type the user can choose to have persistent storage&lt;/li&gt;
&lt;li&gt;Key pair: the EC2 launch type allows SSH into the instance, which gives granular control&lt;/li&gt;
&lt;li&gt;VPC, subnets and CIDR range&lt;/li&gt;
&lt;li&gt;Security group inbound rules&lt;/li&gt;
&lt;li&gt;IAM role for the container instance: the ECS agent needs to communicate with the ECS service, and the “ecsInstanceRole” is required to enable these calls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the above attributes are defined, ECS creates a CloudFormation template to create the resources. The instances are launched in an Auto Scaling group to ensure that the desired count of healthy instances is always available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qcb6ZpD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kyefkok38j557tp0vw9c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qcb6ZpD6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kyefkok38j557tp0vw9c.jpg" alt="Image description" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the cluster is ready, we need to define a “task definition”, which is a blueprint of the application. The following attributes are defined:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, decide whether the task should be compatible with the EC2 or Fargate launch type.&lt;/li&gt;
&lt;li&gt;Define the name of the task definition.&lt;/li&gt;
&lt;li&gt;Define a task role if the container needs to interact with any other AWS service, for example if the container app needs to put data in an S3 bucket.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Network mode: there are several modes, which deserve a separate detailed discussion, but in general only the default mode is supported for Windows, while for Linux, bridge mode allows one-to-one mapping of instance and container ports.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define the execution role for the task, “ecsTaskExecutionRole”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Task size: optional for the EC2 launch type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define or add a container: here container details like name, container image repository path, memory limit, port mapping, health check, container environment details, network settings, logging etc. can be defined. With the above attributes the task definition is created; next, run a task from it. A task is an instance of a task definition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, the task name, EC2 instance cluster, number of tasks to run, and task placement across instances and AZs are defined, and the task is ready to run.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
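&lt;p&gt;As a hedged sketch, the attributes above map onto the JSON document that a task definition is registered from. Field names follow the ECS task definition shape; the image, ports and role ARN below are illustrative placeholders:&lt;/p&gt;

```python
import json

# Minimal sketch of an EC2 launch type task definition.
# All values are illustrative.
task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["EC2"],
    "networkMode": "bridge",
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:1.0",
            "memory": 512,  # hard memory limit in MiB
            "portMappings": [
                {"containerPort": 8000, "hostPort": 80, "protocol": "tcp"}
            ],
            "essential": True,
        }
    ],
}

if __name__ == "__main__":
    print(json.dumps(task_definition, indent=2))
```

&lt;p&gt;Saved as a file, such a document can be registered with the "aws ecs register-task-definition" CLI command.&lt;/p&gt;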

&lt;p&gt;In a nutshell, task definitions and tasks are the mechanism to tell the ECS service which container image to run, with how much CPU and memory, with which network settings, and where it should be placed. Once this information is given and the containers are spun up, the ECS service manages them by automatically scaling containers and placing them effectively across the available compute resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mlC8d5wy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/66hgoec1sm2a0grpfy5k.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mlC8d5wy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/66hgoec1sm2a0grpfy5k.jpeg" alt="Image description" width="624" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, to summarize: in the ECS EC2 model the hosting approach is infrastructure-centric. AWS manages the control plane, but the user is still responsible for the data plane; in other words, the user manages the EC2 instances and develops and manages the containers. It is useful if the user wants to define, control and manage the infrastructure. However, AWS offers another model that eliminates the infrastructure management overhead. Let's look at it next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Fargate- The game changer- Application-first approach&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ECS made it swift to run Docker containers in the cloud but, as we saw, we still have to manage the underlying EC2 instances. To eliminate this overhead, in November 2017 AWS introduced a serverless compute engine called Fargate. With Fargate there are no clusters of instances to be provisioned, patched and managed. We only define the containers and the compute resources they require, and AWS takes care of the compute as and when needed. All we do is build the container image, define the CPU and memory requirements, define IAM policies and networking, and launch it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IoWT2cAd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h1pnnoc5owkmugiu7vd7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IoWT2cAd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h1pnnoc5owkmugiu7vd7.jpeg" alt="Image description" width="624" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is natively integrated with AWS ecosystem services such as VPC, IAM, CloudWatch and load balancers. Fargate is currently the default and most popular choice for running Docker containers on AWS. This offering switched the approach to application-first, because the requirements are now owned by application developers rather than dictated by infrastructure availability, and the platform responds to the application's requirements.&lt;br&gt;
In ECS the control plane is always managed by the ECS service, but with Fargate the data plane is also managed by AWS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HkXnO7y2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4fjdaqiqcrdaz1mtg33.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HkXnO7y2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4fjdaqiqcrdaz1mtg33.jpeg" alt="Image description" width="594" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s now look at the deployment steps in AWS console for Fargate launch type to understand the difference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rJ4mqQp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g1uwyhvv7r9krx13hqlz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rJ4mqQp---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g1uwyhvv7r9krx13hqlz.jpeg" alt="Image description" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are only four objects to be defined: &lt;br&gt;
1. Define the container definition: choose the container image&lt;br&gt;
2. Define the task definition name&lt;br&gt;
      - The network mode supported for Fargate is “awsvpc”&lt;br&gt;
      - Define the execution role for the task, “ecsTaskExecutionRole”&lt;br&gt;
      - Define the task size (task memory and CPU); billing happens based on these attributes&lt;br&gt;
3. Define the service: service name, desired task count, security group and application load balancer&lt;br&gt;
4. Configure the cluster: cluster name, VPC and subnets&lt;br&gt;
Once the above attributes are defined, the application is ready to run.&lt;/p&gt;
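&lt;p&gt;For comparison with the EC2 launch type, the same application's task definition for Fargate might look like the sketch below: note the "awsvpc" network mode and the task-level CPU and memory strings that billing is based on. All values are illustrative:&lt;/p&gt;

```python
import json

# Minimal sketch of a Fargate task definition; values are illustrative.
fargate_task_definition = {
    "family": "web-app-fargate",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",        # the mode Fargate supports
    "cpu": "256",                   # 0.25 vCPU; drives billing
    "memory": "512",                # 512 MiB; drives billing
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:1.0",
            "portMappings": [{"containerPort": 8000, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

if __name__ == "__main__":
    print(json.dumps(fargate_task_definition, indent=2))
```

&lt;p&gt;Compared with the EC2 sketch, there is no host port mapping and no instance to pick: the task gets its own elastic network interface and the cpu/memory strings fully describe the compute it needs.&lt;/p&gt;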

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---nOXl9RS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/beimhvusw4t9u36akad6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---nOXl9RS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/beimhvusw4t9u36akad6.jpeg" alt="Image description" width="624" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s look at the key differences between the EC2 and Fargate launch types and their suitability in different scenarios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GcTSUAbf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwxizti71eh4qjzetsy2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GcTSUAbf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwxizti71eh4qjzetsy2.jpeg" alt="Image description" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pricing:-&lt;br&gt;
ECS with the EC2 launch type is charged according to the EC2 resources used.&lt;br&gt;
ECS with Fargate is billed according to the vCPU and memory defined in the task.&lt;/p&gt;
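&lt;p&gt;A back-of-envelope estimate makes the Fargate model concrete. The per-hour rates below are illustrative placeholders, not real AWS prices (actual rates vary by region and change over time):&lt;/p&gt;

```python
# Back-of-envelope Fargate cost sketch. The rates are illustrative
# placeholders, not real AWS prices.
VCPU_PER_HOUR = 0.04   # assumed $ per vCPU-hour
GB_PER_HOUR = 0.004    # assumed $ per GB-hour of memory

def fargate_task_cost(vcpu, memory_gb, hours):
    """Cost of one task, billed on the vCPU and memory defined in the
    task definition for as long as the task runs."""
    return (vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR) * hours

if __name__ == "__main__":
    # A 0.25 vCPU / 0.5 GB task running for 24 hours
    print(round(fargate_task_cost(0.25, 0.5, 24), 3))
```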

&lt;p&gt;&lt;strong&gt;&lt;u&gt;AWS Elastic Kubernetes Service (EKS)&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Elastic Kubernetes Service is a managed AWS service that lets you run Kubernetes on AWS without having to manage the underlying Kubernetes control plane.&lt;br&gt;
Kubernetes, or “k8s”, was developed by Google based on their experience of running containers, open-sourced in 2014 and donated to the Cloud Native Computing Foundation.&lt;br&gt;
It is now an open-source container orchestration platform which, like any other container management platform, allows scheduling, scaling, distributing load across containers, replacing failed containers etc. &lt;br&gt;
EKS is natively integrated with AWS services like IAM, VPC and CloudWatch, and provides a secure and highly scalable way to run applications in a Kubernetes cluster. &lt;br&gt;
&lt;strong&gt;Highly available-&lt;/strong&gt; EKS is highly available by design, as it deploys its servers across AZs. It monitors and automatically replaces unhealthy servers, and uses a blue-green architecture to provide zero downtime during patching.&lt;br&gt;
&lt;strong&gt;Offers a serverless option-&lt;/strong&gt; Fargate is a serverless compute option for deploying containers. &lt;br&gt;
&lt;strong&gt;Built with the community-&lt;/strong&gt; EKS provides upstream Kubernetes, which means the full-scale open-source version runs on AWS, allowing quick migration from on-prem to cloud without any refactoring. &lt;/p&gt;
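&lt;p&gt;Because EKS runs upstream Kubernetes, a standard manifest applies unchanged. As an illustrative sketch (the image and names are hypothetical), a minimal Deployment looks like this:&lt;/p&gt;

```yaml
# Minimal Kubernetes Deployment; image and names are illustrative.
# Applied with "kubectl apply -f deploy.yaml", on-prem or on EKS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:1.0
          ports:
            - containerPort: 8000
```

&lt;p&gt;The same file that drives an on-premises cluster drives an EKS cluster, which is what makes migration without refactoring possible.&lt;/p&gt;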

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kMj5Y4Rz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/guwsws16bi6dlouonei9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kMj5Y4Rz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/guwsws16bi6dlouonei9.jpeg" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far we have touched upon the concept of containerization and the various services AWS offers to deploy containers at scale. Each of these components deserves a separate deep-dive article, which I will try to cover in subsequent posts.&lt;/p&gt;

&lt;p&gt;Reference- Amazon Web Services documentation&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>docker</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
