<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: globart</title>
    <description>The latest articles on DEV Community by globart (@globart).</description>
    <link>https://dev.to/globart</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1116735%2Fcbc07a79-78c1-4562-bd65-bac8cb9a0588.png</url>
      <title>DEV Community: globart</title>
      <link>https://dev.to/globart</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/globart"/>
    <language>en</language>
    <item>
      <title>S3 native state locking in Terraform</title>
      <dc:creator>globart</dc:creator>
      <pubDate>Fri, 29 Nov 2024 21:04:36 +0000</pubDate>
      <link>https://dev.to/globart/s3-native-state-locking-in-terraform-518i</link>
      <guid>https://dev.to/globart/s3-native-state-locking-in-terraform-518i</guid>
      <description>&lt;p&gt;Since the &lt;a href="https://github.com/hashicorp/terraform/blob/v0.5.0/CHANGELOG.md" rel="noopener noreferrer"&gt;Terraform 0.5.0 release from May 2015th&lt;/a&gt; we've been able to store our state on S3 buckets.&lt;/p&gt;

&lt;p&gt;But to ensure its consistency, we've had to use state locking backed by a DynamoDB table. Although the DynamoDB-associated costs are negligible, it is nonetheless another resource that you have to create manually, which can be pretty annoying if you are managing large infrastructure and/or many projects at once.&lt;/p&gt;

&lt;p&gt;DynamoDB was needed because S3 only offered eventual consistency: a read performed some time after a write would return the latest version, but a read immediately after a write was not guaranteed to.&lt;br&gt;
This was addressed when &lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-s3-now-delivers-strong-read-after-write-consistency-automatically-for-all-applications/" rel="noopener noreferrer"&gt;AWS introduced strong read-after-write consistency for S3 in December 2020&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, this didn't solve the issue entirely, because there was no built-in mechanism to check whether an object exists before creating it.&lt;br&gt;
Fortunately, four years later, &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/" rel="noopener noreferrer"&gt;Amazon introduced support for conditional writes in S3 in August 2024&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These changes made it possible to start &lt;a href="https://github.com/hashicorp/terraform/pull/35661" rel="noopener noreferrer"&gt;work on state locking without DynamoDB&lt;/a&gt;, which requires no additional resources apart from the bucket itself.&lt;br&gt;
A couple of months later, &lt;a href="https://github.com/hashicorp/terraform/blob/v1.10/CHANGELOG.md" rel="noopener noreferrer"&gt;S3 native state locking was introduced in Terraform 1.10.0 in November 2024&lt;/a&gt;.&lt;br&gt;
While &lt;a href="https://github.com/opentofu/opentofu/issues/599" rel="noopener noreferrer"&gt;a similar discussion has existed in the OpenTofu repo since September 2023&lt;/a&gt;, at the time of writing there was no equivalent solution. But I hope one will be created in the coming months.&lt;/p&gt;

&lt;p&gt;So, while previously your S3 backend configuration looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "example-bucket"
    key            = "path/to/state"
    region         = "us-east-1"
    dynamodb_table = "example-table"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it can look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket       = "example-bucket"
    key          = "path/to/state"
    region       = "us-east-1"
    use_lockfile = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While &lt;code&gt;terraform apply/destroy&lt;/code&gt; is running, a &lt;code&gt;key&lt;/code&gt;.tflock file is created in the S3 bucket, containing lock information, including a unique lock ID and other metadata.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqeta0u1ldnuyd5u0tq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqeta0u1ldnuyd5u0tq4.png" alt="key.tflock file" width="800" height="279"&gt;&lt;/a&gt;&lt;br&gt;
If another user tries to run &lt;code&gt;terraform apply&lt;/code&gt; at the same time, Terraform will see that the &lt;code&gt;key&lt;/code&gt;.tflock file already exists, so the &lt;code&gt;apply&lt;/code&gt; will fail.&lt;br&gt;
After the &lt;code&gt;apply&lt;/code&gt; completes, the &lt;code&gt;key&lt;/code&gt;.tflock file is deleted.&lt;/p&gt;

&lt;p&gt;Currently, while this feature is still experimental, the &lt;code&gt;use_lockfile&lt;/code&gt; argument is optional and defaults to &lt;code&gt;false&lt;/code&gt;.&lt;br&gt;
To support migration from older versions of Terraform that only support DynamoDB-based locking, it can be configured alongside the &lt;code&gt;dynamodb_table&lt;/code&gt; argument. &lt;a href="https://developer.hashicorp.com/terraform/language/backend/s3#state-locking" rel="noopener noreferrer"&gt;In a future minor version the DynamoDB locking mechanism will be removed.&lt;/a&gt;&lt;/p&gt;
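&lt;p&gt;A transitional configuration with both locking mechanisms enabled might look like this (reusing the example names from above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "example-bucket"
    key            = "path/to/state"
    region         = "us-east-1"
    use_lockfile   = true
    dynamodb_table = "example-table" // kept only until everyone is on Terraform 1.10+
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;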

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>s3</category>
    </item>
    <item>
      <title>How to create AWS ASG with HTTPS ALB using Terraform modules</title>
      <dc:creator>globart</dc:creator>
      <pubDate>Wed, 03 Jan 2024 18:37:35 +0000</pubDate>
      <link>https://dev.to/globart/how-to-create-aws-asg-with-https-alb-using-terraform-1bio</link>
      <guid>https://dev.to/globart/how-to-create-aws-asg-with-https-alb-using-terraform-1bio</guid>
      <description>&lt;ul&gt;
&lt;li&gt;General settings&lt;/li&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;AMI&lt;/li&gt;
&lt;li&gt;Additional parameters&lt;/li&gt;
&lt;li&gt;ALB with HTTPS&lt;/li&gt;
&lt;li&gt;
ASG
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is basically a slightly modified version of a &lt;a href="https://antonputra.com/amazon/create-alb-terraform/#secure-alb-with-tls-certificate"&gt;tutorial by Anton Putra&lt;/a&gt;, which I've changed to use modules (with pinned versions for future compatibility) instead of plain resources, so it is easier to read. I've also added code for creating an appropriate IAM Role and Instance Profile and assigning them to the ASG instances, so you can manage them with SSM.&lt;/p&gt;

&lt;h2&gt;
  
  
  General settings
&lt;/h2&gt;

&lt;p&gt;First of all, you'll have to set the region, which you would like the resources to be deployed in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.31.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  VPC
&lt;/h2&gt;

&lt;p&gt;Then, you'll create the VPC and all of its resources (the security groups depend on each other, so they are created as resources first, and their rules are then added via modules):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  ...
  vpc_cidr      = "10.0.0.0/16"
  azs           = slice(data.aws_availability_zones.available.names, 0, 2)
  tags = {
    ManagedBy = "Terraform"
  }
 ...
}

data "aws_availability_zones" "available" {}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.4.0"

  name               = "alb-vpc"
  cidr               = local.vpc_cidr
  azs                = local.azs
  private_subnets    = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
  public_subnets     = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 4)]
  enable_nat_gateway = true
  single_nat_gateway = true
  tags               = local.tags
}

resource "aws_security_group" "ec2" {
  name   = "ec2-sg"
  vpc_id = module.vpc.vpc_id
  tags   = local.tags
}

resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = module.vpc.vpc_id
  tags   = local.tags
}

module "alb-sg-rules" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "5.1.0"

  create_sg           = false
  security_group_id   = aws_security_group.alb.id
  ingress_cidr_blocks = ["0.0.0.0/0"]
  ingress_rules       = ["http-80-tcp", "https-443-tcp"]
  egress_with_source_security_group_id = [
    {
      from_port                = 8080
      to_port                  = 8080
      protocol                 = "tcp"
      description              = "App port"
      source_security_group_id = aws_security_group.ec2.id
    },
    {
      from_port                = 8081
      to_port                  = 8081
      protocol                 = "tcp"
      description              = "Full healthcheck"
      source_security_group_id = aws_security_group.ec2.id
    }
  ]
}

module "ec2-sg-rules" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "5.1.0"

  create_sg         = false
  security_group_id = aws_security_group.ec2.id
  ingress_with_source_security_group_id = [
    {
      from_port                = 8080
      to_port                  = 8080
      protocol                 = "tcp"
      description              = "App port"
      source_security_group_id = aws_security_group.alb.id
    },
    {
      from_port                = 8081
      to_port                  = 8081
      protocol                 = "tcp"
      description              = "Full healthcheck"
      source_security_group_id = aws_security_group.alb.id
    }
  ]
  egress_cidr_blocks = ["0.0.0.0/0"]     // needed for the instance to be able to initiate a connection to SSM
  egress_rules       = ["https-443-tcp"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  AMI
&lt;/h2&gt;

&lt;p&gt;After this, you'll have to create a launch template for your ASG. I'll use Packer to build the AMI, as provided in Anton's post. You'll have to create the following files:&lt;br&gt;
&lt;code&gt;files/my-app.service&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=My App
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
ExecStart=/home/ubuntu/go/bin/my-app

User=ubuntu

Environment=GIN_MODE=release

Restart=always
RestartSec=1

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;scripts/bootstrap.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

set -e

sudo add-apt-repository ppa:longsleep/golang-backports
sudo apt-get update
sudo apt-get install -y golang-go
go install github.com/antonputra/tutorials/lessons/127/my-app@main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;my-app.pkr.hcl&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packer {
  required_plugins {
    amazon = {
      version = "v1.2.9"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "my-app" {
  ami_name      = "my-app-{{ timestamp }}"
  instance_type = "t3a.small" // not the ASG instance type - this is the temporary instance Packer uses to build the AMI
  region        = "eu-west-1" // change this to the region of your resources
  subnet_id     = "subnet-074a32b171778af28" // change this to any public subnet in the specified region, e.g. one from the default VPC

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }

  ssh_username = "ubuntu"

  tags = {
    Name = "My-App",
    ManagedBy = "Packer"
  }
}

build {
  sources = ["source.amazon-ebs.my-app"]

  provisioner "file" {
    destination = "/tmp"
    source      = "files"
  }

  provisioner "shell" {
    script = "scripts/bootstrap.sh"
  }

  provisioner "shell" {
    inline = [
      "sudo mv /tmp/files/my-app.service /etc/systemd/system/my-app.service",
      "sudo systemctl start my-app",
      "sudo systemctl enable my-app"
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, run these commands to initialize and build your AMI using Packer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;packer init my-app.pkr.hcl
packer build my-app.pkr.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Additional parameters
&lt;/h2&gt;

&lt;p&gt;After this, you'll have to create or import a keypair and decide on the instance type you'd like to use for your ASG. You'll also have to create a Route53 public hosted zone and update the nameservers for your domain with the ones provided in the NS record, which is automatically created in the Route53 zone. Then, you'll add these values to the locals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  ...
  domain_name   = "https-alb.pp.ua"
  keypair_name  = "devops"
  instance_type = "t3a.micro"
  ami_id        = "ami-06f69317847054bb5"
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
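&lt;p&gt;The ACM and Route53 code in the next section references &lt;code&gt;data.aws_route53_zone.public&lt;/code&gt;, which isn't declared elsewhere in the snippets. Assuming the hosted zone is named after your domain, a minimal declaration could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_route53_zone" "public" {
  name         = local.domain_name
  private_zone = false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;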



&lt;h2&gt;
  
  
  ALB with HTTPS
&lt;/h2&gt;

&lt;p&gt;You can now also create ALB with support for HTTPS and a target group, which will be attached to ASG:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "5.0.0"

  domain_name       = local.domain_name
  zone_id           = data.aws_route53_zone.public.zone_id
  validation_method = "DNS"
  tags              = local.tags
}

module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "9.4.0"

  name                       = "alb"
  vpc_id                     = module.vpc.vpc_id
  subnets                    = module.vpc.public_subnets
  security_groups            = [aws_security_group.alb.id]
  enable_deletion_protection = false
  target_groups = {
    asg = {
      name              = "asg-tg"
      port              = 8080
      protocol          = "HTTP"
      vpc_id            = module.vpc.vpc_id
      create_attachment = false

      health_check = {
        enabled             = true
        port                = 8081
        interval            = 30
        protocol            = "HTTP"
        path                = "/health"
        matcher             = "200"
        healthy_threshold   = 3
        unhealthy_threshold = 3
      }
    }
  }
  listeners = {
    http-https-redirect = {
      port     = 80
      protocol = "HTTP"
      redirect = {
        port        = "443"
        protocol    = "HTTPS"
        status_code = "HTTP_301"
      }
    },
    https = {
      port            = 443
      protocol        = "HTTPS"
      ssl_policy      = "ELBSecurityPolicy-2016-08" // set to allow most clients; should be changed to a newer policy
      certificate_arn = module.acm.acm_certificate_arn
      forward = {
        target_group_key = "asg"
      }
    }
  }
  tags = local.tags
}

resource "aws_route53_record" "alb" {
  name    = local.domain_name
  type    = "A"
  zone_id = data.aws_route53_zone.public.zone_id
  alias {
    name                   = module.alb.dns_name
    zone_id                = module.alb.zone_id
    evaluate_target_health = false
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ASG
&lt;/h2&gt;

&lt;p&gt;And, finally, you'll create the ASG itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_service_linked_role" "autoscaling" {
  aws_service_name = "autoscaling.amazonaws.com"
  description      = "A service linked role for autoscaling"
  custom_suffix    = "ssm"

  provisioner "local-exec" {
    command = "sleep 10"
  }
  tags = local.tags
}

module "asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "7.3.1"

  name                             = "asg"
  use_name_prefix                  = false
  vpc_zone_identifier              = module.vpc.private_subnets
  min_size                         = 1 // automatically set as desired
  max_size                         = 3
  launch_template_name             = "my-app"
  launch_template_use_name_prefix  = false
  update_default_version           = true
  image_id                         = local.ami_id
  instance_type                    = local.instance_type
  key_name                         = local.keypair_name
  security_groups                  = [aws_security_group.ec2.id]
  create_traffic_source_attachment = true
  traffic_source_identifier        = module.alb.target_groups["asg"].arn
  service_linked_role_arn          = aws_iam_service_linked_role.autoscaling.arn
  create_iam_instance_profile      = true
  iam_instance_profile_name        = "ssm-instance-profile"
  iam_role_name                    = "ssm-role"
  iam_role_path                    = "/ec2/"
  iam_role_description             = "SSM role example"
  iam_role_tags                    = local.tags
  iam_role_policies = {
    AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }
  block_device_mappings = [
    {
      # Root volume
      device_name = "/dev/xvda"
      no_device   = 0
      ebs = {
        delete_on_termination = true
        encrypted             = true
        volume_size           = 1
        volume_type           = "gp3"
      }
    }
  ]
  scaling_policies = {
    avg-cpu-policy-greater-than-80 = {
      policy_type               = "TargetTrackingScaling"
      estimated_instance_warmup = 300
      target_tracking_configuration = {
        predefined_metric_specification = {
          predefined_metric_type = "ASGAverageCPUUtilization"
        }
        target_value = 80.0
      }
    }
  }
  tags = local.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can optionally add an output to see the complete healthcheck URL after all of the resources are created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "custom_domain" {
  value = "https://${module.acm.distinct_domain_names[0]}/ping" // will output only first domain supplied
} 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>terraform</category>
      <category>https</category>
      <category>alb</category>
    </item>
    <item>
      <title>DynamoDB Tips and Tricks</title>
      <dc:creator>globart</dc:creator>
      <pubDate>Mon, 10 Jul 2023 10:15:44 +0000</pubDate>
      <link>https://dev.to/globart/dynamodb-tips-and-tricks-3b6c</link>
      <guid>https://dev.to/globart/dynamodb-tips-and-tricks-3b6c</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Terminology&lt;/li&gt;
&lt;li&gt;Limits&lt;/li&gt;
&lt;li&gt;Best practices&lt;/li&gt;
&lt;li&gt;Examples&lt;/li&gt;
&lt;li&gt;
Sources
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Terminology
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Table and its elements&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ku5uQjJK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5vi97hdtxdt67i6xirl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ku5uQjJK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o5vi97hdtxdt67i6xirl.png" alt="DynamoDB table" width="800" height="356"&gt;&lt;/a&gt;&lt;br&gt;
Each DynamoDB table contains items, which are rows of data. Each item is composed of two parts. The first is the primary key: it can contain only a partition key, or be composite, containing both a partition key and a sort key. The second is the attributes - columns that have a name and a value associated with them.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n9N9ySUs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivx52iq0n06ru78s0pzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n9N9ySUs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivx52iq0n06ru78s0pzs.png" alt="DynamoDB table" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;
There is also the concept of an item collection: one or more items that share the same partition key.&lt;br&gt;
&lt;strong&gt;Streams&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vIvnd_fp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6eubx8fe7lyaldvwywh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vIvnd_fp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6eubx8fe7lyaldvwywh9.png" alt="DynamoDB Streams" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;
You’ve probably already dealt with or at least heard about streams: a stream gets data from producers and passes it to consumers. In our case, the producer is DynamoDB and consumers are different AWS services. The prominent one is Lambda, with the help of which you can do some post-processing on your data or aggregate it.&lt;br&gt;
&lt;strong&gt;Secondary indexes&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kBS7bKmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vdn3gh774jgzsh65dwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kBS7bKmZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4vdn3gh774jgzsh65dwy.png" alt="Secondary Indexes" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
What makes DynamoDB so much more than just a simple Key-Value store is the secondary indexes. They allow you to quickly query and lookup items based on not only the primary index attributes, but also attributes of your choice. &lt;strong&gt;Secondary Indexes&lt;/strong&gt;, unlike primary keys, are not required, and they don't have to be unique. Generally speaking, they allow much more flexible query access patterns. In DynamoDB, there are two types of secondary indexes: &lt;strong&gt;global secondary indexes&lt;/strong&gt; and &lt;strong&gt;local secondary indexes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LSIs&lt;/strong&gt; must be created at the same time as the table and must use the same partition key as the base table, but they allow you to use a different sort key. They also limit you to 10 GB of data per partition key value, and they share their throughput with the base table - if you query data through an LSI, the usage is counted against the capacity of the underlying table and its base index.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GSIs&lt;/strong&gt; can be created at any time after table creation and may use any attributes from the table as partition and sort keys, so two items can share the same partition and sort key pair on a GSI. Unlike LSIs, they don't limit the amount of data per partition key value, and they don't share throughput with the base table: each GSI is billed independently and, as a consequence, throttled independently. Their only downside is that they offer only eventual consistency, while LSIs offer both eventual and strong consistency.&lt;/p&gt;
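&lt;p&gt;To make this concrete, here is a small, illustrative Terraform definition of a table with one GSI (all names here are hypothetical, not from a real project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_dynamodb_table" "orders" {
  name         = "orders"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "customer_id"
  range_key    = "order_date"

  attribute {
    name = "customer_id"
    type = "S"
  }
  attribute {
    name = "order_date"
    type = "S"
  }
  attribute {
    name = "status"
    type = "S"
  }

  // index on a non-key attribute, so you can query orders by status
  global_secondary_index {
    name            = "status-index"
    hash_key        = "status"
    range_key       = "order_date"
    projection_type = "ALL"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;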

&lt;h2&gt;
  
  
  Limits
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dynobase.dev/dynamodb-limits/"&gt;https://dynobase.dev/dynamodb-limits/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Item size&lt;/strong&gt;&lt;br&gt;
DynamoDB limits the size of each item (i.e. each row) to &lt;strong&gt;400 KB&lt;/strong&gt;. To avoid hitting it, store large objects (e.g. images, videos, blobs) in S3 and keep only a link to them in DynamoDB, or, if the item is too large by itself, break it up into multiple items.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Indexes&lt;/strong&gt;&lt;br&gt;
DynamoDB indexes allow you to create additional access patterns. &lt;strong&gt;GSIs&lt;/strong&gt; (Global Secondary Indexes), i.e. indexes that use a different attribute as the partition key, are limited to &lt;strong&gt;20 per table&lt;/strong&gt;; however, that limit can be increased by asking support. &lt;br&gt;
On the other hand, &lt;strong&gt;LSIs&lt;/strong&gt; (Local Secondary Indexes) are hard-capped at &lt;strong&gt;5 indexes&lt;/strong&gt; per table. Using LSIs also adds another, often overlooked limitation - it imposes a &lt;strong&gt;10 GB size limit per partition key value&lt;/strong&gt;. &lt;br&gt;
For that reason, you should probably always favor GSIs over LSIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scan and Query operations&lt;/strong&gt;&lt;br&gt;
These two operations share more than similar syntax - both can also return at most &lt;strong&gt;1 MB of data per request&lt;/strong&gt;. If the data you're looking for is not present in the first request's response, you'll have to &lt;em&gt;paginate&lt;/em&gt; through the results - call the operation again, but with ExclusiveStartKey set to the LastEvaluatedKey from the previous response.&lt;/p&gt;
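&lt;p&gt;A minimal sketch of that pagination loop, with a fake client standing in for boto3 (no AWS calls; the real &lt;code&gt;scan&lt;/code&gt;/&lt;code&gt;query&lt;/code&gt; responses have the same LastEvaluatedKey/ExclusiveStartKey shape):&lt;/p&gt;

```python
class FakeDynamoClient:
    """Serves items one small page at a time, mimicking Scan's 1 MB page limit."""

    def __init__(self, items, page_size=2):
        self.items = items
        self.page_size = page_size

    def scan(self, TableName, ExclusiveStartKey=0):
        end = ExclusiveStartKey + self.page_size
        resp = {"Items": self.items[ExclusiveStartKey:end]}
        if self.items[end:]:  # more pages remain
            resp["LastEvaluatedKey"] = end
        return resp

def scan_all(client, table):
    """Collect every item by looping until LastEvaluatedKey disappears."""
    items, kwargs = [], {}
    while True:
        resp = client.scan(TableName=table, **kwargs)
        items.extend(resp["Items"])
        if "LastEvaluatedKey" not in resp:
            return items
        kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]

print(scan_all(FakeDynamoClient(["a", "b", "c", "d", "e"]), "example-table"))
```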

&lt;p&gt;&lt;strong&gt;Transactions and Batch Operations&lt;/strong&gt;&lt;br&gt;
Transactional and Batch APIs allow you to read or write multiple DynamoDB items across multiple tables at once. &lt;br&gt;
For transactions:&lt;br&gt;
&lt;strong&gt;TransactWriteItems&lt;/strong&gt; is limited to &lt;strong&gt;25 items per request&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;TransactReadItems&lt;/strong&gt; is limited to &lt;strong&gt;25 items per request&lt;/strong&gt;&lt;br&gt;
For batch operations:&lt;br&gt;
&lt;strong&gt;BatchWriteItem&lt;/strong&gt; is limited to &lt;strong&gt;25 items per request&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;BatchGetItem&lt;/strong&gt; is limited to &lt;strong&gt;100 items per request&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partition Throughput&lt;/strong&gt;&lt;br&gt;
DynamoDB tables are internally divided into partitions. &lt;strong&gt;Each partition&lt;/strong&gt; has its own throughput limit: &lt;strong&gt;3,000 RCUs&lt;/strong&gt; (Read Capacity Units) and &lt;strong&gt;1,000 WCUs&lt;/strong&gt; (Write Capacity Units) &lt;strong&gt;per partition&lt;/strong&gt;. &lt;br&gt;
Unfortunately, it is not always possible to distribute load evenly across partitions. In such cases, "hot" partitions (the ones that receive most of the requests) will use adaptive capacity for a limited period of time to continue operating without disruptions or throttling. This mechanism works automatically and is completely transparent to the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Others&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Throughput Default Quotas per table&lt;/strong&gt; - &lt;strong&gt;40,000 RCUs&lt;/strong&gt; and &lt;strong&gt;40,000 WCUs&lt;/strong&gt; &lt;br&gt;
&lt;strong&gt;Partition Key Length&lt;/strong&gt; - from &lt;strong&gt;1 byte to 2KB&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Sort Key Length&lt;/strong&gt; - from &lt;strong&gt;1 byte to 1KB&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Table Name Length&lt;/strong&gt; - from &lt;strong&gt;3 characters to 255&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Item's Attribute Names&lt;/strong&gt; - from &lt;strong&gt;1 character to 64KB long&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Item's Attribute Depth&lt;/strong&gt; - up to &lt;strong&gt;32 levels deep&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;ConditionExpression&lt;/strong&gt;, &lt;strong&gt;ProjectionExpression&lt;/strong&gt;, &lt;strong&gt;UpdateExpression&lt;/strong&gt; &amp;amp; &lt;strong&gt;FilterExpression&lt;/strong&gt; &lt;strong&gt;length&lt;/strong&gt; - up to &lt;strong&gt;4KB&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;DescribeLimits&lt;/strong&gt; API operation should be called no more than &lt;strong&gt;once a minute&lt;/strong&gt;.&lt;br&gt;
There's also a bunch of reserved &lt;em&gt;keywords&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best practices
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dynobase.dev/dynamodb-best-practices/"&gt;https://dynobase.dev/dynamodb-best-practices/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Queries&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you're new to DynamoDB, you can use PartiQL, an SQL-like query language, to work with DynamoDB instead of its native API&lt;/li&gt;
&lt;li&gt;Use BatchGetItem for querying multiple tables - you can get up to 100 items identified by primary key from multiple DynamoDB tables at once&lt;/li&gt;
&lt;li&gt;Use BatchWriteItem for batch writes - you can write up to 16 MB of data or perform up to 25 writes across multiple tables with a single API call. This reduces the overhead of establishing HTTP connections.&lt;/li&gt;
&lt;li&gt;If you need consistent reads/writes on multiple records - use TransactReadItems or TransactWriteItems&lt;/li&gt;
&lt;li&gt;Use Parallel Scan to scan through big datasets&lt;/li&gt;
&lt;li&gt;Use AttributesToGet to make API responses faster - this will return less data from DynamoDB table and potentially reduce overhead on data transport &lt;/li&gt;
&lt;li&gt;Use FilterExpressions to refine and narrow your Query and Scan results on non-indexed fields.&lt;/li&gt;
&lt;li&gt;Because DynamoDB has over 500 reserved keywords, always use ExpressionAttributeNames to prevent ValidationExceptions&lt;/li&gt;
&lt;li&gt;If you need to insert data conditionally, use ConditionExpressions instead of getting an item, checking its properties, and then calling a Put operation - the latter takes two calls, is more complicated, and is not atomic.&lt;/li&gt;
&lt;li&gt;Use DynamoDB Streams for data post-processing with Lambda. Instead of running expensive queries periodically for e.g. analytics purposes, connect a DynamoDB Stream to a Lambda function, which will update the result of an aggregation just-in-time whenever the data changes.&lt;/li&gt;
&lt;li&gt;To avoid hot partitions and spread the load more evenly across them, make sure your partition keys have high cardinality. You can achieve that by adding a random number to the end of the partition key values.&lt;/li&gt;
&lt;li&gt;If you need to perform whole-table operations like SELECT COUNT WHERE, export your table to S3 first and then use Athena or any other suitable tool to do so.&lt;/li&gt;
&lt;/ul&gt;
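&lt;p&gt;The "random number suffix" trick for spreading a hot partition key can be sketched like this (shard count and key names are made up for the example; reads then have to fan out over every shard):&lt;/p&gt;

```python
import random

SHARD_COUNT = 10  # pick based on how hot the key is expected to be

def sharded_pk(base_key):
    """Append a random shard suffix so writes spread across partitions."""
    return "{}#{}".format(base_key, random.randrange(SHARD_COUNT))

def all_shard_keys(base_key):
    """Reads must fan out: query every shard of the key and merge the results."""
    return ["{}#{}".format(base_key, n) for n in range(SHARD_COUNT)]

print(all_shard_keys("2023-07-10")[:3])
```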

&lt;p&gt;&lt;strong&gt;Monitoring and Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use IAM policies for security and for enforcing best practices - for example, you can restrict people from running expensive Scan operations.&lt;/li&gt;
&lt;li&gt;Use Contributor Insights to identify most accessed items and most throttled keys which might cause you performance problems.&lt;/li&gt;
&lt;li&gt;Use On-Demand capacity mode to identify your traffic patterns. Once discovered, switch to provisioned mode with auto scaling enabled to save money.&lt;/li&gt;
&lt;li&gt;Remember to enable PITR (point-in-time recovery), so there’s an option to roll back your table in case of an error&lt;/li&gt;
&lt;li&gt;Add createdAt and updatedAt attributes to each item. Moreover, instead of removing records from the table, simply add a deletedAt attribute. That not only makes your delete operations reversible but also enables some auditing.&lt;/li&gt;
&lt;/ul&gt;
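&lt;p&gt;A minimal sketch of the timestamp/soft-delete convention above, using plain dicts to stand in for DynamoDB items (the helper names are made up):&lt;/p&gt;

```python
# Sketch of the createdAt/updatedAt/deletedAt convention. Items are plain
# dicts here; in practice these would be attributes on a DynamoDB item.
from datetime import datetime, timezone

def now_iso():
    return datetime.now(timezone.utc).isoformat()

def new_item(data):
    """Stamp a freshly created item with createdAt/updatedAt."""
    ts = now_iso()
    return {**data, "createdAt": ts, "updatedAt": ts}

def soft_delete(item):
    """Mark an item deleted instead of removing it, keeping it auditable."""
    return {**item, "deletedAt": now_iso(), "updatedAt": now_iso()}

item = new_item({"pk": "USER#alice"})
deleted = soft_delete(item)
```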

&lt;p&gt;&lt;strong&gt;Storage and Data Modelling&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store large objects (e.g. images, videos, blobs) elsewhere, such as S3, and keep only links to them in DynamoDB&lt;/li&gt;
&lt;li&gt;You can also split large items into multiple rows sharing the same partition key. A useful mental model is to think of the partition key as a directory/folder and the sort key as a file name. Once you're in the correct folder, getting data from any file within that folder is pretty straightforward.&lt;/li&gt;
&lt;li&gt;Use common compression algorithms like GZIP before saving large items to DynamoDB.&lt;/li&gt;
&lt;li&gt;Add an attribute in epoch date format (in seconds) to enable the DynamoDB TTL feature, which automatically removes items after they expire; you can also filter the data on that attribute&lt;/li&gt;
&lt;li&gt;If latency to the end-user is crucial for your application, use DynamoDB Global Tables, which automatically replicate data across multiple regions. This way, your data is closer to the end-user. For the compute part, use Lambda@Edge functions.&lt;/li&gt;
&lt;li&gt;Instead of using Scans to fetch data that isn't easy to get using Queries, use GSIs to index the data on required fields. This allows you to fetch data using fast Queries based on the attribute values of the item.&lt;/li&gt;
&lt;li&gt;Use generic GSI and LSI names so you can reuse the indexes as your access patterns change, avoiding migrations.&lt;/li&gt;
&lt;li&gt;Leverage sort keys flexibility - you can define hierarchical relationships in your data. This way, you can query it at any level of the hierarchy and achieve multiple access patterns with just one field.&lt;/li&gt;
&lt;/ul&gt;
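&lt;p&gt;The compression and TTL tips above can be sketched together like this (attribute names are illustrative; in boto3 the compressed bytes would be stored as a Binary attribute):&lt;/p&gt;

```python
# Sketch combining two tips: gzip-compress a large attribute before storing
# it, and add an epoch-seconds "ttl" attribute for the DynamoDB TTL feature.
import gzip
import time

def prepare_item(pk, payload, ttl_days=30):
    return {
        "pk": pk,
        # DynamoDB TTL expects a Number attribute holding epoch seconds.
        "ttl": int(time.time()) + ttl_days * 24 * 3600,
        # Store the compressed payload as binary; decompress after reading.
        "payload": gzip.compress(payload.encode("utf-8")),
    }

item = prepare_item("DOC#1", "some large text " * 1000)
original = gzip.decompress(item["payload"]).decode("utf-8")
```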

&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Composite primary key&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pEVpYlMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/55h70zw4offh9sj3c4g0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pEVpYlMO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/55h70zw4offh9sj3c4g0.png" alt="Composite primary key" width="729" height="430"&gt;&lt;/a&gt;&lt;br&gt;
This primary key design makes it easy to solve four access patterns:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Retrieve an Organization. Use the GetItem API call and the Organization’s name to make a request for the item with a PK of ORG# and an SK of METADATA#.&lt;/li&gt;
&lt;li&gt;Retrieve an Organization and all Users within the Organization. Use the Query API action with a key condition expression of PK = ORG#. This would retrieve the Organization and all Users within it, as they all have the same partition key.&lt;/li&gt;
&lt;li&gt;Retrieve only the Users within an Organization. Use the Query API action with a key condition expression of PK = ORG# AND begins_with(SK, "USER#"). The use of the begins_with() function allows us to retrieve only the Users without fetching the Organization object as well.&lt;/li&gt;
&lt;li&gt;Retrieve a specific User. If you know both the Organization name and the User’s username, you can use the GetItem API call with a PK of ORG# and an SK of USER# to fetch the User item.&lt;/li&gt;
&lt;/ol&gt;
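&lt;p&gt;The four access patterns above can be sketched as key-building helpers. The exact key layout (e.g. PK = "ORG#&amp;lt;name&amp;gt;") is an assumption based on the pattern shown, and the begins_with() key condition is simulated locally:&lt;/p&gt;

```python
# Sketch of the four access patterns. The key formats below
# (PK = "ORG#{org}", SK = "METADATA#{org}" / "USER#{username}")
# are assumptions based on the composite-key pattern shown above.

def org_keys(org):
    """Pattern 1/4: keys for a GetItem on the Organization item."""
    return {"PK": f"ORG#{org}", "SK": f"METADATA#{org}"}

def user_keys(org, username):
    """Pattern 4: keys for a GetItem on a specific User item."""
    return {"PK": f"ORG#{org}", "SK": f"USER#{username}"}

def only_users(items):
    """Pattern 3: the begins_with(SK, "USER#") condition, simulated locally."""
    return [it for it in items if it["SK"].startswith("USER#")]

# Pattern 2 would be a Query on PK = "ORG#acme", returning all of these:
table = [org_keys("acme"), user_keys("acme", "alice"), user_keys("acme", "bob")]
users = only_users(table)
```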

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;p&gt;“The DynamoDB Book” by Alex DeBrie: &lt;a href="https://www.dynamodbbook.com/"&gt;https://www.dynamodbbook.com/&lt;/a&gt;&lt;br&gt;
Dynobase - Professional GUI Client for DynamoDB: &lt;a href="https://dynobase.dev/"&gt;https://dynobase.dev/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dynamodb</category>
      <category>devops</category>
      <category>database</category>
    </item>
    <item>
      <title>How to debug running CodeBuild builds in AWS Session Manager</title>
      <dc:creator>globart</dc:creator>
      <pubDate>Mon, 10 Jul 2023 09:44:56 +0000</pubDate>
      <link>https://dev.to/globart/how-to-debug-running-build-in-aws-session-manager-27k5</link>
      <guid>https://dev.to/globart/how-to-debug-running-build-in-aws-session-manager-27k5</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Pause the build&lt;/li&gt;
&lt;li&gt;Start the build&lt;/li&gt;
&lt;li&gt;Connect to the build container&lt;/li&gt;
&lt;li&gt;Resume the build&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is basically the &lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/session-manager.html"&gt;guide from AWS&lt;/a&gt; with added screenshots of the process.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;&lt;br&gt;
This feature is not available in Windows environments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To allow Session Manager to be used with the build session, you must enable session connection for the build. There are two prerequisites: the build image must run the SSM agent with ContainerMode enabled, and the CodeBuild service role must have the required SSM permissions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CodeBuild Linux standard curated images already have the SSM agent installed and the SSM agent ContainerMode enabled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are using a custom image for your build, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install SSM Agent. For more information, see &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html"&gt;this guide&lt;/a&gt;. SSM Agent version must be 3.0.1295.0 or later.&lt;/li&gt;
&lt;li&gt;Copy &lt;a href="https://github.com/aws/aws-codebuild-docker-images/blob/master/ubuntu/standard/6.0/amazon-ssm-agent.json"&gt;this file&lt;/a&gt; to the /etc/amazon/ssm/ directory in your image. This enables Container Mode in the SSM agent.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;&lt;br&gt;
Custom images require the most up-to-date SSM agent for this feature to work as expected.&lt;/p&gt;
&lt;/blockquote&gt;
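&lt;p&gt;For reference, the two steps above might look like this in a Dockerfile, assuming a Debian/Ubuntu-based custom image (the download URL is the documented Debian package location for the SSM agent):&lt;/p&gt;

```dockerfile
# Sketch only: assumes a Debian/Ubuntu-based custom build image.
FROM ubuntu:22.04

# 1. Install SSM Agent (version 3.0.1295.0 or later).
RUN apt-get update && apt-get install -y curl \
 && curl -o /tmp/amazon-ssm-agent.deb \
    https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb \
 && dpkg -i /tmp/amazon-ssm-agent.deb

# 2. Enable Container Mode by copying the amazon-ssm-agent.json config
#    from the aws-codebuild-docker-images repository into /etc/amazon/ssm/.
COPY amazon-ssm-agent.json /etc/amazon/ssm/amazon-ssm-agent.json
```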

&lt;ul&gt;
&lt;li&gt;The CodeBuild service role must have the following SSM policy:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Effect": "Allow",
  "Action": [
    "ssmmessages:CreateControlChannel",
    "ssmmessages:CreateDataChannel",
    "ssmmessages:OpenControlChannel",
    "ssmmessages:OpenDataChannel"
  ],
  "Resource": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can have the CodeBuild console automatically attach this policy to your service role when you start the build. Alternatively, you can attach this policy to your service role manually.&lt;/p&gt;
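&lt;p&gt;If you attach the policy manually, one way is via the AWS CLI. This is a sketch: the role and policy names below are placeholders, and the command itself requires configured credentials:&lt;/p&gt;

```shell
# Sketch: attach the SSM policy above to the CodeBuild service role manually.
# Role and policy names below are placeholders, not values from the article.
cat > /tmp/codebuild-ssm-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Requires AWS credentials; run where the CLI is configured:
# aws iam put-role-policy \
#   --role-name my-codebuild-service-role \
#   --policy-name codebuild-ssm-session \
#   --policy-document file:///tmp/codebuild-ssm-policy.json
```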

&lt;ul&gt;
&lt;li&gt;If you have Auditing and logging session activity enabled in Systems Manager preferences, the CodeBuild service role must also have additional permissions. The permissions are different, depending on where the logs are stored.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;CloudWatch Logs&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If using CloudWatch Logs to store your logs, add the following permission to the CodeBuild service role:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:DescribeLogGroups",
      "Resource": "arn:aws:logs:&amp;lt;region-id&amp;gt;:&amp;lt;account-id&amp;gt;:log-group:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:&amp;lt;region-id&amp;gt;:&amp;lt;account-id&amp;gt;:log-group:&amp;lt;log-group-name&amp;gt;:*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Amazon S3&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If using Amazon S3 to store your logs, add the following permission to the CodeBuild service role:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetEncryptionConfiguration",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::&amp;lt;bucket-name&amp;gt;",
        "arn:aws:s3:::&amp;lt;bucket-name&amp;gt;/*"
      ]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more information, see Auditing and logging session activity in the AWS Systems Manager User Guide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pause the build
&lt;/h2&gt;

&lt;p&gt;To pause the build, insert the &lt;code&gt;codebuild-breakpoint&lt;/code&gt; command in any of the build phases in your &lt;code&gt;buildspec&lt;/code&gt; file. The build will be paused at this point, which allows you to connect to the build container and view the container in its current state.&lt;br&gt;
For example, add the following to the build phases in your &lt;code&gt;buildspec&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;phases:
  pre_build:
    commands:
      - echo Entered the pre_build phase...
      - echo "Hello World" &amp;gt; /tmp/hello-world
      - codebuild-breakpoint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Start the build
&lt;/h2&gt;

&lt;p&gt;Go to your project’s pipeline and click the “AWS CodeBuild” link in the "Build" stage. This will take you to the CodeBuild project corresponding to your pipeline:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uPQ3B9bU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q7frh8y8kcrxh2sbmbsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uPQ3B9bU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q7frh8y8kcrxh2sbmbsn.png" alt="AWS CodeBuild" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;
Click “Start build with overrides”:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VnpBLCBd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hduu6v9t91c2xqjnxjmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VnpBLCBd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hduu6v9t91c2xqjnxjmk.png" alt="AWS CodeBuild" width="800" height="345"&gt;&lt;/a&gt;&lt;br&gt;
Click “Advanced build overrides”:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W3p5Wo6v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6h2gokbylsinwkgqqypm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W3p5Wo6v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6h2gokbylsinwkgqqypm.png" alt="AWS CodeBuild" width="800" height="551"&gt;&lt;/a&gt;&lt;br&gt;
By default, “AWS CodePipeline” will be chosen as the Source provider, so we’ll have to change it:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E9SnwK-m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0kxj93kkn8n1w5e3c3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E9SnwK-m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0kxj93kkn8n1w5e3c3p.png" alt="AWS CodeBuild" width="800" height="284"&gt;&lt;/a&gt;&lt;br&gt;
It should look like this, where “Source version” is the name of your branch:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RKnMIvIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cl6d5cvvs1n58uheqgms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RKnMIvIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cl6d5cvvs1n58uheqgms.png" alt="AWS CodeBuild" width="800" height="587"&gt;&lt;/a&gt;&lt;br&gt;
Also, check “Enable session connection” in the "Environment" section:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6PutvSQG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glqhiw3cpjksobl9ougy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6PutvSQG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glqhiw3cpjksobl9ougy.png" alt="AWS CodeBuild" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect to the build container
&lt;/h2&gt;

&lt;p&gt;After all of this, you can scroll to the bottom and click "Start Build". After some time, a link to connect to the build container will appear. Click it, and a terminal session will open that allows you to browse and control the build container:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jjwl8n8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0f1ovgayf4yjktzlrdx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jjwl8n8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n0f1ovgayf4yjktzlrdx.png" alt="AWS CodeBuild" width="800" height="355"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5BoSW2mU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2y0r6yb35y0m7kzu4dtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5BoSW2mU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2y0r6yb35y0m7kzu4dtv.png" alt="AWS CodeBuild" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resume the build
&lt;/h2&gt;

&lt;p&gt;After you finish examining the build container, issue the &lt;code&gt;codebuild-resume&lt;/code&gt; command from the container shell:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s4Oz6-BI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1unww0174k14vd7jas5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s4Oz6-BI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1unww0174k14vd7jas5k.png" alt="AWS CodeBuild" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ssm</category>
      <category>devops</category>
      <category>codebuild</category>
    </item>
  </channel>
</rss>
