
Image baking in AWS using Packer and Image Builder

Hey, I'm Santhosh Nimmala. I work at Luxoft as a Principal Consultant, leading Cloud and DevOps in the TRM space. In the coming articles I will be covering DevOps and developer tooling on AWS, including real-world DevOps projects with code and common DevOps patterns. In this blog we are going to learn about image baking with EC2 Image Builder, which is the AWS-native tool, and Packer, which is a tool from HashiCorp. In the next blog I will also share the source code to implement both Image Builder and Packer.


Image building is the process of creating a pre-configured, standardized image that can be used as a base for launching new instances in the cloud. In AWS, images are created using Amazon Machine Images (AMI) and can be customized to include operating systems, applications, configurations, and other components.

Image building is an important part of cloud computing as it provides a consistent, repeatable process for deploying new instances. By creating a standardized image, you can ensure that all instances launched from that image have the same configuration and software stack, reducing the risk of configuration errors and improving the reliability of your infrastructure.

In addition to consistency, image building can also help improve security and reduce costs. By pre-configuring your images with security best practices and only including the necessary software components, you can reduce the risk of security vulnerabilities and reduce the amount of time and resources required for maintenance and updates.


Here are some best practices to follow when building images in AWS:

Use automation: Automating the image building process can help improve efficiency, reduce errors, and provide a repeatable, auditable process for managing images. AWS provides several services for automating image building, such as EC2 Image Builder and CodePipeline (a scheduled-pipeline sketch follows this list).

Create golden images: Golden images are standardized, pre-configured images that can be used as a base for launching new instances. By creating a golden image, you can ensure consistency across your environment and simplify the process of deploying new instances.

Build immutable infrastructure: Immutable infrastructure involves creating images that are designed to be immutable - that is, they cannot be changed once they are deployed. This can help improve reliability and security by reducing the risk of configuration drift and unauthorized changes.

Version your images: Creating multiple versions of an image can help simplify management and provide greater flexibility in deploying and scaling applications. Versioning can also help ensure that older versions of an image are still available if needed.

Reduce image size: Large images can increase launch times and storage costs. By reducing the size of your images and only including the necessary software components, you can improve performance and reduce costs.

Use security best practices: When building images, it's important to follow security best practices, such as keeping software up to date, limiting access to the image, and using encryption for sensitive data.

By following these best practices and leveraging AWS services, you can create a scalable, reliable, and secure image-building process that meets the needs of your organization.
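
As a small illustration of the automation point above, EC2 Image Builder pipelines can be triggered on a schedule. The Terraform sketch below is only an example: the pipeline name and cron expression are placeholders, and the recipe and infrastructure configuration it references are defined later in this post.

resource "aws_imagebuilder_image_pipeline" "nightly" {
  name                             = "nightly-example"
  image_recipe_arn                 = aws_imagebuilder_image_recipe.example.arn
  infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.example.arn

  # Rebuild the image every night at 00:00 UTC
  schedule {
    schedule_expression = "cron(0 0 * * ? *)"
  }
}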


Here is a step-by-step guide for using the EC2 Image Builder service in AWS to bake an image:

Create a recipe: The first step in using EC2 Image Builder is to create a recipe that defines the components of the image. A recipe can include various components, such as the operating system, applications, and configurations.

Create an image pipeline: An image pipeline is a set of instructions that tell Image Builder how to build and test the image. The pipeline includes stages such as building the image, testing it, and validating it.

Run the pipeline: Once the pipeline is created, you can run it to build the image. Image Builder will automatically build the image according to the recipe and pipeline.

Validate the image: After the image is built, it's important to validate it to ensure that it meets the necessary requirements. Image Builder provides validation tools to help ensure that the image is compliant with best practices and security standards.

Distribute the image: Finally, once the image is validated, it can be distributed to other accounts or regions using the Image Builder console or APIs.

Here's a more detailed breakdown of each step:

Create a recipe: To create a recipe, you can use the EC2 Image Builder console or CLI. A recipe is essentially a script that defines the components of the image, such as the operating system, applications, and configurations. You can specify the source for each component, such as a Dockerfile or shell script.

Create an image pipeline: An image pipeline is a set of instructions that tell Image Builder how to build and test the image. The pipeline includes stages such as building the image, testing it, and validating it. You can create a pipeline using the Image Builder console or CLI. Each stage in the pipeline is defined by a component, such as a build recipe or test command.

Run the pipeline: Once the pipeline is created, you can run it to build the image. Image Builder will automatically build the image according to the recipe and pipeline. You can monitor the progress of the build in the Image Builder console.

Validate the image: After the image is built, it's important to validate it to ensure that it meets the necessary requirements. Image Builder provides validation tools to help ensure that the image is compliant with best practices and security standards. You can use the validation tool in the Image Builder console or CLI to test the image against various standards.

Distribute the image: Finally, once the image is validated, it can be distributed to other accounts or regions using the Image Builder console or APIs. You can specify the target accounts and regions, and Image Builder will automatically copy the image to those locations.
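
The same workflow can also be driven from the AWS CLI. The commands below are a sketch that assumes the infrastructure and distribution configurations already exist; every ARN, account ID, name, and version shown is a placeholder.

# 1. Create a recipe from an existing component and parent AMI (placeholder values)
aws imagebuilder create-image-recipe \
  --name example-recipe \
  --semantic-version 1.0.0 \
  --parent-image ami-0515b741b33078e02 \
  --components componentArn=arn:aws:imagebuilder:us-east-1:123456789012:component/example/1.0.0

# 2. Create a pipeline that ties the recipe to an infrastructure configuration
aws imagebuilder create-image-pipeline \
  --name example-pipeline \
  --image-recipe-arn arn:aws:imagebuilder:us-east-1:123456789012:image-recipe/example-recipe/1.0.0 \
  --infrastructure-configuration-arn arn:aws:imagebuilder:us-east-1:123456789012:infrastructure-configuration/example

# 3. Run the pipeline, then check the status of the resulting build
aws imagebuilder start-image-pipeline-execution \
  --image-pipeline-arn arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/example-pipeline
aws imagebuilder get-image \
  --image-build-version-arn arn:aws:imagebuilder:us-east-1:123456789012:image/example-recipe/1.0.0/1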


Let's create an AMI using Image Builder with a pipeline, using Terraform. A couple of prerequisites: make sure the SSM Agent is installed on the base AMI, and replace the VPC ID and tags in the code with your own.

1) Create main.tf. This file defines the core Image Builder resources: an image, an infrastructure configuration, an image recipe, a build component, a distribution configuration, and an image pipeline.

The "aws_imagebuilder_infrastructure_configuration" resource describes the build environment: the instance profile, instance type, security group, subnet, S3 logging location, and whether to terminate the build instance on failure.

The "aws_imagebuilder_image_recipe" resource pulls together the build component, the parent AMI, and a block device mapping for a 100 GB gp2 volume.

The "aws_imagebuilder_component" resource contains a simple build phase that runs a bash command, and the "aws_imagebuilder_distribution_configuration" resource names and tags the resulting AMI in us-east-1.

Finally, the "aws_imagebuilder_image" resource builds an image from the recipe, infrastructure, and distribution configurations, and the "aws_imagebuilder_image_pipeline" resource wires the recipe and infrastructure configuration together so builds can be repeated.

resource "aws_imagebuilder_image" "example" {
  distribution_configuration_arn   = aws_imagebuilder_distribution_configuration.example.arn
  image_recipe_arn                 = aws_imagebuilder_image_recipe.example.arn
  infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.example.arn
}

resource "aws_imagebuilder_infrastructure_configuration" "example" {
  description                   = "example description"
  instance_profile_name         = aws_iam_instance_profile.imagebuilder_instance_profile.name
  instance_types                = ["t2.micro"]
  name                          = "example"
  security_group_ids            = [data.aws_security_group.test.id]
  subnet_id                     = data.aws_subnets.private.ids[0]
  terminate_instance_on_failure = true

  logging {
    s3_logs {
      s3_bucket_name = "aws-codepipeline-bitbucket-integration990"
      s3_key_prefix  = "logs"
    }
  }

  tags = {
    Name = "Example-Image"
  }
}

resource "aws_imagebuilder_image_recipe" "example" {
  block_device_mapping {
    device_name = "/dev/xvdb"

    ebs {
      delete_on_termination = true
      volume_size           = 100
      volume_type           = "gp2"
    }
  }

  component {
    component_arn = aws_imagebuilder_component.example.arn
  }

  name         = "example"
  parent_image = "ami-0515b741b33078e02"
  version      = "1.0.0"
}


resource "aws_imagebuilder_component" "example" {
  data = yamlencode({
    phases = [{
      name = "build"
      steps = [{
        action = "ExecuteBash"
        inputs = {
          commands = ["echo 'hello world'"]
        }
        name      = "example"
        onFailure = "Continue"
      }]
    }]
    schemaVersion = 1.0
  })
  name     = "example"
  platform = "Linux"
  version  = "1.0.0"

}

resource "aws_imagebuilder_distribution_configuration" "example" {
  name = "example"

  distribution {
    ami_distribution_configuration {
      ami_tags = {
        CostCenter = "IT"
      }

      name = "example-{{ imagebuilder:buildDate }}"

    }



    region = "us-east-1"
  }
}

resource "aws_imagebuilder_image_pipeline" "example" {
  image_recipe_arn                 = aws_imagebuilder_image_recipe.example.arn
  infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.example.arn
  name                             = "example"

}
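
The Terraform above assumes a configured AWS provider and credentials. A minimal provider block might look like the sketch below; the region and version constraint are assumptions, so adjust them to your environment.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}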


2) Create data.tf. This file defines three Terraform data sources that retrieve information from AWS.

The first data source, "aws_subnets" (labelled "private"), retrieves the subnets within a specific VPC. It filters on the VPC ID and on a Name tag of "Public".

The second data source is named "aws_security_group" and retrieves information about a specific security group within a VPC. It filters for security groups associated with a specific VPC ID and a tag with a name of "Public".

The third data source is named "aws_ami" and retrieves information about the latest Amazon Machine Image (AMI) that meets certain criteria. In this case, the filter criteria specify that the AMI should be owned by Red Hat's AWS account ID, have a name that starts with "RHEL-8.5", have an architecture of "x86_64", use Elastic Block Store (EBS) as its root device type, and use hardware virtualization (HVM). This data source could be used to obtain the ID of the latest RHEL 8.5 AMI that matches these criteria, which could then be used in a subsequent resource definition to launch EC2 instances using that AMI.

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values =    ["vpc-01a4ec7b",]
  }

  tags = {
    Name = "Public"
  }
}

data "aws_security_group" "test" {

  filter {
    name   = "vpc-id"
    values = ["vpc-01a4ec7b",]
  }
  tags = {
    Name = "Public"
  }
}

data "aws_ami" "rhel_8_5" {
  most_recent = true
  owners = ["309956199498"] // Red Hat's Account ID
  filter {
    name   = "name"
    values = ["RHEL-8.5*"]
  }
  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}
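
Note that the RHEL data source is not referenced in main.tf as written, because parent_image is hardcoded to an AMI ID. If you want the recipe to always track the latest matching RHEL 8.5 AMI, one option is to point parent_image at the data source instead, as in this fragment:

resource "aws_imagebuilder_image_recipe" "example" {
  # ... block_device_mapping and component as shown in main.tf ...
  name         = "example"
  parent_image = data.aws_ami.rhel_8_5.id   # instead of the hardcoded AMI ID
  version      = "1.0.0"
}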


3) Create security.tf. This Terraform code creates several AWS Identity and Access Management (IAM) resources that are required for using the AWS Image Builder service. The resources created are:

aws_iam_role: This resource creates an IAM role that allows EC2 instances to assume this role and interact with Image Builder service. The assume_role_policy specifies the permissions for EC2 instances and Image Builder service to assume this role.

aws_iam_role_policy_attachment: This resource attaches an IAM policy to the IAM role created in the previous step. Two policies are attached, AmazonSSMManagedInstanceCore and CloudWatchLogsFullAccess.

aws_iam_role_policy: This resource creates a custom IAM policy that grants permissions for EC2 instances to interact with Image Builder service and S3.

aws_iam_instance_profile: This resource creates an instance profile that is associated with the IAM role created in the first step. This instance profile can be attached to an EC2 instance at launch time to provide the necessary permissions for that instance to interact with Image Builder service.

resource "aws_iam_role" "imagebuilder_role" {
  name = "imagebuilder_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "imagebuilder.amazonaws.com"
        }
      }
    ]
  })

  tags = {
    Name = "imagebuilder_role"
  }
}

resource "aws_iam_role_policy_attachment" "imagebuilder_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.imagebuilder_role.name
}

resource "aws_iam_role_policy_attachment" "cloudwatch_logs_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
  role       = aws_iam_role.imagebuilder_role.name
}

resource "aws_iam_role_policy" "imagebuilder_policy" {
  name = "imagebuilder_policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ec2:CreateTags",
          "ec2:ModifyInstanceAttribute",
          "ec2:DescribeInstances",
          "ec2:RunInstances",
          "ec2:TerminateInstances"
        ]
        Resource = "*"
      },
      {
        Effect = "Allow"
        Action = [
          "imagebuilder:GetComponent",
          "imagebuilder:UpdateComponent",
          "imagebuilder:ListComponentBuildVersions",
          "imagebuilder:CreateImage",
          "imagebuilder:GetImage",
          "imagebuilder:GetImagePipeline",
          "imagebuilder:ListImages",
          "imagebuilder:ListImageBuildVersions",
          "imagebuilder:ListImagePipelineImages",
          "imagebuilder:ListImagePipelines",
          "s3:*"
        ]
        Resource = "*"
      }
    ]
  })

  role = aws_iam_role.imagebuilder_role.name
}

resource "aws_iam_instance_profile" "imagebuilder_instance_profile" {
  name = "imagebuilder_instance_profile"
  role = aws_iam_role.imagebuilder_role.name
}


Once you have created these files, run the standard Terraform commands: terraform init, terraform plan, and terraform apply.
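
For example, from the directory containing the three files:

terraform init
terraform plan -out=tfplan
terraform apply tfplan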


Once the apply completes, go to the AWS console and search for the EC2 Image Builder service; you should see the pipeline, recipe, and infrastructure configuration that Terraform created.
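
You can also verify from the CLI (the output will vary by account and region):

aws imagebuilder list-image-pipelines
aws imagebuilder list-images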




In conclusion, building custom images for your infrastructure is a key aspect of modern cloud computing. With tools like Packer and AWS Image Builder, it's become easier than ever to create custom images that meet the specific needs of your applications and services. Custom images can be used to standardize configurations, reduce deployment time, and improve overall security and reliability. It's important to follow best practices when building images, such as regularly updating packages and avoiding hardcoding credentials. By automating the image building process, you can ensure that your images are consistently built and up-to-date, and can be easily replicated across multiple environments.
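
As a preview of the Packer side (the full, tested source code will come in the next post), a minimal amazon-ebs template might look like the sketch below. The plugin version, region, instance type, AMI name, and provisioner command are all placeholder assumptions, not the final implementation.

packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
  }
}

locals {
  # Timestamp suffix so each bake produces a uniquely named AMI
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "rhel" {
  region        = "us-east-1"
  instance_type = "t2.micro"
  ssh_username  = "ec2-user"
  ami_name      = "baked-rhel-${local.timestamp}"

  # Same RHEL 8.5 lookup as the aws_ami data source in data.tf
  source_ami_filter {
    filters = {
      name                = "RHEL-8.5*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners      = ["309956199498"]
    most_recent = true
  }
}

build {
  sources = ["source.amazon-ebs.rhel"]

  # Placeholder provisioning step, mirroring the Image Builder component above
  provisioner "shell" {
    inline = ["echo 'hello world'"]
  }
}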
