Maintaining a secure, up-to-date AMI (Amazon Machine Image) used to be a tedious, error-prone process: run the latest updates on a running EC2 instance, then create an image from that instance. If any required step is missed, such as stopping a service, the resulting image will not work as intended, and an engineer can waste hours combing through logs to find the issue.
In December 2019, AWS introduced EC2 Image Builder, which aims to solve these problems and lets users create an image pipeline at no cost beyond the underlying resources the service uses.
What is EC2 Image Builder?
EC2 Image Builder is an AWS service that simplifies the building, testing, and deployment of virtual machine and container images for use on AWS or on-premises. Image Builder can customize an image using pre-built templates, run tests against the image, and export it so that it is accessible across regions and/or accounts.
This is a walkthrough of the steps to build an image pipeline that ultimately produces a secure, working image of an application.
Creating the Image Pipeline
To get started, navigate to the EC2 Image Builder service in the AWS console. On the left pane, click Image pipelines, then click Create image pipeline to start building.
1. Specify Pipeline Details
In the first step, specify the pipeline details, such as the pipeline name and description. Here, you can also specify a build schedule to instruct Image Builder to build a new version of the AMI on a regular or cron-based schedule. You can also choose to trigger the pipeline when there is a dependency update on the components you specify.
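If you prefer scripting this step, the same pipeline can be sketched with the AWS CLI. This is a sketch, not the article's exact setup: the pipeline name and the recipe and infrastructure-configuration ARNs are placeholders for resources created in the later steps, and `<accountid>` stands in for your account ID.

```shell
# Sketch: create an image pipeline with a nightly cron schedule that
# only runs when the schedule matches AND a component dependency has
# updated. The ARNs below are placeholders; substitute the ARNs of the
# recipe and infrastructure configuration you create later.
aws imagebuilder create-image-pipeline \
  --name "apache-ami-pipeline" \
  --image-recipe-arn "arn:aws:imagebuilder:us-east-1:<accountid>:image-recipe/apache-recipe/1.0.0" \
  --infrastructure-configuration-arn "arn:aws:imagebuilder:us-east-1:<accountid>:infrastructure-configuration/apache-infra" \
  --schedule 'scheduleExpression=cron(0 0 * * ? *),pipelineExecutionStartCondition=EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE'
```

This command requires AWS credentials and existing resources, so treat it as a template rather than something to run as-is.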
2. Creating a Recipe
The next step is the creation of the image recipe. Just as a cooking recipe contains the list of ingredients and the steps to produce a final dish, an image recipe is a document defining the components to be applied to a base image to create the desired configuration for the output image. If a recipe needs to be modified, a new version must be created with the updated components.
If you already created an image recipe prior to building the image pipeline, you can simply select it here. If you are building the recipe for the first time, you can follow along with the steps below. The image type can be either an AMI or a Docker image. Also add a name and description to easily identify this recipe. Note that you can update the working directory at this stage; I've updated mine to /var/tmp to test this out.
Going back to the image recipe, I'm selecting an Amazon-managed AMI using the latest OS version of Amazon Linux 2 x86. As part of the recipe, you can also add user data: commands that run on a Linux instance at launch time.
Build Components
This is the section of the recipe that I'd like to focus on. Components are software scripts that allow for custom configuration of the image. Components can be Amazon-managed, owned by you, shared with you, or third-party managed.
Let us ensure that we are updating to the latest Linux version. This can be done by adding the Amazon-managed update-linux component.
Let us also create a custom component by clicking the Create build component button on the upper right side.
Creating a Custom Component
Select Build as the component type. This component will run a configuration script that installs Apache and deploys a sample application, so I'm naming it apache-application and setting the version to 1.0.0.
For the content, copy and modify the script below:
```yaml
name: imagebuilder-applicationcomponent
description: 'This component will download the apache install script, and deploy a sample application'
schemaVersion: 1.0

phases:
  - name: build
    steps:
      - name: DownloadInstallScript
        action: S3Download
        onFailure: Abort
        maxAttempts: 3
        inputs:
          - source: s3://bitscollective/install-apache.sh
            destination: /var/tmp/install-apache.sh
      - name: RunScript
        action: ExecuteBash
        onFailure: Abort
        maxAttempts: 3
        inputs:
          commands:
            - 'chmod 755 {{ build.DownloadInstallScript.inputs[0].destination }}'
            - 'bash {{ build.DownloadInstallScript.inputs[0].destination }}'
      - name: SampleApplication
        action: S3Download
        onFailure: Abort
        maxAttempts: 3
        inputs:
          - source: s3://bitscollective/index.html
            destination: /var/www/html/index.html
      - name: CleanupInstallFiles
        action: ExecuteBash
        onFailure: Abort
        maxAttempts: 3
        inputs:
          commands:
            - 'rm {{ build.DownloadInstallScript.inputs[0].destination }}'
```
This will allow us to add this custom component in the recipe.
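If you'd rather script this step, the component can also be registered with the AWS CLI. This is a sketch; it assumes the YAML document above was saved locally as apache-application.yml (the file name is an assumption).

```shell
# Register the custom build component from the YAML document above.
# Assumes the document was saved locally as apache-application.yml.
aws imagebuilder create-component \
  --name "apache-application" \
  --semantic-version "1.0.0" \
  --platform "Linux" \
  --data file://apache-application.yml
```

This requires AWS credentials and permissions for Image Builder, so it is a template rather than something to run as-is.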
For the S3 files that will be accessed by this component, create the following files:
- index.html

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Sample Application</title>
  </head>
  <body>
    <h1>hello!</h1>
  </body>
</html>
```
- install-apache.sh

```bash
#!/bin/bash
# Install Apache
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
```
Adding a Test Component
Another great feature of Image Builder is the capability to add tests that run on an instance after image creation, to ensure the image works as you expect. A few basic AWS-managed test components are available for use, and you can also create custom test components to add to the pipeline.
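One way to see which AWS-managed test components are available is to filter the component list by owner and type with the CLI. This is a sketch; the query expression assumes the default JSON output shape of the ListComponents API.

```shell
# List Amazon-managed components and keep only those of type TEST,
# showing their names and versions in a table.
aws imagebuilder list-components \
  --owner Amazon \
  --query "componentVersionList[?type=='TEST'].[name,version]" \
  --output table
```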
3. Defining Infrastructure Configuration (Optional)
Infrastructure configuration determines the EC2 instances used when customizing the image and running validation tests. The default setting also creates an IAM role with the required policies to execute commands and customize the image.
4. Defining Distribution Settings (Optional)
Distribution settings allow customization of the target regions, encryption, launch permissions, the accounts allowed to launch the output AMI, and license configurations.
Continue to the last step, and create the pipeline.
Running the Image Builder Pipeline
To start testing the image, first run the pipeline to create an image with all the customizations we defined. Go to Image pipelines, select the pipeline you created, then click Actions > Run pipeline.
In the message that pops up, click View details. This navigates to the pipeline workflow, where we can drill down into the steps and identify any issues.
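The same run can also be triggered from the CLI; the pipeline ARN below is a placeholder, and `<accountid>` stands in for your account ID.

```shell
# Trigger a pipeline run; substitute your pipeline's ARN.
aws imagebuilder start-image-pipeline-execution \
  --image-pipeline-arn "arn:aws:imagebuilder:us-east-1:<accountid>:image-pipeline/apache-ami-pipeline"
```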
Based on the logs, a 403 error is encountered because the role assumed by EC2 Image Builder does not have permission to get files from the S3 bucket. To find the role, go back to the pipeline and note the IAM role that was set up.
Now, go to the S3 bucket and add the required bucket policy to allow access from Image Builder.
Replace `<accountid>` with your AWS account ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ImageBuilder",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<accountid>:role/EC2InstanceProfileForImageBuilder"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bitscollective/*",
        "arn:aws:s3:::bitscollective"
      ]
    }
  ]
}
```
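If you save the policy locally, it can also be applied from the CLI; the file name policy.json is an assumption.

```shell
# Attach the bucket policy allowing the Image Builder instance role
# to read objects from the bucket.
aws s3api put-bucket-policy \
  --bucket bitscollective \
  --policy file://policy.json
```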
Re-run the pipeline, and the issue should now be resolved.
Once complete, the image appears under the output resources and can now be used for EC2 instances.
Testing and Using the Created AMI
To quickly test whether the image works as expected, go to the EC2 service and, on the left pane, select AMIs. Select the AMI that was created and click Launch instance from AMI.
Add a name for the EC2 host, leave the other details at their defaults, and click Launch instance.
Once the instance is created, get the public IP and load it in a web browser. If the page doesn't load, one possibility is that HTTP traffic is not allowed to the EC2 instance. To fix this, go to the instance's Security tab and click the security group link. Edit the inbound rules, add a rule allowing HTTP traffic (port 80) from anywhere, and save the rule.
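The same inbound rule can be added from the CLI; the security group ID below is a placeholder.

```shell
# Allow inbound HTTP (port 80) from anywhere on the instance's
# security group; replace sg-0123456789abcdef0 with your group's ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```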
Reload the public IP again from the browser.
Now that you have verified that the image works, you can terminate the EC2 instance to ensure you are not charged for unused resources.
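Cleanup can also be done from the CLI; the instance ID below is a placeholder.

```shell
# Terminate the test instance; replace the ID with your instance's.
aws ec2 terminate-instances \
  --instance-ids i-0123456789abcdef0
```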
This is the first of a three-part series about Infrastructure as Code. Stay tuned for the next article!