EC2 Configuration using Ansible & GitHub Actions

Disclaimer

  1. Some basic understanding of GitHub, GitHub Actions, Terraform, Terragrunt, and Ansible is needed to follow along.
  2. This article builds on my previous article, so to follow along you'll need to go through it first.

In this article, we'll use GitHub Actions & Ansible to deploy a test web page to our provisioned EC2 instance and test that it worked using the instance's public DNS or public IP address.

In the previous article, we provisioned an EC2 instance in a public subnet using Terraform & Terragrunt. We made sure that this instance would be accessible via SSH, and this is important because the Ansible host will need to connect to the instance via SSH to perform its configuration management tasks.

These are the steps we'll need to follow to achieve our objectives:

  1. Version our infrastructure code with GitHub.

  2. Create a GitHub Actions workflow and delegate the infrastructure provisioning to it, instead of applying changes from our local computers.

  3. Add a job to our GitHub Actions workflow that configures Ansible and deploys our test web page to the provisioned EC2 instance.

1. Version our infrastructure code with GitHub

We'll start by creating a GitHub repository for each of our building blocks from the previous article.

On GitHub, click the New repository button (under the + menu in the top-right corner). You should be shown a screen similar to the one below, asking you to enter a name and a description for the repository.

Creating GitHub repository for building block

Enter the appropriate information for the building block, then scroll down and click on the Create repository button.
You can then go to your local code for this building block and push it to your newly created repository.
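
For example, assuming your local folder and repository are both called vpc-building-block (the names here are purely illustrative):

cd vpc-building-block
git init
git add .
git commit -m "Initial commit"
git branch -M main
git remote add origin https://github.com/<your-username>/vpc-building-block.git
git push -u origin main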

Repeat this step for each building block, and you should end up with a list of repositories similar to the one below (you should have more repositories of course).

List of repositories

You should then create a repository for your Terragrunt code, and name it infra-live, for example.

Terragrunt code repository

The next step will be to update each terragrunt.hcl file in your infra-live project so that it points to the corresponding Git repository for its building block, and to remove the AWS credentials from the inputs section of each file (a sketch follows the list below):

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_REGION
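
As a minimal sketch, an updated terragrunt.hcl could look like the following; the repository URL, the ref tag, and the vpc_cidr input are placeholders to adapt to your own building blocks:

# dev/vpc/terragrunt.hcl (hypothetical paths and names)
terraform {
  source = "git::https://github.com/<your-username>/vpc-building-block.git?ref=v0.1.0"
}

include "root" {
  path = find_in_parent_folders()
}

inputs = {
  # AWS credential inputs removed; the workflow will supply credentials instead
  vpc_cidr = "10.0.0.0/16"
}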

Terragrunt VPC pointing to Terraform building block in GitHub

You can then push your infra-live code to its GitHub repository, and our infrastructure code will have been versioned!

2. GitHub Actions workflow for infrastructure provisioning

Now that our code has been versioned, we can write a workflow that will be triggered whenever we push code to the main branch (use whichever branch you prefer, like master).
Ideally, this workflow should only be triggered after a pull request has been approved to merge to the main branch, but we'll keep it simple for illustration purposes.
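
For reference, a common pattern for running a workflow only when a pull request has actually been merged into main looks like this (shown as a sketch only; we won't use it in this article):

on:
  pull_request:
    types: [closed]
    branches:
      - main

jobs:
  terraform:
    if: github.event.pull_request.merged == true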

Before doing anything, we'll configure some secrets in our GitHub infra-live repository settings. These secrets will be required for the GitHub Actions workflow to be able to properly provision your infrastructure.

From within the infra-live repository, click on the Settings tab to access the repository's settings.

Repository settings

In the left menu, under the Security block, expand Secrets and variables and select Actions.

Repository secrets

You can then add repository secrets for AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) by clicking on the New repository secret button.
You'll also need to create an SSH_PRIVATE_KEY secret, which will be required by Ansible to SSH into the EC2 instance. Use the content of the .pem file you created for the SSH key pair in the previous article. It should look similar to this (make sure not to share it with anyone, as that would be a big security risk):

Sample redacted content of SSH private key
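
If you prefer the command line, the same secrets can be created with the GitHub CLI; this is a sketch assuming gh is installed and authenticated, and that your key pair file is called web-server-key.pem (adjust the names to yours):

gh secret set AWS_ACCESS_KEY_ID --body "<your-access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<your-secret-access-key>"
gh secret set AWS_DEFAULT_REGION --body "us-east-1"
gh secret set SSH_PRIVATE_KEY < web-server-key.pem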

We can now start working on our GitHub Actions workflow!
The first thing will be to create a .github/workflows directory in the root of your infra-live project. You can then create a YAML file within this infra-live/.github/workflows directory, called configure.yml for example.

We'll add the following code to our infra-live/.github/workflows/configure.yml file to handle the provisioning of our infrastructure:

name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
          cd apache-server/ec2-web-server
          public_ip=$(terragrunt output instance_public_ip)
          echo "$public_ip" > public_ip.txt
          cat public_ip.txt
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}

Let's break down what this file does:

a) The name: Configure line names our workflow Configure.

b) The following lines tell GitHub to trigger this workflow whenever code is pushed to the main branch or a pull request targeting the main branch is opened or updated:

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

c) Then we define our first job called terraform using the lines below, telling GitHub to use a runner that runs on the latest version of Ubuntu. Think of a runner as the GitHub server executing the commands in this workflow file for us:

jobs:
  terraform:
    runs-on: ubuntu-latest

d) We then define a series of steps or blocks of commands that will be executed in order.
The first step uses a GitHub action to check out our infra-live repository into the runner so that we can start working with it:

      - name: Checkout repository
        uses: actions/checkout@v2

The next step uses another GitHub action to help us easily set up SSH on the GitHub runner using the private key we had defined as a repository secret:

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

The following step uses yet another GitHub action to help us easily install Terraform on the GitHub runner, specifying the exact version that we need:

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

Then we use another step to execute a series of commands that install Terragrunt on the GitHub runner. We use the command terragrunt -v to check the version of Terragrunt installed and confirm that the installation was successful:

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

Finally, we add a step that applies our Terraform changes, then runs a series of commands to retrieve the public IP address of our provisioned EC2 instance and save it to a file called public_ip.txt (we'll need this for the Ansible configuration).
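
For this to work, the ec2-web-server building block must expose a Terraform output named instance_public_ip. If yours doesn't already, it would look something like this (the resource name aws_instance.web_server is hypothetical):

output "instance_public_ip" {
  description = "Public IP address of the EC2 web server"
  value       = aws_instance.web_server.public_ip
}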

With our infrastructure provisioned, we can now proceed to configure Ansible and deploy our test web page in our EC2 instance.

3. Configure Ansible and deploy test web page

We can now configure Ansible using a different workflow job that we'll call ansible, and deploy our test web page to our EC2 instance.

But first, we need to make the file containing the EC2 instance's public IP address (public_ip.txt) from the terraform job available to our ansible job.
For that, we need to add another step to our terraform job to upload the artifact we generated (public_ip.txt):

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: ip-artifact
          path: dev/apache-server/ec2-web-server/public_ip.txt

With that out of the way, we can configure our ansible job:

  ansible:
    runs-on: ubuntu-latest
    needs: terraform

    steps:
      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: ip-artifact

      - name: Configure Ansible
        run: |
          sudo apt update
          sudo pipx inject ansible-core jmespath
          ansible-playbook --version
          echo "[web]" >> ansible_hosts
          cat public_ip.txt >> ansible_hosts
          mv ansible_hosts $HOME
          cat $HOME/ansible_hosts

      - name: Configure playbook
        run: |
          cd $HOME
          cat > deploy.yml <<EOF
          ---
          - hosts: web
            remote_user: ec2-user
            become: true

            tasks:
              - name: Create web page
                copy:
                  dest: "/var/www/html/test.html"
                  content: |
                    <html>
                      <head>
                        <title>Test Page</title>
                      </head>
                      <body>
                        <h1>This is a test page</h1>
                      </body>
                    </html>
          EOF
          cat $HOME/deploy.yml

      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: deploy.yml
          directory: /home/runner
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          options: |
            --inventory ansible_hosts
            --verbose

Let's break this down.

a) We define our second job called ansible, telling GitHub again to use a runner with the latest version of Ubuntu, and specifying that this job needs the terraform job to first complete successfully before it can be run:

  ansible:
    runs-on: ubuntu-latest
    needs: terraform

b) We then define our job's steps, the first being to download the artifact we generated in the previous job using a GitHub action:

      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: ip-artifact

c) The next step builds our inventory (or hosts) file. GitHub's Ubuntu runners already ship with ansible-core (managed by pipx), so we only inject the jmespath dependency into it and confirm that ansible-playbook is available. We define a group of servers called [web] in this file and pass the public IP address of our EC2 instance to this [web] group.
We then move our inventory file to our $HOME directory so that it can be accessed in the subsequent steps.

      - name: Configure Ansible
        run: |
          sudo apt update
          sudo pipx inject ansible-core jmespath
          ansible-playbook --version
          echo "[web]" >> ansible_hosts
          cat public_ip.txt >> ansible_hosts
          mv ansible_hosts $HOME
          cat $HOME/ansible_hosts
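
The resulting ansible_hosts inventory file would then contain something like this (using the example IP address shown at the end of this article):

[web]
18.212.153.185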

d) In the following step, we configure our Ansible playbook by defining its configuration and putting it in a file called deploy.yml in our $HOME directory. The configuration has a task to create an HTML page called test.html in the /var/www/html/ directory:

      - name: Configure playbook
        run: |
          cd $HOME
          cat > deploy.yml <<EOF
          ---
          - hosts: web
            remote_user: ec2-user
            become: true

            tasks:
              - name: Create web page
                copy:
                  dest: "/var/www/html/test.html"
                  content: |
                    <html>
                      <head>
                        <title>Test Page</title>
                      </head>
                      <body>
                        <h1>This is a test page</h1>
                      </body>
                    </html>
          EOF
          cat $HOME/deploy.yml
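
Note that this playbook assumes Apache is already installed on the instance and serving files from /var/www/html (presumably handled when the instance was provisioned in the previous article). If that weren't the case, you could extend the play with tasks along these lines (a sketch for Amazon Linux's httpd package):

tasks:
  - name: Ensure Apache is installed
    yum:
      name: httpd
      state: present

  - name: Ensure Apache is running and enabled
    service:
      name: httpd
      state: started
      enabled: true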

e) Finally, our last step runs our Ansible playbook using a custom GitHub action that takes as inputs the name of our playbook file, the path to the directory containing it, our SSH private key (retrieved from the repository secrets), and an option pointing to our inventory file:

      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: deploy.yml
          directory: /home/runner
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          options: |
            --inventory ansible_hosts
            --verbose
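
One thing to watch for: if the play fails with a host key verification error, a common workaround is to disable Ansible's host key checking for this step via Ansible's standard environment variable (a sketch; whether it's needed depends on the action's defaults):

      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        env:
          ANSIBLE_HOST_KEY_CHECKING: "False"
        # ...rest of the step unchanged (with: playbook, directory, key, options)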

The final version of our workflow file should then look like this:

name: Configure

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Setup SSH
        uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.5.5
          terraform_wrapper: false

      - name: Setup Terragrunt
        run: |
          curl -LO "https://github.com/gruntwork-io/terragrunt/releases/download/v0.48.1/terragrunt_linux_amd64"
          chmod +x terragrunt_linux_amd64
          sudo mv terragrunt_linux_amd64 /usr/local/bin/terragrunt
          terragrunt -v

      - name: Apply Terraform changes
        run: |
          cd dev
          terragrunt run-all apply -auto-approve --terragrunt-non-interactive -var AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -var AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -var AWS_REGION=$AWS_DEFAULT_REGION
          cd apache-server/ec2-web-server
          public_ip=$(terragrunt output instance_public_ip)
          echo "$public_ip" > public_ip.txt
          cat public_ip.txt
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: ip-artifact
          path: dev/apache-server/ec2-web-server/public_ip.txt

  ansible:
    runs-on: ubuntu-latest
    needs: terraform

    steps:
      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: ip-artifact

      - name: Configure Ansible
        run: |
          sudo apt update
          sudo pipx inject ansible-core jmespath
          ansible-playbook --version
          echo "[web]" >> ansible_hosts
          cat public_ip.txt >> ansible_hosts
          mv ansible_hosts $HOME
          cat $HOME/ansible_hosts

      - name: Configure playbook
        run: |
          cd $HOME
          cat > deploy.yml <<EOF
          ---
          - hosts: web
            remote_user: ec2-user
            become: true

            tasks:
              - name: Create web page
                copy:
                  dest: "/var/www/html/test.html"
                  content: |
                    <html>
                      <head>
                        <title>Test Page</title>
                      </head>
                      <body>
                        <h1>This is a test page</h1>
                      </body>
                    </html>
          EOF
          cat $HOME/deploy.yml

      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: deploy.yml
          directory: /home/runner
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          options: |
            --inventory ansible_hosts
            --verbose

We can now commit and push our code to the main branch of our infra-live GitHub repository, and the pipeline will automatically be triggered to provision our infrastructure and deploy our test web page to our EC2 instance.

Both workflow jobs should succeed like in the image below:

Jobs succeeded

You should then be able to access the deployed test web page by opening a browser and entering the public IP address of your EC2 instance followed by /test.html.
For example, http://18.212.153.185/test.html.
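
You can also verify it from a terminal:

curl http://18.212.153.185/test.html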

Your browser should display the page as in the image below:

Test web page displayed in browser

Conclusion
We now have a foundation for using GitHub Actions to automate the provisioning of our infrastructure with Terraform and Terragrunt, as well as the configuration of our servers with Ansible. We can build on this to design more complex pipelines depending on our use cases.

If I made any mistakes or you think I could have done something more efficiently, please don't hesitate to point it out in a comment below.

Until next time, happy coding!!
