Introduction
The world of technology moves quickly, so quickly that keeping up with everything new can be overwhelming. Automating basic, repetitive tasks is therefore inevitable.
Most development jobs today require you to deploy the application you've built (API, chat-bot, web app, etc.) either as a final stage or during development to show real-time progress to your clients. As a developer, you need to understand the deployment process. However, your main focus should be on building the product and its logic.
To save time and stay focused, you should automate the deployment process before starting development. Once this is set up, you can concentrate on building the product.
Automating deployments is part of the DevOps practices that improve an organization's efficiency. It reduces lead time, allowing changes to ship in minutes, shortens deployment time since the process is automated, and improves the overall work experience. This is known as Continuous Delivery: released code is shipped to production quickly and automatically. There is more to the terminology, but today we will focus on how to automate deployments.
Follow along with this GitHub repository: https://github.com/jackkweyunga/auto-deploy-flask-to-server
Tools
Most of you have already heard of or used Ansible and GitHub Actions since they are popular tools. If you haven't, don't worry—they are simple and easy to understand. By the end of this article, you'll have a good start.
Ansible is a general-purpose IT automation tool written in Python, with a large community around it. It is mostly used for configuration management, resource provisioning, orchestration, and more. Read more
GitHub Actions is a GitHub feature that lets you automate workflows such as tests, builds, and releases based on given triggers. Read more
Setup
To accomplish our goal today, we need to package our application so that it's portable and easy to run. This is where Docker comes in. You have likely heard of Docker, and maybe even used it.
For this article, we'll be building and deploying a simple Flask API.
The folder structure will be as follows:
auto-deploy-flask-to-server
    .github/
        workflows/
            docker-image.yml
            ansible-deploy.yml
    ansible/
        flask-api/
            files/
                docker-compose.yml
            tasks/
                main.yml
        deploy.yml
        create-sudo-password-ansible-secret.sh
    Dockerfile
    app.py
    requirements.txt
    docker-compose.dev.yml
    .dockerignore
    .gitignore
Application (Flask API)
In our app.py, there is a simple Flask route that returns a JSON object with the current time.
app.py
from flask import Flask
from datetime import datetime as dt
import os

app = Flask(__name__)


@app.route("/")
def heartbeat():
    return {
        "now": dt.utcnow()
    }


if __name__ == "__main__":
    # Get the DEBUG environment variable, defaulting to True if not set.
    debug = os.environ.get("DEBUG", "True").lower() in ("true", "1")
    # Bind to all interfaces so the app is reachable from outside a container.
    app.run(host="0.0.0.0", port=5000, debug=debug)
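Before containerizing anything, it's worth a quick local sanity check. A minimal run might look like this, assuming Flask is listed in requirements.txt:

# Install dependencies and start the development server
pip install -r requirements.txt
python app.py

# In another terminal, hit the heartbeat route
curl http://localhost:5000/
# Expect a JSON object with the current UTC time, e.g. {"now": "..."}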
Docker
This is what the Dockerfile should look like.
Dockerfile
# Use an official Python runtime as a parent image
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
# Create and set the working directory
WORKDIR /app
# Copy the requirements file and install dependencies
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code to the working directory
COPY . /app/
# Expose the port the app runs on
EXPOSE 5000
# Set the environment variable to ensure Python runs in production mode
ENV FLASK_ENV=production
# Command to run the application
CMD ["python", "app.py"]
After adding the Dockerfile, we can test our application image. Build it with the following Docker command, and fix any errors until the build succeeds and the image runs correctly.
# Build docker image
# windows
docker build -t flask_app .
# linux
sudo docker build -t flask_app .
# Run the image
# windows
docker run --rm -p "5000:5000" flask_app
# linux
sudo docker run --rm -p "5000:5000" flask_app
Now let's add content to the docker-image.yml file in the .github/workflows directory. This will automate Docker image builds for our application. For more details about this file, please refer to this article: Upload Docker Images to GitHub: A Simple Guide with GitHub Actions. This setup creates a Docker image tagged with the commit hash and git branch whenever we push our code to GitHub.
.github/workflows/docker-image.yml
name: ci # Put the name of your choice

on:
  workflow_dispatch:
  push:
    branches:
      - "main" # Change to the branch name you are targeting

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }} # takes the name of the repository

jobs:
  build-publish-deploy:
    name: build and push docker
    runs-on: ubuntu-latest
    permissions:
      contents: write
      packages: write
    steps:
      - name: checkout
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@69f6fc9d46f2f8bf0d5491e4aabe0bb8c6a4678a
        with:
          images: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=tag
            type=ref,event=pr
            type=sha
          flavor: |
            latest=auto
            prefix=
            suffix=

      - name: Build and push hash tagged image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          cache-to: type=inline
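Once this workflow has run on a push to main, you can sanity-check the result by pulling the published image manually (the image path below is from the companion repository; substitute your own):

# Pull the branch-tagged image the workflow published to ghcr.io
sudo docker pull ghcr.io/jackkweyunga/auto-deploy-flask-to-server:main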
Let's add content to the docker-compose.dev.yml file in the root directory of our project. Use this Docker Compose file for testing and development as needed.
docker-compose.dev.yml
services:
  web:
    build: .
    container_name: flask_app
    environment:
      - FLASK_ENV=development
      - DEBUG=True
    ports:
      - "5000:5000"
    restart: always
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      resources:
        limits:
          memory: "256m"
          cpus: "0.50"
To run the Docker Compose file, run the following commands.
# windows
docker compose -f docker-compose.dev.yml build
docker compose -f docker-compose.dev.yml up -d
# linux
sudo docker compose -f docker-compose.dev.yml build
sudo docker compose -f docker-compose.dev.yml up -d
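Once the stack is up, a quick way to confirm the container is healthy might look like this (a sketch; adjust the service name if yours differs):

# Check container status and logs, then hit the endpoint
sudo docker compose -f docker-compose.dev.yml ps
sudo docker compose -f docker-compose.dev.yml logs web
curl http://localhost:5000/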
Ansible
It's time to finish setting up Ansible in our project. We will be making changes to the contents of the Ansible folder.
ansible/
    flask-api/
        files/
            docker-compose.yml
        tasks/
            main.yml
    deploy.yml
    create-sudo-password-ansible-secret.sh
We've organized Ansible into roles. flask-api is an Ansible role containing all the files and tasks needed to deploy our Flask application. But hey, what's an Ansible role?
An Ansible role provides a structured way to organize the files, variables, templates, and tasks related to a given automation.
You can always add another role for other automation, for example a database role.
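If you want to scaffold such an additional role with the standard directory layout, ansible-galaxy can generate the skeleton for you (the database role name here is purely illustrative):

# Create a skeleton role named "database" inside the ansible/ directory
ansible-galaxy init --init-path ansible database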
Let's add content to the docker-compose.yml file in the ansible/flask-api/files directory, which is our production Docker Compose file. Notice that we no longer set build: .; instead, we use the remote path to the Docker image we built earlier.
You might ask yourself why this Docker Compose file looks so similar to the one in the root directory. This is because your testing and, if possible, development environments should mirror the production environment as closely as possible. Doing so helps minimize errors and bugs that occur in production but are not detectable in testing or development. It doesn't eliminate them entirely, but it reduces them.
Additional services are portainer and watchtower.
Portainer is a web interface for Docker that helps you manage containers, images, stacks, and more.
Watchtower is crucial in our setup because it pulls new images after they are published by GitHub Actions. Since we are using ghcr.io with a public image, we do not need to set up any Docker authentication for it. Otherwise, we would have to configure Docker authentication for the private container registry.
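For reference, if you later switch to a private image, logging the server in to GitHub's registry would look roughly like this (a sketch; CR_PAT stands for a personal access token with the read:packages scope, following GitHub's documentation):

# Log in to ghcr.io on the server so pulls of private images are authorized
export CR_PAT="<personal-access-token>"
echo "${CR_PAT}" | sudo docker login ghcr.io -u <github-username> --password-stdin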
ansible/flask-api/files/docker-compose.yml
volumes:
  portainer-data:

services:
  portainer:
    image: portainer/portainer-ce:alpine
    container_name: portainer
    command: -H unix:///var/run/docker.sock
    ports:
      - "9000:9000"
    volumes:
      # Connect docker socket to portainer
      - "/var/run/docker.sock:/var/run/docker.sock"
      # Persist portainer data
      - "portainer-data:/data"
    restart: always

  watchtower:
    container_name: "watchtower"
    image: "docker.io/containrrr/watchtower"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # To enable docker authentication, uncomment the line below.
      # You also need to make sure you are logged in to docker on the server,
      # e.g. by running: sudo docker login ghcr.io
      # - /root/.docker/config.json:/config.json:ro
    restart: always
    environment:
      TZ: Africa/Dar_es_Salaam
      WATCHTOWER_LIFECYCLE_HOOKS: "1" # Enable pre/post-update scripts
    command: --debug --cleanup --interval 30

  web:
    image: ghcr.io/jackkweyunga/auto-deploy-flask-to-server:main
    environment:
      - FLASK_ENV=production
      - DEBUG=${DEBUG}
    ports:
      - "5000:5000"
    restart: always
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      resources:
        limits:
          memory: "256m"
          cpus: "0.50"
Let's define our task in ansible/flask-api/tasks/main.yml. What does it do?
- Creates a project directory on the remote server if it doesn't exist. In our case, that's the .flask-app folder.
- Copies the relevant files to the server. In our case, that's the docker-compose.yml file.
- Runs the compose commands to pull and run the services.
ansible/flask-api/tasks/main.yml
# main task
- name: Deploy app
  become: true
  block:
    - name: Create a directory if it does not exist
      ansible.builtin.file:
        path: "{{ lookup('ansible.builtin.env', 'HOME_DIR') }}/{{ lookup('ansible.builtin.env', 'PROJECT_DIR') }}/.flask-app"
        state: directory

    - name: Copy compose file to server
      ansible.builtin.copy:
        src: "files/"
        dest: "{{ lookup('ansible.builtin.env', 'HOME_DIR') }}/{{ lookup('ansible.builtin.env', 'PROJECT_DIR') }}/.flask-app/"

    # Uncomment this when working with private repositories.
    # Make sure the environment variables DOCKER_TOKEN and DOCKER_USERNAME are provided.
    #
    # - name: Login to Docker via vars/main.yaml
    #   shell: "echo \"{{ lookup('ansible.builtin.env', 'DOCKER_TOKEN') }}\" | docker login ghcr.io -u {{ lookup('ansible.builtin.env', 'DOCKER_USERNAME') }} --password-stdin"

    - name: Docker Compose Up
      community.docker.docker_compose_v2:
        project_src: "{{ lookup('ansible.builtin.env', 'HOME_DIR') }}/{{ lookup('ansible.builtin.env', 'PROJECT_DIR') }}/.flask-app"
        pull: "always"
      register: output

    - name: Debug output
      ansible.builtin.debug:
        var: output
For Ansible to run automation tasks, you need to define a playbook. But what is a playbook?
A playbook in Ansible contains a list of tasks and/or roles to be executed against a list of hosts (e.g., web servers).
In our setup, we define the playbook ansible/deploy.yml. What does it do?
- Provides a way to pass the sudo password when running privileged sudo commands.
- Defines the roles to be run.
- Defines the remote environment variables (not strictly needed for this setup).
ansible/deploy.yml
---
- hosts: webservers
  # an encrypted ansible secret file containing the sudo password
  vars_files:
    - secret
  roles:
    - flask-api
  environment:
    DEBUG: "{{ lookup('ansible.builtin.env', 'DEBUG') }}"
To ensure that the encrypted sudo password (secret) is available, we need to run the script create-sudo-password-ansible-secret.sh. What does the script do?
- Creates the vault password file (vault.txt).
- Creates the encrypted vars file (secret) containing ansible_sudo_pass, which is passed when running privileged sudo commands with Ansible.
ansible/create-sudo-password-ansible-secret.sh
#!/bin/bash
# variables
VAULT_PASSWORD=$(openssl rand -base64 12)
VAULT_PASSWORD_FILE="ansible/vault.txt"
VAULT_FILE="ansible/secret"
SUDO_PASSWORD="$1"
SUDO_PASSWORD_FILE="/tmp/sudo-password"
# sudo password is required
if [ -z "${SUDO_PASSWORD}" ]; then
    echo "Usage: $0 <sudo-password>"
    exit 1
fi
# create vault password file
echo "${VAULT_PASSWORD}" > "${VAULT_PASSWORD_FILE}"
# create a sudo password file
echo "ansible_sudo_pass: ${SUDO_PASSWORD}" > "${SUDO_PASSWORD_FILE}"
# encrypt sudo password
ansible-vault encrypt --vault-password-file "${VAULT_PASSWORD_FILE}" "${SUDO_PASSWORD_FILE}" --output "${VAULT_FILE}"
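Locally, you would invoke it from the repository root like this (the deploy workflow below runs it for you):

# Creates ansible/vault.txt and the encrypted ansible/secret vars file
bash ansible/create-sudo-password-ansible-secret.sh 'your-sudo-password'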
That's it for Ansible. Coming up next: GitHub Actions setup.
GitHub Actions
We previously created a GitHub Actions workflow file, .github/workflows/docker-image.yml, that builds and publishes a Docker image for us on push events.
In this section, we are going to add another workflow that runs the deploy.yml Ansible playbook. Let's add content to the file .github/workflows/ansible-deploy.yml
.github/workflows/ansible-deploy.yml
name: ansible-deploy

on:
  workflow_dispatch:
    inputs:
      REMOTE_USER:
        type: string
        description: 'Remote User'
        required: true
        default: 'ubuntu'
      HOME_DIR:
        type: string
        description: 'Home Directory'
        required: true
        default: '/home/ubuntu'
      TARGET_HOST:
        description: 'Target Host'
        required: true
        default: "example.com" # Change this to your server IP or domain

jobs:
  ansible:
    runs-on: ubuntu-latest
    env:
      DEBUG: 0
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Add SSH Keys
        run: |
          cat << EOF > ansible/devops-key
          ${{ secrets.SSH_DEVOPS_KEY_PRIVATE }}
          EOF

      - name: Update devops private key permissions
        run: |
          chmod 400 ansible/devops-key

      - name: Install Ansible
        run: |
          pip install ansible

      - name: Add or override Ansible inventory file
        run: |
          cat << EOF > ansible/inventory.ini
          [webservers]
          ${{ inputs.TARGET_HOST }}
          EOF

      - name: Add or override Ansible config file
        run: |
          cat << EOF > ./ansible/ansible.cfg
          [defaults]
          ansible_python_interpreter='/usr/bin/python3'
          deprecation_warnings=False
          inventory=./inventory.ini
          remote_tmp="${{ inputs.HOME_DIR }}/.ansible/tmp"
          remote_user="${{ inputs.REMOTE_USER }}"
          host_key_checking=False
          private_key_file = ./devops-key
          retries=2
          EOF

      - name: Run deploy playbook
        run: |
          sh ansible/create-sudo-password-ansible-secret.sh ${{ secrets.SUDO_PASSWORD }}
          ANSIBLE_CONFIG=ansible/ansible.cfg ansible-playbook ansible/deploy.yml --vault-password-file=ansible/vault.txt
The workflow:
- Installs Ansible on the GitHub Ubuntu runner.
- Creates the Ansible inventory file ansible/inventory.ini, filling it with the target host IP or domain.
- Creates the Ansible configuration file ansible/ansible.cfg based on the GitHub Actions inputs and secrets.
- Runs the script that creates the sudo password Ansible secret.
- Runs the Ansible playbook.
Notice the places in the YAML file where values are read from the secrets variable. This refers to GitHub Actions secrets, which you can edit by navigating to settings/secrets/actions in your GitHub repository. For this workflow, we need to add the following GitHub secrets:
- SUDO_PASSWORD: the password of the remote user
- SSH_DEVOPS_KEY_PRIVATE: the SSH private key with access to the remote server
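If you prefer the command line, the GitHub CLI can set both secrets as well (the values and key path below are placeholders):

# Store the remote user's sudo password and the deploy SSH key as repository secrets
gh secret set SUDO_PASSWORD --body 'your-sudo-password'
gh secret set SSH_DEVOPS_KEY_PRIVATE < ~/.ssh/id_ed25519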
Use GitHub secrets to store private information needed by a workflow. That's it for GitHub Actions. Now, let's review what we have done so far.
Operation
Now that our setup is complete, let's learn how it operates.
- When you push to the main branch, a new Docker image is published to the GitHub container registry. If the workflow fails, check and fix the error until it passes again.
- The ansible-deploy workflow is triggered manually to configure and deploy the application to the server. Run it for the first deployment and whenever the configuration changes; there is no need to run it regularly.
- Whenever a new image is published, Watchtower automatically pulls it onto the server and updates the containers running that image.
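To see that last step in action, you can follow Watchtower's logs on the server right after pushing a change:

# Watchtower logs each poll and every container it updates
sudo docker logs -f watchtower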
With that, we have completed the setup for automated deployments to a server using Ansible and GitHub Actions.
Conclusion
Wow! It's been quite a journey. If you've made it this far, you've probably learned a thing or two about automated deployments with Ansible and GitHub Actions. If anything is unclear, please ask, and I'll be happy to help. I hope you enjoyed the article.
Seeking expert guidance in Ops, DevOps, or DevSecOps? I provide customized consultancy services for personal projects, small teams, and organizations. Whether you require assistance in optimizing operations, improving your CI/CD pipelines, or implementing strong security practices, I am here to support you. Let's collaborate to elevate your projects. Contact me today.
What's next?
One can go further and do the following to improve or scale up the process:
Add automated tests that run before an image is built and published.
Add Ansible configuration tests that run when Ansible configurations are updated.
Scale up the setup with Kubernetes and Argo CD (depending on the organization's needs and components).