Mattias Lundberg
Using ansible for container orchestration

There exist almost as many ways to deploy applications as there are applications. In this post we will look closer at one of them: using Ansible to deploy and orchestrate applications in Docker containers.

Ansible is a platform for managing servers and what runs on them. In Ansible you use one or more so-called playbooks to perform changes on one or more servers at the same time. A playbook has settings for which servers it should run against and what it should do, the so-called tasks. A task can be to install a package or to make sure a container is running.
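
As a concrete illustration of these concepts, a minimal playbook could look like this (the hosts group and package are hypothetical examples, not from the project described below):

```yaml
---
- name: Example playbook
  hosts: webservers        # which servers the playbook runs against
  become: yes

  tasks:
    - name: Install nginx  # a task: make sure a package is installed
      apt:
        name: nginx
        state: present
```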

Docker is a system for packaging and running applications in a consistent way, no matter what programming language they are written in. Docker can build and run containers; a container can be seen as a lightweight virtual computer.

Orchestration is the automated configuration and management of multiple computer systems and the software on them. In this post it describes how and where an application is running.

There are many ways to handle orchestration of containers. A common solution is Kubernetes, which makes the orchestration itself easy. One drawback is that it adds complexity, especially in smaller environments. This complexity is sometimes too great for smaller projects, especially if the application does not need all the powerful features of a full orchestrator.

I will walk through a more lightweight alternative: Ansible. The description and example are based on a real world project running in AWS, where the application is a small web application with a Flask/Python backend. The concepts described are applicable to any web application running in Docker, no matter the framework or language used. The application required a database, a load balancer and multiple application servers. This post will focus on the application servers, which all run the same codebase with the same configuration.

The servers and supporting services are set up by Terraform, and then Ansible is used to install and configure them. After the creation of the servers with Terraform, the following steps are required:

  1. Run an ansible playbook to install docker
  2. Run an ansible playbook to start the application and add the containers to a loadbalancer
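
The first playbook is not shown in the post; a minimal sketch of what it could look like, assuming Debian/Ubuntu hosts (the docker.io package name and hosts group are assumptions):

```yaml
---
- name: Install docker
  hosts: application_servers
  become: yes

  tasks:
    - name: Install docker engine
      apt:
        name: docker.io
        state: present
        update_cache: yes

    - name: Ensure docker is running
      service:
        name: docker
        state: started
        enabled: yes
```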

The first step, installing the base system, could easily be removed by building a custom image, but it is kept here for clarity and simplicity. The second step is what we will focus on. This is a simplified version of the playbook used to deploy the application:

---
- name: Prepare deployment
  hosts: application_servers

  tasks:
    - name: Run database migrations
      run_once: yes
      docker_container:
        name: app-migrations
        pull: yes
        image: "application:{{ version }}"
        command: "db upgrade"

- name: Deploy to one server at the time
  hosts: application_servers
  serial: 1

  tasks:
    - name: Remove from load balancer
      delegate_to: localhost
      elb_target:
        state: absent

    - name: Start new docker container
      docker_container:
        name: app
        pull: yes
        image: "application:{{ version }}"
        ports:
          - 8080:8080

    - name: Add to load balancer
      delegate_to: localhost
      elb_target:
        state: present

Walking through the playbook: we first run the database schema migrations (in this case with flask-migrate), then for every server three steps are performed:

  1. Remove the server from the load balancer and wait for it to no longer have any running requests
  2. Restart the container with the new version
  3. Add the server back to the load balancer
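
In a real run the elb_target tasks need a few more parameters so that Ansible knows which target group and which instance to act on; they were omitted in the playbook above for brevity. A sketch of what the removal task could look like, based on the community.aws.elb_target module (the target group name and variable names are assumptions):

```yaml
    - name: Remove from load balancer
      delegate_to: localhost
      elb_target:
        target_group_name: app-target-group  # hypothetical target group
        target_id: "{{ instance_id }}"       # EC2 instance id of this host
        state: absent
        target_status: unused                # wait for connection draining
        target_status_timeout: 300
```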

This is done for one server at a time so that users won't notice anything during the deploy. The deploy is triggered with the following command: ansible-playbook --extra-vars version=<tag> deploy.yaml. This can be done manually or by a CI system.

Docker images are built for every push to git by a CI system and pushed to a Docker container registry, from which each server can download the image.
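
The CI step is not shown in the post, but it essentially boils down to a build and a push (the registry URL, image name and tag variable below are placeholders, not from the project):

```shell
# Build and push the image for the current tag; $GIT_TAG is set by the CI system.
docker build -t registry.example.com/application:$GIT_TAG .
docker push registry.example.com/application:$GIT_TAG
```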

When running with this approach I have not seen any major problems during normal development of the application. This includes updating the code, running database migrations and adding new servers. Despite not having had any problems yet, this approach has some drawbacks compared to other solutions:

  • It is hard to add new services/other types of containers
  • It is hard to run periodic tasks
  • The environment cannot easily be scaled automatically
  • The application cannot automatically move containers from broken servers

For this particular application the advantages of simplicity far outweigh these limitations, but for other applications the tradeoffs might be different. It is also possible to move to an orchestration platform later, without too much work to throw away, if that is required in the future.

An almost complete and running example of this can be found on GitHub: https://github.com/mattiaslundberg/ansible-orchestration

Top comments (1)

anasri72 • Edited

Dear, I could not understand this part:

tasks:
  - name: Remove from load balancer
    delegate_to: localhost
    elb_target:
      state: absent

Would you please let me know how ansible knows from where to remove the server?