This article is intended to provide background for other articles I plan on writing.
I have used Red Hat Single Sign-On (KeyCloak) off and on for a few years now and have had good experiences with it. It can be a little overwhelming, though, for developers who don't have experience with Identity and Access Management (IAM). Having a good reference architecture readily available is invaluable for demonstrating how it works. So, I decided to create a Python microservices prototype using FastAPI, SSO, and an API Gateway. Prior to starting, I hadn't yet tried Kong and decided to use it as the API gateway for the prototype.
To start off, I began looking for example implementations with KeyCloak and Kong and found this gem of an article. It's great for getting KeyCloak and Kong to work together. The instructions were clear, and I didn't have to figure out the version issues right away. Those conflicts came later when I wanted additional features.
But it was clear that the author's choice of docker-compose and curl wouldn't work for my needs. docker-compose alone could not set up the integration between KeyCloak and Kong. I wanted something that would let users bring up the whole stack with a single command. This is why Ansible was the better choice.
Objectives
Security is difficult without automation, and skipping that automation slows work down. It's also much harder to collaborate if development environments are not consistent. With that in mind, the deployment had to meet the following objectives:
- Simple to create and tear down environments
- Must encapsulate deployment commands
- Allow deployment to co-exist with application
- Allow deployment to scale with application
What is Ansible?
For newcomers, Ansible can best be described as a tool that specializes in orchestration, configuration management, and automation. It is agentless, allowing resources to be managed without requiring that client software be installed on them. It also ships with many built-in integrations (called modules) - including modules for Docker.
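For orientation, here is a minimal playbook run against the local machine. It isn't part of the prototype, just a sketch of what a playbook looks like:

---
- name: Minimal local playbook
  hosts: localhost
  connection: local
  tasks:
    - name: Verify the target host is reachable
      ping: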
Comparing Ansible and docker-compose
Caveat: the Ansible here is written entirely as playbooks. For simplicity it uses a minimal amount of variables and no external roles. I plan to write an additional article later on reducing redundancy with Ansible roles.
Building Docker images is still painless. When docker-compose up is run, it will automatically build from any Dockerfile whose image is not already available. Ansible requires that the docker_image module be given the instructions for the images to be built.
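For comparison, a docker-compose service that builds the same image might look roughly like this; the image tag and the ../kong build path are taken from the Ansible task below, while the service name is illustrative:

services:
  kong:
    image: kong:0.14.1-centos-oidc
    build:
      context: ../kong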
Example Docker build using Ansible:
- name: Build Kong OIDC image
  docker_image:
    name: kong:0.14.1-centos-oidc
    source: build
    build:
      pull: yes
      path: ../kong
Setting up Docker resources is relatively the same. Both docker-compose and Ansible can set up resources such as networks and volumes within Docker. The YAML used by both is relatively similar.
Example docker-compose for building resources from jerney.io:
networks:
  keycloak-net:
volumes:
  keycloak-datastore:
Example using docker_network and docker_volume:
- name: Setup network for KeyCloak
  docker_network:
    name: keycloak-net
    state: "{{ sso_network_state | default('present') }}"

- name: Setup volumes for KeyCloak
  docker_volume:
    name: keycloak-volume
    state: "{{ sso_volume_state | default('present') }}"
In the above examples you can see that Ansible is a bit more verbose. But much of this can be simplified further, for example by driving similar tasks from a loop, as sketched below.
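As an illustration (not how the prototype is actually written), the two networks used later in this article could be created from a single looped task; the network_state variable here is hypothetical:

- name: Setup Docker networks
  docker_network:
    name: "{{ item }}"
    # network_state is a hypothetical variable for this sketch
    state: "{{ network_state | default('present') }}"
  loop:
    - keycloak-net
    - api-net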
Container provisioning is a bit better in docker-compose. KeyCloak requires a database to operate. Ensuring containers are deployed in order is easily done in docker-compose with the 'depends_on' keyword, so docker-compose is efficient at ensuring the database container is started before the KeyCloak container.
Example setting up the KeyCloak database from jerney.io:
services:
  ...
  keycloak-db:
    image: postgres:9.6
    volumes:
      - keycloak-datastore:/var/lib/postgresql/data
    networks:
      - keycloak-net
    ports:
      - "25432:5432"
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password
Example setting up KeyCloak from jerney.io:
services:
  ...
  keycloak:
    image: jboss/keycloak:4.5.0.Final
    depends_on:
      - keycloak-db
    networks:
      - keycloak-net
    ports:
      - "8180:8080"
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: keycloak-db
      DB_PORT: 5432
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
The Ansible code required to set up the KeyCloak containers is again similar to that of docker-compose, with two differences. First, the docker_container_info module is used to determine whether the database container is already deployed. If it isn't, the docker_container module is used to pull, set up, and start the image.
Example starting the KeyCloak database with Ansible:
---
- name: Check if KeyCloak DB is running
  docker_container_info:
    name: keycloak-db
  register: keycloak_db_state

- block:
    - name: Start KeyCloak DB
      docker_container:
        name: keycloak-db
        image: postgres:9.6
        volumes:
          - keycloak-datastore:/var/lib/postgresql/data
        networks_cli_compatible: true
        networks:
          - name: keycloak-net
        published_ports:
          - '25432:5432'
        env:
          POSTGRES_DB: keycloak
          POSTGRES_USER: keycloak
          POSTGRES_PASSWORD: password
      register: keycloak_db_register
...
Second, the wait_for module is used to ensure that the database is accepting connections before continuing.
Example waiting for the database port with Ansible:
...
    - name: Wait for KeyCloak DB to accept connections
      wait_for:
        host: "{{ keycloak_db_register['ansible_facts']\
                  ['docker_container']\
                  ['NetworkSettings']\
                  ['Networks']\
                  ['keycloak-net']\
                  ['IPAddress'] }}"
        port: 5432
        state: started
        connect_timeout: 1
        timeout: 30
      register: keycloak_db_running
      until: keycloak_db_running is success
      retries: 10
  when: not keycloak_db_state.exists
The KeyCloak container can then be provisioned once the database is operational. The process is identical to initializing the database, with both the docker_container_info and docker_container modules being used again.
Example starting KeyCloak with Ansible:
- name: Check if KeyCloak is running
  docker_container_info:
    name: keycloak
  register: keycloak_state

- block:
    - name: Start KeyCloak
      docker_container:
        name: keycloak
        image: jboss/keycloak:7.0.0
        networks_cli_compatible: true
        networks:
          - name: keycloak-net
            links:
              - keycloak-db
          - name: api-net
            links:
              - webapp
              - kong
        ports:
          - '8080:8080'
        env:
          DB_VENDOR: POSTGRES
          DB_ADDR: keycloak-db
          DB_PORT: '5432'
          DB_DATABASE: keycloak
          DB_USER: keycloak
          DB_PASSWORD: password
          KEYCLOAK_USER: admin
          KEYCLOAK_PASSWORD: admin
      register: keycloak_register

    - name: Wait for KeyCloak to accept connections
      wait_for:
        host: "{{ keycloak_register['ansible_facts']\
                  ['docker_container']\
                  ['NetworkSettings']\
                  ['Networks']\
                  ['keycloak-net']\
                  ['IPAddress'] }}"
        port: 8080
        state: started
        connect_timeout: 1
        timeout: 30
      register: keycloak_running
      until: keycloak_running is success
      retries: 10
  when: not keycloak_state.exists
Additional Configuration with Ansible
So, if you have been following up to now, you may wonder where the real benefits of using Ansible are. Ansible is not quite as concise at managing Docker as docker-compose, and it is a bit more verbose. But it is in the post-setup configuration where Ansible shines and docker-compose is essentially a no-show.
Here docker-compose has no equivalent of curl or JSON handling. Ansible, on the other hand, provides the uri module and native JSON support to perform these additional tasks.
Example retrieving a login token from KeyCloak:
- name: Authenticate with KeyCloak
  uri:
    url: "http://localhost:8080/auth/realms/master\
          /protocol/openid-connect/token"
    method: POST
    body_format: form-urlencoded
    body:
      client_id: admin-cli
      username: admin
      password: admin
      grant_type: password
    return_content: yes
  until: sso_auth.status != -1
  retries: 10
  delay: 1
  register: sso_auth

- name: Set KeyCloak access token
  set_fact:
    token: "{{ sso_auth.json.access_token }}"
...
Example creating a KeyCloak client for Kong:
...
- name: Create KeyCloak client
  keycloak_client:
    auth_client_id: admin-cli
    auth_keycloak_url: http://localhost:8080/auth
    auth_realm: master
    auth_username: admin
    auth_password: admin
    client_id: api-gw
    id: "{{ sid }}"
    protocol: openid-connect
    public_client: false
    root_url: http://localhost:8000
    redirect_uris:
      - http://localhost:8000/mock/*
    direct_access_grants_enabled: true
    standard_flow_enabled: true
    client_authenticator_type: client-secret
    state: present
  register: client_register
...
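The remaining step is handing those client credentials to Kong. The prototype does this with Ansible as well; the task below is only a hedged sketch of what that can look like with the uri module against Kong's admin API. The plugin name, the admin port 8001, the discovery URL, and the client_secret variable are assumptions for illustration, not values taken from the prototype:

- name: Configure the OIDC plugin on Kong (illustrative sketch)
  uri:
    url: http://localhost:8001/plugins
    method: POST
    body_format: form-urlencoded
    body:
      name: oidc
      config.client_id: api-gw
      # client_secret is a hypothetical variable holding the secret fetched from KeyCloak
      config.client_secret: "{{ client_secret }}"
      config.discovery: "http://keycloak:8080/auth/realms/master/.well-known/openid-configuration"
    status_code: 201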
With this layout, provisioning the full stack with Ansible requires only one command. Comparatively, provisioning with docker-compose requires separate curl commands to create the client, fetch the client_secret, and register it with Kong. A task runner such as automake, rake, pyinvoke, or even plain Bash could wrap those steps, but docker-compose still couldn't do it alone.
Example provisioning command with Ansible:
# ansible-playbook -i localhost, sso/deploy.yml
Please view the prototype to test it out: kuwv/python-microservices - Python Microservices with OpenID-Connect/OAuth2.
When to use one or the other
If you develop on POSIX systems, such as Linux or Mac, using Ansible with Docker might just be easier.
If you develop on Windows systems then using either Vagrant with Ansible or a task runner with docker-compose might work best.
Also, if you develop on Swarm then docker-compose might just be your comfort zone. But take a look at the docker_swarm module if you're curious; a minimal sketch follows below.
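As a taste, and only as a sketch rather than anything from the prototype, initializing a single-node swarm with that module looks roughly like this:

- name: Initialize a single-node swarm
  docker_swarm:
    state: present
    advertise_addr: 127.0.0.1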
This is just my opinion of course.
Summary
The problem with using docker-compose alone is that it doesn't provide any automation capabilities outside of managing Docker. Interfacing with APIs or running additional post-configuration tasks requires additional tools. This can mean provisioning an image with your configuration management tool of choice or using a task runner such as pyinvoke or rake locally.
Comparing docker-compose to Ansible is probably unfair since it competes more with Vagrant for developer mind space - I guess. But, where Vagrant has integrations with configuration management tools, docker-compose requires additional images to be deployed with those tools instead.