When Python Runs Your Containers Series:
- When Python Runs Your Containers (part 1)
- When Python Runs Your Containers (part 2)
Welcome, gentle reader. My name is Silent-Mobius, also known as Alex M. Schapelle. Today I'd like to introduce a small project that I developed while tinkering with various tools.
In this proof of concept, we set out to show that a Python/Flask application can manage a working node of a Linux distribution running a docker-compose environment.
Why would one wish to do this with Flask or Python in this age of DevOps wonders, you may wonder? As mentioned, I was just tinkering, while also considering cases where k8s is not an option, or where we do not have a multi-node environment but still want to scale up within our capabilities.
Then why not use the Docker API? The Docker API is a great choice for single API requests, but once logic comes into play, I preferred to create an event-driven application to manage the workflow of the container life cycle.
Initial recognition is due to @andreagrandi, who inspired me for this project: thank you for your docker-puller project.
Before we dive in, we should cover what the project is about and how we can use it.
Now, without further ado, let us dive into existing wisdom and recall the common knowledge objects (CKOs) gathered up until now:
CKO1: Docker has a Python SDK, which you can install with `pip3 install docker`, meaning that you can start, stop, and restart containers as well as pull and push images.
CKO2: Docker Hub, or any other container registry, can usually provide us with a webhook, which can include several data references, such as which image to pull, which container to restart, and so on.
CKO3: docker-compose also has a really fun API called `python-on-whales`, created and developed by @gabrieldemarmiesse; the gist of it is that you can also control docker-compose with Python.
CKO4: One may control the Docker engine of the HOST from a running container, by connecting the container to the Docker socket of the HOST (we'll show this later).
CKO5: Reading comments might reveal why things do not work, so stay alert.
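To make CKO2 concrete, here is a minimal sketch of parsing a Docker Hub push webhook to decide which image to pull. The payload is trimmed to the two fields we care about; the repository name used here is purely illustrative.

```python
import json

def parse_hub_webhook(payload):
    """Extract the image name and tag from a Docker Hub push webhook payload."""
    data = json.loads(payload)
    repo = data["repository"]["repo_name"]  # e.g. "silentmobius/mazin-docker" (hypothetical)
    tag = data["push_data"]["tag"]          # e.g. "latest"
    return repo, tag

# a trimmed-down example payload in the Docker Hub push-event shape
example = json.dumps({
    "push_data": {"tag": "latest"},
    "repository": {"repo_name": "silentmobius/mazin-docker"},
})
print(parse_hub_webhook(example))  # ('silentmobius/mazin-docker', 'latest')
```

A real payload carries many more fields (pusher, timestamps, callback URL), but these two are enough to drive a pull-and-restart workflow.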
From here on, we'll combine all the CKOs and hope to tell the tale of one happy python that was able to manage a swarm of whales.
===========================================
We begin, as all beginnings, by setting up a repository where the basis of our project will be saved and later used as the source for our integration and deployment. Here are the fundamental commands:
mkdir mazin-docker
cd mazin-docker
git init --initial-branch=main
git config user.name "silent-mobius"
git config user.email "alex@mobiusdevteam.com"
touch README.md LICENSE .gitignore
git add .
git commit -m "initial commit"
# add a remote (e.g. on GitLab) before pushing
git push
I leave it up to you to realize how these contribute to our humble beginning.
Considering all the CKOs, we'll set up the existing project inside our repository:
# clone @andreagrandi's project into our project
# remember to stay under our original project folder for now
git clone https://github.com/andreagrandi/docker-puller
For the purpose of using this project we will need some dependencies set up, so we install python3, python3-pip, python3-docker, python3-flask, gunicorn, and nginx. Those familiar with my scripts know that I only use Linux-based distributions, and this case is no exception. We'll add another layer to keep this project clean: a package named pipenv:
# I am working on fedora36 on this project
# while still being in our projects home folder
sudo dnf install -y python3 python3-pip pipenv nginx
pipenv shell
(docker-agent) pip install gunicorn flask docker
# parenthesis show that we work in virtual environment
(docker-agent) pip freeze > requirements.txt
Once this is done, we can start making our modifications; for now we work on the application alone. Let's open the app.py file and change the end of it, or you can copy the code below:
# I prefer to run the application with gunicorn,
# so the original application is changed a little bit
import os
import json
import logging
import subprocess

from flask import Flask
from flask import request
from flask import jsonify

logging.basicConfig(level=logging.DEBUG)

app = Flask(__name__)
app.config["DEBUG"] = True


def load_config():
    with open('config.json') as conf:
        return json.load(conf)


@app.route('/', methods=['GET', 'POST'])
def listen():
    config = load_config()
    if request.method == 'GET':
        return jsonify(success=True, message="Agent Is Running"), 200
    if request.method == 'POST':
        token = request.args.get('token')
        app.logger.debug(type(token))
        if token == config['token']:
            hook = request.args.get('hook')
            image = request.args.get('image')
            if image:
                os.environ['image'] = image
            else:
                return jsonify(success=False, error="Missing Image Name"), 400
            if hook:
                hook_value = config['hooks'].get(hook)
                if hook_value:
                    try:
                        child = subprocess.run(hook_value)
                        return jsonify(success=True, message=child.returncode), 200
                    except OSError as e:
                        return jsonify(success=False, error=str(e)), 400
                else:
                    return jsonify(success=False, error="Hook not found"), 404
            else:
                return jsonify(success=False, error="Invalid request: missing hook"), 400
        else:
            return jsonify(success=False, error="Invalid token"), 400
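The token-and-hook decision tree at the heart of listen() can be factored into a small pure function, which makes it easy to unit-test without running Flask. Here is a minimal sketch; the dispatch function and its tuple return shape are my own illustration, not part of the original application:

```python
def dispatch(config, token, hook, image):
    """Mirror listen()'s decision tree: return (success, message, http_status)."""
    if token != config["token"]:
        return (False, "Invalid token", 400)
    if not image:
        return (False, "Missing Image Name", 400)
    if not hook:
        return (False, "Invalid request: missing hook", 400)
    hook_value = config["hooks"].get(hook)
    if not hook_value:
        return (False, "Hook not found", 404)
    # in the real app this is where subprocess.run(hook_value) happens
    return (True, hook_value, 200)

config = {"token": "abc123", "hooks": {"docker-pull": "scripts/docker-pull.py"}}
print(dispatch(config, "abc123", "docker-pull", "ubuntu"))
# (True, 'scripts/docker-pull.py', 200)
print(dispatch(config, "wrong-token", "docker-pull", "ubuntu"))
# (False, 'Invalid token', 400)
```

Keeping the branching logic separate from Flask and subprocess also makes the error paths (bad token, missing image, unknown hook) trivial to cover in tests.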
In addition to the changes to the application, we need to add the custom hook that this application will be using. Let's call this script docker-pull.py:
(docker-agent) vi docker-puller/scripts/docker-pull.py
#!/usr/bin/env python3
import os
import sys
import pip
import logging
import argparse
import subprocess

import docker

# image name may be handed over by the Flask agent via the environment
ENV_VAR_IMAGE = os.environ.get('image')

logging.basicConfig(level=logging.DEBUG)


def main(image):
    logging.info('Pulling image: %s', image)
    if image:
        logging.info('Image passed as argument')
        pull_status = pull(image)
        if pull_status:
            restart()
    elif ENV_VAR_IMAGE:
        logging.info('Image passed as environment variable')
        pull_status = pull(ENV_VAR_IMAGE)
        if pull_status:
            restart()
    else:
        print('No Image Provided')
        sys.exit()


def install(pkg):
    """Install a Python package at runtime (helper kept from the original script)."""
    logging.info('Installing: %s', pkg)
    if hasattr(pip, 'main'):
        pip.main(['install', pkg])
    else:
        pip._internal.main(['install', pkg])
    return True


def pull(image):
    client = docker.from_env()
    status = client.images.pull(image, tag='latest')
    return bool(status)


def restart():
    result = subprocess.call('systemctl restart mkdocs-compose.service', shell=True)
    return result == 0


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--image", help='Image name to pull from the remote registry', type=str)
    args = parser.parse_args()
    main(args.image)
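The script's precedence rule (the --image argument wins, the 'image' environment variable set by the Flask agent is the fallback) can be expressed as a tiny helper, handy for testing without touching Docker. This is a sketch; resolve_image is my name for it, not part of the script above:

```python
import os

def resolve_image(cli_image, env=None):
    """Return the image to pull: the --image argument wins, then the
    'image' environment variable set by the Flask agent, else None."""
    env = os.environ if env is None else env
    return cli_image or env.get("image")

print(resolve_image("ubuntu", {"image": "nginx"}))  # ubuntu (CLI argument wins)
print(resolve_image(None, {"image": "nginx"}))      # nginx (environment fallback)
print(resolve_image(None, {}))                      # None (nothing provided)
```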
The docker-pull.py hook script will work on any Linux system when the script is called with the image parameter.
The remaining issue is that the application does not yet know about the docker-pull.py script. Part of the application is built with configuration in mind, saved in config.json, so we need to add the required configuration to the config.json file:
(docker-agent) vi docker-puller/config.json
{
    "host": "0.0.0.0",
    "port": 8080,
    "token": "abc123",
    "hooks": {
        "docker-pull": "scripts/docker-pull.py"
    }
}
Note that JSON does not allow comments; the token value (abc123 here) is yours to choose, so pick any value you want.
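Since a malformed config.json would otherwise only surface when the first webhook arrives, it can be worth validating it at startup. A minimal sketch of such a check; the required-keys list is my assumption, mirroring the fields above:

```python
import json
import tempfile

REQUIRED_KEYS = ("host", "port", "token", "hooks")

def validate_config(path):
    """Load the agent config and fail fast if a required key is missing."""
    with open(path) as conf:
        config = json.load(conf)
    missing = [key for key in REQUIRED_KEYS if key not in config]
    if missing:
        raise ValueError(f"config is missing keys: {missing}")
    return config

# quick self-check with a throwaway config file
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"host": "0.0.0.0", "port": 8080, "token": "abc123",
               "hooks": {"docker-pull": "scripts/docker-pull.py"}}, tmp)
print(validate_config(tmp.name)["token"])  # abc123
```

Calling validate_config() from load_config() in app.py would turn a vague 500 at webhook time into a clear error at service start.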
As mentioned, we will run the project as a service, which means we'll need to set up a Linux system service based on systemd:
# let's create the service file
(docker-agent) sudo vi /etc/systemd/system/docker-agent.service
Let's add the service content to the file:
[Unit]
Description=Docker-Agent service for pulling images from any container registry
After=network.target

[Service]
# for now we'll use the root user; it is not that secure, but we'll fix it later
User=root
Group=root
WorkingDirectory=/opt/docker-agent/docker-puller
# As mentioned, gunicorn runs the Python/Flask application
# as a service and binds it to a socket file
ExecStart=/usr/local/bin/gunicorn -w 3 -b unix:/opt/docker-agent/docker-puller/docker-agent.sock app:app
# we'll configure nginx later to connect to the socket file

[Install]
WantedBy=multi-user.target
Save the file, reload the system daemon with sudo systemctl daemon-reload, and enable and start the service with sudo systemctl enable --now docker-agent.service. Just in case, verify that it is working with sudo systemctl status docker-agent.
What does this provide us, one may ask? Essentially, the application now runs as a service, and systemd will guarantee that the service process keeps running like any other service on a Linux system.
Yet how will it communicate with the world? This is where nginx provides gateway functionality, by connecting to the application's socket.
Let us bring up the gateway that connects us to the docker-agent service via the nginx service.
(docker-agent) sudo systemctl enable --now nginx.service
(docker-agent) sudo vi /etc/nginx/conf.d/proxy.conf
Add the proxy config to nginx and restart the service
server {
    listen 80;
    server_name docker-agent;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /opt/docker-agent/docker-puller;
    }

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/opt/docker-agent/docker-puller/docker-agent.sock;
    }
}
(docker-agent) sudo nginx -t # check if errors present
(docker-agent) sudo systemctl restart nginx.service
After all that work, the only thing left is to test everything we have created. Note that the application reads the image name from the image query parameter:
curl localhost # should receive JSON with success
curl -X POST "localhost/?token=abc123&hook=docker-pull&image=hello-world"
# docker-agent pulls the hello-world image with the docker-pull hook
curl -X POST "localhost/?token=abc123&hook=docker-pull&image=ubuntu"
# docker-agent pulls the ubuntu image with the docker-pull hook
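If you prefer to trigger the hook from Python instead of curl (for instance from a CI job), the same request can be assembled with the standard library. A sketch assuming the agent is reachable on localhost; build_hook_request is my own helper name:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_hook_request(base_url, token, hook, image):
    """Assemble the POST request the docker-agent expects."""
    query = urlencode({"token": token, "hook": hook, "image": image})
    return Request(f"{base_url}/?{query}", method="POST")

req = build_hook_request("http://localhost", "abc123", "docker-pull", "ubuntu")
print(req.method, req.full_url)
# POST http://localhost/?token=abc123&hook=docker-pull&image=ubuntu
```

Passing the built request to urllib.request.urlopen(req) would send it, returning the JSON success or error body we defined in app.py.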
===========================================
This is where our story pauses and we'll conclude it with a summary and future predictions.
We have set up a development environment with pipenv and installed a bunch of packages. We have also set up the nginx service, which serves as our gateway, and the docker-agent service, which serves as a system to pull docker images. To keep track of everything, we created a GitLab repository where we save the project.
We do not know what the future holds, but for this project, working like this is not acceptable in the long run. Since the project is currently not enough, we'll create a deployment with ansible/helm and a pipeline with gitlab-ci/Jenkins, eventually packing it all into a docker container image to be used for management.
Thank you, gentle reader, for joining me on this journey and hopefully embarking on new adventures with lots of things to learn and enrich ourselves. For now, gentle reader, I bid you farewell, and I hope to see you shortly with the promised continuation of our path. Until then, please remember to have some fun and never stop learning.