Prometheus: Python metrics (with Docker and Gitlab CI)

Prometheus is a metrics-based monitoring platform, and it's one of my all-time favorite tools. For a while now I've been thinking about building a project with it. What's my plan? I want to create a multi-language application cluster with Prometheus monitoring, and then add some Grafana Loki, Cortex and Thanos integrations.

What's the first step? Integrate a Prometheus metrics library with a basic Python app. To get there, I'll just take the Prometheus "client_python" sample, build a container with it and push it into a public registry. So...

First step! Create a Gitlab repository

Second step! Create an "app" folder and copy-paste this code into a "":

from prometheus_client import start_http_server, Summary
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
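Before wiring this into Docker, it helps to see what the Summary metric actually emits. Here's a minimal sketch (the private `registry` and the explicit `observe()` calls are mine, just for illustration — the sample app above uses the default global registry and the `@REQUEST_TIME.time()` decorator instead):

```python
from prometheus_client import CollectorRegistry, Summary, generate_latest

# A private registry keeps this sketch self-contained.
registry = CollectorRegistry()
REQUEST_TIME = Summary('request_processing_seconds',
                       'Time spent processing request',
                       registry=registry)

# Record two observations, as the decorated function does on each call.
REQUEST_TIME.observe(0.25)
REQUEST_TIME.observe(0.75)

# generate_latest() renders the same exposition text the HTTP server serves.
output = generate_latest(registry).decode()
print(output)
```

A Summary tracks a running count and sum, so the output contains a `request_processing_seconds_count` series (2 observations) and a `request_processing_seconds_sum` series (1.0 second total).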

Add a "requisites.txt" file with this content:

prometheus_client


Add a "Dockerfile" like this one:

FROM python:3.9-alpine

COPY . .
RUN pip install -r requisites.txt

RUN chmod u+x

ENTRYPOINT ["/app/"]

Third step! Create a Gitlab CI with Container Registry pipeline. To manage this task, I created a ".gitlab-ci.yml" file in the repository root:

stages:
  - build

image: docker:stable

services:
  - docker:dind

build:
  stage: build
  when: on_success
  only:
    - master
  script:
    # Note: a "docker login" against $CI_REGISTRY is usually needed before pushing.
    - docker build -f app/Dockerfile -t $CI_REGISTRY_IMAGE app
    - docker push $CI_REGISTRY_IMAGE

This repository should look like this:

├── app
│   ├── Dockerfile
│   ├──
│   └── requisites.txt
└── .gitlab-ci.yml

Now, let's commit all the files and wait until the pipeline finishes.

Fourth and last step! Run that image and scrape some metrics:

Run the docker container as a daemon, named "python-prom", listening on 8000/TCP, and delete the container when it stops:

$> docker run -d --rm --name python-prom -p 8000:8000

Check that the container is up & running (and listening on that port):

$> docker ps

CONTAINER ID   IMAGE                                      COMMAND          CREATED              STATUS              PORTS                                       NAMES
380cc1a00a8b   "/app/"   About a minute ago   Up About a minute>8000/tcp, :::8000->8000/tcp   python-prom
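If you want to sanity-check the client library without Docker, the same HTTP endpoint can be started in-process. This is just a sketch: port 8001 and the `demo_hits` counter are made up for the example (the article's container listens on 8000):

```python
from prometheus_client import start_http_server, Counter, CollectorRegistry
import urllib.request

# Private registry and hypothetical metric, only for this demo.
registry = CollectorRegistry()
HITS = Counter('demo_hits', 'Demo counter', registry=registry)

# Serve metrics on a background thread, then record one hit.
start_http_server(8001, registry=registry)
HITS.inc()

# Fetch the exposition text exactly as Prometheus (or curl) would.
body = urllib.request.urlopen('http://localhost:8001/metrics').read().decode()
print(body)
```

The fetched body contains `demo_hits_total 1.0` — note that client_python appends the `_total` suffix to Counter names in the exposition format.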

And scrape those metrics!

$> curl localhost:8000

# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 309.0
python_gc_objects_collected_total{generation="1"} 43.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 36.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0
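These exposition lines are machine-parseable too: client_python ships a text-format parser, which is handy for smoke tests. A small sketch, feeding it a fragment of the output above:

```python
from prometheus_client.parser import text_string_to_metric_families

# A fragment of the scraped exposition text, pasted verbatim.
scraped = """\
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 309.0
python_gc_objects_collected_total{generation="1"} 43.0
python_gc_objects_collected_total{generation="2"} 0.0
"""

# Each metric family yields samples with a name, a label dict and a value.
samples = [s for family in text_string_to_metric_families(scraped)
           for s in family.samples]
for s in samples:
    print(s.name, s.labels, s.value)
```

This parses one counter family with three samples, one per GC generation.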

Our Python Prometheus base image is done.

In my next posts, I will create Go and Java base Prometheus applications, and then all three of these base images will be deployed into a Kubernetes cluster alongside Prometheus to build a Prometheus workbench with them.
