Shy Devy

[pytheus] simple multiprocess metrics for sync/async python applications

pytheus

A modern python library for collecting prometheus metrics for your application that works both in synchronous and asyncio programs.

docs: https://pythe.us

GitHub: Llandy3d / pytheus

experimenting with a new prometheus client for python


pytheus is a modern python library for collecting prometheus metrics built with multiprocessing in mind.

Some of the features are:

  • multiple options for multiprocess support:
    • Redis backend ✅
    • Rust-powered backend 🧪
    • bring your own ✅
  • support for default label values ✅
  • partial label values (built in an incremental way) ✅
  • customizable registry support ✅
  • registry prefix support ✅
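To make the "default label values" and "partial label values" bullets concrete, here is a toy pure-Python sketch of the pattern, i.e. binding some label values now and the rest later, with defaults filling the gaps. This is an illustration of the idea only, not the pytheus API; see https://pythe.us for the real interface.

```python
class ToyLabeledCounter:
    """Toy illustration of partial/default label binding (not the pytheus API)."""

    def __init__(self, required_labels, default_labels=None, _bound=None, _store=None):
        self.required_labels = set(required_labels)
        self.default_labels = default_labels or {}
        self._bound = _bound or {}
        # The store is shared between a counter and its label-bound children.
        self._store = _store if _store is not None else {}

    def labels(self, values):
        # Each call binds more label values incrementally ("partial labels").
        merged = {**self._bound, **values}
        return ToyLabeledCounter(
            self.required_labels, self.default_labels, merged, self._store
        )

    def inc(self, amount=1):
        # Defaults fill any labels not explicitly bound.
        final = {**self.default_labels, **self._bound}
        assert set(final) == self.required_labels, "missing label values"
        key = tuple(sorted(final.items()))
        self._store[key] = self._store.get(key, 0) + amount


c = ToyLabeledCounter(["method", "status"], default_labels={"status": "200"})
c.labels({"method": "GET"}).inc()        # default fills status="200"
partial = c.labels({"method": "POST"})   # bind one label now...
partial.labels({"status": "500"}).inc()  # ...and the rest later
```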

Philosophy

Simply put, it is to let you work with metrics the way you want.

Be extremely flexible, and allow users to customize anything they might want without resorting to hacks. Most importantly, offer the same API for single-process and multiprocess scenarios: switching should be as easy as loading a different backend, without changing anything else in the code.

  • What you see is what you get.
  • No differences between single process & multiprocess, the only change is…

Usage

Install with:

pip install pytheus

Create a counter and increment it:

from pytheus.metrics import Counter

cache_hit_total = Counter(name='cache_hit_total', description='desc')
cache_hit_total.inc()

Finally, generate the metrics:

from pytheus.exposition import generate_metrics

metrics = generate_metrics()

Full example with a Flask application:

import time
from flask import Flask, Response
from pytheus.metrics import Histogram
from pytheus.exposition import generate_metrics, PROMETHEUS_CONTENT_TYPE

app = Flask(__name__)

http_request_duration_seconds = Histogram(
    'http_request_duration_seconds', 'documenting the metric..'
)

@app.route('/metrics')
def metrics():
    data = generate_metrics()
    return Response(data, headers={'Content-Type': PROMETHEUS_CONTENT_TYPE})

# track time with the context manager
@app.route('/')
def home():
    with http_request_duration_seconds.time():
        return 'hello world!'

# alternatively you can also track time with the decorator shortcut
@app.route('/slow')
@http_request_duration_seconds
def slow():
    time.sleep(3)
    return 'hello world! from slow!'

app.run(host='0.0.0.0', port=8080)
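The `.time()` context manager above records how long the wrapped block took. As a rough sketch of how such a timing helper can be built in general (a generic illustration, not pytheus internals):

```python
import time
from contextlib import contextmanager


class ToyHistogram:
    """Toy stand-in that just collects observed durations."""

    def __init__(self):
        self.observations = []

    def observe(self, value):
        self.observations.append(value)

    @contextmanager
    def time(self):
        # Record elapsed wall-clock time when the block exits,
        # even if the block raises.
        start = time.perf_counter()
        try:
            yield
        finally:
            self.observe(time.perf_counter() - start)


h = ToyHistogram()
with h.time():
    time.sleep(0.01)  # stands in for the request handler's work
```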

Rust-powered multiprocess 🦀

For multiprocess Python you need to synchronize the metrics between all your workers (for example, gunicorn workers). This backend implementation makes it extremely easy for your application to support that, and it's also compatible with asyncio applications, as the processing happens in parallel under the hood!
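To see why synchronization is needed at all, here is a toy illustration of the problem the backend solves (plain data structures standing in for worker processes and the shared store, not pytheus internals):

```python
# Toy illustration: three workers each counting requests in their own memory.
workers = [{"http_requests_total": 0} for _ in range(3)]
for i, w in enumerate(workers):
    w["http_requests_total"] += 10 + i  # each process only sees its own count

# Scraping a single worker under-reports the true total:
one_worker_view = workers[0]["http_requests_total"]  # 10, not 33

# A shared backend (e.g. Redis) is a single store that every worker writes to,
# so any worker can serve the complete, aggregated value:
shared = {"http_requests_total": 0}
for i in range(3):
    shared["http_requests_total"] += 10 + i  # 33, the true total
```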

Install:

pip install pytheus-backend-rs

Load the backend:

from pytheus.backends import load_backend
from pytheus_backend_rs import RedisBackend

load_backend(
    backend_class=RedisBackend,
    backend_config={"host": "127.0.0.1", "port": 6379},
)

and that's it! Now, regardless of how many workers you use, the metrics will be in sync between them :)
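The feature list also mentions a "bring your own" backend. The exact interface to implement is documented at https://pythe.us; as a rough pure-Python sketch of the shape such a backend could take (the method names here are illustrative assumptions, not the verified pytheus interface):

```python
import threading


class InMemoryBackend:
    """Hypothetical sketch of a per-metric value backend: one number, thread-safe.
    Method names are illustrative assumptions, not the verified pytheus interface."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0.0

    def inc(self, value=1.0):
        with self._lock:
            self._value += value

    def dec(self, value=1.0):
        with self._lock:
            self._value -= value

    def set(self, value):
        with self._lock:
            self._value = value

    def get(self):
        with self._lock:
            return self._value


b = InMemoryBackend()
b.inc()
b.inc(2.5)
b.dec(0.5)
```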


FastAPI automatic metrics collection

The library offers a middleware for FastAPI applications that automatically collects metrics on HTTP request processing times and payload sizes, which you can then inspect by method (GET/POST/..), status_code (200, 400, ..) and route (/home).

Load the middleware

from fastapi import FastAPI
from pytheus.middleware import PytheusMiddlewareASGI


app = FastAPI()
app.add_middleware(PytheusMiddlewareASGI)

Create the endpoint for prometheus to collect metrics

from fastapi.responses import PlainTextResponse
from pytheus.exposition import generate_metrics


@app.get('/metrics', response_class=PlainTextResponse)
def pytheus_metrics():
    return generate_metrics()

If you visit the endpoint you will see metrics like:

# HELP http_request_duration_seconds duration of the http request
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="0.005"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="0.01"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="0.025"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="0.05"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="0.1"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="0.25"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="0.5"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="1"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="2.5"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="5"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="10"} 1.0
http_request_duration_seconds_bucket{method="GET",route="/metrics",status_code="200",le="+Inf"} 1.0
http_request_duration_seconds_sum{method="GET",route="/metrics",status_code="200"} 0.0014027919969521463
http_request_duration_seconds_count{method="GET",route="/metrics",status_code="200"} 1.0
# HELP http_request_size_bytes http request size
# TYPE http_request_size_bytes histogram
# HELP http_response_size_bytes http response size
# TYPE http_response_size_bytes histogram
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="10.0"} 0.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="100.0"} 0.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="1000.0"} 1.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="10000.0"} 1.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="100000.0"} 1.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="1000000.0"} 1.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="10000000.0"} 1.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="100000000.0"} 1.0
http_response_size_bytes_bucket{method="GET",route="/metrics",status_code="200",le="+Inf"} 1.0
http_response_size_bytes_sum{method="GET",route="/metrics",status_code="200"} 296.0
http_response_size_bytes_count{method="GET",route="/metrics",status_code="200"} 1.0
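One thing worth noting in the output above: Prometheus histogram buckets are cumulative. Each `le` ("less than or equal") bucket counts every observation at or below its bound, which is why all buckets read 1.0 for a single ~1.4 ms request. A quick sketch of that bookkeeping:

```python
# Prometheus-style cumulative histogram bookkeeping (illustrative).
bounds = [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10, float("inf")]


def observe(buckets, value):
    # An observation increments every bucket whose upper bound is >= the value,
    # so counts accumulate toward the +Inf bucket.
    for le in bounds:
        if value <= le:
            buckets[le] += 1


buckets = {le: 0 for le in bounds}
observe(buckets, 0.0014)  # the ~1.4 ms request from the output above
print(buckets[0.005], buckets[float("inf")])  # → 1 1
```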
