
Chris White


Python Deployment: WSGI with uWSGI

In the previous installment I looked at gunicorn as a candidate for serving WSGI applications. As mentioned in the WSGI discussion, integration with WSGI applications can be achieved either in pure Python (gunicorn's approach) or through Python's C API. In this article I'll look at uWSGI, which takes the C API approach.

Installation

uWSGI is a C-based application that interacts with Python's C API. That means your distro will need a compiler toolchain as well as the Python header files and library. As an example, on Debian-based systems:

$ apt-get install build-essential python3-dev

If you use something like pyenv or manage your Python via source compilation, then the python3-dev part will not be necessary. Given uWSGI's development and release pace, I tend to build from the master branch. I then use the setup.py installation process, since it tracks a few Python locations that you'd otherwise have to specify on the command line:

$ git clone https://github.com/unbit/uwsgi.git
$ cd uwsgi
$ python3 setup.py install

The Somewhat Basics

uWSGI has a fairly extensive set of features bound to configuration options. Here's an example run with a basic app:

wsgi_test.py

def application(env, start_response):
    data = b'Hello World'

    status = '200 OK'
    response_headers = [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(data))),
    ]
    start_response(status, response_headers)
    return [data]
Now to run it:
$ uwsgi --http :8000 --wsgi-file wsgi_test.py --master --processes 2 --threads 2
*** Starting uWSGI 2.1-dev+31c6c430 (64bit) on [Sun Aug 27 15:59:09 2023] ***
compiled with version: 10.2.1 20210110 on 17 August 2023 02:44:19
os: Linux-6.1.21-v8+ #1642 SMP PREEMPT Mon Apr  3 17:24:16 BST 2023
nodename: raspberrypi
machine: aarch64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
<snip>
Python version: 3.11.4 (main, Aug 17 2023, 03:18:09) [GCC 10.2.1 20210110]
Python main interpreter initialized at 0x7f89fc5b30
dropping root privileges after plugin initialization
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
your request buffer size is 4096 bytes
mapped 250368 bytes (244 KB) for 4 cores
*** Operational MODE: preforking+threaded ***
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x7f89fc5b30 pid: 56323 (default app)
dropping root privileges after application loading
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 56323)
spawned uWSGI worker 1 (pid: 56368, cores: 2)
spawned uWSGI worker 2 (pid: 56369, cores: 2)
spawned uWSGI http 1 (pid: 56370)

So there's certainly a lot going on here. For the options:

  • --http :8000 tells uWSGI to run an HTTP server on port 8000
  • --wsgi-file wsgi_test.py points to a Python file containing a WSGI application
  • --master ensures a master process is available to manage workers
  • --processes 2 the number of worker processes to spin up, in this case 2
  • --threads 2 the number of threads per process, in this case 2

The numbers here are fairly small compared to the usual settings you'd see, as this is running on a Raspberry Pi. One thing to note is that when pointing uWSGI at a WSGI Python file, it looks for a callable named application by default. If we named it app instead, we'd use the --callable option to point to it:

$ uwsgi --http :8000 --wsgi-file wsgi_test.py --callable app --master --processes 2 --threads 2

With a simple curl run we get:

$ curl -v http://127.0.0.1:8000
*   Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Content-Length: 11
< 
* Connection #0 to host 127.0.0.1 left intact
Hello World

Configuration

Given the number of available arguments, and for easier maintainability, you can also use a configuration file in ini format like so:

[uwsgi]
http = :8000
wsgi-file = wsgi_test.py
processes = 2
threads = 2
master = true

Now simply pass in the config file to uwsgi:

$ uwsgi ./uwsgi-config.ini

Notable HTTP Features

Here I'll look at the ability to support chunked input, chunked output, and range headers. These features may or may not be necessary depending on your use case.

Chunked Input Support

Support for chunked input via WSGI is available through the http11-socket and wsgi-manage-chunked-input flags:

wsgi_chunked_input.py

def application(environ, start_response):
    input = environ['wsgi.input']
    with open('test.json', 'wb') as stream_fp:
        stream_fp.write(input.read())

    status = '200 OK'
    body = b'Hello World\n'
    response_headers = [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ]
    start_response(status, response_headers)
    return [body]
And the matching configuration:
[uwsgi]
http11-socket = :8000
wsgi-manage-chunked-input = true
wsgi-file = wsgi_chunked_input.py
processes = 2
threads = 2
master = true

The reason this is a WSGI-specific option is that uWSGI has its own native API for handling chunked input. Here's an example curl call with a 25MB JSON file, which the application above writes back out to test.json:

$ curl -v -H "Transfer-Encoding: chunked" -d @large-file.json http://127.0.0.1:8000
*   Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST / HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.74.0
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
> 
* Done waiting for 100-continue
* Signaling end of chunked upload via terminating chunk.
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Content-Length: 12
< 
Hello World
* Connection #0 to host 127.0.0.1 left intact
$ ls -lah test.json 
-rw-r--r-- 1 cwprogram cwprogram 25M Aug 27 19:35 test.json

Chunked Response Support

Chunked responses require a few options to be set:

[uwsgi]
http = :8000
wsgi-file = wsgi_chunked_output.py
processes = 2
threads = 2
master = true
route-run = chunked:
route-run = last:

This utilizes the uWSGI transformations feature to enable chunked encoding. For the WSGI code:

wsgi_chunked_output.py

class TestIter(object):

    def __iter__(self):
        lines = [b'line 1\n', b'line 2\n']
        for line in lines:
            yield line

def application(environ, start_response):
    status = '200 OK'
    response_headers = [
        ('Content-type', 'text/plain')
    ]
    start_response(status, response_headers)
    return TestIter()

A simple curl call confirms the chunked encoding works as expected:

$ curl -iv --raw http://127.0.0.1:8000
*   Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Content-type: text/plain
Content-type: text/plain
< Transfer-Encoding: chunked
Transfer-Encoding: chunked

< 
7
line 1

7
line 2

0
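The raw output above shows the chunked wire format: each chunk is prefixed with its length in hex, followed by the data, and the stream ends with a zero-length chunk. As a quick illustration of that framing (my own sketch, not uWSGI code), here's an encoder that reproduces what curl displayed:

```python
def encode_chunked(chunks):
    """Encode an iterable of byte strings using HTTP/1.1 chunked
    transfer encoding: hex length, CRLF, data, CRLF, terminated
    by a zero-length chunk."""
    out = b''
    for chunk in chunks:
        if chunk:  # an empty chunk would terminate the stream early
            out += b'%x\r\n%s\r\n' % (len(chunk), chunk)
    return out + b'0\r\n\r\n'

# The two 7-byte lines from TestIter produce the framing curl showed
encode_chunked([b'line 1\n', b'line 2\n'])
```

This is why the WSGI app can omit Content-Length: the transformation layer frames each iterated chunk on the fly.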

Range

If you want to handle ranges with dynamically generated content, then you'll want to use the werkzeug module's parse_range_header function:

wsgi_range.py

import os

from werkzeug.http import parse_range_header

def application(environ, start_response):
    # parse_range_header returns None if the header is missing or
    # invalid; a production app should handle that case
    byte_range = parse_range_header(environ.get('HTTP_RANGE'))
    start, end = byte_range.ranges[0]  # end is exclusive in werkzeug

    file_size = os.path.getsize('large-file.json')
    with open('large-file.json', 'rb') as stream_fp:
        stream_fp.seek(start)
        data = stream_fp.read(end - start)

    # range responses should use 206 along with a Content-Range header
    status = '206 Partial Content'
    response_headers = [
        ('Content-Type', 'application/json'),
        ('Content-Length', str(len(data))),
        ('Content-Range', 'bytes %d-%d/%d' % (start, end - 1, file_size)),
    ]
    start_response(status, response_headers)
    return [data]

Outside of that, the expectation is that you want ranges for static content such as video files. Simply enable the honour-range option along with a folder to check for static files, like so:

[uwsgi]
http = :8000
honour-range = true
check-static = /home/uwsgi/static/
wsgi-file = wsgi_range.py
processes = 2
threads = 2
master = true

When run with curl using a range option on a file that's present in the static directory:

$ curl -v -r 1200-1299 http://127.0.0.1:8000/large-file.json > result.json
*   Trying 127.0.0.1:8000...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /large-file.json HTTP/1.1
> Host: 127.0.0.1:8000
> Range: bytes=1200-1299
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 206 Partial Content
< Content-Type: application/json
< Content-Length: 100
< Content-Range: bytes 1200-1299/26141343
< Last-Modified: Sat, 19 Aug 2023 00:48:33 GMT
< 
{ [100 bytes data]
100   100  100   100    0     0  33333      0 --:--:-- --:--:-- --:--:-- 33333
* Connection #0 to host 127.0.0.1 left intact
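Note the header math in that response: HTTP byte ranges are inclusive on both ends, so bytes=1200-1299 yields 100 bytes, and Content-Range reports the inclusive range over the total file size. A small helper illustrating that arithmetic (names here are my own):

```python
def range_slice(data, start, end):
    """Slice out an inclusive HTTP byte range and build the matching
    Content-Range header value, mirroring the 206 response above."""
    body = data[start:end + 1]  # inclusive end, hence the +1
    content_range = 'bytes %d-%d/%d' % (start, end, len(data))
    return body, content_range

body, header = range_slice(bytes(2000), 1200, 1299)
# len(body) == 100, header == 'bytes 1200-1299/2000'
```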

uwsgi Protocol

The uwsgi protocol is something I've discussed in a previous installment. It's primarily meant to support communication with frontend servers such as nginx. Take an example nginx config:

server {
    listen 9898;
    listen [::]:9898;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        uwsgi_pass 127.0.0.1:8087;
        include uwsgi_params;
    }
}

Now for the uwsgi config:

[uwsgi]
uwsgi-socket = :8087
wsgi-file = wsgi_test.py
processes = 2
threads = 2
master = true

This time uwsgi-socket is being used, so traffic to the backend flows over the uwsgi protocol instead of the HTTP protocol used previously. Not much changes as far as the response is concerned:

$ curl -v http://127.0.0.1:9898
*   Trying 127.0.0.1:9898...
* Connected to 127.0.0.1 (127.0.0.1) port 9898 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:9898
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.18.0
< Date: Sun, 27 Aug 2023 23:42:05 GMT
< Content-Type: text/plain
< Content-Length: 11
< Connection: keep-alive
< 
* Connection #0 to host 127.0.0.1 left intact
Hello World

One thing to note with uwsgi is that while the uWSGI server receives the request uwsgi-encoded, the actual response is still HTTP. This means things like chunked responses will still work. Chunked input, however, doesn't quite work out of the box, and alternatives will generally be required. This is something I'll most likely go over in a separate article.
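For the curious, the uwsgi wire format itself is quite simple: a 4-byte header (a modifier1 byte, a little-endian 16-bit payload size, and a modifier2 byte) followed by length-prefixed key/value pairs carrying the CGI-style request variables. A rough sketch of packing a request, based on my reading of the protocol docs (not production code):

```python
import struct

def uwsgi_packet(variables, modifier1=0, modifier2=0):
    """Pack CGI-style variables into a uwsgi-protocol packet:
    a 4-byte header, then <u16 len><key><u16 len><value> pairs,
    with all lengths little-endian. modifier1=0 marks a standard
    WSGI request."""
    body = b''
    for key, value in variables.items():
        for item in (key.encode('latin-1'), value.encode('latin-1')):
            body += struct.pack('<H', len(item)) + item
    return struct.pack('<BHB', modifier1, len(body), modifier2) + body

packet = uwsgi_packet({'REQUEST_METHOD': 'GET', 'PATH_INFO': '/'})
```

This is essentially what nginx's uwsgi_pass builds for each request, which is why the include uwsgi_params line matters: it supplies the variable names the backend expects.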

Python Integration

It's possible to interact with uwsgi as a module in Python. Do keep in mind that this introduces some vendor lock-in. If you go this route, I'd recommend a plugin/driver type abstraction so that you can easily swap in something else if your requirements change.

Low Level API Calls

As an example of making calls to uWSGI's API, you can set up a cache like so:

[uwsgi]
uwsgi-socket = :8087
wsgi-file = uwsgi_module.py
processes = 2
threads = 2
master = true
cache2 = name=mycache,items=100

Then with Python code you can easily access this cache (not really practical code, but it does show the access pattern):

uwsgi_module.py

from uwsgi import cache_get, cache_set, cache_clear

def application(env, start_response):
    data = b'Hello World'
    cache_set('return_data', data)

    status = '200 OK'
    response_headers = [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(data))),
    ]
    start_response(status, response_headers)
    return [cache_get('return_data')]
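Following the earlier advice about abstraction, one sketch of reducing the lock-in is to guard the uwsgi import and fall back to a local implementation when running outside uWSGI. The fallback here is my own illustration, not part of uWSGI:

```python
# Prefer uWSGI's native cache, but fall back to an in-process dict
# when running outside uWSGI (tests, a different WSGI server, etc.).
try:
    from uwsgi import cache_get, cache_set  # only importable under uWSGI
except ImportError:
    _local_cache = {}

    def cache_set(key, value):
        """Store a value in the fallback dict cache."""
        _local_cache[key] = value

    def cache_get(key):
        """Fetch a value, returning None on a miss like uWSGI does."""
        return _local_cache.get(key)

cache_set('return_data', b'Hello World')
```

Swapping in Redis or memcached later then only means changing this one module rather than every call site.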

Decorators

There are also decorators available to simplify interaction with some of uWSGI's services. Take sending an email to a customer where you don't want to wait for it to finish. We can interact with uWSGI's spooler like so:

[uwsgi]
uwsgi-socket = :8087
wsgi-file = wsgi_test.py
processes = 2
threads = 2
master = true
spooler = myspool
import = uwsgi_spool_task

Here a spooler named myspool is set up, and the import option loads the uwsgi_spool_task module containing the spool code.

uwsgi_spool_task.py

from uwsgidecorators import *

@spool
def send_email(arguments):
    print(f"Send email to {arguments['email_address']}")

The spool decorator declares the spool worker code. This is where you would have code to reach out to your email provider and send off an email. Now for the WSGI app:

wsgi_test.py

from uwsgi_spool_task import send_email

def application(env, start_response):
    data = b'Hello World'

    status = '200 OK'
    response_headers = [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(data))),
    ]
    start_response(status, response_headers)
    send_email.spool(email_address='someone@no.spam')
    return [data]

This imports send_email and hands the task off to the spooler, along with the email address to deliver to.

Project Support Overview

Now we'll talk about project maintenance. This information can help you decide whether to use this solution over something else. I will note that while there is supposedly commercial support, the link for it is currently broken.

Documentation

The documentation received a pretty decent update recently, substantially improving the overall layout and making it easier to navigate. Even so, the overwhelming number of features can still make finding what you need a daunting task.

Source Maintainability

The repo indicates the project is in maintenance mode, and as such a large number of issues and pull requests are left unattended. Given that Python 3.12 is in RC, it will be interesting to see whether it gets properly supported when the official release comes out. My general take is that if you plan to use the uwsgi Python module, have a backup solution in place.

Final Thoughts

My general thought is that for most developers who just want to test their code out, gunicorn is good enough. For the serious optimizer types, consider looking at the Bloomberg and Cloudbees articles on production performance.

Company-wise, I think Unbit allocating more resources to the project, and having a working link to a commercial support offering, would help. The project has good potential and volunteers willing to help out, but without the maintainers stepping in to merge and release there's not much else that can be done. Either that, or maybe we'll see a solid fork step up.
