I've used uWSGI for a long time, but newer Python web servers have come along. Let's compare uWSGI to uvicorn with the all-important "hello, world"-style program. I yield the body in pieces instead of sending the whole body at once, to include some context-switching overhead.
I've also been working on an experimental PyPy + ASGI plugin for uWSGI. Let's see if the performance is in the right ballpark.
I'm using hey -z 8s http://localhost to send some requests to the app.
WSGI:
import sys
import pprint


def application(env, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    results = [
        b"<pre>",
        b"Hello World\n",
        sys.executable.encode("utf-8"),
        b" ",
        pprint.pformat(env),
        b"</pre>",
    ]
    for result in results:
        if isinstance(result, str):
            result = result.encode("utf-8")
        yield result
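To show what the server does with this generator, here's a minimal sketch that drives a WSGI app by hand using wsgiref's testing helpers. The app body is a simplified stand-in for the one above, and the `captured` list is just scaffolding for the sketch, not part of any server's API:

```python
from wsgiref.util import setup_testing_defaults

# Simplified stand-in for the WSGI app above: yields the body in pieces.
def application(env, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    for chunk in (b"<pre>", b"Hello World\n", b"</pre>"):
        yield chunk

env = {}
setup_testing_defaults(env)  # fills env with a plausible CGI-style request

captured = []
def start_response(status, headers):
    captured.append((status, headers))

# A WSGI server iterates the returned generator, writing each chunk to
# the socket as it arrives -- which is where the context switches happen.
body = b"".join(application(env, start_response))
print(captured[0][0], body)
```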
ASGI:
import sys
import pprint


def results(scope):
    results = [
        b"<pre>",
        b"Hello World\n",
        sys.executable.encode("utf-8"),
        b" ",
        pprint.pformat(scope),
        b"</pre>",
    ]
    for result in results:
        if isinstance(result, str):
            result = result.encode("utf-8")
        yield result


async def application(scope, receive, send):
    await send(
        {
            "type": "http.response.start",
            "status": 200,
            "headers": [[b"content-type", b"text/html"]],
        }
    )
    for result in results(scope):
        await send({"type": "http.response.body", "body": result, "more_body": True})
    # Finish the response with an empty body chunk (the original re-sent
    # the last chunk here, duplicating it in the output).
    await send({"type": "http.response.body", "body": b"", "more_body": False})
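The ASGI side can be exercised the same way: the sketch below drives a simplified stand-in app with fake receive/send callables to show the message sequence a server would see. The harness names here are illustrative, not part of any ASGI server's API:

```python
import asyncio

# Simplified stand-in for the ASGI app above: streams the body in chunks.
async def application(scope, receive, send):
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [[b"content-type", b"text/html"]],
    })
    for chunk in (b"<pre>", b"Hello World\n", b"</pre>"):
        await send({"type": "http.response.body", "body": chunk, "more_body": True})
    await send({"type": "http.response.body", "body": b"", "more_body": False})

async def main():
    sent = []

    async def send(message):
        # A real server would serialize each message onto the socket here.
        sent.append(message)

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    scope = {"type": "http", "method": "GET", "path": "/"}
    await application(scope, receive, send)
    return sent

messages = asyncio.run(main())
print([m["type"] for m in messages])
```

Each body chunk is a separate awaited send, so the server gets a chance to switch tasks between chunks, which is the overhead this benchmark is meant to include.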
WSGI, Python 3.8 and standard uWSGI Python plugin: uwsgi --plugin python --wsgi minimal --http [::]:80 --enable-threads --disable-logging --listen 4096
Summary:
Total: 8.0239 secs
Slowest: 0.0306 secs
Fastest: 0.0057 secs
Average: 0.0231 secs
Requests/sec: 2157.3084
Response time histogram:
0.006 [1] |
0.008 [1] |
0.011 [1] |
0.013 [2] |
0.016 [8] |
0.018 [10] |
0.021 [12] |
0.023 [10728] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.026 [6388] |■■■■■■■■■■■■■■■■■■■■■■■■
0.028 [95] |
0.031 [64] |
WSGI, Python 3.8, uvicorn --host 0.0.0.0 --port 80 --interface wsgi minimal:application --workers 1 --backlog 4096 --no-access-log
Summary:
Total: 8.0296 secs
Slowest: 0.0939 secs
Fastest: 0.0092 secs
Average: 0.0461 secs
Requests/sec: 1080.8754
Response time histogram:
0.009 [1] |
0.018 [197] |■■■■■
0.026 [732] |■■■■■■■■■■■■■■■■■■■
0.035 [1490] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.043 [1454] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.052 [1408] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.060 [1542] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.068 [1129] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.077 [591] |■■■■■■■■■■■■■■■
0.085 [96] |■■
0.094 [39] |■
ASGI, Python 3.8, uvicorn --host 0.0.0.0 --port 80 minimalasgi:application --workers 1 --backlog 4096 --no-access-log
Summary:
Total: 8.0146 secs
Slowest: 0.0450 secs
Fastest: 0.0051 secs
Average: 0.0221 secs
Requests/sec: 2258.2405
Response time histogram:
0.005 [1] |
0.009 [11] |
0.013 [349] |■
0.017 [58] |
0.021 [1049] |■■■
0.025 [16560] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.029 [43] |
0.033 [8] |
0.037 [8] |
0.041 [10] |
0.045 [2] |
ASGI version, PyPy 3.6.9 and experimental plugin:
Summary:
Total: 8.0430 secs
Slowest: 0.0757 secs
Fastest: 0.0077 secs
Average: 0.0460 secs
Requests/sec: 1081.9301
Response time histogram:
0.008 [1] |
0.014 [5] |
0.021 [6] |
0.028 [7] |
0.035 [8] |
0.042 [14] |
0.048 [7824] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.055 [164] |■
0.062 [69] |
0.069 [147] |■
0.076 [457] |■■
WSGI version, PyPy 3.6.9 and experimental plugin:
Summary:
Total: 8.0179 secs
Slowest: 0.0396 secs
Fastest: 0.0022 secs
Average: 0.0187 secs
Requests/sec: 2660.7978
Response time histogram:
0.002 [1] |
0.006 [3] |
0.010 [13] |
0.013 [17] |
0.017 [108] |
0.021 [20001] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.025 [352] |■
0.028 [105] |
0.032 [146] |
0.036 [318] |■
0.040 [270] |■
I thought it was pretty interesting that uWSGI's and uvicorn's WSGI/ASGI performance is flipped: in this test, uWSGI's WSGI throughput is about the same as uvicorn's ASGI throughput. uvicorn's WSGI mode also shows a much wider spread of response times than the others.
The experimental plugin is written almost entirely in Python. It hasn't been optimized yet, but it already performs well in WSGI mode; there is plenty of room to improve its ASGI mode.