Hi Andreas,
Do you have special requirements?
With HTTP 1.1 you can stream data using chunked transfer encoding, which means you can send the header
Transfer-Encoding: chunked
and the data you want. You can see an example on MDN. You can also stream with other protocols: HTTP/2, WebSockets, gRPC and so on.
So if you control both the client and the server you can choose how to stream your data.
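To make "chunked" concrete: each chunk goes out as its length in hex, a CRLF, the data and another CRLF, and a zero-length chunk ends the response. Here is a hand-rolled sketch of that wire format (illustration only, real servers do this for you; the function and its arguments are made up):

import socket

# Illustration only: what a chunked HTTP/1.1 response looks like on the wire.
def send_chunked(conn: socket.socket, chunks):
    conn.sendall(b"HTTP/1.1 200 OK\r\n"
                 b"Content-Type: text/plain\r\n"
                 b"Transfer-Encoding: chunked\r\n\r\n")
    for data in chunks:
        body = data.encode()
        # each chunk: <size in hex>\r\n<data>\r\n
        conn.sendall(f"{len(body):x}\r\n".encode() + body + b"\r\n")
    # a zero-length chunk terminates the stream
    conn.sendall(b"0\r\n\r\n")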
Falcon unfortunately doesn't support that header.
The actual reason is that WSGI itself (the interface under pretty much all Python servers) does not support Transfer-Encoding. Quoting the WSGI spec:

However, because WSGI servers and applications do not communicate via HTTP, what RFC 2616 calls "hop-by-hop" headers do not apply to WSGI internal communications. WSGI applications must not generate any "hop-by-hop" headers, attempt to use HTTP features that would require them to generate such headers, or rely on the content of any incoming "hop-by-hop" headers in the environ dictionary. WSGI servers must handle any supported inbound "hop-by-hop" headers on their own, such as by decoding any inbound Transfer-Encoding, including chunked encoding if applicable.
This is because WSGI servers act as middleware, and streaming would break that pattern.
Flask does not support it either, but they worked around it using generators. Here's my example:
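(The exact snippet isn't shown here, so take this as a minimal sketch of the generator approach; the route name and chunk contents are placeholders.)

from flask import Flask, Response
import time

app = Flask(__name__)

@app.route('/stream')
def stream():
    def generate():
        # Flask iterates over this generator and writes each piece as it is produced
        for i in range(10):
            yield f"chunk {i}\n"
            time.sleep(1)
    return Response(generate(), mimetype='text/plain')

if __name__ == '__main__':
    app.run()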
and the client:
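(Again a sketch rather than the original client; it assumes the requests library and the URL of the server above.)

import requests

# stream=True tells requests not to read the whole body up front
with requests.get('http://localhost:5000/stream', stream=True) as r:
    for line in r.iter_lines():
        if line:
            print(line.decode())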
As you can see, it's not using "Transfer-Encoding"; it just iterates over the generator and sends the data.
Another option you have is to use aiohttp, which is not based on WSGI and works well with chunked streaming.
You can find an example in this article, though there's a bug on line 8 of his example; the rest works :-)
Replace:
interval = int(request.GET.get('interval', 1))
with
interval = int(request.query.get('interval', 1))
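For reference, a minimal aiohttp handler along those lines could look like this (a sketch, not the article's code; the endpoint name and payload are placeholders, and it includes the corrected request.query line):

import asyncio
from aiohttp import web

async def handle(request):
    interval = int(request.query.get('interval', 1))  # the corrected line
    resp = web.StreamResponse(headers={'Content-Type': 'text/plain'})
    resp.enable_chunked_encoding()  # emits Transfer-Encoding: chunked
    await resp.prepare(request)
    for i in range(5):
        await resp.write(f"chunk {i}\n".encode())
        await asyncio.sleep(interval)
    await resp.write_eof()
    return resp

app = web.Application()
app.add_routes([web.get('/stream', handle)])

if __name__ == '__main__':
    web.run_app(app)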
The headers sent by the server include Transfer-Encoding: chunked. As you can see it supports chunked streaming.
Wow thanks for this amazing response! I will take a deeper look at aiohttp. But wrapping my code into a generator would also be a valid fallback.
Maybe I am just so spoiled by the way Node.js handles streams that I have a hard time understanding why things are so difficult in the Python world :)
If you're used to Node.js, I'm sure you'll feel right at home with aiohttp, it being async and all.
About the generator trick: I haven't tried it with gunicorn and multiple processes. I feel like it's going to destroy performance, because each process might only be able to serve one request (the one generating the stream).
Can't wait to read your article on the solution ;)
It's gonna work well if you don't need to handle 10+ clients simultaneously
Hi Alex,
do you mean Flask's solution? If so, probably even less. If you mean aiohttp's, I'm curious to know if you tested it.
I've never used aiohttp so I don't know much.