If you are using Gunicorn as your API server and want to drastically reduce response times, you are in the right place.
A while ago, our application experienced a surge in active users and our API response times started to climb. An asynchronous server came to our rescue.
This post compares load-testing results for synchronous and asynchronous Gunicorn servers.
Load testing was done using Locust, one of the best tools out there IMO.
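For reference, a minimal Locust file looks something like the sketch below; the endpoint path and wait times here are illustrative assumptions, not our actual test plan.

```python
# locustfile.py -- minimal sketch; endpoint and wait times are illustrative
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # each simulated user waits 1-3 seconds between requests
    wait_time = between(1, 3)

    @task
    def get_items(self):
        # hypothetical endpoint; replace with the API path under test
        self.client.get("/api/items/")
```

Run it with `locust -f locustfile.py --host http://localhost:8000`, then set the number of concurrent users and spawn rate in the web UI for each run.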
Synchronous Gunicorn Server
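For context, the sync setup looked roughly like the config below. This is a sketch: the bind address, worker count, and timeout are illustrative, not our exact production values.

```python
# gunicorn.conf.py -- sketch of a typical sync-worker setup (values are illustrative)
bind = "0.0.0.0:8000"
worker_class = "sync"   # Gunicorn's default worker: one request per worker at a time
workers = 4             # a common rule of thumb is (2 x CPU cores) + 1
timeout = 30
```

Because each sync worker handles a single request at a time, slow or I/O-bound requests quickly tie up the whole pool, which is what the numbers below show.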
100 concurrent Users
RPS (requests per second): 12
Response time:
--> median: 11000 ms
--> 95th percentile: 17000 ms (95% of requests finish before this time)
*Locust charts: RPS and response time*
350 concurrent Users (max supported with synchronous server)
RPS: 6
Response time:
--> median: 25000 ms
--> 95th percentile: 30000 ms (95% of requests finish before this time)
*Locust charts: RPS and response time*
400 concurrent users
The server started to respond with **ConnectionError** for 100% of requests.
Asynchronous Gunicorn Server (gevent)
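The async setup swaps in gevent workers. Again, a sketch with illustrative values; `worker_connections` controls how many simultaneous connections each gevent worker may handle.

```python
# gunicorn.conf.py -- sketch of the gevent-worker equivalent (values are illustrative)
bind = "0.0.0.0:8000"
worker_class = "gevent"     # greenlet-based workers (requires `pip install gevent`)
workers = 4
worker_connections = 1000   # max simultaneous connections per worker
timeout = 30
```

Gunicorn's gevent worker monkey-patches the standard library, so blocking I/O (database calls, outbound HTTP) yields control to other greenlets instead of stalling the whole worker.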
100 concurrent Users
RPS: 50
Response time:
--> median: 2200 ms
--> 95th percentile: 5000 ms (95% of requests finish before this time)
*Locust charts: RPS and response time*
500 concurrent Users
RPS: 50
Response time:
--> median: 6500 ms
--> 95th percentile: 18000 ms (95% of requests finish before this time)
*Locust charts: RPS and response time*
1000 concurrent Users
RPS: ~25-30
Response time:
--> median: 20000 ms
--> 95th percentile: 50000 ms (95% of requests finish before this time)
*Locust charts: RPS and response time*
Observations:
1) RPS starts to decrease once the load grows beyond what the workers can keep up with, for both server types.
2) RPS roughly quadrupled with the async server (about 50 vs 12 at 100 concurrent users).
3) With the same number of users, the async server's median response time was only about 20% of the sync server's (2200 ms vs 11000 ms at 100 users), roughly a 5x improvement.
4) At a comparable response time, the async server handled 400% more users than the sync server (500 users async vs 100 users sync).