When I started the series, I did want to capture more metrics, but that kept getting pushed back, and it's been months, so I decided to do something simple at least. The source is on GitHub, so if you are interested, feel free to use it and publish a follow-up. I might not have time anytime in the near future due to other commitments. The metrics you are suggesting would take a lot of effort and time to do properly. The bottleneck is the sleep introduced, so theoretically 25 seconds is the best possible for this code. If I remove the sleep, this is the result for the same 10k requests with 100 concurrent:
Concurrency Level:      100
Time taken for tests:   0.309 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2830000 bytes
HTML transferred:       1760000 bytes
Requests per second:    32344.98 [#/sec] (mean)
Time per request:       3.092 [ms] (mean)
Time per request:       0.031 [ms] (mean, across all concurrent requests)
Transfer rate:          8939.09 [Kbytes/sec] received
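To make that 25-second floor concrete: 10,000 requests at a concurrency of 100 run as roughly 100 sequential batches, so a 250 ms sleep per request gives 100 x 0.25 s = 25 s as the theoretical minimum, no matter how fast the server itself is. Here is a minimal sketch of such a handler, assuming Python and a hypothetical 250 ms sleep (the real server is in the repo and may differ):

import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

SLEEP_SECONDS = 0.25  # hypothetical value, inferred from the stated 25 s floor

class SleepyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SLEEP_SECONDS)  # simulated work; this dominates the benchmark
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # With 100 concurrent clients, 10,000 requests take at best
    # (10,000 / 100) * 0.25 s = 25 s, regardless of server speed.
    ThreadingHTTPServer(("", 8000), SleepyHandler).serve_forever()

Running ab -n 10000 -c 100 against this should land close to the 25-second floor; drop the sleep and you get numbers like the ones above.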
Concurrency level 100 is way too small. Try something in the 500-5000 range. Beware that ab is not a good tool for testing high concurrency.
Ya, I wasn't expecting people to take this simple experiment of mine so seriously. I'll try to update the tests to something better.
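If anyone does pick this up, wrk is one commonly used alternative to ab that holds up better at high concurrency. A sketch of an invocation, with placeholder thread count, connection count, and URL (not values from the original tests):

wrk -t8 -c1000 -d30s http://127.0.0.1:8000/

That keeps 1000 connections open across 8 threads for 30 seconds, which gets into the 500-5000 range suggested above.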