Why curl?
There are plenty of tools built specifically for benchmarking HTTP requests: ab, JMeter, wrk... So why still use curl for this purpose?
It's because curl is widely used and serves as a kind of common language among web developers.
Also, some tools can export an HTTP request as a curl command.
That's quite useful because it copies not only the URL and parameters but also the request headers, including Authorization or Cookie.
Tools
In this article, I use these tools:
- curl
- ntimes (github.com/yuya-takeyama/ntimes)
- percentile (github.com/yuya-takeyama/percentile)
Measure response time using curl
First, let's prepare a curl command. This time, I grabbed the command for a request to my personal blog using Google Chrome ("Copy as cURL" in the DevTools Network panel). (The Cookie header is removed.)
$ curl 'https://blog.yuyat.jp/' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8,ja;q=0.6' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36' -H 'Connection: keep-alive' --compressed
It just outputs the response body from the server.
Now let's append these options:
-s -o /dev/null -w "%{time_starttransfer}\n"
-s silences the progress meter, and -o discards the response body by writing it to /dev/null.
The important one is -w: it accepts a format string with a variety of variables, and this time I used time_starttransfer to retrieve the response time (time to first byte).
The output looks like this:
$ curl 'https://blog.yuyat.jp/' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8,ja;q=0.6' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36' -H 'Connection: keep-alive' --compressed -s -o /dev/null -w "%{time_starttransfer}\n"
0.188947
The response time is 0.188947 seconds (roughly 189 msec).
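By the way, time_starttransfer is just one of the variables -w understands. If you want to see where the time goes, curl can also report DNS lookup, TCP connect, and TLS handshake times. Here is a sketch using the same URL:
$ curl -s -o /dev/null -w '
namelookup:    %{time_namelookup}
connect:       %{time_connect}
appconnect:    %{time_appconnect}
pretransfer:   %{time_pretransfer}
starttransfer: %{time_starttransfer}
total:         %{time_total}
' 'https://blog.yuyat.jp/'
Each value is seconds from the start of the request; appconnect marks the end of the SSL/TLS handshake.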
To save typing, I also created a wrapper command, curlb:
#!/bin/sh
# curlb: print only the time to first byte (TTFB) of the given request
curl -s -o /dev/null -w '%{time_starttransfer}\n' "$@"
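Make it executable and put it somewhere on your PATH (/usr/local/bin is just one common choice):
$ chmod +x curlb
$ sudo mv curlb /usr/local/bin/
$ curlb 'https://blog.yuyat.jp/' --compressed
0.188947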
Measure the percentile of the response times
A single request isn't a proper benchmark, so let's measure the percentiles of 100 requests instead.
ntimes is useful for such purposes.
You can install it with go get github.com/yuya-takeyama/ntimes, or grab a pre-built binary from the repository.
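If you're not familiar with it, ntimes simply runs the given command the specified number of times and streams each run's output. A quick sanity check (the echo example is mine, not from the repository's docs):
$ ntimes 3 -- echo hello
hello
hello
hello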
Now let's prepend ntimes 100 -- to the curlb command.
$ ntimes 100 -- curlb 'https://blog.yuyat.jp/' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8,ja;q=0.6' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36' -H 'Connection: keep-alive' --compressed
0.331915
0.064085
0.059883
0.074047
0.059774
...
And to compute percentiles from those numbers, the percentile command may be the easiest option.
Install it with go get github.com/yuya-takeyama/percentile, or download a pre-built binary from the repo.
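Since percentile reads one number per line from stdin, it works on any stream of numbers, not just curl timings. A quick way to try it (the exact values printed depend on the tool's percentile method):
$ seq 1 100 | percentile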
Then append | percentile to the end of the command.
$ ntimes 100 -- curlb 'https://blog.yuyat.jp/' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8,ja;q=0.6' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.86 Safari/537.36' -H 'Connection: keep-alive' --compressed | percentile
50%: 0.061777
66%: 0.06412
75%: 0.06872300000000001
80%: 0.07029000000000001
90%: 0.07496700000000001
95%: 0.076153
98%: 0.077226
99%: 0.07957
100%: 0.109931
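If you can't install the Go tools, a rough nearest-rank approximation is possible with sort and awk alone. This is my own sketch, not part of the article's toolchain:
$ ntimes 100 -- curlb 'https://blog.yuyat.jp/' --compressed \
    | sort -n \
    | awk '{ a[NR] = $1 } END { print "50%:", a[int(NR * 0.50)]; print "90%:", a[int(NR * 0.90)]; print "100%:", a[NR] }'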
That's it!
Top comments (6)

Good article! For zsh users, you could use repeat.

Nice post! By the way, why curlb on the last two commands? Is it a typo?

Hi, did you see this section? -s -o /dev/null -w "%{time_starttransfer}\n" is too long to type or to remember, so I always use curlb and recommend using it.

Interesting approach. This was an interesting post, and I got curious to see how this measures up against how we record response time with our product templarbit.com/sonar, and found that it works similarly!

Hi! Is it possible to use your tool on Mac? If so, how do I install ntimes?