grant horwood

curl: getting performance data with curl

the api endpoint you've written is slow. everyone says so. but how slow? there are lots of packages we could install to inspect an url's performance, but why bother with all that when curl can get us the performance metrics we need?

in this post, we'll be going over using curl to get data like execution time, download speed and header values.

[image: the 'waiting' meme. a user waits patiently for an url that was not profiled for performance]

getting total time using --write-out

most of us, when we use curl to inspect an url, just throw the -v switch on it for 'verbose' and then sift through the headers and body using grep, looking for the data we want. this works in a banging-two-rocks-together, paleolithic kind of way, but we can craft a much more elegant and powerful solution by using the --write-out parameter.

as the name implies, write-out is about filtering and formatting curl's output. the write-out argument accepts a string that defines the output format using pre-defined variables to represent various response data.

let's look at an example of how we would get the total execution time of a request:

curl -s \
-X GET \
-o /dev/null \
--write-out "Total time: %{time_total} seconds" \
https://example.ca/path/to/endpoint

here we see that we provided write-out with a format string that included the variable %{time_total}. this variable gets interpolated in the output with the time, in seconds, that the entire process took; from the moment the request was initiated to the instant the last byte of the response was downloaded.

the output looks similar to this:

Total time: 0.897665 seconds

that's useful information!

note that this command also suppresses all other output to keep things nice and clean. we use the -s switch for 'silent' operation, and throw away the body of the response with -o /dev/null. it just makes everything easier to read.

more timing data

getting the total round-trip time of our call is good and useful, but we might want a more detailed accounting.

fortunately, curl provides us with seven different timing measurements. of these, four are particularly useful:

  • time_namelookup: how long, in seconds, it took to do the name lookup, i.e. dns.
  • time_pretransfer: the time, in seconds, the entire request portion of the call took; all that dns and connection and ssl stuff.
  • time_starttransfer: the time, in seconds, from the start of the call to right before the first byte of the response was transferred.
  • time_total: how long, in seconds, the whole call took.

let's look at a sample curl call that gets all these data points:

curl -s \
-X GET \
-o /dev/null \
--write-out "time_namelookup %{time_namelookup}\ntime_pretransfer %{time_pretransfer}\ntime_starttransfer %{time_starttransfer}\ntime_total %{time_total}" \
https://example.ca/ | column -t

you'll notice that we added a pipe out to column -t to format the output into columns. this is just a little bit of display sugar to make things easier on the eyes. the output looks like this:

time_namelookup     0.001698
time_pretransfer    0.171207
time_starttransfer  0.298703
time_total          0.587096

looking at this output and knowing what these data points represent allows us to do a little bit of subtraction math to learn some important things.

how long our server took to calculate the result: since we know that time_pretransfer is the time from the start of the call to the end of the request, and that time_starttransfer is the time from the start to the very beginning of the response, we can determine that the time the server took to calculate the response is (approximately) time_starttransfer - time_pretransfer.

for instance, in the above example, our time_pretransfer was 0.171207 and our time_starttransfer was 0.298703 meaning that our time-on-server was about 0.127496 seconds.

this value is only approximate and will obviously change from one request to the next. however, it is good data to have for making comparisons.
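
if we don't feel like doing that arithmetic by hand, we can have awk do it for us. a rough sketch that prints just the approximate time-on-server:

curl -s \
-X GET \
-o /dev/null \
--write-out "%{time_pretransfer} %{time_starttransfer}" \
https://example.ca/ | awk '{printf "time_on_server  %.6f\n", $2 - $1}'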

how long our download took: we can also use subtraction to determine how long it took for the response data to get downloaded from the server to our test machine by subtracting time_starttransfer, the time to the first response byte, from time_total. however, there are better ways to inspect download speed.

speeds and sizes

if we want to know how much data we're downloading and how fast that download is, we can get that with the following write-out variables:

  • size_download: the size of the total download, in bytes.
  • size_header: the size of just the headers downloaded, in bytes.
  • speed_download: the speed of the download in bytes/second.

curl -s \
-X GET \
-o /dev/null \
--write-out "size_download %{size_download}\nsize_header %{size_header}\nspeed_download %{speed_download}" \
https://example.ca/ | column -t

the output of this example call looks something like:

size_download   922306
size_header     704
speed_download  1186138
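
speed_download is reported in raw bytes per second, which is not the friendliest unit to read. if we want something closer to human scale, we can let awk convert it; a rough sketch that reports megabytes per second:

curl -s \
-X GET \
-o /dev/null \
--write-out "%{speed_download}" \
https://example.ca/ | awk '{printf "speed_download  %.2f MB/s\n", $1 / 1000000}'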

response codes and headers

getting timing data is great for telling us what happened, but if we want to get some clues as to why, we're going to want to inspect things like response codes and headers.

getting the response code

if you've ever noticed that an endpoint suddenly got a lot faster because it started returning a 404, you'll understand that knowing the response code of a request is important.

we can get the response code with the %{response_code} variable:

curl -s \
-X GET \
-o /dev/null \
--write-out "response_code %{response_code}" \
https://example.ca/ | column -t

it outputs what we would expect:

response_code  200
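
if we're scripting these checks, we can capture the response code in a variable and flag any run that didn't return what we expected. a minimal sketch in bash:

code=$(curl -s \
-X GET \
-o /dev/null \
--write-out "%{response_code}" \
https://example.ca/)

if [ "$code" -ne 200 ]; then
  echo "got ${code} instead of 200; timing data for this run is suspect" >&2
fi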

getting headers

we can extract individual headers from the response by using the write-out variable

%header{<header name>}

the <header name> here is the name of the header in all lowercase and without the trailing colon.

if we wanted to test what our url's cloudflare cache status was, for instance, we could do:

curl -s \
-X GET \
-o /dev/null \
--write-out "cf-cache-status %header{cf-cache-status}" \
https://example.ca/

and get as output something like

cf-cache-status MISS

getting all the headers as json

we can get all the response headers in json format with %{header_json}. note that this is singular: header, not headers.

curl -s \
-X GET \
-o /dev/null \
--write-out "header_json %{header_json}" \
https://example.ca/

the output looks like:

header_json {
    "server":["nginx/1.18.0 (Ubuntu)"],
    "date":["Thu, 11 Sep 2025 19:06:12 GMT"],
    "content-type":["text/html"],
    "content-length":["178"],
    "connection":["keep-alive"],
    "location":["https://example.ca/"]
}

we can then inspect the data in the output using a tool like jq.
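
for example, if we drop the 'header_json' label from the format string so the output is pure json, we can pull a single header out with jq. a quick sketch that grabs the content-type:

curl -s \
-X GET \
-o /dev/null \
--write-out "%{header_json}" \
https://example.ca/ | jq -r '."content-type"[0]'

for the response above, this prints text/html.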

using a format configuration file

of course, typing out a long write-out format every time we want to curl an endpoint is annoying. we can save time and keystrokes by storing output formats in configuration files.

first, let's create a format file:

time_namelookup %{time_namelookup}\n
time_pretransfer %{time_pretransfer}\n
time_starttransfer %{time_starttransfer}\n
time_total %{time_total}\n
size_download %{size_download}\n
size_header %{size_header}\n
speed_download %{speed_download}\n

this file holds an output format; its contents are treated exactly as if we had typed them directly into the command.

once we have our format file, we can supply it to the write-out argument using the @ directive, which tells curl to read the format from a file:

curl -s \
-X GET \
-o /dev/null \
--write-out "@/path/to/file" \
https://example.ca/

the output looks like what we would expect:

time_namelookup 0.037403
time_pretransfer 0.206697
time_starttransfer 0.241542
time_total 0.556894
size_download 922308
size_header 704
speed_download 1656164

format configuration files are good things because they allow us to easily standardize formats and write our commands more accurately and quickly.
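
if we run the same profile all the time, we can go one small step further and wrap the whole command in a shell alias. a hypothetical example, assuming the format file above is saved at ~/.curl-format (both that path and the curlprof name are just placeholders):

# in ~/.bashrc or similar
alias curlprof='curl -s -X GET -o /dev/null --write-out "@$HOME/.curl-format"'

# usage:
#   curlprof https://example.ca/ | column -t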

conclusion

curl may not be the most sophisticated profiling tool out there, but it is still incredibly powerful and almost universally available. having a better understanding of its capabilities will always pay off.
