
Building a Response Timer to Benchmark API Performance

Karl L. Hughes ・ 5 min read

I've written around 50 blog posts in the past six months, focusing on technical topics like web hosting and performance. In one set of articles, I needed to measure the speed improvement that one might expect to see when using an edge hosting platform to serve an API to users across the world.

The problem with most benchmarking tools is that they assume you'll be testing from a single location. In my case, I wanted to test from several locations because the hosting provider I was working with wanted to prove that performance on their edge network was comparable no matter where in the world users were.

I designed a test that required me to test an API from four or more test nodes and my local machine (as a control). I'd deploy the API to a single server and my client's edge hosting network and ensure that the latency was acceptable from any test node.

Testing an API from multiple locations

It seems pretty straightforward, but surprisingly, I couldn't find an existing tool made for this task. In this article, I'll walk you through my journey, which ended in an open-source response timer Docker image that I can deploy quickly to a server anywhere in the world.

Solution 1: Pingdom

While not designed for this task, Pingdom's Website Speed Tester actually works pretty well. You plug in your URL, select the data center you want to test from, and Pingdom returns stats on how quickly the website or API responded. It's free, requires no server configuration, and it's simple to use.

Pingdom isn't a perfect solution, though. The problems with using Pingdom to benchmark your API are:

  • It rate-limits your usage, so running tests from several locations takes a long time.
  • It can't make POST, PUT, or DELETE requests.
  • You can't add headers or a body to the request.
  • Making repeated requests to the same endpoint isn't feasible (again, due to rate limits).

The problem with testing your API with Pingdom is the rate-limiting

While you can use Pingdom to test a simple GET request to your service, it's woefully inadequate for more complex API requests like the ones I needed.

Solution 2: Curl

Experienced Linux users will point out that curl can do this on its own, which is exactly what I tried next. The problem with curl is formatting.

For example, you can run something like this to get the total request time to an API endpoint:

curl --output /dev/null \
  --silent --write-out '%{time_total}' \
  https://jsonplaceholder.typicode.com/posts
0.005255

This shows the response time in seconds, and because it's curl, you can use command-line arguments to specify a method, headers, or a body. My only gripe with this solution is that it doesn't give you any timing details (e.g., how much time was spent on DNS resolution vs. data transfer?).
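For instance, timing a POST request only means adding the usual curl arguments on top of the same `--write-out` trick (this uses the same placeholder API as above):

```shell
# Time a POST request with a JSON header and body; prints total seconds.
curl --output /dev/null --silent \
  --write-out '%{time_total}\n' \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"title": "Another great post"}' \
  https://jsonplaceholder.typicode.com/posts
```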

It would certainly work, but I felt like it could be better.

As I Googled around, I found this response on Stack Overflow, based on a blog post in which the author uses a curl-format.txt template file to format the output of his curl request. I didn't realize you could do this, but it was a game-changer.
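I won't reproduce that post's file exactly, but a curl-format.txt along these lines, built from curl's standard `--write-out` variables, produces the kind of labeled output shown below (`final_url` is just a friendlier label for curl's `%{url_effective}` variable):

```
         final_url:  %{url_effective}\n
     response_code:  %{response_code}s\n
   time_namelookup:  %{time_namelookup}s\n
      time_connect:  %{time_connect}s\n
   time_appconnect:  %{time_appconnect}s\n
  time_pretransfer:  %{time_pretransfer}s\n
     time_redirect:  %{time_redirect}s\n
time_starttransfer:  %{time_starttransfer}s\n
                    ----------\n
        time_total:  %{time_total}s\n
```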

Once I created the template file, I could get a response like this:

curl -w "@curl-format.txt" -o /dev/null -s https://jsonplaceholder.typicode.com/posts
# Response
          final_url:  https://jsonplaceholder.typicode.com/posts
      response_code:  200s
    time_namelookup:  0.065948s
       time_connect:  0.080941s
    time_appconnect:  0.135187s
   time_pretransfer:  0.135663s
      time_redirect:  0.000000s
 time_starttransfer:  0.164189s
                    ----------
         time_total:  0.166145s

It was much nicer to look at, but I really didn't want to create this template on every server before I started running my tests. My ideal solution would allow me to run a single command via SSH to run this test on any URL.

Solution 3: Docker Image + Curl

There are plenty of ways to package up a text file and a curl command. A custom shell script would work, but then I'd have to copy it over every time I wanted to run this test. It's also less versatile: what if I want to expand this project and offer more features?

So, my solution was to build a custom Docker image based on curl's that included the output template and curl command with the required arguments already included. This allows me to run the response timer from any environment that can run a Docker container and get a consistent output format every time. It's admittedly a simple solution, but it works.
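I won't show the actual Dockerfile here, and the real image wraps curl in a small shell script, but a minimal sketch of the same idea might look like this (the base image and file paths are illustrative):

```
# Sketch: bake the output template into an image based on the official curl image.
FROM curlimages/curl:latest
COPY curl-format.txt /curl-format.txt
# Arguments passed to `docker run` are appended to this entrypoint,
# so the URL and any -X, -H, or -d flags pass straight through to curl.
ENTRYPOINT ["curl", "--write-out", "@/curl-format.txt", "--output", "/dev/null", "--silent"]
```

Using curl itself as the entrypoint is what lets the container accept arbitrary curl arguments without any extra parsing.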

To use this script locally, pull the Docker image:

docker pull draftdev/rt

And run it with your API's URL as the input:

docker run --rm draftdev/rt jsonplaceholder.typicode.com/posts
# Response
          final_url:  http://jsonplaceholder.typicode.com/posts
      response_code:  200s
    time_namelookup:  0.025098s
       time_connect:  0.042070s
    time_appconnect:  0.000000s
   time_pretransfer:  0.042265s
      time_redirect:  0.000000s
 time_starttransfer:  0.091801s
                    ----------
         time_total:  0.098020s

Because the shell script passes any inputs you use to the underlying curl request, you can add any of the arguments that curl supports, including custom methods (-X), headers (-H), or a body (-d):

docker run --rm draftdev/rt jsonplaceholder.typicode.com/posts -H 'Content-Type: application/json' -d '{"title": "Another great post"}' -X POST
# Response
          final_url:  http://jsonplaceholder.typicode.com/posts
      response_code:  201s
    time_namelookup:  0.014518s
       time_connect:  0.029930s
    time_appconnect:  0.000000s
   time_pretransfer:  0.029982s
      time_redirect:  0.000000s
 time_starttransfer:  0.143273s
                    ----------
         time_total:  0.143517s

You can use the same idea to run this script from a remote server that has Docker installed. I prefer DigitalOcean for things like this, and their marketplace Docker image makes spinning up new droplets with Docker installed fast and easy.

Setting up a new DigitalOcean droplet with Docker installed

I typically add my SSH key to each new droplet so that after it's provisioned, I can run something like this to test my API:

ssh root@<YOUR_DROPLET_IP> "docker run --rm draftdev/rt jsonplaceholder.typicode.com/posts -H 'Content-Type: application/json' -d '{\"title\": \"Another great post\"}' -X POST"
# Response
          final_url:  http://jsonplaceholder.typicode.com/posts
      response_code:  201s
    time_namelookup:  0.001051s
       time_connect:  0.004071s
    time_appconnect:  0.000000s
   time_pretransfer:  0.004155s
      time_redirect:  0.000000s
 time_starttransfer:  0.046294s
                    ----------
         time_total:  0.046395s

This allows me to test my APIs' response time and benchmark performance between different hosting options from any one of DigitalOcean's regions.
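Because the output format is fixed, it's also easy to post-process. For example, to average `time_total` over repeated runs (the kind of repeated testing Pingdom's rate limits rule out), you can pipe the extracted times through awk. The samples here are hard-coded for illustration; in practice, each line would come from one run of the container:

```shell
# Average a series of time_total samples (hard-coded here for illustration;
# in practice each line would be extracted from one run of the timer).
printf '0.104\n0.097\n0.101\n' |
  awk '{ sum += $1; n += 1 } END { printf "avg: %.3fs over %d runs\n", sum / n, n }'
# → avg: 0.101s over 3 runs
```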

Next Steps

While this Docker image and script get the job done, I'd like to keep improving this idea. Given that Pingdom is grossly insufficient for response time testing and that distributed hosting is becoming an important part of the modern web stack, there's more I could do with this tool.

For one, I'd like to support a hosting provider with more regions. It should be possible to run this on AWS EC2 instances without much trouble, but I'd like to automate some of that server setup. I could also see building a web interface to improve usability and packaging it up to work as part of a CI workflow.

Whether I do anything else with it or not, it was an interesting and useful project to spend an evening on.

What do you think? Are there tools out there for benchmarking APIs that I should use instead? I'd love to hear from you, especially if you've solved this problem before.

Discussion

Anuraj P

I am using Postman to measure and monitor my API performance. Currently, I am using the free version, but with the paid option, you can choose the locations from which to execute the tests.

Karl L. Hughes (Author)

Oh, nice, I didn't realize the paid version of Postman did that. Thanks!