DEV Community

Dmitry Kozhedubov


Testing gRPC services - request collections and modern load testing

A while ago I wrote a post called Easy ways to (performance) test a gRPC service about bringing workflows around gRPC services closer to what many people are used to with tools like cURL and Apache Bench. In this post I explore further how to streamline gRPC workflows and make them more familiar to those with REST API experience.

Collections

At this point anyone involved with APIs has most likely worked with Postman. Over time Postman has become very powerful and covers the full range of API building and testing activities, from mock servers to unit tests, but one thing that probably contributed heavily to Postman's rise was the ability to create easily shareable request collections.

Of course Postman isn't the only tool in the space, and one wonderful alternative is Insomnia. While Postman has become somewhat bloated over time, Insomnia pitches, among other things, simplicity and elegance, and in fact delivers on both. I've personally used it for quite some time with REST APIs but only recently discovered that it has added gRPC support. Let's explore how it works.

gRPC request collections in Insomnia

The workflow is actually pretty simple: create a request and select gRPC as its type, then import a single .proto file or a folder containing one or more .proto files (to accommodate imports). Imports work across your entire collection, but for a specific request you'll need to select the .proto file that contains the definition of the service you want to test.
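
For reference, the Greeter service used throughout this post is defined in helloworld.proto from the grpc-go examples; a trimmed-down version of that definition looks like this:

```protobuf
syntax = "proto3";

package helloworld;

// The greeting service definition.
service Greeter {
  // Sends a greeting.
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greeting.
message HelloReply {
  string message = 1;
}
```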

From there you need to set an endpoint and a request body in JSON format (Insomnia supports both unary and streaming calls) and hit Send. Here I'm using the Greeter service from the grpc-go repository ⬇️
gRPC request editor in Insomnia
It appears that gRPC support in Insomnia is indeed very young, as there's no support yet for request headers or for any response metadata other than the status ⬇️
gRPC response view in Insomnia
The good news is that Insomnia seems to be moving fast and is open source, so it's only a matter of time until these improvements are made by the core team or contributed from outside.

Modern load testing with k6

In part 1 we looked at ghz for load testing gRPC services, and now I want to cover k6, which claims to be a modern load testing tool built for developer happiness. Even after only a brief experience with it I can see why that is, and why Grafana moved to acquire k6 earlier this year.

A boilerplate Greeter service performance test would look like this:

import grpc from 'k6/net/grpc';
import { check, sleep } from 'k6';

// Init phase: load the proto definitions once per VU.
const client = new grpc.Client();
client.load(['../grpc-go/examples/helloworld/helloworld'], 'helloworld.proto');

export default () => {
  // Each iteration opens a new connection to the server.
  client.connect('localhost:50051', {
    plaintext: true // no TLS
  });

  const data = { name: 'Dmitry' };
  const response = client.invoke('helloworld.Greeter/SayHello', data);

  // A gRPC status of 0 (OK) means the call succeeded.
  check(response, {
    'status is OK': (r) => r && r.status === grpc.StatusOK,
  });

  client.close();

  sleep(0.5); // pause before the next iteration
};

The algorithm for the above code is really simple:

  • In the so-called init phase, the proto definitions are loaded
  • The VU (virtual user) phase (the default function) is where the actual test behavior is defined:
    • Create a gRPC channel and a stub
    • Invoke the RPC
    • Check that the response status is zero (OK)
    • Close the connection and wait for 500ms before completing this specific iteration

To make this a little more interesting, let's add some thresholds that we expect our test run to meet. To do that, we need to export an options object in the init phase ⬇️

export let options = {
  thresholds: {
    grpc_req_failed: ['rate<0.001'], // gRPC errors should be less than 0.1%
    grpc_req_duration: ['p(95)<10'], // 95% of requests should be below 10ms
  },
};

Here we require an error rate of less than 0.1% and a p95 response time under 10ms.
Finally, assuming the above test is saved to a file named greeter.js, run it with a concurrency of 100 VUs and 10,000 total iterations using the following command:

k6 run -i 10000 -u 100 greeter.js

After a few seconds the output will look something like this ⬇️

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  ()  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: greeter.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 10m30s max duration (incl. graceful stop):
           * default: 10000 iterations shared among 100 VUs (maxDuration: 10m0s, gracefulStop: 30s)


running (00m50.9s), 000/100 VUs, 10000 complete and 0 interrupted iterations
default ✓ [======================================] 100 VUs  00m50.9s/10m0s  10000/10000 shared iters

     ✓ status is OK

     checks...............: 100.00% ✓ 10000      ✗ 0
     data_received........: 1.4 MB  27 kB/s
     data_sent............: 2.1 MB  42 kB/s
   ✓ grpc_req_duration....: avg=1.93ms   min=172.66µs med=1.12ms  max=34.43ms  p(90)=4.33ms   p(95)=5.81ms
     iteration_duration...: avg=509.28ms min=500.64ms med=505.5ms max=621.77ms p(90)=518.03ms p(95)=525.32ms
     iterations...........: 10000   196.286414/s
     vus..................: 100     min=100      max=100
     vus_max..............: 100     min=100      max=100

Note that the p95 response time threshold was met and none of the requests failed, therefore the test run was successful.

Limitations

During my brief introduction to k6 I found a couple of things that somewhat limit its usefulness for my work at this time:

  • It doesn't yet support streaming RPCs, which appears to be due to the lack of an event loop in k6, though that is a work in progress. Our RPCs, however, are primarily bi-directional streaming.
  • Creating a new connection for every iteration is not a best practice in gRPC, and I occasionally ran into connection errors. This design decision was probably made to keep virtual users fully independent of each other, but real-world gRPC applications would either reuse the same connection or maintain a relatively small pool of connections.
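
One possible workaround for the second point (a sketch, not an official k6 feature) relies on the fact that each VU gets its own copy of the init-context state: keep a per-VU flag and connect only on the first iteration, so every subsequent iteration of that VU reuses the same connection.

```javascript
import grpc from 'k6/net/grpc';
import { check, sleep } from 'k6';

const client = new grpc.Client();
client.load(['../grpc-go/examples/helloworld/helloworld'], 'helloworld.proto');

// Per-VU flag: since every virtual user runs with its own copy of
// init-context variables, this tracks whether *this* VU has connected yet.
let connected = false;

export default () => {
  if (!connected) {
    client.connect('localhost:50051', { plaintext: true });
    connected = true;
  }

  const response = client.invoke('helloworld.Greeter/SayHello', { name: 'Dmitry' });

  check(response, {
    'status is OK': (r) => r && r.status === grpc.StatusOK,
  });

  sleep(0.5);
};
```

Note that the connection is intentionally never closed inside the iteration; k6 tears it down when the VU finishes. This keeps VUs independent while behaving more like a real-world client that holds a long-lived channel.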

Conclusion

As expected, the gRPC tooling ecosystem is rapidly evolving, lowering the barrier to entry into a relatively complex technology. Both tools covered here certainly have room for improvement, but I appreciated k6's superior developer experience for load testing and will look to integrate it into our teams' workflows even beyond gRPC.

Cover photo by Felix Fuchs on Unsplash
