
Ryo Yamaoka

A library for quickly implementing instant load testing tools

Motivation

There are many load testing tools out there, and most of them are highly functional, able to build scenarios and send requests from multiple nodes. That's great, of course, but sometimes the need is simpler:

  • I would like to check if the newly implemented API is meeting the target RPS.
  • I want to see if there is a significant impact on performance before and after adding indexes.

For simple requirements like these, setting up a full-featured tool feels tedious and time-consuming. On the other hand, if you try to implement a quick one-off tool yourself, you run into:

  • Processing of test parameter acquisition
  • Performance-aware implementation
  • Control of concurrency
  • Statistics output

As you can see, there is a surprising amount of fiddly work involved. And if you reach for a simpler testing tool instead, it may not quite scratch your itch.

I created Otchkiss in the hope that a library for easily building load testing tools for such cases would make load testing easier and more convenient.

https://github.com/ryo-yamaoka/otchkiss

It's pronounced "Ochikisu", and the name comes from the machine gun manufacturer Hotchkiss, because this tool doesn't have as much firepower as Gatling.

Dropping the H at the beginning is not a typo but the French pronunciation; the different spelling is also intentional, just in case the original name might be problematic trademark-wise.

What it looks like in use

$ go run ./sample/...

[Setting]
* warm up time:   5s
* duration:       1s
* max concurrent: 1

[Request]
* total:      90
* succeeded:  90
* failed:     0
* error rate: 0 %
* RPS:        90

[Latency]
* max: 11.0 ms
* min: 10.0 ms
* avg: 10.9 ms
* med: 11.0 ms
* 99th percentile: 11.0 ms
* 90th percentile: 11.0 ms

How to use

Basics

If you just want to try it out, the sample will give you a feel for it, but the basic flow is as follows:

  1. Implement the Requester interface
  2. Pass the interface implementation to Otchkiss and call Start()
  3. Call Report() to output the results
  4. Build the above code and make it into a CLI tool

https://github.com/ryo-yamaoka/otchkiss/blob/main/sample/main.go

If you set things up with otchkiss.New(), the following command line options can be used to change configuration values at runtime, so there is no need to rebuild when moving from a trial run to the main test.

  • -p: The number of goroutines making requests in parallel.
  • -d: The execution time of the test.
  • -w: The warm-up time for the requester. Requests are sent during this period, but their results are not reflected in the test results.

-d and -w can be confusing: requests are actually sent to the test target for a total of w + d, and only the window specified by -d is aggregated into the test results.

Specifying concurrency and execution time

As mentioned above, the basic setup takes these values from the command line (or uses the defaults), but if you want to fix them, you can hard-code them instead. In that case, use the sub-package github.com/ryo-yamaoka/otchkiss/setting, put the values into its configuration struct, and pass it to otchkiss.FromConfig().

    s := &setting.Setting{
        MaxConcurrent: 10,
        RunDuration:   60 * time.Second,
        WarmUpTime:    10 * time.Second,
    }
    ot, err := otchkiss.FromConfig(&SampleRequester{}, s, 1000)

Performance tuning

Otchkiss sends many parallel requests and records their success or failure and latency; internally, these results are stored in slices.

As many blog posts have covered, when a Go slice runs out of allocated capacity, append allocates a larger backing array and copies the existing elements over.

FYI: Go Slices: usage and internals

Therefore, frequent allocations may hurt performance (i.e., RPS). On the other hand, since memory is finite, allocating too much up front is not advisable either.

The default allocation is sized for 100,000,000 requests, but since the optimal value differs for each user, it can also be specified. (Incidentally, specifying 1 billion as in the following code consumes several GB of memory the moment the tool starts.)

    ot, err = otchkiss.FromConfig(&SampleRequester{}, s, 1000*1000*1000)

Change the format in which results are displayed

The default output is what you saw at the beginning of this article, but otchkiss.TemplateReport() lets you output results in your preferred format. All you need to do is pass a Go template, so there is nothing particularly difficult about it.

The available parameters are the ones used in the default template:

https://github.com/ryo-yamaoka/otchkiss/blob/main/template.go

TODO

  • Histogram display
  • Visual enhancements such as time series graphs
