If you haven't done any performance testing before, then k6 is a really simple free tool that you can use to do it.
Now you might be wondering what is the point of performance testing, and why do I need to bother with it?
You might already be writing unit tests and integration tests to make sure that your application is working as designed. These are both forms of functional tests, as you are testing the functionality of your application.
However, we also need to consider non-functional tests. Non-functional tests are for testing things like security, efficiency, reliability and performance.
We are going to look at performance testing, which will give us an idea of how our application is likely to perform under various real world conditions.
In particular, we are going to have a look at:
- Load Testing
- Stress Testing
- Spike Testing
- Soak Testing
I will cover what each of those are and why we might want to do them as we go through the examples.
Getting started with k6
To get started we need to install k6 on our machines.
There are a number of ways to do this depending on what platform you are using.
If you are on a Mac like myself then you can use Homebrew:
brew install k6
On Windows, if you are using Chocolatey you can do:
choco install k6
There are also packages available for Linux, and there is the option of running it as a Docker image.
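For example, using the grafana/k6 Docker image you should be able to pipe a script straight into the container with something along these lines:
docker run --rm -i grafana/k6 run - <script.js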
For the examples shown in this article, I am going to assume that you have k6 installed directly on your machine.
Simple k6 Test
K6 tests are all written in JavaScript, and you don't need many lines of code at all to get started.
So let's start a new test. First we need to import the http package from k6.
import http from 'k6/http';
We then add in an export default block and inside we can basically add any JavaScript code we want.
import http from 'k6/http';

export default () => {
  http.get('http://localhost:5157/api');
};
Here I am doing a GET call to an API running locally on my machine.
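To run the script, save it to a file, say script.js (the name is just my choice), and point k6 at it from the terminal:
k6 run script.js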
If I run this, it will call the API once and then stop. That is obviously not particularly useful by itself, but we can see the details that we get back in the output:
In particular, we can see how long the request took with http_req_duration and how many requests per second it managed with http_reqs.
To make this more useful we are going to add in some options. k6 has the concept of virtual users (VUs), which is the number of concurrent users sending requests to our API in parallel.
If we just add in 1 user with a duration of 10 seconds then this is going to call our API as many times as it can, one call after another, until the time runs out.
import http from 'k6/http';

export const options = {
  vus: 1,
  duration: '10s'
};

export default () => {
  http.get('http://localhost:5157/api');
};
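As a side note, you don't have to put these options in the script itself; k6 can also take them as command line flags, which is handy for quick experiments (again assuming the script is saved as script.js):
k6 run --vus 1 --duration 10s script.js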
Test API
Now, to be able to demonstrate all these different test types, I have put together a very simple .NET Core API: you give it a date of birth and it calculates how old you are in years, months, days, hours, minutes and seconds.
GET http://localhost:5157/age/1990-03-02

{
  "years": 33,
  "months": 6,
  "days": 25,
  "hours": 10,
  "minutes": 35,
  "seconds": 37
}
This application doesn't read from or write to a database, so it should only be bound by CPU and memory limits.
All the code shown in this article and my video is available to my Patreon subscribers. Subscribers also get access to my private Discord community, as well as discounts on my courses when I release them.
Realistic test set up
The whole point of performance testing is to determine how your application is going to respond under certain conditions.
Your application isn't going to be running on your own machine when it's in production, so you really need to do your performance testing against an environment that is as close to production as possible.
Now, I wouldn't recommend running performance tests against production itself; as you will see, some of these tests will really push your application to its limits, which would likely cause problems for your users.
For this tutorial, I am not going to run my API on a production-like system, but I am going to run it on a separate machine, so we can see how it performs under load.
I am using an old Raspberry Pi 2 that I have sitting around with:
- 900MHz quad-core ARM Cortex-A7 CPU
- 1 GB RAM
Given this is a low-powered machine, we should be able to really push it to its limits with these tests.
Load Test
The first test we are going to look at is a standard load test. The purpose of a load test is really to determine how your application will respond under an average load.
If on a typical day your application gets 200 requests per second, then that is what you would set your load test to.
At the moment our script is set up to run as 1 user making one call after another. For this test we need multiple users, each making one call per second.
To do that we need to add in a sleep which is also included in the k6 package.
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 1,
  duration: '10s'
};

export default () => {
  http.get('http://localhost:5157/api');
  sleep(1);
};
With load tests we want to slowly ramp up the number of requests to our target of 200 requests per second. Then we keep the test running for a decent amount of time, say 20 minutes, and then ramp it back down again.
We can do this by setting up stages in our options block. So we can get rid of what we have currently and replace it with the stages:
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 200 }, // ramp-up
    { duration: '20m', target: 200 }, // stable
    { duration: '1m', target: 0 }, // ramp-down
  ],
};

export default () => {
  http.get('http://localhost:5157/age/1990-03-02');
  sleep(1);
};
This is good, but it is going to call our API with the same parameters over and over again. I haven't got any caching set up, but generally it is good practice to vary the test data, so it is more representative of a real-life load.
With k6 you can set up a shared array that gives you a place to store data for your tests. As my test needs dates as an input, I can write a quick function that will generate me 100 dates within the last 100 years, and add that to the shared array.
I then just need to select a random date from my array and pass that in as a parameter.
import http from 'k6/http';
import { sleep } from 'k6';
import { SharedArray } from 'k6/data';

export const options = {
  stages: [
    { duration: '5m', target: 200 }, // ramp-up
    { duration: '20m', target: 200 }, // stable
    { duration: '5m', target: 0 }, // ramp-down
  ],
};

// Generate 100 random dates within the last 100 years to use as test data
const dates = new SharedArray('dates', function () {
  var dates = [];
  var currentDate = new Date();
  var minDate = new Date();
  minDate.setFullYear(currentDate.getFullYear() - 100);
  for (var i = 0; i < 100; i++) {
    var randomTime = Math.random() * (currentDate.getTime() - minDate.getTime());
    var randomDate = new Date(minDate.getTime() + randomTime);
    dates.push(randomDate.toISOString());
  }
  return dates;
});

export default () => {
  // Pick a random date from the shared array for each request
  const randomDate = dates[Math.floor(Math.random() * dates.length)];
  http.get(`http://localhost:5157/age/${randomDate}`);
  sleep(1);
};
You will also want to make sure that your application is actually responding correctly. There are various ways to do this, but it can be as simple as checking the status code of your responses with k6's check function:
export default () => {
  const randomDate = dates[Math.floor(Math.random() * dates.length)];
  const res = http.get(`http://localhost:5157/age/${randomDate}`);
  // check comes from the core k6 module: import { check } from 'k6';
  check(res, { '200': (r) => r.status === 200 });
  sleep(1);
};
Load tests are usually used to make sure that our application is meeting specific performance requirements. You might have a requirement that says requests need to come back within 100ms.
With k6 you can add in expected performance thresholds as part of the setup, and it will test those for you.
For example, here I am adding in a requirement that 99% of requests should come back within 100ms.
export const options = {
  stages: [
    { duration: '5m', target: 200 }, // ramp-up
    { duration: '20m', target: 200 }, // stable
    { duration: '5m', target: 0 }, // ramp-down
  ],
  thresholds: {
    http_req_duration: ['p(99)<100'], // 99% of requests must complete within 100ms
  },
};
You can then add this test as part of your CI/CD pipeline to make sure any changes you have made haven't affected the performance of your application.
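If the test runs as part of a pipeline, you may not want to wait for a clearly failing run to finish. As far as I'm aware, k6 thresholds also accept an object form with an abortOnFail flag that stops the test as soon as the threshold is crossed; here is a minimal sketch using the same 100ms requirement:
export const options = {
  thresholds: {
    // stop the run early if the 99th percentile climbs above 100ms
    http_req_duration: [{ threshold: 'p(99)<100', abortOnFail: true }],
  },
};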
When running performance tests it is important to make sure you are monitoring your application and server to see how it is responding. You don't want your server running out of memory or staying at 100% CPU for the entire time.
In my case, I am using htop to see how my Raspberry Pi responds at different levels. Ideally you would have some proper monitoring set up using something like Grafana or Datadog.
Once complete you will get an output like this:
Note: Yes I did only run this for 6 mins just to show the output, but an actual load test would be longer 😉.
You can see the tick next to http_req_duration, which shows that it passed the threshold we set.
Stress Test
With a stress test we want to see how the application will handle an increased load.
You can pick a realistic number for this which might be a 50% to 100% increase in the number of requests. Alternatively, you could increase the number of requests until your application breaks, so you know what your system is capable of.
In the example below, I am slowly ramping up the number of requests over the space of a minute, keeping it stable for 5 minutes before ramping up the number of requests again.
import http from 'k6/http';
import { check, sleep } from 'k6';
import { SharedArray } from 'k6/data';

export const options = {
  stages: [
    { duration: '1m', target: 200 }, // ramp up
    { duration: '5m', target: 200 }, // stable
    { duration: '1m', target: 800 }, // ramp up
    { duration: '5m', target: 800 }, // stable
    { duration: '1m', target: 1000 }, // ramp up
    { duration: '5m', target: 1000 }, // stable
    { duration: '5m', target: 0 }, // ramp-down to 0 users
  ],
};

// Generate 100 random dates within the last 100 years to use as test data
const dates = new SharedArray('dates', function () {
  var dates = [];
  var currentDate = new Date();
  var minDate = new Date();
  minDate.setFullYear(currentDate.getFullYear() - 100);
  for (var i = 0; i < 100; i++) {
    var randomTime = Math.random() * (currentDate.getTime() - minDate.getTime());
    var randomDate = new Date(minDate.getTime() + randomTime);
    dates.push(randomDate.toISOString());
  }
  return dates;
});

export default () => {
  const randomDate = dates[Math.floor(Math.random() * dates.length)];
  const res = http.get(`http://localhost:5157/age/${randomDate}`);
  check(res, { '200': (r) => r.status === 200 });
  sleep(1);
};
This is how my Raspberry Pi responded at each stage.
200 req/s
At 200 req/s things are stable with average CPU under 20%.
800 req/s
At 800 req/s the Raspberry Pi is starting to show a bit of strain, with CPU staying around 50%.
1000 req/s
Finally, at 1000 req/s, the Raspberry Pi is surprisingly still holding up, with an average load of around 55%.
We would have to go even further if we wanted to find its breaking point, although a sustained load of that size might not be realistic.
Here are the results:
We can see that the average and max response times are a lot higher than they were with the load test, but still respectable.
Spike Test
There may be occasions where you get a sudden spike of traffic that quickly dies off.
Maybe your application gets on the front page of Hacker News, and suddenly it has to deal with a very high load in a short space of time.
This is where spike tests come in, where you have a very quick ramp up of traffic that lasts a short amount of time before quickly dying down.
In this example we quickly increase traffic to 2,000 req/s, hold it there for 2 minutes and then quickly drop back down again.
export const options = {
  stages: [
    { duration: '30s', target: 2000 }, // ramp up
    { duration: '2m', target: 2000 }, // stable
    { duration: '30s', target: 0 }, // ramp-down to 0 users
  ],
};
You can see at 2,000 req/s my poor old Raspberry Pi is struggling a bit.
Occasionally some of the cores did reach 100%. The memory usage was also a lot higher than it was in the other tests.
I didn't include the 100ms threshold in this test, but you can see from the results that it would have failed as the 95th percentile reached 138ms.
Soak Test
In some cases, running a load test for half an hour isn't going to be enough to gauge the stability of your application.
Things like memory leaks and excessive disk space usage happen gradually, and you aren't likely to notice them unless you run an application for an extended period of time.
The soak test is similar to the load test we did earlier, except you are running it for a very long time, typically around 8 hours, but you can run it for longer if you need to.
Generally, we set the number of virtual users to match the average load on your system, but if you are trying to test for excessive use of resources like disk space, then you may want to increase that amount, so you don't have to run the test for so long.
You can also use soak tests to make sure that you don't lose any requests if your application has to restart, or to check there aren't any rate limits on any third party APIs that you are using.
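For example, if a third-party API you depend on might start rate limiting you part way through a long run, you could add a dedicated check for it alongside the normal status check. A quick sketch, reusing the imports and the shared dates array from the earlier scripts (the check labels are just my own):
export default () => {
  const randomDate = dates[Math.floor(Math.random() * dates.length)];
  const res = http.get(`http://localhost:5157/age/${randomDate}`);
  check(res, {
    'status is 200': (r) => r.status === 200,
    'not rate limited': (r) => r.status !== 429, // a 429 here would suggest we have hit a rate limit
  });
  sleep(1);
};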
The stages for a soak test will typically include a short ramp up of traffic followed by the sustained load and then a short ramp down.
export const options = {
  stages: [
    { duration: '5m', target: 200 },
    { duration: '8h', target: 200 },
    { duration: '5m', target: 0 },
  ],
};
Final Thoughts
I hope this overview of performance testing has been useful. Make sure you check out the k6 documentation as there are a lot of additional options on there that might be useful for you.
📨 Are you looking to level up your skills in the tech industry?
My weekly newsletter is written for engineers like you, providing you with the tools you need to excel in your career. Join here for free →