Hi everyone! Following our previous article about load testing your app, we will be building a small test app using Node.js.
If you are not using Node you can still follow along, as everything is straightforward and applies to any language.
In this app we will be focusing on sending concurrent requests.
Concurrent means happening at the same time, as in two movies showing at the same theater on the same weekend.
Concurrency gives you an actual reading of your app's performance: you will really know whether your app can handle requests arriving at the same time.
Points
Here we will think about what our app needs in order to work properly.
First of all, we know we are going to make HTTP requests; I will be using the http module from Node, but you can add an external package if you want. Because we will be sending concurrent requests, the app has to be asynchronous: an HTTP request takes some time to respond, and if we waited for each request to finish before moving on to the next one we would be working synchronously, and the requests would not be sent concurrently. We also need a timer to measure how long each request takes to respond. In this example I will be sending 10 requests; this number can change, of course. To summarize the above:
1- http package
2- asynchronous
3- timer
4- default number of requests will be 10
Writing the code
I will start by importing the http module from Node and hrtime from process:
import { request } from "http";
import { hrtime } from "process";
hrtime is a built-in function that will help us calculate the time each request took. After that we will define a URL to send requests to:
import { request } from "http";
import { hrtime } from "process";
const url = "http://localhost:8000/";
Note that in the above I am using a local server, but you can use any URL. Our code currently tests only against http, not https, so in case you want to test with https make sure to import "https" instead, or add logic to cover both cases.
import { request } from "http";
// in case of https
import { request } from "https";
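One simple way to cover both cases is to pick the request function based on the URL scheme. This is just a sketch (the requestFor helper name is mine, not part of the article's app), assuming the URL string always starts with its scheme:

```javascript
import { request as httpRequest } from "http";
import { request as httpsRequest } from "https";

// Pick the matching request function for a given URL.
// Assumes the url always starts with "http://" or "https://".
function requestFor(url) {
  return url.startsWith("https://") ? httpsRequest : httpRequest;
}
```

Then requestFor(url)(url, callback) behaves like request(url, callback) for either scheme.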
By default we will be sending 10 requests concurrently. We also need to define a counter: since we are sending requests asynchronously, we don't know when we will receive the final response, so each time a response arrives we will increase the counter, and when it reaches 10 that means we have received the last response and should calculate the results.
import { request } from "http";
import { hrtime } from "process";
const url = "http://localhost:8000/";
const numberOfRequests = 10;
let counter = 0;
let overAllTime = 0;
Now let's create a function that is going to send a request:
function makeRequest() {
  const startTime = hrtime();
  const req = request(url, (res) => {
    res.on("data", () => {});
    res.on("end", () => {
      const endTime = hrtime();
      const responseTime =
        (endTime[0] - startTime[0]) * 1000 + (endTime[1] - startTime[1]) / 1e6;
      overAllTime += responseTime;
      counter++;
    });
  });
  req.end();
}
So let's quickly go through what we have done above.
We created a variable called startTime with the help of hrtime() from the process module.
The process.hrtime() method measures code execution time. It returns an array containing the current high-resolution real time as [seconds, nanoseconds]. The advantage of process.hrtime() is that it measures execution time very accurately, even when it lasts less than a millisecond.
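As a quick illustration of that [seconds, nanoseconds] shape, here is a standalone sketch (not part of the final app) that times a cheap loop and converts the difference to milliseconds:

```javascript
import { hrtime } from "process";

const start = hrtime();          // [seconds, nanoseconds]
for (let i = 0; i < 1e6; i++) {} // something cheap to measure
const end = hrtime();

// Convert the [seconds, nanoseconds] difference to milliseconds.
const ms = (end[0] - start[0]) * 1000 + (end[1] - start[1]) / 1e6;
console.log(ms);
```

Newer Node versions also offer process.hrtime.bigint(), which returns a single nanosecond count and avoids the array arithmetic.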
We also created another variable called req and initiated the request with it. At that moment the request is immediately handed to the operating system, and we communicate with it through callbacks. request takes a URL, in this case the one we defined above, and a callback function that will be invoked when the response comes back. Through the parameter passed to that callback we can listen to several events; we will focus on the ones below:
res.on("data", () => {});
res.on("end", () => {});
If you want to read the data coming from the response, you should listen to the data event; note that the data arrives in chunks. We don't need to read the data in this app, so we will leave that handler empty, but you can read more about it in the Node documentation. The main thing here is the end event, which means the response has been fully received. Whenever the function we passed to the end event is invoked, we create a variable called endTime so we can calculate the difference between startTime and endTime:
(endTime[0] - startTime[0]) * 1000 + (endTime[1] - startTime[1]) / 1e6;
The expression above calculates the difference and converts it to milliseconds; we assign the value to a variable, in our example called responseTime. After that we add responseTime to overAllTime so we can compute the average later, and we increase the counter by one so we can tell when the last response has arrived. We should also end the request with req.end(), which tells the request object that we are done sending it.
The last response
So when the counter equals 10, or whatever number of requests you specified, we should print the results:
function makeRequest() {
  const startTime = hrtime();
  const req = request(url, (res) => {
    res.on("data", () => {});
    res.on("end", () => {
      const endTime = hrtime();
      const responseTime =
        (endTime[0] - startTime[0]) * 1000 + (endTime[1] - startTime[1]) / 1e6;
      overAllTime += responseTime;
      counter++;
      if (counter === numberOfRequests) {
        const averageTime = (overAllTime / numberOfRequests).toFixed(3);
        console.log("the average time the request took is " + averageTime);
      }
    });
  });
  req.end();
}
In our app we just print the average time a request takes to get a response; that's why we keep track of overAllTime. We calculate the average by dividing overAllTime by numberOfRequests. That would leave us with a long decimal number, which is why we use toFixed to keep only 3 digits after the decimal point.
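To make the rounding concrete, here is the same calculation with a made-up total (the 31.4159265 is purely illustrative):

```javascript
const overAllTime = 31.4159265; // hypothetical total, in milliseconds
const numberOfRequests = 10;

// toFixed rounds to 3 digits after the decimal point.
const averageTime = (overAllTime / numberOfRequests).toFixed(3);
console.log(averageTime); // "3.142"
```

Note that toFixed returns a string, not a number, which is fine here since we only print it.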
Looping through the makeRequest function
And we are almost finished: we just need to loop over our function to make the requests:
for (let i = 0; i < numberOfRequests; i++) {
makeRequest();
}
As mentioned above, the requests are sent asynchronously. That means we don't wait for one request to finish before moving on to the next; they are all sent at the same time, concurrently, so we can actually measure the performance of our app. Our final app should look something like this:
import { hrtime } from "process";
import { request } from "http";
const url = "http://localhost:8000/";
const numberOfRequests = 10;
let counter = 0;
let overAllTime = 0;
function makeRequest() {
  const startTime = hrtime();
  const req = request(url, (res) => {
    res.on("data", () => {});
    res.on("end", () => {
      const endTime = hrtime();
      const responseTime =
        (endTime[0] - startTime[0]) * 1000 + (endTime[1] - startTime[1]) / 1e6;
      overAllTime += responseTime;
      counter++;
      if (counter === numberOfRequests) {
        const averageTime = (overAllTime / numberOfRequests).toFixed(3);
        console.log("the average time the request took is " + averageTime);
      }
    });
  });
  req.end();
}
for (let i = 0; i < numberOfRequests; i++) {
makeRequest();
}
Testing our app
Now let's run our app and check that it's working fine.
In Node you run your app by entering node {{name of your app}}; I will name mine load-test.js:
node load-test
If everything is working correctly, you should see "the average time the request took is {{number}}" printed in your terminal.
To test the app I created a small server just to demonstrate the load tester we built; you can test it against whatever application you want:
import http from "node:http";
const server = http.createServer((req, res) => {
res.writeHead(200, { "Content-Type": "text/plain" });
res.write("request was received");
console.log("request");
res.end();
});
server.listen(8000, () => {
console.log("server is listening on port 8000");
});
So now if I run the tester app against my server's URL, I should get some actual results.
Comparison
Now, to make sure our app really works as expected, let's test it against some popular tools.
1- ab (Apache Bench)
As you can see under "Time per request", ab reports 5.300 ms, which is pretty close to ours.
2- hey
As you can see, hey reports an average of 0.0075 sec, which is 7.5 ms; compared to our app, that is also pretty close.
Conclusion
Note that our app is very small compared to the other tools, which have many features and customization options. In our sample app we only scratched the surface, but what I want to say in the end is that every app starts small like this, and what follows is just continuous improvement. Taking the app we built as an example: we could make it accept arguments like the other tools do, let the user specify the number of concurrent requests or the total number of requests, calculate how long each individual request takes to finish, and so on. There is no limit here.
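As one example of such an improvement, the number of requests could be read from the command line. This is a sketch; the parseCount helper is my own name, assuming the script is run as node load-test.js {{count}}:

```javascript
// process.argv is [node-binary, script-path, ...user-args].
function parseCount(arg) {
  const n = Number.parseInt(arg, 10);
  return Number.isNaN(n) ? 10 : n; // fall back to the default of 10
}

const numberOfRequests = parseCount(process.argv[2]);
console.log(numberOfRequests);
```

The rest of the app would stay exactly the same, since it already uses numberOfRequests everywhere instead of a hardcoded 10.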


