DEV Community

Tanisha G

Implementation of Cluster in Node.js


As we all know, Node.js is a popular runtime environment that offers scalability and high performance. To make use of multiple CPU cores at the process level, Node.js provides the "Cluster" module.

Do not confuse cluster with worker threads: worker threads run JavaScript in parallel inside a single process, while Cluster runs processes in parallel by creating multiple instances of Node.js.

Benefits of clustering

1. Improves performance:
Clustering enables your Node.js application to process more concurrent requests by making use of multiple CPU cores. This leads to faster response times and better overall performance, particularly in applications with heavy traffic.

2. Scalability:
Clustering lets you scale across all the cores of a single machine, and the same pattern extends to horizontal scaling: adding more machines to your infrastructure and dividing the workload among them. As your application grows, you can scale it by adding more worker processes, and eventually more servers.

3. Fault Tolerance:
If a bug or other problem causes one of the worker processes to crash, the remaining workers can continue serving incoming requests. This makes your application more fault tolerant and helps ensure it keeps functioning even if one or more of its components fail.

Now let's dive into running an API with and without clustering and comparing the load-testing results.

Create a simple JS file with an Express server listening on port 3000:

const express = require("express");
const app = express(); // create the Express app
const port = 3000;

app.listen(port, () => {
  console.log(`App listening on port ${port}`);
});

Next, install the load-testing tool. Note that cluster and os are Node.js core modules, so they ship with Node and need no installation; only loadtest has to be installed:

npm install -g loadtest

In the code below, the os module reports information about the operating system (here, how many CPUs are available for parallel work). We then check whether the current process is the primary; if it is, cluster.fork() creates the worker processes.

const express = require("express");
const cluster = require("cluster");
const port = 3000;
// availableParallelism() requires Node 18.14+
const totalCPUs = require("os").availableParallelism();

if (cluster.isPrimary) {
  console.log(`Number of CPUs is ${totalCPUs}`);
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < totalCPUs; i++) {
    cluster.fork();
  }

  cluster.on("exit", (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
    console.log("Let's fork another worker!");
    cluster.fork();
  });
} else {
  const app = express();
  console.log(`Worker ${process.pid} started`);

  app.get("/", (req, res) => {
    res.send("Hello World!");
  });

  app.get("/api/:n", function (req, res) {
    let n = parseInt(req.params.n);
    let count = 0;

    if (n > 5000000000) n = 5000000000;

    for (let i = 0; i <= n; i++) {
      count += i;
    }

    res.send(`Final count is ${count}`);
  });

  app.listen(port, () => {
    console.log(`App listening on port ${port}`);
  });
}

Now let's do the load testing:

 loadtest http://localhost:3000/api/50000000 -n 1000 -c 100

Below are the results for the load-testing pattern above; compare the requests per second (rps) and the mean latency. First, the single-process server:

$ loadtest http://localhost:3000/api/50000000 -n 1000 -c 100
Requests: 9 (2%), requests per second: 2, mean latency: 2388.1 ms
Requests: 82 (16%), requests per second: 16, mean latency: 2554.7 ms
Requests: 82 (16%), requests per second: 15, mean latency: 7957.6 ms
Requests: 100 (20%), requests per second: 4, mean latency: 5470.7 ms
Requests: 162 (32%), requests per second: 12, mean latency: 10967.5 ms
Requests: 109 (22%), requests per second: 5, mean latency: 10633.8 ms
Requests: 153 (31%), requests per second: 9, mean latency: 11488.6 ms
Requests: 200 (40%), requests per second: 8, mean latency: 11290.9 ms
Requests: 201 (40%), requests per second: 10, mean latency: 11753.6 ms
Requests: 234 (47%), requests per second: 7, mean latency: 11896.6 ms
Requests: 221 (44%), requests per second: 4, mean latency: 11999.5 ms
Requests: 300 (60%), requests per second: 13, mean latency: 12040.1 ms
Requests: 300 (60%), requests per second: 16, mean latency: 11784 ms
Requests: 308 (62%), requests per second: 2, mean latency: 11705.5 ms
Requests: 309 (62%), requests per second: 2, mean latency: 11520.4 ms
Requests: 385 (77%), requests per second: 15, mean latency: 11541.5 ms
Requests: 373 (75%), requests per second: 13, mean latency: 11684.4 ms
Requests: 400 (80%), requests per second: 3, mean latency: 11566.5 ms
Requests: 409 (82%), requests per second: 7, mean latency: 12009.4 ms
Requests: 450 (90%), requests per second: 10, mean latency: 12009.5 ms
Requests: 444 (89%), requests per second: 7, mean latency: 12018 ms

Target URL:          http://localhost:3000/api/50000000
Max requests:        1000
Concurrent clients:  200
Running on cores:    2
Agent:               none

Completed requests:  1000
Total errors:        0
Total time:          58.25 s
Mean latency:        10472.6 ms
Effective rps:       17

Percentage of requests served within a certain time
  50%      11675 ms
  90%      12033 ms
  95%      12076 ms
  99%      12155 ms
 100%      12191 ms (longest request)

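The double-digit-second latencies above are exactly what you would expect from a single process: the summing loop in /api/:n is synchronous, so it blocks the event loop and every other request queues behind it. Extracting the handler's loop into a standalone function (sumTo is a hypothetical name for illustration, not from the article) makes this easy to see:

```javascript
// The loop from the /api/:n handler, extracted for illustration.
function sumTo(n) {
  if (n > 5000000000) n = 5000000000;
  let count = 0;
  for (let i = 0; i <= n; i++) {
    count += i; // synchronous work: nothing else runs on this event loop meanwhile
  }
  return count;
}

// One request's worth of work; with 100 concurrent clients and a single
// process, each request waits for all the loops queued ahead of it.
console.log(sumTo(50000000)); // 1250000025000000
```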

The next result is from hitting the same API with clustering enabled:

$ loadtest http://localhost:3000/api/50000000 -n 1000 -c 100
Requests: 139 (28%), requests per second: 28, mean latency: 2377.8 ms
Requests: 139 (28%), requests per second: 28, mean latency: 2343.4 ms
Requests: 278 (56%), requests per second: 28, mean latency: 3438.2 ms
Requests: 281 (56%), requests per second: 28, mean latency: 3441.2 ms
Requests: 405 (81%), requests per second: 25, mean latency: 3855.3 ms
Requests: 410 (82%), requests per second: 26, mean latency: 3864.8 ms

Target URL:          http://localhost:3000/api/50000000
Max requests:        1000
Concurrent clients:  200
Running on cores:    2
Agent:               none

Completed requests:  1000
Total errors:        0
Total time:          18.563 s
Mean latency:        3318.1 ms
Effective rps:       54

Percentage of requests served within a certain time
  50%      3515 ms
  90%      3924 ms
  95%      3961 ms
  99%      3992 ms
 100%      4004 ms (longest request)

To sum up, Node.js clustering is an essential method for realizing the full potential of your server-side apps. In addition to enhancing performance, it offers a scalable and resilient base, guaranteeing that your applications can adapt to growing traffic levels and still function well in the face of unforeseen difficulties.

I’ll keep adding more topics as I learn. Thank you!
