Node.js applications run on a single thread. If you suddenly have to serve lots of clients at once, you're bound to run into problems with throughput. Learn how to mitigate these with a simple strategy: Using child processes.
Nowadays, everything is containerized.
This is generally a good thing, because:
- There's a clear distinction between software components.
- You can scale up and down as you like.
- And do more stuff.
The cost of using Docker
These benefits, however, come with hidden costs.
If you don't believe me, ask ChatGPT: "What are possible technical costs of using container software such as docker?"
Sometimes you just want to keep it simple. By simple, I mean: using Node core modules. There are process managers like pm2 or forever, which bring their own
- features
- learning curve
- complexity
And so forth. You see where I'm going.
No. Today, I just want to start many HTTP servers at once.
The 'child_process' module
You can achieve this with a Node core module called child_process. It allows you to execute JavaScript files as distinct system processes.
This means you can start as many web servers as you like (or have free ports for). For optimal results, you should limit yourself to one process per available CPU core on your machine. If you started more than one HTTP server per CPU core, they would cannibalize each other's resources.
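If you're unsure how many cores your machine has, a quick sketch using the os core module (which we'll also rely on further below) will tell you:

```js
// Prints the number of logical CPU cores reported by the operating system
const os = require('os');

console.log(`Available cores: ${os.cpus().length}`);
```

On my machine, this prints 8.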
So let's do it.
Write code for the server
We'll use the 'http' module to spin up a server that answers with the port it was started on. In reality, this server would be responsible for connecting to a data layer, or for validating incoming requests.
The core difference is that we pass in the PORT as a command-line argument. It's not declared inside the server module; instead, the parent Node process passes it down when it spawns the child process.
```js
// server.js
const http = require('http');

// The port is handed down from the parent process as a command-line argument
const PORT = process.argv[2];

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello from port ' + PORT);
});

server.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});
```
Write the application's main function
We'll use another Node core module to determine the number of CPU cores on our machine. Then, it's time to spin up one server process for each available core.
```js
const cp = require('child_process');
const os = require('os');

// os.cpus() returns one entry per logical CPU core
const cpus = os.cpus();

cpus.forEach((cpu, index) => {
  const PORT = `${4000 + index * 10}`;
  console.log(`Starting server on port ${PORT} with cpu ${cpu.model}, speed ${cpu.speed}`);
  // Spawn a separate Node process running server.js and hand it the port
  cp.fork('./server.js', [PORT]);
});
```
The port range starts at 4000 and increases by 10 for each additional server that starts. Since my machine has 8 CPU cores available, I'll see the following console output:
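Your output will differ depending on your hardware; roughly, it looks like this (CPU model and speed are placeholders):

```text
Starting server on port 4000 with cpu <model>, speed <speed>
Starting server on port 4010 with cpu <model>, speed <speed>
...
Starting server on port 4070 with cpu <model>, speed <speed>
Server running at http://localhost:4000
Server running at http://localhost:4010
...
```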
Which also means we're done. You can now visit these server URLs to verify that everything is working correctly.
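If you'd rather check from a script, here's a small sketch (assuming the processes from above are still running; the file name check.js is just an example):

```js
// check.js -- pings a few of the spawned servers and prints their answers
const http = require('http');

[4000, 4010, 4020].forEach((port) => {
  http.get(`http://localhost:${port}`, (res) => {
    res.on('data', (chunk) => console.log(chunk.toString())); // e.g. "Hello from port 4000"
  });
});
```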
How to proceed
Several servers running at once are a good start toward reducing server load. But there's more to be done.
By itself, the above method brings no real value. You probably don't want to modify your frontend app to decide which server it requests data from.
It's much easier - and good practice - to employ a load balancer. You could use an Nginx config like the following:
```nginx
http {
    # Define upstream servers for load balancing
    upstream node_services {
        # ip_hash; # uncomment if you handle server-side sessions
        server localhost:4000;
        server localhost:4010;
        server localhost:4020;
        # ... more node services
    }

    server {
        listen 80;

        # Add a reverse proxy location
        location /api {
            proxy_pass http://node_services;
        }
    }
}
```
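From the frontend's perspective, there is now only one endpoint to talk to; Nginx decides which Node process actually answers. A quick sketch, assuming the config above is served on port 80:

```js
// Hypothetical frontend call -- every request goes through the proxy,
// which forwards it to one of the servers in the upstream group.
fetch('http://localhost/api')
  .then((res) => res.text())
  .then((body) => console.log(body)); // e.g. "Hello from port 4010"
```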
Finally, it's up to you how to structure your architecture. If you like to keep things simple, the child_process module is a great choice. When working on large-scale applications, though, you will still want to use the industry's favorite tooling.
Top comments (4)
Hi Gaurav. Thank you for your interest :)
I think there's no way of securely setting up reverse proxy addresses if you don't know the ports beforehand. You could use nmap to figure out the ports, but then you would also need to have knowledge of the port range.
I would do some investigation on your machine beforehand. It's quicker and easier. You can get the number of available cores by typing nproc --all into a terminal.
The same goes for docker-compose.
Nicely written article 👍
But I have one confusion regarding this piece in particular: how would we go about setting up the addresses in the server group if we don't know the port numbers? Or can we apply the same logic we use to assign port numbers to the http/express servers here as well?
I'm also currently learning Nginx and Docker and would love to know how to do this dynamically.
Could you use ES6 modules?
Good question. I tried that just now. You would have to add a package.json file in which to declare "type": "module". Then you can replace all the 'require' calls with 'import' and voilà.
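For reference, here's a minimal sketch of what the ES-module variant of the parent file could look like, assuming "type": "module" is set in package.json:

```js
// ESM sketch -- same behavior as the CommonJS version from the article
import { fork } from 'child_process';
import os from 'os';

os.cpus().forEach((cpu, index) => {
  const PORT = `${4000 + index * 10}`;
  fork('./server.js', [PORT]);
});
```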