
💻 Bleeding Code-By John Jardin

Increase Node JS Performance With Libuv Thread Pool

In this 5th instalment of my “Node JS Performance Optimizations” series, I show you how to increase Node JS performance with thread pool management. We achieve this by understanding how Libuv works, how the thread pool works and how to configure the number of threads based on your machine specs.

Are you a Node developer who’s not yet familiar with the inner workings of Node JS? If so, you might be deploying production applications with default configurations that have been in place since you installed Node. In this article, I’m going to touch on a lesser-known configuration that could very well double the performance of some of your application’s operations. This will depend on a number of factors, but chances are good this is going to be a win for many.

Watch The Video On YouTube

The Node JS Runtime Environment

The Node runtime environment is made up of a few moving parts. We should all be familiar with the Google V8 engine, which is responsible for executing our JavaScript logic. There is, however, a lesser-known library called Libuv, which is responsible for managing asynchronous I/O operations.

These I/O operations are also known as “heavy-duty tasks” that relate to the operating system. Tasks such as file and folder management, TCP/UDP transactions, compression, encryption, etc. are handled via Libuv.

Now, while most of these operations are asynchronous by design, there are a few that are synchronous and that, if not handled correctly, could cause our applications to be blocked. It is for this reason that Libuv has what is called a “Thread Pool”.

Libuv Thread Pool

Libuv initiates a thread pool of 4 threads, to which it offloads these synchronous operations. In doing this, Libuv ensures that our application does not get blocked unnecessarily by synchronous tasks.
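
A quick way to see that default pool of 4 in action is to queue more thread-pool-bound tasks than there are threads. The sketch below is a minimal illustration (the iteration count is arbitrary and the timings will vary with your hardware): crypto.pbkdf2 is one of the operations Node hands to the Libuv thread pool, so on a machine with at least 4 spare cores, the first 4 hashes should finish at roughly the same time, while the 5th only starts once a thread frees up.

const crypto = require('crypto')

const start = Date.now()

// With the default pool of 4 threads, calls 1 to 4 run in parallel;
// call 5 waits in the queue until one of those threads is free
for (let i = 1; i <= 5; i++) {
  crypto.pbkdf2('password', 'salt', 100000, 512, 'sha512', () => {
    console.log(`Hash ${i} done after ${Date.now() - start}ms`)
  })
}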

It is here that we can take advantage of a setting to better suit the specs of the machine or virtual machine our app will be deployed to. This is because we are allowed to change the default of 4 threads to anything up to 1024 threads. We achieve this by setting the UV_THREADPOOL_SIZE environment variable.
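
As a minimal sketch (the value of 8 below is purely illustrative), the variable can be set right at the top of your entry file, before anything submits work to the thread pool; a dynamic version based on your machine’s cores follows later in this article.

// Must run before the first task hits the thread pool,
// because Libuv reads this value when the pool is first created
process.env.UV_THREADPOOL_SIZE = 8 // illustrative value; tune to your machine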

Physical vs Logical CPU Cores

To better understand what to set the UV_THREADPOOL_SIZE to, we need to first understand how many logical cores our machine is running. If we take my MacBook Pro as an example, it’s running 6 Physical CPU cores (Intel).

However, these cores have hyperthreading, which means that each core can run 2 operations simultaneously. We therefore regard 1 physical core with hyperthreading as 2 logical cores. In my case, my MacBook Pro is running 12 logical cores.
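
If you want to see what Node reports for your own machine, the os module exposes this directly: os.cpus() returns one entry per logical core. The counts in the comments below are from my 6-core, hyperthreaded example and will differ on your hardware.

const OS = require('os')

const cores = OS.cpus()     // one entry per logical core
console.log(cores.length)   // 12 on a 6-core CPU with hyperthreading
console.log(cores[0].model) // e.g. the CPU model string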

How To Increase Node JS Performance

The recommendation is to set the UV_THREADPOOL_SIZE to the number of logical cores your machine is running. In my case I will set the thread pool size to 12.

It makes no sense to set the size to anything more than the number of logical cores your hardware is running; doing so could actually result in poorer performance.

How To Check For Logical Cores

When it comes to deployment, the last thing you want to do is manually set the UV_THREADPOOL_SIZE, as your app might run on multiple environments with different machine specifications. As such, we need a way to dynamically set the thread pool size the moment the app is launched on the relevant environment.

The good news is that this is quite simple, but it must be handled with caution: the value needs to be set before anything in your application submits work to the thread pool, because Libuv reads it the first time the pool is used. To achieve this, add the below code to the very top of the root JS file of your Node application:

const OS = require('os')
process.env.UV_THREADPOOL_SIZE = OS.cpus().length

The OS module is native to Node JS. It has a function, cpus(), which returns an array with one entry per logical core, so its length gives the total number of logical cores your machine is running. What’s nice is that, should your CPU cores not have hyperthreading, this will simply equal the number of physical CPU cores instead, which is perfect.

Closing

I trust this article proved valuable. I recommend watching the embedded video as well as checking out my Source Code repo on GitHub which has code samples of everything mentioned here.

Till next time, cheers 😎

Top comments (11)

Akhmad Fauzan

Great article.
What do you think about deploying a nodejs app on OpenShift?
As we know, we can configure the CPU size and scale up the services.
Will this still increase the app's performance with the UV_THREADPOOL_SIZE config?

💻 Bleeding Code-By John Jardin

Hi Akhmad. Thanks for the comment 👍.

So, with my limited knowledge of OpenShift, when you re-configure your CPU size, your node app doesn't get restarted and remains in memory. Therefore, it will not take advantage of the new CPU cores until it's rebooted.

However, if you autoscale your node services so that OpenShift spawns new replicas, then the new replicas will take advantage of the new CPUs.

If I didn't answer your question correctly, maybe provide me with a bit more info ;)

Tomer Raitz

Great article, thank you for that!
I did not know about this ability in nodejs, so I have a newbie question: does it make a difference if I run one project on the machine or several projects? For example, I use a cloud host like Heroku, and I believe they run many nodejs projects on a single machine. Is this important?

💻 Bleeding Code-By John Jardin

Hi Tomer. Thanks for the response and sorry for the late reply. It seems Dev.to doesn't notify me when users comment on my articles.

To answer your question simply, you need to balance how many nodejs apps you run on a machine, because each nodejs app requires resources to initiate and keep running. If you have 5 web services running as 1 nodejs app, it would be less resource intensive than running 1 nodejs app per web service (i.e. 5 nodejs apps).

However, you don't want to put yourself in a position where all your services are now running as 1 nodejs application...this would start looking like a monolithic app, which is not what you want.

I would group together a subset of services that make sense to be together and run them as a nodejs application.

Make sense?

Tomer Raitz

Yes, thank you for the answer. I think this information is essential; even though it's something the DevOps team / CTO needs to deal with, it's good to know about this kind of configuration to help them make the project better.

louieaniez

Hi! I just have a question. I've implemented something like a watchdog mechanism for user sessions using socket.io: I send an event at a regular interval from my client controller in AngularJS to my node server, and as long as the client's browser is active, the interval continues. Does that eventually affect my server's response time due to the regular interval? Also, I'm using callbacks in all of my database queries, like this:

app.post('/postData', (req, res) => {
  modules.postData(req.body, function (response) {
    if (!response) { res.send('Error') }
    else { res.status(200).send(response) }
  })
})

Does this affect the response time of the server initially? This is seriously a pain in the ass, because I've created an embeddable app, and before it loads, it has to get its data from my node server. However, the server very often returns a connection timeout error, which causes the app not to display in the client's iframe. Any idea how I should fix this? Thanks in advance!

💻 Bleeding Code-By John Jardin

Hi there. Regular pings to your server should be fine as long as the actual processing time and payloads are kept to a minimum. Your example above doesn't really give me too much insight into what's happening.

I see below you are new to Node. Firstly welcome to Node World :). I recommend joining the Node Reddit Group and bouncing ideas and asking for help there, but you might need to provide a bit more info to get a response.

louieaniez

I just started using nodeJS actually; I think it's been 2 months since I started to code in nodejs. There's a lot to learn, I guess.

Al Farabi

Good write-up, brother. One question: if our production application is managed by PM2, do we still need to manually adjust the thread pool size to utilize the maximum CPU potential, as demonstrated in your example?

Victor Neves

Thanks for sharing this!
This helped me a lot, and now I think I have a much better overview of the problem that I'm currently facing in my app.
After running the basic test, and even setting the UV_THREADPOOL_SIZE dynamically, I got an average of 6.4 requests per second :/

The app has Auth0 authentication to access basically all pages. Is it possible to pass the authentication when running the tests in order to test those pages? I tried running the project, accessing it in the browser to do the authentication, and only running the tests afterwards, but it doesn't work; I see the redirect from localhost:3000/ to localhost:3000/login when running the test :/

louieaniez

Please help, thanks!