
Andrei Pechkurov

Originally published at itnext.io

An Intro to Node.js That You May Have Missed


Everybody knows that Node.js is an open-source, cross-platform JavaScript runtime. Most Node.js developers know that it’s built on top of V8, a JS engine, and libuv, a multi-platform C library that provides support for asynchronous I/O based on event loops. But only a few developers can clearly explain how Node.js works internally and how it affects their code. That’s probably because many Node.js developers already know JavaScript before learning Node.js. So, they often start learning it with Express.js, Sequelize, Mongoose, Socket.IO and other well-known libraries instead of investing their time in Node.js itself and its standard APIs. That seems like the wrong choice to me, as understanding the Node.js runtime and knowing the specifics of its built-in APIs helps to avoid many common mistakes.

This post is an intro to Node.js in a compact, yet (hopefully) comprehensive manner. We’re going to take a general overview of the Node.js architecture. As a result, we’ll derive some guidelines for writing higher-performance, more secure server-side web applications with Node.js. It should be helpful for Node.js beginners, as well as for experienced JS developers.

Main Building Blocks

Any Node.js application is built on top of the following components:

  • V8 — Google’s open-source, high-performance JavaScript engine, written in C++. It is also used in the Google Chrome browser and other projects. Node.js controls V8 via the V8 C++ API.
  • libuv — a multi-platform support library with a focus on asynchronous I/O, written in C. It was primarily developed for use by Node.js, but it’s also used by Luvit, Julia, pyuv, and others. Node.js uses libuv to abstract non-blocking I/O operations into a unified interface across all supported platforms. The library provides mechanisms to handle the file system, DNS, networking, child processes, pipes, signal handling, polling and streaming. It also includes a thread pool, also known as the Worker Pool, for offloading work that cannot be done asynchronously at the OS level.
  • Other open-source, low-level components, mostly written in C/C++:
    • c-ares — a C library for asynchronous DNS requests, used for some DNS requests in Node.js.
    • http-parser — a lightweight HTTP request/response parser library.
    • OpenSSL — a well-known general-purpose cryptography library. Used in the tls and crypto modules.
    • zlib — a lossless data-compression library. Used in the zlib module.
  • The application — your application’s code and the standard Node.js modules, written in JavaScript.
  • C/C++ bindings — wrappers around C/C++ libraries, built with N-API, a C API for building native Node.js addons, or with other APIs for bindings.
  • Some bundled tools that are used in the Node.js infrastructure:
    • npm — a well-known package manager (and ecosystem).
    • gyp — a Python-based project generator copied from V8. Used by node-gyp, a cross-platform command-line tool written in Node.js for compiling native addon modules.
    • gtest — Google’s C++ test framework. Used for testing native code.
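Several of these bundled components can be observed at runtime via process.versions, which reports the version of each dependency your Node.js binary was built with (the exact set of keys depends on your Node.js version):

// prints the versions of Node.js and its bundled dependencies,
// e.g. keys like node, v8, uv, zlib, ares and openssl
console.log(process.versions)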

Here is a simple diagram that shows the main Node.js components mentioned in the list:

Main Node.js components

Node.js Runtime

Here is a diagram that shows how Node.js runtime executes your JS code:

Node.js runtime diagram (simplified)

This diagram does not show every detail of what happens inside Node.js, but it highlights the most important parts. We’re going to discuss them briefly.

Once your Node.js application starts, it first completes an initialization phase, i.e. runs the start script, which includes requiring modules and registering callbacks for events. Then the application enters the Event Loop (aka the main thread, event thread, etc.), which, conceptually, is built for responding to incoming client requests by executing the appropriate JS callbacks. JS callbacks are executed synchronously and may use Node APIs to register asynchronous requests so that processing continues after the callback completes. The callbacks for these asynchronous requests will also be executed on the Event Loop. Examples of such Node APIs include various timers (setTimeout(), setInterval(), etc.), functions from the fs and http modules and many more. All of these APIs expect a callback that will be triggered once the operation has finished.
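As a minimal sketch of this flow, the synchronous top-level code below runs during the initialization phase, while the registered callbacks run later, on the Event Loop:

const fs = require('fs')

console.log('initialization: start script runs synchronously')

// registers an asynchronous request; the callback runs later on the Event Loop
fs.readFile(__filename, 'utf8', (err, contents) => {
  if (err) throw err
  console.log('Event Loop: file read callback')
})

// registers a timer; its callback is also queued for a later Event Loop iteration
setTimeout(() => console.log('Event Loop: timer callback'), 0)

console.log('initialization: done, entering the Event Loop')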

The Event Loop is a single-threaded and semi-infinite loop based on libuv. It’s called a semi-infinite loop because it quits at some point when there is no more work left to be done. From the developer’s perspective, that’s the point when your program exits.
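For example, the following program stays alive while a timer handle is pending and exits as soon as no work is left:

// keeps the process alive: the Event Loop still has a pending timer
const timer = setInterval(() => console.log('tick'), 1000)

// after 3.5 seconds there is no more work left, so the loop quits and the program exits
setTimeout(() => clearInterval(timer), 3500)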

The Event Loop is pretty complex. It involves manipulating several event queues and consists of several phases:

  • Timers phase — this phase executes callbacks scheduled by setTimeout() and setInterval().
  • Pending callbacks phase — executes I/O callbacks deferred to the next loop iteration.
  • Idle and prepare phases — internal phases.
  • Poll phase — includes the following: retrieve new I/O events; execute I/O-related callbacks (almost all of them, with the exception of close callbacks, timer callbacks and setImmediate() callbacks); Node.js will block here when appropriate.
  • Check phase — setImmediate() callbacks are invoked here.
  • Close callbacks phase — some close callbacks are executed here, e.g. socket.on('close', ...).

Note. Check the official Node.js guide on the Event Loop to learn more about its phases.
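For instance, the difference between the timers and check phases can be observed with the snippet below: inside an I/O callback, setImmediate() always fires before a zero-delay setTimeout(), because the check phase follows the poll phase within the same loop iteration, while the timer has to wait for the next timers phase.

const fs = require('fs')

fs.readFile(__filename, () => {
  // both are scheduled from within an I/O (poll phase) callback
  setTimeout(() => console.log('timers phase: setTimeout'), 0)
  setImmediate(() => console.log('check phase: setImmediate'))
})

// prints:
//   check phase: setImmediate
//   timers phase: setTimeout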

During the poll phase, the Event Loop fulfills non-blocking, asynchronous requests (started via Node APIs) by using libuv’s abstractions over OS-specific I/O polling mechanisms: epoll on Linux, IOCP on Windows, kqueue on BSD and macOS, event ports on Solaris.

It’s a common myth that Node.js is single-threaded. In essence, it’s true (or it used to be partially true, as there is now experimental support for web workers, called Worker Threads), in that your JS code always runs on a single thread, within the Event Loop. But you may also notice the Worker Pool, a fixed-size thread pool, on the diagram, so any Node.js process has multiple threads running in parallel. The reason for that is the following: not all Node API operations can be executed in a non-blocking fashion on all supported operating systems. Another reason for having the Worker Pool is that the Event Loop is not suited for CPU-intensive computations.

So, Node.js (or libuv, in particular) does its best to keep the same asynchronous, event-driven API for such blocking operations and executes these operations on a separate thread pool. Here are some examples of such blocking operations in the built-in modules:

  • I/O-bound:
    • Some DNS operations in the dns module: dns.lookup(), dns.lookupService() (see the example after this list).
    • Most file system operations provided by the fs module, like fs.readFile().
  • CPU-bound:
    • Some cryptographic operations provided by the crypto module, like crypto.pbkdf2(), crypto.randomBytes() or crypto.randomFill().
    • Data compression operations provided by the zlib module.

Note that some 3rd-party native libraries, like bcrypt, also offload computations to the Worker Pool.
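The DNS case illustrates this split well: according to the Node.js docs, dns.lookup() uses the Worker Pool (it relies on the system resolver), while the dns.resolve*() family uses c-ares and does not occupy a pool thread:

const dns = require('dns')

// uses the Worker Pool (system resolver), so it may queue behind fs or crypto work
dns.lookup('example.org', (err, address) => {
  if (err) throw err
  console.log('lookup:', address)
})

// uses c-ares: a fully asynchronous DNS request that bypasses the Worker Pool
dns.resolve4('example.org', (err, addresses) => {
  if (err) throw err
  console.log('resolve4:', addresses)
})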

Now that you should have a better understanding of the overall Node.js architecture, let’s discuss some guidelines for writing higher-performance, more secure server-side applications.

Rule #1 — Avoid Mixing Sync and Async In Functions

When you write a function, make it either completely synchronous or completely asynchronous. You should avoid mixing these approaches in a single function.

Note. If a function accepts a callback as an argument, that does not mean it’s asynchronous. As an example, think of Array.prototype.forEach(). Such an approach is often called continuation-passing style (CPS).
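A quick way to see that accepting a callback does not imply asynchrony: Array.prototype.forEach() invokes its callback synchronously, before it returns:

const items = ['a', 'b', 'c']

items.forEach((item) => {
  // invoked synchronously, right here, for each element
  console.log('callback for', item)
})

// only runs after all three callbacks have already completed
console.log('forEach returned')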

Let’s consider the following function as an example:

const fs = require('fs')

function checkFile (filename, callback) {
  if (!filename || !filename.trim()) {
    // the pitfall is here: the callback is invoked synchronously
    return callback(new Error('Empty filename provided.'))
  }

  fs.open(filename, 'r', (err, fd) => {
    if (err) return callback(err)

    // the file exists: close the descriptor and report success
    fs.close(fd, () => callback(null, true))
  })
}

This function is quite simple, but it’s fine for our needs. The problem here is the return callback(...) branch, as the callback is invoked synchronously in the case of an invalid argument. On the other hand, in the case of valid input, the callback is invoked in an async fashion, inside the fs.open() callback.

To show the potential issue with this code, let’s try to call it with different inputs:

checkFile('', () => {
  console.log('#1 Internal: invalid input')
})
console.log('#1 External: invalid input')

checkFile('main.js', () => {
  console.log('#2 Internal: existing file')
})
console.log('#2 External: existing file')

This code will output the following to the console:

#1 Internal: invalid input
#1 External: invalid input
#2 External: existing file
#2 Internal: existing file

You may have already noticed the problem here: the order of execution is different in the two cases. This makes the function non-deterministic, so such a style must be avoided. The function can easily be fixed to a completely async style by wrapping the return callback(...) call with setImmediate() or process.nextTick():

if (!filename || !filename.trim()) {
  return setImmediate(
    () => callback(new Error('Empty filename provided.'))
  )
}

Now our function always invokes the callback asynchronously, so its behavior is deterministic.

Rule #2 — Don’t Block the Event Loop

In server-side web applications, e.g. RESTful services, all requests are processed concurrently within the Event Loop’s single thread. So, if the processing of an HTTP request in your application spends a significant amount of time executing a JS function that does a heavy calculation, it blocks the Event Loop for all other requests. As another example, if your application spends 10 milliseconds processing JS code for each HTTP request, the throughput of a single instance of the application will be about 1000 / 10 = 100 requests per second.

Thus, the first golden rule of Node.js is “never block the Event Loop”. Here is a short list of recommendations that will help you to follow this rule:

  • Avoid any heavy JS calculations. If you have code with time complexity worse than O(n), consider optimizing it or at least splitting the calculation into chunks that are scheduled via a timer API, like setTimeout() or setImmediate() (see the sketch after this list). This way you will not block the Event Loop and other callbacks will be able to be processed.
  • Avoid any *Sync calls, like fs.readFileSync() or crypto.pbkdf2Sync(), in server applications. The only exception to this rule might be the startup phase of your application.
  • Choose 3rd-party libraries wisely, as they might block the Event Loop, e.g. by running CPU-intensive computations written in JS.
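As a minimal sketch of the partitioning approach from the first recommendation (the function name, chunk size and workload here are arbitrary, illustrative choices):

function sumLargeArray (array, callback) {
  const CHUNK_SIZE = 10000 // arbitrary chunk size for illustration
  let sum = 0
  let index = 0

  function processChunk () {
    const end = Math.min(index + CHUNK_SIZE, array.length)
    for (; index < end; index++) {
      sum += array[index]
    }

    if (index < array.length) {
      // yield back to the Event Loop before processing the next chunk
      setImmediate(processChunk)
    } else {
      callback(null, sum)
    }
  }

  setImmediate(processChunk)
}

Each chunk runs as a separate callback, so other pending callbacks get a chance to execute between chunks.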

Rule #3 — Block the Worker Pool Wisely

It may be surprising, but the Worker Pool can also be blocked. As you already know, it’s a fixed-size thread pool with a default size of 4 threads. The size may be increased by setting the UV_THREADPOOL_SIZE environment variable, but in many cases that won’t solve your problem.

To illustrate the Worker Pool problem, let’s consider the following example. Your RESTful API has an authentication endpoint which calculates a hash value for the given password and matches it against the value obtained from a database. If you did everything right, the hashing is done on the Worker Pool. Let’s imagine that each computation takes about 100 milliseconds to finish. This means that with the default Worker Pool size you’ll get about 4 * (1000 / 100) = 40 requests per second in terms of the hashing endpoint’s throughput (an important note: we’re considering the case of 4+ CPU cores here). While all threads in the Worker Pool are busy, all incoming tasks, such as hash computations or fs calls, will be queued.
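Here is a minimal sketch of the hashing step from this example, assuming PBKDF2 is used; the asynchronous crypto.pbkdf2() offloads the computation to the Worker Pool, and the parameters below are illustrative rather than a security recommendation:

const crypto = require('crypto')

function hashPassword (password, salt, callback) {
  // runs on the Worker Pool; the Event Loop stays free to serve other requests
  crypto.pbkdf2(password, salt, 100000, 64, 'sha512', (err, derivedKey) => {
    if (err) return callback(err)
    callback(null, derivedKey.toString('hex'))
  })
}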

So the second golden rule of Node.js is “block the Worker Pool wisely”. Here is a short list of recommendations that will help you to follow this rule:

  • Avoid long-running tasks on the Worker Pool. As an example, prefer stream-based APIs over reading a whole file with fs.readFile() (see the sketch after this list).
  • Consider partitioning CPU-intensive tasks if possible.
  • Once again, choose 3rd-party libraries wisely.
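And here is a minimal sketch of the stream-based alternative from the first recommendation; the file path and port are placeholders:

const fs = require('fs')
const http = require('http')

http.createServer((req, res) => {
  // each chunk read is a small, short-lived piece of work, instead of one big
  // fs.readFile() call that holds a Worker Pool thread and buffers the whole file
  fs.createReadStream('./big-file.log')
    .on('error', (err) => {
      res.statusCode = 500
      res.end(err.message)
    })
    .pipe(res)
}).listen(8080)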

Rule #0 — One Rule to Rule Them All

Now, as a summary, we can formulate a rule of thumb for writing high-performance Node.js server-side applications: Node.js is fast if the work done for each request at any given time is small enough. This rule covers both the Event Loop and the Worker Pool.

Further Reading

For further reading, I advise you to check the following:

