Caio Borghi

Under Pressure: Benchmarking Node.js on a Single-Core EC2

Hi!

In this post, I'm going to stress test a pure Node.js 21.2.0 API (no framework!) to see how efficiently the event loop performs in a constrained environment.

I'm using AWS for hosting the servers (EC2) and database (RDS with Postgres).

The main goal is to understand how many requests per second a simple Node API can handle on a single core, then identify the bottleneck and optimize it as much as possible.

Let's dive in!

Infrastructure

  • AWS RDS running Postgres
  • EC2 t2.small for the API
  • EC2 t3.micro for the load tester

Database Setup

The database will consist of a single users table created with the following SQL query:

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    password VARCHAR(255) NOT NULL
);
TRUNCATE TABLE users;

API Design

The API will have a single POST endpoint used to save a user to the Postgres database. I know there are plenty of JavaScript frameworks out there that would make development easier, but plain Node can handle the requests and responses just fine.

To connect to the database, I chose the pg library; it's the most popular option, so we'll start with it.

Connection Pooling

One thing that is important when connecting to a database is using a connection pool. Without a connection pool, the API needs to open/close a connection to the database at each request, which is extremely inefficient.

A pool allows the API to reuse connections, and since we're planning to send a lot of concurrent requests to our API, it's crucial to have one.
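To make that cost concrete, this is roughly what every request would have to do without a pool (a sketch using pg's Client class with placeholder connection details, not code we'll actually ship):

import pg from "pg";
const { Client } = pg;

// No pool: every request opens its own connection and tears it down again.
async function insertUserNoPool(email, password) {
  const client = new Client({ host: "localhost", user: "postgres", database: "test" });
  await client.connect(); // full TCP + authentication handshake, per request
  const { rows } = await client.query(
    "INSERT INTO users(email, password) VALUES($1, $2) RETURNING id",
    [email, password]
  );
  await client.end(); // connection thrown away right after
  return rows[0].id;
}

With a pool, the connect/end pair disappears from the hot path; we'll set one up in a moment.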

To check your Postgres database's connection limit, run:

SHOW max_connections;

In my case, the RDS database runs on a t3.micro instance with these specs:

AWS RDS configs

So this is the outcome of the query:
Max Connections

Cool, with 81 as the maximum number of connections to our database, we know the upper bound we should not exceed.

Since the API will run on a single-core processor, it's not a good idea to keep a large connection pool, as that would force the processor into a lot of context switching.

Let's start with 40.

Creating the API

We'll bootstrap the project with npm init and create an index.mjs file. The .mjs extension lets us use ECMAScript module syntax without any extra configuration or transpilation.

The first thing I'll do is add the pg library with npm add pg. I'm using npm, but you can use pnpm, yarn, or any other Node package manager you prefer.

Then, let's start by creating our connection pool:

import pg from "pg"; // Required because pg lib uses CommonJS 🤢
const { Pool } = pg;

const pool = new Pool({
  host: process.env.POSTGRES_HOST,
  user: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  port: 5432,
  database: process.env.POSTGRES_DATABASE,
  max: 40, // Limit is 81, let's start with 40
  idleTimeoutMillis: 0, // How long an idle client may sit in the pool before being closed (0 = never)
  connectionTimeoutMillis: 0, // How long to wait when acquiring a new connection before timing out (0 = wait forever)
  ssl: false
  /* If you're running on AWS, you'll need to use:
  ssl: {
    rejectUnauthorized: false
  }
  */
});
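One optional hardening step, not part of the config above: pg's Pool emits an error event when an idle client fails (for example if the database restarts), and an unhandled error event would crash the process. A minimal sketch:

// Without this listener, an error on an idle client would crash the Node process.
pool.on("error", (err) => {
  console.error("Unexpected error on idle Postgres client", err);
});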

We're using process.env to read the environment variables, so create a .env file at the project root and fill it with your Postgres connection info:

POSTGRES_HOST=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_DATABASE=

Then, let's create a function to persist our user on the database.

const createUser = async (email, password) => {
  const queryText =
    "INSERT INTO users(email, password) VALUES($1, $2) RETURNING id";
  const { rows } = await pool.query(queryText, [email, password]);
  return rows[0].id;
};
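If you want to sanity-check createUser before wiring up the HTTP server, a quick throwaway snippet (my addition, assuming it's temporarily pasted into the same index.mjs) would be:

// Temporary sanity check: insert one row, print its id, then release the pool.
const id = await createUser("test@example.com", "secret");
console.log(`Created user with id ${id}`);
await pool.end(); // close all pooled connections so the script can exit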

Finally, let's create a Node HTTP server by importing the node:http module and writing the code that handles incoming requests: read the body, parse it from string to JSON, query the database, and return 201 on success, 400 for malformed JSON, or 500 for any other error. The final file looks like this:

// index.mjs
import http from "node:http";
import pg from "pg";
const { Pool } = pg;

const pool = new Pool({
  host: process.env.POSTGRES_HOST,
  user: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  port: 5432,
  database: process.env.POSTGRES_DATABASE,
  max: 40,
  idleTimeoutMillis: 0,
  connectionTimeoutMillis: 2000,
  ssl: false
  /* If you're running on AWS, you'll need to use:
  ssl: {
    rejectUnauthorized: false
  }
  */
});

const createUser = async (email, password) => {
  const queryText =
    "INSERT INTO users(email, password) VALUES($1, $2) RETURNING id";
  const { rows } = await pool.query(queryText, [email, password]);
  return rows[0].id;
};

const getRequestBody = (req) =>
  new Promise((resolve, reject) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk.toString()));
    req.on("end", () => resolve(body));
    req.on("error", (err) => reject(err));
  });

const sendResponse = (res, statusCode, headers, body) => {
  headers["Content-Length"] = Buffer.byteLength(body).toString();
  res.writeHead(statusCode, headers);
  res.end(body);
};

const server = http.createServer(async (req, res) => {
  const headers = {
    "Content-Type": "application/json",
    Connection: "keep-alive", // Default to keep-alive for persistent connections
    "Cache-Control": "no-store", // No caching for user creation
  };
  if (req.method === "POST" && req.url === "/user") {
    try {
      const body = await getRequestBody(req);
      const { email, password } = JSON.parse(body);
      const userId = await createUser(email, password);

      headers["Location"] = `/user/${userId}`;
      const responseBody = JSON.stringify({ message: "User created" });
      sendResponse(res, 201, headers, responseBody);
    } catch (error) {
      headers["Connection"] = "close";
      const responseBody = JSON.stringify({ error: error.message });
      console.error(error);
      const statusCode = error instanceof SyntaxError ? 400 : 500;
      sendResponse(res, statusCode, headers, responseBody);
    }
  } else {
    headers["Content-Type"] = "text/plain";
    sendResponse(res, 404, headers, "Not Found!");
  }
});

const PORT = process.env.PORT || 3000;

server.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});
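One thing the file above doesn't do is shut down cleanly. A minimal sketch of what that could look like (this SIGINT handler is my addition, not something used in the benchmark):

// Optional: on Ctrl+C, stop accepting new connections and drain the pool.
process.on("SIGINT", () => {
  server.close(async () => {
    await pool.end(); // close all pooled Postgres connections
    process.exit(0);
  });
});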

Now, after running npm install, you can run

node --env-file=.env index.mjs

This starts the application; you should see this in your terminal:

Server is running

Congrats, we have built a simple Node API with one endpoint that connects to Postgres through a connection pool and inserts a new user into the users table.

Deploying the API to an EC2

First, create an AWS account and go to EC2 > Instances > Launch an Instance.

Then, create an Ubuntu 64-bit (x86) t2.micro instance, allow SSH traffic and allow HTTP traffic from the Internet.

Your summary should look like this:

AWS EC2 t2.micro Summary

You'll need to create a key pair and download its .pem file to be able to SSH into the instance. I won't cover that here; there are already plenty of tutorials on launching and connecting to an EC2 instance, so find one!

Allowing TCP connections on port 3000

After creation, we need to allow TCP traffic on port 3000. This is done in the Security Group config (EC2 > Security Groups > Your Security Group).

Security Group page

On this page, click "Edit inbound rules", then "Add rule", and fill in the form as shown in the image. This will allow us to reach port 3000 of our instance.

Inbound Rule

Your final Inbound Rules table should look something like this.

Inbound Rules

Connecting to EC2

Save the .pem file to a folder, then open the EC2 instance page and copy its public IPv4 address. From that same folder, run:

Public IPV4 address

ssh -i <path-to-pem> ubuntu@<public-ipv4-address>

If you see this EC2 welcome page, then you're in 🎉

EC2 Welcome page

Installing Node

Let's follow the Node documentation for Debian/Ubuntu-based Linux distros.

Run:

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg

Then:

NODE_MAJOR=21
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list

Important: Double-check that NODE_MAJOR is 21, as we want to use the latest version of Node <3

sudo apt-get update
sudo apt-get install nodejs -y
node -v

And this is what you should see (the exact version may differ as this post ages):

Node installed

Nice, now we have a fresh Ubuntu server with Node installed. Next, we need to transfer our API code to it and start it.

Deploying API to EC2

We'll use a tool called scp, which copies files over an SSH connection from your local machine to a target location, in our case the EC2 instance we just created.

Steps:

  • Delete the node_modules folder from the project.
  • Go to the parent folder of the root folder of the application.

In my case, the name of the folder is node-api (I know, very creative!)

Now, run:

scp -i <path-to-pem> -r ./node-api ubuntu@<public-ipv4-address>:/home/ubuntu

This transfers the node-api folder to /home/ubuntu/node-api on our EC2 instance.

You should see something similar to this:
Files transferred

Running the API on EC2

Head back to the EC2 server using ssh and run

cd node-api
npm install
NODE_ENV=production node --env-file=.env index.mjs

And boom, the API is running on AWS.

Let's double-check that it's working by making a POST request with an email and password to our API's IP on port 3000.

You can use curl (in another terminal) to do this:

curl -X POST -H "Content-Type: application/json" -d '{"email": "user@example.com", "password": "password"}' http://<public-ipv4-address>:3000/user

The result should look like this:
User Created
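If you'd rather stay in Node, the same smoke test can be done with the built-in fetch (available since Node 18); replace the placeholder address with your instance's public IP:

// check.mjs - send one test user to the API and print the response.
const res = await fetch("http://<public-ipv4-address>:3000/user", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ email: "user@example.com", password: "password" }),
});
console.log(res.status, await res.json());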

I'm using TablePlus to connect to the RDS Postgres database, but you could use any Postgres client.

To ensure that the API is persisting data to the database, let's run this query:

 SELECT COUNT(id) FROM users;

Returned

It should return 1.

Nice, it's working!

Stress Test

Now that we have our API working, we need to be able to test how many concurrent requests it can handle with a single core.

There are tons of tools for this; I'll use Vegeta.

You can run the following steps from your local machine, but keep in mind that your network may become the bottleneck, as the stress test sends a lot of packets at the same time.

I'll use another EC2 instance (a more powerful one, t2.xlarge) running Ubuntu.

Configuring Vegeta

Follow the docs to install Vegeta on your OS.

Then, create a new folder for the load tester at the root of the project, so it looks like this:

node_benchmark/
  node-api/
  load-tester/
    vegeta/

Go to the vegeta folder and create a start.sh script with the following content:

#!/bin/bash
if [[ $# -ne 1 ]]; then
    echo 'Wrong arguments, expecting only one (reqs/s)'
    exit 1
fi

TARGET_FILE="targets.txt"
DURATION="30s"  # Duration of the test, e.g., 60s for 60 seconds
RATE=$1    # Number of requests per second
RESULTS_FILE="results_$RATE.bin"
REPORT_FILE="report_$RATE.txt"
ENDPOINT="http://<ipv4-public-address>:3000/user"

# Check if Vegeta is installed
if ! command -v vegeta &> /dev/null
then
    echo "Vegeta could not be found, please install it."
    exit 1
fi

# Create target file with unique email and password for each request
echo "Generating target file for Vegeta..."
> "$TARGET_FILE"  # Clear the file if it already exists

# Assuming body.json exists and contains the correct JSON structure for the POST request
for i in $(seq 1 $RATE); do 
    echo "POST $ENDPOINT" >> "$TARGET_FILE"
    echo "Content-Type: application/json" >> "$TARGET_FILE"
    echo "@body.json" >> "$TARGET_FILE"
    echo "" >> "$TARGET_FILE"
done

echo "Starting Vegeta attack for $DURATION at $RATE requests per second..."
# Run the attack and save the results to a binary file
vegeta attack -rate=$RATE -duration=$DURATION -targets="$TARGET_FILE" > "$RESULTS_FILE"

echo "Load test finished, generating reports..."
# Generate a textual report from the binary results file
vegeta report -type=text "$RESULTS_FILE" > "$REPORT_FILE"
echo "Textual report generated: $REPORT_FILE"

# Generate a JSON report for further analysis
JSON_REPORT="report.json"
vegeta report -type=json "$RESULTS_FILE" > "$JSON_REPORT"
echo "JSON report generated: $JSON_REPORT"

cat $REPORT_FILE

IMPORTANT: Replace <ipv4-public-address> with the IP of your EC2 Node API Server

Now, create a body.json file:

{
  "email": "A1391FDC-2B51-4D96-ADA4-5EEE649A4A75@example.com",
  "password": "password"
}
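Note that the email above is fixed, so every row inserted during a test will share it. If you want each run to use a fresh address, a small helper script (a hypothetical generate-body.mjs of mine, not part of the original setup) can rewrite body.json before each test:

// generate-body.mjs - regenerate body.json with a unique email for this run.
import { randomUUID } from "node:crypto";
import { writeFileSync } from "node:fs";

const body = { email: `${randomUUID()}@example.com`, password: "password" };
writeFileSync("body.json", JSON.stringify(body, null, 2));
console.log("body.json updated with", body.email);

Run it with node generate-body.mjs before calling ./start.sh.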

Now you're ready to start load testing the API.

This script will:

  • Run for 30s
  • Hit the API with concurrent requests/s defined by the first argument of the script
  • Generate a textual report and a .json file with information about the test.

Last but not least, we need to make start.sh executable, which we can do by running:

chmod +x start.sh

Before running each test, I'll clear the users table on Postgres with the following query.

TRUNCATE TABLE users;

This will help us see how many users were created!

1,000 Reqs/s

Alright, let's get to the interesting part: can our single-core, 1GB server handle 1,000 requests per second?

Run

./start.sh 1000

Wait for it to complete; in my case it generated the following output:

1.000 reqs/s

Let me break it down for you:

At a rate of 1,000 requests per second, the Node API successfully processed all of them, returning the expected 201 success status.

On average, each request took 4.254 ms, with 99% of them completing in under 25.959 ms.

Metric                  Value
Requests per Second     1000.04
Success Rate            100%
p99 Response Time       25.959 ms
Average Response Time   4.254 ms
Slowest Response Time   131.889 ms
Fastest Response Time   2.126 ms
Status Code 201         30000

Let's check our database:
Database Count

Cool, it worked!

Let's try harder and double the number of requests per second.

2,000 Requests per second

Run

./start.sh 2000

Let's check the output

2.000 reqs/s

Awesome, it can handle 2,000 requests/second and still keep a 100% success rate.

Metric                  Value
Requests per Second     2000.07
Success Rate            100.00%
p99 Response Time       2.062 s
Average Response Time   136.347 ms
Slowest Response Time   4.067 s
Fastest Response Time   2.164 ms
Status Code 201         60000

A couple of things to notice here: while the success rate was still 100%, the p99 jumped from 25.959 ms to 2.062 s (about 79x slower than the previous test).

The average response time also jumped from 4.254 ms to 136.347 ms (about 32x slower).

So yeah, doubling the number of requests per second makes our server suffer A LOT.

Let's try harder and see what happens.

3,000 Requests per second

./start.sh 3000

3.000 reqs/s output

At 3,000 requests/second, our Node.js API started to struggle, successfully processing only 52.20% of requests. Let's see what happened.

Metric                  Value
Requests per Second     2267.72
Success Rate            52.20%
p99 Response Time       30.001 s
Average Response Time   6.146 s
Slowest Response Time   30.156 s
Fastest Response Time   3.018 ms
Status Code 201         36089
Status Code 500         21588
Status Code 0           11465

Database

For 21,588 requests, our API returned status code 500. Let's check the API logs:

API logs

We can see that the Postgres connections are hitting the timeout. The current connectionTimeoutMillis is set to 2000 (2 s), so let's try increasing it to 30000 and see if that improves the load test.

We can do that by changing line 13 of index.mjs from 2000 to 30000:

connectionTimeoutMillis: 30000

Let's run it again:

./start.sh 3000

And the result?

97.39% success

Metric                  Value
Requests per Second     2959.90
Success Rate            97.39%
p99 Response Time       13.375 s
Average Response Time   6.901 s
Slowest Response Time   30.001 s
Fastest Response Time   3.476 ms
Status Code 201         86486
Status Code 0           2318

Nice, by simply increasing the database connection timeout we improved the success rate by 45.19 percentage points, and all the 500 errors are completely gone!

Let's take a look at the remaining errors (status code 0).
bind address already in use

Status code 0 usually means the client never received a response at all; the connection was reset or timed out because the server couldn't keep up.

Let's check if it's CPU, Memory or Network.

At the peak of the test, CPU usage is only around 13%, so it's not the CPU.

CPU

Running it again while watching htop, I noticed memory usage was only about 70%, so that's not the problem either:

Memory

Let's try something different.

File Descriptors

On Unix systems, each new connection (socket) is assigned a file descriptor. By default on Ubuntu, the maximum number of open file descriptors per process is 1024.

You can check that by running ulimit -n.

limit api

Let's try increasing that to 2000 and redo the test to see if we can get rid of these 2% timeout errors.

To do so, I'll follow this tutorial and change to 6000

sudo vi /etc/security/limits.conf

new limits for nofile

nofile = number of files.
soft = soft limit.
hard = hard limit.

Then reboot the EC2 with sudo reboot now.

After logging in, we can see that the limit changed:

New ulimit is 2000
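If you want to confirm the limit the Node process itself actually gets (instead of relying on the shell's ulimit), here is a small Linux-only sketch of mine that reads /proc/self/limits from inside Node:

// limits.mjs - print this process's "Max open files" limit (Linux only).
import { readFileSync } from "node:fs";

const limits = readFileSync("/proc/self/limits", "utf8");
const openFiles = limits.split("\n").find((line) => line.startsWith("Max open files"));
console.log(openFiles);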

And let's redo the test:

Start the API with

NODE_ENV=production node --env-file=.env index.mjs

And start the load-tester with:

./start.sh 3000

Let's check the results:

Results with 2000 open files

Surprisingly, the results are worse!

With a maximum of 2,000 open files, the Node API successfully answered only 78.43% of the requests.

This is because, with only one core, allowing more open sockets makes the processor switch between them more often than in the previous run (more context switching).

Let's try reducing it to 700 to see if it gets better.

(I'll skip the steps since they're the same as before.)

And here's the new output with 700 as the maximum number of open files.

700

With 700 maximum open files, we hit an 83.20% success rate. Let's go back to 1024 and try reducing the connection pool to 20 instead of 40.
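As with the timeout change earlier, this is a one-line edit in the Pool config of index.mjs:

max: 20, // was 40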

If that doesn't work, we'll assume 3,000 req/s is slightly above the limit and try to find the maximum number of requests/s that a single-core Node API can handle with 100% success.

93%

With 20 connections in the pool, the API was able to process 93.06% of requests, which suggests we probably don't need 40.

Let's try 2,600 reqs/s:

2,600 reqs/s

./start.sh 2600

And that's the result:
2600 100% success

Metric                  Value
Requests per Second     2600.04
Success Rate            100%
p99 Response Time       8.171 s
Average Response Time   4.573 s
Slowest Response Time   9.234 s
Fastest Response Time   5.244 ms
Status Code 201         77999

That's a wrap!

Conclusion

This experiment demonstrates the capabilities of a pure Node.js API on a single-core server.

With a pure Node.js 21.2.0 API running on a single core with 1GB of RAM and a connection pool capped at 20 connections, we were able to handle 2,600 requests/s without failures.

By fine-tuning parameters like connection pool size and file descriptor limits, we can significantly impact performance.

What's the highest load your Node.js server has handled? Share your experiences!

Top comments (6)

Carlos Martin

Love this article! Thanks for sharing. I have only one question: at which point does the socket open a new file? You have the Node.js process running, a new request comes in and is processed by that same process, and the process doesn't rely on opening a new file for each request, so I don't really understand that part. Can you explain it, please?

Caio Borghi

Yes, sure. In Linux, whenever a new connection arrives, the OS opens a socket and creates a new file descriptor to store the data related to that connection.

That means every new connection results in a new file descriptor.

I created this video that shows the count of files opened by a process (in this case, Node API process) where we can see that, as soon as requests start arriving, new files are created.

https://www.veed.io/view/1b948c4e-d49b-4461-9256-58cefe52fa71?sharingWidget=true&panel=share

The number of files opened by a process can be seen using the following command:

ls /proc/$PROCESS_ID/fd | wc -l

And this is my monitor.mjs file:

import { exec } from "child_process";

const getOpenFileDescriptors = (pid) => {
  const command = `ls /proc/${pid}/fd | wc -l`;

  exec(command, (error, stdout, stderr) => {
    if (error) {
      console.error(`exec error: ${error}`);
      return;
    }
    if (stderr) {
      console.error(`stderr: ${stderr}`);
      return;
    }
    console.log(`Number of open file descriptors: ${stdout.trim()}`);
  });
};

const pid = process.argv[2];

if (!pid) {
  console.error("Please provide a PID.");
  process.exit(1);
}

const interval = setInterval(() => getOpenFileDescriptors(pid), 500);

process.on("SIGINT", () => {
  console.log("End");
  clearInterval(interval);
  process.exit();
});
Carlos Martin

Thanks for your answer!

Dang Van Thuan

I am grateful for your sharing. Thank you

Techy guru

thank you so much

Ian B

Impressive how much it can handle being thrown at it!

Loved the article, will try this myself during the christmas break. Thanks again