This article summarizes the talk A Guide to Debugging Memory Leaks in SSR Environments (Node.js) presented at FEConf 2023. The content will be published in two parts.
In Part 1, we’ll explore what memory leaks are and how to detect them using monitoring tools.
In Part 2, we’ll walk through the process of debugging an actual memory leak and discuss how to resolve it.
All images included in this article are taken from the presentation slides of the same title, so separate attributions are not provided.
You can download the presentation materials from the FEConf 2023 website.
“A Guide to Debugging Memory Leaks in SSR Environments (Node.js)” – Presented at FEConf 2023 by Jihye Park, Frontend Engineer at Toss Place
Hello, I’m Jihye Park, a frontend engineer at Toss Place.
In this article, I’ll introduce how to debug memory leaks that can occur in server-side rendering (SSR) environments built with Node.js.
When you see the title, “A Guide to Debugging Memory Leaks in SSR Environments (Node.js),” which keywords stand out the most to you?
I'd say the key terms here are "Node.js" and "memory leaks". Among these, I’d like to focus on the concept of memory leaks, drawing from my personal experience.
One day, a DevOps engineer on my team came up to me and said, “There’s an OOM (Out Of Memory) issue happening with a particular service. Could you check it out?”
I initially thought I could resolve it quickly by reviewing the code and making some simple adjustments. However, when I opened the code, nothing seemed obviously wrong—yet the memory leak continued.
That’s when I decided to study more deeply and try to fix the issue through thorough debugging.
Through this article, I hope to deliver two key takeaways:
- Confidence that you, too, can debug memory leaks
- Practical know-how on identifying the cause of memory leaks using the browser’s Memory tab, even in complex environments
If you're facing similar issues, I hope this article will be helpful to you—just as I once searched for guidance during my own debugging process.
What Is a Memory Leak, and Why Is It a Problem?
Memory Leak
A memory leak refers to a situation where memory that is no longer needed continues to be occupied.
Let’s use an elevator analogy to better understand memory leaks.
📝 Slide summary: "Memory that should be freed stays in use – like people who never get off an elevator."
Think of an elevator that can hold 10 people. If 4 people get on and never get off, even as others enter and leave, the elevator effectively only has room for 6 more people.
As a result, it often reaches full capacity more quickly and operates inefficiently.
In other words, memory that should have been freed remains in use, reducing the overall efficiency—this is what we call a memory leak.
Why Memory Leaks Are Problematic
So why is an inefficient elevator (i.e., memory leak) such a problem?
JavaScript applications need memory to function properly. When memory runs low, performance takes a hit.
JavaScript handles memory management through a garbage collector (GC). When memory leaks occur, the GC has to work harder, which increases CPU usage.
As the CPU becomes more strained, the event loop can get blocked. This is critical because the event loop is at the heart of how JavaScript operates; if it's delayed, the application slows down significantly.
In severe cases, the server may even crash.
Even if you have a system in place to automatically restart crashed servers, there will still be a moment when the server is down and unable to serve users—this affects availability.
In short, memory leaks can lead to degraded performance and unstable applications that crash frequently.
📝 Slide summary: "Memory leaks reduce available memory, increase GC work, block the event loop, and can crash the server."
How to Detect Memory Leaks
So, how can we detect memory leaks in a Node.js environment?
One of the most common signs is seeing an error like "heap out of memory" printed in the terminal where Node.js is running.
📝 Slide summary: “Node.js process crashes with 'heap out of memory' – a classic sign of a memory leak.”
However, developers aren’t always watching the terminal in real time.
More often, servers are connected to monitoring tools that help observe their behavior over time. These tools typically visualize CPU usage, memory status, and other metrics as graphs.
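Even without a full monitoring stack, Node.js can report the same metrics itself. As a rough sketch (not part of the talk), you could sample `process.memoryUsage()` on an interval and log, chart, or forward the values:

```javascript
// Rough sketch (not from the talk): sample memory usage once per second so it
// can be logged, charted, or forwarded to a monitoring tool.
const toMB = (bytes) => Math.round(bytes / 1024 / 1024);

setInterval(() => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  console.log(
    `rss: ${toMB(rss)}MB, heapTotal: ${toMB(heapTotal)}MB, heapUsed: ${toMB(heapUsed)}MB`
  );
}, 1000);
```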
It's relatively straightforward to integrate monitoring tools in server environments.
However, it's much more challenging on the client-side (like browsers) due to the wide variety of browser types and hardware specifications.
That said, the actual debugging method is the same in both environments, so I’ll explain it using an approach that applies to both.
Detecting Memory Leaks with Monitoring Tools
In this section, we’ll walk through an example using real source code to observe a memory leak via monitoring tools.
We’ll intentionally introduce a memory leak and then debug it.
Pay close attention to this example code—we'll be coming back to it throughout our debugging process.
📝 Slide summary: “Basic Node.js HTTP server code — returning simple HTML for every request.”
The above is a basic Node.js HTTP server that returns an HTML page with a 200 status in response to every request.
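The slide code itself isn’t reproduced here, but a server along those lines might look roughly like this (a sketch; the port and HTML content are assumptions rather than the talk’s exact code):

```javascript
// Minimal sketch of the kind of server described above: every request gets a
// 200 response with a simple HTML page.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<html><body><h1>Hello, FEConf!</h1></body></html>');
});

server.listen(3000, () => {
  console.log('Server listening on http://localhost:3000');
});
```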
Now, let’s compare two versions of this server: one with a memory leak and one without.
📝 Slide summary: “Only one line differs between the two versions — that one line causes a memory leak.”
We used a conditional statement to switch between the two versions.
Other than the highlighted part, the code is identical.
To simulate user traffic, we created a simple shell script that sends repeated requests:
📝 Slide summary: “We created a simple shell script to simulate user traffic by sending repeated requests.”
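The script itself isn’t shown here; the idea is simply to hit the server in a loop. A rough Node.js stand-in (not the original shell script, and assuming the server runs on port 3000) could look like this:

```javascript
// Rough stand-in for the talk's traffic script: keep requesting the local
// server so that the handler runs over and over.
const http = require('http');

function sendRequest() {
  http
    .get('http://localhost:3000', (res) => {
      res.resume(); // discard the response body
      res.on('end', sendRequest); // fire the next request when this one finishes
    })
    .on('error', () => setTimeout(sendRequest, 1000)); // wait and retry if the server is down
}

sendRequest();
```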
Let’s first look at the non-leaking version, which calls nonMemoryLeakFunction.
This function creates a listItems array inside the function scope and fills it with one million items using a loop.
It also prints the amount of heap memory currently in use.
Pay attention to where the listItems array is declared — this will be important.
📝 Slide summary: “Declaring the list inside the function allows it to be garbage-collected after use.”
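Based on the slide description, the non-leaking version is roughly shaped like this (the function name nonMemoryLeakFunction comes from the talk; the body is a sketch):

```javascript
// Sketch of the non-leaking version: listItems lives inside the function scope,
// so it becomes unreachable (and collectable) once the function returns.
function nonMemoryLeakFunction() {
  const listItems = [];
  for (let i = 0; i < 1_000_000; i++) {
    listItems.push(`item-${i}`);
  }

  // Print how much heap memory is currently in use, in MB.
  const heapUsedMB = Math.round(process.memoryUsage().heapUsed / 1024 / 1024);
  console.log(`heap used: ${heapUsedMB}MB`);
}
```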
When this function is executed, memory usage stays stable: each per-second reading hovers around 25MB without significant change.
📝 Slide summary: “Stable memory usage confirms no memory leak in this version.”
If you observe this in a monitoring tool, you’ll likely see a flat memory usage graph.
It might briefly dip during a deployment, but overall, the memory usage remains consistent.
If your monitoring tool shows a graph like this, it's a good indication that your service is free of memory leaks.
📝 Slide summary: “Flat graph = good. Indicates stable memory without leaks.”
Now let’s examine the leaking version.
This time, listItems is declared outside the function — as a global variable.
Yes, this is an intentional memory leak.
The same one-million-item loop is used, and memory usage is printed again.
📝 Slide summary: “Declaring the list outside the function keeps data alive — causes memory leak.”
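Again as a sketch, the leaking version differs only in where listItems is declared (the function name here is hypothetical; only nonMemoryLeakFunction is named in the talk):

```javascript
// Sketch of the leaking version: listItems is declared at module scope
// (effectively global), so every call keeps appending to the same array and
// the garbage collector can never free it.
const listItems = [];

function memoryLeakFunction() {
  for (let i = 0; i < 1_000_000; i++) {
    listItems.push(`item-${i}`);
  }

  // Print how much heap memory is currently in use, in MB.
  const heapUsedMB = Math.round(process.memoryUsage().heapUsed / 1024 / 1024);
  console.log(`heap used: ${heapUsedMB}MB`);
}
```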
When you run this version, memory usage starts at 33MB and climbs to 193MB.
Eventually, the process crashes with a heap out of memory error.
📝 Slide summary: “Memory grows over time until process crashes – a textbook memory leak.”
What does this look like in a monitoring tool?
You’ll see a steadily increasing line — an upward slope.
When the server crashes and restarts, the graph will drop sharply, only to begin rising again as the memory leak continues.
This recurring sawtooth pattern is a dead giveaway that you have a memory leak.
📝 Slide summary: “‘Mountain-shaped’ memory graph = sign of recurring memory leak and server restarts.”
In the next article, we'll take a hands-on approach to debug this memory leak and explore effective ways to resolve it.