<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ram</title>
    <description>The latest articles on DEV Community by Ram (@ram_lakshmanan_001).</description>
    <link>https://dev.to/ram_lakshmanan_001</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1149888%2Ff07cbac5-237e-4fd2-8dff-fcb10266da58.png</url>
      <title>DEV Community: Ram</title>
      <link>https://dev.to/ram_lakshmanan_001</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ram_lakshmanan_001"/>
    <language>en</language>
    <item>
      <title>How to Read Thread Dumps – easily &amp; efficiently</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Thu, 05 Dec 2024 07:05:19 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/how-to-read-thread-dumps-easily-efficiently-9b0</link>
      <guid>https://dev.to/ram_lakshmanan_001/how-to-read-thread-dumps-easily-efficiently-9b0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmt7h93minqjujcddh7s9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmt7h93minqjujcddh7s9.JPG" alt="Image description" width="786" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thread dumps are vital artifacts for troubleshooting performance problems in production applications. When an application experiences issues like slow response times, hangs, or CPU spikes, thread dumps provide a snapshot of all active threads, including their states and stack traces, helping you pinpoint the root cause. While tools like &lt;a href="https://fastthread.io/" rel="noopener noreferrer"&gt;fastThread&lt;/a&gt; can automate thread dump analysis, you may still need to analyze them manually for a better understanding. This post outlines key patterns to look at when analyzing thread dumps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Capture a Thread Dump?&lt;/strong&gt;&lt;br&gt;
You can capture a Java thread dump using the jstack tool that ships with the JDK. To do so, run the following command in your terminal to generate a thread dump for a specific process:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jstack -l &amp;lt;process-id&amp;gt; &amp;gt; &amp;lt;output-file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;process-id: The Process ID (PID) of the Java application whose thread dump you want to capture.&lt;/li&gt;
&lt;li&gt;output-file: The file path where the thread dump will be saved.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jstack -l 5678 &amp;gt; /var/logs/threadDump.log
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;There are &lt;a href="https://blog.fastthread.io/how-to-take-thread-dumps-7-options/" rel="noopener noreferrer"&gt;9 different methods to capture thread dumps&lt;/a&gt;. Depending on your security policies and system requirements, you can choose the one that best fits your environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anatomy of a Thread Dump&lt;/strong&gt;&lt;br&gt;
A thread dump contains several important details about each active thread in the JVM. Understanding these details is crucial for diagnosing performance issues. Note that fields 1 and 2 apply to the overall JVM (such as the timestamp and version), while key details about each thread, like fields 3–9 (thread name, priority, Thread ID, Native ID, Address Space, State and stack trace), are repeated for every individual thread. Below is a breakdown of these essential fields:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyboqp0pnxqgp496ms7i.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyyboqp0pnxqgp496ms7i.JPG" alt="Image description" width="673" height="340"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Thread Dump Details&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqap63i3rvjj3th7o50zo.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqap63i3rvjj3th7o50zo.JPG" alt="Image description" width="664" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotbo8mp9zym3gjwzyfwi.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fotbo8mp9zym3gjwzyfwi.JPG" alt="Image description" width="659" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7blcitze1fmzr8491ey8.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7blcitze1fmzr8491ey8.JPG" alt="Image description" width="673" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9 Tips to Read Thread Dumps&lt;/strong&gt;&lt;br&gt;
In this section, let’s review the 9 tips that will help you read and analyze thread dumps effectively:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Threads with Identical Stack Traces&lt;/strong&gt;&lt;br&gt;
Whenever there is a bottleneck in the application, multiple threads get stuck on it, and all of them end up with the same stack trace. If you group threads by identical stack traces and investigate the groups with the highest counts, you can quickly uncover the application’s bottlenecks.&lt;/p&gt;
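&lt;p&gt;The grouping described above can be sketched with the JDK’s Thread.getAllStackTraces() API. This is an illustrative, in-process approximation of what offline thread dump analyzers do (the class name StackTraceGrouper is hypothetical):&lt;/p&gt;

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

public class StackTraceGrouper {

    // Counts how many live threads share each identical stack trace.
    public static Map<String, Integer> groupByStackTrace() {
        Map<String, Integer> counts = new HashMap<>();
        for (StackTraceElement[] trace : Thread.getAllStackTraces().values()) {
            counts.merge(Arrays.toString(trace), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // The most-shared trace is the first place to look for a bottleneck.
        groupByStackTrace().entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                .limit(3)
                .forEach(e -> System.out.println(e.getValue() + " thread(s): " + e.getKey()));
    }
}
```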

&lt;p&gt;Case Study: In a real-world incident at a major financial institution in North America, &lt;a href="https://blog.fastthread.io/troubleshooting-java-ee-application-thread-dump-analysis/" rel="noopener noreferrer"&gt;a slowdown in the backend System of Record (SOR)&lt;/a&gt; caused several threads to share identical stack traces, indicating a bottleneck. By analyzing these threads, engineers pinpointed the issue and quickly resolved the JVM outage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. BLOCKED Threads&lt;/strong&gt;&lt;br&gt;
A thread in the BLOCKED state is stuck and unable to make progress: another thread has acquired a lock it needs and hasn’t released it. When threads remain BLOCKED for a prolonged period, customer transactions slow down. Thus, when examining a thread dump, identify all the BLOCKED threads and find out which threads acquired those locks without releasing them.&lt;/p&gt;

&lt;p&gt;Below is the stack trace of a thread that is BLOCKED by another thread:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dyln040a3hh3hfn38t7.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dyln040a3hh3hfn38t7.JPG" alt="Image description" width="665" height="318"&gt;&lt;/a&gt;&lt;br&gt;
Fig: BLOCKED Thread stack trace&lt;/p&gt;

&lt;p&gt;Notice that ‘Thread-1’ is waiting to acquire the lock ‘0x00000007141e3fe0’, while ‘Thread-2’ has acquired that same lock and hasn’t released it. As a result, ‘Thread-1’ entered the BLOCKED state and couldn’t proceed with execution.&lt;/p&gt;
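&lt;p&gt;You can reproduce this pattern in a few lines. The sketch below (the thread names Thread-1/Thread-2 mirror the figure; the class itself is illustrative) parks one thread on a lock another thread holds, then asks ThreadMXBean who owns the contended lock:&lt;/p&gt;

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class BlockedThreadDemo {
    private static final Object LOCK = new Object();

    public static String findBlockedThread() throws InterruptedException {
        CountDownLatch holding = new CountDownLatch(1);
        // Thread-2 grabs the lock and holds it long enough for Thread-1 to block.
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                holding.countDown();
                try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            }
        }, "Thread-2");
        Thread blocked = new Thread(() -> { synchronized (LOCK) { } }, "Thread-1");
        holder.start();
        holding.await();                                     // Thread-2 now owns the lock
        blocked.start();
        while (blocked.getState() != Thread.State.BLOCKED) { // wait until contention is visible
            Thread.sleep(10);
        }
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        ThreadInfo info = bean.getThreadInfo(blocked.getId());
        String result = info.getThreadName() + " is BLOCKED waiting for lock owned by "
                + info.getLockOwnerName();
        holder.interrupt();                                  // release the lock
        holder.join();
        blocked.join();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(findBlockedThread());
    }
}
```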

&lt;p&gt;Case Study: In a real-world scenario, 50 threads entered the &lt;a href="https://blog.fastthread.io/java-uuid-generation-performance-impact/" rel="noopener noreferrer"&gt;BLOCKED state while calling java.util.UUID#randomUUID()&lt;/a&gt;, leading to application downtime. The threads were stuck because they were all waiting for a shared resource, causing a bottleneck that halted further progress. Resolving the issue involved identifying the root cause of the BLOCKED state and implementing solutions to ensure threads could proceed without being stuck, thereby restoring normal application operation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. CPU-Consuming Threads&lt;/strong&gt;&lt;br&gt;
One of the primary reasons engineers analyze thread dumps is to diagnose CPU spikes, which can severely impact application performance. Threads in the RUNNABLE state are actively executing and using CPU resources, which makes them the typical culprits behind CPU spikes. To identify the root cause of a CPU spike, focus on threads in the RUNNABLE state and study their stack traces to understand what operations they are performing. A stack trace can reveal whether a thread is caught in an infinite loop or executing resource-intensive computations.&lt;/p&gt;

&lt;p&gt;Note: The most precise method for diagnosing CPU spikes is to combine thread dump analysis with live CPU monitoring data. You can achieve this using the top -H -p &amp;lt;PROCESS_ID&amp;gt; command, which shows the CPU usage of each individual thread in a process. This allows you to correlate high-CPU-consuming threads from the live system with their corresponding stack traces in the thread dump, helping you pinpoint the exact lines of code responsible for the CPU spike. Refer to this post for more details on &lt;a href="https://blog.fastthread.io/diagnose-cpu-spike-non-intrusive-approach/" rel="noopener noreferrer"&gt;how to do this CPU diagnosis&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Lengthy Stack trace&lt;/strong&gt;&lt;br&gt;
When analyzing thread dumps, pay close attention to threads with lengthy stack traces. These traces can indicate two potential issues: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deep recursion&lt;/li&gt;
&lt;li&gt;Code consuming excessive CPU cycles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In cases of deep recursion, the stack trace often shows the same method appearing repeatedly, which suggests that a function is being called over and over without reaching a termination condition. This pattern could mean the application is stuck in a recursive loop, which may eventually lead to a StackOverflowError, cause performance degradation, or result in a system crash.&lt;/p&gt;

&lt;p&gt;Lengthy stack traces can also be a sign of parts of the code that are consuming a high number of CPU cycles. Threads with deep call stacks may be involved in resource-intensive operations or complex processing logic, leading to increased CPU usage. Examining these threads can help you identify performance bottlenecks and areas in the code that need optimization.&lt;/p&gt;

&lt;p&gt;In the example below, the stack trace shows repeated invocations of the start() method, indicating a potential infinite recursion scenario. The stack depth continues to increase as the same method is called repeatedly, lacking a proper base case or exit condition:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
at com.buggyapp.stackoverflow.StackOverflowDemo.start(StackOverflowDemo.java:30)
:
:
:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note: For more details visit the blog post &lt;a href="https://blog.fastthread.io/stackoverflowerror/" rel="noopener noreferrer"&gt;‘Diagnosing and Fixing StackOverflowError in Java’&lt;/a&gt;.&lt;/p&gt;
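&lt;p&gt;To see how quickly unbounded recursion like the one above exhausts the stack, you can probe it deliberately. This sketch (the class name RecursionDepthProbe is illustrative) counts frames until the JVM throws StackOverflowError:&lt;/p&gt;

```java
public class RecursionDepthProbe {
    private static int depth = 0;

    // Deliberately unbounded recursion, mirroring the repeated start() frames above.
    private static void start() {
        depth++;
        start();
    }

    public static int probe() {
        depth = 0;
        try {
            start();
        } catch (StackOverflowError expected) {
            // The JVM unwound the stack; 'depth' records how many frames fit.
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println("Recursion overflowed after " + probe() + " frames");
    }
}
```

&lt;p&gt;With default stack sizes the overflow typically occurs after several thousand frames; the -Xss JVM option controls the per-thread stack size and hence the limit.&lt;/p&gt;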

&lt;p&gt;&lt;strong&gt;5. Threads Throwing Exceptions&lt;/strong&gt;&lt;br&gt;
When your application encounters an issue, exceptions are often thrown to signal the problem. Therefore, while analyzing the thread dump, be on the lookout for the following exceptions or errors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;java.lang.Exception – General exceptions that can indicate a wide range of issues.&lt;/li&gt;
&lt;li&gt;java.lang.Error – Severe problems, such as OutOfMemoryError or StackOverflowError, which often signal that the application is in a critical state.&lt;/li&gt;
&lt;li&gt;java.lang.Throwable – The superclass of all exceptions and errors, sometimes used in custom error handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep in mind that many enterprise applications use custom exceptions, such as MyCustomBusinessException, which can provide valuable insight into specific areas of your code. Pay close attention to these, as they can lead you directly to business logic errors.&lt;/p&gt;

&lt;p&gt;These exceptions reveal where the application is struggling, whether it’s due to unexpected conditions, resource limitations, or logic errors. Threads throwing exceptions often point directly to the problematic code paths, making them highly valuable for root cause analysis. Here’s an example of a stack trace from a thread that’s printing the stack trace of an exception:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java.lang.Thread.State: RUNNABLE
 at java.lang.Throwable.getStackTraceElement(Native Method)
 at java.lang.Throwable.getOurStackTrace(Throwable.java:828)
 - locked &amp;lt;0x000000079a929658&amp;gt; (a java.lang.Exception)
 at java.lang.Throwable.getStackTrace(Throwable.java:817)
 at com.buggyapp.message.ServerMessageFactory_Impl.defaultProgramAndCallSequence(ServerMessageFactory_Impl.java:177)
 at com.buggyapp.message.ServerMessageFactory_Impl.privatCreateMessage(ServerMessageFactory_Impl.java:112)
 at com.buggyapp.message.ServerMessageFactory_Impl.createMessage(ServerMessageFactory_Impl.java:93)
 at com.buggyapp.message.ServerMessageFactory_Impl$$EnhancerByCGLIB$$3012b84f.CGLIB$createMessage$1()
 at com.buggyapp.message.ServerMessageFactory_Impl$$EnhancerByCGLIB$$3012b84f$$FastClassByCGLIB$$3c9613f4.invoke()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;6. Compare Thread States across dumps&lt;/strong&gt;&lt;br&gt;
A thread dump provides a snapshot of all the threads running in your application at a specific moment. However, to determine whether a thread is genuinely stuck or just momentarily paused, it’s crucial to capture multiple snapshots at regular intervals. For most business applications, capturing three thread dumps at 10-second intervals is a good practice. This method helps you observe whether a thread remains stuck on the same line of code across multiple snapshots. [Learn more about &lt;a href="https://blog.fastthread.io/best-practices-for-capturing-thread-dumps/" rel="noopener noreferrer"&gt;thread dump capturing best practices&lt;/a&gt; here.]&lt;/p&gt;

&lt;p&gt;By taking three thread dumps at 10-second intervals, you can track changes (or lack thereof) in thread behavior. This comparison helps you determine whether threads are progressing through different states or remain stuck, which could point to performance bottlenecks.&lt;/p&gt;

&lt;p&gt;Why Compare Multiple Thread Dumps?&lt;/p&gt;

&lt;p&gt;Analyzing thread dumps taken over time allows you to detect patterns indicating performance issues and pinpoint their root causes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High-CPU Threads: Threads consistently in the RUNNABLE state across multiple dumps may be consuming excessive CPU resources. This often points to busy-waiting loops, high computational load, or inefficient processing within the application.&lt;/li&gt;
&lt;li&gt;Lock Contention: Threads frequently found in the BLOCKED state could indicate lock contention, where multiple threads are competing for shared resources. In these cases, optimizing lock usage or reducing the granularity of locks may be necessary to improve performance.&lt;/li&gt;
&lt;li&gt;Thread State Transitions: Monitoring threads transitioning between states (e.g., from RUNNABLE to WAITING) can reveal patterns related to resource contention, such as frequent lock acquisitions or I/O waits. These transitions can help identify areas of the application that need tuning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By comparing thread states across multiple dumps, you gain a clearer picture of how your application is behaving under load, allowing for more accurate troubleshooting and performance optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Deadlock&lt;/strong&gt;&lt;br&gt;
A deadlock happens when two or more threads are stuck, each waiting for the other to release a resource they need. As a result, none of the threads can move forward, causing parts of the application to freeze. Deadlocks usually occur when threads acquire locks in an inconsistent order or when improper synchronization is used. Here’s an example of a deadlock scenario captured in a thread dump:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2nso7bphsm84l0xcv5i.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2nso7bphsm84l0xcv5i.JPG" alt="Image description" width="645" height="219"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Deadlock Threads Stack trace&lt;/p&gt;

&lt;p&gt;From the stack trace, you can observe the following deadlock scenario:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Thread-0 has acquired lock 0x00000007ac3b1970 (Lock-1) and is waiting to acquire lock 0x00000007ac3b1980 (Lock-2) to proceed.&lt;/li&gt;
&lt;li&gt;Thread-1 has already acquired lock 0x00000007ac3b1980 (Lock-2) and is waiting for lock 0x00000007ac3b1970 (Lock-1), creating a circular dependency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This deadlock occurs because Thread-1 is attempting to acquire the locks in the reverse order compared to Thread-0, causing both threads to be stuck indefinitely, each waiting for the other to release its lock.&lt;/p&gt;
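&lt;p&gt;The JVM can detect exactly this kind of circular wait at runtime via ThreadMXBean.findDeadlockedThreads(), analogous to the deadlock section jstack prints. Below is a minimal sketch that reproduces the two-lock, opposite-order scenario and detects it (the class name DeadlockDetector is illustrative):&lt;/p&gt;

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDetector {

    // Creates the classic two-lock, opposite-order deadlock and detects it.
    public static boolean createAndDetectDeadlock() throws InterruptedException {
        Object lock1 = new Object();
        Object lock2 = new Object();
        CountDownLatch bothHoldFirstLock = new CountDownLatch(2);
        Thread t0 = new Thread(() -> {
            synchronized (lock1) {
                bothHoldFirstLock.countDown();
                try { bothHoldFirstLock.await(); } catch (InterruptedException e) { return; }
                synchronized (lock2) { }          // waits forever: t1 holds lock2
            }
        }, "Thread-0");
        Thread t1 = new Thread(() -> {
            synchronized (lock2) {
                bothHoldFirstLock.countDown();
                try { bothHoldFirstLock.await(); } catch (InterruptedException e) { return; }
                synchronized (lock1) { }          // waits forever: t0 holds lock1
            }
        }, "Thread-1");
        t0.setDaemon(true);                       // let the JVM exit despite the deadlock
        t1.setDaemon(true);
        t0.start();
        t1.start();
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] deadlocked = null;
        for (int i = 0; i < 200 && deadlocked == null; i++) {
            Thread.sleep(50);
            deadlocked = bean.findDeadlockedThreads();
        }
        return deadlocked != null && deadlocked.length == 2;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Deadlock detected: " + createAndDetectDeadlock());
    }
}
```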

&lt;p&gt;Deadlocks don’t just happen in applications; they can, unfortunately, occur in real-life marriages too. Just like two threads in a program can hold onto resources and wait for the other to release them, partners in a marriage can sometimes get caught in similar situations. Each person might be waiting for the other to make the first move—whether it’s apologizing after an argument, taking responsibility for a task, or initiating an important conversation. When both hold onto their position and wait for the other to act, progress stalls, much like threads in a deadlock, leaving the relationship in a stalemate. This stalemate, if unresolved, can leave both partners stuck in a cycle of frustration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case Study:&lt;/strong&gt; In a real-world incident, an application experienced a deadlock due to a bug in the &lt;a href="https://pdfbox.apache.org/" rel="noopener noreferrer"&gt;Apache PDFBox library&lt;/a&gt;. The problem arose when two threads acquired locks in opposite orders, resulting in a deadlock that caused the application to hang. To learn more about this case and how the deadlock was resolved, check out &lt;a href="https://blog.fastthread.io/troubleshooting-deadlock-in-an-apache-opensource-library/" rel="noopener noreferrer"&gt;Troubleshooting Deadlock in an Apache Open-Source Library&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. GC Threads&lt;/strong&gt;&lt;br&gt;
The number of Garbage Collection &lt;a href="https://blog.gceasy.io/troubleshooting-high-jvm-gc-threads/" rel="noopener noreferrer"&gt;(GC) threads in the JVM is determined by the number of CPUs&lt;/a&gt; available on the machine, unless explicitly configured using the JVM arguments -XX:ParallelGCThreads and -XX:ConcGCThreads. On multi-core systems, this can result in a large number of GC threads being created. While more GC threads can improve parallel processing, having too many can degrade performance due to the overhead of increased context switching and thread management.&lt;/p&gt;
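&lt;p&gt;For example, to pin the GC thread counts explicitly rather than letting the JVM derive them from the CPU count, you could start the application with flags like these (the values and jar name are purely illustrative; tune them for your hardware and workload):&lt;/p&gt;

```shell
# Cap stop-the-world parallel GC workers at 4 and concurrent GC workers at 2
java -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2 -jar myapp.jar
```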

&lt;p&gt;As the saying goes, “Too many cooks spoil the broth,” and the same applies here: too many GC threads can harm JVM performance by leading to frequent pauses and higher CPU usage. It’s important to check the number of GC threads in a thread dump to ensure that they are appropriately tuned for the system and workload.&lt;/p&gt;

&lt;p&gt;How to Identify GC Threads? GC threads can typically be identified in a thread dump by their names, which often include phrases such as ‘GC Thread#’, ‘G1 Young RemSet’, or other GC-related identifiers, depending on the garbage collector in use. Searching for these thread names in a thread dump can help you understand how many GC threads are active and whether adjustments are needed.&lt;/p&gt;

&lt;p&gt;Below is an excerpt from a thread dump showing various GC-related threads:&lt;/p&gt;

&lt;p&gt;"GC Thread#0" os_prio=0 cpu=979.53ms elapsed=236.18s tid=0x00007f9cd4047000 nid=0x13fd5 runnable  &lt;/p&gt;

&lt;p&gt;"GC Thread#1" os_prio=0 cpu=975.08ms elapsed=235.78s tid=0x00007f9ca0001000 nid=0x13ff7 runnable  &lt;/p&gt;

&lt;p&gt;"GC Thread#2" os_prio=0 cpu=973.05ms elapsed=235.78s tid=0x00007f9ca0002800 nid=0x13ff8 runnable  &lt;/p&gt;

&lt;p&gt;"GC Thread#3" os_prio=0 cpu=970.09ms elapsed=235.78s tid=0x00007f9ca0004800 nid=0x13ff9 runnable  &lt;/p&gt;

&lt;p&gt;"G1 Main Marker" os_prio=0 cpu=30.86ms elapsed=236.18s tid=0x00007f9cd407a000 nid=0x13fd6 runnable  &lt;/p&gt;

&lt;p&gt;"G1 Conc#0" os_prio=0 cpu=1689.59ms elapsed=236.18s tid=0x00007f9cd407c000 nid=0x13fd7 runnable  &lt;/p&gt;

&lt;p&gt;"G1 Conc#1" os_prio=0 cpu=1683.66ms elapsed=235.53s tid=0x00007f9cac001000 nid=0x14006 runnable  &lt;/p&gt;

&lt;p&gt;"G1 Refine#0" os_prio=0 cpu=13.05ms elapsed=236.18s tid=0x00007f9cd418f800 nid=0x13fd8 runnable  &lt;/p&gt;

&lt;p&gt;"G1 Refine#1" os_prio=0 cpu=4.62ms elapsed=216.85s tid=0x00007f9ca400e000 nid=0x14474 runnable  &lt;/p&gt;

&lt;p&gt;"G1 Refine#2" os_prio=0 cpu=3.73ms elapsed=216.85s tid=0x00007f9a9c00a800 nid=0x14475 runnable  &lt;/p&gt;

&lt;p&gt;"G1 Refine#3" os_prio=0 cpu=2.83ms elapsed=216.85s tid=0x00007f9aa8002800 nid=0x14476 runnable &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9. Idle Threads in a thread pool&lt;/strong&gt;&lt;br&gt;
In many applications, thread pools may be over-allocated, meaning more threads are created than necessary to handle the workload. This over-allocation often results in many threads being in a WAITING or TIMED_WAITING state, where they consume system resources without doing any useful work. Since threads occupy memory and other resources, excessive idle threads can lead to unnecessary resource consumption, increasing memory usage and even contributing to potential performance issues.&lt;/p&gt;

&lt;p&gt;When analyzing thread dumps, look for threads in WAITING or TIMED_WAITING states within each thread pool. If you notice a high count of such threads, especially compared to the number of active or RUNNABLE threads, it may indicate that the thread pool size is too large for the application’s current load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adjust Thread Pool Sizes Dynamically: Consider implementing dynamic thread pool sizing, where the pool can grow or shrink based on the workload. Using techniques like core and maximum thread pool sizes can help manage resources more efficiently.&lt;/li&gt;
&lt;li&gt;Monitor Thread Usage Regularly: Regularly review thread usage patterns, especially during peak load times, to ensure that the thread pool size aligns with actual needs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Optimizing the number of threads in a pool can help reduce memory consumption, lower CPU context-switching overhead, and improve overall application performance.&lt;/p&gt;
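&lt;p&gt;The core/maximum sizing technique mentioned above can be sketched with the JDK’s ThreadPoolExecutor. Here, allowCoreThreadTimeOut lets even core threads be reclaimed when idle, so the pool shrinks instead of leaving idle WAITING workers in every thread dump (the class name ElasticPool is illustrative):&lt;/p&gt;

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticPool {

    // Builds a pool whose worker threads time out after 30s of idleness,
    // so the thread count tracks the actual workload.
    public static ThreadPoolExecutor build(int core, int max) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // even core threads are reclaimed when idle
        return pool;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = build(4, 16);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> { }); // no-op tasks just to exercise the pool
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("pool terminated: " + pool.isTerminated());
    }
}
```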

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Analyzing thread dumps is an essential skill for diagnosing performance bottlenecks, thread contention, and resource management issues in Java applications. With the insights gained from thread dumps, you are better equipped to optimize your application’s performance and ensure smooth operation, especially in production environments. We hope this post has provided valuable guidance in helping you achieve that.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Effective Methods to Diagnose and Troubleshoot CPU Spikes in Java Applications</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Fri, 08 Nov 2024 10:48:09 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/effective-methods-to-diagnose-and-troubleshoot-cpu-spikes-in-java-applications-6fl</link>
      <guid>https://dev.to/ram_lakshmanan_001/effective-methods-to-diagnose-and-troubleshoot-cpu-spikes-in-java-applications-6fl</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmopp6nszds01vd4uvhk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmopp6nszds01vd4uvhk.JPG" alt="Image description" width="757" height="383"&gt;&lt;/a&gt;&lt;br&gt;
CPU spikes are one of the most common performance challenges faced by Java applications. While traditional APM (Application Performance Management) tools provide high-level insights into overall CPU usage, they often fall short in identifying the root cause of the spike. APM tools usually can’t pinpoint the exact code paths causing the issue. This is where non-intrusive, thread-level analysis proves to be much more effective. In this post, I’ll share a few practical methods to help you diagnose and resolve CPU spikes without making changes in your production environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the difference?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Intrusive Approach: Intrusive approaches involve making changes to the application’s code or configuration, such as enabling detailed profiling, adding extra logging, or attaching performance monitoring agents. These methods can provide in-depth data, but they come with the risk of affecting the application’s performance and may not be suitable for production environments due to the added overhead.&lt;/p&gt;

&lt;p&gt;Non-Intrusive Approach: Non-intrusive approaches, on the other hand, require no modifications to the running application. They rely on gathering external data such as thread dumps, CPU usage, and logs without interfering with the application’s normal operation. These methods are safer for production environments because they avoid any potential performance degradation and allow you to troubleshoot live applications without disruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. top -H + Thread Dump&lt;/strong&gt;&lt;br&gt;
High CPU consumption is always caused by threads that are continuously executing application code. An application tends to have hundreds (sometimes thousands) of threads, so the first step in diagnosis is to identify the CPU-consuming threads among them.&lt;/p&gt;

&lt;p&gt;A simple and effective way to do this is by using the top command. The top command is a utility available on all flavors of Unix systems that provides a real-time view of system resource usage, including CPU consumption by each thread in a specific process. You can issue the following top command to identify which threads are consuming the most CPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;top -H -p &amp;lt;PROCESS_ID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command lists individual threads within a Java process and their respective CPU consumption, as shown in Fig below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j0kc1ge3654omcfzkuo.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j0kc1ge3654omcfzkuo.JPG" alt="Image description" width="636" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: top -H -p &amp;lt;PROCESS_ID&amp;gt; command showing threads and their CPU consumption&lt;/p&gt;

&lt;p&gt;Once you’ve identified the CPU-consuming threads, the next step is to figure out what lines of code those threads are executing. To do this, you need to &lt;a href="https://blog.fastthread.io/how-to-take-thread-dumps-7-options/" rel="noopener noreferrer"&gt;capture a thread dump&lt;/a&gt; from the application, which will show the code execution path of those threads. However, there are a couple of things to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to issue the top -H -p &amp;lt;PROCESS_ID&amp;gt; command and capture the thread dump simultaneously to know the exact lines of code causing the CPU spike. CPU spikes are transient, so capturing both at the same time ensures you can correlate the high CPU usage with the exact code being executed. Any delay between the two can result in missing the root cause.&lt;/li&gt;
&lt;li&gt;The top -H -p &amp;lt;PROCESS_ID&amp;gt; command prints thread IDs in decimal format, but in the thread dump, thread IDs (the nid field) are in hexadecimal format. You’ll need to convert the decimal thread IDs to hexadecimal to look them up in the dump.&lt;/li&gt;
&lt;/ul&gt;
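&lt;p&gt;The decimal-to-hex conversion is a one-liner. The sketch below (the class name ThreadIdConverter and the sample ID are illustrative) maps a thread ID reported by top to the nid format used in thread dumps:&lt;/p&gt;

```java
public class ThreadIdConverter {

    // top -H prints thread IDs in decimal; thread dumps show them as hex 'nid' values.
    public static String toNid(long decimalThreadId) {
        return "0x" + Long.toHexString(decimalThreadId);
    }

    public static void main(String[] args) {
        // A thread shown as ID 81909 in top appears as nid=0x13ff5 in the dump.
        System.out.println(toNid(81909)); // prints 0x13ff5
    }
}
```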

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futz6zimg0ffaor9u6ytk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futz6zimg0ffaor9u6ytk.JPG" alt="Image description" width="659" height="350"&gt;&lt;/a&gt;&lt;br&gt;
Fig: yCrash reporting CPU consumption by each thread and their code execution Path&lt;br&gt;
Disadvantages: This is the most effective and accurate method to troubleshoot CPU spikes. However, in certain environments, especially containerized environments, the top command may not be installed. In such cases, you might want to explore the alternative methods mentioned below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. RUNNABLE State Threads Across Multiple Dumps&lt;/strong&gt;&lt;br&gt;
Java threads can be in several states: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, or TERMINATED. If you are interested, you may learn more about &lt;a href="https://blog.fastthread.io/java-thread-states-explained-video-tutorial/" rel="noopener noreferrer"&gt;different Thread States&lt;/a&gt;. When a thread is actively executing code, it will be in the RUNNABLE state. CPU spikes are always caused by threads in the RUNNABLE state. To effectively diagnose these spikes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Capture 3-5 thread dumps at intervals of 10 seconds.&lt;/li&gt;
&lt;li&gt;Identify threads that remain consistently in the RUNNABLE state across all dumps.&lt;/li&gt;
&lt;li&gt;Analyze the stack traces of these threads to determine what part of the code is consuming CPU.&lt;/li&gt;
&lt;/ol&gt;
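&lt;p&gt;The three steps above can be sketched in-process with the ThreadMXBean API. This is a hedged illustration, not a production recipe: in production you would capture full dumps with a tool like jstack, and you would use the 10-second interval rather than the shortened sleep used here:&lt;/p&gt;

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.HashSet;
import java.util.Set;

public class RunnableAcrossSnapshots {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Set<Long> persistent = null;
        for (int i = 0; i < 3; i++) {                  // step 1: take 3 snapshots
            Set<Long> runnable = new HashSet<>();
            for (ThreadInfo t : mx.dumpAllThreads(false, false)) {
                if (t.getThreadState() == Thread.State.RUNNABLE) {
                    runnable.add(t.getThreadId());
                }
            }
            // step 2: keep only threads that were RUNNABLE in every snapshot so far
            persistent = (persistent == null) ? runnable : persistent;
            persistent.retainAll(runnable);
            Thread.sleep(100);                         // use ~10 seconds in practice
        }
        // step 3: these thread IDs are the candidates whose stack traces to analyze
        System.out.println("Persistently RUNNABLE thread IDs: " + persistent);
    }
}
```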

&lt;p&gt;While this analysis can be done manually, thread dump analysis tools like &lt;a href="https://fastthread.io/" rel="noopener noreferrer"&gt;fastThread&lt;/a&gt; automate the process. fastThread generates a ‘CPU Spike’ section that highlights threads which were persistently in the RUNNABLE state across multiple dumps. However, this method won’t indicate the exact percentage of CPU each thread is consuming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32ghi5jzuahlxyzorl67.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32ghi5jzuahlxyzorl67.JPG" alt="Image description" width="662" height="298"&gt;&lt;/a&gt;&lt;br&gt;
Fig: fastThread tool reporting ‘CPU spike’ section&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages:&lt;/strong&gt; This method will show all threads in the RUNNABLE state, regardless of their actual CPU consumption. For example, threads consuming 80% of CPU and threads consuming only 5% will both appear. It wouldn’t provide the exact CPU consumption of individual threads, so you may have to infer the severity of the spike based on thread behavior and execution patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Analyzing RUNNABLE State Threads from a Single Dump&lt;/strong&gt;&lt;br&gt;
Sometimes, you may only have a single snapshot of a thread dump. In such cases, the approach of comparing multiple dumps can’t be applied. However, you can still attempt to diagnose CPU spikes by focusing on the threads in the RUNNABLE state. One thing to note is that the JVM classifies all threads running native methods as RUNNABLE, but many native methods (like java.net.SocketInputStream.socketRead0()) don’t execute code and instead just wait for I/O operations.&lt;/p&gt;

&lt;p&gt;To avoid being misled by such threads, you’ll need to filter out these false positives and focus on the actual RUNNABLE state threads. This process can be tedious, but fastThread automates it by filtering out these misleading threads in its ‘CPU Consuming Threads’ section, allowing you to focus on the real culprits behind the CPU spike.&lt;/p&gt;
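&lt;p&gt;One way to sketch such a filter is to reject any RUNNABLE thread whose stack contains a known I/O-waiting native frame. The frame names below are just common examples, not an exhaustive list:&lt;/p&gt;

```java
import java.util.Arrays;
import java.util.List;

public class RunnableFilter {
    // Native frames that report RUNNABLE but actually block on I/O (examples only)
    private static final List<String> IO_WAIT_FRAMES = Arrays.asList(
            "java.net.SocketInputStream.socketRead0",
            "java.net.PlainSocketImpl.socketAccept",
            "sun.nio.ch.EPollArrayWrapper.epollWait");

    static boolean isTrulyRunnable(StackTraceElement[] stack) {
        for (StackTraceElement frame : stack) {
            String qualified = frame.getClassName() + "." + frame.getMethodName();
            if (IO_WAIT_FRAMES.contains(qualified)) {
                return false; // false positive: waiting on I/O, not burning CPU
            }
        }
        return true;
    }
}
```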

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n9gkgeun5qeiy2tdvui.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n9gkgeun5qeiy2tdvui.JPG" alt="Image description" width="674" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: fastThread tool reporting ‘CPU Consuming Threads’ section&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages:&lt;/strong&gt; This method has a couple of disadvantages:&lt;/p&gt;

&lt;p&gt;A thread might be temporarily in the RUNNABLE state but may quickly move to WAITING or TIMED_WAITING (i.e., non-CPU-consuming states). In such cases, relying on a single snapshot may lead to misleading conclusions about the thread’s impact on CPU consumption.&lt;br&gt;
Similar to method #2, it will show all threads in the RUNNABLE state, regardless of their actual CPU consumption. For example, threads consuming 80% of CPU and threads consuming only 5% will both appear. It wouldn’t provide the exact CPU consumption of individual threads, so you may have to infer the severity of the spike based on thread behavior and execution patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case Study: Diagnosing CPU Spikes in a Major Trading Application&lt;/strong&gt;&lt;br&gt;
In one case, a major trading application experienced severe CPU spikes, significantly affecting its performance during critical trading hours. By capturing thread dumps and applying the method #1 discussed above, we identified that the root cause was the use of a non-thread-safe data structure. Multiple threads were concurrently accessing and modifying this data structure, leading to excessive CPU consumption. Once the issue was identified, the development team replaced the non-thread-safe data structure with a thread-safe alternative, which eliminated the contention and drastically reduced CPU usage. For more details on this case study, read more here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Diagnosing CPU spikes in Java applications can be challenging, especially when traditional APM tools fall short. By using non-intrusive methods like analyzing thread dumps and focusing on RUNNABLE state threads, you can pinpoint the exact cause of the CPU spike.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Solve OutOfMemoryError: Metaspace</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Thu, 19 Sep 2024 10:35:23 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/how-to-solve-outofmemoryerror-metaspace-4d0j</link>
      <guid>https://dev.to/ram_lakshmanan_001/how-to-solve-outofmemoryerror-metaspace-4d0j</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v8not1k1dd3ckv7tbv9.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v8not1k1dd3ckv7tbv9.JPG" alt="Image description" width="733" height="483"&gt;&lt;/a&gt;&lt;br&gt;
There are &lt;a href="https://blog.heaphero.io/types-of-outofmemoryerror/" rel="noopener noreferrer"&gt;9 types of java.lang.OutOfMemoryError&lt;/a&gt;, each signaling a unique memory-related issue within Java applications. Among these, ‘java.lang.OutOfMemoryError: Metaspace’ is a challenging error to diagnose. In this post, we’ll delve into the root causes behind this error, explore potential solutions, and discuss effective diagnostic methods to troubleshoot this problem. Let’s equip ourselves with the knowledge and tools to conquer this common adversary.&lt;br&gt;
Here’s a video summary of the article:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/J4p_qDSUOLk" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JVM Memory Regions&lt;/strong&gt;&lt;br&gt;
To better understand OutOfMemoryError, we first need to understand the different JVM memory regions. Here is a video clip that gives a good introduction to the different &lt;a href="https://www.youtube.com/watch?v=P3gFfPIN3sw&amp;amp;t=1s" rel="noopener noreferrer"&gt;JVM memory regions&lt;/a&gt;. But in a nutshell, the JVM has the following memory regions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6xbsgzvk8yghy41met8.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6xbsgzvk8yghy41met8.JPG" alt="Image description" width="667" height="148"&gt;&lt;/a&gt;&lt;br&gt;
Fig: JVM Memory Regions&lt;br&gt;
&lt;strong&gt;Young Generation:&lt;/strong&gt; Newly created application objects are stored in this region.&lt;br&gt;
&lt;strong&gt;Old Generation:&lt;/strong&gt; Application objects that live for a longer duration are promoted from the Young Generation to the Old Generation. Basically, this region holds long-lived objects.&lt;br&gt;
&lt;strong&gt;Metaspace:&lt;/strong&gt; Class definitions, method definitions, and other metadata required to execute your program are stored in the Metaspace region. This region was added in Java 8. Before that, metadata definitions were stored in PermGen; since Java 8, PermGen has been replaced by Metaspace.&lt;br&gt;
&lt;strong&gt;Threads:&lt;/strong&gt; Each application thread requires a thread stack. Thread stacks, which contain method call information and local variables, are stored in this region.&lt;br&gt;
&lt;strong&gt;Code Cache:&lt;/strong&gt; The compiled native code (machine code) of methods is stored in this region for efficient execution.&lt;br&gt;
&lt;strong&gt;Direct Buffer:&lt;/strong&gt; ByteBuffer objects are used by modern frameworks (e.g., Spring WebClient) for efficient I/O operations. They are stored in this region.&lt;br&gt;
&lt;strong&gt;GC (Garbage Collection):&lt;/strong&gt; Memory required for automatic garbage collection to work is stored in this region.&lt;br&gt;
&lt;strong&gt;JNI (Java Native Interface):&lt;/strong&gt; Memory for interacting with native libraries and code written in other languages is stored in this region.&lt;br&gt;
&lt;strong&gt;misc:&lt;/strong&gt; Areas specific to certain JVM implementations or configurations, such as internal JVM structures or reserved memory spaces, are classified as ‘misc’ regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is ‘java.lang.OutOfMemoryError: Metaspace’?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oe30wqt84sm3v9pnkxd.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oe30wqt84sm3v9pnkxd.JPG" alt="Image description" width="658" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: ‘java.lang.OutOfMemoryError: Metaspace’&lt;br&gt;
When more class and method definitions are created in the ‘Metaspace’ region than the allocated Metaspace memory limit allows (i.e., ‘-XX:MaxMetaspaceSize’), the JVM will throw ‘java.lang.OutOfMemoryError: Metaspace’.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What causes ‘java.lang.OutOfMemoryError: Metaspace’?&lt;/strong&gt;&lt;br&gt;
‘java.lang.OutOfMemoryError: Metaspace’ is triggered by the JVM under the following circumstances:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a large number of dynamic classes:&lt;/strong&gt; Your application uses scripting languages like Groovy or Java Reflection to create new classes at runtime.&lt;br&gt;
&lt;strong&gt;Loading a large number of classes:&lt;/strong&gt; Either your application itself has a lot of classes, or it uses a lot of 3rd-party libraries/frameworks which contain a lot of classes.&lt;br&gt;
&lt;strong&gt;Loading a large number of class loaders:&lt;/strong&gt; Your application is loading a lot of class loaders.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions for ‘OutOfMemoryError: Metaspace’&lt;/strong&gt;&lt;br&gt;
Following are the potential solutions to fix this error:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increase Metaspace Size:&lt;/strong&gt; If the OutOfMemoryError surfaced due to an increase in the number of classes loaded, then increase the JVM’s Metaspace size (-XX:MetaspaceSize and -XX:MaxMetaspaceSize). This solution is sufficient to fix most ‘OutOfMemoryError: Metaspace’ errors, because memory leaks rarely happen in the Metaspace region.&lt;br&gt;
&lt;strong&gt;Fix Memory Leak:&lt;/strong&gt; Analyze memory leaks in your application using the approach given in this post. Ensure that class definitions are properly dereferenced when they are no longer needed so they can be garbage collected.&lt;/p&gt;
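&lt;p&gt;For example, the Metaspace limits could be raised at startup like this (the 256m/512m values below are purely illustrative, not recommendations; size them based on your application’s observed class footprint):&lt;/p&gt;

```shell
java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m {app_name}
```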

&lt;p&gt;&lt;strong&gt;Sample Program that generates ‘OutOfMemoryError: Metaspace’&lt;/strong&gt;&lt;br&gt;
To better understand ‘java.lang.OutOfMemoryError: Metaspace’, let’s try to simulate it. Let’s leverage &lt;a href="https://github.com/ycrash/buggyapp" rel="noopener noreferrer"&gt;BuggyApp&lt;/a&gt;, a simple open-source chaos engineering project. BuggyApp can generate various sorts of performance problems such as Memory Leak, Thread Leak, Deadlock, multiple BLOCKED threads, … Below is the java program from the BuggyApp project that simulates ‘java.lang.OutOfMemoryError: Metaspace’ when executed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.UUID;
import javassist.ClassPool;

public class OOMMetaspace {

    public static void main(String[] args) throws Exception {

        ClassPool classPool = ClassPool.getDefault();

        while (true) {

            // Keep creating classes dynamically!
            String className = "com.buggyapp.MetaspaceObject" + UUID.randomUUID();
            classPool.makeClass(className).toClass();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above program, the ‘OOMMetaspace’ class’s ‘main()’ method contains an infinite ‘while (true)’ loop. Within the loop, the thread uses the &lt;a href="https://github.com/jboss-javassist/javassist" rel="noopener noreferrer"&gt;open-source library javassist&lt;/a&gt; to create dynamic classes whose names start with ‘com.buggyapp.MetaspaceObject’. Class names generated by this program will look something like this: ‘com.buggyapp.MetaspaceObjectb7a02000-ff51-4ef8-9433-3f16b92bba78’. When many such dynamic classes are created, the Metaspace memory region reaches its limit and the JVM throws ‘java.lang.OutOfMemoryError: Metaspace’.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to troubleshoot ‘OutOfMemoryError: Metaspace’?&lt;/strong&gt;&lt;br&gt;
To diagnose ‘OutOfMemoryError: Metaspace’, we need to inspect the contents of the Metaspace region. Upon inspecting the contents, you can figure out the leaking area of the application code. Here is a blog post that describes a &lt;a href="https://blog.gceasy.io/2022/08/23/inspect-the-contents-of-the-java-metaspace-region/" rel="noopener noreferrer"&gt;few different approaches to inspect the contents of the Metaspace region&lt;/a&gt;. You can choose the approach that suits your requirements.  My favorite options are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. -verbose:class:&lt;/strong&gt; If you are running on Java version 8 or below, then you can use this option. When you pass the ‘-verbose:class’ option to your application during startup, it will print all the classes that are loaded into memory. Loaded classes will be printed to the standard error stream (i.e., the console, if you aren’t routing your error stream to a log file). Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java {app_name} -verbose:class
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we passed the ‘-verbose:class’ flag to the above program, we started to see the following lines printed in the console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Loaded com.buggyapp.MetaspaceObjecta97f62c5-0f71-4702-8521-c312f3668f47 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject70967d20-609f-42c4-a2c4-b70b50592198 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObjectf592a420-7109-42e6-b6cb-bc5635a6024e from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObjectdc7d12ad-21e6-4b17-a303-743c0008df87 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject01d175cc-01dd-4619-9d7d-297c561805d5 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject5519bef3-d872-426c-9d13-517be79a1a07 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject84ad83c5-7cee-467b-a6b8-70b9a43d8761 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject35825bf8-ff39-4a00-8287-afeba4bce19e from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject665c7c09-7ef6-4b66-bc0e-c696527b5810 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject793d8aec-f2ee-4df6-9e0f-5ffb9789459d from __JVM_DefineClass__]
:
:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a clear indication that classes with the ‘com.buggyapp.MetaspaceObject’ prefix are being loaded into memory at a high rate. This is a great clue/hint to let you know where the leak is happening in the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. -Xlog:class+load:&lt;/strong&gt; If you are running on Java version 9 or above, then you can use this option. When you pass the ‘-Xlog:class+load’ option to your application during startup, it will print all the classes that are loaded into memory. Loaded classes will be printed to the file path you have configured. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java {app_name} -Xlog:class+load=info:/opt/log/loadedClasses.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are still unable to determine the origination of the leak based on the class name, then you can do a deep dive by taking a heap dump from the application. You can &lt;a href="https://blog.heaphero.io/2017/10/13/how-to-capture-java-heap-dumps-7-options/" rel="noopener noreferrer"&gt;capture heap dump using one of the 8 options&lt;/a&gt; discussed in this post. You might choose the option that fits your needs. Once a heap dump is captured, you need to use tools like &lt;a href="https://heaphero.io/" rel="noopener noreferrer"&gt;HeapHero&lt;/a&gt;, JHat, … to analyze the dumps.&lt;br&gt;
&lt;strong&gt;What is a Heap Dump?&lt;/strong&gt;&lt;br&gt;
A heap dump is basically a snapshot of your application’s memory. It contains detailed information about the objects and data structures present in memory: what objects are present, which objects they reference, which objects reference them, what actual customer data is stored in them, how much space they occupy, and whether they are eligible for garbage collection. Heap dumps provide valuable insights into the memory usage patterns of an application, helping developers identify and resolve memory-related issues.&lt;/p&gt;
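&lt;p&gt;If you prefer capturing the dump programmatically rather than with an external tool, HotSpot exposes a diagnostic MXBean for it. This is a sketch: the file name is arbitrary, the call fails if the file already exists, and ‘true’ restricts the dump to live (reachable) objects:&lt;/p&gt;

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostic =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Writes an .hprof snapshot of the heap; 'true' = live objects only
        diagnostic.dumpHeap("metaspace-leak.hprof", true);
        System.out.println("Heap dump written to metaspace-leak.hprof");
    }
}
```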

&lt;p&gt;&lt;strong&gt;How to analyze Metaspace Memory leak through Heap Dump?&lt;/strong&gt;&lt;br&gt;
HeapHero is available in two modes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cloud:&lt;/strong&gt; You can upload the dump to the HeapHero cloud and see the results.&lt;br&gt;
&lt;strong&gt;2. On-Prem:&lt;/strong&gt; You can register, install HeapHero on your local machine, and then do the analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; I prefer using the on-prem installation of the tool instead of the cloud edition, because heap dumps tend to contain sensitive information (such as SSNs, credit card numbers, VAT numbers, …) and I don’t want the dump to be analyzed in an external location.&lt;/p&gt;

&lt;p&gt;Once we captured the heap dump from the above program, we uploaded it to the HeapHero tool. The tool analyzed the dump and generated a report. In the report, go to the ‘Histogram’ view. This view shows all the classes that are loaded into memory. Here you will notice the classes with the prefix ‘com.buggyapp.MetaspaceObject’. Right-click on the ‘…’ next to the class name, then click on ‘List Object(s) with &amp;gt; incoming references’ as shown in the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2b365jvzqmzyokyf4bp.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2b365jvzqmzyokyf4bp.JPG" alt="Image description" width="669" height="375"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Histogram view of showing all the loaded classes in memory&lt;br&gt;
Once you do, the tool will display all the incoming references of this particular class, revealing the origin point of these classes as shown in the figure below. It clearly shows which part of the code is creating these class definitions. Once you know which part of the code is creating them, the problem is easy to fix.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8l2ajs7txq0c713wauwv.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8l2ajs7txq0c713wauwv.JPG" alt="Image description" width="683" height="368"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Incoming References of the class&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In this post, we’ve covered a range of topics, from understanding JVM memory regions to diagnosing and resolving ‘java.lang.OutOfMemoryError: Metaspace’. We hope you’ve found the information useful and insightful. But our conversation doesn’t end here. Your experiences and insights are invaluable to us and to your fellow readers. We encourage you to share your encounters with ‘java.lang.OutOfMemoryError: Metaspace’ in the comments below. Whether it’s a unique solution you’ve discovered, a best practice you swear by, or even just a personal anecdote, your contributions can enrich the learning experience for everyone.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Java Performance Tuning: Adjusting GC Threads for Optimal Results</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Fri, 06 Sep 2024 10:21:26 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/java-performance-tuning-adjusting-gc-threads-for-optimal-results-116o</link>
      <guid>https://dev.to/ram_lakshmanan_001/java-performance-tuning-adjusting-gc-threads-for-optimal-results-116o</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9o26dk6gneepsi6vy34.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9o26dk6gneepsi6vy34.JPG" alt="Image description" width="773" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Garbage Collection (GC) plays an important role in Java’s memory management: it reclaims memory that is no longer in use. The Garbage Collector uses its own set of threads, called GC threads, to do this work. Sometimes the JVM can end up with either too many or too few GC threads. In this post, we will discuss why this happens, the consequences of each, and potential solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Find Your Application’s GC Thread Count&lt;/strong&gt; &lt;br&gt;
You can determine your application’s GC thread count by doing thread dump analysis as outlined below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://blog.fastthread.io/how-to-take-thread-dumps-7-options/" rel="noopener noreferrer"&gt;Capture thread dump&lt;/a&gt; from your production server.&lt;/li&gt;
&lt;li&gt;Analyze the dump using a thread dump analysis tool like fastThread.&lt;/li&gt;
&lt;li&gt;The tool will immediately report the GC thread count, as shown in the figure below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjniwh02bfh27cyl5riqo.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjniwh02bfh27cyl5riqo.JPG" alt="Image description" width="643" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: &lt;a href="https://fastthread.io/" rel="noopener noreferrer"&gt;fastThread tool&lt;/a&gt; reporting GC Thread count&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Set GC Thread Count&lt;/strong&gt;&lt;br&gt;
You can manually adjust the number of GC threads by setting the following two JVM arguments:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. -XX:ParallelGCThreads=n:&lt;/strong&gt; Sets the number of threads used in the parallel phases of the garbage collector.&lt;br&gt;
&lt;strong&gt;2. -XX:ConcGCThreads=n:&lt;/strong&gt; Controls the number of threads used in the concurrent phases of the garbage collector.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is the Default GC Thread Count?&lt;/strong&gt;&lt;br&gt;
If you don’t explicitly set the GC thread count using the above two JVM arguments, then the default GC thread count is derived from the number of CPUs in the server/container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-XX:ParallelGCThreads Default:&lt;/strong&gt; On a Linux/x86 machine, the default is derived from the formula:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;if (num of processors &amp;lt;= 8) {&lt;br&gt;
   return num of processors;&lt;br&gt;
} else {&lt;br&gt;
  return 8 + (num of processors - 8) * (5/8);&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So if your JVM is running on a server with 32 processors, then the ParallelGCThreads value is going to be 23 (i.e., 8 + (32 - 8) * (5/8)).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;-XX:ConcGCThreads Default:&lt;/strong&gt; It’s derived based on the formula:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;max((ParallelGCThreads+2)/4, 1)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So if your JVM is running on a server with 32 processors, then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The ParallelGCThreads value is going to be 23 (i.e., 8 + (32 - 8) * (5/8))&lt;/li&gt;
&lt;li&gt;The ConcGCThreads value is going to be 6 (i.e., max((23 + 2)/4, 1))&lt;/li&gt;
&lt;/ol&gt;
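&lt;p&gt;The two defaults above can be sketched as plain functions. This is an approximation of HotSpot’s internal calculation (which also includes platform-specific adjustments), useful for sanity-checking what your server will get:&lt;/p&gt;

```java
public class GcThreadDefaults {
    // Approximates the -XX:ParallelGCThreads default on Linux/x86
    static int parallelGcThreads(int processors) {
        if (processors <= 8) {
            return processors;
        }
        return 8 + (processors - 8) * 5 / 8;   // truncating integer division
    }

    // Approximates the -XX:ConcGCThreads default
    static int concGcThreads(int parallelGcThreads) {
        return Math.max((parallelGcThreads + 2) / 4, 1);
    }

    public static void main(String[] args) {
        int parallel = parallelGcThreads(32);
        System.out.println("ParallelGCThreads: " + parallel);            // 23
        System.out.println("ConcGCThreads: " + concGcThreads(parallel)); // 6
    }
}
```

&lt;p&gt;Plugging in 128 processors gives 83 parallel and 21 concurrent GC threads, which matches the “around 100 GC threads” scenario discussed below.&lt;/p&gt;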

&lt;p&gt;&lt;strong&gt;How JVM Can End Up with Too Many GC Threads&lt;/strong&gt;&lt;br&gt;
It’s possible for your JVM to unintentionally have too many GC threads, often without your awareness. This typically happens because the default number of GC threads is automatically determined based on the number of CPUs in your server or container.&lt;/p&gt;

&lt;p&gt;For example, on a machine with 128 CPUs, the JVM might allocate around 80 threads for the parallel phase of garbage collection and about 20 threads for the concurrent phase, resulting in a total of approximately 100 GC threads.&lt;/p&gt;

&lt;p&gt;If you’re running multiple JVMs on this 128-CPU machine, each JVM could end up with around 100 GC threads. This can lead to excessive resource usage because all these threads are competing for the same CPU resources. This problem is particularly noticeable in containerized environments, where multiple applications share the same CPU cores. It will cause JVM to allocate more GC threads than necessary, which can degrade overall performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Is Having Too Many GC Threads a Problem?&lt;/strong&gt;&lt;br&gt;
While GC threads are essential for efficient memory management, having too many of them can lead to significant performance challenges in your Java application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Increased Context Switching:&lt;/strong&gt; When the number of GC threads is too high, the operating system must frequently switch between these threads. This leads to increased context switching overhead, where more CPU cycles are spent managing threads rather than executing your application’s code. As a result, your application may slow down significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. CPU Overhead:&lt;/strong&gt; Each GC thread consumes CPU resources. If too many threads are active simultaneously, they can compete for CPU time, leaving less processing power available for your application’s primary tasks. This competition can degrade your application’s performance, especially in environments with limited CPU resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Memory Contention:&lt;/strong&gt; With an excessive number of GC threads, there can be increased contention for memory resources. Multiple threads trying to access and modify memory simultaneously can lead to lock contention, which further slows down your application and can cause performance bottlenecks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Increased GC Pause Times and Lower Throughput:&lt;/strong&gt; When too many GC threads are active, the garbage collection process can become less efficient, leading to longer GC pause times where the application is temporarily halted. These extended pauses can cause noticeable delays or stutters in your application. Additionally, as more time is spent on garbage collection rather than processing requests, your application’s overall throughput may decrease, handling fewer transactions or requests per second and affecting its ability to scale and perform under load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Higher Latency:&lt;/strong&gt; Increased GC activity due to an excessive number of threads can lead to higher latency in responding to user requests or processing tasks. This is particularly problematic for applications that require low latency, such as real-time systems or high-frequency trading platforms, where even slight delays can have significant consequences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Diminishing Returns:&lt;/strong&gt; Beyond a certain point, adding more GC threads does not improve performance. Instead, it leads to diminishing returns, where the overhead of managing these threads outweighs the benefits of faster garbage collection. This can result in degraded application performance, rather than the intended optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Is Having Too Few GC Threads a Problem?&lt;/strong&gt;&lt;br&gt;
While having too many GC threads can create performance issues, having too few GC threads can be equally problematic for your Java application. Here’s why:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Longer Garbage Collection Times:&lt;/strong&gt; With fewer GC threads, the garbage collection process may take significantly longer to complete. Since fewer threads are available to handle the workload, the time required to reclaim memory increases, leading to extended GC pause times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Increased Application Latency:&lt;/strong&gt; Longer garbage collection times result in increased latency, particularly for applications that require low-latency operations. Users might experience delays, as the application becomes unresponsive while waiting for garbage collection to finish.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Reduced Throughput:&lt;/strong&gt; A lower number of GC threads means the garbage collector can’t work as efficiently, leading to reduced overall throughput. Your application may process fewer requests or transactions per second, affecting its ability to scale under load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Inefficient CPU Utilization:&lt;/strong&gt; With too few GC threads, the CPU cores may not be fully utilized during garbage collection. This can lead to inefficient use of available resources, as some cores remain idle while others are overburdened.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Increased Risk of OutOfMemoryErrors and Memory Leaks:&lt;/strong&gt; If the garbage collector is unable to keep up with the rate of memory allocation due to too few threads, it may not be able to reclaim memory quickly enough. This increases the risk of your application running out of memory, resulting in OutOfMemoryErrors and potential crashes. Additionally, insufficient GC threads can exacerbate memory leaks by slowing down the garbage collection process, allowing more unused objects to accumulate in memory. Over time, this can lead to excessive memory usage and further degrade application performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions to Optimize GC Thread Count&lt;/strong&gt;&lt;br&gt;
If your application is suffering from performance issues due to an excessive or insufficient number of GC threads, consider manually setting the GC thread count using the JVM arguments mentioned above, i.e.:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; -XX:ParallelGCThreads=n &lt;/li&gt;
&lt;li&gt; -XX:ConcGCThreads=n&lt;/li&gt;
&lt;/ol&gt;
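&lt;p&gt;For example, a JVM could be launched with explicit GC thread counts like this (a sketch: the thread counts and ‘app.jar’ are placeholders; derive real values from your own GC log analysis):&lt;/p&gt;

```shell
# Illustrative launch command (thread counts and app.jar are placeholders):
#   -XX:ParallelGCThreads caps the stop-the-world (parallel) GC worker threads
#   -XX:ConcGCThreads caps the threads used for concurrent GC phases
java -XX:ParallelGCThreads=4 -XX:ConcGCThreads=2 -jar app.jar
```

&lt;p&gt;Setting these explicitly can be worthwhile because the JVM’s default thread counts are derived from the number of visible CPUs, which may not match your application’s actual needs.&lt;/p&gt;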

&lt;p&gt;Before making these changes in production, it’s essential to study your application’s GC behavior. Start by collecting and analyzing GC logs using tools like GCeasy. This analysis will help you identify whether the current thread count is causing performance bottlenecks. Based on these insights, you can make informed adjustments to the GC thread count without introducing new issues.&lt;/p&gt;

&lt;p&gt;Note: Always test changes in a controlled environment first to confirm that they improve performance before rolling them out to production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Balancing the number of GC threads is key to ensuring your Java application runs smoothly. By carefully monitoring and adjusting these settings, you can avoid potential performance issues and keep your application operating efficiently.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Solve OutOfMemoryError: Java heap space</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Fri, 09 Aug 2024 06:46:08 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/how-to-solve-outofmemoryerror-java-heap-space-3hlc</link>
      <guid>https://dev.to/ram_lakshmanan_001/how-to-solve-outofmemoryerror-java-heap-space-3hlc</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz621tae8sgopiy5cq1eq.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz621tae8sgopiy5cq1eq.JPG" alt="Image description" width="762" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are &lt;a href="https://blog.heaphero.io/types-of-outofmemoryerror/" rel="noopener noreferrer"&gt;9 types of java.lang.OutOfMemoryError&lt;/a&gt;, each signaling a unique memory-related issue within Java applications. Among these, ‘java.lang.OutOfMemoryError: Java heap space’ stands out as one of the most prevalent and challenging errors developers encounter. In this post, we’ll delve into the root causes behind this error, explore potential solutions, and discuss effective diagnostic methods to troubleshoot this problem. Let’s equip ourselves with the knowledge and tools to conquer this common adversary.&lt;/p&gt;

&lt;p&gt;Here’s a video summary of the article:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/L-_GpaWVgYs" rel="noopener noreferrer"&gt;https://youtu.be/L-_GpaWVgYs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JVM Memory Regions&lt;/strong&gt;&lt;br&gt;
To better understand OutOfMemoryError, we first need to understand the different JVM memory regions. Here is a video clip that gives a good introduction to the different JVM memory regions. But in a nutshell, the JVM has the following memory regions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98e95bch67d6a72v3gde.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98e95bch67d6a72v3gde.JPG" alt="Image description" width="666" height="145"&gt;&lt;/a&gt;&lt;br&gt;
Fig: JVM Memory Regions&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Young Generation:&lt;/strong&gt; Newly created application objects are stored in this region.&lt;br&gt;
&lt;strong&gt;2. Old Generation:&lt;/strong&gt; Application objects that live for a longer duration are promoted from the Young Generation to the Old Generation. Basically, this region holds long-lived objects.&lt;br&gt;
&lt;strong&gt;3. Metaspace:&lt;/strong&gt; Class definitions, method definitions and other metadata required to execute your program are stored in the Metaspace region. This region was added in Java 8; before that, metadata definitions were stored in PermGen, which Metaspace replaced.&lt;br&gt;
&lt;strong&gt;4. Threads:&lt;/strong&gt; Each application thread requires a thread stack. The space allocated for thread stacks, which contain method call information and local variables, is accounted for in this region.&lt;br&gt;
&lt;strong&gt;5. Code Cache:&lt;/strong&gt; The compiled native code (machine code) of methods is stored in this region for efficient execution.&lt;br&gt;
&lt;strong&gt;6. Direct Buffers:&lt;/strong&gt; ByteBuffer objects are used by modern frameworks (e.g., Spring WebClient) for efficient I/O operations. They are stored in this region.&lt;br&gt;
&lt;strong&gt;7. GC (Garbage Collection):&lt;/strong&gt; Memory required for automatic garbage collection to do its work is accounted for in this region.&lt;br&gt;
&lt;strong&gt;8. JNI (Java Native Interface):&lt;/strong&gt; Memory for interacting with native libraries and code written in other languages is stored in this region.&lt;br&gt;
&lt;strong&gt;9. misc:&lt;/strong&gt; Areas specific to certain JVM implementations or configurations, such as internal JVM structures or reserved memory spaces, are classified as ‘misc’ regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is ‘java.lang.OutOfMemoryError: Java heap space’?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjboppijo2e1y3h51we8.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjboppijo2e1y3h51we8.JPG" alt="Image description" width="657" height="141"&gt;&lt;/a&gt;&lt;br&gt;
Fig: ‘java.lang.OutOfMemoryError: Java heap space’&lt;br&gt;
When more objects are created in the ‘Heap’ (i.e. Young and Old) region than the allocated memory limit (i.e., ‘-Xmx’), then JVM will throw ‘java.lang.OutOfMemoryError: Java heap space’.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What causes ‘java.lang.OutOfMemoryError: Java heap space’?&lt;/strong&gt;&lt;br&gt;
‘java.lang.OutOfMemoryError: Java heap space’ is triggered by the JVM under the following circumstances:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Increase in Traffic Volume:&lt;/strong&gt; When there is a spike in traffic volume, more objects are created in memory. When more objects are created than the allocated memory limit allows, the JVM throws ‘OutOfMemoryError: Java heap space’.&lt;br&gt;
&lt;strong&gt;2. Memory Leak due to Buggy Code:&lt;/strong&gt; Due to a bug in the code, the application can inadvertently retain references to objects that are no longer needed. This leads to a buildup of unused objects in memory, eventually exhausting the available heap space and resulting in OutOfMemoryError.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions for ‘OutOfMemoryError: Java heap space’&lt;/strong&gt;&lt;br&gt;
The following are potential solutions to fix this error:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Fix Memory Leak:&lt;/strong&gt; Analyze memory leaks or inefficient memory usage patterns using the approach given in this post. Ensure that objects are properly dereferenced when they are no longer needed so that they can be garbage collected.&lt;br&gt;
&lt;strong&gt;2. Increase Heap Size:&lt;/strong&gt; If OutOfMemoryError surfaced due to an increase in traffic volume, then increase the JVM heap size (-Xmx) to allocate more memory to the JVM. However, be cautious not to allocate too much memory, as it can lead to longer garbage collection pauses and potential performance issues.&lt;/p&gt;
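&lt;p&gt;For example, the maximum heap size can be raised at startup like this (the size and ‘app.jar’ are illustrative placeholders):&lt;/p&gt;

```shell
# Illustrative: set the maximum heap size to 4 GB
# (the size and app.jar are placeholders for your own values)
java -Xmx4g -jar app.jar
```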

&lt;p&gt;&lt;strong&gt;Sample Program that generates ‘OutOfMemoryError: Java heap space’&lt;/strong&gt;&lt;br&gt;
To better understand ‘java.lang.OutOfMemoryError: Java heap space’, let’s try to simulate it. Let’s leverage BuggyApp, a simple open-source chaos engineering project. BuggyApp can generate various sorts of performance problems such as memory leaks, thread leaks, deadlocks, multiple BLOCKED threads, … Below is the program from the BuggyApp project that simulates ‘java.lang.OutOfMemoryError: Java heap space’ when executed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class MapManager {

   private static HashMap&amp;lt;Object, Object&amp;gt; myMap = new HashMap&amp;lt;&amp;gt;();

   public void grow() {

      long counter = 0;
      while (true) {

         if (counter % 1000 == 0) {

            System.out.println("Inserted 1000 Records to map!");
         }

         myMap.put("key" + counter, "Large stringgggggggggggggggggggggggg" + counter);         
         ++counter;
      }
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above program has a ‘MapManager’ class that internally contains a ‘HashMap’ object assigned to the ‘myMap’ variable. Within the ‘grow()’ method, there is an infinite ‘while (true)’ loop that keeps populating the ‘HashMap’ object. On every iteration, a new key and value (i.e., ‘key0’ and ‘Large stringgggggggggggggggggggggggg0’) are added to the ‘HashMap’. Since it’s an infinite loop, the ‘myMap’ object gets continuously populated until the heap capacity is saturated. Once the heap capacity limit is exceeded, the application will throw ‘java.lang.OutOfMemoryError: Java heap space’. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to troubleshoot ‘OutOfMemoryError: Java heap space’?&lt;/strong&gt;&lt;br&gt;
It’s a two-step process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Capture Heap Dump:&lt;/strong&gt; You need to capture a heap dump from the application right before the JVM throws OutOfMemoryError. In this post, 8 options to capture the heap dump are discussed. You might choose the option that fits your needs. My favorite option is to pass the -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath= JVM arguments to your application at startup. Example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/tmp/heapdump.bin
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;When you pass the above arguments, the JVM will generate a heap dump and write it to ‘/opt/tmp/heapdump.bin’ whenever OutOfMemoryError is thrown.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Heap Dump?&lt;/strong&gt;&lt;br&gt;
A heap dump is a snapshot of your application’s memory. It contains detailed information about the objects and data structures present in memory. It tells you what objects are present in memory, whom they reference, who references them, what actual customer data is stored in them, how much space they occupy, whether they are eligible for garbage collection… Heap dumps provide valuable insights into the memory usage patterns of an application, helping developers identify and resolve memory-related issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Analyze Heap Dump:&lt;/strong&gt; Once a heap dump is captured, you need to use tools like HeapHero, JHat, … to analyze the dumps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to analyze Heap Dump?&lt;/strong&gt;&lt;br&gt;
In this section let’s discuss how to analyze heap dump using the HeapHero tool.&lt;/p&gt;

&lt;p&gt;HeapHero is available in two modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Cloud: You can upload the dump to the HeapHero cloud and see the results.&lt;/li&gt;
&lt;li&gt;On-Prem: You can register here and get HeapHero installed on your local machine &amp;amp; then do the analysis. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note: I prefer using the on-prem installation of the tool instead of the cloud edition, because heap dumps tend to contain sensitive information (such as SSNs, credit card numbers, VAT numbers, …) and I don’t want the dump to be analyzed in external locations.&lt;/p&gt;

&lt;p&gt;Once the tool is installed, upload your heap dump to the HeapHero tool. The tool will analyze the dump and generate a report. Let’s review the report generated by the tool for the above program.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza8xzahuyt1dpyxrkq06.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza8xzahuyt1dpyxrkq06.JPG" alt="Image description" width="670" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: HeapHero reporting Memory problem detected&lt;br&gt;
From the above screenshot you can see HeapHero reporting that a problem has been detected in the application, and it points out that the ‘MapManager’ class is occupying 99.96% of overall memory. The tool also provides an overall view of the memory (heap size, class count, object count, …).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8eaq82l6l0g7qy3ut4b.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8eaq82l6l0g7qy3ut4b.JPG" alt="Image description" width="667" height="282"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Largest objects in the application&lt;br&gt;
Above is the ‘Largest Objects’ section screenshot from HeapHero’s heap dump analysis report. This section shows the largest objects present in the application. In the majority of cases, the top 2 – 3 largest objects in the application are responsible for the memory leak. Let’s see what important information is provided in this ‘Largest Objects’ section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z9eli97e43vvculi4iz.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z9eli97e43vvculi4iz.JPG" alt="Image description" width="652" height="248"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Largest Objects Breakdown&lt;br&gt;
If you notice #4 and #5, you can see the actual data present in memory. Equipped with this information, you know what the largest objects in the application are and what values they hold. If you want to see who created or is holding on to references to the largest objects, you can use the ‘Incoming References’ feature of the tool. When this option is selected, it will display all the objects that are referencing the ‘MapManager’ class.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wln4vent9wl1gna6n9e.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wln4vent9wl1gna6n9e.JPG" alt="Image description" width="661" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: Incoming references of the Selected Object&lt;br&gt;
From this report you can notice that MapManager is referenced by Object2, which is in turn referenced by Object1, which in turn is referenced by MemoryLeakDemo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In this post, we’ve covered a range of topics, from understanding JVM memory regions to diagnosing and resolving ‘java.lang.OutOfMemoryError: Java heap space’. We hope you’ve found the information useful and insightful. But our conversation doesn’t end here. Your experiences and insights are invaluable to us and to your fellow readers. We encourage you to share your encounters with ‘java.lang.OutOfMemoryError: Java heap space’ in the comments below. Whether it’s a unique solution you’ve discovered, a best practice you swear by, or even just a personal anecdote, your contributions can enrich the learning experience for everyone.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to achieve high GC Throughput</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Tue, 30 Jul 2024 07:55:29 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/how-to-achieve-high-gc-throughput-m4i</link>
      <guid>https://dev.to/ram_lakshmanan_001/how-to-achieve-high-gc-throughput-m4i</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx43ejj9w0kmy3ghex56f.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx43ejj9w0kmy3ghex56f.JPG" alt="Image description" width="505" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, let’s explore a key performance metric studied during garbage collection analysis: ‘GC Throughput’. We’ll understand what it means, its significance in Java applications, and how it impacts overall performance. Additionally, we’ll delve into actionable strategies to improve GC Throughput, unlocking its benefits for modern software development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Garbage Collection Throughput?&lt;/strong&gt;&lt;br&gt;
Whenever an automatic garbage collection event runs, it pauses the application to identify unreferenced objects in memory and evict them. During that pause period, no customer transactions are processed. Garbage Collection throughput indicates what percentage of the application’s time is spent processing customer transactions versus what percentage is spent in garbage collection activities. For example, if someone says their application’s GC throughput is 98%, it means the application spends 98% of its time processing customer transactions and the remaining 2% of its time in garbage collection activities.  &lt;/p&gt;
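&lt;p&gt;The arithmetic behind this percentage can be sketched as follows (the timing figures below are invented purely for illustration):&lt;/p&gt;

```java
public class GcThroughput {

    // GC throughput = percentage of wall-clock time NOT spent in GC pauses.
    static double throughput(double totalRunMillis, double gcPauseMillis) {
        return 100.0 * (totalRunMillis - gcPauseMillis) / totalRunMillis;
    }

    public static void main(String[] args) {
        // Hypothetical figures: 10 minutes of run time, 12 seconds of GC pauses.
        double totalRunMillis = 10 * 60 * 1000;
        double gcPauseMillis = 12 * 1000;
        System.out.println(throughput(totalRunMillis, gcPauseMillis) + "%"); // prints 98.0%
    }
}
```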

&lt;p&gt;A high GC throughput is desirable, as it indicates that the application is efficiently utilizing system resources, leading to minimal interruptions and improved overall performance. Conversely, low GC throughput leads to increased garbage collection pauses, impacting application responsiveness and driving up computing costs. Monitoring and optimizing GC throughput are vital to ensure smooth application execution and responsiveness. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reasons for poor Garbage Collection throughput&lt;/strong&gt;&lt;br&gt;
Reasons for Garbage Collection throughput degradation can be categorized into three buckets: &lt;/p&gt;

&lt;p&gt;a. Performance problems&lt;/p&gt;

&lt;p&gt;b. Wrong GC tuning &lt;/p&gt;

&lt;p&gt;c. Lack of Resources&lt;/p&gt;

&lt;p&gt;Let’s review each of these categories in detail in this section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Performance Problems&lt;/strong&gt;&lt;br&gt;
When there is a performance problem in the application, GC throughput will degrade. Below are the potential performance problems that cause this degradation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Memory Leaks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feg7sral6gwrgm2g5zgmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feg7sral6gwrgm2g5zgmf.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: GC events running repeatedly because of memory leak&lt;/p&gt;

&lt;p&gt;When an application suffers from a memory leak, Garbage Collection events keep running repeatedly without effectively reclaiming memory. In the figure above, you can notice the cluster of red triangles towards the right corner, indicating that GC events are repeatedly running. However, the memory utilization does not decrease, which is a classic indication of a memory leak. In such cases, GC events consume most of the application’s time, resulting in a significant degradation of GC throughput and overall performance. To troubleshoot memory leaks, you may find this video clip helpful: &lt;a href="https://youtu.be/SuguH8YBl5g" rel="noopener noreferrer"&gt;Troubleshooting Memory Leaks&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Consecutive GC Pauses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pgmv9nydzg5us88biu0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pgmv9nydzg5us88biu0.png" alt="Image description" width="800" height="333"&gt;&lt;/a&gt;&lt;br&gt;
Fig: GC events running repeatedly because of high traffic volume&lt;/p&gt;

&lt;p&gt;During peak hours of the day or when running batch processes, your application might experience a high traffic volume. As a result, GC events may run consecutively to clean up the objects created by the application. The figure above shows GC events running consecutively (note the red arrow in the above figure). This scenario leads to a dramatic degradation of GC throughput during that time period. To address this problem, you can refer to the blog post: &lt;a href="https://blog.gceasy.io/2016/11/22/eliminate-consecutive-full-gcs/" rel="noopener noreferrer"&gt;Eliminate Consecutive Full GCs&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Heavy Object creation rate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is a famous Chinese proverb in ‘The Art of War’: ‘The greatest victory is that which requires no battle’. Similarly, instead of focusing on tuning GC events, it would be more efficient if you could prevent GC events from running in the first place. The amount of time spent in garbage collection is directly proportional to the number of objects created by the application. If the application creates more objects, GC events are triggered more frequently; if it creates fewer objects, fewer GC events are triggered.&lt;/p&gt;

&lt;p&gt;By profiling your application’s memory using tools like &lt;a href="https://heaphero.io/" rel="noopener noreferrer"&gt;HeapHero&lt;/a&gt;, you can identify the memory bottlenecks &amp;amp; fix them. Reducing memory consumption will, in turn, reduce the GC impact on your application. Reducing the object creation rate is a tedious and time-consuming process, as it involves studying your application, identifying the bottlenecks, refactoring the code and thoroughly testing it. However, it’s well worth the effort in the long run, as it leads to significant improvements in application performance and more efficient resource usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;b. Wrong GC tuning&lt;/strong&gt;&lt;br&gt;
Another significant reason for degradation in an application’s GC throughput is incorrect Garbage Collection (GC) tuning. Various factors can contribute to this issue:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Wrong GC Algorithm Choice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Garbage Collection algorithm plays a pivotal role in influencing GC pause times. Choosing the wrong GC algorithm can substantially decrease the application’s GC throughput. As of now, there are 7 GC algorithms in OpenJDK: Serial GC, Parallel GC, CMS GC, G1 GC, Shenandoah GC, ZGC and Epsilon. This brings up the question: ‘How do I choose the right GC algorithm for my application?’  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuqbcczt9wo5gdaw3ifo.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuqbcczt9wo5gdaw3ifo.JPG" alt="Image description" width="628" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig: Flow chart to help you to arrive at right GC algorithm&lt;br&gt;
The above flow chart will help you to identify the right GC algorithm for your application. You may also refer to this detailed post which &lt;a href="https://blog.gceasy.io/comparing-java-gc-algorithms-best/" rel="noopener noreferrer"&gt;highlights the capabilities, advantages and disadvantages of each GC algorithm&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Here is a real-world case study of an application, which was used in warehouses to control the robots for shipments. This application was running with the CMS GC algorithm and suffered from long GC pause times of up to 5 minutes. Yes, you read that correctly, it’s 5 minutes, not 5 seconds. During this 5-minute window, robots weren’t receiving instructions from the application and a lot of chaos was caused. When the GC algorithm was switched from CMS GC to G1 GC, the pause time instantly dropped from 5 minutes to 2 seconds. This GC algorithm change made a big difference in improving the warehouse’s delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Lack of (or Incorrect) GC Tuning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Incorrectly configuring GC arguments, or failing to tune the application appropriately, can also lead to a decline in GC throughput. Be advised that there are 600+ JVM arguments related to JVM memory and garbage collection. It’s a tedious task for anyone to choose the right GC arguments from a poorly documented argument list. Thus, we have curated a small handful of JVM arguments for each GC algorithm and given them below. Use the arguments pertaining to your GC algorithm to optimize the GC pause time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gceasy.io/serial-gc-tuning/" rel="noopener noreferrer"&gt;Serial GC Tuning Parameters&lt;/a&gt;&lt;br&gt;
&lt;a href="https://blog.gceasy.io/java-parallel-gc-tuning/" rel="noopener noreferrer"&gt;Parallel GC Tuning Parameters&lt;/a&gt;&lt;br&gt;
&lt;a href="https://blog.gceasy.io/java-cms-gc-tuning/" rel="noopener noreferrer"&gt;CMS GC Tuning Parameters&lt;/a&gt;&lt;br&gt;
&lt;a href="https://blog.gceasy.io/simple-effective-g1-gc-tuning-tips/" rel="noopener noreferrer"&gt;G1 GC Tuning Parameters&lt;/a&gt;&lt;br&gt;
&lt;a href="https://blog.gceasy.io/simple-effective-g1-gc-tuning-tips/" rel="noopener noreferrer"&gt;Shenandoah Tuning Parameters&lt;/a&gt;&lt;br&gt;
&lt;a href="https://blog.gceasy.io/java-zgc-algorithm-tuning/" rel="noopener noreferrer"&gt;ZGC Tuning Parameters&lt;/a&gt;&lt;br&gt;
For a detailed &lt;a href="https://www.youtube.com/watch?v=6G0E4O5yxks" rel="noopener noreferrer"&gt;overview of GC tuning&lt;/a&gt;, you can watch this insightful video talk.&lt;/p&gt;
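&lt;p&gt;For instance, if you have settled on G1 GC, a tuning experiment might start from flags like these (the values are illustrative, not recommendations; validate any change against your own GC logs):&lt;/p&gt;

```shell
# Illustrative G1 GC tuning flags (values and app.jar are placeholders):
#   -XX:MaxGCPauseMillis sets a pause-time goal that G1 tries (but is not guaranteed) to meet
#   -XX:InitiatingHeapOccupancyPercent sets the heap occupancy threshold that
#   triggers the concurrent marking cycle
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=45 -jar app.jar
```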

&lt;p&gt;&lt;strong&gt;6. Wrong Internal Memory Regions Size&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JVM memory has the following internal memory regions:&lt;/p&gt;

&lt;p&gt;a. Young Generation&lt;/p&gt;

&lt;p&gt;b. Old Generation&lt;/p&gt;

&lt;p&gt;c. MetaSpace&lt;/p&gt;

&lt;p&gt;d. Others &lt;/p&gt;

&lt;p&gt;You may visit this video post to learn about &lt;a href="https://www.youtube.com/watch?v=uJLOlCuOR4k" rel="noopener noreferrer"&gt;different JVM memory regions&lt;/a&gt;. Resizing the internal memory regions can also yield significant GC pause time improvements. Here is a &lt;a href="https://blog.gceasy.io/garbage-collection-tuning-success-story-reducing-young-gen-size/" rel="noopener noreferrer"&gt;real case study of an application&lt;/a&gt; that was suffering from a 12.5-second average GC pause time. This application’s Young Generation size was configured at 14.65GB, and the Old Generation size was also configured at the same 14.65GB. Upon reducing the Young Generation size to 1GB, the average GC pause time was remarkably reduced to 138 ms, a 98.9% improvement.&lt;/p&gt;
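&lt;p&gt;The case study’s exact command line isn’t reproduced here, but explicitly capping the Young Generation is typically done with the ‘-Xmn’ flag (the sizes and ‘app.jar’ below are illustrative placeholders):&lt;/p&gt;

```shell
# Illustrative: fix the overall heap at 16 GB and cap the young generation at 1 GB
# (sizes and app.jar are placeholders)
java -Xms16g -Xmx16g -Xmn1g -jar app.jar
```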

&lt;p&gt;&lt;strong&gt;c. Lack of Resources&lt;/strong&gt;&lt;br&gt;
Insufficient system and application-level resources can contribute to the degradation of an application’s Garbage Collection (GC) throughput.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Insufficient Heap Size&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In most applications, &lt;a href="https://blog.gceasy.io/how-to-optimize-memory-allocation/" rel="noopener noreferrer"&gt;heap size is either under-allocated or over-allocated&lt;/a&gt;. When the heap size is under-allocated, GCs run more frequently, degrading the application’s performance. &lt;/p&gt;

&lt;p&gt;Here is a &lt;a href="https://blog.gceasy.io/java-gc-tuning-improved-insurance-company-throughput/" rel="noopener noreferrer"&gt;real case study of an insurance application&lt;/a&gt;, which was  configured to run with 8gb heap size (-Xmx). This heap size wasn’t sufficient enough to handle the incoming traffic, due to which garbage collector was running back-to-back. As we know, whenever a GC event runs, it pauses the application. Thus, when GC events run back-to-back, pause times were getting stretched and application was becoming unresponsive in the middle of the day. Upon observing this behavior, the heap size was increased from 8GB to 12GB. This change reduced the frequency of GC events and significantly improved the application’s overall availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. Insufficient System Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A scarcity of CPU cycles or heavy I/O activity within the application can significantly degrade GC performance. Ensuring sufficient CPU availability on the server, virtual machine (VM), or container hosting your application is crucial. Additionally, minimizing I/O activity can help maintain optimal GC throughput.&lt;/p&gt;

&lt;p&gt;Garbage Collection performance can sometimes suffer due to insufficient system-level resources such as threads, CPU and I/O. GC log analysis tools like GCeasy identify these limitations by examining the following two patterns in your GC log files:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sys time &amp;gt; User Time:&lt;/strong&gt; This pattern indicates that the GC event is spending more time on kernel-level operations (system time) compared to executing user-level code. This could be a sign that your application is facing high contention for system resources, which can hinder GC performance. For more details, you can refer &lt;a href="https://blog.gceasy.io/sys-time-greater-than-user-time/" rel="noopener noreferrer"&gt;to this article&lt;/a&gt;.&lt;/p&gt;
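&lt;p&gt;As a concrete illustration, here is a hypothetical GC log ‘Times’ entry exhibiting this ‘Sys time &amp;gt; User Time’ pattern (the numbers are invented):&lt;/p&gt;

```
[Times: user=0.12 sys=0.64, real=0.80 secs]
```

&lt;p&gt;Here the kernel-level (sys) time dominates the user-level time, which typically points to operating-system-level interference such as swapping or heavy I/O contention on the host.&lt;/p&gt;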

&lt;p&gt;&lt;strong&gt;Real Time &amp;gt; Sys Time + User Time:&lt;/strong&gt; This pattern suggests that the elapsed wall-clock (real) time exceeds the combined CPU time (system time plus user time). Since GC work is done by multiple parallel threads, real time should normally be well below the combined CPU time; when it isn’t, the GC threads were likely starved of CPU or stalled on I/O. You can find more information &lt;a href="https://blog.gceasy.io/real-time-greater-than-user-and-sys-time/" rel="noopener noreferrer"&gt;about this pattern here&lt;/a&gt;.&lt;br&gt;
To address these system-level limitations, consider taking one of the following actions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increase GC Threads:&lt;/strong&gt; Allocate more GC threads to your application by adjusting the relevant JVM parameters.&lt;br&gt;
&lt;strong&gt;Add CPU Resources:&lt;/strong&gt; If your application is running on a machine with limited CPU capacity, consider scaling up by adding more CPU cores. This can provide the additional processing power needed to handle GC operations more efficiently.&lt;br&gt;
&lt;strong&gt;Optimize I/O Bandwidth:&lt;/strong&gt; Ensure that your application’s I/O operations are optimized and are not creating bottlenecks. Poor I/O performance can lead to increased system time, negatively impacting GC performance.&lt;/p&gt;
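
&lt;p&gt;For reference, the GC thread counts can be set explicitly with standard OpenJDK flags; the values and jar name below are placeholders to adapt to your hardware:&lt;/p&gt;

```shell
# Example values only; tune to the number of cores actually available.
# ParallelGCThreads = threads for stop-the-world phases,
# ConcGCThreads     = threads for concurrent phases (G1 and similar collectors).
java -XX:ParallelGCThreads=8 -XX:ConcGCThreads=2 -jar your-app.jar
```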

&lt;p&gt;&lt;strong&gt;9. Old Version of JDK&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JDK development teams continually improve GC performance. Running on an outdated JDK version prevents you from benefiting from the latest enhancements. To maximize GC throughput, it’s recommended to keep your JDK up to date. You can find the &lt;a href="https://openjdk.org/" rel="noopener noreferrer"&gt;latest JDK release&lt;/a&gt; information here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Garbage Collection (GC) throughput is a critical metric in ensuring the efficient operation of Java applications. By understanding its significance and the factors that influence it, you can take actionable steps to optimize GC throughput and enhance overall performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To achieve high GC throughput:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Address Performance Problems: Identify and resolve memory leaks, manage heavy object creation rates, and avoid consecutive GC pauses during high traffic periods.&lt;/li&gt;
&lt;li&gt;Optimize GC Tuning: Select the appropriate GC algorithm, correctly configure GC tuning parameters, and adjust internal memory region sizes to improve GC pause times.&lt;/li&gt;
&lt;li&gt;Ensure Adequate Resources: Allocate sufficient heap size, provide enough CPU resources, and minimize I/O activity to prevent system-level bottlenecks.&lt;/li&gt;
&lt;li&gt;Keep Your JDK Updated: Regularly update your JDK to benefit from the latest GC performance improvements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By implementing these strategies, you can significantly reduce garbage collection pauses, leading to better application responsiveness and resource utilization.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is Java’s default GC algorithm?</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Fri, 21 Jun 2024 06:32:49 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/what-is-javas-default-gc-algorithm-4fgb</link>
      <guid>https://dev.to/ram_lakshmanan_001/what-is-javas-default-gc-algorithm-4fgb</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo74ov2mnooxfktp6t9r.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwo74ov2mnooxfktp6t9r.JPG" alt="Image description" width="654" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are you wondering what the default Java Garbage Collection algorithm is? It depends on 3 factors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Who is your JVM vendor (e.g., OpenJDK, OpenJ9, Azul …)?&lt;/li&gt;
&lt;li&gt;What version of Java are you running (e.g., Java 8, 11, 17 …)?&lt;/li&gt;
&lt;li&gt;Which class of JVM are you running (i.e., client or server)?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The table below summarizes the default garbage collection algorithms for OpenJDK:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkc0dyilag568fzbodnx.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkc0dyilag568fzbodnx.JPG" alt="Image description" width="400" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Besides the above-mentioned Serial GC, Parallel GC, and G1 GC default algorithms, the following algorithms are also available in OpenJDK: CMS GC, Shenandoah GC, ZGC, and Epsilon GC. By following the tips given in this post, you can &lt;a href="https://blog.gceasy.io/2024/05/27/what-is-the-best-java-gc-algorithm/"&gt;choose the right GC algorithm for your application&lt;/a&gt;.&lt;/p&gt;
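
&lt;p&gt;To confirm which collector your own JVM picked, you can run ‘java -XX:+PrintFlagsFinal -version’ and filter for the enabled ‘Use…GC’ flag. The sketch below applies that filter to simulated flag output, since the exact listing varies by vendor and version:&lt;/p&gt;

```shell
# Simulated PrintFlagsFinal output; on a real machine, pipe the output of
#   java -XX:+PrintFlagsFinal -version
# into the same grep.
printf '%s\n' \
  'bool UseG1GC                 = true   {product} {ergonomic}' \
  'bool UseParallelGC           = false  {product} {default}' \
  'bool UseSerialGC             = false  {product} {default}' \
  | grep 'GC *= true'
# prints: bool UseG1GC                 = true   {product} {ergonomic}
```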

</description>
    </item>
    <item>
      <title>Degradation in String Deduplication Performance in Recent Java Versions</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Mon, 13 May 2024 05:38:37 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/degradation-in-string-deduplication-performance-in-recent-java-versions-2ce5</link>
      <guid>https://dev.to/ram_lakshmanan_001/degradation-in-string-deduplication-performance-in-recent-java-versions-2ce5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq024dol5zkedxgfzyqie.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq024dol5zkedxgfzyqie.JPG" alt="Image description" width="797" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;String deduplication is an important feature aimed at optimizing memory usage by eliminating duplicate strings from heap memory. However, recent observations suggest a concerning trend – a degradation in string deduplication performance across newer Java versions. Thus, we embarked on a comparative analysis to assess the string deduplication performance behavior in Java versions 11, 17, and 21. This post intends to share our observations and insights gleaned from this analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebCrawler Spring Boot Application&lt;/strong&gt;&lt;br&gt;
In order to experiment with string deduplication, we used an &lt;a href="https://github.com/unnivm/webcrawler"&gt;open source Web Crawler&lt;/a&gt; application developed in Spring Boot. The web crawler is a REST-based application that crawls any given website and archives the site’s information into an H2 database. When an HTTP POST request is made, WebCrawler starts the crawling job on the site and returns a jobId. This jobId can be used later to query the status of the crawling task.&lt;/p&gt;

&lt;p&gt;To crawl the Wikipedia website, you pass the seed as &lt;a href="https://en.wikipedia.org/wiki"&gt;https://en.wikipedia.org/wiki&lt;/a&gt; along with a depth. The depth decides how deep the crawler should go into the websites to extract information. For example, the &lt;a href="http://localhost:8003/start?depth=100&amp;amp;seed=https://en.wikipedia.org/wiki"&gt;http://localhost:8003/start?depth=100&amp;amp;seed=https://en.wikipedia.org/wiki&lt;/a&gt;&lt;br&gt;
URL will crawl the Wikipedia site with a depth of 100 in the background.&lt;/p&gt;

&lt;p&gt;Once you make a POST request, it returns the jobId as a response and the crawling starts in the background. When the crawling task reaches the specified depth, it stops automatically.&lt;/p&gt;

&lt;p&gt;We are going to load test this web crawler with a POST request on Java 11, 17, and 21 and study the String deduplication performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enabling String Deduplication&lt;/strong&gt;&lt;br&gt;
To study the behavior of String Deduplication, it must first be enabled in the JVM. This can be achieved by passing the following JVM argument:&lt;/p&gt;

&lt;p&gt;-XX:+UseStringDeduplication&lt;br&gt;
The performance characteristics of String Deduplication events (like how many strings were examined, how many of them were deduplicated, how long it took to complete, …) can be printed into the log file by passing the following JVM arguments to the application:&lt;/p&gt;

&lt;p&gt;-Xlog:stringdedup*=debug:file=string-dup-logfile.log  -Xloggc:string-dup-logfile.log&lt;br&gt;
With the above JVM arguments, we direct both the string deduplication debug events and the GC event log messages into a single ‘string-dup-logfile.log’ file.&lt;/p&gt;
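
&lt;p&gt;Putting the flags together, a launch command looks roughly like this (note that -XX:+UseStringDeduplication has historically required the G1 collector; the jar name is a placeholder):&lt;/p&gt;

```shell
# Illustrative launch command: enable string deduplication and route
# its debug events plus GC events into one log file.
java -XX:+UseG1GC -XX:+UseStringDeduplication \
     -Xlog:stringdedup*=debug:file=string-dup-logfile.log \
     -Xloggc:string-dup-logfile.log \
     -jar webcrawler.jar
```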

&lt;p&gt;For detailed information about these JVM arguments, refer to this article on &lt;a href="https://blog.ycrash.io/string-deduplication-in-java/"&gt;String deduplication&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;JMeter Load Test&lt;/strong&gt;&lt;br&gt;
We conducted load testing on the WebCrawler application using JMeter, simulating a load of 50 users submitting crawling tasks with identical seed URLs and depths for approximately 1 hour. We submitted the same URL so that a large number of duplicate strings would be created in the application.&lt;/p&gt;

&lt;p&gt;Note: this test was conducted on my local laptop, whose configuration is:&lt;/p&gt;

&lt;p&gt;Operating System: Windows 11 &lt;br&gt;
System type: 64-bit operating system, x64-based processor&lt;br&gt;
RAM: 12 GB &lt;br&gt;
Processor: Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz   1.19 GHz&lt;br&gt;
Java Heap Size: 512 MB (i.e. -Xmx512m)&lt;/p&gt;

&lt;p&gt;The image below shows the JMeter configuration used for this test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pivn5rud4fz3fg8croa.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6pivn5rud4fz3fg8croa.JPG" alt="Image description" width="746" height="259"&gt;&lt;/a&gt;&lt;br&gt;
Fig: POST method configuration in JMeter&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Garbage Collection Analysis Study&lt;/strong&gt;&lt;br&gt;
After completing the tests, we uploaded the generated Garbage Collection log file to the online GCeasy tool for analysis. The tool promptly generated reports showcasing String Deduplication performance metrics for each Java version tested. Here are the reports generated by the tool:&lt;/p&gt;

&lt;p&gt;Java 11 GC report: &lt;a href="https://rb.gy/8o440d"&gt;https://rb.gy/8o440d&lt;/a&gt;   &lt;/p&gt;

&lt;p&gt;Java 17 GC report: &lt;a href="https://rb.gy/2hoz1p"&gt;https://rb.gy/2hoz1p&lt;/a&gt;   &lt;/p&gt;

&lt;p&gt;Java 21 GC report: &lt;a href="https://rb.gy/8n74lu"&gt;https://rb.gy/8n74lu&lt;/a&gt;   &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comparison Study of String Deduplication Key Metrics Across Java Versions&lt;/strong&gt;&lt;br&gt;
The table below summarizes key String deduplication metrics across the Java versions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zhxj8etq7vgrrepqtle.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zhxj8etq7vgrrepqtle.JPG" alt="Image description" width="760" height="320"&gt;&lt;/a&gt;&lt;br&gt;
The analytics reveal that Java 11 exhibits the best String Deduplication performance, eliminating 34.3% of strings. Conversely, Java 17 and 21 only managed to eliminate 9.3% and 3.4%, respectively. Moreover, the time taken to deduplicate strings increased in modern Java versions, with Java 11 completing the process in only 1,264.442 ms, compared to 2,323.492 ms for Java 17 and 3,439.466 ms for Java 21.&lt;/p&gt;

&lt;p&gt;In essence, modern JVMs are spending more time inspecting a higher number of strings while eliminating fewer duplicates from memory. This underscores a clear degradation in JVM performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
The investigation into string deduplication performance across Java versions 11, 17, and 21 yields significant insights into the evolution of JVM behavior. While string deduplication serves as a vital mechanism for optimizing memory utilization, our analysis reveals a concerning trend of degradation in performance across newer Java releases. Java 11 emerges as the standout performer, efficiently eliminating a substantial portion of duplicate strings within a shorter time frame. In contrast, Java 17 and 21 exhibit diminished effectiveness, both in terms of the percentage of strings deduplicated and the time taken to execute the process.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to analyze Node.js Garbage Collection traces?</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Mon, 29 Apr 2024 06:15:35 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/how-to-analyze-nodejs-garbage-collection-traces-4696</link>
      <guid>https://dev.to/ram_lakshmanan_001/how-to-analyze-nodejs-garbage-collection-traces-4696</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvvkoaj2gzwxw0ma7k81.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvvkoaj2gzwxw0ma7k81.JPG" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;br&gt;
Is your Node.js application experiencing unresponsiveness or performance bottlenecks? The problem could originate from long-running Garbage Collection pauses or memory leaks. In such circumstances, you might want to study your Node.js application’s Garbage Collection performance. In this post, we’ll walk you through the process of enabling GC traces, interpreting the trace data, and choosing the right tools to study Garbage Collection behavior.&lt;/p&gt;

&lt;p&gt;How to enable Node.js Garbage Collection traces?&lt;br&gt;
There are few approaches to &lt;a href="https://blog.gceasy.io/2024/04/03/how-to-capture-node-js-garbage-collection-traces/"&gt;enable the Node.js Garbage Collection traces&lt;/a&gt;. Easiest and most straightforward approach is to pass the ‘–trace-gc’ flag along with your usual invocation command. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node --trace-gc my-script.mjs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the ‘--trace-gc’ flag is enabled, your Node.js application will start generating garbage collection traces in the console output. These traces provide valuable insights into memory usage, GC events, and potential performance bottlenecks. Garbage Collection traces look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[721159:0x61f0210]  1201125 ms: Scavenge 27.7 (28.8) -&amp;gt; 26.8 (29.8) MB, 0.5 / 0.2 ms  (average mu = 0.999, current mu = 0.970) allocation failure 
[721166:0x5889210]  1201338 ms: Scavenge 30.7 (32.1) -&amp;gt; 29.7 (33.1) MB, 0.6 / 0.3 ms  (average mu = 0.998, current mu = 0.972) allocation failure 
[721173:0x54fc210]  1202608 ms: Scavenge 26.8 (28.3) -&amp;gt; 25.8 (29.3) MB, 0.7 / 0.4 ms  (average mu = 0.999, current mu = 0.972) allocation failure 
[721152:0x54ca210]  1202879 ms: Scavenge 30.5 (31.8) -&amp;gt; 29.6 (32.8) MB, 0.6 / 0.2 ms  (average mu = 0.999, current mu = 0.978) allocation failure 
[721166:0x5889210]  1202925 ms: Scavenge 30.6 (32.1) -&amp;gt; 29.7 (33.1) MB, 0.7 / 0.3 ms  (average mu = 0.998, current mu = 0.972) task 
[721159:0x61f0210]  1203105 ms: Scavenge 27.7 (28.8) -&amp;gt; 26.7 (29.8) MB, 0.4 / 0.2 ms  (average mu = 0.999, current mu = 0.970) allocation failure 
[721173:0x54fc210]  1204660 ms: Scavenge 26.8 (28.3) -&amp;gt; 25.8 (29.3) MB, 0.5 / 0.2 ms  (average mu = 0.999, current mu = 0.972) allocation failure 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to analyze Node.js GC log?&lt;/strong&gt;&lt;br&gt;
Garbage Collection traces contain a rich set of information, such as how many objects were created, how many objects were garbage collected, how long each Garbage Collection event took to complete, how much memory was reclaimed after every GC event, and whether there are any memory leaks. You may refer to this post to see &lt;a href="https://blog.gceasy.io/2024/04/03/understanding-node-js-gc-traces/"&gt;how to interpret and read the GC trace&lt;/a&gt;. However, interpreting GC traces manually is a tedious and time-consuming process. Thus, you may consider using the GCeasy online tool to analyze Node.js GC traces.&lt;/p&gt;

&lt;p&gt;You can go to the GCeasy tool, sign up for a free account, and upload the GC trace. The tool will instantly parse the GC traces and generate a report that contains vital Garbage Collection analysis metrics and graphs. For your reference, &lt;a href="https://gceasy.io/my-gc-report.jsp?p=YXJjaGl2ZWQvMjAyNC8wNC8xL25vZGVqcy1nYy50eHQtLTEtMzgtNDE=&amp;amp;channel=WEB&amp;amp;s=t"&gt;here is the live report&lt;/a&gt; generated by the tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node.js GC trace analysis report&lt;/strong&gt;&lt;br&gt;
The GCeasy report provides a rich set of graphs, metrics, and statistics around the Garbage Collection overhead added to the Node.js application. Below are some excerpts from the GCeasy report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsia5i8d2wkd904oa6k1.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsia5i8d2wkd904oa6k1.JPG" alt="Image description" width="747" height="398"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Heap usage graph&lt;br&gt;
The ‘Heap usage’ graph reports the memory trend after every GC event. You can notice that memory usage drops after GC events.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo4fgchyer2tklyg3m77.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo4fgchyer2tklyg3m77.JPG" alt="Image description" width="743" height="399"&gt;&lt;/a&gt;&lt;br&gt;
Fig: GC Duration Time Graph&lt;br&gt;
The ‘GC Duration Time’ graph indicates the time taken by each GC event to run. A red triangle indicates a Full GC event, and a green square indicates a Young (or minor) GC event. You can notice that, in general, Full GC events (red triangles) run less frequently but take more time, while Young GC events (green squares) run more often but take less time. This is because a Full GC runs on all the regions of the Node.js memory, whereas a minor GC runs only on the young region of the memory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepru6sn5m4im7tdw9sb4.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fepru6sn5m4im7tdw9sb4.JPG" alt="Image description" width="776" height="405"&gt;&lt;/a&gt;&lt;br&gt;
Fig: Reclaimed Bytes Graph&lt;br&gt;
The ‘Reclaimed Bytes’ graph shows the amount of memory reclaimed after each GC event. You can notice that Full GC events reclaim more memory than Young GC events.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qqw9wy8xlzzxr33kd9h.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qqw9wy8xlzzxr33kd9h.JPG" alt="Image description" width="724" height="520"&gt;&lt;/a&gt;&lt;br&gt;
Fig: GC Statistics section&lt;br&gt;
Besides giving a graphical visualization of the Garbage Collection behavior, the tool also provides a number of statistical metrics. The ‘GC statistics’ section above reports the total number of GC events, their average time, min/max time, standard deviation, and more. These metrics are vital for studying the overhead added by automatic garbage collection to the Node.js application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zin0w101uoxppp6jzxv.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zin0w101uoxppp6jzxv.JPG" alt="Image description" width="737" height="293"&gt;&lt;/a&gt;&lt;br&gt;
Fig: GC Causes section&lt;br&gt;
Another interesting section in the report is the ‘GC Causes’ section. This section reveals the reasons why Garbage Collection was triggered in the application. For example, in this application 50,965 GC events were triggered because of ‘Allocation Failure’, which happens when there is a lack of memory in the young generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Node.js garbage collection analysis provides insights into your application’s memory management, performance, and stability. By understanding and interpreting GC traces, you can uncover hidden performance bottlenecks, identify memory leaks, and optimize memory usage for optimal application performance. Armed with this knowledge, you can make informed decisions to optimize your code, fine-tune garbage collection parameters, and address any underlying issues affecting performance.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to capture Node.js Garbage Collection traces?</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Mon, 15 Apr 2024 10:02:33 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/how-to-capture-nodejs-garbage-collection-traces-bi5</link>
      <guid>https://dev.to/ram_lakshmanan_001/how-to-capture-nodejs-garbage-collection-traces-bi5</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjokzlephj51c69mlna2.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjokzlephj51c69mlna2.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Garbage collection (GC) is a fundamental aspect of memory management in Node.js applications. However, inefficient garbage collection can lead to performance issues, causing application slowdowns and potentially impacting user experience. To ensure optimal performance and diagnose memory problems, it’s essential to study garbage collection traces. In this blog post, we’ll explore various methods for capturing garbage collection traces from Node.js applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Options to capture Garbage Collection traces from Node.js applications&lt;/strong&gt;&lt;br&gt;
There are 3 options to capture Garbage Collection traces from Node.js applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;‘--trace-gc’ flag&lt;/li&gt;
&lt;li&gt;v8 module&lt;/li&gt;
&lt;li&gt;Performance Hook&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s discuss each of them in this post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. ‘--trace-gc’ flag&lt;/strong&gt;&lt;br&gt;
The easiest and most straightforward approach is to pass the ‘--trace-gc’ flag along with your usual invocation command. Example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node --trace-gc my-script.mjs&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once the ‘--trace-gc’ flag is enabled, your Node.js application will start generating garbage collection traces in the console output. These traces provide valuable insights into memory usage, GC events, and potential performance bottlenecks. Garbage Collection traces look something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[721159:0x61f0210]  1201125 ms: Scavenge 27.7 (28.8) -&amp;gt; 26.8 (29.8) MB, 0.5 / 0.2 ms  (average mu = 0.999, current mu = 0.970) allocation failure &lt;br&gt;
[721166:0x5889210]  1201338 ms: Scavenge 30.7 (32.1) -&amp;gt; 29.7 (33.1) MB, 0.6 / 0.3 ms  (average mu = 0.998, current mu = 0.972) allocation failure &lt;br&gt;
[721173:0x54fc210]  1202608 ms: Scavenge 26.8 (28.3) -&amp;gt; 25.8 (29.3) MB, 0.7 / 0.4 ms  (average mu = 0.999, current mu = 0.972) allocation failure &lt;br&gt;
[721152:0x54ca210]  1202879 ms: Scavenge 30.5 (31.8) -&amp;gt; 29.6 (32.8) MB, 0.6 / 0.2 ms  (average mu = 0.999, current mu = 0.978) allocation failure &lt;br&gt;
[721166:0x5889210]  1202925 ms: Scavenge 30.6 (32.1) -&amp;gt; 29.7 (33.1) MB, 0.7 / 0.3 ms  (average mu = 0.998, current mu = 0.972) task &lt;br&gt;
[721159:0x61f0210]  1203105 ms: Scavenge 27.7 (28.8) -&amp;gt; 26.7 (29.8) MB, 0.4 / 0.2 ms  (average mu = 0.999, current mu = 0.970) allocation failure &lt;br&gt;
[721173:0x54fc210]  1204660 ms: Scavenge 26.8 (28.3) -&amp;gt; 25.8 (29.3) MB, 0.5 / 0.2 ms  (average mu = 0.999, current mu = 0.972) allocation failure&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Tool to analyze GC Traces: Analyzing garbage collection (GC) traces manually can be a daunting task due to the wealth of information they contain. To simplify this process and gain valuable insights, consider using the GCeasy online tool. Upon uploading GC Traces to the GCeasy tool, it generates insightful graphs, metrics, and recommendations to optimize the GC performance of your Node.js application. Here is a sample GC trace analysis report generated by the tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. v8 module&lt;/strong&gt;&lt;br&gt;
If you don’t want to enable GC traces for the entire lifetime of the application, or if you want to enable them only under certain conditions or in certain parts of the code, then you can use the ‘v8’ module, as it provides options to add/remove flags at run-time. Using the ‘v8’ module, you can pass the ‘--trace-gc’ flag and remove it as shown in the below code snippet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import v8 from 'v8';

// enable trace-gc
v8.setFlagsFromString('--trace-gc');

// app code
// ..
// ..

// disable trace-gc
v8.setFlagsFromString('--notrace-gc');
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;3. Performance Hook&lt;/strong&gt;&lt;br&gt;
Node.js has a built-in ‘perf_hooks’ module that facilitates you to capture performance metrics from the application. You can use the ‘perf_hooks’ module to capture Garbage Collection traces. Refer to the code snippet below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { performance, PerformanceObserver } = require('perf_hooks');

// Step 1: Create a PerformanceObserver to monitor GC events
const obs = new PerformanceObserver((list) =&amp;gt; {
  const entries = list.getEntries();
  for (const entry of entries) {
    // Printing GC events in the console log
    console.log(entry);
  }
});

// Step 2: Subscribe to GC events
obs.observe({ entryTypes: ['gc'], buffered: true });

// Step 3: Stop subscription
obs.disconnect();
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;In the above code, we do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import the ‘performance’ and ‘PerformanceObserver’ classes from the ‘perf_hooks’ module.&lt;/li&gt;
&lt;li&gt;Create a ‘PerformanceObserver’ instance to monitor garbage collection events (the ‘gc’ entry type).&lt;/li&gt;
&lt;li&gt;Whenever a Garbage Collection event occurs in the application, log it to the console using the ‘console.log(entry)’ statement.&lt;/li&gt;
&lt;li&gt;Start observing GC events with ‘obs.observe()’.&lt;/li&gt;
&lt;li&gt;Stop observing GC events with ‘obs.disconnect()’.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the above code snippet is added to your application, you will start to see GC events reported in the console in JSON format, as below:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  kind: 'mark_sweep_compact',&lt;br&gt;
  startTime: 864.659982532,&lt;br&gt;
  duration: 7.824,&lt;br&gt;
  entryType: 'gc',&lt;br&gt;
  name: 'GC Event'&lt;br&gt;
}&lt;br&gt;
{&lt;br&gt;
  kind: 'scavenge',&lt;br&gt;
  startTime: 874.589382193,&lt;br&gt;
  duration: 3.245,&lt;br&gt;
  entryType: 'gc',&lt;br&gt;
  name: 'GC Event'&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In this post, we explored three main methods for capturing garbage collection traces in Node.js applications: using the ‘--trace-gc’ flag, leveraging the v8 module for dynamic tracing, and utilizing the perf_hooks module. Each method offers its own advantages and flexibility in capturing and analyzing GC events. Additionally, we discussed the importance of analyzing GC traces effectively and recommended using online tools like GCeasy to simplify the analysis process and derive actionable insights. Hope you found it helpful.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Optimizing Robotics application’s Performance!</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Wed, 20 Mar 2024 07:45:35 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/optimizing-robotics-applications-performance-3761</link>
      <guid>https://dev.to/ram_lakshmanan_001/optimizing-robotics-applications-performance-3761</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7t7x2722gb0eca0g654.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7t7x2722gb0eca0g654.PNG" alt="Image description" width="681" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, we would like to share our real-world experience optimizing a Java application that controlled the robots in a warehouse. The application gave instructions to the robots on what actions to perform, and the robots carried out their jobs based on those instructions. Occasionally, the application slowed down and stopped sending instructions. When robots don&#8217;t receive instructions from the application, they start making autonomous decisions, causing degraded behavior, which in turn was affecting deliveries and shipments in the warehouse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Garbage Collection Pause&lt;/strong&gt;&lt;br&gt;
The best way to start troubleshooting a Java application&#8217;s performance is to study its Garbage Collection behavior, and this is even more true when the application suffers from slowdowns. We took this application&#8217;s Garbage Collection log file and uploaded it to the GCeasy tool. (Note: the Garbage Collection log contains vital statistics that most monitoring tools don&#8217;t report, and it adds almost &lt;a href="https://blog.gceasy.io/2021/08/17/overhead-added-by-garbage-collection-logging/"&gt;no overhead to your application&lt;/a&gt;. Thus, it&#8217;s a &lt;a href="https://blog.gceasy.io/2017/10/17/what-is-garbage-collection-log-how-to-enable-analyze/"&gt;best practice to enable the Garbage Collection log&lt;/a&gt; on all your production instances.) The tool analyzed the Garbage Collection log and instantly generated &lt;a href="https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjQvMDIvNi9nYy4yMDIzMTIxNC5sb2ctLTIzLTU5LTY=&amp;amp;channel=WEB"&gt;this insightful GC log analysis report&lt;/a&gt;.&lt;/p&gt;
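&lt;p&gt;For reference, on Java 8 (which this application was running), the Garbage Collection log can be enabled with flags along these lines; the log file path is illustrative:&lt;/p&gt;

```plaintext
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/app-gc.log
```

&lt;p&gt;On Java 9 and later, the unified logging equivalent is -Xlog:gc*:file=/var/log/app-gc.log.&lt;/p&gt;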

&lt;p&gt;The tool reported various interesting metrics and graphs. The Garbage Collection Pause time graph in the report is of most interest to our discussion. Below is that graph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbonoc3yjw1glahcat6bx.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbonoc3yjw1glahcat6bx.PNG" alt="Image description" width="757" height="407"&gt;&lt;/a&gt;&lt;br&gt;
Fig 1: Garbage Collection Pause Duration Graph generated by GCeasy&lt;br&gt;
Whenever a Garbage Collection event runs, it pauses the entire application. During that pause, no transactions are processed, and all in-flight transactions are halted. From the above graph you can notice that at 11:35am, during the peak traffic time, a Garbage Collection event paused the entire application for 329 seconds (i.e. 5 minutes and 29 seconds). During that entire 5+ minute window, none of the robots would have received instructions from this application; they would have made decisions autonomously, disrupting the business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is causing the long GC pause?&lt;/strong&gt;&lt;br&gt;
There were two primary reasons causing such long Garbage Collection Pause:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Large heap size. The application was configured to run with a 126GB heap. Typically, when the heap is large, garbage collection also takes longer, because many more objects accumulate and it takes longer to evict them from memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CMS (Concurrent Mark &amp;amp; Sweep) algorithm. The &lt;a href="https://blog.gceasy.io/2023/11/18/java-cms-gc-tuning/"&gt;CMS GC algorithm&lt;/a&gt; runs well and is an apt fit for several applications; however, its major drawback is the occasional long GC pause. Most of its GC pauses stay within an acceptable range, but occasionally it causes a terribly long pause due to &lt;a href="https://blog.gceasy.io/2020/05/31/what-is-java-heap-fragmentation/"&gt;heap fragmentation&lt;/a&gt;, which can last for several seconds (sometimes even minutes), as in this situation.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Potential Solutions to reduce Long GC Pause&lt;/strong&gt;&lt;br&gt;
Here is a blog post which highlights potential &lt;a href="https://blog.gceasy.io/2016/11/22/reduce-long-gc-pauses/"&gt;solutions to reduce long Garbage Collection pause&lt;/a&gt;. We were contemplating a couple of solutions to address this long GC pause.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Reducing Heap Size&lt;/strong&gt;&lt;br&gt;
Reducing the application&#8217;s heap size is a potential solution. However, this monolith&#8217;s object creation rate was very high, and reducing the heap size could hurt the application&#8217;s responsiveness. The heap can be shrunk only if the &lt;a href="https://blog.heaphero.io/2019/11/18/memory-wasted-by-spring-boot-application/"&gt;application&#8217;s memory consumption can be reduced&lt;/a&gt;, which warrants refactoring the application code. A re-architecture of this monolith was already underway: it was being broken down and rewritten as microservices with much smaller heap sizes. However, that re-architecture was slated to go live 6 to 9 months later, so the customer was hesitant to reduce the heap size until then.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Switching from CMS to G1 GC algorithm&lt;/strong&gt;&lt;br&gt;
The other solution was to migrate away from the CMS GC algorithm. Irrespective of its performance here, the CMS GC algorithm has been &lt;a href="https://blog.gceasy.io/2019/02/18/cms-deprecated-next-steps/"&gt;deprecated since Java 9&lt;/a&gt; and was permanently &lt;a href="https://blog.gceasy.io/2023/11/22/cms-gc-algorithm-removed-from-java-14/"&gt;removed in Java 14&lt;/a&gt;. If we want to move away from the CMS GC algorithm, what alternatives do we have? Below are the alternative GC algorithms available in OpenJDK:&lt;/p&gt;

&lt;p&gt;1. Serial GC&lt;br&gt;
2. Parallel GC&lt;br&gt;
3. G1 GC&lt;br&gt;
4. ZGC&lt;br&gt;
5. Shenandoah GC&lt;/p&gt;

&lt;p&gt;The Serial GC algorithm is useful only for single-threaded, desktop-style applications. Since this application has multiple concurrent threads with a very heavy object creation rate, we eliminated the Serial GC algorithm. Since the application was running on Java 8, we ruled out &lt;a href="https://blog.gceasy.io/2023/07/04/java-zgc-algorithm-tuning/"&gt;ZGC&lt;/a&gt; and &lt;a href="https://blog.gceasy.io/2023/11/18/shenandoah-gc-tuning/"&gt;Shenandoah GC&lt;/a&gt;, because they are stable only from Java 17 onwards. Thus, we were left with the choice of either Parallel GC or G1 GC.&lt;/p&gt;
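&lt;p&gt;For reference, each of these collectors is selected with a single JVM flag (on older JVMs, some also require -XX:+UnlockExperimentalVMOptions):&lt;/p&gt;

```plaintext
-XX:+UseSerialGC        (Serial GC)
-XX:+UseParallelGC      (Parallel GC)
-XX:+UseG1GC            (G1 GC)
-XX:+UseZGC             (ZGC)
-XX:+UseShenandoahGC    (Shenandoah GC)
```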

&lt;p&gt;We simulated production traffic volume in the performance lab and experimented with Parallel GC and G1 GC settings based on &lt;a href="https://www.youtube.com/watch?v=6G0E4O5yxks"&gt;GC tuning best practices&lt;/a&gt;. We found that Parallel GC pause times were not as bad as CMS, but still worse than G1 GC. Thus, we ended up switching from the CMS GC algorithm to the G1 GC algorithm. Here is the &lt;a href="https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjMvMTIvMjEvZ2MubG9nLS01LTIxLTU0&amp;amp;channel=WEB"&gt;GC log analysis report&lt;/a&gt; of this robotics application in the performance lab when using the G1 GC algorithm. Below is the GC pause duration graph when using G1 GC:&lt;/p&gt;
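&lt;p&gt;The switch itself is a small JVM argument change; the pause-time goal shown here is illustrative and should be tuned per application:&lt;/p&gt;

```plaintext
Before: -XX:+UseConcMarkSweepGC
After:  -XX:+UseG1GC -XX:MaxGCPauseMillis=200
```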

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xkojbgvqmjd18mczfjh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xkojbgvqmjd18mczfjh.PNG" alt="Image description" width="793" height="403"&gt;&lt;/a&gt;&lt;br&gt;
Fig 2: G1 GC pause time graph generated by GCeasy&lt;br&gt;
From the graph you can notice that the maximum GC pause time was 2.17 seconds, a phenomenal improvement over 5 minutes and 29 seconds. Also, the average GC pause time was only 198ms, far better than what the CMS GC algorithm delivered for this application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
After switching to the G1 GC algorithm, the application&#8217;s random slowdowns completely stopped. Thus, without major architectural changes, code refactoring, or JDK/infrastructure upgrades, just by tweaking the GC arguments in the JVM, we brought a significant performance optimization to this robotics application.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Spring RestTemplate to WebClient causes OutOfMemoryError</title>
      <dc:creator>Ram</dc:creator>
      <pubDate>Fri, 08 Mar 2024 05:42:45 +0000</pubDate>
      <link>https://dev.to/ram_lakshmanan_001/spring-resttemplate-to-webclient-causes-outofmemoryerror-ph9</link>
      <guid>https://dev.to/ram_lakshmanan_001/spring-resttemplate-to-webclient-causes-outofmemoryerror-ph9</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbsb2ktxxeokrupjsp6e.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbsb2ktxxeokrupjsp6e.PNG" alt="Image description" width="698" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://spring.io/projects/spring-boot/"&gt;Spring Boot&lt;/a&gt; is a highly popular framework for Java enterprise applications. One common method of integration with internal or external applications is through HTTP REST connections. We were upgrading from &lt;a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html"&gt;RestTemplate&lt;/a&gt; to the Java NIO-based &lt;a href="https://docs.spring.io/spring-framework/reference/web/webflux-webclient.html"&gt;WebClient&lt;/a&gt;, which can significantly enhance application performance by allowing concurrency when calling REST service endpoints. The benefits of  WebClients is as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Concurrency:&lt;/strong&gt; WebClient enables handling multiple connections simultaneously without blocking threads, leading to better concurrency.&lt;br&gt;
&lt;strong&gt;2. Asynchronous:&lt;/strong&gt; Asynchronous programming allows the application to perform other tasks while waiting for I/O operations to complete, improving overall efficiency.&lt;br&gt;
&lt;strong&gt;3. Performance:&lt;/strong&gt; Non-blocking I/O can manage more connections with fewer threads, reducing the resources required for handling concurrent requests.&lt;br&gt;
Although performance improved, with the same number of concurrent connections the WebClient was crashing with an OutOfMemoryError. In this post, we analyze the WebClient crash, along with how to troubleshoot and fix it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spring RestTemplate to WebClient Upgrade&lt;/strong&gt;&lt;br&gt;
To harness the benefits of NIO, such as concurrency and asynchronous processing, we upgraded the REST client call from Spring RestTemplate to WebClient, as shown below.&lt;/p&gt;

&lt;p&gt;Spring RestTemplate&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  public void restClientCall(Integer id, String url,String imagePath) {

        // Create RestTemplate instance
        RestTemplate restTemplate = new RestTemplate();

        // Prepare the image file
        File imageFile = new File(imagePath);

        // Prepare headers
    HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.MULTIPART_FORM_DATA);

        // Prepare the request body
    MultiValueMap&amp;lt;String, Object&amp;gt; body = new LinkedMultiValueMap&amp;lt;&amp;gt;();
        body.add("file", new org.springframework.core.io.FileSystemResource(imageFile));

        // Create the HTTP entity with headers and the multipart body
    HttpEntity&amp;lt;MultiValueMap&amp;lt;String, Object&amp;gt;&amp;gt; requestEntity = new HttpEntity&amp;lt;&amp;gt;(body, headers);

        System.out.println("Starting to post an image for Id"+id);

        // Perform the POST request
        ResponseEntity&amp;lt;String&amp;gt; responseEntity = restTemplate.postForEntity(url, requestEntity, String.class);

        // Print the response status code and body
        System.out.println("Response Id "+id +":"+ responseEntity.getBody());
    System.out.println(" Time: " + LocalTime.now());
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To the following Spring WebClient:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void webHeavyClientCall(Integer id,String url, String imagePath) {

    // Create a WebClient instance
    WebClient webClient = WebClient.create();

    // Prepare the image file
        File imageFile = new File(imagePath);

    // Perform the POST request with the image as a part of the request body
        MultiValueMap&amp;lt;String, Object&amp;gt; body = new LinkedMultiValueMap&amp;lt;&amp;gt;();
    body.add("file", new FileSystemResource(imageFile));
        System.out.println("Image upload started "+id);
        webClient.post().uri(url).contentType(MediaType.MULTIPART_FORM_DATA).body(BodyInserters.fromMultipartData(body)).retrieve().bodyToMono(String.class).subscribe(response -&amp;gt; {
           System.out.println("Response Id"+id+ ":" + response);
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;WebClient resulting in OutOfMemoryError&lt;/strong&gt;&lt;br&gt;
When we ran both programs on &lt;a href="https://openjdk.org/projects/jdk/11/"&gt;OpenJDK 11&lt;/a&gt;, the program using the NIO-based Spring WebClient failed with &#8216;java.lang.OutOfMemoryError: Direct buffer memory&#8217; after a few iterations, whereas the Spring RestTemplate based program completed successfully. Below is the output of the NIO-based Spring WebClient program; you can notice the &#8216;java.lang.OutOfMemoryError&#8217; reported.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Starting to post an image for Id0

Starting to post an image for Id1

Starting to post an image for Id2

Starting to post an image for Id3

Starting to post an image for Id4

Starting to post an image for Id5

Starting to post an image for Id6

Starting to post an image for Id7

Starting to post an image for Id8

Starting to post an image for Id9

Starting to post an image for Id10

Starting to post an image for Id11

Starting to post an image for Id12

Starting to post an image for Id13

Starting to post an image for Id14

2023-12-06 17:21:46.730  WARN 13804 --- [tor-http-nio-12] io.netty.util.concurrent.DefaultPromise  : An exception was thrown by reactor.ipc.netty.FutureMono$FutureSubscription.operationComplete()

reactor.core.Exceptions$ErrorCallbackNotImplemented: io.netty.channel.socket.ChannelOutputShutdownException: Channel output shutdown

Caused by: java.lang.OutOfMemoryError: Direct buffer memory

    at java.base/java.nio.Bits.reserveMemory(Bits.java:175) ~[na:na]

    at java.base/java.nio.DirectByteBuffer.&amp;lt;init&amp;gt;(DirectByteBuffer.java:118) ~[na:na]

    at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:318) ~[na:na]

    at java.base/sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:242) ~[na:na]

    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:164) ~[na:na]

    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:130) ~[na:na]

    at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496) ~[na:na]

    at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:418) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]

    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:934) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]

    ... 18 common frames omitted
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Troubleshooting ‘OutOfMemoryError: Direct buffer memory’&lt;/strong&gt;&lt;br&gt;
To troubleshoot this problem, we leveraged the yCrash monitoring tool. This tool is capable of &lt;a href="https://blog.ycrash.io/2023/12/04/what-is-micro-metrics-monitoring/"&gt;predicting outages before they surface&lt;/a&gt; in the production environment. Once it predicts an outage, it captures 360&#176; troubleshooting artifacts from your environment, analyzes them, and instantly generates a root cause analysis report. The artifacts it captures include the Garbage Collection log, thread dump, heap substitute, netstat, vmstat, iostat, top, top -H, dmesg, kernel parameters, disk usage, and more.&lt;/p&gt;

&lt;p&gt;You can &lt;a href="https://ycrash.io/yc-signup.jsp"&gt;register here&lt;/a&gt; and start using the free-tier of this tool.&lt;/p&gt;

&lt;p&gt;The yCrash server analyzed the Spring Boot REST client and provided clear indications of the issues, with recommendations. Below is the incident summary report that yCrash generated for the Spring Boot WebClient application. You can notice yCrash clearly pointing out the error with the necessary recommendations to remediate the problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz60lkeh6en40rgv1ns0l.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz60lkeh6en40rgv1ns0l.PNG" alt="Image description" width="744" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig 1: Incident Summary Report from yCrash&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Garbage Collection analysis Report&lt;/strong&gt;&lt;br&gt;
yCrash&#8217;s Garbage Collection (GC) analysis report revealed that Full GCs were running consecutively (see the screenshot below). When a GC runs, the entire application pauses and no transactions are processed, so the whole application becomes unresponsive. We observed this unresponsive behaviour just before the Spring Boot WebClient application crashed with the OutOfMemoryError.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4t9mv11dc8jr19qy4zhi.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4t9mv11dc8jr19qy4zhi.PNG" alt="Image description" width="745" height="408"&gt;&lt;/a&gt;&lt;br&gt;
Fig 2: yCrash report pointing out the Consecutive Full GC problem&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logs analysis reporting OutOfMemoryError: Direct buffer memory&lt;/strong&gt;&lt;br&gt;
yCrash&#8217;s application log analysis report revealed that the application was suffering from &#8216;java.lang.OutOfMemoryError: Direct buffer memory&#8217; (see the screenshot below), which was causing the application to crash.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fospc05bb2hotgt64at56.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fospc05bb2hotgt64at56.PNG" alt="Image description" width="743" height="449"&gt;&lt;/a&gt;&lt;br&gt;
Fig 3: yCrash log report pointing out java.lang.OutOfMemoryError: Direct buffer memory&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is Spring WebClient suffering from OutOfMemoryError?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwau540egswd37jywo7h.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnwau540egswd37jywo7h.PNG" alt="Image description" width="696" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig 4: RestTemplate Objects Stored in Others Region of Native Memory&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzcgdzrso93aty5by2j4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzcgdzrso93aty5by2j4.PNG" alt="Image description" width="698" height="294"&gt;&lt;/a&gt;&lt;br&gt;
Fig 5: WebClient Objects Stored in Direct Memory Region of Native Memory&lt;br&gt;
Spring WebClient is built on &lt;a href="https://docs.oracle.com/en/java/javase/21/core/java-nio.html"&gt;Java NIO&lt;/a&gt; technology. In Java NIO, buffers are allocated in the &#8216;Direct Buffer Memory&#8217; region of the JVM&#8217;s native memory, whereas RestTemplate objects are stored in the &#8216;others&#8217; region of the JVM&#8217;s native memory. There are different memory regions in the JVM; to learn about them, you may watch &lt;a href="https://www.youtube.com/watch?v=P3gFfPIN3sw"&gt;this video clip&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When we executed the above two programs, we had set the Direct Buffer Memory size to 200k (i.e. -XX:MaxDirectMemorySize=200k). This size was sufficient for Spring RestTemplate, because its objects were never stored in this region; on the other hand, it wasn&#8217;t sufficient for Spring WebClient. Thus Spring WebClient suffered from java.lang.OutOfMemoryError: Direct buffer memory.&lt;/p&gt;
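&lt;p&gt;The cap can be demonstrated in isolation with a minimal sketch (this standalone class is our illustration, not part of the article&#8217;s application). Each ByteBuffer.allocateDirect() call reserves native memory that is counted against -XX:MaxDirectMemorySize, so two 128 KB allocations exceed a 200k cap:&lt;/p&gt;

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Reserves 128 KB of native (off-heap) memory, counted against
        // -XX:MaxDirectMemorySize rather than the Java heap.
        ByteBuffer first = ByteBuffer.allocateDirect(128 * 1024);
        System.out.println("first buffer capacity: " + first.capacity());

        // When run with -XX:MaxDirectMemorySize=200k, this second 128 KB
        // request pushes the total past the cap and throws
        // 'java.lang.OutOfMemoryError: Direct buffer memory'.
        // With a larger (or default) limit, it succeeds.
        ByteBuffer second = ByteBuffer.allocateDirect(128 * 1024);
        System.out.println("second buffer capacity: " + second.capacity());
    }
}
```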

&lt;p&gt;&lt;strong&gt;Increasing -XX:MaxDirectMemorySize&lt;/strong&gt;&lt;br&gt;
After identifying the issue, we increased the direct memory size using the JVM argument -XX:MaxDirectMemorySize=1000k. After making this change, the Spring WebClient program ran without any issues, as the output below shows:&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id0&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id1&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id2&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id3&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id4&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id5&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id6&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id7&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id8&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id9&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id10&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id11&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id12&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id13&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id14&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id15&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id16&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id17&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id18&lt;/p&gt;

&lt;p&gt;Starting to post an image for Id19&lt;/p&gt;

&lt;p&gt;Response Id11:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id4:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id1:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id18:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id2:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id3:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id6:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id5:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id10:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id13:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id15:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id8:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id17:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id9:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id7:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id0:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id16:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id14:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id19:Image uploaded successfully!&lt;/p&gt;

&lt;p&gt;Response Id12:Image uploaded successfully!&lt;/p&gt;
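&lt;p&gt;For completeness, the fix was purely a launch-time flag change; the jar name below is illustrative:&lt;/p&gt;

```plaintext
java -XX:MaxDirectMemorySize=1000k -jar webclient-demo.jar
```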

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In this post, we discussed the OutOfMemoryError we faced when upgrading from Spring RestTemplate to the Java NIO-based WebClient, the diagnostic approach we took, and the resolution to the problem. Hopefully you found it useful.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
