<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ryan Zhi</title>
    <description>The latest articles on DEV Community by Ryan Zhi (@ryan_zhi).</description>
    <link>https://dev.to/ryan_zhi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2736292%2Fe3a4acbd-8023-4990-8dab-307a0e40fd99.jpg</url>
      <title>DEV Community: Ryan Zhi</title>
      <link>https://dev.to/ryan_zhi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ryan_zhi"/>
    <language>en</language>
    <item>
      <title>Detailed Steps of JVM Object Creation Using the new Keyword in Java</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Tue, 11 Feb 2025 03:45:11 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/detailed-steps-of-jvm-object-creation-using-the-new-keyword-in-java-3mgm</link>
      <guid>https://dev.to/ryan_zhi/detailed-steps-of-jvm-object-creation-using-the-new-keyword-in-java-3mgm</guid>
      <description>&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Class Loading Check&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Triggering Class Loading:&lt;/strong&gt;
If the target class has not yet been loaded, the JVM initiates the class loading process using a ClassLoader. This process typically involves the following phases:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Loading:&lt;/strong&gt; Reads the bytecode from a file or network and converts it into JVM-internal data structures.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verification:&lt;/strong&gt; Ensures the bytecode complies with the JVM specifications to maintain security and integrity.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preparation:&lt;/strong&gt; Allocates memory for class variables and sets their default initial values.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resolution:&lt;/strong&gt; Converts symbolic references in the class into direct references. (Note that resolution may be delayed until the first time the class is used.)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Initialization:&lt;/strong&gt; Executes the class initialization method (&lt;code&gt;&amp;lt;clinit&amp;gt;&lt;/code&gt;) to initialize static variables and run static initializer blocks.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;After these steps, the class metadata (methods, fields, inheritance details, etc.) is loaded into the Method Area (or Metaspace), ensuring that the JVM can correctly refer to the class information during object creation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Memory Allocation&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Allocation Strategies:&lt;/strong&gt;
The JVM allocates memory for the new object on the heap using different strategies depending on the state of the memory:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bump-the-Pointer:&lt;/strong&gt;
This strategy is used when the heap is contiguous and well-organized. A pointer is simply moved to allocate a continuous block of memory.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free List:&lt;/strong&gt;
When the heap is fragmented, the JVM maintains a free list—a list of available memory blocks—and searches for a block that is large enough for the new object.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent Optimization:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CAS (Compare and Swap):&lt;/strong&gt;
In multi-threaded environments, CAS operations ensure that updates to the heap pointer are atomic. This prevents race conditions by allowing only one thread to successfully update the pointer at a time.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TLAB (Thread-Local Allocation Buffer):&lt;/strong&gt;
Each thread is provided with its own private memory buffer. This reduces contention between threads when allocating memory. However, if an object is too large (e.g., a large array), it might bypass the TLAB and be allocated directly in the shared heap space (such as the old generation).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Memory Space Initialization (Initialization to Zero Values)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once memory is allocated, the JVM initializes the entire memory region to zero (default values). This step is critical to ensure that all fields of the object start with a known state:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Primitive Types:&lt;/strong&gt;
For example, &lt;code&gt;int&lt;/code&gt; is set to 0, &lt;code&gt;boolean&lt;/code&gt; to false, &lt;code&gt;long&lt;/code&gt; to 0L, &lt;code&gt;float&lt;/code&gt; and &lt;code&gt;double&lt;/code&gt; to 0.0, and &lt;code&gt;char&lt;/code&gt; to &lt;code&gt;'\u0000'&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reference Types:&lt;/strong&gt;
All reference variables are initialized to &lt;code&gt;null&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;This initialization prevents the object from containing random or undefined values before the explicit initialization code runs.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Object Header Setup&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Structure of the Object Header:&lt;/strong&gt;
The object header typically consists of two main components:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mark Word:&lt;/strong&gt;
This field stores runtime data about the object, such as its lock state (e.g., no lock, biased lock, lightweight lock, heavyweight lock), GC generation age, and a lazily computed hashcode (if &lt;code&gt;hashCode()&lt;/code&gt; is invoked).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Klass Pointer:&lt;/strong&gt;
This pointer references the class metadata in the Method Area, enabling the JVM to determine the object's type.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Special Considerations:&lt;/strong&gt;
For array objects, the header also includes the length of the array.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Executing the Instance Initialization Method (&lt;code&gt;&amp;lt;init&amp;gt;&lt;/code&gt; Method)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Constructor Invocation:&lt;/strong&gt;
After setting up the memory and object header, the JVM calls the instance initialization method (the constructor, denoted by &lt;code&gt;&amp;lt;init&amp;gt;&lt;/code&gt;) of the object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Initialization Process:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default Initialization:&lt;/strong&gt;
The fields are already set to their default (zero) values.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explicit Field Initialization:&lt;/strong&gt;
The JVM then applies any explicit field assignments provided in the class definition.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Initialization Blocks:&lt;/strong&gt;
Any non-static initialization blocks are executed.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Constructor Body:&lt;/strong&gt;
Finally, the code inside the constructor is executed to perform any additional setup required by the program.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inheritance Handling:&lt;/strong&gt;
The initialization follows the order dictated by inheritance: the constructor of the parent class is executed first, followed by the child's constructor.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Once the &lt;code&gt;&amp;lt;init&amp;gt;&lt;/code&gt; method completes, the object is fully initialized and ready for use.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
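
&lt;p&gt;The ordering described in step 5 can be observed directly. Below is a minimal, self-contained sketch (class names are illustrative) that prints the full sequence: parent static block, child static block, parent instance block, parent constructor, child instance block, child constructor.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;// Demonstrates &amp;lt;clinit&amp;gt; (static) vs. &amp;lt;init&amp;gt; (instance) ordering across a hierarchy.
class Parent {
    static { System.out.println("1. Parent static block (&amp;lt;clinit&amp;gt;)"); }
    { System.out.println("3. Parent instance block"); }
    Parent() { System.out.println("4. Parent constructor"); }
}

class Child extends Parent {
    static { System.out.println("2. Child static block (&amp;lt;clinit&amp;gt;)"); }
    { System.out.println("5. Child instance block"); }
    Child() { System.out.println("6. Child constructor"); }
}

public class InitOrder {
    public static void main(String[] args) {
        new Child(); // class loading and &amp;lt;clinit&amp;gt; run first, then the &amp;lt;init&amp;gt; chain
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;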




&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
When you invoke the &lt;code&gt;new&lt;/code&gt; keyword in Java, the JVM executes the following sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Class Loading Check:&lt;/strong&gt; Loads, verifies, prepares, resolves, and initializes the class if it isn’t already loaded.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Allocation:&lt;/strong&gt; Allocates heap memory using strategies such as bump-the-pointer or free list, optimized for multi-threaded environments using CAS and TLAB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Initialization:&lt;/strong&gt; Initializes the allocated memory to default zero values.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object Header Setup:&lt;/strong&gt; Configures the object's header with the Mark Word and Klass Pointer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance Initialization:&lt;/strong&gt; Executes the &lt;code&gt;&amp;lt;init&amp;gt;&lt;/code&gt; method, following the proper initialization order, including parent class initialization.&lt;/li&gt;
&lt;/ol&gt;




</description>
    </item>
    <item>
      <title>Performance Optimization for Java &amp; MySQL: A Comprehensive Guide</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Fri, 07 Feb 2025 08:54:23 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/performance-optimization-for-java-mysql-a-comprehensive-guide-5b5</link>
      <guid>https://dev.to/ryan_zhi/performance-optimization-for-java-mysql-a-comprehensive-guide-5b5</guid>
      <description>&lt;p&gt;When working with MySQL in your Java applications, there are several layers at which you can optimize performance. In this post, I’ll cover key areas—from the database to your code, connection management, and even hardware/network considerations—to help you squeeze out every bit of performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  I. Database-Level Optimizations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Index Optimization
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Create the Right Indexes&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Indexes are the cornerstone of fast queries. You should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Index Frequently Queried Columns:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If you often filter on fields in the WHERE clause—like a username in a users table—ensure you have an index on that column.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Design Composite Indexes Carefully:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
For example, if you frequently query an orders table by both order date and order amount, consider a composite index such as:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;  &lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;idx_order_date_amount&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order_date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;order_amount&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Order matters here—place the most commonly filtered column first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Avoid Over-Indexing&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
While indexes speed up reads, they can slow down writes (inserts, updates, and deletes) because the indexes need to be maintained. Avoid adding indexes to columns that rarely appear in query conditions or have low cardinality (e.g., a gender field with only “M” and “F”).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regular Index Maintenance&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Over time, indexes can become fragmented. Running commands like &lt;code&gt;OPTIMIZE TABLE&lt;/code&gt; periodically can help keep your indexes performing well—but schedule these during off-peak hours if your table is large.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Query Optimization
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Write Efficient Queries&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Avoid &lt;code&gt;SELECT *&lt;/code&gt;:&lt;/strong&gt;
Specify only the columns you need. For instance, if you only require the username and email, use:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;  &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Proper Joins:&lt;/strong&gt;
When joining multiple tables, make sure you join on indexed columns to avoid full table scans.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;  &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;order_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;
  &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;
  &lt;span class="k"&gt;JOIN&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="n"&gt;u&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Use Views and Stored Procedures When Appropriate&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
For complex queries, consider encapsulating the logic in a view or stored procedure. This can reduce round trips between your application and the database and allow MySQL to optimize the execution plan better.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Table Design Optimization
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Table Partitioning &amp;amp; Splitting&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Horizontal Splitting:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If you have a table with massive amounts of data, consider splitting it by ranges (e.g., by date or by user region).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vertical Splitting:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Separate infrequently accessed columns into another table, reducing the size of the primary table.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Choose Appropriate Data Types&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Opt for data types that match your data. For example, for binary states (yes/no), use &lt;code&gt;TINYINT&lt;/code&gt; rather than &lt;code&gt;VARCHAR&lt;/code&gt;. For dates, use &lt;code&gt;DATE&lt;/code&gt; or &lt;code&gt;DATETIME&lt;/code&gt; rather than storing dates as strings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partition Large Tables&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If you have enormous tables, consider using MySQL’s partitioning feature. For example, partitioning a log table by date means that queries for a specific date only scan a single partition rather than the entire table.&lt;/p&gt;
&lt;h2&gt;
  
  
  II. Code-Level Optimizations
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Reduce Unnecessary Queries
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Cache Query Results&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
For data that doesn’t change often (e.g., configuration settings or lookup tables), cache the results in memory. Tools like Ehcache or Redis can significantly reduce load on your database.&lt;/p&gt;
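
&lt;p&gt;As a sketch of the cache-aside pattern (an in-process &lt;code&gt;ConcurrentHashMap&lt;/code&gt; stands in here for Ehcache or Redis, and &lt;code&gt;loadFromDatabase&lt;/code&gt; is a hypothetical DAO call):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConfigCache {
    private final Map&amp;lt;String, String&amp;gt; cache = new ConcurrentHashMap&amp;lt;&amp;gt;();

    public String get(String key) {
        // computeIfAbsent queries the database only on a cache miss
        return cache.computeIfAbsent(key, this::loadFromDatabase);
    }

    private String loadFromDatabase(String key) {
        // hypothetical JDBC/DAO lookup for rarely-changing data
        return "value-for-" + key;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;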

&lt;p&gt;&lt;strong&gt;Batch Your Queries&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
When you need to retrieve multiple records, avoid looping over single queries. Instead, use batch queries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="k"&gt;IN&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
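
&lt;p&gt;The same idea applies on the write side. A hedged JDBC sketch (assuming a &lt;code&gt;connection&lt;/code&gt; and a &lt;code&gt;users&lt;/code&gt; list are in scope; the table and getter names are illustrative) that replaces N round trips with one batch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;String sql = "INSERT INTO users (username, email) VALUES (?, ?)";
try (PreparedStatement pstmt = connection.prepareStatement(sql)) {
    for (User user : users) {
        pstmt.setString(1, user.getUsername());
        pstmt.setString(2, user.getEmail());
        pstmt.addBatch();       // queue the row client-side
    }
    pstmt.executeBatch();       // send the whole batch in one round trip
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;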



&lt;h3&gt;
  
  
  2. Use Transactions Wisely
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Keep Transactions Short&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Limit the scope of your transactions to the minimum necessary work. This reduces lock contention and improves overall throughput.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select the Right Isolation Level&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
MySQL’s default REPEATABLE READ isolation level may be overkill in some cases. If your application can tolerate it, consider lowering the isolation level to READ COMMITTED to improve performance.&lt;/p&gt;
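
&lt;p&gt;A hedged JDBC sketch of both points, keeping the transaction short and lowering the isolation level (assumes a &lt;code&gt;dataSource&lt;/code&gt;, an &lt;code&gt;amount&lt;/code&gt;, and an &lt;code&gt;accountId&lt;/code&gt; are in scope; the table is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;try (Connection conn = dataSource.getConnection()) {
    conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    conn.setAutoCommit(false);           // begin the transaction
    try (PreparedStatement debit = conn.prepareStatement(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
        debit.setBigDecimal(1, amount);
        debit.setLong(2, accountId);
        debit.executeUpdate();
        conn.commit();                   // commit as soon as the work is done
    } catch (SQLException e) {
        conn.rollback();                 // release locks promptly on failure
        throw e;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;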
&lt;h3&gt;
  
  
  3. Use Prepared Statements
&lt;/h3&gt;

&lt;p&gt;Prepared statements not only help prevent SQL injection attacks but also improve performance by reusing the compiled SQL execution plan. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;sql&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"INSERT INTO users (username, email) VALUES (?, ?)"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="nc"&gt;PreparedStatement&lt;/span&gt; &lt;span class="n"&gt;pstmt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;prepareStatement&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;pstmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"username"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;pstmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setString&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"email"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;pstmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;executeUpdate&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  III. Connection Management
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Use a Connection Pool
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Choosing a Connection Pool&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Popular Java connection pools include DBCP, C3P0, and HikariCP. HikariCP is known for its high performance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;HikariConfig&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HikariConfig&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setJdbcUrl&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"jdbc:mysql://localhost:3306/mydb"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setUsername&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"root"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setPassword&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"password"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="nc"&gt;HikariDataSource&lt;/span&gt; &lt;span class="n"&gt;dataSource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HikariDataSource&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configure Pool Parameters&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tweak parameters like &lt;code&gt;maximumPoolSize&lt;/code&gt; (the maximum number of connections) and &lt;code&gt;idleTimeout&lt;/code&gt; (to release idle connections) based on your workload and server capabilities.&lt;/p&gt;
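
&lt;p&gt;Continuing the &lt;code&gt;config&lt;/code&gt; object from the snippet above (the values are illustrative starting points, not recommendations):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;config.setMaximumPoolSize(20);        // upper bound on open connections
config.setMinimumIdle(5);             // connections kept warm while idle
config.setIdleTimeout(600_000);       // ms before an idle connection is retired
config.setConnectionTimeout(30_000);  // ms to wait for a free connection
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;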
&lt;h3&gt;
  
  
  2. Properly Use and Release Connections
&lt;/h3&gt;

&lt;p&gt;Always close your connections after use to prevent leaks. The try-with-resources statement in Java is a great way to ensure this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Connection&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dataSource&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getConnection&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
     &lt;span class="nc"&gt;Statement&lt;/span&gt; &lt;span class="n"&gt;statement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;createStatement&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
     &lt;span class="nc"&gt;ResultSet&lt;/span&gt; &lt;span class="n"&gt;resultSet&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;statement&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;executeQuery&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"SELECT * FROM users"&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resultSet&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;next&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Process results&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;SQLException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;printStackTrace&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  IV. Hardware and Network Optimizations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Hardware Upgrades
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increase Memory:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
More RAM means MySQL can cache more data, reducing the need to hit disk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use SSDs:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
SSDs offer much faster read/write speeds compared to traditional HDDs, cutting down I/O bottlenecks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Network Optimization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduce Latency:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Ensure your database and application servers are on a fast, reliable network. Consider co-locating them or using a high-speed network setup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proper Port Configuration:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Make sure that MySQL is configured on a dedicated port with the proper firewall settings to minimize network contention.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>In-depth Interview Questions on Distributed Locks and Thread Pools</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Wed, 05 Feb 2025 13:02:09 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/in-depth-interview-questions-on-distributed-locks-and-thread-pools-5a9</link>
      <guid>https://dev.to/ryan_zhi/in-depth-interview-questions-on-distributed-locks-and-thread-pools-5a9</guid>
      <description>&lt;h1&gt;
  
  
  Distributed Locks
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Distributed Lock Principles
&lt;/h2&gt;

&lt;p&gt;Distributed locks are typically implemented on Redis with a Lua script to guarantee atomicity. The script first checks whether the lock key exists; if not, it sets the lock. The underlying data structure is a hash: the main key is the business key (which we need to design), the field is "UUID:threadId", and the initial value is 1.&lt;br&gt;
For mutual exclusion: if thread 1 already holds the lock, thread 2 executes the same Lua script, finds that the key exists (both threads use the same lock key), and then checks whether the hash contains thread 2's identifier. Since it does not, the script returns the remaining TTL of thread 1's lock, and thread 2 retries in a loop until the lock becomes available.&lt;br&gt;
For reentrant locking the logic is the same, except that the thread already holding the lock passes the check, and its counter in the hash is incremented by 1 (via hincrby).&lt;br&gt;
Releasing the lock via unlock() decrements the holding thread's counter in the hash; when the counter reaches 0, the key is deleted.&lt;br&gt;
Redisson's implementation adds a watchdog mechanism so the lock is not released prematurely while the task is still running. This is a background task that, after the lock is acquired successfully, records the holding thread in a map and then, every 10 seconds, checks whether each recorded thread still holds its lock key (by iterating the thread IDs in the map and querying Redis); if so, it extends the key's lifetime.&lt;br&gt;
If the service crashes, the watchdog disappears with it, the key's expiration is no longer extended, and the key expires after the default 30 seconds, allowing other threads to acquire the lock.&lt;/p&gt;
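
&lt;p&gt;For reference, a minimal Redisson usage sketch of the API described above (the address and key name are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class LockDemo {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("order:12345"); // the business key
        lock.lock();          // no leaseTime, so the watchdog keeps renewing the TTL
        try {
            // critical section
        } finally {
            lock.unlock();    // decrements the reentrancy counter; deletes the key at 0
            redisson.shutdown();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;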

&lt;h2&gt;
  
  
  Optimizing Distributed Locks
&lt;/h2&gt;

&lt;p&gt;To reduce how often the distributed lock is taken, or to improve performance under concurrency, consider issuing tokens with expiration times and broadcasting token updates through a message queue (MQ), keeping the distributed lock only as a fallback path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling Redis Failures
&lt;/h2&gt;

&lt;p&gt;If Redis goes down while a Java thread holds a Redisson distributed lock, several issues may arise:&lt;br&gt;
The lock may be lost, especially in a single-instance or master-slave architecture.&lt;br&gt;
The lock may not be renewed if the watchdog mechanism fails.&lt;br&gt;
The lock may be released if it expires before Redis restarts.&lt;br&gt;
Solutions include enabling Redis persistence, setting reasonable lock expiration times, using high-availability architectures like Redis Sentinel or Redis Cluster, and implementing the Redlock algorithm across multiple Redis instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Designing Distributed Lock Keys
&lt;/h2&gt;

&lt;p&gt;When using Redisson's lock, the key is a fixed business string plus a unique token.&lt;/p&gt;

&lt;h1&gt;
  
  
  Thread Pools and Multithreading
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Thread Pool Execution Process
&lt;/h2&gt;

&lt;p&gt;The execute() method proceeds in stages: if fewer than corePoolSize threads are running, it starts a new core thread for the task; otherwise it offers the task to the work queue; if the queue is full, it creates non-core threads up to maximumPoolSize; and if the pool is already at maximum capacity with a full queue, the task is handed to the rejection policy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Parameters of Thread Pools
&lt;/h2&gt;

&lt;p&gt;The constructor of a thread pool includes parameters like corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, and handler.&lt;/p&gt;
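
&lt;p&gt;A sketch wiring those parameters together (the sizes and queue choice are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4,                                         // corePoolSize
        8,                                         // maximumPoolSize
        60L, TimeUnit.SECONDS,                     // keepAliveTime + unit
        new ArrayBlockingQueue&amp;lt;&amp;gt;(100),        // workQueue (bounded)
        Executors.defaultThreadFactory(),          // threadFactory
        new ThreadPoolExecutor.CallerRunsPolicy()  // handler (rejection policy)
);
pool.execute(() -&amp;gt; System.out.println("task on " + Thread.currentThread().getName()));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;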

&lt;h2&gt;
  
  
  Rejection Policies of Thread Pools
&lt;/h2&gt;

&lt;p&gt;Rejection policies include AbortPolicy, CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Queue Types for Thread Pools
&lt;/h2&gt;

&lt;p&gt;Supported queue types include ArrayBlockingQueue, LinkedBlockingQueue, PriorityBlockingQueue, and DelayQueue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Usage Scenarios for Thread Pools
&lt;/h2&gt;

&lt;p&gt;Thread pools are used for quick responses and high throughput, such as parallel service calls or processing message queues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization and Considerations for Thread Pools
&lt;/h2&gt;

&lt;p&gt;Avoid using Executors factory methods, handle task exceptions properly, and monitor thread pool performance to adjust parameters and prevent overload.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lock Mechanisms in Thread Pools
&lt;/h2&gt;

&lt;p&gt;Internally, thread pools use locks like mainLock to protect shared resources and Worker thread locks to manage thread interruption status.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extending and Customizing Thread Pools
&lt;/h2&gt;

&lt;p&gt;You can implement the ThreadFactory interface to customize thread creation, and implement RejectedExecutionHandler to define custom rejection policies based on business needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-creation of Core Threads
&lt;/h2&gt;

&lt;p&gt;By default, core threads are created on demand when tasks are submitted. However, you can pre-create core threads using methods like prestartCoreThread() or prestartAllCoreThreads().&lt;/p&gt;

&lt;h2&gt;
  
  
  Idle Core Thread Destruction
&lt;/h2&gt;

&lt;p&gt;Core threads remain alive until the pool is shut down. If allowCoreThreadTimeOut is set to true, idle core threads may be destroyed after keepAliveTime.&lt;/p&gt;
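
&lt;p&gt;Both behaviors from the last two sections are single calls on ThreadPoolExecutor (reusing the pool from the sketch above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;pool.prestartAllCoreThreads();      // eagerly start all core threads; returns the number started
pool.allowCoreThreadTimeOut(true);  // let idle core threads terminate after keepAliveTime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;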

&lt;h2&gt;
  
  
  State of Core and Non-Core Threads When Idle
&lt;/h2&gt;

&lt;p&gt;Core threads wait for new tasks, while non-core threads wait for a certain time (keepAliveTime) and are destroyed if no new tasks arrive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging and Measuring Task Execution Time
&lt;/h2&gt;

&lt;p&gt;You can wrap tasks with FutureTask to add logging logic or override ThreadPoolExecutor's beforeExecute and afterExecute methods to perform actions before and after task execution.&lt;/p&gt;
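
&lt;p&gt;For example, a subclass that measures per-task wall time through those hooks (the logging here is a plain println):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.concurrent.*;

class TimedExecutor extends ThreadPoolExecutor {
    private final ThreadLocal&amp;lt;Long&amp;gt; start = new ThreadLocal&amp;lt;&amp;gt;();

    TimedExecutor(int core, int max, long keepAlive, TimeUnit unit, BlockingQueue&amp;lt;Runnable&amp;gt; queue) {
        super(core, max, keepAlive, unit, queue);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        start.set(System.nanoTime());   // runs in the worker thread, just before the task
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        long elapsedMs = (System.nanoTime() - start.get()) / 1_000_000;
        System.out.println("task took " + elapsedMs + " ms");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;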

&lt;h2&gt;
  
  
  Dynamic Core Pool Size Adjustment
&lt;/h2&gt;

&lt;p&gt;The core pool size can be dynamically adjusted using the setCorePoolSize(int newCorePoolSize) method. If the new size is larger, new core threads will be created; if smaller, excess threads may be terminated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Behavior When corePoolSize Is Increased
&lt;/h2&gt;

&lt;p&gt;If the corePoolSize is increased and there are tasks in the queue, new core threads will be created to handle the tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling Idle Threads with keepAliveTime
&lt;/h2&gt;

&lt;p&gt;The keepAliveTime is not enforced by a separate timer task but inside the Worker's task-fetching loop: getTask() polls the work queue with a timeout of keepAliveTime, and if no task arrives within that window, getTask() returns null and runWorker() lets the thread exit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use of NANOSECONDS in workQueue.poll
&lt;/h2&gt;

&lt;p&gt;The workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) method uses NANOSECONDS for precision. Although the time unit can be specified when creating the thread pool, it is internally converted to nanoseconds for consistent time management.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Key Concepts in Operating Systems</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Sat, 01 Feb 2025 16:06:11 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/key-concepts-in-operating-systems-a7g</link>
      <guid>https://dev.to/ryan_zhi/key-concepts-in-operating-systems-a7g</guid>
      <description>&lt;p&gt;Operating systems are the backbone of modern computing, managing resources and providing a seamless user experience. In this blog post, we'll delve into some of the most critical concepts in operating systems that every developer should understand. We'll cover processes and threads, virtual memory, I/O multiplexing, inter-process communication (IPC), process synchronization, and process scheduling. Let's dive in!&lt;/p&gt;

&lt;h1&gt;
  
  
  1. Processes vs. Threads
&lt;/h1&gt;

&lt;p&gt;Processes&lt;br&gt;
Definition: A process is the basic unit of resource allocation in an operating system. It has its own independent address space, global variables, and stack.&lt;br&gt;
Characteristics:&lt;br&gt;
Processes are independent of each other, meaning they don't interfere with one another.&lt;br&gt;
Inter-process communication (IPC) is relatively complex and typically involves mechanisms like pipes, message queues, or shared memory.&lt;br&gt;
Threads&lt;br&gt;
Definition: A thread is the basic unit of CPU scheduling and usually resides within a process.&lt;br&gt;
Characteristics:&lt;br&gt;
Threads share resources within the same process, such as memory address space and global variables.&lt;br&gt;
Communication between threads is simpler and can be achieved through shared memory and semaphores.&lt;/p&gt;

&lt;h1&gt;
  
  
  2. Virtual Memory
&lt;/h1&gt;

&lt;p&gt;Virtual memory is a crucial memory management mechanism provided by operating systems. It allows programs to behave as if they have access to a large, contiguous memory space, even though physical memory is limited. Virtual memory achieves this by transparently moving data between physical RAM and disk storage as needed.&lt;br&gt;
How Virtual Memory Works&lt;br&gt;
Virtual memory uses the Memory Management Unit (MMU) to map virtual addresses to physical addresses. The address space is divided into fixed-size pages, and physical memory is divided into corresponding page frames. The mapping between virtual and physical addresses is managed by the operating system's page tables.&lt;br&gt;
Address Space Layout&lt;br&gt;
The virtual memory address space typically consists of:&lt;br&gt;
Text Segment: Stores the program code.&lt;br&gt;
Data Segment: Stores global and static variables and initialized data.&lt;br&gt;
Heap: Used for dynamic memory allocation (e.g., via malloc or new).&lt;br&gt;
Stack: Stores local variables and function call parameters, supporting recursion.&lt;br&gt;
Memory-Mapped Area: Used for loading shared libraries (e.g., .so files) or mapping files.&lt;br&gt;
Paging and Page Tables&lt;br&gt;
Virtual memory uses pages and page frames to map virtual addresses to physical addresses. The virtual address space is divided into pages (e.g., 4KB each), and the physical memory is divided into page frames of the same size. The page table keeps track of the mapping between virtual and physical addresses.&lt;br&gt;
Demand Paging and Page Replacement&lt;br&gt;
Demand Paging: The operating system loads pages into memory on-demand rather than loading the entire program at startup. When a program accesses a page not in memory, a page fault occurs, and the OS loads the page from disk.&lt;br&gt;
Page Replacement Algorithms: When physical memory is insufficient, the OS must swap out some pages to disk. Common algorithms include:&lt;br&gt;
Least Recently Used (LRU): Replaces the least recently used page.&lt;br&gt;
First-In-First-Out (FIFO): Replaces the oldest page in memory.&lt;br&gt;
Optimal: Replaces the page that will not be used for the longest time in the future (a theoretical baseline; impractical because it requires knowing future accesses).&lt;br&gt;
Advantages of Virtual Memory&lt;br&gt;
Isolation and Protection: Each process has its own virtual address space, preventing interference between processes. The OS can also set permissions (read-only, writable, etc.) to protect memory regions.&lt;br&gt;
Scalability: Programs can use a larger address space than the physical memory available. Modern 64-bit systems can theoretically access TBs of virtual memory.&lt;br&gt;
Sharing and Dynamic Linking: Multiple processes can share the same library files, reducing memory usage.&lt;br&gt;
Flexible Memory Management: The OS can dynamically allocate and swap memory, allowing programs to behave as if they have continuous memory.&lt;br&gt;
Disadvantages of Virtual Memory&lt;br&gt;
Performance Overhead: Address translation and page faults can be costly, especially when loading data from disk.&lt;br&gt;
High Hardware Requirements: Virtual memory requires hardware support, such as an MMU, which is not available on some older or embedded systems.&lt;br&gt;
Applications of Virtual Memory&lt;br&gt;
Virtual memory is essential in modern operating systems (Windows, Linux, macOS) and is crucial for multitasking, supporting large-scale applications, and efficient memory management.&lt;/p&gt;

&lt;h1&gt;
  
  
  3. I/O Multiplexing
&lt;/h1&gt;

&lt;p&gt;I/O multiplexing is a technique that allows a single thread or process to handle multiple I/O operations simultaneously. It is particularly useful in high-concurrency systems, enabling efficient resource utilization and reducing CPU idle time.&lt;br&gt;
Why I/O Multiplexing?&lt;br&gt;
Traditional I/O operations are blocking, meaning the program waits for the I/O to complete before continuing. This is inefficient for applications like web servers or databases that handle many concurrent connections. I/O multiplexing allows a single thread to monitor multiple I/O channels and handle them when they become ready.&lt;br&gt;
How I/O Multiplexing Works&lt;br&gt;
The basic idea is that a single process or thread uses system calls to monitor multiple I/O channels (e.g., file descriptors). When an I/O channel is ready, the OS notifies the application, which then processes the I/O operation.&lt;br&gt;
Implementations of I/O Multiplexing&lt;br&gt;
Select:&lt;br&gt;
Mechanism: Select monitors multiple file descriptors by placing them in a set. It blocks until one or more descriptors are ready for I/O.&lt;br&gt;
Pros: Cross-platform support (Linux, Windows).&lt;br&gt;
Cons: Limited to a small number of file descriptors (typically 1024) and can be inefficient with large numbers of descriptors.&lt;br&gt;
Poll:&lt;br&gt;
Mechanism: Similar to select but uses an array of file descriptors, allowing it to handle more descriptors.&lt;br&gt;
Pros: Supports more file descriptors.&lt;br&gt;
Cons: Still inefficient with very large numbers of descriptors.&lt;br&gt;
Epoll (Linux-specific):&lt;br&gt;
Mechanism: Epoll is an event-driven mechanism that uses a kernel-maintained event list. It notifies the application only when a file descriptor is ready, making it highly efficient.&lt;br&gt;
Pros: Highly scalable and suitable for high-concurrency applications like web servers.&lt;br&gt;
Cons: Limited to Linux.&lt;br&gt;
Pros and Cons of I/O Multiplexing&lt;br&gt;
Pros:&lt;br&gt;
Reduces the number of threads or processes, minimizing creation and destruction overhead.&lt;br&gt;
Efficient resource utilization and support for high concurrency.&lt;br&gt;
Cons:&lt;br&gt;
Complex programming model, requiring careful management of file descriptors and event callbacks.&lt;br&gt;
Potential performance bottlenecks in extreme cases.&lt;br&gt;
Blocking nature of underlying system calls like select and poll.&lt;br&gt;
Summary&lt;br&gt;
I/O multiplexing is a powerful technique for handling high-concurrency applications. While select and poll are suitable for smaller-scale applications, epoll is the go-to choice for large-scale, high-performance systems.&lt;/p&gt;
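
&lt;p&gt;In Java, this model is exposed through NIO's Selector (backed by epoll on Linux). A minimal single-threaded accept loop as a sketch (the port is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.net.InetSocketAddress;
import java.nio.channels.*;
import java.util.Iterator;

public class MultiplexServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                   // non-blocking mode is required
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                             // blocks until some channel is ready
            Iterator&amp;lt;SelectionKey&amp;gt; keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // a single thread services every ready connection here
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;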

&lt;h1&gt;
  
  
  4. Inter-Process Communication (IPC)
&lt;/h1&gt;

&lt;p&gt;IPC is the mechanism by which different processes exchange data. Common IPC methods include:&lt;br&gt;
Pipes: Allow communication between parent and child processes, with unidirectional data flow.&lt;br&gt;
Named Pipes: Similar to pipes but have a name in the file system, enabling communication between unrelated processes.&lt;br&gt;
Message Queues: Allow multiple processes to send and receive messages in a FIFO order.&lt;br&gt;
Shared Memory: Multiple processes can map to the same physical memory region for direct data sharing.&lt;br&gt;
Semaphores: Used for synchronizing access to shared resources.&lt;/p&gt;
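
&lt;p&gt;Of these, shared memory is the one Java exposes most directly, via memory-mapped files. A hedged sketch (the file path is illustrative; a second process mapping the same file sees the same bytes):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMemoryWriter {
    public static void main(String[] args) throws Exception {
        try (FileChannel channel = FileChannel.open(
                Path.of("/tmp/ipc-region"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map 4 KB of the file; the OS backs the mapping with shared page frames
            MappedByteBuffer region = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            region.putInt(0, 42);  // visible to any other process mapping the same file
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;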

&lt;h1&gt;
  
  
  5. Process Synchronization
&lt;/h1&gt;

&lt;p&gt;Process synchronization ensures that multiple processes access shared resources without conflicts. Common synchronization mechanisms include:&lt;br&gt;
Mutexes: Ensure exclusive access to a shared resource by one process at a time.&lt;br&gt;
Semaphores: Control access to shared resources using counters.&lt;br&gt;
Read-Write Locks: Allow multiple readers but exclusive access for writers.&lt;br&gt;
Condition Variables: Allow processes to wait for certain conditions to be met, often used with mutexes.&lt;/p&gt;
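
&lt;p&gt;Java's java.util.concurrent package mirrors these mechanisms for threads. A compact sketch of a mutex paired with a condition variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class Gate {
    private final ReentrantLock mutex = new ReentrantLock();
    private final Condition opened = mutex.newCondition();
    private boolean isOpen = false;

    public void awaitOpen() throws InterruptedException {
        mutex.lock();
        try {
            while (!isOpen) {       // re-check the condition to guard against spurious wakeups
                opened.await();     // atomically releases the mutex while waiting
            }
        } finally {
            mutex.unlock();
        }
    }

    public void open() {
        mutex.lock();
        try {
            isOpen = true;
            opened.signalAll();     // wake every waiting thread
        } finally {
            mutex.unlock();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;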

&lt;h1&gt;
  
  
  6. Process Scheduling
&lt;/h1&gt;

&lt;p&gt;Process scheduling is a core function of operating systems, managing how multiple processes share the CPU. The goal is to allocate CPU time fairly, maximize system throughput, and minimize response times.&lt;br&gt;
Process States&lt;br&gt;
New: The process is just created and not yet scheduled.&lt;br&gt;
Ready: The process is loaded into memory and waiting for CPU time.&lt;br&gt;
Running: The process is currently executing on the CPU.&lt;br&gt;
Blocked: The process is waiting for an event (e.g., I/O completion) and cannot execute.&lt;br&gt;
Terminated: The process has completed execution or been terminated.&lt;br&gt;
Scheduling Queues&lt;br&gt;
Ready Queue: Contains all ready-to-run processes.&lt;br&gt;
Blocked Queue: Contains processes waiting for events.&lt;br&gt;
Scheduling Algorithms&lt;br&gt;
First Come, First Served (FCFS):&lt;br&gt;
Mechanism: Processes are executed in the order they arrive.&lt;br&gt;
Pros: Simple to implement.&lt;br&gt;
Cons: Poor fairness and can lead to long waiting times for short jobs.&lt;br&gt;
Shortest Job First (SJF):&lt;br&gt;
Mechanism: The shortest job is executed first.&lt;br&gt;
Pros: Minimizes average waiting time.&lt;br&gt;
Cons: Requires knowing job lengths in advance and can cause starvation for long jobs.&lt;br&gt;
Round Robin (RR):&lt;br&gt;
Mechanism: Each process gets a fixed time slice, and the CPU is switched to the next process when the time slice expires.&lt;br&gt;
Pros: Fairness and good response time for interactive systems.&lt;br&gt;
Cons: Performance can degrade with improper time slice settings.&lt;br&gt;
Priority Scheduling:&lt;br&gt;
Mechanism: Processes with higher priority are executed first.&lt;br&gt;
Pros: Suitable for real-time systems.&lt;br&gt;
Cons: Can cause starvation for low-priority processes.&lt;br&gt;
Multilevel Feedback Queue:&lt;br&gt;
Mechanism: Processes are assigned to different queues with varying priorities and scheduling algorithms.&lt;br&gt;
Pros: High flexibility and adaptability.&lt;br&gt;
Cons: Complex implementation.&lt;br&gt;
Performance Metrics&lt;br&gt;
Average Waiting Time: The average time a process spends in the ready queue.&lt;br&gt;
Average Turnaround Time: The total time from process submission to completion.&lt;br&gt;
CPU Utilization: The percentage of time the CPU is busy.&lt;br&gt;
Throughput: The number of processes completed per unit time.&lt;br&gt;
Response Time: The time from user request to system response.&lt;br&gt;
Summary&lt;br&gt;
Process scheduling is a critical component of operating systems, impacting performance, fairness, and user experience. Choosing the right scheduling algorithm depends on system requirements, process types, and user needs. Understanding these concepts is essential for developers working on system-level applications or performance optimization.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Computer Networking: Key Interview Questions and Learning Points</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Wed, 29 Jan 2025 01:23:06 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/computer-networking-key-interview-questions-and-learning-points-4onf</link>
      <guid>https://dev.to/ryan_zhi/computer-networking-key-interview-questions-and-learning-points-4onf</guid>
      <description>&lt;h3&gt;
  
  
  Understanding the TCP/IP Model and Common Protocols
&lt;/h3&gt;

&lt;p&gt;The TCP/IP model, a foundational concept for internet communication, consists of four layers, each serving a distinct purpose in the transmission of data over the network. Let’s break down the protocols associated with each layer and explore their functionalities.&lt;/p&gt;

&lt;h4&gt;
  
  
  Application Layer Protocols
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;HTTP (Hypertext Transfer Protocol)&lt;/strong&gt;: The standard protocol used for transferring web pages and resources over the internet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SMTP (Simple Mail Transfer Protocol)&lt;/strong&gt;: Used for sending emails between servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POP3/IMAP (Post Office Protocol/Internet Message Access Protocol)&lt;/strong&gt;: Protocols for retrieving emails from a server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FTP (File Transfer Protocol)&lt;/strong&gt;: Used for transferring files between client and server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Telnet&lt;/strong&gt;: A protocol for remote command-line access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH (Secure Shell Protocol)&lt;/strong&gt;: A secure version of Telnet, providing encrypted communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RTP (Real-Time Transport Protocol)&lt;/strong&gt;: Used for real-time communication, such as audio and video streaming.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS (Domain Name System)&lt;/strong&gt;: A system that translates domain names into IP addresses.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Transport Layer Protocols
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TCP (Transmission Control Protocol)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides reliable data transmission by ensuring data packets are delivered in the correct order.&lt;/li&gt;
&lt;li&gt;Features like &lt;strong&gt;flow control&lt;/strong&gt;, &lt;strong&gt;congestion control&lt;/strong&gt;, and &lt;strong&gt;error detection&lt;/strong&gt; make TCP reliable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TCP Segments&lt;/strong&gt;: Data is segmented, numbered, and acknowledged, ensuring data integrity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;UDP (User Datagram Protocol)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unlike TCP, UDP is &lt;strong&gt;connectionless&lt;/strong&gt;, offering faster transmission at the cost of reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDT&lt;/strong&gt; (Reliable Data Transfer) protocols can be built on top of UDP to handle reliability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Network Layer Protocols
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;IP (Internet Protocol)&lt;/strong&gt;: The core protocol responsible for addressing and routing data packets across networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ARP (Address Resolution Protocol)&lt;/strong&gt;: Resolves IP addresses to MAC addresses in local networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ICMP (Internet Control Message Protocol)&lt;/strong&gt;: Used for sending control messages like error reporting (e.g., "ping" command).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NAT (Network Address Translation)&lt;/strong&gt;: Translates private IP addresses into public IP addresses and vice versa.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OSPF (Open Shortest Path First)&lt;/strong&gt;: A link-state routing protocol used in large enterprise networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RIP (Routing Information Protocol)&lt;/strong&gt;: A distance-vector routing protocol, often used in smaller networks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BGP (Border Gateway Protocol)&lt;/strong&gt;: Used for routing data between different networks, particularly in large-scale networks like the internet.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Link Layer Protocols
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Error Detection&lt;/strong&gt;: Protocols ensure that data received is error-free, typically using checksums or CRCs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiplexing&lt;/strong&gt;: Technologies that enable multiple communications to share a single transmission medium.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CSMA/CD (Carrier Sense Multiple Access with Collision Detection)&lt;/strong&gt;: A protocol used in Ethernet networks to handle data collisions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MAC (Media Access Control)&lt;/strong&gt;: Ensures that data is properly addressed and delivered on a physical network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ethernet&lt;/strong&gt;: The most common local area network (LAN) technology.&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  HTTP Protocols Breakdown
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Request and Response Messages
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request Message&lt;/strong&gt;: Contains a &lt;strong&gt;request line&lt;/strong&gt; (method, URL, HTTP version), &lt;strong&gt;headers&lt;/strong&gt; (additional information like Host, User-Agent), and an optional &lt;strong&gt;body&lt;/strong&gt; (data for methods like POST).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Response Message&lt;/strong&gt;: Contains a &lt;strong&gt;status line&lt;/strong&gt; (HTTP version, status code, status message), &lt;strong&gt;headers&lt;/strong&gt;, and an optional &lt;strong&gt;body&lt;/strong&gt; (data like HTML or JSON content).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Common HTTP Status Codes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;2xx&lt;/strong&gt;: Successful requests (e.g., &lt;strong&gt;200 OK&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3xx&lt;/strong&gt;: Redirection (e.g., &lt;strong&gt;301 Moved Permanently&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4xx&lt;/strong&gt;: Client error (e.g., &lt;strong&gt;404 Not Found&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5xx&lt;/strong&gt;: Server error (e.g., &lt;strong&gt;500 Internal Server Error&lt;/strong&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  HTTP Methods:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GET&lt;/strong&gt;: Retrieves data from the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST&lt;/strong&gt;: Sends data to the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PUT&lt;/strong&gt;: Updates existing data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DELETE&lt;/strong&gt;: Removes data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HEAD&lt;/strong&gt;: Fetches headers without the body.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  GET vs POST:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GET&lt;/strong&gt; is used to fetch resources, typically data retrieval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;POST&lt;/strong&gt; is used to send data to the server, typically for creating or updating resources.&lt;/li&gt;
&lt;/ul&gt;
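
&lt;p&gt;With the HttpClient built into Java 11+, the difference shows up directly in the request builder (the URL and payload are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetVsPost {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: retrieve a resource, no request body
        HttpRequest get = HttpRequest.newBuilder(URI.create("https://example.com/users/1"))
                .GET()
                .build();

        // POST: send data to create or update a resource
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://example.com/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"username\":\"ryan\"}"))
                .build();

        HttpResponse&amp;lt;String&amp;gt; resp = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.statusCode());  // e.g., 200
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;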

&lt;h4&gt;
  
  
  HTTP Connections:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short Connection&lt;/strong&gt;: Each request/response cycle requires a new TCP connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long Connection&lt;/strong&gt;: A persistent TCP connection is used to send multiple requests and responses, reducing overhead (enabled by &lt;strong&gt;Keep-Alive&lt;/strong&gt;).&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  HTTP/1.1 vs HTTP/2 vs HTTP/3
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;HTTP/1.1&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses &lt;strong&gt;long connections&lt;/strong&gt; and &lt;strong&gt;pipelining&lt;/strong&gt; for performance improvement but faces &lt;strong&gt;head-of-line blocking&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;HTTP/2&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Header Compression&lt;/strong&gt; (HPACK) reduces redundancy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binary Protocol&lt;/strong&gt; improves parsing efficiency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiplexing&lt;/strong&gt; enables parallel data transfer to avoid blocking.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;HTTP/3&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built on &lt;strong&gt;QUIC&lt;/strong&gt;, uses &lt;strong&gt;UDP&lt;/strong&gt; for faster connection establishment.&lt;/li&gt;
&lt;li&gt;Eliminates &lt;strong&gt;head-of-line blocking&lt;/strong&gt; and allows &lt;strong&gt;connection migration&lt;/strong&gt; for seamless transitions between networks.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h3&gt;
  
  
  HTTPS (Secure HTTP)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Port&lt;/strong&gt;: HTTP uses &lt;strong&gt;port 80&lt;/strong&gt;, while HTTPS uses &lt;strong&gt;port 443&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encryption&lt;/strong&gt;: HTTPS uses SSL/TLS to encrypt data, ensuring secure communication between client and server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: Requires a &lt;strong&gt;digital certificate&lt;/strong&gt; from a trusted Certificate Authority (CA).&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  TCP Overview
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Why TCP Requires Three-Way Handshake:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prevents Duplicate Connections&lt;/strong&gt;: Ensures old or duplicate connection requests don't interfere with the new connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synchronizes Sequence Numbers&lt;/strong&gt;: Both ends agree on the initial sequence numbers for reliable communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Efficiency&lt;/strong&gt;: Avoids wasting resources by establishing a connection only when both ends are ready.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  TCP’s Reliability:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sequence Numbers&lt;/strong&gt;: Each byte of data has a unique sequence number to ensure data is in the correct order.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Acknowledgments&lt;/strong&gt;: The receiver sends back an acknowledgment to the sender confirming data receipt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timeouts and Retransmissions&lt;/strong&gt;: Ensures reliability by resending data if acknowledgment isn’t received.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flow Control&lt;/strong&gt;: Prevents congestion by regulating the data rate based on receiver’s capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Congestion Control&lt;/strong&gt;: Reduces transmission rate if network congestion is detected.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  TCP vs UDP:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TCP&lt;/strong&gt;: Reliable, connection-oriented, guarantees delivery, in-order delivery, and error-checking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UDP&lt;/strong&gt;: Faster, connectionless, no guarantees, better for real-time applications.&lt;/li&gt;
&lt;/ul&gt;
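
&lt;p&gt;A compact way to feel the difference from Java (host and ports are placeholders; port 9 is the legacy "discard" service):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.Socket;

public class TcpVsUdp {
    public static void main(String[] args) throws Exception {
        // TCP: new Socket(...) performs the three-way handshake before returning
        try (Socket tcp = new Socket("example.com", 80)) {
            tcp.getOutputStream().write("HEAD / HTTP/1.0\r\n\r\n".getBytes());
            System.out.println("first response byte: " + tcp.getInputStream().read());
        }

        // UDP: no handshake and no delivery guarantee; the datagram is simply sent
        try (DatagramSocket udp = new DatagramSocket()) {
            byte[] data = "ping".getBytes();
            udp.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("example.com"), 9));
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;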




&lt;h3&gt;
  
  
  DNS (Domain Name System)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;DNS&lt;/strong&gt; translates human-readable domain names into IP addresses. It’s a distributed system that enables clients to access resources using domain names, such as "&lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt;", instead of having to remember numeric IP addresses.&lt;/p&gt;

&lt;h4&gt;
  
  
  DNS Resolution Process:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Client queries local DNS for the IP address of a domain.&lt;/li&gt;
&lt;li&gt;If not cached, the local DNS queries the &lt;strong&gt;root&lt;/strong&gt; DNS server.&lt;/li&gt;
&lt;li&gt;The root DNS directs to the &lt;strong&gt;TLD&lt;/strong&gt; server (e.g., for &lt;code&gt;.com&lt;/code&gt; domains).&lt;/li&gt;
&lt;li&gt;The TLD server directs to the &lt;strong&gt;authoritative&lt;/strong&gt; DNS server for the domain.&lt;/li&gt;
&lt;li&gt;The authoritative DNS responds with the IP address.&lt;/li&gt;
&lt;li&gt;The local DNS returns the IP to the client.&lt;/li&gt;
&lt;/ol&gt;
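
&lt;p&gt;In Java, this entire chain hides behind a single library call (the hostname is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.net.InetAddress;

public class DnsLookup {
    public static void main(String[] args) throws Exception {
        // Triggers the resolution chain above: local cache, then root, TLD,
        // and authoritative servers as needed
        InetAddress addr = InetAddress.getByName("www.example.com");
        System.out.println(addr.getHostAddress());
    }
}
&lt;/code&gt;&lt;/pre&gt;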




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;TCP/IP protocols are integral to how the internet operates, with each layer providing distinct services and ensuring that data is transmitted reliably, securely, and efficiently. Understanding these protocols is key for building, maintaining, and troubleshooting network applications.&lt;/p&gt;




&lt;p&gt;Let me know if you'd like to dive deeper into any of these topics!&lt;/p&gt;

&lt;p&gt;#Networking #TCPIP #WebProtocols #HTTP #DNS #TCP #UDP #HTTP2 #HTTP3 #Security #TechPost&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Learning Virtual Threads in Java</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Mon, 27 Jan 2025 13:20:47 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/learning-virtual-threads-in-java-2ijb</link>
      <guid>https://dev.to/ryan_zhi/learning-virtual-threads-in-java-2ijb</guid>
      <description>&lt;h3&gt;
  
  
  What are Virtual Threads?
&lt;/h3&gt;

&lt;p&gt;Virtual threads, introduced as a preview in Java 19 and officially released in Java 21 (September 2023), are lightweight threads that are managed by the JVM, not the operating system. They are similar to goroutines in Go.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Were Virtual Threads Introduced?
&lt;/h3&gt;

&lt;p&gt;The main issue that virtual threads solve is improving CPU utilization for I/O-bound tasks. Traditional multi-threading in Java can lead to inefficient CPU usage when a thread is waiting for I/O operations, such as network communication or file reading. For example, if a thread is waiting for data from the network or a disk, it blocks and doesn't do anything else, causing idle CPU time.&lt;/p&gt;

&lt;p&gt;Virtual threads address this by allowing threads to "yield" and do other work when blocked on I/O, releasing the CPU and allowing other tasks to execute. Once the I/O operation is complete, the virtual thread resumes. This greatly increases CPU utilization because the thread doesn't sit idle while waiting for I/O operations to complete.&lt;/p&gt;

&lt;p&gt;However, virtual threads don't increase the actual number of available CPU threads. Instead, they improve the efficiency of the threads by allowing them to be used more effectively. For CPU-intensive tasks like mathematical calculations, virtual threads behave similarly to traditional threads. But if your workload involves significant I/O blocking, virtual threads can provide a notable performance boost.&lt;/p&gt;
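
&lt;p&gt;As a minimal sketch (Java 21 API), creating a virtual thread looks like creating any other thread; the blocking sleep below releases the underlying carrier thread instead of pinning an OS thread:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.time.Duration;

public class VirtualThreadHello {
    public static void main(String[] args) throws InterruptedException {
        // Thread.ofVirtual() is the Java 21 factory for virtual threads
        Thread vt = Thread.ofVirtual().start(() -&amp;gt; {
            try {
                Thread.sleep(Duration.ofMillis(100)); // blocking here frees the carrier thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("ran on: " + Thread.currentThread());
        });
        vt.join();
    }
}
&lt;/code&gt;&lt;/pre&gt;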

&lt;h3&gt;
  
  
  How Do Virtual Threads Work?
&lt;/h3&gt;

&lt;p&gt;Virtual threads are mapped to platform threads, which can be understood as platform threads being like electrical sockets that only allow one device to be plugged in. Virtual threads are like power strips, allowing multiple threads to share the same platform thread without blocking it when inactive. This metaphor, while not perfectly accurate, illustrates the idea of multiple virtual threads being managed on a single platform thread.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Concepts:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduling Queue&lt;/strong&gt;: Virtual threads are managed and scheduled by the JVM using a scheduling queue, which tracks all the suspended virtual threads. The queue ensures that threads are resumed in the order they were suspended.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduler&lt;/strong&gt;: The scheduler manages platform threads and assigns virtual threads to them when they are available. The default scheduler in Java uses a FIFO-based ForkJoinPool, which uses a "work-stealing" algorithm to maximize throughput.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Execution Flow:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Creation&lt;/strong&gt;: A virtual thread is created and placed in the scheduling queue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mounting&lt;/strong&gt;: The JVM's scheduler selects a platform thread and "mounts" a virtual thread onto it, allowing it to run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution&lt;/strong&gt;: The virtual thread runs its task until it encounters a blocking operation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unmounting&lt;/strong&gt;: If the thread becomes blocked, it is "unmounted" from the platform thread, which is then free to execute other tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suspension&lt;/strong&gt;: The blocked thread is suspended by the JVM and placed back into the scheduling queue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resumption&lt;/strong&gt;: Once the blocking condition is cleared (e.g., I/O completes), the thread is resumed and mounted back onto a platform thread.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Completion&lt;/strong&gt;: The virtual thread continues execution until it completes, at which point it is garbage collected when there are no more references to it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Continuations: The Core of Virtual Threads
&lt;/h3&gt;

&lt;p&gt;The concept of &lt;strong&gt;continuations&lt;/strong&gt; is crucial for virtual threads. A continuation is a data structure that holds the state of a task and allows it to be paused and resumed. This enables the JVM to suspend and resume tasks on virtual threads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When a virtual thread needs to block (e.g., during I/O operations), the JVM uses a continuation to suspend the thread.&lt;/li&gt;
&lt;li&gt;When the I/O completes, the JVM uses the continuation to resume the thread from where it was suspended.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The platform thread that is executing a virtual thread is also called the &lt;strong&gt;carrier thread&lt;/strong&gt;. The JVM schedules virtual threads onto carrier threads in a way that allows for efficient resource usage. When a virtual thread is mounted onto a carrier thread, the thread's stack data (in the form of continuation frames) is copied into the carrier thread's stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mounting and Unmounting Virtual Threads
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mounting&lt;/strong&gt;: The process of assigning a virtual thread to a carrier platform thread. This involves copying the virtual thread's continuation stack frames to the carrier thread's stack, effectively transferring the virtual thread's execution to the platform thread.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unmounting&lt;/strong&gt;: When a virtual thread becomes blocked, it is "unmounted" from the platform thread, and its continuation data is often left on the heap (in memory) to be resumed later.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Takeaways:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Virtual threads provide lightweight task management within the JVM, enabling applications to handle large numbers of concurrent I/O-bound tasks efficiently.&lt;/li&gt;
&lt;li&gt;They allow a single platform thread to carry out the work of many virtual threads, reducing the overhead typically associated with creating and managing threads.&lt;/li&gt;
&lt;li&gt;Virtual threads are ideal for I/O-bound workloads (e.g., network requests, file I/O) but may not be as beneficial for CPU-bound tasks due to the limited number of platform threads available.&lt;/li&gt;
&lt;li&gt;Continuations handle the suspension and resumption of tasks, providing a way for the JVM to manage the execution of virtual threads across platform threads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;I/O-Intensive Tasks&lt;/strong&gt;: Virtual threads are perfect for handling a large number of I/O operations, such as handling HTTP requests or processing file operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrency in High-Load Applications&lt;/strong&gt;: Virtual threads can help manage thousands or even millions of concurrent tasks without the memory and CPU overhead of traditional threads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Async Programming&lt;/strong&gt;: Virtual threads can simplify asynchronous programming by making it look and behave more like traditional synchronous code, improving readability and reducing complexity.&lt;/li&gt;
&lt;/ul&gt;
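
&lt;p&gt;A sketch of the I/O-intensive case, assuming the Java 21 API: the executor below creates one virtual thread per task, so ten thousand concurrent one-second "network calls" run without ten thousand OS threads:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class ManyTasks {
    public static void main(String[] args) {
        // One virtual thread per submitted task (Java 21)
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i -&amp;gt; executor.submit(() -&amp;gt; {
                Thread.sleep(Duration.ofSeconds(1)); // stands in for a blocking network call
                return i;
            }));
        } // close() waits for all submitted tasks to complete
    }
}
&lt;/code&gt;&lt;/pre&gt;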

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Virtual threads represent a powerful tool in Java for optimizing resource usage in highly concurrent, I/O-heavy applications. By efficiently handling blocking I/O operations and allowing threads to be more lightweight, they make it easier to scale applications while avoiding the overhead of managing a massive number of platform threads. However, virtual threads are not a silver bullet for every kind of task—particularly for CPU-bound tasks, where traditional threading models still hold an advantage.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Thread Pools and Thread Management in Java</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Sun, 26 Jan 2025 15:19:27 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/understanding-thread-pools-and-thread-management-in-java-4j6a</link>
      <guid>https://dev.to/ryan_zhi/understanding-thread-pools-and-thread-management-in-java-4j6a</guid>
<description>




&lt;h2&gt;
  
  
  Understanding Thread Pools and Thread Management in Java
&lt;/h2&gt;

&lt;p&gt;Thread pools are an essential tool in Java programming, providing a way to manage and reuse threads efficiently, which helps improve performance by avoiding the overhead of constantly creating and destroying threads. Let’s break down the key concepts and working principles of thread pools, thread management, and related mechanisms in Java.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thread Pool Basics
&lt;/h3&gt;

&lt;p&gt;A thread pool is a collection of threads that are pre-created and managed for executing tasks, reducing the cost of creating new threads frequently. The core elements of thread pools include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Pool Size&lt;/strong&gt;: The minimum number of threads that are always running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maximum Pool Size&lt;/strong&gt;: The maximum number of threads the pool can accommodate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep-Alive Time&lt;/strong&gt;: The time a thread is allowed to be idle before it is terminated.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Work Queue&lt;/strong&gt;: A queue that holds tasks before they are executed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rejection Policy&lt;/strong&gt;: What happens when tasks are submitted but the pool is full.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s a basic flow of how a thread pool operates:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Thread Creation&lt;/strong&gt;: Threads are created only when tasks are submitted (though you can pre-create them using &lt;code&gt;prestartAllCoreThreads&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Queuing&lt;/strong&gt;: If the core threads are busy, tasks are added to the work queue instead of creating new threads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expanding Threads&lt;/strong&gt;: If the work queue is full, the pool will create new threads (up to the maximum pool size).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rejection Handling&lt;/strong&gt;: When both the work queue and pool size limits are reached, a rejection policy kicks in.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thread Termination&lt;/strong&gt;: Threads that are idle longer than the keep-alive time will be terminated unless they are core threads. Core threads won’t be terminated unless &lt;code&gt;allowCoreThreadTimeOut&lt;/code&gt; is set to &lt;code&gt;true&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
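
&lt;p&gt;Here is a minimal sketch of wiring those parameters together with ThreadPoolExecutor (the sizes, queue capacity, and rejection policy below are illustrative choices, not recommendations):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSetup {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                              // core pool size
                8,                              // maximum pool size
                60, TimeUnit.SECONDS,           // keep-alive time for non-core threads
                new ArrayBlockingQueue&amp;lt;&amp;gt;(100),  // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy

        pool.submit(() -&amp;gt;
                System.out.println("task ran on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
&lt;/code&gt;&lt;/pre&gt;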

&lt;h3&gt;
  
  
  Non-Core Threads and Their Reclamation
&lt;/h3&gt;

&lt;p&gt;Non-core threads are typically added to handle burst loads. When idle, they are reclaimed if they don't receive new tasks within the keep-alive period. This reclamation happens via the timed &lt;code&gt;poll()&lt;/code&gt; call on the work queue: when &lt;code&gt;poll(keepAliveTime, unit)&lt;/code&gt; times out without finding a task, it returns &lt;code&gt;null&lt;/code&gt;, and the worker thread terminates.&lt;/p&gt;

&lt;h4&gt;
  
  
  Can Core Threads Be Reclaimed?
&lt;/h4&gt;

&lt;p&gt;By default, core threads are not reclaimed. However, if you set &lt;code&gt;allowCoreThreadTimeOut = true&lt;/code&gt;, they will be reclaimed once they exceed the keep-alive time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thread Blocking and Idle States
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blocking States&lt;/strong&gt;: Threads enter a blocking state when waiting on synchronized methods or blocking operations like &lt;code&gt;sleep()&lt;/code&gt;, &lt;code&gt;wait()&lt;/code&gt;, or &lt;code&gt;join()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Core Threads in Idle State&lt;/strong&gt;: Core threads typically block on the &lt;code&gt;take()&lt;/code&gt; method of the work queue when waiting for tasks. Non-core threads, on the other hand, use &lt;code&gt;poll()&lt;/code&gt; and will be terminated if idle for too long.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Stopping Threads
&lt;/h3&gt;

&lt;p&gt;Threads can be interrupted or stopped in various ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;interrupt()&lt;/code&gt;&lt;/strong&gt;: Marks a thread for interruption. The thread must cooperate by checking its interrupted status and handling &lt;code&gt;InterruptedException&lt;/code&gt; to stop gracefully.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;isInterrupted()&lt;/code&gt;&lt;/strong&gt;: Checks if a thread has been interrupted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;stop()&lt;/code&gt;&lt;/strong&gt;: This method has been deprecated as it can lead to inconsistent thread state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;Thread.interrupted()&lt;/code&gt;&lt;/strong&gt;: Checks whether the current thread has been interrupted and clears the interrupt flag as a side effect.&lt;/li&gt;
&lt;/ul&gt;
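
&lt;p&gt;A small sketch of the cooperative-stopping pattern described above (&lt;code&gt;doUnitOfWork&lt;/code&gt; is a hypothetical placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class StoppableWorker implements Runnable {
    @Override
    public void run() {
        // Cooperative stopping: check the flag and handle InterruptedException
        while (!Thread.currentThread().isInterrupted()) {
            try {
                doUnitOfWork();
                Thread.sleep(100); // blocking call: throws if interrupted while sleeping
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the flag, then exit
                break;
            }
        }
    }

    private void doUnitOfWork() {
        // placeholder for real work
    }
}
&lt;/code&gt;&lt;/pre&gt;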

&lt;h3&gt;
  
  
  Virtual Threads: A New Era in Concurrency
&lt;/h3&gt;

&lt;p&gt;With JDK 19 (Preview) and officially in JDK 21, Java introduced &lt;strong&gt;Virtual Threads&lt;/strong&gt;, which are lightweight threads managed by the JVM. Unlike traditional OS-level threads, virtual threads don’t map directly to native OS threads, reducing the performance overhead associated with context switching.&lt;/p&gt;

&lt;h4&gt;
  
  
  Virtual Thread vs. Traditional Thread
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Traditional Threads&lt;/strong&gt;: Each maps directly to an OS-level thread.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Threads&lt;/strong&gt;: Managed by the JVM and are more lightweight, allowing large numbers to be created without heavy resource consumption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Virtual threads are ideal for I/O-bound tasks, where threads spend a lot of time waiting for external operations (e.g., network calls, database queries). They’re less suitable for CPU-bound tasks due to their dependency on physical threads for execution.&lt;/p&gt;

&lt;h4&gt;
  
  
  Benefits of Virtual Threads
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency for I/O-bound Tasks&lt;/strong&gt;: Virtual threads can handle a large number of concurrent I/O operations with minimal overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lightweight&lt;/strong&gt;: Much faster to create and switch between than traditional threads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Scheduling&lt;/strong&gt;: The JVM can manage their scheduling, reducing the need for manual thread pool management.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, for CPU-bound tasks, the use of virtual threads might result in resource contention because virtual threads share physical threads, which are limited by CPU cores.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why ThreadLocal Uses Weak References
&lt;/h3&gt;

&lt;p&gt;In Java, &lt;code&gt;ThreadLocal&lt;/code&gt; is used for storing data that is specific to the current thread. Each thread holds a &lt;code&gt;ThreadLocalMap&lt;/code&gt; whose keys are &lt;strong&gt;weak references&lt;/strong&gt; to the &lt;code&gt;ThreadLocal&lt;/code&gt; objects themselves. When a &lt;code&gt;ThreadLocal&lt;/code&gt; is no longer strongly referenced anywhere else, its key can be garbage collected and the map can purge the stale entry during later operations. Without weak references, a long-lived pool thread would pin every &lt;code&gt;ThreadLocal&lt;/code&gt; it ever touched in memory, even after the code that used it has finished, which could lead to memory leaks.&lt;/p&gt;

&lt;p&gt;This purging is only best-effort, though: a stale entry's value lingers until the map happens to clean it up. Since pooled threads outlive individual tasks, &lt;strong&gt;it's still good practice to call &lt;code&gt;remove()&lt;/code&gt; manually&lt;/strong&gt; to ensure resources are cleared promptly, as in the sketch below.&lt;/p&gt;
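
&lt;p&gt;A minimal sketch of that practice (&lt;code&gt;RequestContext&lt;/code&gt; and its field are made-up illustrations):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class RequestContext {
    private static final ThreadLocal&amp;lt;String&amp;gt; USER = new ThreadLocal&amp;lt;&amp;gt;();

    public static void handle(String userId, Runnable task) {
        USER.set(userId);
        try {
            task.run(); // anything on this thread can read RequestContext.current()
        } finally {
            USER.remove(); // critical in pools: the worker thread outlives the request
        }
    }

    public static String current() {
        return USER.get();
    }
}
&lt;/code&gt;&lt;/pre&gt;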




&lt;h3&gt;
  
  
  In Conclusion
&lt;/h3&gt;

&lt;p&gt;Thread management is a critical aspect of writing efficient concurrent programs in Java. Understanding thread pools, virtual threads, and related mechanisms like &lt;code&gt;ThreadLocal&lt;/code&gt; is essential for writing scalable, high-performance applications. Virtual threads open new possibilities, especially for handling large numbers of I/O-bound tasks without overwhelming system resources. &lt;/p&gt;

&lt;p&gt;As always, choosing the right concurrency model (traditional thread pools vs virtual threads) depends on the specific use case: I/O-bound tasks are perfect for virtual threads, but CPU-bound tasks are better off using traditional threads to avoid context-switching overhead.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>In-Depth Study of ZGC (Z Garbage Collector)</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Sat, 25 Jan 2025 14:42:45 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/in-depth-study-of-zgc-z-garbage-collector-2lo</link>
      <guid>https://dev.to/ryan_zhi/in-depth-study-of-zgc-z-garbage-collector-2lo</guid>
      <description>&lt;p&gt;Java 11 introduced the &lt;strong&gt;Epsilon&lt;/strong&gt; garbage collector, which is used in scenarios where no garbage collection is needed. It controls memory allocation but does not perform any garbage collection tasks. Once the heap memory is exhausted, the JVM shuts down directly. Additionally, &lt;strong&gt;ZGC&lt;/strong&gt; (Z Garbage Collector) was introduced as an experimental garbage collector in Java 11, and &lt;strong&gt;Shenandoah GC&lt;/strong&gt; was introduced in Java 12 as an experimental version, which became production-ready by Java 15. &lt;strong&gt;Java 17&lt;/strong&gt; marked the official LTS release with these enhancements.&lt;/p&gt;




&lt;h3&gt;
  
  
  1. What is ZGC?
&lt;/h3&gt;

&lt;p&gt;ZGC (Z Garbage Collector) is a low-latency garbage collector in Java, specifically designed to handle large heap sizes, ranging from hundreds of MBs to 16 TB. It achieves this by using &lt;strong&gt;concurrent marking&lt;/strong&gt; and &lt;strong&gt;concurrent relocation&lt;/strong&gt; techniques to keep garbage collection pauses extremely low, typically under 1 millisecond, regardless of the heap size. ZGC incorporates several innovative technologies like &lt;strong&gt;colored pointers&lt;/strong&gt;, &lt;strong&gt;load barriers&lt;/strong&gt;, and &lt;strong&gt;memory multi-mapping&lt;/strong&gt; to manage memory efficiently and perform garbage collection.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The Three-Color Marking Algorithm
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;Three-Color Marking&lt;/strong&gt; algorithm is used in garbage collection to track live objects in memory. It categorizes objects into three colors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;White&lt;/strong&gt;: The initial state, representing unvisited objects. These objects will be collected after marking is complete.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gray&lt;/strong&gt;: Objects that have been visited but whose references have not yet been visited. Gray objects are in a transitional state during the marking process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Black&lt;/strong&gt;: Objects that have been visited, along with all of their references; they are confirmed live and will survive the collection.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. ZGC Use Cases
&lt;/h3&gt;

&lt;p&gt;ZGC is particularly useful in the following scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Large Memory Applications&lt;/strong&gt;: Such as real-time data analytics, high-performance servers, and online transaction systems, which need to handle TB-sized heap memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low Latency Requirements&lt;/strong&gt;: Ideal for applications that require real-time responses, like high-frequency trading systems and online games. ZGC’s low pause times (typically less than 1ms) meet these needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Computing Platforms&lt;/strong&gt;: ZGC’s scalability and low-latency characteristics make it well-suited for managing shared resources in multi-tenant cloud environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. How ZGC Works
&lt;/h3&gt;

&lt;p&gt;ZGC’s garbage collection process includes several phases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Initial Mark&lt;/strong&gt;: A quick marking phase where all the live roots are identified.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent Marking&lt;/strong&gt;: Concurrently marks live objects in the heap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remark&lt;/strong&gt;: Another short phase where the system reconciles any discrepancies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent Relocation&lt;/strong&gt;: Moves the live objects and compacts the heap while the application continues to run.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. ZGC's Current Challenges
&lt;/h3&gt;

&lt;p&gt;Java has introduced a &lt;strong&gt;generational ZGC&lt;/strong&gt; in recent updates (JDK 21). Before that, ZGC differed from traditional garbage collectors like &lt;strong&gt;CMS&lt;/strong&gt; or &lt;strong&gt;G1&lt;/strong&gt; in that it had no concept of generations at all; it manages the heap with regions similar to G1's. ZGC's regions come in three sizes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Small Region&lt;/strong&gt;: 2MB in size, holds objects smaller than 256KB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium Region&lt;/strong&gt;: 32MB in size, for objects between 256KB and 4MB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large Region&lt;/strong&gt;: A flexible size (must be a multiple of 2MB), used for large objects (4MB+). These regions are never relocated, because copying large objects is very expensive.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. ZGC's Pause Time Characteristics
&lt;/h3&gt;

&lt;p&gt;Unlike CMS, where the entire garbage collection phase is &lt;strong&gt;Stop-The-World (STW)&lt;/strong&gt;, ZGC is designed to be almost entirely concurrent. ZGC has three &lt;strong&gt;STW phases&lt;/strong&gt;: Initial Mark, Remark, and Initial Relocation. Most of the work is done concurrently, and pauses are not dependent on the heap size or the number of active objects. This ensures that ZGC achieves its goal of minimizing pauses.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. ZGC Configuration Options
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;-Xms -Xmx&lt;/strong&gt;: Set the minimum and maximum heap sizes; pinning both to the same value (e.g., 10GB) avoids heap-resizing overhead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-XX:ReservedCodeCacheSize&lt;/strong&gt;: Set the size of the code cache used for JIT-compiled code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-XX:+UnlockExperimentalVMOptions -XX:+UseZGC&lt;/strong&gt;: Enables ZGC in the JVM.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-XX:ConcGCThreads&lt;/strong&gt;: Specifies the number of threads for concurrent garbage collection. By default, it's 12.5% of the number of CPU cores.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-XX:ParallelGCThreads&lt;/strong&gt;: Defines the number of threads for STW phases. Default is 60% of the CPU cores.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-XX:ZCollectionInterval&lt;/strong&gt;: The minimum time interval between ZGC collections (in seconds).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-XX:ZAllocationSpikeTolerance&lt;/strong&gt;: Adjusts when ZGC triggers based on allocation spikes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-XX:+UnlockDiagnosticVMOptions -XX:-ZProactive&lt;/strong&gt;: Controls proactive garbage collection behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-Xlog&lt;/strong&gt;: Configures logging for garbage collection events.&lt;/li&gt;
&lt;/ul&gt;
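
&lt;p&gt;As a concrete illustration, the options above might be combined into a launch command like the following. This is a hypothetical example: the heap size, thread count, interval, and log path are placeholders, and &lt;code&gt;-XX:+UnlockExperimentalVMOptions&lt;/code&gt; is only needed on JDK 11-14, before ZGC became production-ready:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;java -Xms10g -Xmx10g \
     -XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
     -XX:ConcGCThreads=4 \
     -XX:ZCollectionInterval=120 \
     -Xlog:gc*:file=gc.log \
     -jar app.jar
&lt;/code&gt;&lt;/pre&gt;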

&lt;h3&gt;
  
  
  8. ZGC Trigger Mechanisms
&lt;/h3&gt;

&lt;p&gt;ZGC’s GC trigger mechanism differs significantly from CMS and G1. One of ZGC's core features is concurrency, and during the GC process, new objects may be created. The challenge lies in ensuring that the heap does not fill up before GC completes, as this could cause threads to block.&lt;/p&gt;

&lt;p&gt;Some key ZGC triggering mechanisms are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blocking Memory Allocation Requests&lt;/strong&gt;: When the heap fills up faster than GC can handle, threads may be blocked. The keyword in logs for this is "&lt;strong&gt;Allocation Stall&lt;/strong&gt;."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive Algorithm Based on Allocation Rate&lt;/strong&gt;: ZGC dynamically calculates when to trigger GC based on the recent allocation rate and GC times. This method helps avoid triggering too early and causing unnecessary pauses. The log keyword for this is "&lt;strong&gt;Allocation Rate&lt;/strong&gt;."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixed Time Intervals&lt;/strong&gt;: ZGC can also be triggered based on fixed intervals, which is useful for burst traffic scenarios. The log keyword for this is "&lt;strong&gt;Timer&lt;/strong&gt;."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proactive GC&lt;/strong&gt;: When ZGC calculates that GC should happen earlier than normal, this can be triggered proactively. This can be controlled with the &lt;strong&gt;-ZProactive&lt;/strong&gt; flag. Logs will show the keyword "&lt;strong&gt;Proactive&lt;/strong&gt;."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warmup Phase&lt;/strong&gt;: This occurs during service startup and is usually not of concern. Logs will show the keyword "&lt;strong&gt;Warmup&lt;/strong&gt;."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External Triggering&lt;/strong&gt;: Explicit calls to &lt;strong&gt;System.gc()&lt;/strong&gt; can also trigger GC, which is reflected in the logs as "&lt;strong&gt;System.gc()&lt;/strong&gt;."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  9. Key Innovations in ZGC
&lt;/h3&gt;

&lt;p&gt;ZGC uses a combination of &lt;strong&gt;colored pointers&lt;/strong&gt;, &lt;strong&gt;load barriers&lt;/strong&gt;, and &lt;strong&gt;memory multi-mapping&lt;/strong&gt; to achieve low-latency garbage collection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Colored Pointers&lt;/strong&gt;: ZGC uses the high-order bits of a 64-bit pointer to store GC state information. These bits include flags like &lt;strong&gt;Marked0&lt;/strong&gt;, &lt;strong&gt;Marked1&lt;/strong&gt;, and &lt;strong&gt;Remapped&lt;/strong&gt;, which indicate whether an object has been marked as live, whether it's been moved, or whether it's ready for finalization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Barriers&lt;/strong&gt;: ZGC ensures memory consistency during object access by using load barriers. These barriers check the object's pointer color and ensure that the pointer is updated to the correct memory location if the object has been moved during GC.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Multi-Mapping&lt;/strong&gt;: ZGC maps the same physical memory to multiple virtual addresses, each representing a different GC state (e.g., Marked0, Marked1, Remapped). This allows ZGC to efficiently switch between views of memory without actually moving data, improving flexibility and efficiency.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;ZGC is a revolutionary step forward in Java's garbage collection strategy, particularly for applications with &lt;strong&gt;large heaps&lt;/strong&gt; and &lt;strong&gt;low-latency requirements&lt;/strong&gt;. By using advanced techniques like colored pointers, load barriers, and memory multi-mapping, it can achieve low pause times even with large-scale memory management. With its concurrent phases and advanced triggering mechanisms, ZGC is well-suited for cloud environments, real-time systems, and applications that require high throughput and low latency.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Learning and Reflections on Java 8 Garbage Collectors</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Fri, 24 Jan 2025 14:03:47 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/learning-and-reflections-on-java-8-garbage-collectors-4c9i</link>
      <guid>https://dev.to/ryan_zhi/learning-and-reflections-on-java-8-garbage-collectors-4c9i</guid>
      <description>&lt;h1&gt;
  
  
  Why Companies Still Use Java 8
&lt;/h1&gt;

&lt;p&gt;Recently, I've been asking many friends about the Java versions they use in their projects. I found that a significant number of projects still run on Java 8, while some have moved to higher versions like Java 17 or Java 21. In these projects, choosing and tuning the garbage collector (GC) is a crucial part of optimizing application performance.&lt;br&gt;
Many senior colleagues told me that around 80% of companies still use JDK 8. This is mainly because they choose between CMS and G1 based on memory requirements. In the current tech landscape, few new internet companies are emerging, and most prefer to stick with what they know to avoid increasing team workload and potential pitfalls.&lt;/p&gt;

&lt;h2&gt;
  
  
  From a Management Perspective
&lt;/h2&gt;

&lt;p&gt;Optimization Benefits&lt;br&gt;
Upgrading to newer Java versions (like Java 17 or Java 21) might bring performance improvements and support for new features. However, it's essential to weigh these benefits against the effort and risks involved in the upgrade process.&lt;br&gt;
Workload and Risk&lt;br&gt;
Upgrading requires adapting and testing existing code, which could introduce new issues. Management needs to assess whether the upgrade is necessary and prepare a technical feasibility analysis to report to higher-ups.&lt;/p&gt;

&lt;h2&gt;
  
  
  From a Developer's Perspective
&lt;/h2&gt;

&lt;p&gt;Increased Workload&lt;br&gt;
Upgrading means retesting all modules, which is time-consuming and labor-intensive. This can lead to conflicts between developers and management regarding workload and scheduling.&lt;br&gt;
Technical Risks&lt;br&gt;
Newer versions may have undiscovered compatibility or performance issues, adding uncertainty to the project.&lt;/p&gt;

&lt;h1&gt;
  
  
  Java 8 Garbage Collectors
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Serial Garbage Collector (Serial GC)
&lt;/h2&gt;

&lt;p&gt;Characteristics&lt;br&gt;
Uses a single thread for garbage collection, causing all user threads to pause (Stop-The-World, STW) during GC.&lt;br&gt;
Simple and efficient with no overhead from thread interactions.&lt;br&gt;
Best for single-threaded environments or small applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;According to the official documentation, if your application has a small dataset (up to around 100 MB) or runs on a single processor with no pause time requirements, you can use -XX:+UseSerialGC to select the Serial Collector.&lt;br&gt;
Suitable for single-core CPU environments or resource-constrained scenarios.&lt;br&gt;
Applications with low latency requirements and small datasets.&lt;/p&gt;

&lt;h1&gt;
  
  
  Parallel Garbage Collector (Parallel GC)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Characteristics
&lt;/h2&gt;

&lt;p&gt;Uses multiple threads for garbage collection to reduce pause times.&lt;br&gt;
Focuses on high throughput.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;If peak application performance is the top priority and there are no pause time requirements or pauses of 1 second or longer are acceptable, use -XX:+UseParallelGC to select the Parallel Collector.&lt;br&gt;
Suitable for multi-core processor environments.&lt;br&gt;
Batch processing or big data processing scenarios.&lt;/p&gt;

&lt;h1&gt;
  
  
  CMS Garbage Collector (Concurrent Mark-Sweep GC)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Characteristics
&lt;/h2&gt;

&lt;p&gt;Aims to minimize pause times using a concurrent mark-sweep algorithm.&lt;br&gt;
The process includes initial mark, concurrent mark, remark, and concurrent sweep stages.&lt;br&gt;
Best for applications sensitive to response times.&lt;br&gt;
The CMS collector only works on the old generation and is based on the mark-sweep algorithm. Its operation consists of four steps:&lt;br&gt;
Initial Mark (CMS initial mark)&lt;br&gt;
Marks objects directly reachable from GC Roots. This stage is very quick and requires a brief STW pause.&lt;br&gt;
Concurrent Mark (CMS concurrent mark)&lt;br&gt;
Traces objects from GC Roots to mark live objects. This stage is time-consuming but runs concurrently with the application.&lt;br&gt;
Remark (CMS remark)&lt;br&gt;
Fixes any changes in object markings caused by the application's activity during the concurrent mark stage. This stage is slightly longer than the initial mark but still much shorter than the concurrent mark.&lt;br&gt;
Concurrent Sweep (CMS concurrent sweep)&lt;br&gt;
Clears the marked regions. This stage runs concurrently with the application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;If response time is more critical than overall throughput and GC pauses need to be kept under 1 second, consider using -XX:+UseConcMarkSweepGC or -XX:+UseG1GC.&lt;br&gt;
Suitable for web applications, online transaction systems, and other latency-sensitive scenarios.&lt;br&gt;
Large memory applications, but be aware of memory fragmentation issues. This can be mitigated by triggering CMS more frequently followed by a compacting collection.&lt;/p&gt;

&lt;h1&gt;
  
  
  G1 Garbage Collector (Garbage-First GC)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Characteristics
&lt;/h2&gt;

&lt;p&gt;Divides the heap into regions and prioritizes garbage collection in regions with the most garbage.&lt;br&gt;
Provides predictable pause times and reduces memory fragmentation.&lt;br&gt;
Best for large memory and multi-core environments.&lt;br&gt;
The G1 collection process is as follows:&lt;br&gt;
Initial Marking&lt;br&gt;
Marks objects directly reachable from GC Roots and adjusts the TAMS (Top at Mark Start) pointer so that new objects are allocated in the correct regions. This stage requires a brief STW pause but is very quick.&lt;br&gt;
Concurrent Marking&lt;br&gt;
Traces objects from GC Roots to identify live objects. This stage is time-consuming but runs concurrently with the application.&lt;br&gt;
Final Marking&lt;br&gt;
Fixes changes in object markings caused by the application's activity during the concurrent marking stage. This stage requires a brief STW pause but can be executed by multiple threads in parallel.&lt;br&gt;
Live Data Counting and Evacuation&lt;br&gt;
Sorts regions by their garbage-collection value and cost, then evacuates live objects from selected regions according to the desired GC pause time. This stage pauses user threads (STW); because only a subset of regions is evacuated per cycle, the pause stays bounded, and pausing user threads here significantly improves collection efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;If your application fits the following criteria, consider migrating from CMS or ParallelOld to G1 for better performance:&lt;br&gt;
More than half of the heap is occupied by live data.&lt;br&gt;
Object allocation or promotion rates vary significantly.&lt;br&gt;
You want to eliminate long GC pauses (over 0.5–1 second).&lt;br&gt;
In practice, applications with 8 GB or more memory generally use the G1 garbage collector. Smaller memory sizes can lead to frequent GC cycles, which degrade performance.&lt;br&gt;
Suitable for large applications requiring low latency and high throughput.&lt;br&gt;
Ideal for e-commerce platforms, game servers, and other performance-critical scenarios.&lt;/p&gt;
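
&lt;p&gt;As a hedged example, a typical G1 startup line might look like this (the heap size and pause goal are placeholders; &lt;code&gt;-XX:MaxGCPauseMillis&lt;/code&gt; is the standard HotSpot flag for expressing the desired pause target mentioned above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;java -XX:+UseG1GC -Xms8g -Xmx8g -XX:MaxGCPauseMillis=200 -jar app.jar
&lt;/code&gt;&lt;/pre&gt;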

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;While newer Java versions offer exciting features and improvements, many companies still rely on Java 8 due to its stability and familiarity. Garbage collector tuning remains a critical aspect of optimizing Java applications, and understanding the strengths and weaknesses of each collector is essential for making informed decisions.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Java Concurrency Mastery: A Comprehensive Guide to AQS, Locks, and Concurrent Collection</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Thu, 23 Jan 2025 08:44:29 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/java-concurrency-mastery-a-comprehensive-guide-to-aqs-locks-and-concurrent-collection-1l0h</link>
      <guid>https://dev.to/ryan_zhi/java-concurrency-mastery-a-comprehensive-guide-to-aqs-locks-and-concurrent-collection-1l0h</guid>
<description>&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AbstractQueuedSynchronizer (AQS)&lt;/strong&gt;: AQS is the backbone of many concurrency utilities in Java. It's a framework for building locks and synchronizers (a minimal example built on AQS appears after this list). Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;State Management:&lt;/strong&gt; AQS maintains a state variable that represents the synchronization state. This state can have different meanings depending on the implementation (e.g., lock count for ReentrantLock, read/write counts for ReentrantReadWriteLock).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FIFO Queue:&lt;/strong&gt; AQS uses a FIFO queue to manage threads that fail to acquire the lock. Each thread is wrapped in a Node and enqueued.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Condition Support:&lt;/strong&gt; AQS provides a ConditionObject that allows threads to wait for specific conditions to be met. Each condition maintains its own queue of waiting threads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fairness:&lt;/strong&gt; AQS supports both fair and non-fair implementations, affecting how threads acquire locks.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ReentrantLock&lt;/strong&gt;: A reentrant lock is a synchronization primitive that provides more flexibility than intrinsic locks (synchronized). Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reentrancy:&lt;/strong&gt; A thread can acquire the lock multiple times, and the lock count is incremented each time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fairness:&lt;/strong&gt; Can be configured to be fair (threads acquire locks in FIFO order) or non-fair (threads may acquire locks out of order).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Condition Support:&lt;/strong&gt; Allows threads to wait for specific conditions using Condition objects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interruptibility:&lt;/strong&gt; Threads waiting to acquire the lock can be interrupted.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ReadWriteLock&lt;/strong&gt;: Allows multiple readers to access a resource simultaneously while ensuring exclusive access for writers. Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared Locks (Read Locks):&lt;/strong&gt; Multiple threads can hold the read lock simultaneously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exclusive Locks (Write Locks):&lt;/strong&gt; Only one thread can hold the write lock at a time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upgrade/Downgrade:&lt;/strong&gt; Supports lock downgrade (write lock to read lock) but not upgrade (read lock to write lock).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fairness:&lt;/strong&gt; Can be configured to be fair or non-fair.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;LockSupport&lt;/strong&gt;: A utility class that provides basic thread blocking and unblocking mechanisms. Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blocking:&lt;/strong&gt; Uses park() to block a thread.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unblocking:&lt;/strong&gt; Uses unpark() to unblock a thread.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-Level:&lt;/strong&gt; Often used internally by other concurrency utilities but can also be used directly for custom synchronization.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Concurrent Collections&lt;/strong&gt;: Java provides several thread-safe collections optimized for concurrent access.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ConcurrentHashMap:&lt;/strong&gt; Allows high concurrency via CAS operations plus fine-grained, bucket-level locking (segment locks before Java 8). Does not allow null keys or values.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ConcurrentLinkedQueue:&lt;/strong&gt; Lock-free; uses CAS operations for thread safety. Unbounded, with no fixed capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CopyOnWriteArrayList:&lt;/strong&gt; Writes are expensive because every modification copies the entire underlying array; reads are extremely fast and lock-free, making it a good fit for read-heavy workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ConcurrentSkipListMap/ConcurrentSkipListSet:&lt;/strong&gt; Maintain elements in sorted order and allow high concurrency through CAS-based updates rather than coarse locks.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Blocking Queues&lt;/strong&gt;: Blocking queues are thread-safe queues that support blocking operations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ArrayBlockingQueue:&lt;/strong&gt; Bounded, with a fixed capacity. Can be configured to be fair.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedBlockingQueue:&lt;/strong&gt; Can be configured with a capacity but defaults to unbounded. Typically higher throughput than ArrayBlockingQueue because it uses separate locks for put and take.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PriorityBlockingQueue:&lt;/strong&gt; Elements are ordered by priority. Unbounded, with no fixed capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SynchronousQueue:&lt;/strong&gt; Direct handoff with no storage; producer threads wait for consumer threads and vice versa.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DelayQueue:&lt;/strong&gt; Elements can only be accessed after a specified delay.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedTransferQueue:&lt;/strong&gt; Supports direct handoff between producer and consumer threads.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Atomic Classes&lt;/strong&gt;: Atomic classes provide thread-safe operations on single variables. Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CAS Operations:&lt;/strong&gt; Use Compare-And-Swap (CAS) to ensure atomicity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Locks:&lt;/strong&gt; Avoids the overhead of traditional locks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Barriers:&lt;/strong&gt; Ensures visibility of changes across threads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Common classes: AtomicInteger, AtomicLong, AtomicBoolean, AtomicReference, AtomicIntegerArray.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Thread Pools&lt;/strong&gt;: Thread pools manage a pool of worker threads to execute tasks efficiently. Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core and Max Threads:&lt;/strong&gt; Manages a core pool size and a maximum pool size.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Work Queue:&lt;/strong&gt; Tasks are queued if all core threads are busy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rejected Execution:&lt;/strong&gt; Handles tasks when the queue is full and the maximum pool size is reached.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idle Threads:&lt;/strong&gt; Can be terminated if they remain idle for a specified duration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Common executors: FixedThreadPool (fixed number of threads), CachedThreadPool (dynamically creates threads as needed), SingleThreadExecutor (single-threaded executor), ScheduledThreadPool (supports scheduled and periodic tasks).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Synchronization Utilities&lt;/strong&gt;: Java provides several utilities for synchronizing threads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CountDownLatch:&lt;/strong&gt; One-time use; allows one or more threads to wait for a set of operations to complete. Threads call countDown() to decrement the counter and await() to wait until the counter reaches zero.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CyclicBarrier:&lt;/strong&gt; Reusable; allows multiple threads to wait at a barrier point, with an optional barrier action executed when all threads arrive. Can be reset for reuse.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semaphore:&lt;/strong&gt; Manages a set of permits that threads acquire to access resources and release afterward. Can be configured to be fair or non-fair.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
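
&lt;p&gt;To make the AQS mechanics concrete, here is a minimal, non-reentrant mutex built by subclassing AbstractQueuedSynchronizer, as promised in item 1. This is an illustrative sketch (the class names are made up), modeled on the pattern in the AQS Javadoc: state 0 means unlocked, 1 means locked.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal, non-reentrant mutex on top of AQS: state 0 = unlocked, 1 = locked.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // CAS the shared state; on failure AQS enqueues the thread in its FIFO queue
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int unused) {
            setState(0); // volatile write; AQS then unparks a queued successor
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // parks the thread until acquired
    public void unlock() { sync.release(1); }   // wakes the next waiting thread
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;ReentrantLock, CountDownLatch, and Semaphore all follow this same internal pattern: a private Sync subclass of AQS that gives its own meaning to the state field.&lt;/p&gt;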

</description>
    </item>
    <item>
      <title>A Deep Dive into Java’s Concurrency Framework (JUC)</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Wed, 22 Jan 2025 10:21:56 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/a-deep-dive-into-javas-concurrency-framework-juc-2p1e</link>
      <guid>https://dev.to/ryan_zhi/a-deep-dive-into-javas-concurrency-framework-juc-2p1e</guid>
<description>&lt;p&gt;As developers, writing efficient and thread-safe concurrent code can be challenging, but Java provides a robust concurrency framework called Java Util Concurrent (JUC) to simplify this process. Whether you’re working on complex multithreaded systems or looking to optimize performance under high concurrency, JUC has you covered. Here's a breakdown of its major components:&lt;/p&gt;

&lt;p&gt;🔐 &lt;strong&gt;Locks&lt;/strong&gt;&lt;br&gt;
JUC’s lock mechanism goes beyond the synchronized keyword, offering greater flexibility and functionality:&lt;/p&gt;

&lt;p&gt;ReentrantLock: A reentrant lock with enhanced features, like interruptible locks, fairness policies, and timeout-based locking.&lt;br&gt;
ReentrantReadWriteLock: Separates read and write locks for better performance under read-heavy workloads. Multiple threads can acquire the read lock simultaneously, while the write lock ensures exclusive access.&lt;br&gt;
StampedLock: Introduced in JDK 8, this offers a more advanced optimistic locking mechanism, enabling fine-grained control with modes like optimistic read, pessimistic read, and write locks.&lt;br&gt;
LockSupport: A utility class for advanced thread blocking and waking mechanisms, often used to implement custom synchronization constructs.&lt;br&gt;
Condition: Provides a finer-grained thread communication mechanism, enabling more flexible wait/notify patterns in combination with locks.&lt;/p&gt;
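
&lt;p&gt;A small sketch combining two of those features, fair mode and timeout-based locking (the class and method names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair mode
    private int counter;

    public boolean incrementWithTimeout() throws InterruptedException {
        // Timeout-based locking: give up instead of blocking forever
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                counter++;
                return true;
            } finally {
                lock.unlock(); // always release in finally
            }
        }
        return false; // could not acquire the lock within 500 ms
    }
}
&lt;/code&gt;&lt;/pre&gt;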
&lt;p&gt;🛠 &lt;strong&gt;Concurrency Tools&lt;/strong&gt;&lt;br&gt;
These utility classes are designed for advanced thread coordination:&lt;/p&gt;

&lt;p&gt;CountDownLatch: A synchronization aid that allows one or more threads to wait until a set of operations are completed. Ideal for tasks like waiting for multiple services to initialize.&lt;br&gt;
CyclicBarrier: Allows a group of threads to synchronize at a common barrier point. Once all threads reach the barrier, they can proceed. What makes it unique? It’s reusable for multiple cycles.&lt;br&gt;
Semaphore: A classic concurrency control tool that limits access to a resource pool by a specified number of threads. Perfect for managing limited connections or rate-limiting APIs.&lt;/p&gt;
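
&lt;p&gt;A minimal sketch of the service-initialization pattern mentioned above (the service count is arbitrary):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.util.concurrent.CountDownLatch;

public class StartupGate {
    public static void main(String[] args) throws InterruptedException {
        int services = 3;
        CountDownLatch ready = new CountDownLatch(services);

        for (int i = 0; i &amp;lt; services; i++) {
            new Thread(() -&amp;gt; {
                // ... initialize one service here ...
                ready.countDown(); // signal: this service is up
            }).start();
        }

        ready.await(); // block until all services have counted down
        System.out.println("all services initialized");
    }
}
&lt;/code&gt;&lt;/pre&gt;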
&lt;p&gt;⚛️ &lt;strong&gt;Atomic Classes&lt;/strong&gt;&lt;br&gt;
JUC offers a suite of atomic classes for lightweight, lock-free thread safety:&lt;/p&gt;

&lt;p&gt;Examples include AtomicInteger, AtomicBoolean, and AtomicReference.&lt;br&gt;
Backed by volatile and CAS (compare-and-swap) operations, they ensure visibility and atomicity without the overhead of explicit locks.&lt;br&gt;
Use cases: shared counters, flags, or reference updates in highly concurrent environments.&lt;/p&gt;
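
&lt;p&gt;The canonical example is a lock-free counter (the class name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.util.concurrent.atomic.AtomicInteger;

public class HitCounter {
    private final AtomicInteger hits = new AtomicInteger();

    public int record() {
        // A CAS loop under the hood: atomic increment without a lock
        return hits.incrementAndGet();
    }
}
&lt;/code&gt;&lt;/pre&gt;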
&lt;p&gt;🚀 &lt;strong&gt;Executors and Thread Pools&lt;/strong&gt;&lt;br&gt;
Thread management simplified! JUC provides Executor and ExecutorService interfaces for task execution:&lt;/p&gt;

&lt;p&gt;ThreadPoolExecutor: A fully customizable thread pool, making it ideal for scalable and performant task execution.&lt;br&gt;
ScheduledThreadPoolExecutor: Perfect for scheduling tasks with fixed delays or intervals.&lt;br&gt;
ForkJoinPool: A pool optimized for divide-and-conquer algorithms, leveraging work-stealing for better CPU utilization.&lt;br&gt;
CachedThreadPool: Dynamically creates threads as needed and reuses idle ones for short-lived tasks.&lt;br&gt;
SingleThreadExecutor: Guarantees sequential execution of tasks using a single worker thread.&lt;/p&gt;
&lt;p&gt;📦 &lt;strong&gt;Concurrent Collections&lt;/strong&gt;&lt;br&gt;
Thread-safe collections built for high-concurrency environments:&lt;/p&gt;

&lt;p&gt;ConcurrentHashMap: A highly optimized, thread-safe hash map with fine-grained locking (bucket-level). It provides non-blocking reads and controlled writes.&lt;br&gt;
CopyOnWriteArrayList: Perfect for scenarios with frequent reads but infrequent writes, as every modification creates a new copy of the list.&lt;br&gt;
ConcurrentSkipListMap and ConcurrentSkipListSet: Sorted, thread-safe collections built on the skip list data structure for fast lookups and updates.&lt;/p&gt;
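
&lt;p&gt;For instance, a thread-safe word counter needs no external lock at all (the class name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import java.util.concurrent.ConcurrentHashMap;

public class WordCounts {
    private final ConcurrentHashMap&amp;lt;String, Integer&amp;gt; counts = new ConcurrentHashMap&amp;lt;&amp;gt;();

    public void record(String word) {
        // merge() performs an atomic read-modify-write on the affected bucket
        counts.merge(word, 1, Integer::sum);
    }
}
&lt;/code&gt;&lt;/pre&gt;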
&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
JUC is a powerful toolbox for building scalable, performant, and thread-safe applications in Java. Whether you’re optimizing resource access with locks, managing threads with executors, or leveraging atomic operations for lock-free concurrency, JUC equips you with the tools you need.&lt;/p&gt;

&lt;p&gt;What’s your favorite part of the JUC framework? Let me know in the comments! 👇&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Summary of Deploying Vortex GPGPU on K7 FPGA Development Board</title>
      <dc:creator>Ryan Zhi</dc:creator>
      <pubDate>Wed, 22 Jan 2025 08:55:11 +0000</pubDate>
      <link>https://dev.to/ryan_zhi/summary-of-deploying-vortex-gpgpu-on-k7-fpga-development-board-324j</link>
      <guid>https://dev.to/ryan_zhi/summary-of-deploying-vortex-gpgpu-on-k7-fpga-development-board-324j</guid>
      <description>&lt;h1&gt;
  
  
  Project Background
&lt;/h1&gt;

&lt;p&gt;Our team consists of four members, each with a specific role. I am responsible for overall project coordination, scheduling, hardware architecture design, system integration, and data flow management. I also support team members when they encounter difficulties and escalate issues to our technical lead, Mr. Geng, if necessary. One team member focuses on testing, another on driver development, and the third on IP core modularization.&lt;br&gt;
The primary goal of this project is to lay the groundwork for future Compute-in-Memory (CIM) chips by implementing a minimal-cost RISC-V-based GPGPU on a K7 FPGA. The architecture consists of one core, one socket, and one cluster.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Compute-in-Memory (CIM)?
&lt;/h1&gt;

&lt;p&gt;Compute-in-Memory is an architecture that integrates computing capabilities within memory units, enabling efficient 2D and 3D matrix operations (multiplication and addition). Its key advantages include:&lt;br&gt;
Breaking the memory wall by reducing unnecessary data movement delays and power consumption.&lt;br&gt;
Enhancing computational efficiency by orders of magnitude while reducing costs.&lt;br&gt;
CIM is a non-von Neumann architecture that can deliver significantly higher performance (over 1000 TOPS) and efficiency (exceeding 10-100 TOPS/W) compared to existing ASIC chips.&lt;/p&gt;

&lt;h1&gt;
  
  
  Project Design and Background
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Why Choose Vortex GPGPU Based on RISC-V?
&lt;/h2&gt;

&lt;p&gt;Efficiency and Open Ecosystem: RISC-V's minimal instruction set is more efficient than ARM, resulting in smaller chip area and better performance. Vortex GPGPU is open-source, offering greater customization and innovation opportunities compared to closed ecosystems like NVIDIA CUDA and AMD ROCm.&lt;br&gt;
Research and Innovation: The RISC-V ecosystem is still rapidly evolving, providing ample opportunities for academic and industrial innovation, especially in hardware optimization and algorithm acceleration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges and Solutions in Deploying Vortex GPGPU on K7
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Hardware Architecture Differences:
&lt;/h3&gt;

&lt;p&gt;Challenge: U280 (UltraScale+) vs. K7 (Kintex-7) architecture; limited logic resources and memory bandwidth on K7.&lt;br&gt;
Solution: Modular design using Vivado Block Design, optimized pipeline depth, and reduced non-essential logic to fit K7's resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  PCIe Interface and Driver Differences:
&lt;/h3&gt;

&lt;p&gt;Challenge: U280 supports Xilinx Runtime (XRT) and HBM, while K7 requires custom Linux drivers and XDMA IP.&lt;br&gt;
Solution: Developed a Generic Virtual Interface (GVI) driver to encapsulate XDMA, optimized PCIe data channels, and used DMA queues to improve throughput.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource Limitations on K7:
&lt;/h3&gt;

&lt;p&gt;Challenge: Limited LUTs, DSPs, and DDR3 memory bandwidth.&lt;br&gt;
Solution: Optimized logic resource usage, split complex operations into multiple cycles, and enhanced local BRAM utilization for caching.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Differences Between K7 and U280
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Hardware Resources:
&lt;/h3&gt;

&lt;p&gt;U280: UltraScale+ architecture with abundant LUTs, FFs, DSPs, and HBM.&lt;br&gt;
K7: Kintex-7 architecture with limited resources and DDR3 support.&lt;/p&gt;

&lt;h3&gt;
  
  
  Computational Capability:
&lt;/h3&gt;

&lt;p&gt;U280: Supports multi-cluster, multi-socket, multi-core GPGPU architecture.&lt;br&gt;
K7: Limited to a single cluster, socket, and core due to resource constraints.&lt;/p&gt;

&lt;h3&gt;
  
  
  PCIe Support:
&lt;/h3&gt;

&lt;p&gt;U280: PCIe Gen3.&lt;br&gt;
K7: PCIe Gen2.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimization on K7
&lt;/h2&gt;

&lt;p&gt;Minimal GPGPU Architecture: Implemented a basic 1-cluster, 1-socket, 1-core architecture, removing non-essential features.&lt;br&gt;
Resource Reuse and Optimization: Shared modules, optimized key paths for common operations (e.g., matrix multiplication), and increased local cache usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact on System Performance
&lt;/h2&gt;

&lt;p&gt;Throughput and Latency: Lower throughput and higher latency on K7 due to limited resources and lower PCIe bandwidth.&lt;br&gt;
Task Scale: Suitable only for small-scale tasks, not for large deep learning models.&lt;br&gt;
Advantages: Low-cost validation platform, 95% cost reduction, and scalable architecture for future upgrades.&lt;/p&gt;

&lt;h1&gt;
  
  
  Task Planning and Team Coordination
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Task Allocation:
&lt;/h2&gt;

&lt;p&gt;Project Manager: Overall coordination, hardware architecture, system integration, and data flow management.&lt;br&gt;
Tester: Functional and performance testing, OpenCL script execution.&lt;br&gt;
Driver Developer: GVI driver development and debugging.&lt;br&gt;
IP Core Engineer: Optimization and modularization of GPGPU core for K7.&lt;/p&gt;

&lt;h2&gt;
  
  
  Collaboration Methods:
&lt;/h2&gt;

&lt;p&gt;Tools: Gantt charts, project management tools, enterprise messaging, and knowledge base for documentation.&lt;br&gt;
Meetings: Daily stand-ups and weekly project reviews to synchronize progress and address challenges.&lt;br&gt;
Support: Regular check-ins to assist team members with technical issues and facilitate cross-domain collaboration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Balancing Hardware and Software Teams:
&lt;/h2&gt;

&lt;p&gt;Clear Interface Definition: Established standardized interfaces and functional requirements between hardware and software.&lt;br&gt;
Milestone Setting: Aligned hardware IP delivery with software driver development and testing.&lt;br&gt;
Communication: Regular joint meetings to ensure consistent understanding of interface requirements and avoid rework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overcoming Challenges:
&lt;/h2&gt;

&lt;p&gt;Progress Synchronization: Set short-term goals to avoid delays due to mismatched hardware and software development paces.&lt;br&gt;
Cross-Domain Support: Provided cross-disciplinary technical guidance through documentation and knowledge sharing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Modular Design in Vivado Block Design
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Principles:
&lt;/h2&gt;

&lt;p&gt;Single Responsibility: Each module handles a specific function (e.g., data transfer, GPGPU computation).&lt;br&gt;
Standardized Interfaces: Use AXI4, AXI4-Lite, and AXI4-Stream for module communication.&lt;br&gt;
Hierarchical Design: Organize modules into layers (e.g., data transfer, computation, control) for easier maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scalability:
&lt;/h2&gt;

&lt;p&gt;Parameterized Modules: Allow dynamic adjustment of module parameters (e.g., PCIe bandwidth, GPGPU core count).&lt;br&gt;
Reserved Interfaces: Include extra AXI ports and interrupt lines for future expansion.&lt;br&gt;
Flexible Module Replacement: Standardized interfaces enable easy module upgrades.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintainability:
&lt;/h2&gt;

&lt;p&gt;Module Packaging: Use Vivado Packager Tool to create reusable IP cores with GUIs for configuration.&lt;br&gt;
Layered Debugging: Develop independent test cases for each module and use simulation tools for verification.&lt;br&gt;
Documentation and Naming: Maintain consistent naming conventions and comprehensive documentation for future reference.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Through clear task allocation, effective tools, and strong team coordination, we successfully overcame technical challenges and deployed the Vortex GPGPU on the K7 FPGA. This project not only validated the feasibility of GPGPU design on resource-constrained hardware but also provided a low-cost platform for future development and expansion.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
