<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ChelseaLiu0822</title>
    <description>The latest articles on DEV Community by ChelseaLiu0822 (@chelsealiu0822).</description>
    <link>https://dev.to/chelsealiu0822</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1217649%2F34c0435f-06c4-4ba6-9e51-b8101412bf4f.png</url>
      <title>DEV Community: ChelseaLiu0822</title>
      <link>https://dev.to/chelsealiu0822</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chelsealiu0822"/>
    <language>en</language>
    <item>
      <title>PySpark: Missing Values</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 18 Apr 2024 04:41:44 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/pyspark-missing-value-j3d</link>
      <guid>https://dev.to/chelsealiu0822/pyspark-missing-value-j3d</guid>
      <description>&lt;h2&gt;
  
  
  Drop
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrameNaFunctions.drop.html?highlight=na%20drop#pyspark.sql.DataFrameNaFunctions.drop"&gt;df.na.drop()&lt;/a&gt; vs. &lt;a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.dropna.html?highlight=na%20drop#"&gt;df.dropna()&lt;/a&gt;&lt;br&gt;&lt;br&gt;
DataFrame.dropna() and DataFrameNaFunctions.drop() are aliases of each other, so their behavior and performance are identical.&lt;br&gt;&lt;br&gt;
Both accept the same parameters, including how, thresh, and a subset of columns to consider.  &lt;/p&gt;
&lt;h3&gt;
  
  
  Examples
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rt95f6tb5974igca6iz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rt95f6tb5974igca6iz.png" alt="Image description" width="800" height="216"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Code to drop any row that contains missing data
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;na&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h0foxnnjxi9ag7penvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7h0foxnnjxi9ag7penvy.png" alt="Image description" width="261" height="127"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Only drop if row has at least 2 NON-null values
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;na&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thresh&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhesigvjlhgtkhc99uq7g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhesigvjlhgtkhc99uq7g.png" alt="Image description" width="315" height="199"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Only drop the rows with null in Sales col
&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dropna&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;how&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;any&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;subset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Sales&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n8u5etpjnek29840eoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n8u5etpjnek29840eoq.png" alt="Image description" width="307" height="163"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;na&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;how&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;any&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;na&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;how&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;all&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kjswl6dmle8hgqhxd85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kjswl6dmle8hgqhxd85.png" alt="Image description" width="256" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Fill
&lt;/h2&gt;

&lt;p&gt;We can also fill missing values with a replacement value. If there are nulls across columns of multiple data types, Spark is smart enough to &lt;strong&gt;match up the data types&lt;/strong&gt;: a string fill value is applied only to string columns, and a numeric fill value only to numeric columns. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;na&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;NEW VALUE&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;show&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F521d5f5eqcybnri25zxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F521d5f5eqcybnri25zxm.png" alt="Image description" width="363" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need different fill values for different columns, pass a dictionary that maps column names to fill values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr2awymg48nlbec1eu2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr2awymg48nlbec1eu2j.png" alt="Image description" width="607" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>pyspark</category>
      <category>python</category>
      <category>dataengineering</category>
      <category>bigdata</category>
    </item>
    <item>
      <title>JUC</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 06:14:23 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/juc-3d5</link>
      <guid>https://dev.to/chelsealiu0822/juc-3d5</guid>
      <description>&lt;h2&gt;
  
  
  AQS principle
&lt;/h2&gt;

&lt;p&gt;AQS (AbstractQueuedSynchronizer) is a framework for building blocking locks and related synchronizer tools.&lt;br&gt;
Features:&lt;br&gt;
A state attribute represents the status of the resource (exclusive or shared); subclasses define how this state is maintained and thereby control how the lock is acquired and released:&lt;br&gt;
getState() - read the state&lt;br&gt;
setState() - write the state&lt;br&gt;
compareAndSetState() - set the state optimistically with CAS&lt;br&gt;
Exclusive mode: only one thread can hold the resource; shared mode: multiple threads can hold it at once.&lt;br&gt;
Provides a FIFO wait queue, similar to Monitor's EntryList.&lt;br&gt;
Condition variables implement the wait/wake-up mechanism, and multiple condition variables are supported, similar to Monitor's WaitSet.&lt;br&gt;
Subclasses override tryAcquire() to acquire the lock and tryRelease() to release it.&lt;br&gt;
At the bottom layer, park and unpark are used to block and wake up threads.&lt;/p&gt;
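&lt;p&gt;To make the template concrete, here is a minimal non-reentrant mutex sketched on top of AQS (state 0 = unlocked, 1 = locked; everything except the AQS API itself is made up for illustration):&lt;/p&gt;

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Minimal non-reentrant mutex on top of AQS: state 0 = unlocked, 1 = locked.
class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS the state from 0 to 1; on success this thread owns the lock
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            setExclusiveOwnerThread(null);
            setState(0); // plain write is enough: only the owner releases
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }  // parks in the FIFO queue on failure
    public void unlock()      { sync.release(1); }  // unparks the next waiter
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```

&lt;p&gt;AQS supplies the queueing, parking, and wake-up; the subclass only defines what the state means.&lt;/p&gt;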

&lt;h2&gt;
  
  
  ReentrantLock principle
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OOvU0J_l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlc3musyo8m7sm5qac5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OOvU0J_l--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlc3musyo8m7sm5qac5i.png" alt="Image description" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Principle of the non-fair lock implementation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hS4qgm9W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvsgqnszarjvru0veees.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hS4qgm9W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvsgqnszarjvru0veees.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gzUU1cRL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f1lfamvwle5g52pyx80p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gzUU1cRL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f1lfamvwle5g52pyx80p.png" alt="Image description" width="716" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v_qvUAVB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ek8cs59navfvw6tde8nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v_qvUAVB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ek8cs59navfvw6tde8nc.png" alt="Image description" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tkZpQ2o2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjq5olvraqr6o75bkczi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tkZpQ2o2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjq5olvraqr6o75bkczi.png" alt="Image description" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Threads that fail in the competition enter the node queue. A Node's waitStatus is 0 by default, and Nodes are created lazily;&lt;br&gt;
the first Node is a dummy (sentinel) node that only occupies a slot and is not associated with any thread.&lt;br&gt;
acquireQueued: after enqueueing, the thread keeps trying to acquire the lock in a loop. If its predecessor is the head node it tries once more; on failure it sets the predecessor's waitStatus to -1 (meaning the predecessor is responsible for waking it up) and then parks.&lt;br&gt;
When the lock is released:&lt;br&gt;
No competition: the releasing thread checks whether the queue is non-empty and whether the head's waitStatus is -1, finds the Node closest to the head, unparks it so it resumes, and that node becomes the new head.&lt;br&gt;
With competition: if another thread grabs the lock first, the awakened thread re-enters the acquireQueued loop and, on failure, parks again.&lt;br&gt;
Reentrancy principle:&lt;br&gt;
if the acquiring thread is already the owner, state is incremented by 1; on release, state is decremented by 1, and the lock is only actually released when state reaches 0.&lt;br&gt;
Interruption principle:&lt;br&gt;
Non-interruptible mode (the default): even if interrupted, the thread stays in the AQS queue; after it finally acquires the lock it continues running with the interrupt flag set to true, so it only learns of the interruption afterwards. It cannot be interrupted while blocked.&lt;br&gt;
Interruptible mode: acquireInterruptibly() throws an exception immediately on interrupt, leaves the AQS queue, and stops waiting for the lock.&lt;/p&gt;
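&lt;p&gt;The reentrancy counting described above can be observed directly with ReentrantLock's hold count (a small sketch):&lt;/p&gt;

```java
import java.util.concurrent.locks.ReentrantLock;

class ReentrantDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                              // state: 0 to 1
        lock.lock();                              // same owner re-enters: state 1 to 2
        System.out.println(lock.getHoldCount());  // 2
        lock.unlock();                            // state: 2 to 1, lock still held
        System.out.println(lock.isLocked());      // true
        lock.unlock();                            // state: 1 to 0, actually released
        System.out.println(lock.isLocked());      // false
    }
}
```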

&lt;p&gt;Fair lock:&lt;br&gt;
when competing for the lock, a thread first checks whether the queue contains a predecessor node; it only attempts to acquire the lock if there is none.&lt;br&gt;
Condition variable implementation:&lt;br&gt;
the await process&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m2dHyoe4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wpcq6li4glg2rpejzosp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m2dHyoe4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wpcq6li4glg2rpejzosp.png" alt="Image description" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The thread then enters AQS's fullyRelease process: it releases the lock on the synchronizer completely (all reentrant holds at once) and unparks the next node in the AQS queue so that it can compete for the lock.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;signal&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
First it checks that the caller is the thread holding the lock. If so, it takes the first node of the condition's waiting queue, transfers it to the tail of the AQS queue, sets that node's status to 0, and sets its new predecessor's status to -1.&lt;/p&gt;
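&lt;p&gt;The await/signal flow corresponds to the standard Condition usage pattern; a sketch (the flag field is illustrative):&lt;/p&gt;

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class ConditionDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean flag = false;

    public void waitForFlag() throws InterruptedException {
        lock.lock();
        try {
            while (!flag) {
                ready.await();   // fully releases the lock, parks in the condition queue
            }
        } finally {
            lock.unlock();
        }
    }

    public void setFlag() {
        lock.lock();             // only the lock holder may signal
        try {
            flag = true;
            ready.signal();      // moves the first waiter to the AQS queue
        } finally {
            lock.unlock();
        }
    }
}
```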

&lt;h2&gt;
  
  
  Read-write lock
&lt;/h2&gt;

&lt;p&gt;ReentrantReadWriteLock&lt;br&gt;
When read operations greatly outnumber write operations, a read-write lock allows read-read concurrency and improves performance.&lt;br&gt;
Usage: acquire the read lock and the write lock separately.&lt;/p&gt;
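&lt;p&gt;A basic usage sketch (the guarded int field is illustrative):&lt;/p&gt;

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwDemo {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value; // illustrative shared data

    public int read() {
        rw.readLock().lock();      // many readers may hold this at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(int v) {
        rw.writeLock().lock();     // exclusive: blocks readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```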

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---miaoaHn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2trzrzykhi6615xwnczv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---miaoaHn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2trzrzykhi6615xwnczv.png" alt="Image description" width="800" height="912"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Read locks do not support condition variables.&lt;br&gt;
Upgrading during reentrancy is not supported: acquiring the write lock while holding a read lock makes the write-lock acquisition wait forever.&lt;br&gt;
Downgrading during reentrancy is supported: first acquire the write lock, then acquire the read lock, then release the write lock.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Cache update strategy&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Clear the cache first, then update the database.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Read-write lock principle&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The read lock and the write lock share the same Sync synchronizer, so they also share the same wait queue and the same state variable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mUV9a56o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrkp3221ue175796d075.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mUV9a56o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrkp3221ue175796d075.png" alt="Image description" width="800" height="535"&gt;&lt;/a&gt;&lt;br&gt;
t2 executes r.lock() and enters the read lock's sync.acquireShared(1) process, which first calls tryAcquireShared. If the write lock is held, -1 is returned to indicate failure.&lt;br&gt;
Returning 0 indicates success, but subsequent nodes will not be woken up.&lt;br&gt;
Returning a positive integer indicates success, and the value tells how many successor nodes still need to be woken; the read-write lock always returns 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QA_HA4M---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ym3fz04lem5rra2udq7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QA_HA4M---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ym3fz04lem5rra2udq7m.png" alt="Image description" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kfxFBXOG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/su44tqcgruvoaf5nxymc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kfxFBXOG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/su44tqcgruvoaf5nxymc.png" alt="Image description" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B6fMpxcI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dsl43at25w4whvpwoixd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B6fMpxcI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dsl43at25w4whvpwoixd.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eXJqF0JC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwtnyfibwa2jxdmaa9v9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eXJqF0JC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mwtnyfibwa2jxdmaa9v9.png" alt="Image description" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PQqRTCv0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zq2klqnwivciwq0emov9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PQqRTCv0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zq2klqnwivciwq0emov9.png" alt="Image description" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QMoRCC9F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o2vjw9wsdcroaifg1j77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QMoRCC9F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o2vjw9wsdcroaifg1j77.png" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bdlLS2pp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jvuviwednriqflf4jj9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bdlLS2pp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jvuviwednriqflf4jj9u.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;br&gt;
tryAcquireShared is executed again; if it succeeds, the read-lock count is incremented by 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8WQN2_fM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6sas1lxw0dl3qwvbe1yi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8WQN2_fM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6sas1lxw0dl3qwvbe1yi.png" alt="Image description" width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Waking up the read lock wakes all shared nodes in turn, achieving read-read concurrency.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;StampedLock&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Added in JDK 8 to further optimize read performance. Its distinctive feature is that acquiring a read or write lock returns a stamp, which must be used when releasing or validating the lock.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WklW5e-z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/khnexymto8cmcdorbsfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WklW5e-z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/khnexymto8cmcdorbsfd.png" alt="Image description" width="748" height="428"&gt;&lt;/a&gt;&lt;br&gt;
Optimistic read: StampedLock supports the tryOptimisticRead() method, which takes no lock at all. After reading, the stamp must be validated; if validation passes, no write happened in the meantime and the data can be used safely. If it fails, the reader must fall back to acquiring the read lock to guarantee data safety.&lt;/p&gt;
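&lt;p&gt;A sketch of the optimistic-read pattern, modeled on the well-known Point example from the StampedLock documentation:&lt;/p&gt;

```java
import java.util.concurrent.locks.StampedLock;

class StampedPoint {
    private final StampedLock sl = new StampedLock();
    private double x, y; // illustrative coordinates

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();         // exclusive write, returns a stamp
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead(); // no lock taken at all
        double cx = x, cy = y;               // read, possibly racing a writer
        if (!sl.validate(stamp)) {           // stamp check failed: a write happened
            stamp = sl.readLock();           // fall back to a real read lock
            try {
                cx = x;
                cy = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```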

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SKBsK3JD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8iksx864xr272g0lqqp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SKBsK3JD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8iksx864xr272g0lqqp.png" alt="Image description" width="678" height="688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;StampedLock does not support condition variables or reentrancy.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Semaphore&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
A semaphore limits the number of threads that can access a shared resource at the same time.&lt;/p&gt;
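&lt;p&gt;A sketch of limiting concurrent access with Semaphore (the limit of 2 and the sleep are illustrative):&lt;/p&gt;

```java
import java.util.concurrent.Semaphore;

class SemDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore sem = new Semaphore(2);   // at most 2 threads inside at once

        Runnable task = () -> {
            try {
                sem.acquire();              // blocks while both permits are taken
                try {
                    Thread.sleep(100);      // simulated work
                } finally {
                    sem.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        Thread t3 = new Thread(task);       // the third thread must wait its turn
        t1.start(); t2.start(); t3.start();
        t1.join(); t2.join(); t3.join();
        System.out.println(sem.availablePermits()); // 2: all permits returned
    }
}
```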

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qv-qTo6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdibj87b46xhz8091hdd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qv-qTo6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdibj87b46xhz8091hdd.png" alt="Image description" width="694" height="722"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iqUARUmL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4djuvp4l8lw755k9p1ip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iqUARUmL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4djuvp4l8lw755k9p1ip.png" alt="Image description" width="732" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nju8RW8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63gamcvlzb9gedd1t18v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nju8RW8q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63gamcvlzb9gedd1t18v.png" alt="Image description" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XaIFDjZg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/opvv9cw81b3b4p3oc27j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XaIFDjZg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/opvv9cw81b3b4p3oc27j.png" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;CountDownLatch&lt;/strong&gt;&lt;/em&gt; (countdown latch)&lt;br&gt;
Used for thread synchronization and cooperation: threads wait until the countdown completes.&lt;br&gt;
The constructor argument initializes the count, await() waits for the count to reach zero, and countDown() decrements the count by one.&lt;/p&gt;
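&lt;p&gt;The constructor/await/countDown trio can be sketched as (worker count of 3 is illustrative):&lt;/p&gt;

```java
import java.util.concurrent.CountDownLatch;

class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3); // wait for 3 workers

        for (int i = 0; i != 3; i++) {
            new Thread(() -> {
                // ... do some work ...
                latch.countDown();   // decrement the count by one
            }).start();
        }

        latch.await();               // blocks until the count reaches zero
        System.out.println("all workers finished");
    }
}
```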

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X2u1YtzP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmai61vfg7tkp9dldgvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X2u1YtzP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qmai61vfg7tkp9dldgvq.png" alt="Image description" width="800" height="875"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;CyclicBarrier&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
A cyclic barrier is used for thread cooperation: threads wait until a given count of them has arrived. The count is set in the constructor; when a thread reaches the synchronization point it calls await() and blocks, and once the number of waiting threads reaches the count, all of them continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Vhl-gST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9z39214gwu8x3pe5age7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Vhl-gST--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9z39214gwu8x3pe5age7.png" alt="Image description" width="800" height="789"&gt;&lt;/a&gt;&lt;br&gt;
The number of threads should match the count value.&lt;br&gt;
Unlike CountDownLatch, it can be reused without creating a new CyclicBarrier object: once the count is consumed, it resets to its initial value. CountDownLatch is single-use.&lt;/p&gt;
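&lt;p&gt;A sketch showing both the barrier action and reuse across rounds (party count of 2 and the printed message are illustrative):&lt;/p&gt;

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

class BarrierDemo {
    public static void main(String[] args) throws Exception {
        // 2 parties; the barrier action runs once each time the barrier trips
        CyclicBarrier barrier = new CyclicBarrier(2, () -> System.out.println("tripped"));

        Runnable party = () -> {
            try {
                barrier.await();   // block until 2 threads have arrived
            } catch (InterruptedException | BrokenBarrierException e) {
                throw new RuntimeException(e);
            }
        };

        // the same barrier object is reused for a second round
        for (int round = 0; round != 2; round++) {
            Thread a = new Thread(party);
            Thread b = new Thread(party);
            a.start(); b.start();
            a.join(); b.join();
        }
    }
}
```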

</description>
    </item>
    <item>
      <title>Thread safe collection class</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 05:59:43 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/thread-safe-collection-class-1j36</link>
      <guid>https://dev.to/chelsealiu0822/thread-safe-collection-class-1j36</guid>
      <description>&lt;p&gt;Thread-safe collection classes fall into three groups:&lt;br&gt;
Blocking: lock-based, coordinating threads by blocking.&lt;br&gt;
CopyOnWrite: copies the underlying array on modification, so writes are expensive.&lt;br&gt;
Concurrent containers:&lt;br&gt;
optimized with CAS, giving high throughput,&lt;br&gt;
but weakly consistent: during traversal, even if the container is modified, the iterator keeps working, though it may observe stale content;&lt;br&gt;
size is also weakly consistent: the size operation may not be 100% accurate;&lt;br&gt;
reads are weakly consistent as well.&lt;br&gt;
By contrast, if a non-thread-safe container is modified during traversal, the fail-fast mechanism aborts the traversal by throwing ConcurrentModificationException.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;ConcurrentHashMap&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Offers the same methods as an ordinary HashMap, but thread-safe&lt;br&gt;
Although each individual operation is atomic, a combination of operations is not necessarily thread-safe.&lt;br&gt;
map.computeIfAbsent() computes and inserts the value for a key atomically, so the get-and-put combination is thread-safe.&lt;br&gt;
Principle:&lt;br&gt;
In Java 8, HashMap places hash-colliding elements at the tail of the linked list; in Java 7 they were placed at the head.&lt;br&gt;
HashMap resizing could form concurrent dead links (cycles) in JDK 7. JDK 8 changed the insertion order so cycles no longer form during resizing, but concurrent resizing can still lose data.&lt;br&gt;
ConcurrentHashMap&lt;/p&gt;
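The computeIfAbsent guarantee described above can be sketched with a small word-count example (the class name and counts here are illustrative, not from the article):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class WordCount {
    // computeIfAbsent runs the mapping function atomically, at most once per key,
    // so a racing get-then-put can never install two counters for the same word.
    static final ConcurrentHashMap<String, LongAdder> counts = new ConcurrentHashMap<>();

    static long run() throws InterruptedException {
        counts.clear();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counts.computeIfAbsent("java", k -> new LongAdder()).increment();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counts.get("java").sum();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads x 1000 increments: no increments lost, no counter replaced
        System.out.println(run()); // 4000
    }
}
```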

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zLys1aW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vxrvtgtfb13ygncevoe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zLys1aW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vxrvtgtfb13ygncevoe.png" alt="Image description" width="800" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5kZnp91---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bj8d9j6513qn7brd1x3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5kZnp91---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bj8d9j6513qn7brd1x3.png" alt="Image description" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Dp6jw4uR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rtnu7wsdp24jlwaqlyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Dp6jw4uR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rtnu7wsdp24jlwaqlyk.png" alt="Image description" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lazy initialization: the constructor only computes the table size; the table is not actually created until first use.&lt;br&gt;
JDK 8 uses array + linked list + red-black tree; the initial array length is 16.&lt;br&gt;
When a linked list grows past 8 elements it is converted into a red-black tree, and when the element count exceeds load factor * array length the table is resized.&lt;br&gt;
JDK 7 achieved thread safety with Segment locks over an array + linked list structure.&lt;br&gt;
JDK 8 uses CAS + synchronized to ensure thread safety.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Thread Pool</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 05:55:47 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/thread-pool-3c6p</link>
      <guid>https://dev.to/chelsealiu0822/thread-pool-3c6p</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qH68eAFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gteqt7zo8ebal3r72dli.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qH68eAFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gteqt7zo8ebal3r72dli.png" alt="Image description" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ThreadPoolExecutor
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3u62_9hW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mb4316s7lony1j2rly62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3u62_9hW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mb4316s7lony1j2rly62.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thread pool state:&lt;br&gt;
ThreadPoolExecutor uses the high 3 bits of an int to represent the pool state and the low 29 bits to represent the thread count.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rYDkZRlz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36k7h98ondjpsdmm17uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rYDkZRlz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36k7h98ondjpsdmm17uj.png" alt="Image description" width="556" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This information is stored in an atomic variable ctl.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1mOn-Irj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e69s0jcr27f4prh0d0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1mOn-Irj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e69s0jcr27f4prh0d0h.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the task volume is large, extra non-core threads are created to execute tasks, and they are destroyed once their keep-alive time expires.&lt;/p&gt;

&lt;h2&gt;
  
  
  RejectedExecutionHandler
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8CVMwxTp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6cb50achqap1tagy9zxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8CVMwxTp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6cb50achqap1tagy9zxg.png" alt="Image description" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  newFixedThreadPool
&lt;/h2&gt;

&lt;p&gt;Core thread count == maximum thread count (no extra non-core threads), so no keep-alive timeout is needed&lt;br&gt;
Suitable when the number of tasks is known and the load is relatively steady&lt;br&gt;
Because all threads are core threads, they do not terminate on their own after finishing a task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XppabA-I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8h6yhyu08r3a5t8jxl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XppabA-I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8h6yhyu08r3a5t8jxl8.png" alt="Image description" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  newCachedThreadPool (thread pool with buffering function)
&lt;/h2&gt;

&lt;p&gt;The core thread count is 0 and the maximum is Integer.MAX_VALUE; every thread created is a non-core thread with a&lt;br&gt;
60-second keep-alive, so threads can be created without limit.&lt;br&gt;
The queue is a SynchronousQueue, which holds no elements: each task is handed directly to a waiting thread.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EDSW5O-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uewqzenwi45xqgtu4sbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EDSW5O-O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uewqzenwi45xqgtu4sbk.png" alt="Image description" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  newSingleThreadExecutor
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qpY0Hwwx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cww1k5jc729wwq9jkicy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qpY0Hwwx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cww1k5jc729wwq9jkicy.png" alt="Image description" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Suitable when multiple tasks should be executed one at a time, in queue order. Even if a task throws an exception, a new thread is created to execute the remaining tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WvCSKP63--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xcyu9we7865uh18b22io.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WvCSKP63--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xcyu9we7865uh18b22io.png" alt="Image description" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Closing the thread pool&lt;br&gt;
void shutdown()&lt;br&gt;
The pool state changes to SHUTDOWN: tasks already submitted are still executed, no new tasks are accepted, and the calling thread is not blocked.&lt;br&gt;
shutdown() updates the pool state, interrupts idle threads, and attempts to terminate the pool&lt;/p&gt;

&lt;p&gt;shutdownNow()&lt;br&gt;
The pool state changes to STOP: no new tasks are accepted, the tasks still in the queue are returned to the caller, and running tasks are interrupted via interrupt().&lt;br&gt;
shutdownNow() updates the pool state, interrupts all threads, returns the remaining queued tasks, and attempts to terminate the pool.&lt;/p&gt;
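The shutdownNow behavior can be seen in a minimal sketch (pool size and sleep time are arbitrary):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolShutdown {
    static int demo() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        // The single worker is busy with this task when shutdownNow() is called.
        pool.submit(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException e) { /* interrupted by shutdownNow */ }
        });
        pool.submit(() -> {});                       // this one stays in the queue
        List<Runnable> unrun = pool.shutdownNow();   // STOP: interrupt workers, drain the queue
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return unrun.size();                         // the queued task is handed back
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // 1
    }
}
```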

&lt;h2&gt;
  
  
  Design Pattern Worker Thread
&lt;/h2&gt;

&lt;p&gt;Let a limited number of worker threads take turns processing an unbounded number of tasks asynchronously&lt;br&gt;
Using different thread pools for different task types avoids starvation and improves efficiency.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Starvation
A fixed-size thread pool can starve:
if every thread in the pool is working on the same kind of task and none are left for the tasks those depend on, starvation occurs.
Remedy: let different thread pools handle different kinds of tasks&lt;/li&gt;
&lt;li&gt;Thread pool size
Too small: the pool cannot fully utilize system resources and starvation becomes likely.
Too large: more thread context switching and more memory use.
Choose the size according to the workload&lt;/li&gt;
&lt;li&gt;CPU-intensive work (e.g. data analysis)
CPU core count + 1 gives optimal CPU utilization: the +1 ensures that when a thread is suspended by a page fault or similar stall, the extra thread steps in so no CPU cycles are wasted.&lt;/li&gt;
&lt;li&gt;IO-intensive work
The CPU is not always busy
Empirical formula:
thread count = core count * desired CPU utilization * total time (CPU time + wait time) / CPU time&lt;/li&gt;
&lt;li&gt;Task-scheduling thread pool
Timer is simple to use, but all tasks are scheduled by a single thread, so they execute serially: a delay or exception in one task affects the ones after it. Use newScheduledThreadPool instead.
schedule(task, delay, time unit) executes a task after the given delay
Constructor:&lt;/li&gt;
&lt;/ul&gt;
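The schedule(...) call from the last bullet can be sketched like this (the delay and returned value are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    static String demo() throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        // Unlike Timer, tasks run on a pool: a slow or failing task does not delay the others.
        ScheduledFuture<String> f = scheduler.schedule(() -> "done", 100, TimeUnit.MILLISECONDS);
        String result = f.get();     // blocks until the delayed task has run
        scheduler.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // done
    }
}
```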

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bZA1Ij89--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxq6a2f91nzb9a3wyq6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bZA1Ij89--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxq6a2f91nzb9a3wyq6v.png" alt="Image description" width="800" height="29"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Thread pool exceptions
Exceptions thrown inside tasks are not printed; either catch them manually with try/catch,
or retrieve them through the Future object's get() method, which rethrows the captured exception&lt;/li&gt;
&lt;li&gt;Tomcat&lt;/li&gt;
&lt;/ul&gt;
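The Future.get() behavior from the first bullet above can be sketched in a few lines (the failing task is invented for illustration):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolException {
    static String demo() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<Integer> f = pool.submit(() -> 1 / 0); // exception is captured silently, nothing is printed
        try {
            f.get();                                   // rethrows it, wrapped in ExecutionException
            return "no exception";
        } catch (ExecutionException e) {
            return e.getCause().getClass().getSimpleName();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // ArithmeticException
    }
}
```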

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YSSO2vP2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fw18xoq92p9kwdzlmayk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YSSO2vP2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fw18xoq92p9kwdzlmayk.png" alt="Image description" width="800" height="184"&gt;&lt;/a&gt;&lt;br&gt;
LimitLatch throttles connections and enforces the maximum connection count.&lt;br&gt;
The Acceptor accepts new socket connections.&lt;br&gt;
The Poller monitors socket channels for readable IO events; once a channel is readable, it wraps it in a task object (SocketProcessor) and submits it to the Executor thread pool.&lt;br&gt;
The worker threads in the Executor thread pool process the requests.&lt;br&gt;
Tomcat's thread pool extends ThreadPoolExecutor: when the thread count reaches maximumPoolSize it does not throw RejectedExecutionException immediately, but tries to enqueue the task once more and only throws RejectedExecutionException if that also fails.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fork/Join thread pool
Applies divide and conquer, so it suits CPU-intensive tasks that can be split. It adds multi-threading on top of divide and conquer: the decomposition and merging of each subtask is handed to different threads, improving computing efficiency.
By default the pool is created with as many threads as there are CPU cores.
Usage:
The task class extends RecursiveTask (or RecursiveAction when no result is needed) and overrides the compute() method&lt;/li&gt;
&lt;/ul&gt;
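A minimal RecursiveTask along those lines (the threshold and range are arbitrary choices):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    final long from, to;
    SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= 1000) {                  // small enough: sum directly
            long s = 0;
            for (long i = from; i <= to; i++) s += i;
            return s;
        }
        long mid = (from + to) / 2;               // divide...
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                              // run the left half on another worker
        return right.compute() + left.join();     // ...and conquer: merge the results
    }

    static long sum(long n) {
        return ForkJoinPool.commonPool().invoke(new SumTask(1, n));
    }

    public static void main(String[] args) {
        System.out.println(sum(100_000)); // 5000050000
    }
}
```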

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rP1zYzua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/853gct4xzpe49ksbt51l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rP1zYzua--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/853gct4xzpe49ksbt51l.png" alt="Image description" width="800" height="703"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Shared model immutability</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 05:43:47 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/shared-model-immutability-o3b</link>
      <guid>https://dev.to/chelsealiu0822/shared-model-immutability-o3b</guid>
      <description>&lt;p&gt;SimpleDateFormat is not thread-safe; under multi-threading, use DateTimeFormatter instead (an immutable, thread-safe class)&lt;/p&gt;

&lt;h2&gt;
  
  
  Immutable design:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The class and all of its fields are final
Marking a field final ensures it is read-only and cannot be modified.
Marking the class final ensures its methods cannot be overridden, preventing subclasses from inadvertently breaking immutability.&lt;/li&gt;
&lt;li&gt;Defensive copy: avoid sharing by handing out copy objects
A defensive copy ensures the array contents cannot be changed by other classes.&lt;/li&gt;
&lt;/ul&gt;
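Those rules can be sketched in one small class (the class and its fields are invented for illustration):

```java
import java.util.Arrays;

// final class + final field + defensive copies in and out: immutable, hence thread-safe
public final class Scores {
    private final int[] values;

    public Scores(int[] values) {
        this.values = Arrays.copyOf(values, values.length); // copy in: the caller's array cannot mutate us later
    }

    public int[] getValues() {
        return Arrays.copyOf(values, values.length);        // copy out: callers only ever receive a copy
    }

    public static void main(String[] args) {
        int[] raw = {90, 80};
        Scores s = new Scores(raw);
        raw[0] = 0;                           // mutating the original array has no effect
        System.out.println(s.getValues()[0]); // 90
    }
}
```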

&lt;h2&gt;
  
  
  Flyweight pattern
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Share data as much as possible&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Since defensive copying creates many objects and wastes resources, the flyweight pattern is often used instead.&lt;br&gt;
This pattern appears in the wrapper classes:&lt;br&gt;
Long.valueOf caches the Long objects for -128 through 127 and reuses them within that range; outside it, a new Long object is created.&lt;/p&gt;
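The cache can be observed directly; note the second comparison relies on the standard JDK implementation, which does not cache outside -128..127 (the language spec only guarantees caching inside that range):

```java
public class LongCache {
    public static void main(String[] args) {
        Long a = Long.valueOf(127), b = Long.valueOf(127);
        Long c = Long.valueOf(128), d = Long.valueOf(128);
        System.out.println(a == b); // true: inside the cache range, the same object is shared
        System.out.println(c == d); // false on the standard JDK: outside it, new objects are created
    }
}
```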

&lt;h2&gt;
  
  
  final principle
&lt;/h2&gt;

&lt;p&gt;The principle behind writes to final variables:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xy53THhm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24vjtipy9t6iledghwd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xy53THhm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24vjtipy9t6iledghwd6.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;br&gt;
A stateless class is also thread-safe; this is achieved by keeping no member variables in the class.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Lock-free concurrency-optimistic locking (non-blocking)</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 05:40:40 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/lock-free-concurrency-optimistic-locking-non-blocking-3ked</link>
      <guid>https://dev.to/chelsealiu0822/lock-free-concurrency-optimistic-locking-non-blocking-3ked</guid>
      <description>&lt;p&gt;CAS and volatile&lt;br&gt;
CAS: Compare And Set (also called Compare And Swap); it must be an atomic operation&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KnxGRoky--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2r0sy95zlnsgbjvnv6x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KnxGRoky--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2r0sy95zlnsgbjvnv6x7.png" alt="Image description" width="573" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the bottom, CAS is the lock cmpxchg instruction (on x86), which guarantees atomicity on both single-core and&lt;br&gt;
multi-core CPUs.&lt;br&gt;
CAS operations need volatile support so that the latest value of the shared variable is always read, achieving the [compare and exchange] effect.&lt;/p&gt;
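The compare-and-exchange loop described above looks like this in practice (the account and amounts are illustrative; AtomicInteger's CAS is used rather than raw Unsafe):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger balance = new AtomicInteger(1000);

    // Classic CAS retry loop: read the latest value (a volatile read), compute the
    // new one, and swap only if nobody modified it in between; otherwise retry.
    void withdraw(int amount) {
        while (true) {
            int prev = balance.get();
            int next = prev - amount;
            if (balance.compareAndSet(prev, next)) return;
        }
    }

    int getBalance() { return balance.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasCounter account = new CasCounter();
        Thread[] ts = new Thread[10];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> account.withdraw(10));
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(account.getBalance()); // 900: no lost updates despite the race
    }
}
```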

&lt;h2&gt;
  
  
Why lock-free is efficient
&lt;/h2&gt;

&lt;p&gt;In the lock-free case, even if a retry fails, the thread keeps running at full speed without stopping, whereas synchronized&lt;br&gt;
causes a context switch and blocks the thread when it fails to acquire the lock.&lt;br&gt;
In the lock-free case, because the thread must keep running, it needs CPU time; if the thread is not allocated a time&lt;br&gt;
slice, it falls back to the runnable state, which still causes a context switch.&lt;/p&gt;

&lt;h2&gt;
  
  
  CAS features
&lt;/h2&gt;

&lt;p&gt;CAS is based on the idea of optimistic locking: it does not fear other threads modifying the shared variable, and simply retries on conflict.&lt;br&gt;
synchronized is based on the idea of pessimistic locking: it prevents other threads from modifying the shared variable.&lt;br&gt;
CAS embodies lock-free, non-blocking concurrency:&lt;br&gt;
Since synchronized is not used, threads are never blocked.&lt;br&gt;
However, if contention is fierce and retries are frequent, efficiency suffers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Atomic integer
&lt;/h2&gt;

&lt;p&gt;AtomicBoolean, AtomicInteger, AtomicLong: atomic wrappers for the basic types&lt;br&gt;
&lt;strong&gt;Constructor:&lt;/strong&gt;&lt;br&gt;
AtomicInteger i = new AtomicInteger(value), where the value field is marked volatile&lt;br&gt;
&lt;strong&gt;Increment methods:&lt;/strong&gt;&lt;br&gt;
i.incrementAndGet() is like ++i&lt;br&gt;
i.getAndIncrement() is like i++&lt;br&gt;
Decrement works analogously&lt;br&gt;
Get the value:&lt;br&gt;
i.get()&lt;br&gt;
Addition:&lt;br&gt;
i.getAndAdd(value), i.addAndGet(value)&lt;br&gt;
Arbitrary updates (e.g. multiplication):&lt;br&gt;
i.updateAndGet(IntUnaryOperator op) (IntUnaryOperator is a functional interface)&lt;br&gt;
i.getAndUpdate&lt;br&gt;
e.g. i.updateAndGet(x -&amp;gt; x * 10)&lt;br&gt;
&lt;strong&gt;Principle&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T_L0KfhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5di3691bpjfyc57yxj8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T_L0KfhW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5di3691bpjfyc57yxj8b.png" alt="Image description" width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Atomic reference
&lt;/h2&gt;

&lt;p&gt;AtomicReference, AtomicMarkableReference, AtomicStampedReference&lt;br&gt;
AtomicReference determines the protected type through its generic parameter&lt;br&gt;
Usage is basically the same as AtomicInteger&lt;/p&gt;

&lt;h2&gt;
  
  
  The ABA problem
&lt;/h2&gt;

&lt;p&gt;The variable starts as A. Before the CAS executes, one thread changes the shared variable to B and another changes B back to&lt;br&gt;
A. CAS cannot sense that the shared variable was modified; it can only check whether the current value equals the original.&lt;/p&gt;

&lt;h2&gt;
  
  
  AtomicStampedReference
&lt;/h2&gt;

&lt;p&gt;Adding a version number (stamp) to the shared variable lets us detect whether it was ever modified; if it was, the CAS&lt;br&gt;
fails.&lt;br&gt;
Constructor:&lt;br&gt;
AtomicStampedReference ref = new AtomicStampedReference&amp;lt;&amp;gt;("A", 0), where 0 is the initial&lt;br&gt;
version number (stamp)&lt;br&gt;
There is no get() method, only getReference() (and getStamp())&lt;br&gt;
compareAndSet(prev, next, stamp, newStamp): pass in the expected old version number and the&lt;br&gt;
new version number&lt;/p&gt;
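The stamp check can be demonstrated in a few lines (the A/B/C values are illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    static boolean tryUpdate() {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int stamp = ref.getStamp();          // remember version 0

        // Meanwhile, "other threads" run A -> B -> A: the value is back, the stamp is not
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // A plain AtomicReference CAS would succeed here; the stale stamp makes this fail
        return ref.compareAndSet("A", "C", stamp, stamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(tryUpdate()); // false: the ABA change was detected
    }
}
```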

&lt;h2&gt;
  
  
  AtomicMarkableReference
&lt;/h2&gt;

&lt;p&gt;When we only care whether the shared variable was ever changed, not how many times, a boolean mark is enough&lt;br&gt;
Constructor:&lt;br&gt;
AtomicMarkableReference ref = new AtomicMarkableReference&amp;lt;&amp;gt;(value,&lt;br&gt;
true)&lt;br&gt;
compareAndSet(prev, next, expectedMark, newMark): pass in the expected old mark and the new mark&lt;/p&gt;

&lt;h2&gt;
  
  
  Atomic array
&lt;/h2&gt;

&lt;p&gt;AtomicIntegerArray, AtomicLongArray, AtomicReferenceArray&lt;br&gt;
Protect the thread safety of the elements inside an array&lt;br&gt;
Usage is basically the same as AtomicInteger&lt;br&gt;
The length() method returns the array length&lt;br&gt;
getAndIncrement(index) atomically increments the element at the given index&lt;/p&gt;

&lt;h2&gt;
  
  
  Atomic field updater
&lt;/h2&gt;

&lt;p&gt;Protects a member field of an object for thread safety&lt;br&gt;
AtomicReferenceFieldUpdater, AtomicIntegerFieldUpdater,&lt;br&gt;
AtomicLongFieldUpdater&lt;br&gt;
Construction:&lt;br&gt;
AtomicReferenceFieldUpdater updater = AtomicReferenceFieldUpdater.newUpdater(holder&lt;br&gt;
class, field class, field name)&lt;br&gt;
compareAndSet(object, expected value, new value)&lt;br&gt;
The object's field must be marked volatile&lt;/p&gt;

&lt;h2&gt;
  
  
  Atomic Accumulator
&lt;/h2&gt;

&lt;p&gt;More efficient than AtomicLong's increment methods&lt;br&gt;
LongAdder&lt;br&gt;
increment() is the increment method&lt;br&gt;
Reason:&lt;br&gt;
When many threads compete, CAS contention becomes fierce and efficiency drops. Under contention, LongAdder maintains multiple accumulation units (Cell variables), and different threads accumulate into different cells, which reduces CAS retry failures and thereby improves efficiency.&lt;/p&gt;
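A rough sketch of LongAdder under contention (the thread and iteration counts are arbitrary):

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    static long count(int threads, int perThread) throws InterruptedException {
        LongAdder counter = new LongAdder();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            // Under contention each thread tends to hit its own Cell, so CAS retries stay rare
            ts[i] = new Thread(() -> { for (int j = 0; j < perThread; j++) counter.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter.sum();   // sum() folds the base value and all cells together
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count(8, 100_000)); // 800000
    }
}
```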

&lt;h2&gt;
  
  
  Unsafe object
&lt;/h2&gt;

&lt;p&gt;Provides very low-level methods for operating on memory and threads. An Unsafe instance cannot be obtained directly; it can only be obtained through reflection.&lt;br&gt;
It is the underlying implementation of the atomic classes&lt;/p&gt;

&lt;h2&gt;
  
  
  Unsafe CAS operation
&lt;/h2&gt;

&lt;p&gt;Get the offset of the field:&lt;br&gt;
long offset = unsafe.objectFieldOffset(Xxx.class.getDeclaredField("fieldName"))&lt;br&gt;
Perform the CAS operation:&lt;br&gt;
unsafe.compareAndSwapInt(object, field offset, expected value, new value)&lt;br&gt;
compareAndSwapObject and similar methods exist for the other data types&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Shared model memory</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 05:24:50 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/shared-model-memory-44gm</link>
      <guid>https://dev.to/chelsealiu0822/shared-model-memory-44gm</guid>
      <description>&lt;h2&gt;
  
  
  Java memory model
&lt;/h2&gt;

&lt;p&gt;JMM is the Java Memory Model. It defines the abstract concepts of main memory and working memory, which at the bottom correspond to CPU&lt;br&gt;
registers, caches, hardware memory, CPU instruction optimization, and so on.&lt;br&gt;
JMM is reflected in the following aspects:&lt;br&gt;
Atomicity: guarantees that instructions are not affected by thread context switches&lt;br&gt;
Visibility: guarantees that instructions are not affected by CPU caches&lt;br&gt;
Ordering: guarantees that instructions are not affected by CPU instruction-parallel optimization&lt;/p&gt;

&lt;h2&gt;
  
  
  Visibility
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--82Dxn5u8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzghckuyzusmpllxmzeo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--82Dxn5u8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzghckuyzusmpllxmzeo.png" alt="Image description" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MMAjR7WF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/le38eje91h20quyy3kv3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MMAjR7WF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/le38eje91h20quyy3kv3.png" alt="Image description" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4PTylFYn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95s7je1y40st9b1lvrbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4PTylFYn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95s7je1y40st9b1lvrbp.png" alt="Image description" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use volatile to solve visibility problems&lt;br&gt;
volatile can be used to modify member variables and static member variables. It prevents a thread from caching the variable's&lt;br&gt;
value in its own working memory: every read of a volatile variable fetches the value from main memory, and every write goes&lt;br&gt;
directly to main memory&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Locking can also solve visibility problems&lt;br&gt;
A volatile write by one thread is visible to other threads, but volatile does not guarantee atomicity.&lt;br&gt;
A synchronized block guarantees both the atomicity of the code block and the visibility of the variables accessed within it.&lt;br&gt;
The drawback is that synchronized is a heavyweight operation with relatively low performance.&lt;br&gt;
volatile suits scenarios where one thread writes and multiple threads read.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
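The one-writer/many-readers scenario above is the classic stop-flag (the sleep is only there to let the worker start spinning first):

```java
public class StopFlag {
    // Without volatile, the JIT may hoist the read of `running` and the worker could spin forever
    static volatile boolean running = true;

    static boolean demo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) { /* busy work */ }
        });
        worker.start();
        Thread.sleep(100);
        running = false;      // volatile write: guaranteed visible to the worker's next read
        worker.join(5000);
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // true: the worker observed the write and stopped
    }
}
```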

&lt;h2&gt;
  
  
  Orderliness
&lt;/h2&gt;

&lt;p&gt;The JVM may adjust the execution order of statements as long as correctness is not affected. This feature is called instruction&lt;br&gt;
reordering, and under multi-threading instruction reordering can affect correctness.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Monitor</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 05:20:01 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/monitor-3ing</link>
      <guid>https://dev.to/chelsealiu0822/monitor-3ing</guid>
      <description>&lt;h2&gt;
  
  
  Java object header
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R9OmRrox--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojdngay6kfzb4kwvqd9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R9OmRrox--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojdngay6kfzb4kwvqd9f.png" alt="Image description" width="800" height="189"&gt;&lt;/a&gt;&lt;br&gt;
The Klass word points to the class metadata of the object.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fN_HJT8F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w3soixml3s690msjxor9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fN_HJT8F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w3soixml3s690msjxor9.png" alt="Image description" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each Java object can be associated with a Monitor object. When synchronized locks the object (as a heavyweight&lt;br&gt;
lock), a pointer to the Monitor object is stored in the Mark Word of the object header.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zcehggup--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1bzneekjzaqleew09jtc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zcehggup--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1bzneekjzaqleew09jtc.png" alt="Image description" width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a thread holds the lock, the Monitor's Owner is set to that thread, and there can only be one Owner.&lt;br&gt;
Other threads that execute the same synchronized block enter the EntryList and become BLOCKED.&lt;/p&gt;

&lt;h2&gt;
  
  
  lightweight lock
&lt;/h2&gt;

&lt;p&gt;Use case: an object is accessed by multiple threads, but their accesses are staggered in time (no&lt;br&gt;
contention). In that case lightweight locks can be used as an optimization. Lightweight locks are transparent to the user; the syntax is still synchronized&lt;/p&gt;

&lt;h2&gt;
  
  
  process:
&lt;/h2&gt;

&lt;p&gt;Create a lock record: each thread's stack frame contains a Lock Record structure that can store the lock object's Mark Word.&lt;br&gt;
Point the lock record's Object reference at the lock object, then try to swap the Object's Mark Word with CAS so that the Mark Word value is stored in the lock record.&lt;br&gt;
If the CAS succeeds, the object header now stores the lock record address with state bits 00, meaning the object is locked by this thread.&lt;br&gt;
If it fails, there are two cases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If another thread already holds a lightweight lock on the Object, there is contention, and the lock-inflation process begins.&lt;/li&gt;
&lt;li&gt;If the thread is reentering its own synchronized lock, another Lock Record is added as the reentry count.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0_kO0Bmi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z6os8b7zhm3mzzcapq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0_kO0Bmi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1z6os8b7zhm3mzzcapq4.png" alt="Image description" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When exiting the synchronized block (unlocking), if the lock record's value is null, this was a reentry: the lock record is removed, decrementing the reentry count by one.&lt;br&gt;
If the lock record's value is not null, CAS is used to restore the Mark Word value to the object header.&lt;br&gt;
If it succeeds, unlocking succeeds.&lt;br&gt;
If it fails, the lightweight lock has been inflated into a heavyweight lock, and the heavyweight unlocking process is entered.&lt;/p&gt;

&lt;h2&gt;
  
  
  lock inflation
&lt;/h2&gt;

&lt;p&gt;If the CAS operation fails when trying to add a lightweight lock, one possibility is that another thread already holds a lightweight lock on this object (contention). In this case the lock must be inflated from a lightweight lock into a heavyweight lock.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dRsP2SLz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/529mfpqh7bwcjow2qkw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dRsP2SLz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/529mfpqh7bwcjow2qkw8.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;br&gt;
When Thread-1 attempts lightweight locking, Thread-0 has already placed a lightweight lock on the object.&lt;br&gt;
Thread-1's lightweight locking therefore fails and it enters the inflation process:&lt;br&gt;
it requests a Monitor lock for the Object and makes the Object's Mark Word point to the heavyweight lock (Monitor) address,&lt;br&gt;
then puts itself into the Monitor's EntryList in the BLOCKED state.&lt;br&gt;
When Thread-0 exits the synchronized block and unlocks, its CAS to restore the Mark Word value to the object header fails. It then enters the heavyweight unlocking process: it finds the Monitor object from the Monitor address, sets the Owner to null, and wakes up the BLOCKED threads in the EntryList.&lt;/p&gt;

&lt;h2&gt;
  
  
  Spin optimization
&lt;/h2&gt;

&lt;p&gt;When competing for a heavyweight lock, spinning can also be used as an optimization. If the current thread spins successfully (that is, the lock-holding thread exits the synchronized block and releases the lock during the spin), the current thread avoids blocking.&lt;br&gt;
The JVM adaptively increases or decreases the number of spins based on whether the previous spin succeeded.&lt;/p&gt;

&lt;h2&gt;
  
  
  biased lock
&lt;/h2&gt;

&lt;p&gt;With lightweight locks, even when there is no contention, a CAS operation is still needed on every reentry.&lt;br&gt;
Java 6 introduced biased locking as a further optimization: the thread ID is set into the object's Mark Word with CAS only the first time; afterwards, if the thread finds its own ID there, there is no contention and no further CAS is needed.&lt;br&gt;
As long as no contention occurs, the object belongs to that thread.&lt;/p&gt;

&lt;h2&gt;
  
  
  When an object is created:
&lt;/h2&gt;

&lt;p&gt;If biased locking is enabled (the default), then after an object is created its Mark Word's last 3 bits are 101 (value 0x05), and the first 54 bits will hold the thread ID.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At this point its thread ID, epoch, and age are all 0.&lt;/li&gt;
&lt;li&gt;Biased locking is delayed by default and does not take effect immediately at program startup. To avoid the delay, add the VM parameter -XX:BiasedLockingStartupDelay=0.&lt;/li&gt;
&lt;li&gt;If biased locking is not enabled, then after the object is created the Mark Word value is 0x01 and the last three bits are 001. Its hashcode and age are both 0, and the hashcode is only filled in the first time hashCode() is used.&lt;/li&gt;
&lt;li&gt;-XX:-UseBiasedLocking disables biased locking.&lt;/li&gt;
&lt;li&gt;The order of lock use: biased locks first, then lightweight locks, and finally heavyweight locks.&lt;/li&gt;
&lt;li&gt;Calling hashCode() revokes the biased lock, because a biased Mark Word has no room left to store the hashcode (31 bits are required).&lt;/li&gt;
&lt;li&gt;When another thread uses the biased lock object, the biased lock is upgraded to a lightweight lock.&lt;/li&gt;
&lt;li&gt;The bias is also revoked when wait/notify is called: these two methods work only with heavyweight locks, so the lock is upgraded to a heavyweight lock.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  batch rebiasing
&lt;/h2&gt;

&lt;p&gt;If an object is accessed by multiple threads but without contention, an object biased to thread T1 can still be rebiased to T2; rebiasing resets the object's Thread ID.&lt;br&gt;
When biased-lock revocations for a class exceed the threshold of 20, the JVM concludes it biased the wrong thread and rebiases these objects to the locking thread at the next lock operation.&lt;/p&gt;

&lt;h2&gt;
  
  
  batch revocation
&lt;/h2&gt;

&lt;p&gt;When biased-lock revocations exceed the threshold of 40, the JVM makes all objects of the entire class non-biasable, and newly created objects of that class cannot be biased either.&lt;/p&gt;

&lt;h2&gt;
  
  
  lock elimination
&lt;/h2&gt;

&lt;p&gt;The JIT just-in-time compiler optimizes the bytecode: when the lock object is a local variable that cannot escape the method, the lock is automatically eliminated, achieving the optimization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wait and notify
&lt;/h2&gt;

&lt;p&gt;If the Owner thread's condition is not satisfied, it calls the wait method, enters the WaitSet, and changes to the WAITING state.&lt;br&gt;
After the Owner thread calls notify or notifyAll to wake it, the woken thread enters the EntryList to compete for the lock again; it does not get the lock immediately.&lt;br&gt;
wait(long timeout) waits with a time limit: it wakes automatically when the time is up, and can also be woken earlier by other threads.&lt;/p&gt;

&lt;h2&gt;
  
  
  The difference between Sleep and Wait
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;sleep is a Thread method, and wait is an Object method.&lt;/li&gt;
&lt;li&gt;Sleep does not need to be used in conjunction with synchronized, but wait needs to be used in conjunction with synchronized.&lt;/li&gt;
&lt;li&gt;sleep will not release the object lock while sleeping, but wait will release the object lock while waiting.&lt;/li&gt;
&lt;li&gt;The thread state during sleep and wait(timeout) is the same: TIMED_WAITING.&lt;/li&gt;
&lt;li&gt;notify wakes one thread in the WaitSet at random, which can lead to spurious wakeups.&lt;/li&gt;
&lt;/ol&gt;
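&lt;p&gt;A minimal sketch of point 3 (all class and field names here are illustrative): the main thread can enter the synchronized block while another thread is inside wait(), precisely because wait() released the object lock.&lt;/p&gt;

```java
// Sketch: wait() releases the object lock, sleep() would not.
// A second thread can enter the synchronized block only because
// the first thread gave the lock up inside wait().
public class SleepVsWait {
    static final Object LOCK = new Object();
    static volatile boolean entered = false;

    public static boolean demo() throws InterruptedException {
        entered = false;
        Thread waiter = new Thread(new Runnable() {
            public void run() {
                synchronized (LOCK) {
                    try {
                        LOCK.wait(2000); // releases LOCK while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });
        waiter.start();
        Thread.sleep(100); // let the waiter reach wait()
        synchronized (LOCK) { // succeeds because wait() released the lock
            entered = true;
            LOCK.notify();
        }
        waiter.join();
        return entered;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints true
    }
}
```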

&lt;h2&gt;
  
  
  guarded suspension pattern
&lt;/h2&gt;

&lt;p&gt;Application scenario: one thread waits for the execution result of another thread.&lt;br&gt;
A result needs to be passed from one thread to another; associate the two threads with a GuardedObject.&lt;br&gt;
If results flow continuously from one thread to another, a message queue can be used instead (producer/consumer).&lt;br&gt;
In the JDK, the implementations of join and Future use this pattern.&lt;br&gt;
Because one side must wait for the other's result, it is classified as a synchronous pattern.&lt;/p&gt;
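&lt;p&gt;A minimal sketch of the GuardedObject described above (method names are illustrative, not the JDK's): one thread waits for the result with a timeout, guarding against spurious wakeups, while another thread delivers it.&lt;/p&gt;

```java
// Minimal sketch of the guarded suspension pattern: one thread waits
// on a GuardedObject until another thread delivers the result.
public class GuardedObject {
    private Object response;

    // Wait up to timeout ms for the result, guarding against spurious wakeups.
    public synchronized Object get(long timeout) throws InterruptedException {
        long begin = System.currentTimeMillis();
        long passed = 0;
        while (response == null) {
            long remaining = timeout - passed;
            // stop waiting once the remaining time is no longer positive
            if (Long.compare(remaining, 0) != 1) break;
            wait(remaining);
            passed = System.currentTimeMillis() - begin;
        }
        return response;
    }

    // Deliver the result and wake the waiting thread.
    public synchronized void complete(Object value) {
        this.response = value;
        notifyAll();
    }

    public static void main(String[] args) throws InterruptedException {
        final GuardedObject guarded = new GuardedObject();
        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(100); // simulate producing the result
                } catch (InterruptedException e) { }
                guarded.complete("done");
            }
        }).start();
        System.out.println(guarded.get(1000)); // prints done
    }
}
```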

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4mq18uTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrua1hejowfdwrjcj94g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4mq18uTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrua1hejowfdwrjcj94g.png" alt="Image description" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  join principle
&lt;/h2&gt;

&lt;p&gt;join is implemented using the guarded suspension pattern: the calling thread waits on the target Thread object until it is no longer alive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nQvybPcM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h055hxtvaw66q8h3dk7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nQvybPcM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h055hxtvaw66q8h3dk7y.png" alt="Image description" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  guarded suspension pattern - extended
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bdAvYM3Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndj99gnpxepdglvgo9kk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bdAvYM3Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndj99gnpxepdglvgo9kk.png" alt="Image description" width="800" height="322"&gt;&lt;/a&gt;&lt;br&gt;
The Futures in the picture are like the mailboxes on the ground floor of an apartment building (each mailbox has a room number): t0, t2, and t4 on the left are like residents waiting for mail, and t1, t3, and t5 on the right are like mail carriers.&lt;br&gt;
If GuardedObject instances need to be used across multiple classes, passing them as parameters is inconvenient, so design an intermediary class for decoupling. This decouples the result waiters from the result producers and also supports managing multiple tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Asynchronous mode-producer/consumer mode
&lt;/h2&gt;

&lt;p&gt;Producing and consuming results need not correspond thread-to-thread.&lt;br&gt;
A message queue can be used to balance the resources of producer and consumer threads.&lt;br&gt;
The producer is only responsible for producing results and does not care how the data is processed; the consumer only processes the data.&lt;br&gt;
The message queue has a capacity limit: when it is full no more data is added, and when it is empty nothing is consumed.&lt;br&gt;
The various blocking queues in the JDK use this pattern.&lt;/p&gt;
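&lt;p&gt;The pattern can be sketched with the JDK's ArrayBlockingQueue (the class name below is illustrative): put() blocks when the queue is full and take() blocks when it is empty, which balances producer and consumer.&lt;/p&gt;

```java
import java.util.concurrent.ArrayBlockingQueue;

// Sketch of the producer/consumer pattern using the JDK's
// ArrayBlockingQueue: put() blocks when full, take() blocks when empty.
public class ProducerConsumer {
    public static int demo() throws InterruptedException {
        final ArrayBlockingQueue queue = new ArrayBlockingQueue(2); // capacity limit
        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i != 5; i++) {
                        queue.put(Integer.valueOf(i)); // blocks when queue is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i != 5; i++) {
            sum += ((Integer) queue.take()).intValue(); // blocks when empty
        }
        producer.join();
        return sum; // 0+1+2+3+4
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 10
    }
}
```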

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YciAIR37--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7ovb5sr07x0aa7xpime.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YciAIR37--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7ovb5sr07x0aa7xpime.png" alt="Image description" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Park &amp;amp; Unpark
&lt;/h2&gt;

&lt;p&gt;park and unpark are methods of the LockSupport class.&lt;br&gt;
Compared with Object's wait &amp;amp; notify:&lt;br&gt;
wait, notify, and notifyAll must be used together with an Object monitor, while park and unpark need not;&lt;br&gt;
park and unpark block and wake threads individually (per thread), while notify wakes one waiting thread at random;&lt;br&gt;
unpark can be called before park, but notify cannot come before wait.&lt;br&gt;
Each thread has a Parker object consisting of _counter, _cond, and _mutex.&lt;br&gt;
_counter determines whether the thread needs to rest, and _cond stores the resting thread.&lt;br&gt;
When _counter = 0 the thread must rest; calling unpark sets _counter to 1.&lt;/p&gt;
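&lt;p&gt;A small sketch of the "unpark first" point (class name is illustrative): the permit (_counter) is remembered, so a later park() returns immediately instead of blocking.&lt;/p&gt;

```java
import java.util.concurrent.locks.LockSupport;

// Sketch: unlike notify before wait, unpark may be issued BEFORE park.
// The permit (_counter) is remembered, so the later park returns at once.
public class ParkDemo {
    public static long demo() {
        Thread t = Thread.currentThread();
        LockSupport.unpark(t);          // permit granted first
        long begin = System.nanoTime();
        LockSupport.park();             // consumes the permit, returns immediately
        return System.nanoTime() - begin;
    }

    public static void main(String[] args) {
        System.out.println(demo());     // elapsed nanos, far below one second
    }
}
```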

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bYvdxKZd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbc31sse2kf762bambca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bYvdxKZd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbc31sse2kf762bambca.png" alt="Image description" width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SyXaDzRP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yi5mm0eoqguc7dio6nt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SyXaDzRP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yi5mm0eoqguc7dio6nt1.png" alt="Image description" width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mvelbh2e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/42j947e1qkjt9s5g4hel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mvelbh2e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/42j947e1qkjt9s5g4hel.png" alt="Image description" width="800" height="542"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Thread state transitions:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;NEW --&gt; RUNNABLE: when the t.start() method is called.&lt;/li&gt;
&lt;li&gt;RUNNABLE &lt;--&gt; WAITING:&lt;br&gt;
When the t thread calls obj.wait(), it goes RUNNABLE --&gt; WAITING.&lt;br&gt;
When obj.notify(), obj.notifyAll(), or t.interrupt() is called: if t then wins the lock competition, it goes WAITING --&gt; RUNNABLE; if it loses, it goes WAITING --&gt; BLOCKED.&lt;br&gt;
When the current thread calls t.join(), the current thread goes RUNNABLE --&gt; WAITING; when thread t ends or the current thread's interrupt() is called, it goes WAITING --&gt; RUNNABLE.&lt;br&gt;
When the current thread calls LockSupport.park(), it goes RUNNABLE --&gt; WAITING; calling unpark(target thread) or the target thread's interrupt() moves it WAITING --&gt; RUNNABLE.&lt;/li&gt;
&lt;li&gt;RUNNABLE &lt;--&gt; TIMED_WAITING: the timed variants (obj.wait(n), t.join(n), Thread.sleep(n), LockSupport.parkNanos(n)) follow the same transitions, with an automatic wake-up when the timeout expires.&lt;/li&gt;
&lt;li&gt;RUNNABLE &lt;--&gt; BLOCKED: if thread t fails the competition when acquiring the object lock with synchronized(obj), it goes RUNNABLE --&gt; BLOCKED. When the thread holding the obj lock finishes its synchronized block, all threads BLOCKED on that object are woken to compete again; the winner goes BLOCKED --&gt; RUNNABLE and the others remain BLOCKED.&lt;/li&gt;
&lt;li&gt;RUNNABLE --&gt; TERMINATED: when all of the thread's code has finished running.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  multiple locks
&lt;/h2&gt;

&lt;p&gt;Subdivide the lock granularity.&lt;br&gt;
Benefit: concurrency is improved.&lt;br&gt;
Drawback: if a thread needs to acquire multiple locks at the same time, deadlock can easily occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  deadlock
&lt;/h2&gt;

&lt;p&gt;Thread t1 acquires the A object lock, and then wants to acquire the B object lock.&lt;br&gt;
Thread t2 acquires the B object lock, and then wants to acquire the A object lock.&lt;br&gt;
To detect deadlocks, use the jconsole tool, or use jps to find the process ID and then jstack to locate the deadlock.&lt;/p&gt;
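&lt;p&gt;The t1/t2 scenario above can also be reproduced and detected programmatically with the JDK's ThreadMXBean (class names in the sketch are illustrative):&lt;/p&gt;

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

// Sketch: the classic t1/t2 lock-ordering deadlock, detected
// programmatically with ThreadMXBean.findDeadlockedThreads().
public class DeadlockDemo {
    static final Object A = new Object();
    static final Object B = new Object();
    static final CountDownLatch BOTH_HOLD = new CountDownLatch(2);

    static void locker(final Object first, final Object second) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                synchronized (first) {
                    BOTH_HOLD.countDown();
                    try {
                        BOTH_HOLD.await(); // wait until both threads hold a lock
                    } catch (InterruptedException e) { }
                    synchronized (second) { }
                }
            }
        });
        t.setDaemon(true); // let the JVM exit despite the deadlock
        t.start();
    }

    public static int demo() throws InterruptedException {
        locker(A, B); // t1: holds A, wants B
        locker(B, A); // t2: holds B, wants A
        BOTH_HOLD.await();
        Thread.sleep(500); // give both threads time to block on the second lock
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads();
        return ids == null ? 0 : ids.length;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 2
    }
}
```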

&lt;h2&gt;
  
  
  livelock
&lt;/h2&gt;

&lt;p&gt;Two threads keep changing each other's termination condition, so neither can ever finish.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starvation problem:
&lt;/h2&gt;

&lt;p&gt;A thread whose priority is too low is never scheduled by the CPU and therefore can never finish.&lt;br&gt;
Ordered locking solves the deadlock problem but can cause starvation, which can be addressed with ReentrantLock.&lt;/p&gt;

&lt;h2&gt;
  
  
  ReentrantLock
&lt;/h2&gt;

&lt;p&gt;Compared with synchronized, it has the following features:&lt;br&gt;
it is interruptible;&lt;br&gt;
a timeout can be set;&lt;br&gt;
it can be configured as a fair lock;&lt;br&gt;
it supports multiple condition variables.&lt;br&gt;
Like synchronized, it supports reentrancy.&lt;br&gt;
A ReentrantLock object must be created first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qUEH4zWb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4x8rg4rvc6diraxcgqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qUEH4zWb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4x8rg4rvc6diraxcgqh.png" alt="Image description" width="375" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interruptible:&lt;br&gt;
while t1 is waiting for the lock, another thread can call t1's interrupt() method to break off the wait.&lt;br&gt;
The lock() method is not interruptible; the lockInterruptibly() method is.&lt;/p&gt;
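&lt;p&gt;A sketch of the difference (class and variable names are illustrative): t1 waits with lockInterruptibly(), and another thread breaks off the wait with interrupt().&lt;/p&gt;

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: lockInterruptibly() lets another thread break off the wait
// for the lock with interrupt(); plain lock() would keep waiting.
public class InterruptibleLock {
    public static boolean demo() throws InterruptedException {
        final ReentrantLock lock = new ReentrantLock();
        final boolean[] interrupted = new boolean[1];
        lock.lock(); // main thread holds the lock
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                try {
                    lock.lockInterruptibly(); // waits for the lock
                    lock.unlock();
                } catch (InterruptedException e) {
                    interrupted[0] = true;    // the wait was broken off
                }
            }
        });
        t1.start();
        Thread.sleep(100);   // let t1 start waiting
        t1.interrupt();      // break off t1's wait
        t1.join();
        lock.unlock();
        return interrupted[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints true
    }
}
```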

&lt;h2&gt;
  
  
  lock timeout
&lt;/h2&gt;

&lt;p&gt;If the lock still cannot be obtained after waiting for a period of time, give up waiting.&lt;br&gt;
tryLock() returns a boolean: false means the lock was not acquired. The timed variant also supports being interrupted.&lt;/p&gt;
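&lt;p&gt;A sketch of the timed tryLock (names are illustrative): the main thread holds the lock, so the second thread gives up after the timeout instead of blocking forever.&lt;/p&gt;

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: tryLock gives up instead of blocking forever. The main
// thread holds the lock, so t1's timed tryLock returns false.
public class TryLockDemo {
    public static boolean demo() throws InterruptedException {
        final ReentrantLock lock = new ReentrantLock();
        final boolean[] acquired = new boolean[1];
        lock.lock(); // main thread holds the lock throughout
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                try {
                    acquired[0] = lock.tryLock(200, TimeUnit.MILLISECONDS);
                    if (acquired[0]) {
                        lock.unlock();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        t1.start();
        t1.join();
        lock.unlock();
        return acquired[0]; // false: the lock was still held
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints false
    }
}
```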

&lt;h2&gt;
  
  
  fair lock
&lt;/h2&gt;

&lt;p&gt;ReentrantLock is unfair by default (a released lock is not granted in the order of the blocking queue). Fairness exists to solve the starvation problem: pass true to the constructor to get a fair lock. A fair lock reduces concurrency.&lt;/p&gt;

&lt;h2&gt;
  
  
  condition variable
&lt;/h2&gt;

&lt;p&gt;When a condition is not met, the thread enters that condition's waitSet to wait.&lt;br&gt;
ReentrantLock supports multiple condition variables: different conditions have different waitSets (waiting rooms), and threads can be woken per waitSet.&lt;br&gt;
Create a condition variable:&lt;br&gt;
Condition cond1 = lock.newCondition()&lt;br&gt;
The lock must be held before calling await.&lt;br&gt;
cond1.await() enters waiting&lt;br&gt;
cond1.signal() wakes one thread waiting on cond1&lt;br&gt;
cond1.signalAll() wakes all threads waiting on cond1&lt;/p&gt;
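&lt;p&gt;A minimal sketch of a condition variable in use (names are illustrative): the consumer awaits on one condition while holding the lock, and the producer signals it.&lt;/p&gt;

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a condition variable on a ReentrantLock. The consumer
// awaits in the condition's waitSet; the producer signals it.
public class ConditionDemo {
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition hasItem = lock.newCondition();
    static int item = 0;
    static int consumed = 0;

    public static int demo() throws InterruptedException {
        item = 0;
        consumed = 0;
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                lock.lock();               // must hold the lock before await
                try {
                    while (item == 0) {
                        hasItem.await();   // releases the lock while waiting
                    }
                    consumed = item;
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    lock.unlock();
                }
            }
        });
        consumer.start();
        Thread.sleep(100);
        lock.lock();
        try {
            item = 42;
            hasItem.signal();              // wake one thread waiting on hasItem
        } finally {
            lock.unlock();
        }
        consumer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 42
    }
}
```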

</description>
    </item>
    <item>
      <title>shared model</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 04:39:51 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/shared-model-1gia</link>
      <guid>https://dev.to/chelsealiu0822/shared-model-1gia</guid>
      <description>&lt;h2&gt;
  
  
  critical section
&lt;/h2&gt;

&lt;p&gt;When multiple threads access shared resources and their read and write instructions interleave, concurrency problems can occur.&lt;br&gt;
A code block that performs multi-threaded reads and writes on shared resources is called a critical section.&lt;/p&gt;

&lt;h2&gt;
  
  
  race condition
&lt;/h2&gt;

&lt;p&gt;When multiple threads execute in a critical section and the result cannot be predicted because the execution order of their instructions varies, this is called a race condition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Methods to avoid race conditions in critical sections:
&lt;/h2&gt;

&lt;p&gt;Blocking solutions: synchronized, lock&lt;br&gt;
Non-blocking solution: atomic variables&lt;/p&gt;

&lt;h2&gt;
  
  
  synchronized
&lt;/h2&gt;

&lt;p&gt;The object in synchronized can be imagined as a room with a single entrance, which only one person can enter at a time to do the computation.&lt;br&gt;
When thread t1 holds the lock and is computing in the room and a context switch occurs, t2 wants to enter the room but finds it has no key and cannot enter, so it becomes BLOCKED; t1 then continues its computation, and only after t1 releases the lock can t2 take the lock and enter the room to compute.&lt;br&gt;
synchronized uses an object lock to guarantee the atomicity of critical-section code: to the outside, the code in the critical section is indivisible and will not be broken up by a thread switch.&lt;br&gt;
synchronized on an instance method locks the this object.&lt;br&gt;
synchronized on a static method locks the class object.&lt;br&gt;
Member variables and static variables need thread-safety consideration when they are shared and both read and written.&lt;br&gt;
If an object referenced by a local variable escapes the scope of the method, thread safety also needs to be considered.&lt;/p&gt;
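&lt;p&gt;A minimal sketch of the "room" idea (class and field names are illustrative): count++ is a read-modify-write that two threads would otherwise interleave; the object lock makes it atomic.&lt;/p&gt;

```java
// Sketch: without synchronized, count++ (read-modify-write) interleaves
// across threads and loses updates; with the object lock it is atomic.
public class SyncCounter {
    static int count = 0;
    static final Object ROOM = new Object(); // the "room" with one entrance

    public static int demo() throws InterruptedException {
        count = 0;
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i != 10000; i++) {
                    synchronized (ROOM) {  // only one thread inside at a time
                        count++;
                    }
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 20000
    }
}
```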

&lt;h2&gt;
  
  
  Thread-safe classes
&lt;/h2&gt;

&lt;p&gt;String&lt;br&gt;
Integer and all the wrapper classes&lt;br&gt;
StringBuffer&lt;br&gt;
Random&lt;br&gt;
Vector&lt;br&gt;
Hashtable&lt;br&gt;
The classes under the java.util.concurrent package&lt;br&gt;
When multiple threads call methods on the same instance of these classes, each individual method is atomic (thread-safe), but a combination of their methods is not necessarily atomic.&lt;/p&gt;

&lt;p&gt;String and Integer are both immutable classes: their internal state cannot be changed, so their methods are thread-safe.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The state of the thread</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 04:35:26 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/the-state-of-the-thread-3k98</link>
      <guid>https://dev.to/chelsealiu0822/the-state-of-the-thread-3k98</guid>
      <description>&lt;h1&gt;
  
  
  from OS
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Esvs6fQG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6h8i04q6p02ctlrk07d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Esvs6fQG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m6h8i04q6p02ctlrk07d.png" alt="Image description" width="767" height="330"&gt;&lt;/a&gt;&lt;br&gt;
When an application is to be processed, it creates a thread.&lt;br&gt;
The thread is then allocated the required resources (such as a network connection) and enters the READY queue.&lt;br&gt;
When the thread scheduler (like a process scheduler) assigns the thread a processor, it moves to the RUNNING queue.&lt;br&gt;
When the thread needs some other event to be triggered, which is outside its control (like another process completing), it transitions from RUNNING to the WAITING queue.&lt;br&gt;
When the application can delay the thread's processing, it can put the thread to sleep for a specific amount of time; the thread then transitions from RUNNING to the DELAYED queue.&lt;br&gt;
An example of delaying a thread is snoozing an alarm: after it rings the first time and is not switched off by the user, it rings again after a specific amount of time, and during that time the thread is asleep.&lt;/p&gt;

&lt;p&gt;When a thread issues an I/O request and cannot proceed until it completes, it transitions from RUNNING to the BLOCKED queue.&lt;br&gt;
After the process completes, the thread transitions from RUNNING to FINISHED.&lt;br&gt;
The difference between the WAITING and BLOCKED transitions is that in WAITING the thread waits for a signal from another thread or for another process to complete, so the wait time is bounded, while in BLOCKED there is no specified time (it depends on when the user gives input).&lt;br&gt;
&lt;a href="https://www.geeksforgeeks.org/thread-states-in-operating-systems/"&gt;https://www.geeksforgeeks.org/thread-states-in-operating-systems/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  from Java
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ne7OvoJ4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbk3x3lmjq0plbnz0fo6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ne7OvoJ4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbk3x3lmjq0plbnz0fo6.png" alt="Image description" width="800" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NEW – a newly created thread that has not yet started the execution&lt;br&gt;
RUNNABLE – either running or ready for execution but it’s waiting for resource allocation&lt;br&gt;
BLOCKED – waiting to acquire a monitor lock to enter or re-enter a synchronized block/method&lt;br&gt;
WAITING – waiting for some other thread to perform a particular action without any time limit&lt;br&gt;
TIMED_WAITING – waiting for some other thread to perform a specific action for a specified period&lt;br&gt;
TERMINATED – has completed its execution&lt;/p&gt;
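&lt;p&gt;The states above can be observed directly with Thread.getState() (the class name below is illustrative):&lt;/p&gt;

```java
// Sketch: observing three of the six Thread.State values listed above.
public class StateDemo {
    public static Thread.State[] demo() throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(200);    // TIMED_WAITING while sleeping
                } catch (InterruptedException e) { }
            }
        });
        Thread.State before = t.getState();   // NEW
        t.start();
        Thread.sleep(100);
        Thread.State during = t.getState();   // normally TIMED_WAITING
        t.join();
        Thread.State after = t.getState();    // TERMINATED
        return new Thread.State[] { before, during, after };
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.State[] s = demo();
        System.out.println(s[0] + " " + s[1] + " " + s[2]);
    }
}
```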

</description>
    </item>
    <item>
      <title>Main thread and daemon thread</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 04:28:02 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/main-thread-and-daemon-thread-1eem</link>
      <guid>https://dev.to/chelsealiu0822/main-thread-and-daemon-thread-1eem</guid>
<description>&lt;p&gt;As long as any non-daemon thread is still running, the Java process will not end.&lt;br&gt;
A daemon thread, by contrast, is forcibly ended as soon as all non-daemon threads have finished, even if its own code has not completed. Call setDaemon(true) to mark a thread as a daemon.&lt;br&gt;
The garbage collector is a daemon thread.&lt;br&gt;
The Acceptor and Poller threads in Tomcat are both daemon threads.&lt;/p&gt;
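&lt;p&gt;A minimal sketch (class name is illustrative): setDaemon(true) must be called before start(), and the JVM exits without waiting for the daemon to finish.&lt;/p&gt;

```java
// Sketch: setDaemon(true) must be called before start(); the JVM
// exits without waiting for this thread's long sleep to finish.
public class DaemonDemo {
    public static boolean demo() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(60000); // JVM will not wait for this
                } catch (InterruptedException e) { }
            }
        });
        t.setDaemon(true); // must come before start()
        t.start();
        return t.isDaemon();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true; the JVM still exits
    }
}
```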

</description>
    </item>
    <item>
      <title>interrupt method</title>
      <dc:creator>ChelseaLiu0822</dc:creator>
      <pubDate>Thu, 30 Nov 2023 04:27:22 +0000</pubDate>
      <link>https://dev.to/chelsealiu0822/interrupt-method-1md4</link>
      <guid>https://dev.to/chelsealiu0822/interrupt-method-1md4</guid>
<description>&lt;p&gt;Interrupting a blocked thread (in wait, sleep, or join) throws InterruptedException and clears the interrupt status.&lt;br&gt;
Interrupting a normally running thread does not clear the interrupt status.&lt;br&gt;
The interrupt status can therefore be used to ask a thread to stop.&lt;/p&gt;
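&lt;p&gt;A small sketch of both cases (names are illustrative): interrupting a sleeping thread throws InterruptedException and clears the flag, while interrupting a running thread leaves the flag set.&lt;/p&gt;

```java
// Sketch: interrupting a sleeping thread throws InterruptedException and
// CLEARS the flag; interrupting a running thread leaves the flag set.
public class InterruptDemo {
    static volatile boolean flagAfterSleep = true;

    public static boolean[] demo() throws InterruptedException {
        flagAfterSleep = true;
        Thread sleeper = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    // the status was cleared before the exception was thrown
                    flagAfterSleep = Thread.currentThread().isInterrupted();
                }
            }
        });
        sleeper.start();
        Thread.sleep(100);
        sleeper.interrupt();
        sleeper.join();

        Thread.currentThread().interrupt(); // interrupt a running thread
        boolean flagRunning = Thread.currentThread().isInterrupted(); // stays true
        Thread.interrupted(); // clear it again so later code is unaffected
        return new boolean[] { flagAfterSleep, flagRunning };
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = demo();
        System.out.println(r[0] + " " + r[1]); // prints false true
    }
}
```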

&lt;h1&gt;
  
  
  Two-phase termination pattern
&lt;/h1&gt;

&lt;p&gt;Gracefully terminate thread T2 from thread T1, giving T2 a chance to clean up (release locks and resources).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--La4jqC9g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/go8hso3yt4763ttgjwc9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--La4jqC9g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/go8hso3yt4763ttgjwc9.png" alt="Image description" width="538" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interrupting a parked thread does not clear the interrupt status.&lt;br&gt;
While the interrupt flag is true, a subsequent call to park will not block (it returns immediately).&lt;/p&gt;

&lt;h1&gt;
  
  
  Not recommended method
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NOnzhuaw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wud2hfge66a968jqczaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NOnzhuaw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wud2hfge66a968jqczaq.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These methods can break the invariants of synchronized code blocks and cause deadlock.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
