<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dawid Kałuża</title>
    <description>The latest articles on DEV Community by Dawid Kałuża (@dawidkaluza).</description>
    <link>https://dev.to/dawidkaluza</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3281724%2F110196a9-4396-4cad-b7a7-6363f5d767ca.jpg</url>
      <title>DEV Community: Dawid Kałuża</title>
      <link>https://dev.to/dawidkaluza</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dawidkaluza"/>
    <language>en</language>
    <item>
      <title>Why isn't gRPC more popular?</title>
      <dc:creator>Dawid Kałuża</dc:creator>
      <pubDate>Mon, 19 Jan 2026 13:56:25 +0000</pubDate>
      <link>https://dev.to/dawidkaluza/why-grpc-is-not-more-popular-1gn3</link>
      <guid>https://dev.to/dawidkaluza/why-grpc-is-not-more-popular-1gn3</guid>
      <description>&lt;p&gt;gRPC is a highly-efficient communication framework that can be even 10 times faster than REST. However, it's still not as widely known as its alternatives. Is there any specific reason behind that?&lt;/p&gt;

&lt;h2&gt;
  
  
  What is gRPC?
&lt;/h2&gt;

&lt;p&gt;gRPC is a framework that implements RPC (Remote Procedure Call). It was initially created by Google, but it's now open source. Its high performance mainly comes from using HTTP/2 for communication and Protocol Buffers for message serialization. Beyond that, it provides multiple other features, such as authentication, client-side load balancing and bidirectional streaming.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key stand-out features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Stream-based, reactive RPC
&lt;/h3&gt;

&lt;p&gt;RPC is the idea of letting you, a developer, make network calls as if they were local functions. Its goal is to make such external calls feel fluent and natural among other invocations, without bloated, protocol-specific preparation. The API for making the calls is often prepared upfront, so you just need to import it to use it. &lt;/p&gt;

&lt;p&gt;Other RPC implementations try to accomplish that by entirely hiding the fact that the call actually goes over the wire. Unfortunately, such a simple approach doesn't make our lives easier. There is a significant difference between local and remote calls, especially in reliability. Local calls share local hardware that can be efficiently used to coordinate them. Remote calls go through the network, which is inherently unreliable. Packets can get altered, arrive out of order, be broken, be delayed or get lost (these are just a few examples off the top of my head; the list could be longer). For those reasons, completely hiding from developers the fact that RPC calls are remote can lead to major performance degradation, to put it mildly, simply because developers might not notice that a given call is remote, not local.&lt;/p&gt;

&lt;p&gt;gRPC's approach is different. It hides remote calls behind stream-based, reactive interfaces. This relatively simple, yet effective change makes you think about those calls differently and makes it more prominent that they actually go to external services. Additionally, it suits the streaming functionality provided by the framework, where both sides can exchange data in streams that are either unidirectional or bidirectional.&lt;/p&gt;
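&lt;p&gt;To make the contrast concrete, here is a minimal Python sketch (all names here are hypothetical, not part of any gRPC SDK) of a call hidden behind a local-looking function versus a stream-based interface:&lt;/p&gt;

```python
from typing import Iterator

# Hypothetical stand-in for the network; a real gRPC stub would go over HTTP/2.
def fake_network_call(request: str) -> str:
    return request.upper()

# Classic RPC style: the remote call looks exactly like a local function,
# so nothing in the signature signals that it may be slow, fail or time out.
def get_greeting(name: str) -> str:
    return fake_network_call("hello " + name)

# gRPC-like style: the signature exposes a stream, making it prominent
# that data arrives incrementally from an external service.
def greet_many(names: Iterator[str]) -> Iterator[str]:
    for name in names:
        yield fake_network_call("hello " + name)

print(get_greeting("ada"))                      # HELLO ADA
print(list(greet_many(iter(["bob", "eve"]))))   # ['HELLO BOB', 'HELLO EVE']
```

&lt;p&gt;In real gRPC the streaming stubs are generated from the proto service definition; the point here is only the shape of the interface.&lt;/p&gt;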

&lt;h3&gt;
  
  
  Efficient and concise HTTP/2 communication
&lt;/h3&gt;

&lt;p&gt;Communication itself is based on HTTP/2. It's an upgrade over the previous version, HTTP/1.1, with better performance as the main improvement, achieved by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;encoding messages in bytes rather than text;&lt;/li&gt;
&lt;li&gt;multiplexing, allowing requests to be processed fully in parallel over the same connection;&lt;/li&gt;
&lt;li&gt;compression of redundant headers on subsequent messages.&lt;/li&gt;
&lt;/ul&gt;
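&lt;p&gt;The header-compression point can be illustrated with a toy sketch. Real HTTP/2 uses HPACK, which also involves a static table and Huffman coding; this simplified Python version shows only the dynamic-table idea of replacing repeated headers with short indexes:&lt;/p&gt;

```python
# Toy sketch of HPACK-style header compression (assumption: heavily
# simplified; real HPACK adds a static table and Huffman coding).
class HeaderCompressor:
    def __init__(self):
        self.table = []  # dynamic table of (name, value) pairs

    def encode(self, headers):
        out = []
        for pair in headers:
            if pair in self.table:
                # Repeated header: send only its index, not the full text.
                out.append(("idx", self.table.index(pair)))
            else:
                self.table.append(pair)
                out.append(("lit", pair))
        return out

c = HeaderCompressor()
first = c.encode([("user-agent", "curl/8"), (":path", "/a")])
second = c.encode([("user-agent", "curl/8"), (":path", "/b")])
print(first)   # both headers sent literally
print(second)  # user-agent is now just a table index
```

&lt;p&gt;On long-lived connections with many requests, repeated headers such as user-agent or cookies shrink to a byte or two each.&lt;/p&gt;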

&lt;p&gt;Even though alternatives to gRPC, such as REST, can also use HTTP/2, it's not always the case, since a lot of &lt;em&gt;mature&lt;/em&gt; technology still relies on the older HTTP/1.1. According to certain sources, &lt;a href="https://almanac.httparchive.org/en/2024/http" rel="noopener noreferrer"&gt;HTTP/1.1 is still used by about 20% of the World Wide Web&lt;/a&gt; - so make sure you're not among them. HTTP/2 has been supported by browsers and other software for some time now, so it's rather safe to start using it today. In gRPC, HTTP/2 is the minimum required version. &lt;/p&gt;

&lt;h3&gt;
  
  
  Small messages and fast serialization via Protocol Buffers
&lt;/h3&gt;

&lt;p&gt;gRPC uses Protocol Buffers (aka protobuf) for schema definition and message serialization. Here, messages are much smaller and can be processed much faster compared to JSON or XML. &lt;/p&gt;

&lt;p&gt;Schemas are prepared in advance in the universal proto schema language. The schemas are then used to generate appropriate objects in target programming languages (e.g., classes in Java) at build time. Next, at runtime, messages are serialized to a compact sequence of bytes that can be processed much faster. The speed is improved for a couple of reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smaller messages result in less data to process.&lt;/li&gt;
&lt;li&gt;Schema details are processed in advance, so serialization can focus on the message content (e.g., fields are identified by compact indexes, not full names).&lt;/li&gt;
&lt;li&gt;Protobuf supports a number of data types, so processing can be better optimized for each of them (e.g., int32 and int64).&lt;/li&gt;
&lt;li&gt;Messages can contain extra metadata that optimizes processing (e.g., knowing a value's length upfront can improve buffer allocation and concurrency).&lt;/li&gt;
&lt;/ul&gt;
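&lt;p&gt;To illustrate the compactness, here is a simplified Python sketch of two protobuf wire-format ideas: varint encoding and numeric field keys (real protobuf adds more wire types, zigzag encoding for signed values, and so on):&lt;/p&gt;

```python
# Toy sketch of protobuf's varint and field-key encoding (simplified).

def encode_varint(n: int) -> bytes:
    # Varint: 7 bits per byte; the high bit marks "more bytes follow",
    # so small numbers take a single byte instead of a fixed 4 or 8.
    out = bytearray()
    while True:
        byte = n % 128
        n = n // 128
        if n:
            out.append(byte + 128)  # set the continuation bit
        else:
            out.append(byte)
            return bytes(out)

def field_key(field_number: int, wire_type: int) -> bytes:
    # Fields are identified by a compact numeric key, not a full name:
    # the field number shifted left by 3 bits, plus the wire type.
    return encode_varint(field_number * 8 + wire_type)

# An int field number 1 (wire type 0 = varint) with value 150:
print((field_key(1, 0) + encode_varint(150)).hex())  # 089601
```

&lt;p&gt;Three bytes for the whole field, versus the 11 bytes of the JSON fragment &lt;code&gt;"a":150,&lt;/code&gt; with its full field name.&lt;/p&gt;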

&lt;p&gt;According to certain resources, &lt;a href="https://auth0.com/blog/beating-json-performance-with-protobuf/" rel="noopener noreferrer"&gt;Protobuf can be 6 times faster than JSON.&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Other interesting features
&lt;/h3&gt;

&lt;p&gt;Besides the features mentioned above, there are also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication: supports auth mechanisms such as SSL/TLS, ALTS or token-based authentication (e.g., OAuth2).&lt;/li&gt;
&lt;li&gt;Interceptors: middleware software that can address cross-cutting concerns (e.g., caching, logging).&lt;/li&gt;
&lt;li&gt;Error handling mechanism: custom status codes that are more accurate and relevant to RPC nature.&lt;/li&gt;
&lt;li&gt;Cancellation: either clients or servers can cancel RPC at any time.&lt;/li&gt;
&lt;li&gt;Deadlines (aka timeouts): clients can specify how long they are willing to wait for an RPC to complete before it's terminated.&lt;/li&gt;
&lt;li&gt;Retries: in case of failures, clients can configure retries with extendable strategies such as exponential backoff.&lt;/li&gt;
&lt;li&gt;Load balancing: client-side load balancing, with various strategies available and ability to extend them.&lt;/li&gt;
&lt;li&gt;Health checks: servers can expose their health status, which clients can use to verify server availability.&lt;/li&gt;
&lt;li&gt;Flow control: a mechanism that is used to ensure that a receiver of messages won't get overwhelmed by a fast sender.&lt;/li&gt;
&lt;li&gt;Compression: reduces the amount of data that is sent over the wire.&lt;/li&gt;
&lt;/ul&gt;
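&lt;p&gt;As an illustration of how deadlines and retries with exponential backoff interact, here is a hand-rolled Python sketch (gRPC provides this via configuration; the function and parameter names below are made up):&lt;/p&gt;

```python
import time

# Sketch of client-side retry with exponential backoff and a deadline,
# mimicking what gRPC offers built in (all names here are hypothetical).
def call_with_retries(rpc, deadline_s=2.0, base_delay_s=0.01, max_attempts=4):
    start = time.monotonic()
    for attempt in range(max_attempts):
        try:
            return rpc()
        except ConnectionError:
            delay = base_delay_s * (2 ** attempt)  # 10ms, 20ms, 40ms, ...
            # Give up early if the next retry would blow the deadline.
            if time.monotonic() + delay - start >= deadline_s:
                raise TimeoutError("deadline exceeded")
            time.sleep(delay)
    raise ConnectionError("all retries failed")

# A flaky call that fails twice, then succeeds on the third attempt:
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) == 3:
        return "ok"
    raise ConnectionError()

print(call_with_retries(flaky))  # ok
```

&lt;p&gt;The deadline bounds the total time spent across all attempts, which is exactly why gRPC treats deadlines and retries as complementary mechanisms.&lt;/p&gt;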

&lt;p&gt;Check out &lt;a href="https://grpc.io/docs/guides/" rel="noopener noreferrer"&gt;the official guides&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;p&gt;If you are interested to see how gRPC looks in code, you can take a look at &lt;a href="https://docs.spring.io/spring-grpc/reference/getting-started.html" rel="noopener noreferrer"&gt;examples shown in Spring docs.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  If it's that good, why is it not that popular?
&lt;/h2&gt;

&lt;p&gt;Once you read the features above and add things up, you might be thinking: okay, it sounds really compelling, yet there is not much noise about it around the net. Why is that? Unfortunately, even though gRPC is quite good, there are certain drawbacks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Support for only a handful of programming languages
&lt;/h3&gt;

&lt;p&gt;To use gRPC, you have to use external libraries that provide client and server SDKs, and these are maintained only for a handful of the most popular languages. As of 2026, the supported languages are: C# / .NET, C++, Dart, Go, Java, Kotlin, Node, Objective-C, PHP, Python, Ruby and Swift.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requires dedicated tooling
&lt;/h3&gt;

&lt;p&gt;gRPC does not work out of the box within a given language. I already mentioned that it requires its own libraries, but in some cases it can also require additional components (e.g., a specific proxy setup, as explained in the &lt;a href="https://grpc.io/docs/platforms/web/basics/" rel="noopener noreferrer"&gt;gRPC Web tutorial&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Alternatives, such as REST, can often be implemented using only core language features (e.g., HTTP and JSON), since they build on mechanisms that have been around longer and are more mature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Not available in older HTTP versions
&lt;/h3&gt;

&lt;p&gt;gRPC requires HTTP/2. Depending on how you look at it, that can be either an advantage or a disadvantage. In certain cases (a minority, but still), it can be a blocker preventing gRPC adoption. Unfortunately, there are still components (e.g., proxies) that downgrade the communication to HTTP/1.1. &lt;/p&gt;

&lt;h3&gt;
  
  
  Slightly more complicated
&lt;/h3&gt;

&lt;p&gt;gRPC achieves high performance at the cost of simplicity - messages are compact and sent as bytes, sacrificing readability. That can make work harder when we need to review message content, and a developer might be forced to use specialized tooling for debugging, which may itself be limited. All this can be overwhelming and not worth it, especially if high performance is not a major concern.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where it can be successfully used
&lt;/h2&gt;

&lt;p&gt;As you can see, gRPC comes with some promising technical solutions, but also with certain drawbacks that can make it hard to use. Nonetheless, it can still be used successfully, most often for &lt;strong&gt;internal communication between services&lt;/strong&gt;. Having full control of the communication on both sides fits gRPC and its features nicely. The technology in use is rather limited, so there are no issues with a lack of support for specific devices. Complex communication can be thoroughly logged by the services if there is such a need. On top of that, support for various communication styles (unary or streaming, uni- or bidirectional), high performance and a couple of built-in mechanisms commonly used in network communication (load balancing, retries, timeouts) make gRPC a more than sufficient choice for such scenarios.&lt;/p&gt;

&lt;p&gt;I hope this succinct overview has helped you understand gRPC slightly better.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn more
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://grpc.io/about/" rel="noopener noreferrer"&gt;About gRPC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@i.gorton/scaling-up-rest-versus-grpc-benchmark-tests-551f73ed88d4" rel="noopener noreferrer"&gt;Scaling up REST versus gRPC Benchmark Tests&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>grpc</category>
      <category>communication</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Database locking revisited</title>
      <dc:creator>Dawid Kałuża</dc:creator>
      <pubDate>Thu, 26 Jun 2025 14:38:44 +0000</pubDate>
      <link>https://dev.to/dawidkaluza/database-locking-revisited-b78</link>
      <guid>https://dev.to/dawidkaluza/database-locking-revisited-b78</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;For me, it was hard to find an article that would thoroughly explain how I can make database operations in my app thread-safe. Most of them focus on explaining the @Transactional annotation and other framework mechanisms, but hardly ever elaborate on how it works in the database, or still believe that serializable transactions are executed sequentially (not true!). Here, I go into those details and explain with examples how you can leverage your database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is it even worth reviewing the topic?
&lt;/h2&gt;

&lt;p&gt;Some people might be wondering whether it's necessary to review this topic, since it's already explained in the databases' docs. However, I feel there is such a need. When I was trying to understand all of this - transactions, isolation, optimistic vs. pessimistic locking, etc. - I could not really find a place that covers the topic as a whole. I understand that it's a broad area, but still. Mostly there was just a reference to the documentation and that was it. Don't get me wrong, I don't underestimate reading documentation - it's often the source of truth - but sometimes it may be challenging and overwhelming, especially since it's separate from the language or library you use, and you might not be sure how your code actually translates to interactions with your database.&lt;/p&gt;

&lt;p&gt;Moreover, I was honestly surprised when I realized how much misinformation there is about this topic. For instance, as already mentioned, many authors still believe that serializable isolation is based on pessimistic locking, or even worse - that it executes transactions sequentially. Obviously, this is not true, at least not in the case of Postgres, nor in other commonly used relational databases. All isolation levels there are based on optimistic locks, which I will explain later on.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do you need to already know?
&lt;/h2&gt;

&lt;p&gt;In this article, I assume that you understand the basics of concurrent processing, such as race conditions or locking. Concurrent processing is the reason why we have to use one or another thread-safety mechanism in databases. Additionally, it's worth explicitly pointing out that I will be working with Postgres, since it's one of the most popular choices and the one I'm most familiar with. Many of its behaviors can be directly translated to other databases - however, not all of them, so make sure to take that into consideration if you are working with another RDBMS.&lt;/p&gt;

&lt;p&gt;Without further ado, let's get into it!&lt;/p&gt;

&lt;h2&gt;
  
  
  What race conditions are databases vulnerable to?
&lt;/h2&gt;

&lt;p&gt;Firstly, let's quickly recap the basic nomenclature.&lt;/p&gt;

&lt;p&gt;In relational databases, in order to maintain strong data consistency and isolation between concurrent operations, a set of queries is enclosed within a transaction. In a transaction, either everything or nothing succeeds - there is nothing in between. In case of any error within a transaction, all previous operations are reverted and all subsequent operations are cancelled, and we say that such a transaction is &lt;strong&gt;rolled back&lt;/strong&gt;. In case there are no errors, all changes are submitted, and we say that such a transaction is &lt;strong&gt;committed&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Transactions, even though they guarantee isolation according to the &lt;a href="https://en.wikipedia.org/wiki/ACID" rel="noopener noreferrer"&gt;ACID principles&lt;/a&gt;, do not guarantee ultimate isolation by default. Obviously, you could protect your data (e.g., a single database row) in such a way that it could not be accessed by two concurrent processes at a time, but that would significantly limit the concurrency capabilities of your database. Thankfully, this issue is already addressed by various complex algorithms that allow for more concurrent throughput. I will get back to those later.&lt;/p&gt;

&lt;p&gt;Let's start by listing the race conditions we may face as database users. Race conditions are vulnerabilities in software that may lead to nondeterministic, unexpected and incorrect outcomes in a concurrent environment (e.g., deadlocks, lost updates, outdated reads). In relational databases, we can distinguish the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dirty reads&lt;/strong&gt;: Reading uncommitted changes from other transactions. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Non-repeatable reads&lt;/strong&gt;: Reading the same data within a transaction twice returns different results (e.g., some new changes got committed between the reads, causing returning inconsistent data). &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phantom reads&lt;/strong&gt;: Running the same search query twice within a transaction returns different sets of results (similar to a non-repeatable read, but refers to search queries rather than reads of specific rows).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dirty writes&lt;/strong&gt;: Overwriting data that another client has written, but not yet committed. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lost updates&lt;/strong&gt;: Overwriting changes made by another transaction without incorporating them. (e.g., transaction A reads some value, transaction B reads the same value, then transaction A updates the value, and transaction B also updates the value, but does not consider the write made by transaction A, which may result in inconsistent state of the value).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write skew&lt;/strong&gt;: Making a query, preparing write operations based on its results, then submitting the write - but at the time the write is submitted, the initial query would produce different results and possibly change the prepared write operations (e.g., book a ticket for an event if the number of booked tickets is lower than 100, but between checking already booked tickets and booking a new one, some other transaction commits a new booking for the same event and the number of booked tickets reaches its limit - yet we do not see that change in our transaction).&lt;/li&gt;
&lt;/ul&gt;
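&lt;p&gt;A lost update is easy to demonstrate even without a database. In this Python sketch, a plain dict stands in for the accounts table, and two interleaved "transactions" both read before either one writes:&lt;/p&gt;

```python
# Deterministic simulation of a lost update: each "transaction" reads,
# computes a new value, then writes it back, with an unlucky interleaving.
accounts = {1: 50}

# Both transactions read balance=50 before either one writes.
read_a = accounts[1]
read_b = accounts[1]

# Transaction A deposits 10 based on its read...
accounts[1] = read_a + 10

# ...then transaction B deposits 25 based on ITS (now stale) read,
# silently overwriting A's deposit.
accounts[1] = read_b + 25

print(accounts[1])  # 75, not the expected 85: A's update was lost
```

&lt;p&gt;The same interleaving of two read committed Postgres transactions produces the same silent overwrite, which is exactly what the stronger isolation levels are there to prevent.&lt;/p&gt;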

&lt;p&gt;Hopefully each of those conditions is clear, but if you have doubts, no worries! I provide practical examples for each of them in the next sections. &lt;/p&gt;

&lt;p&gt;Regarding examples, I will be working on an accounts table with 1000 accounts, each with balance=50 and has_loan=false, created with the following queries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE accounts (
    id BIGSERIAL PRIMARY KEY,
    balance NUMERIC(9, 2) NOT NULL,
    has_loan BOOLEAN NOT NULL
);

INSERT INTO accounts (balance, has_loan) (SELECT 50, false FROM GENERATE_SERIES(1,1000) n);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Avoiding race conditions by appropriate isolation levels
&lt;/h2&gt;

&lt;p&gt;When you start a new transaction, you start it with a certain isolation level. The goal of isolation levels is to make operations within a transaction thread-safe. There are weaker but more performant levels, and stronger but less performant ones. You can define the level explicitly (e.g., in the &lt;code&gt;BEGIN&lt;/code&gt; clause that starts a new transaction, or via the isolation value of the &lt;a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/Transactional.html" rel="noopener noreferrer"&gt;@Transactional&lt;/a&gt; annotation in Spring), or your database will use a default isolation level. In Postgres, there are the following isolation levels, ordered from the weakest to the strongest:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read committed&lt;/strong&gt; (default): Read operations see only committed changes, or changes made in current transaction. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repeatable read&lt;/strong&gt;: Makes subsequent reads repeatable by working on a specific snapshot of the database throughout the entire transaction. It's based on an algorithm known as &lt;a href="https://en.wikipedia.org/wiki/Snapshot_isolation" rel="noopener noreferrer"&gt;Snapshot Isolation&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serializable&lt;/strong&gt;: Automatically rolls back transactions that are interfered with by other concurrent transactions. It's based on an algorithm known as &lt;a href="https://wiki.postgresql.org/wiki/SSI" rel="noopener noreferrer"&gt;Serializable Snapshot Isolation&lt;/a&gt;, essentially an extension of the Snapshot Isolation used in repeatable read that additionally puts non-blocking locks on accessed data to check whether any concurrent operation would cause a serialization failure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First of all, observant readers (or those who knew that already) will notice that all of the available isolation levels prevent dirty reads and writes - it's impossible to face them in Postgres, so we don't need to care about them. These conditions are almost always prevented in other databases as well.&lt;/p&gt;

&lt;p&gt;Next, all isolation levels are based on &lt;a href="https://en.wikipedia.org/wiki/Multiversion_concurrency_control" rel="noopener noreferrer"&gt;MVCC (Multiversion concurrency control)&lt;/a&gt;, which is a non-blocking concurrency mechanism. In other words, we can say that &lt;strong&gt;all isolation levels are actually based on optimistic locking&lt;/strong&gt;, since conflicting operations are allowed to proceed without blocking, but are eventually rejected if a conflict happened. &lt;/p&gt;

&lt;p&gt;Okay, but this is just pure theory, and veeery brief at that, so let's get into more details of each level separately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Read committed
&lt;/h3&gt;

&lt;p&gt;This level is quite simple. Once you start a transaction with it, you won't see any uncommitted changes from other transactions. It's the default isolation level used by Postgres.&lt;/p&gt;

&lt;p&gt;For example, consider having 2 sessions that are concurrently working on the same account. Both read its data, as well as update it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #2&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #2&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #2&amp;gt; UPDATE accounts SET balance = 100 WHERE id = 1;
&amp;gt; UPDATE 1

session #2&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |  100.00 | f
&amp;gt; (1 row)

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #2&amp;gt; COMMIT;
&amp;gt; COMMIT

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |  100.00 | f
&amp;gt; (1 row)

session #1&amp;gt; UPDATE accounts SET balance = 25 WHERE id = 1;
&amp;gt; UPDATE 1

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, in session #1 we did not see the changes made in session #2 until they got committed. Once that happened, we could see them. &lt;/p&gt;

&lt;p&gt;Besides that, the update wasn't blocked, even though another session was concurrently working on the same data - and if we hadn't re-selected the row, we wouldn't even have noticed that the change took place. &lt;/p&gt;

&lt;p&gt;The update would have been blocked if there were two pending updates (or any conflicting writes) on the same rows in different transactions. Since changes are submitted on commit, the database waits until one change is committed (or rolled back) before continuing with the next ones. Changes are applied sequentially to maintain data integrity.&lt;/p&gt;

&lt;p&gt;Here is an example of that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #2&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #1&amp;gt; UPDATE accounts SET balance = 75 WHERE id = 1;
&amp;gt; UPDATE 1

--- the session is blocked till other change is either committed or rolled back
session #2&amp;gt; UPDATE accounts SET balance = 25 WHERE id = 1;

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT

--- now it's unblocked
session #2&amp;gt; 
&amp;gt; UPDATE 1

session #2&amp;gt; COMMIT;
&amp;gt; COMMIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isolation level prevents only dirty writes and reads, but is still vulnerable to other race conditions.&lt;/p&gt;

&lt;p&gt;Keep in mind one really important thing - this is the default isolation level, and it's vulnerable to most race conditions. During transaction execution, you can't tell whether you faced them or not. There are no warnings or messages informing the user that, for example, a lost update may have happened. Don't get me wrong, this level is great in its simplicity for certain use-cases; however, it can be quite risky if not used properly. &lt;/p&gt;

&lt;p&gt;Imagine that you are developing a Spring (or whatever) application, use Postgres as your database, and all you do is enclose the operations within a request in a transaction. Many people believe that's all you need to make your app thread-safe. Obviously, it is not enough! Unless you are the only user of your app, or you perform only simple queries within it (e.g., &lt;code&gt;SELECT posts ORDER BY publication_time DESC&lt;/code&gt;). It might be a good idea to change the default isolation level to a stronger one, which can be done via &lt;a href="https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-DEFAULT-TRANSACTION-ISOLATION" rel="noopener noreferrer"&gt;Postgres configuration&lt;/a&gt;, or to somehow force developers to always explicitly define the isolation level, so that they are more conscious of what their code does.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pros and cons
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pros:

&lt;ul&gt;
&lt;li&gt;The most performant isolation level.&lt;/li&gt;
&lt;li&gt;Fits great where you execute only one query within a transaction, or many queries but each on completely unrelated data (e.g., fetch signed-in user data, fetch most recent posts), or work with data that cannot be modified concurrently (e.g., data that belongs to the user, like their posts, comments, etc., which the user can access only from one device at a time), or where slight inconsistencies can be accepted.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Cons:

&lt;ul&gt;
&lt;li&gt;Narrow set of use-cases where it can be applied.&lt;/li&gt;
&lt;li&gt;Vulnerable to most race conditions.&lt;/li&gt;
&lt;li&gt;Used improperly, it can lead to unrecoverable data consistency issues. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Repeatable read
&lt;/h3&gt;

&lt;p&gt;This isolation level is based on an algorithm known as &lt;a href="https://en.wikipedia.org/wiki/Snapshot_isolation" rel="noopener noreferrer"&gt;Snapshot Isolation&lt;/a&gt;. It essentially takes a snapshot of the data at the beginning of the transaction and works on this snapshot until the transaction is finished. &lt;/p&gt;

&lt;p&gt;By doing that, it's ensured that repeated reads of the same rows always return the same data, meaning it prevents non-repeatable reads. Moreover, this algorithm prevents phantom reads too, so not only do reads of the same rows remain consistent, but reads matching a given search condition remain consistent across the transaction as well. This is great for read-only transactions, as it addresses all read-related race conditions. &lt;/p&gt;

&lt;p&gt;Look at the example below, where again, we have 2 sessions that are doing operations concurrently.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL REPEATABLE READ;
&amp;gt; BEGIN

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1 OR id = 2 ORDER BY id;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt;   2 |   50.00 | f
&amp;gt; (2 rows)

session #2&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #2&amp;gt; UPDATE accounts SET balance = 75 WHERE id = 1;
&amp;gt; UPDATE 1

session #2&amp;gt; UPDATE accounts SET balance = 25 WHERE id = 2;
&amp;gt; UPDATE 1

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1 OR id = 2 ORDER BY id;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt;   2 |   50.00 | f
&amp;gt; (2 rows)

session #2&amp;gt; COMMIT;
&amp;gt; COMMIT

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1 OR id = 2 ORDER BY id;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt;   2 |   50.00 | f
&amp;gt; (2 rows)

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, no matter what happened in other transactions, or whether there were some changes committed in the meantime, results returned within session #1 are consistent.&lt;/p&gt;

&lt;p&gt;What about writes? Well, it depends. When we go to the &lt;a href="https://www.postgresql.org/docs/current/transaction-iso.html#XACT-REPEATABLE-READ" rel="noopener noreferrer"&gt;Postgres documentation, the section about the repeatable read isolation level&lt;/a&gt;, we can find a note about updating rows that are being concurrently updated by another transaction. Basically, if we try to update a row that is already being updated by a concurrent transaction, our repeatable read transaction will wait until the other transaction finishes. If that transaction is committed, our transaction will be rolled back with an &lt;code&gt;ERROR: could not serialize access due to concurrent update&lt;/code&gt; message. If that transaction is rolled back, our transaction will make its changes, as they are no longer impacted by the concurrent change. Putting it shortly: repeatable read prevents lost updates.&lt;/p&gt;

&lt;p&gt;Let's see that on examples.&lt;/p&gt;

&lt;p&gt;In case of a commit in the other transaction, our transaction throws an SQL error and, according to the documentation, should be rolled back and optionally retried.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL REPEATABLE READ;
&amp;gt; BEGIN

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #2&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #2&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #2&amp;gt; UPDATE accounts SET balance = 100 WHERE id = 1;
&amp;gt; UPDATE 1

--- session #1 is blocked till the transaction in session #2 is finished
session #1&amp;gt; UPDATE accounts SET balance = 75 WHERE id = 1;

session #2&amp;gt; COMMIT;
&amp;gt; COMMIT

--- throws SQL error, since concurrent change was committed
session #1&amp;gt; 
&amp;gt;  ERROR:  could not serialize access due to concurrent update

session #1&amp;gt; ROLLBACK;
&amp;gt; ROLLBACK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the other transaction rolls back, we are able to commit our changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL REPEATABLE READ;
&amp;gt; BEGIN

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #2&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #2&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #2&amp;gt; UPDATE accounts SET balance = 100 WHERE id = 1;
&amp;gt; UPDATE 1

--- session #1 is blocked till the transaction in session #2 is finished 
session #1&amp;gt; UPDATE accounts SET balance = 75 WHERE id = 1;

session #2&amp;gt; ROLLBACK;
&amp;gt; ROLLBACK

--- the update is performed successfully, since the other transaction rolled back
session #1&amp;gt; 
&amp;gt;  UPDATE 1

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By reviewing the examples, we can actually say that &lt;strong&gt;the repeatable read isolation level applies optimistic locking to updated rows&lt;/strong&gt;. It tries to update a row, but if it turns out that a concurrent transaction committed a change to that row, the transaction is aborted. Sounds similar to optimistic locking implemented on the app level via an additional version column (as explained &lt;a href="https://docs.spring.io/spring-data/relational/reference/jdbc/entity-persistence.html#jdbc.entity-persistence.optimistic-locking" rel="noopener noreferrer"&gt;here&lt;/a&gt; and &lt;a href="https://vladmihalcea.com/optimistic-locking-version-property-jpa-hibernate/" rel="noopener noreferrer"&gt;here&lt;/a&gt;), huh? Yeah, it works on pretty much the same assumptions; it's just implemented entirely within the database and does not require any special handling in the application. &lt;/p&gt;

&lt;p&gt;I don't know why, but it's not widely known that optimistic locking can be achieved entirely by using the proper isolation level. Granted, other database systems might implement it differently or not support it at all, but still, Postgres is a really popular choice. Maybe app-level optimistic locking became that popular because it's often fully automated by frameworks and can be successfully used with every database? Maybe. Nonetheless, it's worth understanding how your database approaches this, so you don't end up applying optimistic locking twice: once on the app level via a version column, and once on the database level via repeatable reads.&lt;/p&gt;
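
&lt;p&gt;To make the comparison concrete, here is a minimal in-memory sketch of the app-level, version-column approach (the &lt;code&gt;VersionedStore&lt;/code&gt; class and all names in it are hypothetical, not from any framework): an update succeeds only if the version read earlier still matches, which is essentially what repeatable read gives you for free on the database side.&lt;/p&gt;

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of app-level optimistic locking via a version column.
// Frameworks like Spring Data automate this pattern; conceptually, the update
// only succeeds if the version read earlier still matches.
public class VersionedStore {
    record Account(long id, long balance, long version) {}

    private final Map<Long, Account> rows = new ConcurrentHashMap<>();

    public void insert(Account a) { rows.put(a.id(), a); }

    public Account find(long id) { return rows.get(id); }

    // Emulates: UPDATE accounts SET balance = ?, version = version + 1
    //           WHERE id = ? AND version = ?
    public synchronized boolean update(long id, long newBalance, long expectedVersion) {
        Account current = rows.get(id);
        if (current == null || current.version() != expectedVersion) {
            return false; // concurrent modification detected: caller must retry
        }
        rows.put(id, new Account(id, newBalance, expectedVersion + 1));
        return true;
    }

    public static void main(String[] args) {
        VersionedStore store = new VersionedStore();
        store.insert(new Account(1, 50, 0));

        Account read = store.find(1);                          // both "transactions" read version 0
        boolean first = store.update(1, 75, read.version());   // first update wins, bumps version
        boolean second = store.update(1, 100, read.version()); // stale version, rejected

        System.out.println(first + " " + second);
    }
}
```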

&lt;p&gt;Okay, but what about write skews? What if we run a search query and, based on its result, make a decision about an update? What if we update different rows in concurrent transactions? These kinds of operations are still allowed at this level.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pros and cons
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pros:

&lt;ul&gt;
&lt;li&gt;Prevents all read-related race conditions, even for more complex queries.&lt;/li&gt;
&lt;li&gt;Regarding writes, prevents dirty writes and lost updates.&lt;/li&gt;
&lt;li&gt;A good balance between performance and protection against most race conditions.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Cons:

&lt;ul&gt;
&lt;li&gt;Still susceptible to write skews.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Serializable
&lt;/h3&gt;

&lt;p&gt;Guarantees the strictest isolation and prevents all possible race conditions. &lt;/p&gt;

&lt;p&gt;This isolation level is based on an algorithm known as &lt;a href="https://wiki.postgresql.org/wiki/SSI" rel="noopener noreferrer"&gt;Serializable Snapshot Isolation (SSI)&lt;/a&gt;. As mentioned in the &lt;a href="https://www.postgresql.org/docs/current/transaction-iso.html#XACT-SERIALIZABLE" rel="noopener noreferrer"&gt;Postgres documentation, Serializable Isolation Level section&lt;/a&gt;, this level works pretty much the same as repeatable read, but additionally uses non-blocking predicate locks to make sure that concurrent transactions did not change anything during the transaction that would cause a serialization failure. If that is the case, the transaction rolls back with an &lt;code&gt;ERROR: could not serialize access due to read/write dependencies among transactions&lt;/code&gt; message. If the transaction wasn't interfered with, it commits successfully. So we can say that the serializable isolation level is based on optimistic locking and prevents all race conditions. In theory this is great, but in practice it's slightly more complicated.&lt;/p&gt;

&lt;p&gt;First of all, it is possible to get false positives when it comes to serialization failures, meaning you can get a serialization error even though your transaction was not really interfered with by a concurrent transaction. This is caused by the way predicate locks are acquired. Predicate locks can be acquired on each row separately (e.g., each modified row), but they can be promoted to page-level locks (a page in the B-Tree sense, or to put it simply: multiple rows stored close to each other), or even to relation-level locks (e.g., tables, indexes, relations between tables). So with serializable transactions, you always need to be ready to get serialization errors and, optionally, retry such transactions. Unfortunately, this can visibly impact the performance of the application.&lt;/p&gt;
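
&lt;p&gt;Since any serializable transaction can fail this way, a common pattern is to wrap the transactional work in a bounded retry loop. Below is a hedged sketch; the &lt;code&gt;SerializationFailure&lt;/code&gt; exception is a stand-in for your driver's actual error (with JDBC you would catch &lt;code&gt;SQLException&lt;/code&gt; and check for SQLSTATE &lt;code&gt;40001&lt;/code&gt;), and the simulated transaction here fails twice before committing:&lt;/p&gt;

```java
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class SerializableRetry {
    // Stand-in for the driver-level error; with JDBC you would catch
    // SQLException and check getSQLState().equals("40001") instead.
    static class SerializationFailure extends RuntimeException {}

    // Retries the transactional work a bounded number of times; anything
    // other than a serialization failure propagates immediately.
    static <T> T withRetries(Callable<T> tx, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return tx.call();
            } catch (SerializationFailure e) {
                if (attempt >= maxAttempts) throw e;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger calls = new AtomicInteger();
        // Simulated transaction: fails twice with a serialization error, then commits.
        String result = withRetries(() -> {
            if (calls.incrementAndGet() < 3) throw new SerializationFailure();
            return "committed";
        }, 5);
        System.out.println(result + " after " + calls.get() + " attempts");
    }
}
```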

&lt;p&gt;Theoretically, the rate of these errors can be minimized. For instance, if you have thousands rather than hundreds of rows, page-level locks would not conflict that often. Additionally, you can try to modify the &lt;a href="https://www.postgresql.org/docs/current/runtime-config-locks.html#RUNTIME-CONFIG-LOCKS" rel="noopener noreferrer"&gt;locking configuration&lt;/a&gt; to minimize the rate of serialization errors. Nonetheless, you cannot eliminate them completely.&lt;/p&gt;

&lt;p&gt;Without further ado, let's get into examples.&lt;/p&gt;

&lt;p&gt;Let's consider a situation where we want to give loans to accounts whose balance is equal to or greater than 100. Concurrently, we will update one account to increase its balance to 125.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL SERIALIZABLE;
&amp;gt; BEGIN

session #2&amp;gt; BEGIN ISOLATION LEVEL SERIALIZABLE;
&amp;gt; BEGIN

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

session #1&amp;gt; UPDATE accounts SET balance = 125 WHERE id = 1;
&amp;gt;  UPDATE 1

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT

session #2&amp;gt; UPDATE accounts SET has_loan = true WHERE balance &amp;gt;= 100;
&amp;gt;  UPDATE 0

session #2&amp;gt; COMMIT;
&amp;gt; COMMIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might think this is not how it's supposed to work and expect a serialization error on session #2, because its update did not incorporate the changes committed in session #1. However, for such concurrent transactions, Postgres checks whether committing them would be correct if they were performed sequentially &lt;strong&gt;in any order&lt;/strong&gt;, so it is allowed to reorder concurrent transactions, no matter which one started first. Here, even though session #1 started first, the changes in these two sessions would have no impact on each other if all changes in session #2 were committed first and those in session #1 second. Since the transactions are concurrent anyway, this is perfectly fine and acceptable.&lt;/p&gt;

&lt;p&gt;What if there is a circular dependency between two transactions? Assume that we want to give a loan plus a starting bonus to all users who don't have any loan at the moment, and concurrently we want to close the loan for users who currently have one, charging an installment fee.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--- accounts with odd id have loan, with even id - not.
session #1&amp;gt; UPDATE accounts SET has_loan = CASE WHEN id % 2 = 1 THEN true ELSE false END;
&amp;gt; UPDATE 1000

session #1&amp;gt; BEGIN ISOLATION LEVEL SERIALIZABLE;
&amp;gt; BEGIN

session #1&amp;gt; UPDATE accounts SET balance = balance + 25, has_loan = true WHERE has_loan = false;
&amp;gt;  UPDATE 500

session #2&amp;gt; BEGIN ISOLATION LEVEL SERIALIZABLE;
&amp;gt; BEGIN

session #2&amp;gt; UPDATE accounts SET balance = balance - 25, has_loan = false WHERE has_loan = true;
&amp;gt;  UPDATE 500

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT

session #2&amp;gt; COMMIT;
&amp;gt; ERROR:  could not serialize access due to read/write dependencies among transactions
&amp;gt; DETAIL:  Reason code: Canceled on identification as a pivot, during commit attempt.
&amp;gt; HINT:  The transaction might succeed if retried.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the transactions are interdependent. If session #1 submitted its changes first, then session #2 should've updated 1000 rows, not only 500. If session #2 was first, then session #1 should've updated 1000 rows too. Since each transaction works on a snapshot, it doesn't see changes committed concurrently. In either order, committing the second transaction would lead to a write skew, so the first transaction to commit wins, while the other throws a serialization error.&lt;/p&gt;

&lt;p&gt;It's also worth noting that the transaction fails only if both run at the serializable isolation level. If either of them used a lower isolation level, both transactions would succeed, since it's then assumed that one of them can accept a write skew and committing such transactions would not harm the system.&lt;/p&gt;

&lt;p&gt;I also mentioned that it's possible to get false positives, when predicate locks are promoted to page or relation-level locks. Consider a situation where you want to update 1000 independent accounts concurrently.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL SERIALIZABLE;
&amp;gt; BEGIN

session #2&amp;gt; BEGIN ISOLATION LEVEL SERIALIZABLE;
&amp;gt; BEGIN

--- update accounts with id 1, 3, ..., 999
session #1&amp;gt; UPDATE accounts SET balance = balance - 25 WHERE id IN (SELECT n FROM generate_series(1,1000,2) n); 
&amp;gt; UPDATE 500

--- update accounts with id 2, 4, 6, ..., 1000
session #2&amp;gt; UPDATE accounts SET balance = balance + 25 WHERE id IN (SELECT n FROM generate_series(2,1000,2) n); 
&amp;gt; UPDATE 500

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT

session #2&amp;gt; COMMIT;
&amp;gt; ERROR:  could not serialize access due to read/write dependencies among transactions
&amp;gt; DETAIL:  Reason code: Canceled on identification as a pivot, during commit attempt.
&amp;gt; HINT:  The transaction might succeed if retried.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even though these transactions were working on completely different accounts, we got a serialization error. We might think about tweaking Postgres settings, but it is not really possible to configure them deterministically (for instance, to not use page-level locks until some number of row-level locks is reached). On the other hand, you might notice that this kind of operation does not actually require serializable isolation, since it's vulnerable only to lost updates, not to write skews, which means we could go with repeatable read here and it would properly block conflicting updates.&lt;/p&gt;

&lt;p&gt;When it comes to read-only transactions, there is a narrow, really specific set of use cases where a serializable transaction throws a serialization error but repeatable read accepts the transaction. You can find them on the &lt;a href="https://wiki.postgresql.org/wiki/SSI#Read_Only_Transactions" rel="noopener noreferrer"&gt;Postgres Wiki, SSI algorithm page&lt;/a&gt;. Since these examples are rather niche, I won't go into details about them here.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pros and cons
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Pros:

&lt;ul&gt;
&lt;li&gt;Based on an optimistic locking algorithm known as SSI, which allows for high concurrency considering that it's a serializable isolation level.&lt;/li&gt;
&lt;li&gt;Prevents all race conditions.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Cons:

&lt;ul&gt;
&lt;li&gt;Even though it's based on optimistic locking, it's still a complex isolation level that can degrade performance.&lt;/li&gt;
&lt;li&gt;Database users must be prepared to handle serialization errors, which may impact performance even more.&lt;/li&gt;
&lt;li&gt;Can throw false-positive serialization errors. Thankfully, their rate can be limited if you have a lot of data, so that even page- or relation-level locks would not collide with each other.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;As you can see, there are a couple of considerations to go through before deciding on serializable isolation. Whether it's worth it depends on multiple factors. It might be great in certain areas of your system, or become a bottleneck in others. Nonetheless, it's not as bad as many people suggest, so if it suits your use case, it's worth giving it a shot.&lt;/p&gt;

&lt;h2&gt;
  
  
  What if pessimistic locking is preferred?
&lt;/h2&gt;

&lt;p&gt;Okay, we've reviewed all isolation levels and now we know how to take advantage of them. The thing is that all of them are based on optimistic locking, which is not always suitable.&lt;/p&gt;

&lt;p&gt;Optimistic locking is a great option if concurrent transactions rarely conflict, but once conflicts become more and more common, you end up with a bunch of aborted transactions, which can result in a bad end-user experience: the response time increases (in case of a transaction retry) or the request is unexpectedly canceled. There are situations where you might actually prefer pessimistic locking: locking that waits until the locked resource is released, then acquires it for its own operations and releases it when the transaction finishes (either commit or rollback).&lt;/p&gt;

&lt;p&gt;I already mentioned that when you execute an &lt;code&gt;UPDATE&lt;/code&gt; query, Postgres acquires exclusive locks on the modified rows, so that if a given row is modified by multiple transactions concurrently, the changes are applied sequentially. Depending on the isolation level in use, commit/rollback of such an update is ignored (as in read committed) or can cause SQL errors (as in repeatable read). The same exclusive lock is used by other write operations too (e.g., &lt;code&gt;DELETE&lt;/code&gt; or &lt;code&gt;MERGE&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Basically, on row-level, it's possible to acquire the following locks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shared lock (aka read-only lock): a lock that can be held by multiple transactions at a time.&lt;/li&gt;
&lt;li&gt;Exclusive lock (aka write lock): a lock that can be held by only one transaction at a time, while no other transaction can hold any lock (neither exclusive nor shared) on the same row.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These locks can be acquired on demand, not only by Postgres itself. All you need to do is add a special clause at the end of your &lt;code&gt;SELECT&lt;/code&gt; query.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FOR SHARE&lt;/code&gt; for shared locks.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FOR UPDATE&lt;/code&gt; for exclusive locks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Spring, you can use &lt;a href="https://docs.spring.io/spring-data/jpa/docs/current/api/org/springframework/data/jpa/repository/Lock.html" rel="noopener noreferrer"&gt;@Lock&lt;/a&gt; annotation to apply pessimistic locking on a repository method.&lt;/p&gt;

&lt;p&gt;Now, how do we use them? Acquiring locks ensures that the data we work on will not be modified concurrently, while optionally letting other transactions perform read-only operations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you want the data within your transaction to remain consistent, need to block any concurrent modifications, and will only perform read operations, use shared locks - they let concurrent transactions read the locked data as well.&lt;/li&gt;
&lt;li&gt;If you need to perform write operations, use exclusive locks, so that no other transaction can access the locked data until the lock is released.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key difference compared to relying purely on isolation levels is &lt;strong&gt;blocking concurrent access&lt;/strong&gt;, which is what makes these locks pessimistic.&lt;/p&gt;
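
&lt;p&gt;The shared/exclusive semantics map closely onto read-write locks known from application code. As a rough analogy only (Postgres row locks are not Java locks), &lt;code&gt;ReentrantReadWriteLock&lt;/code&gt; from &lt;code&gt;java.util.concurrent&lt;/code&gt; behaves the same way:&lt;/p&gt;

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Analogy only: shared row locks behave like read locks (many holders),
// exclusive row locks like the write lock (one holder, no readers).
public class LockSemantics {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Shared (read) locks: many holders at once, like FOR SHARE.
        lock.readLock().lock();
        boolean secondReaderAllowed = lock.readLock().tryLock();

        // Exclusive (write) lock: cannot be taken while any shared lock
        // is held, like FOR UPDATE waiting on FOR SHARE holders.
        boolean writerAllowedWhileReading = lock.writeLock().tryLock();

        lock.readLock().unlock();
        lock.readLock().unlock();

        // Once all shared locks are released, the exclusive lock can be taken.
        boolean writerAllowedAfterRelease = lock.writeLock().tryLock();

        System.out.println(secondReaderAllowed + " "
                + writerAllowedWhileReading + " "
                + writerAllowedAfterRelease);
    }
}
```

&lt;p&gt;Just like in the database, many readers can hold the shared lock simultaneously, while the exclusive lock is only granted once no other locks are held.&lt;/p&gt;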

&lt;p&gt;Consider an example where we try to update the same account in two concurrent transactions in a select-modify-update manner. The first transaction will add 25 to the balance, the second will subtract 25 from it. By relying purely on isolation levels, in case of conflict we could get either inconsistent results (read committed) or an optimistic locking error (repeatable read or serializable). By using locks, we can block one transaction until the other is completed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session #1&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #2&amp;gt; BEGIN ISOLATION LEVEL READ COMMITTED;
&amp;gt; BEGIN

session #1&amp;gt; SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   50.00 | f
&amp;gt; (1 row)

--- session #2 is blocked, since exclusive lock is held by session #1
session #2&amp;gt; SELECT * FROM accounts WHERE id = 1 FOR UPDATE;

--- calculate new balance on the app and put the new value here
session #1&amp;gt; UPDATE accounts SET balance = 75 WHERE id = 1;
&amp;gt; UPDATE 1

session #1&amp;gt; COMMIT;
&amp;gt; COMMIT

--- lock is released, so session #2 is unblocked and acquires the lock on given account
session #2&amp;gt; 
&amp;gt;  id | balance | has_loan 
&amp;gt; ----+---------+----------
&amp;gt;   1 |   75.00 | f
&amp;gt; (1 row)

--- calculate new balance on the app and put the new value here
session #2&amp;gt; UPDATE accounts SET balance = 50 WHERE id = 1;
&amp;gt; UPDATE 1

session #2&amp;gt; COMMIT;
&amp;gt; COMMIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pessimistic locking is a good choice if we know that certain records are frequently modified concurrently, which would eventually cause transaction rollbacks. It allows us to maintain data consistency even at the read committed isolation level. However, bear in mind that these locks make the whole database less concurrent, since it cannot process as many concurrent requests as it could using optimistic locking via isolation levels. The Postgres documentation actually advises trying to utilize isolation levels first if you want to make your database more performant. If isolation itself does not suit your needs, then go for pessimistic locks.&lt;/p&gt;

&lt;p&gt;There are other types of locks that can be used as well, like table-level locks, or more sophisticated row-level locks. Most often the locks explained above will be sufficient for you, but if you are willing to learn more, check out &lt;a href="https://www.postgresql.org/docs/current/explicit-locking.html" rel="noopener noreferrer"&gt;Postgres documentation, Explicit locking section&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  If all types of locks can be acquired on database level, does it still make sense to use optimistic locking on application level?
&lt;/h2&gt;

&lt;p&gt;Obviously... it depends. Now at least you know how all of this works.&lt;/p&gt;

&lt;p&gt;Optimistic locking on application level is quite popular, and can be successfully applied on various databases, even those that do not support optimistic locking on database level. Additionally, you can have more control over the locking, and it may be already well supported by the framework you use.&lt;/p&gt;

&lt;p&gt;On the other hand, why reinvent the wheel? It's already there, and it's safer if the database is accessed by multiple clients (e.g., multiple apps, or apps and users), since we always rely on mechanisms provided by the database. The locking may also be more consistent, since we do not mix custom solutions with those provided by the database. &lt;/p&gt;

&lt;p&gt;I would say that both decisions stand on their own, and it often might be a matter of preference and habits. When it comes to performance, there might be a slight advantage in favor of database locking, but that's a topic for another article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this article, I thoroughly reviewed the options when it comes to locking in a database and offered some fresh perspective on the problem.&lt;/p&gt;

&lt;p&gt;Let me quickly summarize how I usually approach database locking.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try to utilize isolation levels first.

&lt;ul&gt;
&lt;li&gt;Use read committed when:

&lt;ul&gt;
&lt;li&gt;Executing a single and simple read/write query.&lt;/li&gt;
&lt;li&gt;Executing multiple read/write queries, each on completely unrelated data.&lt;/li&gt;
&lt;li&gt;Executing read/write queries where slight inconsistency is acceptable.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Use repeatable read when:

&lt;ul&gt;
&lt;li&gt;Executing complex, nested read queries (e.g. &lt;code&gt;SELECT&lt;/code&gt; within &lt;code&gt;SELECT&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Executing multiple read queries on related data, so that the returned results remain consistent throughout the transaction.&lt;/li&gt;
&lt;li&gt;Executing read/write queries where you want to prevent lost updates via optimistic locking.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Use serializable when:

&lt;ul&gt;
&lt;li&gt;Executing read/write queries where you want to prevent all race conditions via optimistic locking.&lt;/li&gt;
&lt;li&gt;You can afford to abort (or retry) transactions that would throw serialization failures, even in case of false positives.&lt;/li&gt;
&lt;li&gt;You can accept impact on performance possibly generated by serialization checks. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;If optimistic locking is not suitable (high conflicts rate), go for pessimistic locks.

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;SELECT&lt;/code&gt; with &lt;code&gt;FOR SHARE&lt;/code&gt; clause to acquire shared locks (e.g., read-only transactions).&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;SELECT&lt;/code&gt; with &lt;code&gt;FOR UPDATE&lt;/code&gt; clause to acquire exlusive locks (e.g., read and write transactions).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Hopefully you'll find this article valuable.&lt;/p&gt;

</description>
      <category>database</category>
      <category>postgres</category>
      <category>java</category>
      <category>spring</category>
    </item>
    <item>
      <title>Thread pools in Java</title>
      <dc:creator>Dawid Kałuża</dc:creator>
      <pubDate>Sat, 21 Jun 2025 07:18:53 +0000</pubDate>
      <link>https://dev.to/dawidkaluza/thread-pools-in-java-1gch</link>
      <guid>https://dev.to/dawidkaluza/thread-pools-in-java-1gch</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How it started
&lt;/h3&gt;

&lt;p&gt;Some time ago, when a developer wanted to execute certain tasks in an application concurrently, they needed to create a sufficient number of threads and manage them on their own. This adds another level of complexity to the app, since concurrency opens the gate to various kinds of problems, such as race conditions, deadlocks, memory leaks, lost writes or outdated reads. &lt;/p&gt;

&lt;p&gt;Developers quickly realized that it would be better to move thread management into a separate API, so that this functionality could be developed independently, with the required care, and be even more performant - and this is what thread pools do.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a thread pool?
&lt;/h3&gt;

&lt;p&gt;A thread pool is a pool of threads that can be reused to execute tasks. Instead of creating a new thread manually, tasks are submitted to a thread pool, which manages their execution. What does this bring?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easier to work with: the hard thread management work is delegated, so developers can focus on writing business logic.&lt;/li&gt;
&lt;li&gt;Performance boost: creating, deleting and switching between threads is not a lightweight task for an OS. Having threads prepared upfront for the specific use case allows for better throughput and responsiveness.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Threads of a thread pool are often called &lt;strong&gt;worker threads&lt;/strong&gt;. Their purpose is to simply execute given work.  The pool usually contains a &lt;strong&gt;task queue&lt;/strong&gt;, which is a queue to which new tasks are added, and then fetched by worker threads.&lt;/p&gt;

&lt;p&gt;Workflow of a thread pool can be described in the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pool preparation: Creating and preparing a pool, a queue and worker threads to accept the work.&lt;/li&gt;
&lt;li&gt;Task acceptance: Taking a task and adding it to a queue.&lt;/li&gt;
&lt;li&gt;Task assignment: Taking a task from a queue and assigning it to a worker thread.&lt;/li&gt;
&lt;li&gt;Task execution: Executing a task by a worker thread.&lt;/li&gt;
&lt;li&gt;Thread freeing: Putting the thread back to a pool of available worker threads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5hjsvcdl1rjyjhxjzdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5hjsvcdl1rjyjhxjzdv.png" alt="Thread pools" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Thread pools available in Java
&lt;/h2&gt;

&lt;p&gt;Java provides an API, known as the Java Concurrency API, that besides providing other useful classes for concurrent applications, allows you to create various kinds of thread pools. A good starting point is the &lt;code&gt;Executors&lt;/code&gt; class, which exposes a couple of static factory methods that return an instance of the &lt;code&gt;ExecutorService&lt;/code&gt; interface, to create various kinds of thread pools. I try to explain them below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fixed-size thread pool
&lt;/h3&gt;

&lt;p&gt;A very simple pool that starts with a number of worker threads that remains constant throughout the pool's lifecycle. Useful when you know exactly how many threads you need to perform certain work (e.g., batch processing of data whose amount is rather fixed but still big enough to split it across multiple threads).&lt;/p&gt;

&lt;h4&gt;
  
  
  Methods
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Executors.newFixedThreadPool(int numberOfThreads)&lt;/code&gt; - creates a new fixed-size thread pool with given number of threads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Main {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newFixedThreadPool(3);
        executorService.submit(newTask(1, 100L));
        executorService.submit(newTask(2, 100L));
        executorService.submit(newTask(3, 100L));
        executorService.submit(newTask(4, 100L));
        executorService.close();
    }

    private static Runnable newTask(int taskNo, long processingTime) {
        return () -&amp;gt; {
            try {
                System.out.println("Started processing task #" + taskNo);
                Thread.sleep(processingTime);
                System.out.println("Finished processing task #" + taskNo);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Result
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Started processing task #2
Started processing task #1
Started processing task #3
Finished processing task #3
Finished processing task #2
Finished processing task #1
Started processing task #4
Finished processing task #4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, since I initialized the pool with 3 threads, task #4 started its execution only after the previous tasks finished, since they occupied all the threads available in the pool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Single-threaded pool
&lt;/h3&gt;

&lt;p&gt;A thread pool that contains only a single thread to execute tasks. The pool just collects tasks in a queue and then runs them sequentially on a single thread. Even though this pool may seem overly simplistic, it can thrive in cases where tasks don't take much computation time and we really want to avoid any race conditions by simply having just a single thread. For example, in-memory databases like Redis are based on a single-threaded model, which allows them to be remarkably performant. This pool works like a fixed-size thread pool with only 1 thread. &lt;/p&gt;

&lt;h4&gt;
  
  
  Methods
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Executors.newSingleThreadExecutor()&lt;/code&gt; - creates a new single-threaded pool.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Main {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        executorService.submit(newTask(1, 100L));
        executorService.submit(newTask(2, 100L));
        executorService.submit(newTask(3, 100L));
        executorService.submit(newTask(4, 100L));
        executorService.close();
    }

    private static Runnable newTask(int taskNo, long processingTime) {
        return () -&amp;gt; {
            try {
                System.out.println("Started processing task #" + taskNo);
                Thread.sleep(processingTime);
                System.out.println("Finished processing task #" + taskNo);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Result
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Started processing task #1
Finished processing task #1
Started processing task #2
Finished processing task #2
Started processing task #3
Finished processing task #3
Started processing task #4
Finished processing task #4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As mentioned, tasks are executed one by one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cached thread pool
&lt;/h3&gt;

&lt;p&gt;Creates new worker threads when needed, but reuses previously created threads if they are available. An idle thread stays available only for a certain amount of time (60 seconds as of Java 24); once that time passes, the thread is removed from the pool. This pool may be useful in case of varying workload, especially for short-lived tasks.&lt;/p&gt;

&lt;h4&gt;
  
  
  Methods
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Executors.newCachedThreadPool()&lt;/code&gt; - creates a new cached thread pool.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Main {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newCachedThreadPool();
        executorService.submit(newTask(1, 100L));
        executorService.submit(newTask(2, 100L));
        executorService.submit(newTask(3, 100L));
        Thread.sleep(200L);
        executorService.submit(newTask(4, 100L));
        executorService.submit(newTask(5, 100L));
        executorService.submit(newTask(6, 100L));
        executorService.submit(newTask(7, 100L));
        executorService.close();
    }

    private static Runnable newTask(int taskNo, long processingTime) {
        return () -&amp;gt; {
            try {
                System.out.println("Started processing task #" + taskNo + ", by " + Thread.currentThread().getName());
                Thread.sleep(processingTime);
                System.out.println("Finished processing task #" + taskNo);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Result
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Started processing task #1, by pool-1-thread-1
Started processing task #2, by pool-1-thread-2
Started processing task #3, by pool-1-thread-3
Finished processing task #1
Finished processing task #2
Finished processing task #3
Started processing task #4, by pool-1-thread-3
Started processing task #5, by pool-1-thread-2
Started processing task #6, by pool-1-thread-1
Started processing task #7, by pool-1-thread-4
Finished processing task #5
Finished processing task #6
Finished processing task #4
Finished processing task #7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, for tasks #4, #5, and #6 the pool reused previously created threads, but for task #7 it had to create a new one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scheduled thread pool
&lt;/h3&gt;

&lt;p&gt;This is actually quite a different pool. It returns an instance of &lt;code&gt;ScheduledExecutorService&lt;/code&gt;, which is designed to schedule tasks to run after a delay or at regular intervals. Example use cases are batch jobs that should run once a day, or real-time updates sent to users at intervals, such as live event updates.&lt;/p&gt;

&lt;h4&gt;
  
  
  Methods
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Executors.newScheduledThreadPool(int corePoolSize)&lt;/code&gt; - creates a pool with the given number of threads, which are kept in the pool even when they are inactive.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Executors.newSingleThreadScheduledExecutor()&lt;/code&gt; - single-threaded variant, same as &lt;code&gt;newScheduledThreadPool(1)&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The returned interface adds &lt;code&gt;schedule&lt;/code&gt; methods, allowing you to define the mentioned interval or delay.&lt;/p&gt;
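Besides one-shot delays, `ScheduledExecutorService` also offers `scheduleAtFixedRate` and `scheduleWithFixedDelay` for periodic execution. A minimal sketch (the class name, run count, and timings here are illustrative, not from the article):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PeriodicExample {
    // Runs a counter task every 50 ms and returns the count once it has run at least `runs` times.
    static int runAtLeast(int runs) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger counter = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(runs);

        // First run after 0 ms, then every 50 ms between start times.
        var handle = scheduler.scheduleAtFixedRate(() -> {
            counter.incrementAndGet();
            done.countDown();
        }, 0L, 50L, TimeUnit.MILLISECONDS);

        done.await();         // wait until the task has run `runs` times
        handle.cancel(false); // stop further executions
        scheduler.close();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Task ran " + runAtLeast(3) + " times");
    }
}
```

The difference between the two periodic variants: `scheduleAtFixedRate` measures the period between task start times, while `scheduleWithFixedDelay` measures it from the end of one run to the start of the next.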

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Main {
    public static void main(String[] args) {
        ScheduledExecutorService executorService = Executors.newScheduledThreadPool(3);
        executorService.schedule(newTask(1, 100L), 130L, TimeUnit.MILLISECONDS);
        executorService.schedule(newTask(2, 100L), 120L, TimeUnit.MILLISECONDS);
        executorService.schedule(newTask(3, 100L), 110L, TimeUnit.MILLISECONDS);
        executorService.schedule(newTask(4, 100L), 100L, TimeUnit.MILLISECONDS);
        executorService.close();
    }

    private static Runnable newTask(int taskNo, long processingTime) {
        return () -&amp;gt; {
            try {
                System.out.println("Started processing task #" + taskNo);
                Thread.sleep(processingTime);
                System.out.println("Finished processing task #" + taskNo);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Result
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Started processing task #4
Started processing task #3
Started processing task #2
Finished processing task #4
Started processing task #1
Finished processing task #3
Finished processing task #2
Finished processing task #1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, task #4 is executed first, as expected given the defined delays. However, it's worth noting that since the pool contains only 3 threads, task #1 has to wait for another task to finish because no thread is available for it. This means it starts later than scheduled (planned to start after about 130 ms, but actually started after about 200 ms). Remember to create a scheduled pool with a sufficient number of threads, unless you can accept the increased delay.&lt;/p&gt;

&lt;h3&gt;
  
  
  Work stealing pool
&lt;/h3&gt;

&lt;p&gt;This pool is also different, since it's based on the work-stealing algorithm, which allows for more sophisticated work distribution between threads.&lt;/p&gt;

&lt;p&gt;Usually, there is only one task queue for the entire thread pool. Such queues are concurrent data structures: pushing and pulling happen at opposite ends, so a thread submitting a task doesn't necessarily block a worker taking one (e.g., while one thread pushes new tasks, another can pull tasks from the other end of the same queue), which is why they are commonly used in concurrent environments. Still, each end admits only one thread at a time, so when tasks are pushed and pulled intensively (e.g., many small, event-processing tasks), you can end up with thread contention: two or more threads frequently block each other, meaning they are not utilized efficiently.&lt;/p&gt;

&lt;p&gt;A work-stealing pool addresses this issue by creating more than one queue, each typically assigned to a specific worker thread. New tasks are pushed to the queue on which they are most likely to be executed first. Additionally, threads that are free and have no tasks pending on their own queue are allowed to take tasks from other queues, which is called work stealing, hence the pool's name. This significantly reduces thread contention and makes the pool more performant.&lt;/p&gt;

&lt;h4&gt;
  
  
  Methods
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Executors.newWorkStealingPool(int parallelism)&lt;/code&gt; - creates a new work-stealing pool with the given parallelism level, i.e., the target number of worker threads, which are created when needed and removed when no longer actively used.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Executors.newWorkStealingPool()&lt;/code&gt; - creates a new work-stealing pool with the parallelism level set to the number of available processors.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Main {
    public static void main(String[] args) {
        ExecutorService fixedThreadPool = Executors.newFixedThreadPool(20);
        executeTasks(fixedThreadPool);

        ExecutorService workStealingPool = Executors.newWorkStealingPool(20);
        executeTasks(workStealingPool);
    }

    private static void executeTasks(ExecutorService executorService) {
        long startTime = System.nanoTime();

        for (int i = 0; i &amp;lt; 5; i++) {
            executorService.submit(newTask(500L));
        }
        executorService.close();

        long endTime = System.nanoTime();
        System.out.println("Total processing time: " + (endTime - startTime) / 1_000_000L);
    }

    private static Runnable newTask(long processingTime) {
        return () -&amp;gt; {
            try {
                Thread.sleep(processingTime);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Result
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Total processing time: 509
Total processing time: 501
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, under thread contention a work-stealing pool can perform better than a fixed-size thread pool, even though the latter has its threads already prepared before processing starts. The example here is less demanding than a production workload, but the bigger the scale, the bigger the performance difference can be.&lt;/p&gt;

&lt;p&gt;Additionally, it's worth mentioning that the work-stealing pool is based on the &lt;code&gt;ForkJoinPool&lt;/code&gt; implementation, whose interface additionally allows you to submit tasks that can be divided into independent subtasks, processed by other threads, and joined once all subtasks are completed. For such recursive work, the algorithm performs even better. That's a topic for another post, but if you are interested, I encourage you to review this class as well.&lt;/p&gt;
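To give a taste of that fork/join style, here is a minimal sketch (the `SumTask` class, threshold, and array size are made up for illustration): a task sums an array by splitting it in half, forking one half so an idle worker can steal it, and computing the other half itself.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this size, sum sequentially
    private final long[] numbers;
    private final int from, to;

    SumTask(long[] numbers, int from, int to) {
        this.numbers = numbers;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += numbers[i];
            }
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(numbers, from, mid);
        left.fork();                                             // schedule the left half; an idle thread may steal it
        long rightSum = new SumTask(numbers, mid, to).compute(); // process the right half in this thread
        return rightSum + left.join();                           // wait for the left half and combine
    }

    public static void main(String[] args) {
        long[] numbers = new long[10_000];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i + 1;
        }
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println("Sum: " + sum); // 1 + 2 + ... + 10000 = 50005000
    }
}
```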

&lt;h3&gt;
  
  
  Thread per task executor
&lt;/h3&gt;

&lt;p&gt;This type uses a provided &lt;code&gt;ThreadFactory&lt;/code&gt; to spawn a new thread every time a new task comes in. In certain scenarios this might be sufficient, but it often leads to inefficient use of your processing resources. However, instead of spawning platform threads you can spawn virtual threads, in which case this executor makes much more sense.&lt;/p&gt;

&lt;h4&gt;
  
  
  Methods
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Executors.newThreadPerTaskExecutor(ThreadFactory threadFactory)&lt;/code&gt; - creates a new executor that spawns new threads using given thread factory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Executors.newVirtualThreadPerTaskExecutor()&lt;/code&gt; - same, but spawns virtual threads.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class Main {
    public static void main(String[] args) {
        ExecutorService platformThreadPerTaskExecutor = Executors.newThreadPerTaskExecutor(Thread.ofPlatform().factory());
        executeTasks(platformThreadPerTaskExecutor);

        ExecutorService virtualThreadPerTaskExecutor = Executors.newVirtualThreadPerTaskExecutor();
        executeTasks(virtualThreadPerTaskExecutor);
    }

    private static void executeTasks(ExecutorService executorService) {
        long startTime = System.nanoTime();

        for (int i = 0; i &amp;lt; 1000; i++) {
            executorService.submit(newTask(500L));
        }
        executorService.close();

        long endTime = System.nanoTime();
        System.out.println("Total processing time: " + (endTime - startTime) / 1_000_000L);
    }

    private static Runnable newTask(long processingTime) {
        return () -&amp;gt; {
            try {
                Thread.sleep(processingTime);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Result
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Total processing time: 576
Total processing time: 515
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, virtual threads can perform better in such a scenario.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I use virtual threads in other pools?
&lt;/h3&gt;

&lt;p&gt;Yes! It's possible to use a virtual thread factory to spawn threads in almost all of the mentioned pools. There is usually a variant that accepts a custom &lt;code&gt;ThreadFactory&lt;/code&gt;, where we can pass &lt;code&gt;Thread.ofVirtual().factory()&lt;/code&gt; to use virtual threads instead of platform threads.&lt;/p&gt;

&lt;p&gt;For instance, in case of cached thread pool, instead of &lt;code&gt;Executors.newCachedThreadPool()&lt;/code&gt;, we can call &lt;code&gt;Executors.newCachedThreadPool(Thread.ofVirtual().factory())&lt;/code&gt;. You can find more examples by reviewing &lt;a href="https://docs.oracle.com/en/java/javase/24/docs/api/java.base/java/util/concurrent/Executors.html" rel="noopener noreferrer"&gt;documentation of Executors class&lt;/a&gt;.&lt;/p&gt;
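A minimal sketch of the cached-pool case (the helper class and method names are illustrative; assumes Java 21+ for virtual threads) that verifies submitted tasks really run on virtual threads:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualPoolExample {
    // Submits a probe task and reports whether it ran on a virtual thread.
    static boolean runsOnVirtualThread(ExecutorService pool) throws InterruptedException, ExecutionException {
        try (pool) {
            return pool.submit(() -> Thread.currentThread().isVirtual()).get();
        }
    }

    public static void main(String[] args) throws Exception {
        // A cached pool whose worker threads are virtual instead of platform threads.
        ExecutorService pool = Executors.newCachedThreadPool(Thread.ofVirtual().factory());
        System.out.println("Runs on virtual threads: " + runsOnVirtualThread(pool)); // prints "Runs on virtual threads: true"
    }
}
```

Note that the pool still limits and reuses workers exactly as before; only the kind of thread the factory produces changes.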

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As you can see, there are several types of thread pools, each addressing a different group of use cases. Thankfully, with the Java Concurrency API, it's much easier to create and work with these pools. Start by reviewing the static factory methods provided by the &lt;code&gt;Executors&lt;/code&gt; class; they are often sufficient to address various needs. Once you become more familiar with them, try to understand the implementation details under the hood and how you can leverage them to create even more sophisticated pools, better suited to what your application currently needs.&lt;/p&gt;

&lt;p&gt;Thanks for reading, hopefully I managed to teach you something :) &lt;/p&gt;

</description>
      <category>java</category>
      <category>threads</category>
    </item>
  </channel>
</rss>
