<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oleg Potapov</title>
    <description>The latest articles on DEV Community by Oleg Potapov (@oleg_potapov).</description>
    <link>https://dev.to/oleg_potapov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1018487%2F70cf7a75-3a9c-4558-b3a7-5f28b9e5bf6f.jpeg</url>
      <title>DEV Community: Oleg Potapov</title>
      <link>https://dev.to/oleg_potapov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oleg_potapov"/>
    <language>en</language>
    <item>
      <title>Language Server Protocol Introduction</title>
      <dc:creator>Oleg Potapov</dc:creator>
      <pubDate>Mon, 02 Feb 2026 07:38:05 +0000</pubDate>
      <link>https://dev.to/oleg_potapov/language-server-protocol-introduction-235l</link>
      <guid>https://dev.to/oleg_potapov/language-server-protocol-introduction-235l</guid>
      <description>&lt;p&gt;The language server is not a concept most software developers deal with consciously in their work. However, many of us use one every day without knowing it, because it is usually a hidden part of the developer’s toolkit.&lt;/p&gt;

&lt;p&gt;Although not every IDE and code editor supports the Language Server Protocol, chances are you are already using one or several such servers.&lt;/p&gt;

&lt;p&gt;But this protocol may be even more indispensable for its “providers” than for its “users”: the authors of programming languages and development frameworks, who have to ship developer tools as an additional benefit of their libraries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are plenty of modern IDEs and code editors: VS Code, Visual Studio, PyCharm, IntelliJ IDEA. Even Vim and Emacs are still alive and have modern incarnations. A new generation of AI-based tools is also here (Cursor). And the number of modern programming languages is even larger than the number of editors (I won’t list them all). Whatever programming language developers use, they expect it to be supported by the code editor of their choice. By “support” we mean features such as syntax highlighting, autocompletion, automated refactoring and the other handy tools we’ve got so used to.&lt;/p&gt;

&lt;p&gt;At first, IDE creators tried to build support for the majority of existing languages into their core product. But it soon became clear how hard it is to support everything in one program. Thus, there are only two ways for an IDE to stay competitive: either focus on a small range of supported languages and be really good at it (for example, JetBrains with PyCharm and RubyMine), or create a flexible plugin ecosystem that allows third-party developers to extend the editor’s functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph0w7ggdbna0ylh7bqf6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fph0w7ggdbna0ylh7bqf6.png" alt=" " width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having a plugin system is great, especially if you use a popular language. There will definitely be an enthusiast (likely a bunch of them) who will implement all the necessary features in extensions for your editor. But things get worse when you are the creator of a programming language or framework that has not gone mainstream (yet?). Without proper IDE support, no one will adopt it, so you will have to ship all the necessary developer tools with your language. Sounds fine, except that to do so you need to know Python to create a Sublime Text plugin, TypeScript to do the same for VS Code and even … VimL for Vim! And, of course, you need a lot of time for the implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Protocol&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zsxzizbocyughszhrmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zsxzizbocyughszhrmn.png" alt=" " width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yeah, we all know this meme, but I couldn’t resist inserting it here. So, I think you can imagine what happened next.&lt;/p&gt;

&lt;p&gt;Microsoft decided to solve these issues by introducing a new standard called LSP (Language Server Protocol) [1]. The goal was to unify the development of rich code-editing features across different IDEs, since most of these features are basically the same. The idea is to let developers provide programming language support without tying it to any particular IDE [2].&lt;/p&gt;

&lt;p&gt;The fact that Microsoft owns Visual Studio Code, one of the most popular code editors nowadays, helped the standard gain traction, and it is now supported by a wide range of editors and programming languages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does it work&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the name suggests, communication through the Language Server Protocol involves a client and a server. The client part is the same for all languages and is usually developed for each editor by its team, while the server part is usually maintained by the language authors or community members.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzfmsze5546g4pk0liw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzfmsze5546g4pk0liw8.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the image above you can see the main components of this concept:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Editor&lt;/strong&gt; - the main process of the editor or IDE, that handles user interactions with code files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extension&lt;/strong&gt; - detects proper events or file changes and sends requests to the Language Server, using Language Server Protocol&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language Server&lt;/strong&gt; - the server process implemented in any programming language responsible for receiving requests from editor extension and generating responses in the defined format&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A small example. Let’s say an editor user wants to click on some language object (a class, a function or something else) and jump to its definition. What happens in this case? First, the user invokes the “Go to definition” feature using one of the methods provided by the editor. It might be Ctrl-click or any other key combination (the Language Server doesn’t care, because input handling is the editor’s responsibility). The extension reacts to this event and sends a request of type &lt;code&gt;textDocument/definition&lt;/code&gt; to the Language Server. This request contains a file URI and the position inside the document where the event happened. With this information the Language Server finds the location of the corresponding definition and responds with its file name and position inside that file. The editor receives the response and opens the file. Of course, this is a simplified description; the full specification of the definition request and response is in the documentation [4].&lt;/p&gt;

&lt;p&gt;One more thing should be mentioned here. While code is being edited, the editor and the server exchange a lot of messages - one for every text change, click or hover. This communication should be fast enough that the user doesn’t notice any delay between their action and the editor’s reaction, and it almost always happens on a single machine. That’s why LSP doesn’t mandate a heavyweight network transport: messages are encoded with JSON-RPC [5], a lightweight remote procedure call protocol, and are typically exchanged over the server process’s standard input and output.&lt;/p&gt;
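To make this concrete, here is a rough sketch (in Elixir, purely for illustration) of what a single framed definition request could look like on the wire. The file URI and position are invented for the example, and a real client would use a JSON library instead of hand-assembled strings:

```elixir
# The body is plain JSON-RPC 2.0; the method names come from the LSP spec.
params = ~s({"textDocument":{"uri":"file:///app/lib/demo.ex"},"position":{"line":11,"character":6}})
body = ~s({"jsonrpc":"2.0","id":1,"method":"textDocument/definition","params":#{params}})

# The LSP base protocol frames every message with a Content-Length header
# followed by a blank line and the JSON body.
message = "Content-Length: #{byte_size(body)}\r\n\r\n#{body}"
IO.puts(message)
```

The server answers with a similarly framed response whose result carries the target URI and range.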

&lt;p&gt;&lt;strong&gt;Benefits and drawbacks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Language Server Protocol mostly makes life easier for maintainers of languages (or programming frameworks). The main benefits of LSP are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no need to create a separate plugin for every editor&lt;/li&gt;
&lt;li&gt;the same set of features with the same logic in every editor&lt;/li&gt;
&lt;li&gt;you can use any technology to build the language server, including the language you build it for - for example, the Elixir language server elixir-ls is written entirely in Elixir [3]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, with all these benefits come several drawbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the end user has to install and run an additional process&lt;/li&gt;
&lt;li&gt;a language server is more limited than an editor-specific plugin&lt;/li&gt;
&lt;li&gt;a separate process is slower than built-in tools (though often this is not a problem)&lt;/li&gt;
&lt;li&gt;an additional process is an additional point of failure - it can crash, consume too much memory, etc.&lt;/li&gt;
&lt;li&gt;not all editors support LSP, and some language server implementations attract a lot of complaints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I hope this article helped you understand the simple but very useful concept of language servers. It’s getting more and more popular among developer-tool creators, and there is a chance that your favorite language already has a language server implementation [6]. If not, maybe this article will encourage you to create your own LS for your language or framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://microsoft.github.io/language-server-protocol/" rel="noopener noreferrer"&gt;Microsoft LSP&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Language_Server_Protocol" rel="noopener noreferrer"&gt;Language Server Protocol&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/elixir-lsp/elixir-ls" rel="noopener noreferrer"&gt;elixir-ls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_definition" rel="noopener noreferrer"&gt;textDocument/definition&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/JSON-RPC" rel="noopener noreferrer"&gt;JSON-RPC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://langserver.org/" rel="noopener noreferrer"&gt;A list of LSP implementations&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>languageserver</category>
      <category>programming</category>
    </item>
    <item>
      <title>10 things in Elixir that confuse Ruby programmers - Part 2</title>
      <dc:creator>Oleg Potapov</dc:creator>
      <pubDate>Sat, 07 Oct 2023 15:14:59 +0000</pubDate>
      <link>https://dev.to/oleg_potapov/10-things-in-elixir-that-confuse-ruby-programmers-part-2-5701</link>
      <guid>https://dev.to/oleg_potapov/10-things-in-elixir-that-confuse-ruby-programmers-part-2-5701</guid>
      <description>&lt;p&gt;I have &lt;a href="https://dev.to/oleg_potapov/1o-things-in-elixir-that-confuse-ruby-programmers-part-1-4om3"&gt;previously covered&lt;/a&gt; 5 functional programming features of Elixir that may confuse Ruby developers. This time I’ll cover the remaining 5 and dive deeper into the language’s concepts and types.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Symbols and atoms&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irb(main):001:0&amp;gt; :asd
=&amp;gt; :asd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(1)&amp;gt; :asd
:asd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Symbols in Ruby and atoms in Elixir look exactly the same, are often used for the same purposes and, thus, trigger the same associations. Both are associated with immutable strings. But wait: if (as we already know from the previous part) everything is immutable in Elixir, why would it have atoms in the first place?&lt;/p&gt;

&lt;p&gt;Atoms are yet another legacy that Elixir received from Erlang [1]. Even though they are very similar to strings and can be converted into them, it’s better to think of them not as strings but as constants whose value equals their name. But why does a language such as Elixir have these constants? One use case is obvious - they can be used as keys in maps and structs, and this works for Ruby symbols as well. In Elixir, however, atoms serve one more purpose: they are used as the names of all named modules. It means that every time you define a module, you create an atom equal to its name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(2)&amp;gt; defmodule MyModule do; end
{:module, MyModule,
 &amp;lt;&amp;lt;70, 79, 82, 49, 0, 0, 4, 60, 66, 69, 65, 77, 65, 116, 85, 56, 0, 0, 0, 166,
   0, 0, 0, 16, 15, 69, 108, 105, 120, 105, 114, 46, 77, 121, 77, 111, 100, 117,
   108, 101, 8, 95, 95, 105, 110, 102, 111, ...&amp;gt;&amp;gt;, nil}
iex(3)&amp;gt; MyModule == :"Elixir.MyModule"
true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we know that atoms and module names are the same, it’s not so confusing to see Erlang module calls (e.g. &lt;em&gt;:math.pi()&lt;/em&gt;) in our Elixir code, since Erlang modules are simply named with lowercase atoms.&lt;/p&gt;

&lt;p&gt;There is one more small difference between Elixir atoms and Ruby symbols. Since atoms are constants, they are never cleaned up by the garbage collector, while symbols (since Ruby 2.2) are. That’s why it’s better not to create atoms dynamically in your Elixir program.&lt;/p&gt;
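A small illustration of why dynamic atom creation is risky, and the standard library’s safer alternative:

```elixir
# String.to_atom/1 creates a brand-new atom for every new string it sees,
# and atoms are never garbage-collected, so converting unbounded external
# input this way can eventually exhaust the atom table.
risky = String.to_atom("some_user_input")

# String.to_existing_atom/1 only succeeds for atoms that already exist,
# which makes it the safer choice for data coming from the outside world.
safe = String.to_existing_atom("ok")

IO.inspect({risky, safe})
```

If the atom does not exist yet, `String.to_existing_atom/1` raises instead of silently growing the atom table.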

&lt;p&gt;&lt;strong&gt;Return statement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What’s wrong with the return statement in Elixir? Literally nothing, because there is no return statement in Elixir. Naturally, Ruby programmers have to change their coding style, as they can’t use one of the most common tools in Ruby. When a method is meant to run from start to end, no return statement is needed in Ruby - the method simply returns its last value. Elixir follows the same concept: the last value is the result of the function execution. So the only purpose of the ‘return’ keyword in Ruby is the early return that breaks the method’s execution. There is no exact match for this in Elixir; however, there are several ways to achieve the same behavior.&lt;/p&gt;

&lt;p&gt;Commonly, parameters validation is the reason to use early return in Ruby. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def increase_items(items, increase_by)
  return unless items
  return :error if increase_by &amp;gt; 5
  …
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Elixir, you can achieve the same result with the help of pattern matching or guards[2]. The same code in Elixir might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def increase_items(nil, _increase_by), do: nil
def increase_items(_items, increase_by) when increase_by &amp;gt; 5, do: :error
def increase_items(items, increase_by) do
  …
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Despite being powerful mechanisms, pattern matching and guards have some limitations: for example, only a limited set of functions is allowed in guards [3]. It means that complex domain-related validation still has to be performed inside the function body. Elixir has several conditional statements and macros, like &lt;em&gt;‘cond’&lt;/em&gt;, &lt;em&gt;‘case’&lt;/em&gt;, &lt;em&gt;‘with’&lt;/em&gt; or &lt;em&gt;‘if’&lt;/em&gt;, and it is worth getting familiar with all of them so you can use the right one in the right situation.&lt;/p&gt;
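As a sketch of this division of labor, the hypothetical module below (its name, codes and discount rule are invented for the example) keeps a domain check that a guard could not express inside the function body with `case`:

```elixir
defmodule Orders do
  # Membership in a runtime list is not allowed in a guard, so the check
  # lives in the body, dispatched with `case`.
  @known_codes ["SPRING", "VIP"]

  def apply_discount(total, code) do
    case code in @known_codes do
      true -> {:ok, total - div(total, 10)}  # 10% off, integer arithmetic
      false -> {:error, :unknown_code}
    end
  end
end

IO.inspect(Orders.apply_discount(100, "VIP"))
```

The same shape works with `cond` or `with` when there are several such checks in a row.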

&lt;p&gt;Switching the mindset could be of greater help, though. If you build a habit of keeping functions tiny, so that each of them is responsible for exactly one thing, you probably won’t need early returns at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Loops&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ok, we can put up with the absence of the &lt;em&gt;‘return’&lt;/em&gt; keyword, but what about loops? Loops are among the most common concepts in programming. However, Elixir has no &lt;em&gt;‘while’&lt;/em&gt;, &lt;em&gt;‘until’&lt;/em&gt; or &lt;em&gt;‘loop’&lt;/em&gt;, and its &lt;em&gt;‘for’&lt;/em&gt; is a comprehension rather than an imperative loop. Since Elixir is a functional language, it implements iteration differently, mostly by means of recursion [4].&lt;/p&gt;

&lt;p&gt;Let’s imagine an infinite loop that handles commands and finishes when the “exit” command is received:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;loop do
  next_command = get_command()
  break if next_command == 'exit'

  apply_command(next_command)
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same code in Elixir can look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def handle_commands do
  next_command = get_command()
  apply_command(next_command)
end

def apply_command("exit"), do: :ok
def apply_command(command) do
  …
  handle_commands()
end
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Luckily, Rubyists, unlike coders in some other imperative languages, have an advantage here. Since using iterators instead of loops is considered good practice in Ruby, you are probably used to seeing &lt;em&gt;‘each’&lt;/em&gt;, &lt;em&gt;‘map’&lt;/em&gt; and &lt;em&gt;‘times’&lt;/em&gt; much more often than &lt;em&gt;‘while’&lt;/em&gt; or &lt;em&gt;‘loop’&lt;/em&gt; in Ruby code. Additionally, Elixir has an awesome Enum module [5] that covers all the Ruby iteration functions plus some extras that do not exist in Ruby. This module handles most common iteration problems, so there is rarely a need for custom recursive logic.&lt;/p&gt;
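For instance, the familiar Ruby trio of map/select/reduce translates directly into Enum calls (a small sketch):

```elixir
# Enum covers the everyday iteration patterns without explicit recursion.
doubled = Enum.map([1, 2, 3], fn x -> x * 2 end)
evens   = Enum.filter([1, 2, 3, 4], fn x -> rem(x, 2) == 0 end)
total   = Enum.reduce([1, 2, 3, 4], 0, fn x, acc -> x + acc end)

IO.inspect({doubled, evens, total})
```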

&lt;p&gt;&lt;strong&gt;Keyword lists and maps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s move on to keyword lists and maps in Elixir. In Ruby, you have hashes that cover all possible use cases for key-value data structures. Hashes can be converted to and from lists, can have keys of different types and can be passed to a method as keyword arguments with the help of the double splat operator. So it’s natural to expect other languages to work the same way. But instead of Ruby hashes, Elixir has two different associative data structures: keyword lists and maps [6].&lt;/p&gt;

&lt;p&gt;The main thing to know about keyword lists is that they are literally lists in which each element is a two-element tuple: an atom that serves as the key, and a value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(1)&amp;gt; [a: 1] == [{:a, 1}]
true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since this structure is a list, its values are ordered and element access takes linear time, in contrast to the constant time of Ruby hashes.&lt;/p&gt;
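A small sketch of the practical consequences, using the standard Keyword and Map modules:

```elixir
# A keyword list is just a list of {atom, value} tuples: lookup is linear,
# order is preserved, and duplicate keys are allowed.
kw = [a: 1, a: 2, b: 3]
1 = Keyword.get(kw, :a)              # the first matching key wins
[1, 2] = Keyword.get_values(kw, :a)  # all values for a key, in order

# A map deduplicates keys and gives effectively constant-time access.
m = Map.new(kw)
IO.inspect(m)
```

Duplicate keys and preserved order are exactly why keyword lists work well for options, while maps fit general key-value storage.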

&lt;p&gt;Maps, on the other hand, are completely different. Like Ruby hashes, they can have any data type as a key, their keys are unordered, and access takes effectively constant time. Besides that, maps are much more flexible in pattern matching - it’s perfectly possible to match only part of a map, even just a single key-value pair:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(1)&amp;gt; %{a: my_var} = %{a: 1, b: 2}
%{a: 1, b: 2}
iex(2)&amp;gt; my_var
1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now it may look like there is no need for keyword lists at all, but they are actually used all over Elixir, even if you sometimes don’t notice them. For example, these are very common syntax constructions in Elixir:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def my_function(a, b), do: nil
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (a &amp;gt; 0), do: 1, else: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both examples use keyword lists with a block passed under the &lt;em&gt;‘do’&lt;/em&gt; key, but since the square brackets are omitted, it feels like built-in language syntax.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Function clauses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While very common in Elixir, multi-clause functions may seem confusing to adepts of other languages. The idea of having several variants of the same function exists in many languages, but the implementation differs. Some offer method overloading based on the number of arguments; some support template methods or classes that are applied depending on the data type. In Elixir, functions with multiple bodies are based on the pattern matching mechanism, which makes them more flexible than anywhere else. Function clauses are tried in order, from top to bottom, until one of them matches the given arguments. Thus, functions can be split into clauses depending on the arguments’ types or values.&lt;/p&gt;

&lt;p&gt;Ironically, the difficulty with function clauses is not learning how they work, but getting used to applying them in your code. In Elixir, clauses are a way to move control flow (conditions, loops) out of the function body. They are also widely used for recursive calls. Writing succinct multi-clause functions is a habit that comes with practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although Ruby and Elixir syntaxes share many features, their foundations are different. Ruby is essentially an interpreted language, executed by the Ruby interpreter. Elixir runs on the Erlang virtual machine, BEAM, which means it operates on the same types as Erlang. The main difference, however, is that Ruby is built on the OOP paradigm, while Elixir is a functional programming language.&lt;/p&gt;

&lt;p&gt;And while familiar syntax may make it easier to read a program written in an unfamiliar language, writing your own program will certainly require more time and effort: you have to make sure you don’t unconsciously replace the original concepts and constructions of the “new” language with the ones you are accustomed to from your “first” language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://www.tutorialspoint.com/erlang/erlang_atoms.htm"&gt;https://www.tutorialspoint.com/erlang/erlang_atoms.htm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hexdocs.pm/elixir/main/patterns-and-guards.html#guards"&gt;https://hexdocs.pm/elixir/main/patterns-and-guards.html#guards&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hexdocs.pm/elixir/main/patterns-and-guards.html#list-of-allowed-functions-and-operators"&gt;https://hexdocs.pm/elixir/main/patterns-and-guards.html#list-of-allowed-functions-and-operators&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Functional_programming#Recursion"&gt;https://en.wikipedia.org/wiki/Functional_programming#Recursion&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://hexdocs.pm/elixir/1.13/Enum.html"&gt;https://hexdocs.pm/elixir/1.13/Enum.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elixir-lang.org/getting-started/keywords-and-maps.html"&gt;https://elixir-lang.org/getting-started/keywords-and-maps.html&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ruby</category>
      <category>elixir</category>
    </item>
    <item>
      <title>10 things in Elixir that confuse Ruby programmers - Part 1</title>
      <dc:creator>Oleg Potapov</dc:creator>
      <pubDate>Tue, 29 Aug 2023 19:23:26 +0000</pubDate>
      <link>https://dev.to/oleg_potapov/1o-things-in-elixir-that-confuse-ruby-programmers-part-1-4om3</link>
      <guid>https://dev.to/oleg_potapov/1o-things-in-elixir-that-confuse-ruby-programmers-part-1-4om3</guid>
      <description>&lt;p&gt;Elixir is a rapidly developing programming language that combines syntactic simplicity, a functional approach and the power of Erlang and the BEAM virtual machine. A lot of Ruby programmers give Elixir a try, reasoning that the syntax is very similar. But this similarity may become a trap, since the two languages are fundamentally different and different concepts may lie beneath the same syntax.&lt;/p&gt;

&lt;p&gt;I have gathered ten things that can confuse Ruby developers who decide to try Elixir and take their first steps in this wonderful programming language.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Functional mindset&lt;/li&gt;
&lt;li&gt;Immutable data structures&lt;/li&gt;
&lt;li&gt;Assignment operator&lt;/li&gt;
&lt;li&gt;Strings and charlists&lt;/li&gt;
&lt;li&gt;Lists and tuples&lt;/li&gt;
&lt;li&gt;Symbols and atoms&lt;/li&gt;
&lt;li&gt;Return statement&lt;/li&gt;
&lt;li&gt;Loops&lt;/li&gt;
&lt;li&gt;Keyword lists and maps&lt;/li&gt;
&lt;li&gt;Function clauses&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s discuss the items in this list one by one and (hopefully) make things a little clearer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Functional mindset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the first thing programmers usually face when moving from an object-oriented programming language to a functional one. And it’s probably even harder when moving from Ruby, where everything is an object (well, almost). In Ruby, a programmer is used to thinking in classes and objects, object methods and attributes, and there is nothing like that in Elixir. Instead, there are functions and modules, immutable data structures and recursion.&lt;/p&gt;

&lt;p&gt;Even though full understanding comes only with experience, there are several recommendations that can help you apply functional programming better:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;try to see your program not as a sequence of commands or instructions, but as a series of filters and transformations&lt;/li&gt;
&lt;li&gt;think in functions and operations, not in objects&lt;/li&gt;
&lt;li&gt;try to avoid common procedural constructions (loops and conditions) and replace them with clauses, guards and recursion&lt;/li&gt;
&lt;/ul&gt;
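As a tiny illustration of the last two points, a computation that an imperative loop would express with mutable state can instead be written as a pipeline of transformations:

```elixir
# Sum of the squares of the even numbers from 1 to 10, as a data pipeline:
# filter, then transform, then fold - no loop variable, no mutation.
result =
  1..10
  |> Enum.filter(fn n -> rem(n, 2) == 0 end)
  |> Enum.map(fn n -> n * n end)
  |> Enum.sum()

IO.inspect(result)
```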

&lt;p&gt;&lt;strong&gt;Immutable data structures&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Immutable data structures have a lot of advantages. They are thread-safe, so they can be shared among several threads or processes. Languages that operate on mutable data structures have to copy all the data on every fork; there are optimizations, such as copy-on-write, that avoid the full copy, but they present challenges of their own [1]. Immutable data structures are also almost bug-proof, as you can pass them to any function without worrying that they might be modified.&lt;/p&gt;

&lt;p&gt;But immutability also has its drawbacks, and the main one, obviously, is that you can’t change the value of an element in place. In other words, if you want to change the value of the n-th element of a list, Elixir creates a new list that copies the old one except for the value you changed. However, an optimization allows Elixir to copy only the part of the list before the n-th element and share the tail. That’s why adding an element to the beginning of a list is O(1), while adding to the end is O(n).&lt;/p&gt;
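A short sketch of that asymmetry (the lists here are tiny, but the costs scale with list length):

```elixir
list = [1, 2, 3]

# Prepending reuses the existing list as the tail of the new one: O(1).
prepended = [0 | list]

# Appending has to rebuild every cell in front of the new element: O(n).
appended = list ++ [4]

IO.inspect({prepended, appended})
```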

&lt;p&gt;The main thing to remember about data structures in Elixir is that every time you call a modifying function on a list, map or tuple, it creates and returns a new structure. That’s also why it’s impossible to create a circular data structure in Elixir [2].&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assignment operator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The assignment operator is a common concept in all procedural and object-oriented programming languages. The assignment operator in Ruby (=) works exactly the same way as in dozens of other languages, at least if we talk about simple assignment rather than the combined (+=, -=, etc.) or conditional (||=) forms.&lt;/p&gt;

&lt;p&gt;Despite the fact that Elixir has a similar = operator, it doesn’t work the same way. In fact, it isn’t an assignment operator at all! In Elixir it is the match operator, and its job is to match the left side against the right side [3]. The fact that it also binds variables can be considered a side effect, not its main purpose. We can see the difference in the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irb(main):001:0&amp;gt; a = 1
=&amp;gt; 1
irb(main):002:0&amp;gt; 1 = a
syntax error, unexpected '=', expecting end-of-input (SyntaxError)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(1)&amp;gt; a = 1
1
iex(2)&amp;gt; 1 = a
1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The expression a = 1 returns the same result in both languages. The expression 1 = a causes an error in Ruby, because you can’t assign a new value to 1. But it works without errors in Elixir because, as I mentioned, = is not an &lt;em&gt;assignment&lt;/em&gt; but a &lt;em&gt;match&lt;/em&gt; operator, and the 1 on the left side matches the value of a on the right side. That doesn’t mean, however, that Elixir doesn’t care on which side a variable appears. The following will not work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(3)&amp;gt; 1 = b
** (CompileError) iex:3: undefined function b/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It means that any unbound variable should be on the left side of the match operator.&lt;/p&gt;

&lt;p&gt;The match operator doesn’t just share its syntax with assignment. Unlike most functional languages, Elixir has a feature called variable rebinding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(3)&amp;gt; a = 1
1
iex(4)&amp;gt; a = 2
2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It looks like variable re-assignment, and although it works differently under the hood, the observable behavior is the same.&lt;/p&gt;
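One related feature worth knowing: the pin operator lets you opt out of rebinding and force a match against the variable’s current value (a small sketch):

```elixir
# The second match simply rebinds `a` to a new value.
a = 1
a = 2

# With the pin operator ^, the left side is the *current value* of `a`,
# so matching it against 3 raises a MatchError (2 does not match 3).
result =
  try do
    ^a = 3
  rescue
    MatchError -> :no_match
  end

IO.inspect({a, result})
```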

&lt;p&gt;&lt;strong&gt;Strings and charlists&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s start with a code example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irb(main):001:0&amp;gt; 'hello' == "hello"
=&amp;gt; true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same code in Elixir:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(1)&amp;gt; 'hello' == "hello"
false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, we see code that is identical in both languages but behaves differently. The reason is that single- and double-quoted literals mean entirely different things in Elixir and Ruby. In Ruby, both single and double quotes produce the String data type. The only notable difference is that double-quoted strings support interpolation whereas single-quoted ones don’t, but the result is the same either way - a String object.&lt;/p&gt;
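The Ruby side of this difference fits in a few lines:

```ruby
name = "world"

double = "hello #{name}"  # double quotes interpolate => "hello world"
single = 'hello #{name}'  # single quotes keep the characters literally

double.class  # => String
single.class  # => String - same type either way, unlike Elixir
```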

&lt;p&gt;As opposed to Ruby, Elixir has two different data types to represent text strings: binary strings and charlists [4]. This duality was inherited from Erlang. Double-quoted literals are binary strings, while single-quoted are charlists. That’s exactly why the comparison returned false in our example - those two objects are not equal, because they are not even of the same type.&lt;/p&gt;

&lt;p&gt;Charlists are not widely used in Elixir, in fact they can be considered an Erlang legacy. Nevertheless, developers may mistakenly use them, thinking that they use strings, which can cause some unexpected program behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lists and tuples&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q5J6u7BZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b6rvvx1f02h7uzpr4k2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q5J6u7BZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b6rvvx1f02h7uzpr4k2i.png" alt="Linked list" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, we have similar syntax but very different internal representation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iex(1)&amp;gt; [1, 2, 3, 4, 5]
[1, 2, 3, 4, 5]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This looks the same in both Ruby and Elixir. But in Elixir it’s a list, while in Ruby it’s an array. That sounds like two names for the same concept, but the truth is quite otherwise. Ruby arrays are designed for random access to elements by index, and access time is usually the same for the 1st, 10th or 100th element. This works because internally an array is a contiguous chunk of allocated memory containing items of the same size (in Ruby’s case, pointers to other Ruby objects). Having only a pointer to the beginning of that chunk, the language can reach any element by computing its memory offset: the element’s index multiplied by the element size.&lt;/p&gt;
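The offset calculation described above can be sketched in a few lines of Ruby (the base address and pointer size are made-up illustrative values, not real Ruby internals):

```ruby
# Conceptual sketch of O(1) array indexing: the element's address is
# computed directly from the index, with no traversal involved.
def element_address(base, element_size, index)
  base + index * element_size
end

base_address = 0x1000  # imaginary start of the allocated chunk
pointer_size = 8       # 8-byte pointers on a 64-bit machine

element_address(base_address, pointer_size, 0)    # => 4096 (0x1000)
element_address(base_address, pointer_size, 100)  # => 4896 (0x1320)
```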

&lt;p&gt;Elixir’s lists have a different nature: they represent the linked list data structure [6]. A linked list is a collection of objects, each holding a pointer to the next one. To access the 10th element, you have to start from the first one and follow the pointers 9 times. Access time therefore differs between elements - the larger the index, the longer it takes to reach the element.&lt;/p&gt;
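A minimal Ruby model of such a linked list shows why access cost grows with the index:

```ruby
# Minimal singly linked list: each node only knows its own value and
# a pointer to the next node, so reaching index n costs n hops.
Node = Struct.new(:value, :next_node)

def nth(head, index)
  node = head
  index.times { node = node.next_node }  # follow the pointers one by one
  node.value
end

# Build the list 1 -> 2 -> 3
head = Node.new(1, Node.new(2, Node.new(3, nil)))

nth(head, 0)  # => 1, immediate
nth(head, 2)  # => 3, two hops from the head
```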

&lt;p&gt;But this is not the end of the story. Elixir has one more data type that holds a collection of elements stored contiguously in memory. It’s called a tuple [7]. Tuples are a familiar concept for Python developers, but there is no analogue in Ruby. Tuples give constant access time to any element, but that doesn’t mean they should be used wherever possible: updating or deleting tuple elements is expensive, since every update creates a new tuple (remember that data structures are immutable in Elixir?).&lt;/p&gt;
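Ruby has no tuple type, but a frozen array gives a rough analogy of this copy-on-update cost. The put_elem below is a hypothetical helper mimicking Elixir's put_elem/3, not part of Ruby:

```ruby
# Every "update" copies the whole structure and leaves the original intact.
def put_elem(tuple, index, value)
  copy = tuple.dup     # the entire collection is copied...
  copy[index] = value  # ...so the frozen original is never touched
  copy.freeze
end

tuple = [1, 2, 3].freeze
updated = put_elem(tuple, 1, 99)

updated  # => [1, 99, 3]
tuple    # => [1, 2, 3], unchanged
```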

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this part of the article, I gave an overview of some concepts that may seem familiar to Rubyists when they write Elixir for the first time. But however close these two languages are syntactically, they are based on completely different paradigms, and that may lead to confusion.&lt;/p&gt;

&lt;p&gt;In the second part I will cover 5 remaining concepts mentioned in the beginning of the article.&lt;/p&gt;

&lt;p&gt;Links&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://brandur.org/ruby-memory"&gt;https://brandur.org/ruby-memory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://guitcastro.medium.com/elixir-immutability-and-data-structure-c5f40734d870"&gt;https://guitcastro.medium.com/elixir-immutability-and-data-structure-c5f40734d870&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elixir-lang.org/getting-started/pattern-matching.html"&gt;https://elixir-lang.org/getting-started/pattern-matching.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elixir-lang.org/getting-started/binaries-strings-and-char-lists.html"&gt;https://elixir-lang.org/getting-started/binaries-strings-and-char-lists.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Array_(data_structure)"&gt;https://en.wikipedia.org/wiki/Array_(data_structure)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Linked_list"&gt;https://en.wikipedia.org/wiki/Linked_list&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://elixir-lang.org/getting-started/basic-types.html#tuples"&gt;https://elixir-lang.org/getting-started/basic-types.html#tuples&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ruby</category>
      <category>elixir</category>
      <category>immutability</category>
      <category>programming</category>
    </item>
    <item>
      <title>What are Postgres advisory locks and their use cases</title>
      <dc:creator>Oleg Potapov</dc:creator>
      <pubDate>Mon, 07 Aug 2023 14:48:35 +0000</pubDate>
      <link>https://dev.to/oleg_potapov/what-are-postgres-advisory-locks-and-their-use-cases-49nd</link>
      <guid>https://dev.to/oleg_potapov/what-are-postgres-advisory-locks-and-their-use-cases-49nd</guid>
      <description>&lt;p&gt;Database locks is a powerful feature that allows to manage concurrent access to different types of database resources. Each modern database management system provides its own set of locking mechanisms and so does Postgres. In this article I’ll describe one of the specific lock types in Postgres - advisory locks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to locks
&lt;/h2&gt;

&lt;p&gt;Let’s start with a general description of what a lock is and what types of locks exist. A database lock is a mechanism that provides data consistency when concurrent transactions try to access the same database resources at the same time. A lock can prevent or allow access to the resource depending on the type of operation (read, write). Thus, a locking mechanism is essential for a DBMS to provide ACID guarantees for its transactions [1].&lt;/p&gt;

&lt;p&gt;Database locks can be classified in several different ways. The first is how the lock is obtained: a lock can be &lt;strong&gt;explicit&lt;/strong&gt; or &lt;strong&gt;implicit&lt;/strong&gt;. It’s often said that explicit locks are obtained by the “developers”, but it would be more accurate to say that they are obtained by the database client application. Implicit locks, on the other hand, are acquired automatically by the database server to protect data integrity. This article is dedicated to explicit locks, but it’s worth remembering that implicit locks occur much more often.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpfk8qxjpef94ibthmxw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpfk8qxjpef94ibthmxw.jpg" alt="locks are everywhere"&gt;&lt;/a&gt;&lt;br&gt;
Implicit locks in Postgres occur while executing &lt;em&gt;SELECT&lt;/em&gt;, &lt;em&gt;INSERT&lt;/em&gt;, &lt;em&gt;CREATE INDEX&lt;/em&gt;, &lt;em&gt;ALTER TABLE&lt;/em&gt;, &lt;em&gt;TRUNCATE&lt;/em&gt;, &lt;em&gt;DROP TABLE&lt;/em&gt;, &lt;em&gt;VACUUM&lt;/em&gt; and many other commands.&lt;/p&gt;

&lt;p&gt;Another way to classify locks is by the type of resources they are acquired on. It may be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;row-level - applies on the individual rows in the table&lt;/li&gt;
&lt;li&gt;table-level - applies on the whole table&lt;/li&gt;
&lt;li&gt;page-level - used to control read/write access to table pages in the shared buffer pool&lt;/li&gt;
&lt;li&gt;database-level - applies on the whole database&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some of the database servers can provide more types here depending on the inner entities existing in the particular database.&lt;/p&gt;

&lt;p&gt;Locks also differ by their exclusiveness. The level of exclusiveness determines whether an acquired lock allows other locks to be obtained on the same resource. In general, locks can be &lt;strong&gt;exclusive&lt;/strong&gt; or &lt;strong&gt;shared&lt;/strong&gt;. When an exclusive lock is acquired, the database doesn’t allow any other transaction to acquire any lock on this resource, while a shared lock can be acquired on the same resource by several different transactions at once. But there are also dozens of lock modes that lie between these two extremes, and each DB system provides its own set of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what is an advisory lock?
&lt;/h2&gt;

&lt;p&gt;Postgres offers a special type of lock that is completely driven by the client application. The client controls when to acquire this lock and when to release it. This type of lock is called &lt;strong&gt;advisory&lt;/strong&gt; [2]. To acquire one, the client chooses a unique &lt;strong&gt;key&lt;/strong&gt; (a single 64-bit value or two 32-bit values) and passes it into one of the Postgres advisory lock functions [3].&lt;/p&gt;

&lt;p&gt;What is the purpose of advisory locks? As the Postgres documentation says, they are intended for locking application-defined resources. Usually, such resources have direct analogues in the database, e.g. domain entities map to database rows. But sometimes (we’ll look at examples later) an application-defined resource has no analogue in the database, and an advisory lock is a good fit for such cases. The key mentioned earlier identifies such a resource and is used by Postgres to find other concurrent transactions acquiring locks on the same resource.&lt;/p&gt;

&lt;p&gt;Let’s return to our classification. Advisory locks are obviously explicit, as they are initiated by the client application and there is no other way to obtain them. It gets more interesting when we talk about the type of resource they are acquired on. Since they are intended for custom resources, these resources can be of any type: a table, a row, or a group of rows, even from several tables. There may even be no underlying database resource at all - an advisory lock can be used to manage access to resources stored only in memory or in another database.&lt;/p&gt;

&lt;p&gt;From the exclusiveness point of view advisory locks can be shared or exclusive. Shared locks (initiated by the &lt;em&gt;pg_advisory_lock_shared&lt;/em&gt; function) don’t conflict with other shared locks, they conflict only with exclusive locks. At the same time exclusive locks (initiated by the &lt;em&gt;pg_advisory_lock&lt;/em&gt; function) conflict with any other lock with the same key, both exclusive and shared.&lt;/p&gt;

&lt;p&gt;Another feature of Postgres advisory locks is control over the behavior on conflict, i.e. when the lock for the given identifier is already held. The client can wait until the resource is unlocked and available (using the &lt;em&gt;pg_advisory_lock&lt;/em&gt; function) or just get false back and continue execution (using the &lt;em&gt;pg_try_advisory_lock&lt;/em&gt;, &lt;em&gt;pg_try_advisory_lock_shared&lt;/em&gt; and &lt;em&gt;pg_try_advisory_xact_lock&lt;/em&gt; functions).&lt;/p&gt;

&lt;h2&gt;
  
  
  Session and transaction-level locks
&lt;/h2&gt;

&lt;p&gt;There is another important attribute that can differ between advisory locks. These locks can be obtained on two different levels: the &lt;strong&gt;session&lt;/strong&gt; and the &lt;strong&gt;transaction&lt;/strong&gt; level. Session-level locks (the &lt;em&gt;pg_advisory_lock&lt;/em&gt; function) do not depend on the current transaction and are held until they are released manually (with the &lt;em&gt;pg_advisory_unlock&lt;/em&gt; function) or until the end of the session. Transaction-level locks (the &lt;em&gt;pg_advisory_xact_lock&lt;/em&gt; function) behave in a way more familiar to those who use Postgres row locks - they live until the end of the transaction and don’t require manual unlocking.&lt;/p&gt;

&lt;p&gt;And what does “session” mean in this context? In Postgres a session is the same as a database connection. As connections can be shared between several processes, session-level locks come with a possible issue. If the process that locked a resource dies before calling unlock, the session is not closed, because the connection is still used by other processes. The lock will then be held until the connection is closed, which can take a long time. Thus, I would recommend using transaction-level locks whenever possible to avoid such problems.&lt;/p&gt;

&lt;p&gt;Another nuance is working with PgBouncer [4]. The majority of relatively high-load projects use Postgres with some kind of connection pooler, and PgBouncer is the most popular of them. As you may know, PgBouncer has several modes of connection rotation, and one of them is transaction pooling [5]. In this mode, a server connection is assigned to a client only for the duration of a transaction. When PgBouncer notices that the transaction is over, the connection is returned to the pool. This also makes it impossible to use session-level advisory locks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;The aforementioned features give developers a lot of flexibility in possible use cases. I’ll mention just some of them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;manage concurrent inserts into a single table - instead of locking the whole table, a client can take a lock with a more specific identifier, such as a foreign key or some combination of fields&lt;/li&gt;
&lt;li&gt;lock a table for a single operation while letting other operations go on. Example: you want an operation to work with the table exclusively, not allowing the same operation to run on that table at the same time, while other operations shouldn’t be blocked and should run in parallel. This may be step-by-step table processing or analytical processes; advisory locks here help prevent race conditions.&lt;/li&gt;
&lt;li&gt;lock a set of rows stored in separate tables. You could acquire a lock on each record separately, but sometimes it’s better to treat them as a single set&lt;/li&gt;
&lt;li&gt;multi-thread table processing, e.g. when you use your table as a queue (generally not recommended)&lt;/li&gt;
&lt;li&gt;distributed locking (but usually there are better alternatives [6])&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;As an example, let’s take an implementation of a simple rate-limiting system inside a single table. Say we have a model Like with the fields user_id and object_id, and a business rule that allows a single user to like something only 20 times within an hour. Pseudocode:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

transaction do 
    likes_count = get_hour_likes_by_user_id(user_id)
    if likes_count &amp;lt; 20
        create_like(user_id, object_id)
    end
end


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code will obviously fail to achieve the task under concurrent requests. There is a possible race condition: the likes count can change after it is fetched but before the new like is created. This can easily be fixed by adding an advisory lock to the transaction:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

transaction do 
    lock_key = crc32("user_likes_#{user_id}")
    db_run("select pg_advisory_xact_lock(#{lock_key})")

    likes_count = get_hour_likes_by_user_id(user_id)
    if likes_count &amp;lt; 20
        create_like(user_id, object_id)
    end
end


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The crc32 function here converts the string key to an integer using the CRC algorithm [7].&lt;/p&gt;
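In Ruby, for instance, such a key can be derived with CRC32 from the standard zlib library (the key string matches the hypothetical one used in the pseudocode above):

```ruby
require 'zlib'  # CRC32 lives in Ruby's standard zlib library

user_id = 42
lock_key = Zlib.crc32("user_likes_#{user_id}")

# CRC32 always fits in 32 bits, well within the 64-bit key space
# accepted by pg_advisory_xact_lock
sql = "SELECT pg_advisory_xact_lock(#{lock_key})"
```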

&lt;p&gt;Now the advisory lock guarantees that no new Like entity with the same user_id will be inserted into a table until the transaction commits and the lock is released.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As you can see, advisory locks are a very powerful and flexible tool provided to developers by the Postgres database, and they can be adapted to plenty of scenarios. I’ll leave a few tips for using advisory locks here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;use transaction-level locks instead of session-level if possible&lt;/li&gt;
&lt;li&gt;use a CRC (or similar) algorithm to generate integer keys from strings - it helps avoid accidentally reusing the same key for different tables&lt;/li&gt;
&lt;li&gt;currently held advisory locks can be inspected in the &lt;em&gt;pg_locks&lt;/em&gt; system view by filtering on &lt;em&gt;locktype = 'advisory'&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;multiple locks with the same key stack, so if you lock a resource several times you should unlock it the same number of times&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/ACID" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/ACID&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS" rel="noopener noreferrer"&gt;https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS" rel="noopener noreferrer"&gt;https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pgbouncer.org/" rel="noopener noreferrer"&gt;https://www.pgbouncer.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pgbouncer.org/features.html" rel="noopener noreferrer"&gt;https://www.pgbouncer.org/features.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://redis.io/docs/manual/patterns/distributed-locks/" rel="noopener noreferrer"&gt;https://redis.io/docs/manual/patterns/distributed-locks/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Cyclic_redundancy_check" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Cyclic_redundancy_check&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>postgres</category>
      <category>postgressql</category>
      <category>lock</category>
      <category>database</category>
    </item>
    <item>
      <title>How Kafka applies zombie fencing</title>
      <dc:creator>Oleg Potapov</dc:creator>
      <pubDate>Mon, 22 May 2023 14:41:03 +0000</pubDate>
      <link>https://dev.to/oleg_potapov/how-kafka-applies-zombie-fencing-1o6e</link>
      <guid>https://dev.to/oleg_potapov/how-kafka-applies-zombie-fencing-1o6e</guid>
      <description>&lt;p&gt;Distributed systems are complicated, we know it. And among thousands of other problems, there is a problem of zombie processes. So, today we’ll discuss when it occurs and how to solve it using the Kafka example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“Zombie process” is a term that came from the operating systems field [1]. In Unix-like systems, a process is called a “zombie” when it has finished its execution but still stays in the process table. In distributed systems, however, “zombie” means exactly the opposite: a zombie process is a process that other components of the system consider “dead” but that hasn’t finished its execution. The most common reason is a temporary network issue that made the process unavailable for some period of time. After the issue is resolved the process comes back online, but while it was unavailable the system has already launched (or elected) its replacement.&lt;/p&gt;

&lt;p&gt;One example is a replication system with one primary node that accepts both reads and writes and several secondary read-only nodes. When the primary node loses its connection to the other nodes, one of the replicas becomes the new leader and takes over its functions. At the same time, the old leader may still be reachable by the application and can still process write requests. But these requests are not replicated to other nodes, since the connection is lost, and the zombie leader doesn’t receive new data from the other nodes either. When the connection is restored, the data on the nodes has diverged and it’s hard to synchronize them without data loss. This problem is also called &lt;strong&gt;split-brain&lt;/strong&gt; [2].&lt;/p&gt;

&lt;p&gt;Split-brain may be dangerous for the whole system's consistency since both “brains” are active and can commit conflicting changes at the same time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eBaru5jP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hf234ahzl8zrao7qb12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eBaru5jP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hf234ahzl8zrao7qb12.png" alt="split-brain problem" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible solutions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The scenario described above is not the only possibility; we can imagine another one. Say we have two data centers and five database replicas: two in the first data center (one of them the current leader) and three in the second. Suddenly the network connection between the data centers is lost, while all the nodes are still working and can be reached from other places. So what do we do? Should the three replicas in the second data center elect a new leader? Keeping in mind that it’s sometimes very hard to determine whether a node is down or just temporarily unavailable, we can imagine plenty of other complicated scenarios.&lt;/p&gt;

&lt;p&gt;That’s why different software products have different solutions for this problem, depending on the purpose of the software and its main function (storing or publishing data). &lt;br&gt;
Still, there are three main principles most of these solutions are based on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Third-party actor (or &lt;strong&gt;witness&lt;/strong&gt;) - an independent application outside of the cluster that can check the availability of all the nodes. Several types of applications can be used for this purpose: load-balancer, Zookeeper, or some other dedicated cluster software. And there are a variety of scenarios of how exactly this application protects the cluster from the split-brain problem. It can check the availability of nodes with heartbeats or register the nodes inside the application, allowing only one primary node to be registered. &lt;/li&gt;
&lt;li&gt;Consensus - the decision about the current leader is based on the nodes’ vote. To promote one of the replicas to be the primary one, it should get a majority of votes, or quorum[3]. This approach is used in MongoDB replica sets. Another example is Hazelcast, which uses the quorum approach for write operations to protect itself from split-brain [4]. When an operation can’t be performed on the sufficient number of cluster members, it raises an exception.&lt;/li&gt;
&lt;li&gt;Generation numbers - there is a generation number available across the cluster, which monotonically increases every time the leader is changed. All the nodes accept only actions performed using the current value of this number. When the old leader is disconnected from other nodes, it will keep the old generation number and won’t be able to apply changes anymore.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These are the three basic principles commonly used, either separately or in combination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What does Kafka do?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apache Kafka is a message broker that, among other features, offers an exactly-once messaging guarantee. I described how it works in general in &lt;a href="https://oleg0potapov.medium.com/how-kafka-achieves-exactly-once-semantics-57fdb7ad2e3f"&gt;one of my previous articles&lt;/a&gt;. Another guarantee Kafka provides is message ordering within one partition. These two features put additional demands on message producers: each producer should be unique inside the cluster and use unique sequential message ids [6]. Thus, it’s critical for the broker to detect “zombie” producers and reject the messages they try to send.&lt;/p&gt;

&lt;p&gt;Each transactional producer in Kafka has its own transactionalID, which is registered in the Kafka cluster with the first operation after the producer starts. There is also an &lt;strong&gt;epoch number&lt;/strong&gt; associated with the transactionalID, stored as metadata in the broker. When a producer registers an existing transactionalID, the broker assumes it’s a new instance of the producer and increases the epoch number. Every transaction includes the producer’s epoch number, and if it is lower than the epoch currently stored by the broker, the Transaction Coordinator rejects the transaction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2QXIXyVS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7r5t3xjary3ug2ef4lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2QXIXyVS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7r5t3xjary3ug2ef4lr.png" alt="Kafka zombie producer fencing" width="800" height="391"&gt;&lt;/a&gt;&lt;br&gt;
Let’s return to the issue described above and see how Kafka handles it. When the first producer instance temporarily fails and another instance appears, the new one invokes the &lt;em&gt;initTransactions&lt;/em&gt; method, which registers the same transactionalID and receives a new epoch number. This number is included in its transactions and checked by the Transaction Coordinator. The check succeeds for the new producer, but when the old instance comes back online and tries to begin a transaction, it is rejected by the coordinator because it carries the old epoch number. In this case, the producer receives a &lt;em&gt;ProducerFencedException&lt;/em&gt; [5] and should finish its execution.&lt;/p&gt;
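The fencing logic itself boils down to a generation-number check. Here is a toy Ruby model of that principle - the class and method names are made up for illustration, this is not Kafka's actual code:

```ruby
# Toy model of epoch-based ("generation number") fencing.
class Coordinator
  def initialize
    @epochs = Hash.new(-1)   # current epoch per transactionalID
  end

  # Each (re)registration of a transactionalID bumps the epoch,
  # implicitly fencing any older instance of the producer.
  def register(txn_id)
    @epochs[txn_id] += 1
  end

  # A transaction is accepted only with the current epoch.
  def accept?(txn_id, epoch)
    epoch == @epochs[txn_id]
  end
end

c = Coordinator.new
old_epoch = c.register("producer-1")  # original instance gets epoch 0
new_epoch = c.register("producer-1")  # replacement instance gets epoch 1
c.accept?("producer-1", new_epoch)    # => true
c.accept?("producer-1", old_epoch)    # => false - the zombie is fenced
```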

&lt;p&gt;Another thing that deserves a separate mention is unfinished transactions. When a new producer instance registers itself in the broker, it can’t start until all transactions of the previous instance are completed. To ensure that, the Transaction Coordinator finds all transactions with the associated transactionalID which have no COMMITTED message in the transaction log. (I briefly described how the Transaction Coordinator aborts and commits a transaction in the article about Kafka exactly-once semantics [6].) If a PREPARE_COMMIT message is already written to the transaction log, the commit process has already started and the coordinator completes it. Otherwise the transaction is aborted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The split-brain issue can be a serious challenge in a distributed system, mainly because once it has occurred, it is very hard to repair the failures it has created. That’s why it’s better to think ahead, and if you can’t eliminate the possibility, you should at least have an emergency plan.&lt;/p&gt;

&lt;p&gt;Luckily, most modern software products designed for distributed systems take responsibility for handling these errors. Apache Kafka is one of them. Its architecture already contains a Transaction Coordinator module, which runs inside every broker and behaves as a third-party actor that internally registers producers. This makes fencing “zombie” producers quite easy: all you should do is assign an application-unique transactionalID to the producer, and everything else is handled by the broker. Thus, using Kafka, you can be sure you are protected from zombies - but never too sure, as this problem may appear in other places of your application! &lt;/p&gt;

&lt;p&gt;Links&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Zombie_process"&gt;https://en.wikipedia.org/wiki/Zombie_process&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Split-brain_(computing)"&gt;https://en.wikipedia.org/wiki/Split-brain_(computing)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Quorum_(distributed_computing)"&gt;https://en.wikipedia.org/wiki/Quorum_(distributed_computing)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.hazelcast.com/imdg/4.2/network-partitioning/split-brain-protection"&gt;https://docs.hazelcast.com/imdg/4.2/network-partitioning/split-brain-protection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kafka.apache.org/23/javadoc/index.html?org/apache/kafka/common/errors/ProducerFencedException.html"&gt;https://kafka.apache.org/23/javadoc/index.html?org/apache/kafka/common/errors/ProducerFencedException.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://oleg0potapov.medium.com/how-kafka-achieves-exactly-once-semantics-57fdb7ad2e3f"&gt;https://oleg0potapov.medium.com/how-kafka-achieves-exactly-once-semantics-57fdb7ad2e3f&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.confluent.io/tutorials/message-ordering/kafka.html"&gt;https://developer.confluent.io/tutorials/message-ordering/kafka.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.confluent.io/blog/transactions-apache-kafka/"&gt;https://www.confluent.io/blog/transactions-apache-kafka/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kafka</category>
      <category>distributedsystems</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Microservices communication: Fetching data from another service</title>
      <dc:creator>Oleg Potapov</dc:creator>
      <pubDate>Tue, 14 Mar 2023 09:35:36 +0000</pubDate>
      <link>https://dev.to/oleg_potapov/microservices-communication-fetching-data-from-another-service-2e3c</link>
      <guid>https://dev.to/oleg_potapov/microservices-communication-fetching-data-from-another-service-2e3c</guid>
      <description>&lt;p&gt;The problem we’ll talk about is quite common, as microservices can’t be fully independent. But very often one service needs some data from another one to be able to invoke the business logic or to return this data to the client. In the classical implementation of microservices architecture each service has its own database, so to get this information there should be a method to connect.&lt;/p&gt;

&lt;p&gt;First I will mention two solutions that are commonly used but their main purpose is different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. UI composition&lt;/strong&gt;&lt;br&gt;
UI composition is great mainly because it’s very simple for backend developers: they have to do literally nothing to achieve the goal. It works when data from several services needs to be combined and the logic behind it can be performed on the client side. But, obviously, that’s not always possible, and it’s hard to imagine an application for which this way of fetching data would be enough on its own. Still, it covers some use cases and is worth mentioning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Aggregate service&lt;/strong&gt;&lt;br&gt;
Another way is to have a separate service that calls all the source services, fetches data from each of them, combines the results into one data structure and returns it to the client. This is what the API Gateway service is usually responsible for. The pattern is very common in microservices architecture and works well for most cases of fetching data to serve GET requests. But it is very limited when business logic has to be applied, and such logic usually doesn’t belong in the Gateway service anyway, as all the domain-related logic lives in the dedicated microservice.&lt;/p&gt;

&lt;p&gt;So, these two patterns mentioned above are well-known and widely used but they are not a good fit for the specified problem. That’s why there are other ways.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Direct call to the other service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the most obvious way to solve the task. If you need some data from a particular service, just ask for it! Most of the services have an API and this API can be used to fetch data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tHll0zQn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/607jgqxmr2rjs4njibi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tHll0zQn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/607jgqxmr2rjs4njibi9.png" alt="Direct service calls" width="880" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At first glance, it looks simple: the first service sends a request to the second. It may be an HTTP request, a gRPC call, or something else. The second service fetches data from the database, possibly processes it somehow, and returns it in the specified format. However, it only looks simple, and sometimes it doesn’t work as planned.&lt;/p&gt;
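
&lt;p&gt;As a rough sketch of such a direct call (the service name, URL and timeout below are made up for illustration), the caller should at least set a timeout so it fails fast instead of hanging on an unavailable peer:&lt;/p&gt;

```python
import urllib.request
import urllib.error

def fetch_user(user_id, base_url="http://user-service:8080", timeout=2.0):
    """Call the data-owning service directly over HTTP; fail fast on errors."""
    url = f"{base_url}/api/v1/users/{user_id}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (urllib.error.URLError, TimeoutError):
        # The caller decides what to do next: retry, serve a cached
        # value, degrade gracefully, or propagate the failure.
        return None
```

&lt;p&gt;Returning &lt;code&gt;None&lt;/code&gt; here is just a placeholder; the interesting part is that the failure of the other service becomes an explicit case the caller has to handle.&lt;/p&gt;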

&lt;p&gt;The main thing you should occupy yourself with is coupling. Now the two services are coupled not only by data; there is also a &lt;strong&gt;temporal coupling&lt;/strong&gt;[1]. What’s that? Temporal coupling occurs when one service has to wait for the response from another and can’t continue processing without it. It means that when Service2 is not available, Service1 also can’t handle its requests, even though it’s alive and healthy. And that means that some Service3 that calls Service1 can’t process its requests either. Such situations are called &lt;strong&gt;cascading failures&lt;/strong&gt;. They disrupt the operation of the whole system, so, obviously, we should avoid them. To do that, developers and architects invented a lot of useful patterns, such as &lt;strong&gt;Circuit Breaker&lt;/strong&gt;[2], but they solve only a part of the problem. Temporal coupling remains one of the most serious problems with synchronous communication, and it’s almost impossible to get rid of it altogether.&lt;/p&gt;
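
&lt;p&gt;To illustrate the idea, here is a minimal Circuit Breaker sketch (the thresholds and names are arbitrary): after several consecutive failures the circuit “opens” and calls fail fast for a while, giving the downstream service time to recover:&lt;/p&gt;

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after max_failures consecutive errors the
    circuit opens and calls fail fast for reset_timeout seconds."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one probe call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

&lt;p&gt;Note that this only stops a slow or dead dependency from tying up the caller’s resources; the data Service1 wanted is still unavailable while the circuit is open.&lt;/p&gt;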

&lt;p&gt;Another thing to consider is microservices API versioning. If you are not familiar with it, welcome to a world full of dangers and surprises. There are a lot of recommendations on how to do versioning properly: Semantic Versioning[3], URL versioning, using headers, etc. But it can still become a pain in the neck at any time, regardless of which side you are on - API creator or API consumer. As an API creator, you don’t want to break all the clients with the next microservice deployment, so you have to maintain not only the latest version of the API but also the previous one (or the previous 10?). While the service is under active development, maintaining its backward compatibility may be tricky. As an API consumer, you should be confident that nothing breaks after the API version changes. How to automate that? Here we move to the next question to think about.&lt;/p&gt;

&lt;p&gt;Automated testing is something we can’t live without when developing big software products. But what is the point of having services that are well-tested in isolation but can’t communicate properly? And testing service communication is not that simple and requires proper infrastructure. One of the possible solutions is &lt;strong&gt;Contract Testing&lt;/strong&gt;[4].&lt;/p&gt;

&lt;p&gt;Having all that in mind we can make a list of advantages and disadvantages of this approach. &lt;br&gt;
Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;received data is always up-to-date&lt;/li&gt;
&lt;li&gt;data-owning microservice can apply additional logic before sending the response&lt;/li&gt;
&lt;li&gt;data encapsulation is not violated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;temporal coupling&lt;/li&gt;
&lt;li&gt;additional infrastructure work - circuit breakers, service discovery&lt;/li&gt;
&lt;li&gt;need to deal with API versioning&lt;/li&gt;
&lt;li&gt;may be hard to test&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Embedded library&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the direct service calls diagram one could conclude that there is one redundant network call. Why can’t the service read the information directly from the place it’s stored? The main obstacle is one of the backbone principles of microservices - don’t share a database across services. You will be on the safer side if you use the &lt;strong&gt;Database per service&lt;/strong&gt; pattern[5] instead. According to this pattern, there should be a dedicated database for each service, with exclusive access rights. But what’s wrong with a shared database? One of the problems is data encapsulation. When there is a single service responsible for working with the database, it encapsulates the internal data structure. But when several services access it, each of them has to know this structure and how to query the database. The solution is to create another layer of encapsulation. It may be a package, library or SDK included in the service codebase.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GCffazJI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjm7cbwr21dru5zxzjes.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GCffazJI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjm7cbwr21dru5zxzjes.png" alt="Embedded library call" width="234" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This package (or module, or whatever) may be developed and maintained by the team responsible for the data, but it is included in the other services as a dependency and deployed with them. It may contain read-only database access and some thin logic for fetching and composing the data.&lt;/p&gt;
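
&lt;p&gt;A sketch of what such a library might look like (the table and the API below are hypothetical): the owning team publishes a small read-only client that hides the schema behind a query method, and other services only ever call that method:&lt;/p&gt;

```python
import sqlite3  # stand-in for any database driver

class UsersReadClient:
    """Hypothetical read-only client library: it encapsulates the table
    layout so that embedding services never write SQL themselves."""

    def __init__(self, conn):
        # The connection (and its credentials) still has to be configured
        # in every service that embeds the library.
        self._conn = conn

    def get_email(self, user_id):
        row = self._conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row is not None else None
```

&lt;p&gt;If the owning team renames a column, only the library changes; the services just pick up a new library version - which is exactly where the versioning problem discussed below comes from.&lt;/p&gt;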

&lt;p&gt;Unfortunately, behind all this simplicity there are a lot of hidden problems, and the number of possible pitfalls may be even greater than in the previous approach. While you no longer have to worry about microservice API versions, you’ve got a new problem to deal with - the necessity to maintain library versions. In some cases it may be simpler for the library developers, because not every service release will require a library update. But sometimes maintaining several versions of the library is the same mess as maintaining several microservice API versions.&lt;/p&gt;

&lt;p&gt;Another thing to keep an eye on is the way this dependency is managed. It has to be included in the service via a package manager or a similar tool. This becomes a problem when two services are implemented in different programming languages: the library developers will have to implement a client in the other language as well, or rely on some kind of cross-language tooling.&lt;/p&gt;

&lt;p&gt;There is one more issue related to encapsulation. Even though other services don’t use the data schema internals directly, the developers of the owning service still can’t relax. When making changes to the database schema, they have to make sure these changes won’t break the previous versions of the library.&lt;br&gt;
There may be a lot of other non-obvious and unexpected pitfalls, especially when the product grows large enough. One such thing, for example, is the number of database connections. If each instance of the client library keeps one open connection and each service contains several libraries, the number of connections may get too large and even hit the limit.&lt;/p&gt;

&lt;p&gt;Here is a brief summary.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;received data is always up-to-date&lt;/li&gt;
&lt;li&gt;no additional infrastructure or circuit breakers required&lt;/li&gt;
&lt;li&gt;client library may contain additional data-fetching logic&lt;/li&gt;
&lt;li&gt;usually easier to start with&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;possible schema-related issues&lt;/li&gt;
&lt;li&gt;need to deal with library versioning&lt;/li&gt;
&lt;li&gt;dependency management tools required&lt;/li&gt;
&lt;li&gt;security - all the client library copies use database connection credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Local data projection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You may also come across unusual cases of fetching data that doesn’t belong to your service. It may be some kind of search, or you may need to join such data with the data in your own service. Although there may be many ways to manipulate the data, the owning service usually provides only one way to fetch it and only one format. That format, the fetching options, or even the structure of the data may not suit your service. All these problems can be solved by building a local data projection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--caDkk6qL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk70zutdf9mwypcbw9dt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--caDkk6qL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk70zutdf9mwypcbw9dt.png" alt="Local data projection" width="880" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data is saved in the Service2 database and then transferred to the Service1 database. The copy is saved there in the same or a changed format. Service1 can then use this data, sending requests only to its local database, and if Service2 or its database is not available, it doesn’t affect Service1 in any way.&lt;/p&gt;

&lt;p&gt;The first self-evident peril here is propagation delay: there is no guarantee that the data hasn’t already changed at the moment of the request. Another question is how to copy changes from one database to another. It can be done with database tools (e.g. replication), message brokers, or Change Data Capture instruments (you can read more about it in my &lt;a href="https://medium.com/@oleg0potapov/events-patterns-message-relay-with-change-data-capture-1da9b584758e"&gt;previous article&lt;/a&gt;). Either way, it will require additional work on implementation and maintenance.&lt;/p&gt;
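
&lt;p&gt;Whatever the transport, on the receiving side the projection is typically maintained by applying change events to the local copy. A minimal sketch, with made-up event names and fields:&lt;/p&gt;

```python
# Service1's local projection of Service2's product data.
# Events arrive asynchronously (broker, CDC, replication), so the
# projection is only eventually consistent with the source database.
local_products = {}

def apply_event(event):
    """Apply one change event from Service2 to the local projection."""
    if event["type"] in ("ProductCreated", "ProductUpdated"):
        # Store only the fields (and format) Service1 actually needs.
        local_products[event["id"]] = {
            "name": event["name"],
            "price": event["price"],
        }
    elif event["type"] == "ProductDeleted":
        local_products.pop(event["id"], None)
```

&lt;p&gt;The upside is visible even in this toy version: Service1 is free to reshape the data into whatever structure suits its queries.&lt;/p&gt;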

&lt;p&gt;Here is the list of pros and cons of this approach.&lt;br&gt;
Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;more flexibility with data format and data structure&lt;/li&gt;
&lt;li&gt;no coupling with other services, the service depends only on data in its local database&lt;/li&gt;
&lt;li&gt;ability to work with data from the local database, e.g. making joins, map-reduce, etc.&lt;/li&gt;
&lt;li&gt;easier testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;propagation delay, need to handle eventual consistency&lt;/li&gt;
&lt;li&gt;additional work to implement and maintain reliable changes propagation process&lt;/li&gt;
&lt;li&gt;data duplication, additional disk space is used&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So we have three options to choose from. Depending on the specifics of the project and the task at hand, any of them may be the best or the worst choice. As for the third option (&lt;strong&gt;local projection&lt;/strong&gt;), I wouldn’t use it under normal circumstances. It comes in handy in rare situations when you need to change the data structure or make a join, and for that reason it’s worth knowing about.&lt;/p&gt;

&lt;p&gt;Choosing between embedded library and direct calls, I would consider the size of the project and the team working on it. When the team is not big enough and the whole project is stored in the monorepo, an &lt;strong&gt;embedded library&lt;/strong&gt; will probably be an easier option to start with. But when the project contains thousands of microservices implemented with different technologies, the drawbacks will probably outweigh the advantages and it would be better to use &lt;strong&gt;direct calls&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Coupling_(computer_programming)"&gt;https://en.wikipedia.org/wiki/Coupling_(computer_programming)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern"&gt;https://en.wikipedia.org/wiki/Circuit_breaker_design_pattern&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://semver.org/"&gt;https://semver.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://microservices.io/patterns/testing/service-integration-contract-test.html"&gt;https://microservices.io/patterns/testing/service-integration-contract-test.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://microservices.io/patterns/data/database-per-service.html"&gt;https://microservices.io/patterns/data/database-per-service.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@oleg0potapov/events-patterns-message-relay-with-change-data-capture-1da9b584758e"&gt;https://medium.com/@oleg0potapov/events-patterns-message-relay-with-change-data-capture-1da9b584758e&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>microservices</category>
    </item>
    <item>
      <title>Message Brokers: Queue-based vs Log-based</title>
      <dc:creator>Oleg Potapov</dc:creator>
      <pubDate>Wed, 22 Feb 2023 13:18:38 +0000</pubDate>
      <link>https://dev.to/oleg_potapov/message-brokers-queue-based-vs-log-based-2f21</link>
      <guid>https://dev.to/oleg_potapov/message-brokers-queue-based-vs-log-based-2f21</guid>
      <description>&lt;p&gt;Message brokers are one of the essential elements of modern distributed application architecture. They give different parts of the system the ability to exchange messages asynchronously. Asynchronous messaging becomes more and more important, and so do message brokers. Today there are a lot of options to choose from: not only RabbitMQ and Kafka, but many more, including cloud-native solutions provided by the biggest cloud providers like AWS or Azure. However, most of them can be divided into two groups: queue-based and log-based. Let’s see how they work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Queue-based&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Queue-based systems are built on the queue data structure that should be familiar to everyone from their first steps in Computer Science. A queue is a simple data structure working by the FIFO (First In - First Out) principle. In its simplest form, the system consists of three components: the producer sending messages to the queue, the consumer receiving them from the queue, and the queue itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvl6ja8055go7niaogec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvl6ja8055go7niaogec.png" alt="Simplest queue-based broker" width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even such a primitive system may be useful, but real applications, especially those based on Event-Driven Architecture, require more complex topologies. Usually, they need the same event to be published to several other subsystems that will handle it separately. In other words, an event, once produced, should be available for several consumers. Thus the concept of topics is used in most queue-based brokers (in RabbitMQ they are called exchanges).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr01ho51p40zhz1reuare.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr01ho51p40zhz1reuare.png" alt="Queues and topics" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The producer sends a message to the topic, and only from there is the message distributed to the queues. The implementation of this process may vary; for example, in RabbitMQ the topic (exchange) is just a routing rule defining which queues the message should be put in[1].&lt;/p&gt;

&lt;p&gt;Why can’t several consumers receive the same message from one queue? That’s one of the key points and the biggest difference between queues and logs. Once a message is delivered to the consumer and an acknowledgement is received, it is removed from the queue and is no longer available to other consumers. That’s how reliability is achieved: the consumer holds no state and always receives the first message in the queue. When the consumer restarts, it receives the first message it didn’t acknowledge before failing.&lt;/p&gt;
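
&lt;p&gt;The mechanics described above can be sketched in a few lines (a toy model, not a real broker): a topic fans each message out to its bound queues, and a delivered message is removed only after an acknowledgement:&lt;/p&gt;

```python
from collections import deque

class QueueBroker:
    """Toy queue-based broker: topics fan messages out to bound queues;
    a delivered message is removed only once the consumer acks it."""

    def __init__(self):
        self.queues = {}    # queue name -> deque of pending messages
        self.bindings = {}  # topic name -> list of bound queue names
        self.unacked = {}   # queue name -> message awaiting an ack

    def bind(self, topic, queue):
        self.bindings.setdefault(topic, []).append(queue)
        self.queues.setdefault(queue, deque())

    def publish(self, topic, message):
        # Each bound queue gets its own copy of the message.
        for queue in self.bindings.get(topic, []):
            self.queues[queue].append(message)

    def consume(self, queue):
        # Redeliver the unacked message first (the consumer may have crashed).
        if queue in self.unacked:
            return self.unacked[queue]
        if self.queues[queue]:
            self.unacked[queue] = self.queues[queue].popleft()
            return self.unacked[queue]
        return None

    def ack(self, queue):
        self.unacked.pop(queue, None)  # now the message is gone for good
```

&lt;p&gt;Notice that the consumer keeps no position of its own: "what to read next" is entirely the broker’s state, which is exactly what changes in the log-based model below.&lt;/p&gt;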

&lt;p&gt;The most popular queue-based brokers are RabbitMQ, ZeroMQ, ActiveMQ and Amazon SQS; Redis Pub/Sub is often used in the same role, although strictly speaking it isn’t a message broker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log-based&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As follows from the name, the main difference of log-based message brokers is the use of a log as the message store. A log is persistent storage, and therefore several consumers can read from it in parallel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwyi6g33mr9zdxy9577b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwyi6g33mr9zdxy9577b.png" alt="Log-based broker" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each consumer works at its own speed and reads messages from its own position in the log. On the one hand, this makes consumers independent from each other and more decoupled from the broker. Another advantage is that multiple consumers can work with a single log, so there is no need to create additional entities for that purpose as in the case of queue-based brokers.&lt;br&gt;
But this approach introduces another complexity: consumers’ offsets (or cursors) must be stored somewhere. Keeping them inside the consumers is not a good idea. If a consumer fails or goes stale and is replaced by another one, the new instance needs access to the previous instance’s cursor; otherwise it starts reading messages from the beginning, which is usually not what we want.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk8uw74lw9jli9j9fhmk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdk8uw74lw9jli9j9fhmk.png" alt="Cursors storage is required for log-based brokers" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thus additional cursor storage is needed. A separate service might be used for this purpose, but it would require its own maintenance and monitoring. Probably the better option is to store these cursors inside the broker. One example of this approach is Apache Kafka - it stores consumers’ offsets in an internal topic called __consumer_offsets. Apache Pulsar does something similar: it saves cursors in Apache BookKeeper, just like the rest of its data.&lt;/p&gt;
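
&lt;p&gt;The same toy-model approach shows the log-based side: the log is append-only, nothing is removed on consumption, and the broker keeps a separate offset per consumer:&lt;/p&gt;

```python
class LogBroker:
    """Toy log-based broker: an append-only log plus per-consumer offsets
    kept on the broker side (as Kafka does in __consumer_offsets)."""

    def __init__(self):
        self.log = []      # messages are never removed on consumption
        self.offsets = {}  # consumer name -> next position to read

    def publish(self, message):
        self.log.append(message)

    def poll(self, consumer):
        pos = self.offsets.get(consumer, 0)
        if pos == len(self.log):
            return None                  # caught up; nothing new yet
        self.offsets[consumer] = pos + 1  # commit the advanced offset
        return self.log[pos]
```

&lt;p&gt;Because consumption only moves a cursor, adding one more consumer costs nothing: it just gets its own offset over the same log.&lt;/p&gt;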

&lt;p&gt;The durability of the log brings another advantage: any consumer can read messages from any position and even replay the whole log of events from the start. It’s not a common case, but having the full log may be useful for event-driven systems in the context of eventual consistency.&lt;/p&gt;

&lt;p&gt;The main representatives of the log-based group are Apache Kafka, Apache Pulsar and Amazon Kinesis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Messages Order&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the biggest differences between the two broker types comes to light when some kind of message ordering is required. There may be different kinds of ordering. The first one is related to the producer. When the producer publishes Event1 and then Event2, they must be stored in the broker and then received by the consumer in the same order. For example, when the Orders service publishes an OrderPlaced event and then an OrderCancelled event, the last one shouldn’t be processed before the first, because the consumer will not be able to handle it properly. This kind of ordering is provided by most of the brokers from both groups and is already a built-in functionality for them.&lt;/p&gt;

&lt;p&gt;But there is another kind of ordering - among related messages produced by separate services. Let’s look at an example. We have three services: Orders, Payments and Fulfillments. The Orders service publishes OrderPlaced and OrderCancelled events, the Payments service publishes the OrderPaid event, and the Fulfillments service consumes these events, starting to fulfill the order on OrderPaid and stopping on OrderCancelled. But how do we make sure that, for the same order, it consumes the OrderPaid event before OrderCancelled? Otherwise it may lead to unpredictable behavior.&lt;/p&gt;

&lt;p&gt;For queues the solution is quite simple. As they are flexible in building message topologies, it is possible to achieve almost any routing logic. In our case, we can have a separate exchange for every event type (or for every service - in this case, it doesn’t matter) and a single queue connected to both of them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40uh9f3zvzhdh56exay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl40uh9f3zvzhdh56exay.png" alt="Several exchanges connected to the same queue" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Such topologies are definitely one of the biggest features of queues. But there is even more powerful functionality offered by some of them, e.g. exchange-to-exchange bindings and different types of exchanges (like Topic, Fanout and Direct in RabbitMQ).&lt;/p&gt;

&lt;p&gt;However, it’s not that easy to build such a topology with log-based brokers. The same strategy won’t work: if every event is published to a separate topic, the events will be handled separately and there is no way to keep the order. And even though some brokers allow a consumer to consume messages from multiple topics, it doesn’t help in this case. Another strategy is to have a single topic for the whole system and send all events there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs0oetkhyoat268zj21h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs0oetkhyoat268zj21h.png" alt="Several producers write to the same log" width="800" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It solves the problem with the ordering, but is not always the best solution. Now consumers read all the events produced in the system, even though they are usually interested in just a couple of them. And having one big topic for the entire system may become one big problem.&lt;/p&gt;

&lt;p&gt;So, we need to somehow group events by their type and create a separate topic for each group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefgasoddz43e47elk2xu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefgasoddz43e47elk2xu.png" alt="Several producers write to several topics" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is no one and only way how to do it but there are a bunch of helpful recommendations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;if events are related to the same aggregate, put them in one topic&lt;/li&gt;
&lt;li&gt;if events are related to entities that depend on each other, it is worth keeping them in the same topic as well&lt;/li&gt;
&lt;li&gt;if one event is related to several entities, don’t split it into multiple messages; that can be done at later stages of event handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are some more tips and recommendations in the Kafka blog[2].&lt;/p&gt;

&lt;p&gt;To all the problems with log-based topologies, I would add that they may be hard to change if you realize you’ve made a mistake or the system structure has evolved. Queue topologies are usually friendlier in this respect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It seems like a clear win for queues after the previous round, right? Don’t draw quick conclusions here. Everything changes when it comes to scaling. Scaling is always tricky; the question is only whether it is a bit easier or harder. And for log-based brokers, it’s much easier. Usually, to scale Kafka you just increase the number of partitions in the topic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w7bjiw8l46q40wvx4sl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w7bjiw8l46q40wvx4sl.png" alt="Several partitions in the topic for scale" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To keep the ordering of related events, only one thing has to be added: each event should carry a partition key. Events with the same partition key go to the same partition and are therefore handled in the correct order.&lt;/p&gt;
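
&lt;p&gt;The routing rule itself is just a stable hash of the key modulo the number of partitions (here &lt;code&gt;zlib.crc32&lt;/code&gt; stands in for the broker’s real partitioner):&lt;/p&gt;

```python
import zlib

def choose_partition(partition_key, num_partitions):
    """Stable hash of the partition key modulo the partition count, so all
    events for one key land in the same partition and keep their order."""
    return zlib.crc32(partition_key.encode("utf-8")) % num_partitions

# Every event for order 42 - whichever service produced it - is routed
# to the same partition, e.g. choose_partition("order-42", 6).
```

&lt;p&gt;The key property is determinism across producers: any producer hashing the same order id picks the same partition, so per-order ordering survives scaling.&lt;/p&gt;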

&lt;p&gt;Doing the same for queues is much more complex. Queue-based brokers allow multiple consumers on the same queue, but that may cause ordering errors. When multiple consumers listen to the same queue, the Competing Consumers pattern[3] is implemented: the first consumer gets one event, the second consumer gets the next one without waiting for the first to finish, and so on. It’s a great pattern for an independent task queue but not that great for related events. Consumers work in parallel, which means it’s not possible to guarantee that the first event will be handled before the second.&lt;/p&gt;

&lt;p&gt;Clearly, it’s not possible to solve this with one queue. So there have to be separate queues, with events distributed among them at a higher level. One implementation of this idea is the RabbitMQ Consistent Hashing Exchange type[4]. Similar to the partition key mechanism for logs, it distributes messages to queues based on their routing key, using the Consistent Hashing algorithm[5] to pick the queue for a given key. But even with this or similar tools, the solution is not trivial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, summing up the advantages of both types.&lt;/p&gt;

&lt;p&gt;Queue-based brokers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;in theory, should have lower latency thanks to fast queuing protocols like AMQP[6]; in practice, the difference is not that big, as modern log-based brokers use powerful caching and don’t fall behind significantly&lt;/li&gt;
&lt;li&gt;allow to build more flexible messaging topologies, which can be used for complex message routing or prioritization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Log-based brokers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;messages are persisted on disk, which makes it possible to keep the history of events or replay the full sequence&lt;/li&gt;
&lt;li&gt;usually are easier to scale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Brokers of both types can cover most of your needs, but they will require different amounts of work to make the system consistent and reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://hevodata.com/learn/rabbitmq-exchange-type/" rel="noopener noreferrer"&gt;https://hevodata.com/learn/rabbitmq-exchange-type/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.confluent.io/blog/put-several-event-types-kafka-topic/" rel="noopener noreferrer"&gt;https://www.confluent.io/blog/put-several-event-types-kafka-topic/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/CompetingConsumers.html" rel="noopener noreferrer"&gt;https://www.enterpriseintegrationpatterns.com/patterns/messaging/CompetingConsumers.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rabbitmq/rabbitmq-server/tree/main/deps/rabbitmq_consistent_hash_exchange" rel="noopener noreferrer"&gt;https://github.com/rabbitmq/rabbitmq-server/tree/main/deps/rabbitmq_consistent_hash_exchange&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Consistent_hashing" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Consistent_hashing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
