<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: dima853</title>
    <description>The latest articles on DEV Community by dima853 (@dima853).</description>
    <link>https://dev.to/dima853</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2445850%2F4390e480-a0f0-4421-abfd-bac4b5862234.jpeg</url>
      <title>DEV Community: dima853</title>
      <link>https://dev.to/dima853</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dima853"/>
    <language>en</language>
    <item>
      <title>I made a sign with brief explanations of the terms from LLDP</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Fri, 17 Oct 2025 12:22:55 +0000</pubDate>
      <link>https://dev.to/dima853/i-made-a-sign-with-brief-explanations-of-the-terms-from-lldp-515j</link>
      <guid>https://dev.to/dima853/i-made-a-sign-with-brief-explanations-of-the-terms-from-lldp-515j</guid>
      <description>&lt;p&gt;This document (IEEE 802.1AB-2016) is the official standard for the LLDP protocol, which lets network devices automatically discover each other and exchange service information. &lt;strong&gt;All these acronyms are integral parts of the standard, and they describe exactly how devices "get to know" each other on the network.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Russian&lt;/strong&gt; - &lt;a href="https://gist.github.com/dima853/54ae0137ef51220c0ab9c7db40a12dbb" rel="noopener noreferrer"&gt;https://gist.github.com/dima853/54ae0137ef51220c0ab9c7db40a12dbb&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;In English&lt;/strong&gt; - &lt;a href="https://github.com/dima853/self_university/blob/main/network/osi/L2/8021AB-2016/acronyms.md" rel="noopener noreferrer"&gt;https://github.com/dima853/self_university/blob/main/network/osi/L2/8021AB-2016/acronyms.md&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;More&lt;/strong&gt; - &lt;a href="https://t.me/dima853_code" rel="noopener noreferrer"&gt;https://t.me/dima853_code&lt;/a&gt;
&lt;/h1&gt;

</description>
      <category>programming</category>
      <category>lowcode</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>INLINE vs. NORMAL FUNCTIONS: What's Really Happening in Assembly?</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Fri, 10 Oct 2025 15:22:50 +0000</pubDate>
      <link>https://dev.to/dima853/inline-vs-normal-functions-whats-really-happening-in-assembly-31l0</link>
      <guid>https://dev.to/dima853/inline-vs-normal-functions-whats-really-happening-in-assembly-31l0</guid>
      <description>&lt;h2&gt;
  
  
  🔥 INLINE vs. NORMAL FUNCTIONS: What's Really Happening in Assembly?
&lt;/h2&gt;

&lt;p&gt;All sources (code, etc.) - &lt;a href="https://github.com/dima853/self_university/tree/main/network/c/fucntions/inline" rel="noopener noreferrer"&gt;https://github.com/dima853/self_university/tree/main/network/c/fucntions/inline&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everyone talks about 'inline', but few have seen the difference in real assembly. See:&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 With INLINE — code is INSERTED directly:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test_with_inline:
movl (%rdi), %r8d # Load 4 bytes
cmpl (%rsi), %r8d # ← code INSERTED!
je .L18

movl (%rsi), %ecx
cmpl (%rdx), %ecx # ← INSERTED again!
je .L19
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The gist:&lt;/strong&gt; The compiler COPIES the function code to each call site.&lt;/p&gt;

&lt;h3&gt;
  
  
  🐌 WITHOUT INLINE — CALLS to the function:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test_without_inline:
call mac_equals_normal # ← JUMP into the function!
movzbl %al, %ecx

call mac_equals_normal # ← JUMP AGAIN!
movl %eax, %esi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The gist:&lt;/strong&gt; Each call is a jump to a different memory location.&lt;/p&gt;

&lt;h3&gt;
  
  
  🍔 SIMPLE ANALOGY:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Without inline&lt;/strong&gt; = "Courier: you → courier → restaurant → courier → you"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;With inline&lt;/strong&gt; = "Microwave: you → microwave → done!"&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ⚡ RESULT:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;With inline:&lt;/strong&gt; ~4 instructions per comparison&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Without inline:&lt;/strong&gt; ~20+ instructions (call + return)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🎯 CONCLUSION:
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;inline&lt;/code&gt; is when the compiler stops "sending the courier" and starts "microwave" right there!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command for testing:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcc &lt;span class="nt"&gt;-S&lt;/span&gt; &lt;span class="nt"&gt;-O2&lt;/span&gt; test.c &lt;span class="nt"&gt;-o&lt;/span&gt; test.s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  #programming #C #assembler #optimization #inline #compiler
&lt;/h1&gt;

</description>
      <category>programming</category>
      <category>c</category>
      <category>cpp</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>🛡️ OCTAL ACCESS RIGHTS IN LINUX: A COMPLETE GUIDE</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Fri, 10 Oct 2025 13:16:04 +0000</pubDate>
      <link>https://dev.to/dima853/octal-access-rights-in-linux-a-complete-guide-3kj0</link>
      <guid>https://dev.to/dima853/octal-access-rights-in-linux-a-complete-guide-3kj0</guid>
      <description>&lt;p&gt;Everything you need to know about permissions: from basic RWX to advanced scenarios with setuid, setgid and sticky bit&lt;br&gt;
• 100+ practical examples&lt;br&gt;
• Ready-to-use solutions&lt;br&gt;
• From basics to production security&lt;/p&gt;
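&lt;p&gt;The core idea fits in a few commands (a sketch; the file names are arbitrary):&lt;/p&gt;

```shell
# Each octal digit is owner/group/other; within a digit r=4, w=2, x=1.
touch demo.txt
chmod 644 demo.txt         # rw-r--r--  (6 = 4+2)
chmod 755 demo.txt         # rwxr-xr-x  (7 = 4+2+1)
stat -c '%a %n' demo.txt   # prints the octal mode: 755 demo.txt

mkdir -p shared
chmod 1777 shared          # leading 1 = sticky bit, as on /tmp
stat -c '%a %n' shared     # 1777 shared
```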

&lt;p&gt;in Russian/English - &lt;a href="https://t.me/dima853_code/111" rel="noopener noreferrer"&gt;https://t.me/dima853_code/111&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  #Linux #DevOps #SysAdmin #Permissions #Security #Guide
&lt;/h1&gt;

</description>
      <category>linux</category>
      <category>programming</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>I'm creating a social network that is not hackable, trackable, or censorable - by design.</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Fri, 10 Oct 2025 13:13:15 +0000</pubDate>
      <link>https://dev.to/dima853/im-creating-a-social-network-that-is-not-hacable-trackable-or-censored-by-design-3kmo</link>
      <guid>https://dev.to/dima853/im-creating-a-social-network-that-is-not-hacable-trackable-or-censored-by-design-3kmo</guid>
      <description>&lt;p&gt;Hello everyone! I recently created a Telegram channel, where I will be doing the following: &lt;/p&gt;

&lt;h1&gt;
  
  
  Subscribe - &lt;a href="https://t.me/dima853_code" rel="noopener noreferrer"&gt;https://t.me/dima853_code&lt;/a&gt;
&lt;/h1&gt;

&lt;p&gt;A DETAILED article about NullPointerException in Java will be released soon.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;I also made a detailed guide about octal access rights in Linux.&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Here -&amp;gt; &lt;a href="https://t.me/dima853_code/111" rel="noopener noreferrer"&gt;https://t.me/dima853_code/111&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm creating a social network with full control over the stack, from the transparent bridge at L2 up to the UI components. No ready-made solutions in critical places.&lt;/p&gt;

&lt;p&gt;I'm studying security, cryptography, low-level, network programming, and backend. I'm documenting the path to zero-to-one engineering. Posts about science, code, and hardware.&lt;/p&gt;

&lt;p&gt;What I'm doing:&lt;br&gt;
• Writing a social network on my own engines&lt;br&gt;
• Digging into Linux, hardware, networks, and cryptography&lt;br&gt;
• Solving LeetCode (373+ problems, 2 problems every day)&lt;br&gt;
• Studying math and CS theory for practical application&lt;/p&gt;

&lt;p&gt;What I will share:&lt;br&gt;
• Architectural solutions for social networks&lt;br&gt;
• Scientific notes (cryptography, probability theory, algorithms)&lt;br&gt;
• Low-level optimization (C, assembler)&lt;br&gt;
• Analysis of complex system problems&lt;br&gt;
• Detailed analysis of hardware&lt;/p&gt;

&lt;p&gt;Links:&lt;br&gt;
GitHub: &lt;a href="https://github.com/dima853" rel="noopener noreferrer"&gt;https://github.com/dima853&lt;/a&gt;&lt;br&gt;
LeetCode: &lt;a href="https://leetcode.com/u/dima853" rel="noopener noreferrer"&gt;https://leetcode.com/u/dima853&lt;/a&gt;&lt;br&gt;
Telegram - &lt;a href="https://t.me/dima853_code" rel="noopener noreferrer"&gt;https://t.me/dima853_code&lt;/a&gt;&lt;br&gt;
LinkedIn: &lt;a href="https://linkedin.com/in/dima853" rel="noopener noreferrer"&gt;https://linkedin.com/in/dima853&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Goal: Disassemble the internet into atoms and put it back together.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>2.3.2 - 2.3.5 Concepts (System Performance Brendan Gregg 2nd)</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Fri, 11 Jul 2025 11:45:13 +0000</pubDate>
      <link>https://dev.to/dima853/232-235-concepts-system-performance-brendan-gregg-2nd-1o9n</link>
      <guid>https://dev.to/dima853/232-235-concepts-system-performance-brendan-gregg-2nd-1o9n</guid>
      <description>&lt;h1&gt;
  
  
  2.3.3 Trade-Offs
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foks8wx4b1y3xrkvmnfkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foks8wx4b1y3xrkvmnfkm.png" alt=" " width="675" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Trade-offs in IT: "Pick two" and performance optimization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The image illustrates the classic principle &lt;strong&gt;"Good, Fast, Cheap — choose two"&lt;/strong&gt;, adapted for IT projects:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High performance&lt;/strong&gt; (High-Performance)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deadlines&lt;/strong&gt; (On-Time)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Budget&lt;/strong&gt; (Inexpensive)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Teams usually choose &lt;strong&gt;"On-Time + Inexpensive"&lt;/strong&gt;, sacrificing performance, which leads to problems:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Suboptimal storage architecture.
&lt;/li&gt;
&lt;li&gt;Inefficient programming languages.
&lt;/li&gt;
&lt;li&gt;Lack of tools for performance analysis.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. Key trade-offs in IT&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;(1) CPU vs Memory&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;In-memory caching&lt;/strong&gt; → reduces CPU load, but requires more RAM.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data compression&lt;/strong&gt; → saves memory, but increases CPU load.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt; (in-memory cache) vs &lt;strong&gt;PostgreSQL&lt;/strong&gt; (page compression).
&lt;/li&gt;
&lt;/ul&gt;
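&lt;p&gt;The compression side of this trade-off is easy to feel in the shell (a sketch, with &lt;code&gt;gzip&lt;/code&gt; standing in for any compressor):&lt;/p&gt;

```shell
# Spend CPU cycles to save bytes: compress 1 MB of redundant data.
head -c 1000000 /dev/zero > data.bin
time gzip -c data.bin > data.bin.gz    # the CPU cost
ls -l data.bin data.bin.gz             # the storage saving
```

&lt;p&gt;Decompression on every read is the mirror image: memory is saved, CPU is paid again.&lt;/p&gt;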

&lt;h3&gt;
  
  
  &lt;strong&gt;(2) File System block Size&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;
&lt;strong&gt;Small block&lt;/strong&gt; (4 KB)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;Large block&lt;/strong&gt;(64 KB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Better for random I/O operations&lt;/td&gt;
&lt;td&gt;Better for streaming reads/writes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uses cache more efficiently&lt;/td&gt;
&lt;td&gt;Speeds up backups&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;(3) Network Buffer size&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Small buffer&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Large buffer&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Less memory load&lt;/td&gt;
&lt;td&gt;Higher bandwidth&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Where is it more efficient to adjust performance?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The closer the tuning is to the application level, the greater the effect:  &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Level&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Examples of optimizations&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Potential gain&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Application&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Optimization of SQL queries and logic&lt;/td&gt;
&lt;td&gt;Up to 20x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Database&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Indexes, partitioning&lt;/td&gt;
&lt;td&gt;Up to 5x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File system&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Configuring the cache, block size&lt;/td&gt;
&lt;td&gt;Up to 2x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Storage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;RAID, SSD vs HDD&lt;/td&gt;
&lt;td&gt;Up to 1.5x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Why?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
An application-level fix eliminates work at all lower levels of the stack.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Problems of modern approaches&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The race for functionality&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers focus on correctness, not performance.
&lt;/li&gt;
&lt;li&gt;Performance problems are detected already in production.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cloud environments&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frequent updates (weekly/daily) complicate long-term monitoring.
&lt;/li&gt;
&lt;li&gt;It is important to &lt;strong&gt;monitor the OS&lt;/strong&gt;, even if the problem is in the application.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Practical advice&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;(1) For developers&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Profile the code&lt;/strong&gt; before release (for example, &lt;code&gt;perf&lt;/code&gt; for Linux, &lt;code&gt;VTune&lt;/code&gt; for Intel).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoid N+1 queries&lt;/strong&gt; in databases.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache&lt;/strong&gt; hot data (Redis, Memcached).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;(2) For admins&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Set up OS monitoring&lt;/strong&gt; (Prometheus + Grafana for CPU/RAM/I/O metrics).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize block sizes&lt;/strong&gt; for load (for example, &lt;code&gt;ext4&lt;/code&gt; with &lt;code&gt;bs=64k&lt;/code&gt; for DBMS).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use modern storage&lt;/strong&gt; (NVMe for high-load systems).
&lt;/li&gt;
&lt;/ul&gt;
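&lt;p&gt;The block-size bullet can be sanity-checked with &lt;code&gt;dd&lt;/code&gt; (a rough sketch; absolute numbers depend heavily on the disk and page cache):&lt;/p&gt;

```shell
# Write the same 16 MB with small and large blocks and compare timings.
dd if=/dev/zero of=small.bin bs=4k  count=4096 2> small.log
dd if=/dev/zero of=large.bin bs=64k count=256  2> large.log
tail -n 1 small.log large.log   # larger blocks mean fewer syscalls per byte
```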




&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sacrifice one of the three&lt;/strong&gt;: speed, quality, or cost.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize at the application level&lt;/strong&gt; — this gives maximum effect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor the OS&lt;/strong&gt;, even if the problem seems to be "applied".
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;"Late optimization is the root of all evil. But even worse is blind optimization without measurements."&lt;br&gt;&lt;br&gt;
&lt;em&gt;— Adaptation of a quote by Donald Knuth&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;







&lt;h3&gt;
  
  
  &lt;strong&gt;Optimizing block size for PostgreSQL: how to choose and why?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The size of a block (page) in PostgreSQL critically affects performance. Let's figure out how to choose it correctly for your workload.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;1. What is a "block" in PostgreSQL?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Standard size: &lt;strong&gt;8 KB&lt;/strong&gt; (can be changed from 1 to 32 KB during compilation).&lt;/li&gt;
&lt;li&gt;One page = 1 block = minimum unit of I/O operations.&lt;/li&gt;
&lt;li&gt;Stores: table rows, indexes, metadata.&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;2. How does block size affect performance?&lt;/strong&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Small block (4 KB)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Large block (16-32 KB)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;✅ Better for OLTP (lots of random reads/writes of short rows)&lt;/td&gt;
&lt;td&gt;✅ Better for analytical queries (scanning large ranges)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;❌ Accesses disk more often (less data per 1 I/O)&lt;/td&gt;
&lt;td&gt;❌ Spends memory on empty areas if the rows are small&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
If you have a chat app (lots of inserts/updates of short messages), &lt;strong&gt;8 KB&lt;/strong&gt; is optimal.&lt;br&gt;&lt;br&gt;
For a Data Warehouse (large-table analytics), &lt;strong&gt;16-32 KB&lt;/strong&gt; will speed up &lt;code&gt;SELECT * FROM large_table&lt;/code&gt;.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;3. How can I check my current performance?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Diagnostic queries:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Average row size in the table
SELECT avg(pg_column_size(t.*)) FROM my_table t;

-- How many blocks are "empty" (low occupancy)
SELECT relname, 100 - (pg_relation_size(oid) / (8192 * relpages) * 100) AS "empty_space_%"
FROM pg_class WHERE relkind = 'r';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If &lt;code&gt;empty_space_%&lt;/code&gt; is &amp;gt; 30%, it may be worth reducing the block size.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;4. How can I change the block size?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Only when compiling PostgreSQL&lt;/strong&gt; (setting &lt;code&gt;--with-blocksize=16&lt;/code&gt; in &lt;code&gt;./configure&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cannot be changed for an existing database&lt;/strong&gt; — only create a new cluster with a different size:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   initdb &lt;span class="nt"&gt;-D&lt;/span&gt; /path/to/new/data &lt;span class="nt"&gt;--block-size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;16384
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data transfer&lt;/strong&gt;: Use &lt;code&gt;pg_dump&lt;/code&gt;/&lt;code&gt;pg_restore&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;5. Optimal values for different scenarios&lt;/strong&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Load type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Recommended unit&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Arguments&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OLTP (short transactions)&lt;/td&gt;
&lt;td&gt;8 KB&lt;/td&gt;
&lt;td&gt;Minimizing overhead&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Analytics (large scans)&lt;/td&gt;
&lt;td&gt;16-32 KB&lt;/td&gt;
&lt;td&gt;Reducing the number of I/O operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mixed load&lt;/td&gt;
&lt;td&gt;8 KB&lt;/td&gt;
&lt;td&gt;Read/Write balance&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;6. What else affects I/O efficiency?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;TOAST&lt;/strong&gt;: PostgreSQL automatically compresses/separates large fields (&amp;gt; 2 KB).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fillfactor&lt;/strong&gt;: Setting the page occupancy (for example, &lt;code&gt;90&lt;/code&gt; for frequently updated tables).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File system&lt;/strong&gt;: &lt;code&gt;XFS&lt;/code&gt; or &lt;code&gt;ZFS&lt;/code&gt; with the setting &lt;code&gt;bs=16k&lt;/code&gt; to match PostgreSQL.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;8 KB&lt;/strong&gt; is a safe choice for most cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;16+ KB&lt;/strong&gt; — only for analytics, and only after testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Before changing&lt;/strong&gt;: Measure the current efficiency via &lt;code&gt;pg_stat_io&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Tip&lt;/strong&gt;: For cloud databases (RDS, Cloud SQL), the block size is fixed — choose instances with preset values for your load.
&lt;/h2&gt;


&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The level of depth of performance analysis: when to stop?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Productivity is a balance between &lt;strong&gt;analysis costs&lt;/strong&gt; and &lt;strong&gt;potential benefits&lt;/strong&gt;. Let's look at how to determine the optimal level of effort for your project.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. "Adequate level" of analysis: from startups to HFT&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Organizations have different approaches to performance analysis depending on &lt;strong&gt;ROI&lt;/strong&gt; (return on investment):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Company type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;The level of analysis&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Startups&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Surface monitoring (CloudWatch, Sentry)&lt;/td&gt;
&lt;td&gt;Checking API latency and alerts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Corporations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Deep analysis (perf, eBPF, CPU PMC)&lt;/td&gt;
&lt;td&gt;Linux kernel optimization, simulation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HFT / Exchanges&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Extreme optimization (nanoseconds)&lt;/td&gt;
&lt;td&gt;Laying cables for $300 million to save 6 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Key principle&lt;/strong&gt;: The "adequate level" is when &lt;strong&gt;the cost of the analysis is less than the potential benefit&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. When should I stop the analysis?&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Scenarios for stopping:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;The main cause is explained&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Example: A Java application is burning CPU. We found exceptions that explain &lt;strong&gt;12%&lt;/strong&gt; of the load, but since the slowdown is &lt;strong&gt;3x&lt;/strong&gt;, we keep looking.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rule&lt;/strong&gt;: We stop once we have explained &lt;strong&gt;&amp;gt;66%&lt;/strong&gt; of the problem.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ROI is less than the cost of analysis&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optimization of a microservice that saves &lt;strong&gt;$100/year&lt;/strong&gt; is not worth 10 hours of an engineer's work.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exception&lt;/strong&gt;: If the problem is a "canary" for future disasters (for example, memory leak).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;There are more important tasks&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Even if the problem is not completely clear, there are &lt;strong&gt;critical bugs&lt;/strong&gt; elsewhere.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Recommendations "at a given time"&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Performance settings &lt;strong&gt;become obsolete&lt;/strong&gt; after:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hardware upgrades (for example, switching from 10 Gbit/s to 100 Gbit/s).
&lt;/li&gt;
&lt;li&gt;Software updates (the new version of PostgreSQL may change the optimal &lt;code&gt;shared_buffers&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;Growth in the number of users.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How to avoid mistakes:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Do not copy tunings from the Internet&lt;/strong&gt; blindly. Example:

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Old recommendation for Linux 2.6:
net.ipv4.tcp_tw_reuse = 1  # May be harmful in modern kernels!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version the settings&lt;/strong&gt; (Git + annotations like: "Increased &lt;code&gt;vm.swappiness=10&lt;/code&gt; due to swap load in 2023-01").
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Practical rules&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;For startups&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Minimum&lt;/strong&gt;: Latency/error monitoring (New Relic, Datadog).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimum&lt;/strong&gt;: Once a quarter — deep check of &lt;strong&gt;slow queries&lt;/strong&gt; and &lt;strong&gt;caching&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;For corporations&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Permanent&lt;/strong&gt; team of performance engineers.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tools&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;perf&lt;/code&gt;/&lt;code&gt;eBPF&lt;/code&gt; — for kernel-level analysis.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load forecasting&lt;/strong&gt; (for example, via ML).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;For HFT/Exchanges&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hard SLA&lt;/strong&gt;: Every microsecond is money.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Investments&lt;/strong&gt;: Custom network stacks, FPGA instead of CPU.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;5. Checklist: when to stop?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Have you found the cause of &amp;gt;50% of the problem? → &lt;strong&gt;Stop and fix it&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Do the remaining hypotheses take &lt;strong&gt;a disproportionate amount of time&lt;/strong&gt;? → Postpone.
&lt;/li&gt;
&lt;li&gt;Are there &lt;strong&gt;more important&lt;/strong&gt; tasks with higher ROI? → Switch to them.
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Metric&lt;/strong&gt;: If the analysis costs &lt;strong&gt;more than the potential savings&lt;/strong&gt;, you have overdone it.  &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The depth of the analysis&lt;/strong&gt; depends on &lt;strong&gt;budget and risks&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tunings go stale&lt;/strong&gt; — document the changes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;80/20 rule&lt;/strong&gt;: 20% of efforts give 80% of the result.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Case study&lt;/strong&gt;:&lt;br&gt;
After upgrading the AWS instance from &lt;code&gt;m5.large&lt;/code&gt; to &lt;code&gt;m6i.xlarge&lt;/code&gt;, the old &lt;code&gt;kernel.sched_migration_cost_ns&lt;/code&gt; settings began to &lt;strong&gt;slow down&lt;/strong&gt; the application. Solution: reset the parameters to defaults + measure under load.  &lt;/p&gt;

</description>
      <category>programming</category>
      <category>security</category>
      <category>architecture</category>
      <category>design</category>
    </item>
    <item>
      <title>2.2 - 2.3.2 Models/Concepts (System Performance Brendan Gregg 2nd)</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Fri, 11 Jul 2025 11:42:49 +0000</pubDate>
      <link>https://dev.to/dima853/22-232-modelsconcepts-system-performance-brendan-gregg-2nd-131d</link>
      <guid>https://dev.to/dima853/22-232-modelsconcepts-system-performance-brendan-gregg-2nd-131d</guid>
      <description>&lt;h1&gt;
  
  
  2.2 Models
&lt;/h1&gt;

&lt;p&gt;2.2.1 System Under Test&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fec4gjup18c6xst6z8afq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fec4gjup18c6xst6z8afq.png" alt=" " width="688" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It is important to consider the impact of perturbations (interventions) on test results! 🔍&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Perturbations are unexpected or background interventions that can distort the results of system performance measurements. They may occur due to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;planned system activity&lt;/strong&gt; (for example, automatic updates, backups),&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;actions of other users&lt;/strong&gt; (if the system is multiuser),
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;competing workloads&lt;/strong&gt; (other applications or virtual machines in the cloud).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🌩 &lt;strong&gt;Special difficulties in cloud environments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In cloud infrastructures (AWS, Azure, GCP, etc.), a physical host can serve multiple virtual machines (VMs). The activity of neighboring VMs ("noisy neighbors") can affect your system, but remain &lt;strong&gt;invisible&lt;/strong&gt; from inside your test environment (SUT, System Under Test). This is called &lt;strong&gt;the "noisy neighbor problem"&lt;/strong&gt; — and it can seriously distort benchmark results!  &lt;/p&gt;

&lt;h3&gt;
  
  
  🧩 &lt;strong&gt;The complexity of modern distributed systems&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Modern environments often consist of many components:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Load balancers&lt;/strong&gt; (Nginx, HAProxy)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proxies and caching servers&lt;/strong&gt; (Varnish, Redis)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web and app servers&lt;/strong&gt; (Apache, Node.js)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Databases&lt;/strong&gt; (PostgreSQL, MongoDB)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Storage&lt;/strong&gt; (AWS S3, Ceph)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these elements can introduce its own perturbations! For example, a cache can mask the real load on the database, and network delays can affect the response time.  &lt;/p&gt;

&lt;h3&gt;
  
  
  🔎 &lt;strong&gt;How to minimize the impact of perturbations?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Careful mapping of the environment&lt;/strong&gt; — draw up a diagram of all system components.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolation of the test environment&lt;/strong&gt; — use dedicated resources whenever possible.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring of background processes&lt;/strong&gt; (for example, via &lt;code&gt;top&lt;/code&gt;, &lt;code&gt;htop&lt;/code&gt;, Prometheus).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statistical analysis&lt;/strong&gt; — multiple test runs and averaging of results.
&lt;/li&gt;
&lt;/ol&gt;
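
&lt;p&gt;Point 4 above can be sketched in a few lines. A minimal example (the &lt;code&gt;benchmark&lt;/code&gt; workload is a hypothetical stand-in for your real test) that runs the measurement several times and reports the spread:&lt;/p&gt;

```python
import statistics
import time

def benchmark() -> float:
    """Hypothetical workload: time a fixed amount of arithmetic."""
    start = time.perf_counter()
    sum(i * i for i in range(100_000))
    return time.perf_counter() - start

# Multiple runs smooth out one-off perturbations (updates, noisy neighbors).
runs = [benchmark() for _ in range(10)]
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
print(f"mean={mean * 1e3:.2f} ms, stdev={stdev * 1e3:.2f} ms")

# A stdev that is large relative to the mean hints that perturbations
# distorted the test and the runs are not comparable.
noisy = stdev > 0.25 * mean
```

&lt;p&gt;Reporting the standard deviation alongside the mean makes hidden interference visible instead of silently averaging it away.&lt;/p&gt;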

&lt;p&gt;&lt;strong&gt;🚀 Conclusion:&lt;/strong&gt; Perturbations are the hidden enemy of accurate measurements. Always analyze the environment, isolate the interference and check the results several times!&lt;/p&gt;

&lt;h1&gt;
  
  
  2.2.2 Queueing System
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Deep analysis: Modeling queues, Delays, and System Performance&lt;/strong&gt;  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;📊 Modeling components as queue systems&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Many elements of the IT infrastructure (disks, CPUs, network interfaces) can be modeled as &lt;strong&gt;queueing systems&lt;/strong&gt;. This makes it possible to predict their behavior under load!  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjyhpspwp2gy7qs4sh0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjyhpspwp2gy7qs4sh0l.png" alt=" " width="617" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔹 &lt;strong&gt;Example with disks&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As the load increases, the disk response time increases &lt;strong&gt;non-linearly&lt;/strong&gt; due to the queue of requests.
&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;M/M/1&lt;/em&gt; model (Poisson arrivals, exponential service times, and a single server) is often used to predict delays.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Practical application&lt;/em&gt;: If your disk can serve 100 IOPS but 150 requests per second are arriving, the queue will grow and latency will skyrocket! ☄️ &lt;/p&gt;
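
&lt;p&gt;The rule of thumb behind this example is one line of arithmetic: the queue is stable only while the arrival rate stays below the service rate. A sketch using the 100/150 numbers from above:&lt;/p&gt;

```python
def is_stable(arrival_rate: float, service_rate: float) -> bool:
    """An M/M/1-style queue is stable only if utilization rho = lambda/mu < 1."""
    return arrival_rate / service_rate < 1

# 150 requests/s arriving at a disk that serves 100 IOPS: rho = 1.5, queue grows.
print(is_stable(150, 100))  # False — latency will skyrocket
print(is_stable(80, 100))   # True — the disk keeps up
```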

&lt;h3&gt;
  
  
  What is M/M/1? (In simple words)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/M/M/1_queue" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/M/M/1_queue&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;M:&lt;/strong&gt; Customers arrive &lt;em&gt;randomly&lt;/em&gt;.  It's like people walking into a store: you can't predict in advance when the next customer will arrive.  Arrivals form a Poisson process, which describes random events occurring over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/M/:&lt;/strong&gt; Each customer's service takes a random amount of time.  It is as if the cashier serves each customer for a different amount of time (depending on the number of goods, etc.). Service times follow an exponential distribution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/1:&lt;/strong&gt; There is &lt;em&gt;one&lt;/em&gt; server (or cashier).  Only one customer can be served at a time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;As a result:&lt;/strong&gt; M/M/1 is a simple queue model where customers arrive randomly, are serviced at random times, and there is only one service machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-life analogies
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;There is a queue in the store at one cashier:&lt;/strong&gt; Customers come to the store at different times, everyone has their own basket of goods, and the cashier serves everyone at different speeds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Calls to a call center with one operator:&lt;/strong&gt; Calls arrive randomly, each call takes a different time (depending on the problem), and there is only one operator to take the call.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Request processing on a single-processor web server:&lt;/strong&gt; Users send requests to the server randomly, each request takes different time to process, and the server has only one processor.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key parameters of the M/M/1 model
&lt;/h3&gt;

&lt;p&gt;To understand how the M/M/1 model works, we need the following parameters:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;λ (lambda) is the intensity of the arrival flow:&lt;/strong&gt; How many customers arrive on average per unit of time. For example, 10 clients per hour.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;µ (mu) is the intensity of the service flow:&lt;/strong&gt; How many clients the server can handle on average per unit of time. For example, 15 clients per hour.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; If λ ≥ μ, the queue will grow indefinitely! The server will not be able to handle the flow of clients.  For the queue to be stable, &lt;code&gt;ρ = λ / μ&lt;/code&gt; (the load factor) must be less than 1. &lt;code&gt;ρ&lt;/code&gt; is the fraction of time the server is busy serving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What does ρ = λ / μ &amp;lt; 1 (the load factor) mean?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ρ (rho) is the load factor. It shows the fraction of time the server is busy working and is calculated as &lt;code&gt;λ / μ&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If &lt;code&gt;ρ &amp;lt; 1&lt;/code&gt;, the server serves clients faster than they arrive. In our example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;λ = 10 clients per hour&lt;/li&gt;
&lt;li&gt;μ = 15 clients per hour&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;ρ = 10 / 15 ≈ 0.67&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here ρ is less than 1 (0.67 &amp;lt; 1). This means the cashier is busy only 67% of the time and has time to rest between clients. The queue is stable because the cashier keeps up with the customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key performance indicators
&lt;/h3&gt;

&lt;p&gt;The M/M/1 model allows us to estimate the following indicators:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ρ (load factor):&lt;/strong&gt; What fraction of the time is the server busy?  &lt;code&gt;ρ = λ / μ&lt;/code&gt;. If &lt;code&gt;ρ = 0.8&lt;/code&gt;, the server is busy 80% of the time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;P₀ (probability of an empty system):&lt;/strong&gt; What is the probability that there are no clients in the system?  &lt;code&gt;P₀ = 1 - ρ&lt;/code&gt;.  If &lt;code&gt;ρ = 0.8&lt;/code&gt;, then &lt;code&gt;P₀ = 0.2&lt;/code&gt;, that is, the server is idle 20% of the time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L (average number of clients in the system):&lt;/strong&gt; How many clients are in the system on average (queued + being served)? &lt;code&gt;L = λ / (μ - λ)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lq (average number of clients in the queue):&lt;/strong&gt; How many clients are waiting in the queue on average? &lt;code&gt;Lq = ρ · L = λ² / (μ(μ - λ))&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;W (average time a client spends in the system):&lt;/strong&gt; How much time does a client spend in the system (queueing + service)? &lt;code&gt;W = 1 / (μ - λ)&lt;/code&gt; (equivalently, W = L / λ).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wq (average time a client waits in the queue):&lt;/strong&gt; How much time does a client spend waiting in the queue? &lt;code&gt;Wq = ρ / (μ - λ)&lt;/code&gt; (equivalently, Wq = Lq / λ).&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Formulas for calculation (if you are interested)
&lt;/h3&gt;

&lt;p&gt;Here are some formulas for calculating these indicators.  But don't be scared, the main thing is to understand the concept.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;ρ = λ / μ&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;P₀ = 1 - ρ&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;L = λ / (μ - λ)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Lq = (λ²)/(μ(μ - λ))&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;W = 1 / (μ - λ)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;Wq = λ / (μ(μ - λ))&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Calculation example (simple)
&lt;/h3&gt;

&lt;p&gt;Let's say 10 customers come to the store per hour (&lt;code&gt;λ = 10&lt;/code&gt;) and the cashier serves 15 customers per hour (&lt;code&gt;μ = 15&lt;/code&gt;).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;ρ = 10 / 15 = 0.67&lt;/strong&gt;. The server is loaded at 67%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;P₀ = 1 - 0.67 = 0.33&lt;/strong&gt;. The probability that the cashier is free is 33%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L = 10 / (15 - 10) = 2&lt;/strong&gt;. On average, there are 2 customers in the store (in line or at the checkout).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Lq = (10 * 10) / (15 * (15 - 10)) = 1.33&lt;/strong&gt;. On average, there are 1.33 customers in the queue.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;W = 1 / (15 - 10) = 0.2 hours = 12 minutes&lt;/strong&gt;.  The average customer spends 12 minutes in the store.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wq = 10 / (15 * (15 - 10)) = 0.133 hours = 8 minutes&lt;/strong&gt;. On average, a customer waits in line for 8 minutes.&lt;/li&gt;
&lt;/ol&gt;
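
&lt;p&gt;The worked example is easy to verify in code. A sketch of the standard M/M/1 formulas with λ = 10 and μ = 15, as in the text:&lt;/p&gt;

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Standard M/M/1 steady-state metrics; requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("unstable queue: lambda must be < mu")
    rho = lam / mu                           # load factor
    return {
        "rho": rho,
        "P0": 1 - rho,                       # probability the system is empty
        "L": lam / (mu - lam),               # avg clients in the system
        "Lq": lam**2 / (mu * (mu - lam)),    # avg clients in the queue
        "W": 1 / (mu - lam),                 # avg time in the system (hours)
        "Wq": lam / (mu * (mu - lam)),       # avg time in the queue (hours)
    }

m = mm1_metrics(10, 15)
print(m)  # rho ≈ 0.67, L = 2.0, W = 0.2 h (12 min), Wq ≈ 0.133 h (8 min)
```

&lt;p&gt;The same function immediately answers "what if" questions: raising μ from 15 to 20 cuts L and W in half.&lt;/p&gt;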

&lt;h3&gt;
  
  
  Advantages of the M/M/1 model
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Simplicity:&lt;/strong&gt; Easy to understand and use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Applicability:&lt;/strong&gt; Describes many real systems.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Usefulness:&lt;/strong&gt; Helps to evaluate system performance, design new systems, and optimize existing ones.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Limitations of the M/M/1 model
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Random nature:&lt;/strong&gt; Assumes random customer arrivals and random service times.  In reality, this is not always the case.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One server:&lt;/strong&gt; Not suitable for systems with multiple servers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Does not take into account priorities:&lt;/strong&gt; Does not take into account the priorities of customers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The M/M/1 model is a powerful tool for understanding and analyzing queues. Knowing its basics, you can evaluate how to improve customer service, reduce waiting times, and improve the efficiency of various systems.  I hope it's clearer to you now!  If you have any questions, don't hesitate to ask! ☕&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;⏱️ Latency is the Queen of Metrics&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Latency is the &lt;strong&gt;waiting time&lt;/strong&gt; before an operation completes. It takes different forms in different contexts:  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;🌐 Network examples&lt;/strong&gt;:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;DNS Latency&lt;/strong&gt;: Domain name resolution time (for example, &lt;code&gt;google.com&lt;/code&gt; → IP address).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TCP Connection Latency&lt;/strong&gt;: Connection setup delay (3-way handshake).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP Request Latency&lt;/strong&gt;: The time from sending the request to receiving the first byte of the response (TTFB).
&lt;/li&gt;
&lt;/ol&gt;
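
&lt;p&gt;All three latencies are measured the same way: timestamp before, timestamp after. A minimal sketch (the timed operation here is a &lt;code&gt;sleep&lt;/code&gt; stand-in for a DNS lookup, TCP connect, or HTTP request):&lt;/p&gt;

```python
import time

def measure_latency_ms(operation) -> float:
    """Time a single operation and return its latency in milliseconds."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1e3

# Example: a stand-in for DNS resolution / TCP connect / HTTP TTFB.
latency = measure_latency_ms(lambda: time.sleep(0.01))  # ~10 ms operation
print(f"{latency:.1f} ms")
```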

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcct6bt6ai0s78alb1mb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcct6bt6ai0s78alb1mb6.png" alt=" " width="692" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;🔢 Why is latency more important than IOPS?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IOPS (Input/Output Operations Per Second)&lt;/strong&gt; shows only the number of operations, not how long each one &lt;em&gt;takes&lt;/em&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency&lt;/strong&gt; is measured in units of time (ms, µs), which allows you to:
&lt;ul&gt;
&lt;li&gt;Compare heterogeneous systems (network vs. disk).
&lt;/li&gt;
&lt;li&gt;Accurately assess the impact of optimizations.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Example&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100 network I/O with a latency of 100 ms each = &lt;strong&gt;10,000 ms&lt;/strong&gt; in total.
&lt;/li&gt;
&lt;li&gt;50 disk I/O with a latency of 50 ms each = &lt;strong&gt;2,500 ms&lt;/strong&gt; in total.
Conclusion: the disk workload finishes 4 times faster!
&lt;/li&gt;
&lt;/ul&gt;
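
&lt;p&gt;Converting both workloads to total time, as in the bullets above, makes them directly comparable (a sketch using the same numbers):&lt;/p&gt;

```python
def total_time_ms(ops: int, latency_ms: float) -> float:
    """Total serial time for `ops` operations at a given per-operation latency."""
    return ops * latency_ms

network = total_time_ms(100, 100)  # 10,000 ms
disk = total_time_ms(50, 50)       # 2,500 ms
print(network / disk)              # 4.0 — the disk workload finishes 4x faster
```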




&lt;h3&gt;
  
  
  &lt;strong&gt;📏 Table of time units&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;It is important to use the correct units for accurate measurements:  &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Unit&lt;/th&gt;
&lt;th&gt;Abbreviation&lt;/th&gt;
&lt;th&gt;Fraction of a second&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Millisecond&lt;/td&gt;
&lt;td&gt;ms&lt;/td&gt;
&lt;td&gt;0.001 (10⁻³)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsecond&lt;/td&gt;
&lt;td&gt;µs&lt;/td&gt;
&lt;td&gt;0.000001 (10⁻⁶)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nanosecond&lt;/td&gt;
&lt;td&gt;ns&lt;/td&gt;
&lt;td&gt;0.000000001 (10⁻⁹)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Picosecond&lt;/td&gt;
&lt;td&gt;ps&lt;/td&gt;
&lt;td&gt;0.000000000001 (10⁻¹²)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
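
&lt;p&gt;These fractions are plain powers of ten, which is convenient to encode once and reuse (a small sketch):&lt;/p&gt;

```python
# Fractions of a second per unit, matching the table above.
UNITS = {"ms": 1e-3, "us": 1e-6, "ns": 1e-9, "ps": 1e-12}

def to_seconds(value: float, unit: str) -> float:
    """Convert a latency expressed in ms/us/ns/ps to seconds."""
    return value * UNITS[unit]

print(to_seconds(5, "ms"))  # 0.005
print(to_seconds(2, "us"))  # 2e-06
```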

&lt;p&gt;🔬 &lt;strong&gt;Interesting fact&lt;/strong&gt;: A delay of 1 ns is the time it takes for light to travel only ~30 cm!  &lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;🎯 Key findings&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Queue models&lt;/strong&gt; help predict performance degradation (for example, disks under load).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency&lt;/strong&gt; is a universal metric for comparing systems. Always specify the context: &lt;em&gt;"TCP latency"&lt;/em&gt;, &lt;em&gt;"disk seek latency"&lt;/em&gt;, etc.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Convert metrics to time&lt;/strong&gt; to make informed decisions (for example, choosing between network and disk I/O).
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Remember:&lt;/strong&gt; &lt;em&gt;"Performance is the art of measurement, not guesswork!"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The shocking truth about delays: from nanoseconds to millennia!&lt;/em&gt;  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;⚡ How to understand time scales in computers?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Imagine that &lt;strong&gt;1 CPU cycle (0.3 ns)&lt;/strong&gt; is &lt;strong&gt;1 second&lt;/strong&gt; in our "scaled" world. Then:  &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;Real delay&lt;/th&gt;
&lt;th&gt;On the scale of "1 cycle = 1 sec"&lt;/th&gt;
&lt;th&gt;💡 Real-life analogy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Access to the L1 cache&lt;/td&gt;
&lt;td&gt;0.9 ns&lt;/td&gt;
&lt;td&gt;3 sec&lt;/td&gt;
&lt;td&gt;Blink an eye&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Read from RAM&lt;/td&gt;
&lt;td&gt;100 ns&lt;/td&gt;
&lt;td&gt;6 minutes&lt;/td&gt;
&lt;td&gt;Make coffee&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSD disk (read)&lt;/td&gt;
&lt;td&gt;50 microseconds&lt;/td&gt;
&lt;td&gt;45 hours&lt;/td&gt;
&lt;td&gt;Two days of vacation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HDD (data search)&lt;/td&gt;
&lt;td&gt;5 ms&lt;/td&gt;
&lt;td&gt;6 months&lt;/td&gt;
&lt;td&gt;Six months of pregnancy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ping from SF to New York&lt;/td&gt;
&lt;td&gt;40 ms&lt;/td&gt;
&lt;td&gt;4 years&lt;/td&gt;
&lt;td&gt;Full-time university studies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Physical server restart&lt;/td&gt;
&lt;td&gt;5 minutes&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;32 000 years&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The last Ice Age&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
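
&lt;p&gt;The scaling in the table is a single ratio: one CPU cycle (0.3 ns) mapped to one second. A sketch that reproduces a few rows:&lt;/p&gt;

```python
CYCLE_S = 0.3e-9       # one CPU cycle, in seconds
SCALE = 1.0 / CYCLE_S  # real seconds -> "scaled" seconds

def scaled(real_seconds: float) -> float:
    """Map a real latency onto the '1 cycle = 1 second' scale."""
    return real_seconds * SCALE

print(scaled(100e-9) / 60)           # RAM read: ~5.6 scaled minutes
print(scaled(5e-3) / 86_400)         # HDD seek: ~193 scaled days (~6 months)
print(scaled(300) / (86_400 * 365))  # 5-min reboot: ~31,700 scaled years
```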

&lt;p&gt;&lt;strong&gt;Shock content&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the time it takes light to travel from the book to your eyes (&lt;strong&gt;~1.7 ns&lt;/strong&gt;), the processor manages to execute &lt;strong&gt;5+ instructions&lt;/strong&gt;!
&lt;/li&gt;
&lt;li&gt;Waiting for a response from an HDD (&lt;strong&gt;10 ms&lt;/strong&gt;) on the CPU scale is like &lt;strong&gt;waiting 12 months for a package&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;🌍 Why is this important?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Code optimization&lt;/strong&gt;: If your algorithm makes one more trip to RAM instead of the cache, it's like waiting &lt;strong&gt;6 minutes&lt;/strong&gt; instead of 3 seconds!
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture choice&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;SSD vs HDD: the difference between &lt;strong&gt;hours&lt;/strong&gt; and &lt;strong&gt;months&lt;/strong&gt; of waiting.
&lt;/li&gt;
&lt;li&gt;Server location: the latency between the USA and Australia (&lt;strong&gt;183 ms&lt;/strong&gt;) is comparable to &lt;strong&gt;19 years&lt;/strong&gt; of CPU time!
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;🔧 Practical advice&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cache aggressively&lt;/strong&gt;: the L1 cache is &lt;strong&gt;100+ times&lt;/strong&gt; faster than RAM.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoid disk I/O in critical code&lt;/strong&gt;: one HDD request = &lt;strong&gt;months&lt;/strong&gt; of CPU downtime on the scale above.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider geography&lt;/strong&gt;: place servers closer to users — the difference between SF and the UK (&lt;strong&gt;81 ms&lt;/strong&gt;) is like &lt;strong&gt;8 years&lt;/strong&gt; of waiting!
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📚 &lt;em&gt;Source&lt;/em&gt;: Adapted from &lt;em&gt;"Systems Performance: Enterprise and the Cloud"&lt;/em&gt; (Brendan Gregg), chapters 6 (CPU), 9 (Disks), 10 (Network).  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🎯 Philosophical summary&lt;/strong&gt;:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Computers live in another time dimension. Your task is not to keep them waiting!"&lt;/em&gt; ⏳💻&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>security</category>
      <category>architecture</category>
      <category>design</category>
      <category>programming</category>
    </item>
    <item>
      <title>MAMR vs HAMR: The Battle for the Future of hard Drives</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Thu, 19 Jun 2025 17:54:17 +0000</pubDate>
      <link>https://dev.to/dima853/mamr-vs-hamr-the-battle-for-the-future-of-hard-drives-4abd</link>
      <guid>https://dev.to/dima853/mamr-vs-hamr-the-battle-for-the-future-of-hard-drives-4abd</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Taking apart MAMR: Why hasn't this technology taken over the HDD world yet?&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. What is MAMR?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;MAMR (Microwave-Assisted Magnetic Recording)&lt;/strong&gt; is a technology for recording data on hard drives (HDDs), where &lt;strong&gt;microwaves&lt;/strong&gt; help to remagnetize tiny bits, allowing you to increase storage density without loss of reliability.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does it work?&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;Spin Torque Oscillator (STO)&lt;/strong&gt;, a mini microwave generator, is integrated into the recording head.
&lt;/li&gt;
&lt;li&gt;Before recording, the STO emits a high-frequency field (~20-40 GHz), which &lt;strong&gt;"rocks" the magnetic moments&lt;/strong&gt; of the bits, temporarily reducing their stability.
&lt;/li&gt;
&lt;li&gt;Now even the weak field of the head is enough for remagnetization.
&lt;/li&gt;
&lt;li&gt;After recording, the microwaves turn off — the bits become stable again.
&lt;/li&gt;
&lt;/ul&gt;
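
&lt;p&gt;The ~20-40 GHz figure can be sanity-checked with the simplest ferromagnetic-resonance estimate, f ≈ (γ/2π)·B, where γ/2π ≈ 28 GHz/T for electron spins (a back-of-the-envelope sketch; real recording media add anisotropy terms this ignores):&lt;/p&gt;

```python
GAMMA_OVER_2PI_GHZ_PER_T = 28.0  # electron gyromagnetic ratio, approximate

def resonance_ghz(b_tesla: float) -> float:
    """Precession frequency for an effective field B (simplified, no anisotropy)."""
    return GAMMA_OVER_2PI_GHZ_PER_T * b_tesla

# Effective fields of roughly 0.7-1.4 T land in the 20-40 GHz band quoted above.
print(resonance_ghz(1.0))  # 28.0 GHz
```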

&lt;p&gt;&lt;strong&gt;Analogy:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Imagine that a bit is a door with stiff hinges. Without MAMR, a weak push cannot open it. With MAMR, the door is first "rocked" (by the microwaves) and then opens easily.  &lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;2. Why is MAMR a breakthrough?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Recording density ↑&lt;/strong&gt;: The bits can be made smaller without the risk that heat will "demagnetize" them.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reliability&lt;/strong&gt;: There is no extreme heating (as in HAMR), so the disk lasts longer.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Energy efficiency&lt;/strong&gt;: No need for a giant magnetic field.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;3. Why isn't MAMR dominating yet?&lt;/strong&gt;
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;🔹 Reason 1: HAMR turned out to be faster&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The competing HAMR&lt;/strong&gt; (Heat-Assisted Magnetic Recording) technology uses &lt;strong&gt;laser heating&lt;/strong&gt; for recording.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seagate&lt;/strong&gt; has shipped the highest-capacity hard drive in history - &lt;a href="https://www.techradar.com/pro/seagate-confirms-40tb-hard-drives-have-already-been-shipped-but-dont-expect-them-to-go-on-sale-till-2026" rel="noopener noreferrer"&gt;https://www.techradar.com/pro/seagate-confirms-40tb-hard-drives-have-already-been-shipped-but-dont-expect-them-to-go-on-sale-till-2026&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;🔹 Reason 2: The complexity of STO production&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The Spin Torque Oscillator (STO) is a nanodevice that must:
&lt;ul&gt;
&lt;li&gt;Generate &lt;strong&gt;an accurate frequency&lt;/strong&gt; (otherwise the recording will not work).
&lt;/li&gt;
&lt;li&gt;Be &lt;strong&gt;small enough&lt;/strong&gt; (so as not to interfere with the operation of the head).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;So far, Toshiba is the only company that has managed to establish mass production.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;🔹 Reason 3: Industry conservatism&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;HDD manufacturers have been investing in PMR/CMR (traditional technologies) for decades.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The transition to MAMR requires the restructuring of production lines — it is expensive and risky.
&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  It is quite possible that neither technology will be the clear winner. "I have not yet been able to talk to any specific customer who will exclusively use HDDs with only one of the technologies," says Mark Ginen, founder of the Advanced Storage Technologies consortium. "I think there will be companies in the market that will buy both."
&lt;/h2&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4. Does MAMR have a future?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Yes&lt;/strong&gt;, but in niche scenarios:
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Where reliability is important&lt;/strong&gt;: archives, medical data, backups.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Where record-breaking volumes are not needed&lt;/strong&gt;: enterprise systems balancing price and capacity.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Toshiba plans&lt;/strong&gt; to increase MAMR capacity to &lt;strong&gt;30+ TB&lt;/strong&gt; by 2026.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;5. Comparison of MAMR and HAMR&lt;/strong&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;MAMR&lt;/strong&gt; (Toshiba)&lt;/th&gt;
&lt;th&gt;
&lt;strong&gt;HAMR&lt;/strong&gt; (Seagate/WD)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Technology&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Microwaves (STO)&lt;/td&gt;
&lt;td&gt;Laser + Magnetic field&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max. capacity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;22TB (2024)&lt;/td&gt;
&lt;td&gt;24+ TB (2024)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Risks&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Complexity of STO&lt;/td&gt;
&lt;td&gt;Overheating, disk degradation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Price&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cheaper than HAMR&lt;/td&gt;
&lt;td&gt;More expensive because of lasers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;6. Conclusion&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;MAMR is &lt;strong&gt;an elegant solution&lt;/strong&gt;, but for now:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It lags behind HAMR in the "terabyte race".
&lt;/li&gt;
&lt;li&gt;It depends on Toshiba's progress in miniaturizing the STO.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chance of success&lt;/strong&gt;: If HAMR runs into reliability issues, MAMR will be the HDD's savior. &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  How It Works? Let's dig a little deeper.
&lt;/h1&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Simplified explanation of MAMR operation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Imagine that a &lt;strong&gt;hard disk&lt;/strong&gt; is a notebook where data is written with &lt;strong&gt;tiny magnetic arrows&lt;/strong&gt; (↑ and ↓). The smaller the arrows, the more information fits, but there is a problem:  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Problem&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If the arrows are made &lt;strong&gt;very small&lt;/strong&gt;, it is difficult to turn them over. You need a &lt;strong&gt;strong magnet&lt;/strong&gt; (like a powerful refrigerator magnet), but the HDD head is a weak button magnet.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Solution: MAMR (microwave "help")&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A &lt;strong&gt;"microwave for atoms"&lt;/strong&gt;, the &lt;strong&gt;Spin Torque Oscillator (STO)&lt;/strong&gt;, has been added to the recording head. Here's how it works:  &lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;1. STO is a "sandwich" of two magnetic layers&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The first layer&lt;/strong&gt;: As a "teacher" who &lt;strong&gt;arranges the electrons&lt;/strong&gt; (makes their spins the same).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The second layer&lt;/strong&gt;: As a "rebellious student" — his electrons &lt;strong&gt;are set up in the opposite way&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When current is passed through them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Electrons from the first layer &lt;strong&gt;collide&lt;/strong&gt; with the second.
&lt;/li&gt;
&lt;li&gt;The "stubborn" electrons of the second layer ** begin to oscillate** (like a swing if they are slightly pushed to the beat).
&lt;/li&gt;
&lt;li&gt;These vibrations &lt;strong&gt;generate microwaves&lt;/strong&gt; (like Wi-Fi, but very spot-on).
&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2. How do microwaves help record data?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The microwaves &lt;strong&gt;are tuned to the resonant frequency&lt;/strong&gt; of the magnetic grains of the disk (like a tuning fork that makes only one glass vibrate).
&lt;/li&gt;
&lt;li&gt;They &lt;strong&gt;rock the magnetic arrows&lt;/strong&gt; on the disk, making them &lt;strong&gt;"malleable"&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Now even the &lt;strong&gt;weak field of the head&lt;/strong&gt; is enough to flip the arrow (write 0 or 1).
&lt;/li&gt;
&lt;li&gt;After recording, the microwaves turn off — the arrows &lt;strong&gt;"freeze"&lt;/strong&gt; again and store the data.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;3. Why does it feel like "heating up", but without the temperature?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ordinary heating&lt;/strong&gt; (as in HAMR) is like setting fire to paper to write on it. It's dangerous!
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MAMR&lt;/strong&gt; is like &lt;strong&gt;shaking a piece of paper&lt;/strong&gt; to make the ink fit more easily. Without fire!
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;And now — a super simple analogy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Imagine that:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data&lt;/strong&gt; is nails driven into a board.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ordinary recording&lt;/strong&gt; — you try to drive a nail into an oak plank &lt;strong&gt;with a small hammer&lt;/strong&gt;. It doesn't work!
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MAMR&lt;/strong&gt; — you first &lt;strong&gt;"vibrate" the board&lt;/strong&gt; (with microwaves), and now even a weak hammer blow drives a nail.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Why is it brilliant?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No destruction&lt;/strong&gt;: No overheating (as in HAMR), the disk lives longer.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt;: Microwaves work &lt;strong&gt;only where needed&lt;/strong&gt;, without touching neighboring data.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Energy Saving&lt;/strong&gt;: No need for giant magnets or lasers.
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
MAMR is a &lt;strong&gt;"smart vibration"&lt;/strong&gt; that lets HDDs break capacity records without violating the laws of physics. The technology is not perfect yet, but it has every chance of changing the future of hard drives!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda4jwlgt3wovkfnpnvyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda4jwlgt3wovkfnpnvyz.png" alt=" " width="521" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;HDD recording head device with MAMR&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The picture shows &lt;strong&gt;an enlarged section of the tip of the recording head&lt;/strong&gt; of a hard disk (HDD) using &lt;strong&gt;Microwave-Assisted Magnetic Recording (MAMR) technology&lt;/strong&gt;. Let's look at each element and its role in recording data.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. The main components in the image are&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;A. Write Head&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is the part of the head that &lt;strong&gt;remagnetizes the bits&lt;/strong&gt; on the disk, writing data (0 or 1).  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;B. The Reading Head&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Responsible for &lt;strong&gt;reading data&lt;/strong&gt; from the disk (determines the direction of the magnetization of the bits).  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;C. Microwaves&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Indicated by arrows is the &lt;strong&gt;high frequency magnetic field&lt;/strong&gt; generated by the &lt;strong&gt;Spin Torque Oscillator (STO)&lt;/strong&gt;.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Detailed structure of the recording head&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;① FGL (Field Generation Layer)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;This is the part of the &lt;strong&gt;STO (Spin Torque Oscillator)&lt;/strong&gt; that generates the &lt;strong&gt;microwave field&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Consists of &lt;strong&gt;magnetic material&lt;/strong&gt;, the vibrations of which create a high-frequency (~20-40 GHz) field.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;② SIL (Spin Injection Layer)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A layer that &lt;strong&gt;focuses microwave radiation&lt;/strong&gt; into the desired recording area.
&lt;/li&gt;
&lt;li&gt;Prevents the field from scattering so as not to affect neighboring bits.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;③ STO (Spin Torque Oscillator)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The "heart" of MAMR&lt;/strong&gt; is a microscopic microwave generator.
&lt;/li&gt;
&lt;li&gt;Consists of:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FGL&lt;/strong&gt; (generates the field),&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;separating layers&lt;/strong&gt; (enhance the effect).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Powered by a &lt;strong&gt;spin-polarized current&lt;/strong&gt; (electrons "spin up" the magnetic moments, creating oscillations).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;④ Multimagnetic Pole&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The main part of the recording head, which creates a permanent magnetic field for remagnetization of bits.
&lt;/li&gt;
&lt;li&gt;In combination with microwaves from STO, it allows you to record data on &lt;strong&gt;ultra-small bits&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;⑤ Non-magnetic layer&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Separates magnetic components, preventing &lt;strong&gt;unwanted interactions&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Usually from &lt;strong&gt;ruthenium (Ru)&lt;/strong&gt; or similar materials.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. How does it work together?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Recording begins&lt;/strong&gt;: current is applied through the STO → &lt;strong&gt;the FGL generates microwaves&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microwaves affect the bit&lt;/strong&gt;: they &lt;strong&gt;sway the magnetic moments&lt;/strong&gt; in the bit, reducing their stability.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The head remagnetizes the bit&lt;/strong&gt;: &lt;strong&gt;the multi-magnetic pole&lt;/strong&gt; creates a weak field, which is now sufficient to switch the bit (↑ or ↓).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recording is completed&lt;/strong&gt;: the microwaves turn off → the bit &lt;strong&gt;becomes stable again&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Why is this a breakthrough?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Previously&lt;/strong&gt;, writing to small bits required &lt;strong&gt;a huge field&lt;/strong&gt; (which cannot be created in a miniature head).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;With MAMR&lt;/strong&gt;, microwaves &lt;strong&gt;temporarily reduce the bits' resistance&lt;/strong&gt; to switching → recording is possible even with a weak field.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Analogy:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Imagine that a bit is a door with stiff hinges.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Without MAMR&lt;/strong&gt;: You're trying to open it with your bare hands (not strong enough).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;With MAMR&lt;/strong&gt;: First the door is "rocked" (microwaves), then you open it easily.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;5. Comparison with a regular head&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Regular Head&lt;/th&gt;
&lt;th&gt;Head with MAMR&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Recording&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Magnetic field only&lt;/td&gt;
&lt;td&gt;Magnetic field + microwaves&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Density&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Limited (~1 Tbit/in2)&lt;/td&gt;
&lt;td&gt;Higher (~2+ Tbit/in2)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reliability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Stable&lt;/td&gt;
&lt;td&gt;Stable + protection against superparamagnetism&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;6. The future of technology&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Toshiba&lt;/strong&gt; already uses MAMR in its &lt;strong&gt;MG09 (18 TB)&lt;/strong&gt; and &lt;strong&gt;MG10 (22 TB)&lt;/strong&gt; drives.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Goal&lt;/strong&gt;: to achieve &lt;strong&gt;4 Tbit/in2&lt;/strong&gt; (drives of &lt;strong&gt;30+ TB&lt;/strong&gt;).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Problems&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-tuning the STO (so that the microwaves do not interfere with neighboring bits).
&lt;/li&gt;
&lt;li&gt;Competition with &lt;strong&gt;HAMR&lt;/strong&gt; (which uses laser heating).
&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  Relationship between STO resistance and applied magnetic field
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47ywa7x26cb1ewfsa7pz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F47ywa7x26cb1ewfsa7pz.png" alt=" " width="540" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. The general structure of the STO&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The STO consists of three key layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;FGL (Field Generation Layer)&lt;/strong&gt;&lt;br&gt;
A "free" magnetic layer whose magnetization can change.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analogy: a piece of paper that can be flipped over.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Non-magnetic layer&lt;/strong&gt;&lt;br&gt;
A non-magnetic material (for example, copper).  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analogy: the air gap between two magnets.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SIL (Spin Injection Layer)&lt;/strong&gt;&lt;br&gt;
A fixed magnetic layer with constant magnetization.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analogy: a magnet glued to a table.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;2. How are microwaves generated?&lt;/strong&gt;
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Step 1: Apply current&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;A high-density electric current is passed through the STO layers (see the values in the diagram: from 8.7×10⁶ to 1.4×10⁸ A/cm²).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What happens&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The electrons in the current are "twisted" (polarized) when passing through the SIL.
&lt;/li&gt;
&lt;li&gt;These polarized electrons are "injected" into the FGL.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Step 2: Excitation of vibrations&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;In FGL, the electron spins begin to &lt;strong&gt;precess&lt;/strong&gt; (like a spinning top about to fall).
&lt;/li&gt;
&lt;li&gt;This creates an alternating magnetic field (microwaves) at a frequency of ~15-20 GHz.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Step 3: Resonance with the disc&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The microwaves &lt;strong&gt;synchronize&lt;/strong&gt; with the magnetic grains on the disk, reducing their coercivity.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;3. Key parameters from the scheme&lt;/strong&gt;
&lt;/h4&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Current density (A/cm2)&lt;/strong&gt;
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The higher the current density, the stronger the oscillations:

&lt;ul&gt;
&lt;li&gt;8.7×10⁶ A/cm² — the threshold at which generation starts.
&lt;/li&gt;
&lt;li&gt;1.4×10⁸ A/cm² — maximum efficiency.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;STO States&lt;/strong&gt;
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Oscillating state&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The FGL is actively oscillating → microwaves are generated.
&lt;/li&gt;
&lt;li&gt;The arrow (direction of magnetization) rotates rapidly.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Non-oscillating state&lt;/strong&gt; (rest):  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The FGL is static → there are no microwaves.
&lt;/li&gt;
&lt;li&gt;The arrow is fixed.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Why do we need an intermediate (non-magnetic) layer in STO?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The intermediate layer in a &lt;strong&gt;Spin Torque Oscillator (STO)&lt;/strong&gt; is a critical element without which the generation of microwaves would be impossible. Here is why it is needed and how it works:  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. The role of the intermediate layer&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This layer (usually made of &lt;strong&gt;copper, aluminum, or magnesium oxide&lt;/strong&gt;) performs two key functions:  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🔹 (1) Separates two magnetic layers (FGL and SIL)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FGL (Field Generation Layer)&lt;/strong&gt; is the "free" layer that can oscillate.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SIL (Spin Injection Layer)&lt;/strong&gt; is the "fixed" layer with constant magnetization.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Without an intermediate layer:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The magnetic layers &lt;strong&gt;would stick together&lt;/strong&gt; (like two magnets), and the FGL would not be able to oscillate.
&lt;/li&gt;
&lt;li&gt;There would be no &lt;strong&gt;spin-polarized current&lt;/strong&gt; — the basis of STO operation.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🔹 (2) Allows the current to "transfer" spin information&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When electrons pass through the &lt;strong&gt;SIL&lt;/strong&gt;, their spins are polarized (aligned in the same direction).
&lt;/li&gt;
&lt;li&gt;Then they &lt;strong&gt;pass through the non-magnetic layer&lt;/strong&gt;, maintaining their polarization.
&lt;/li&gt;
&lt;li&gt;Reaching the &lt;strong&gt;FGL&lt;/strong&gt;, these electrons transfer their &lt;strong&gt;spin moment&lt;/strong&gt; to it, causing it to oscillate.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Why is it a non-magnetic material?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A magnetic material&lt;/strong&gt; would shield the spin current → the FGL oscillations would stop.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A dielectric (for example, MgO)&lt;/strong&gt; is sometimes used to enhance the spin-transfer effect.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Optimal thickness:&lt;/strong&gt; ~1-3 nm.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Too thin → the layers will "stick together".
&lt;/li&gt;
&lt;li&gt;Too thick → the electrons will lose their polarization.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. What would happen without this layer?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;It would be impossible to control the FGL&lt;/strong&gt; — it would just "stick" to the SIL.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;There would be no microwaves&lt;/strong&gt; — there would be no oscillations.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MAMR would not work&lt;/strong&gt; — recording to high-density disks would remain impossible.
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzhth9tcsl26qnp06or5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzhth9tcsl26qnp06or5.png" alt=" " width="589" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Graph analysis: Comparison of SNR of conventional and MAMR heads&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This graph demonstrates the key advantage of &lt;strong&gt;MAMR (Microwave-Assisted Magnetic Recording)&lt;/strong&gt; over traditional magnetic recording: &lt;strong&gt;a signal-to-noise ratio (SNR) increase of 7 dB&lt;/strong&gt;. Let's take it apart piece by piece.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. X-axis and Y-axis: what is depicted?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;X-axis (Down-track / Cross-track direction, nm)&lt;/strong&gt;&lt;br&gt;
Shows the &lt;strong&gt;spatial position&lt;/strong&gt; of the head relative to the track on the disk.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Down-track&lt;/strong&gt; — along the track (the direction of data movement).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-track&lt;/strong&gt; — across the track (the recording width).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Y-axis (SNR, dB / Magnetization, %)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;SNR (Signal-to-Noise Ratio)&lt;/strong&gt; is the ratio of the useful signal to noise (the higher the better).  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Magnetization saturation&lt;/strong&gt; — the level of magnetization (↑ up / ↓ down).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Comparison of conventional and MAMR heads&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;(A) Conventional R/W Head&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SNR&lt;/strong&gt;: Low (conventionally ~0 dB on the graph).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;At high recording density, the magnetic bits become too small → the signal weakens, the noise increases.
&lt;/li&gt;
&lt;li&gt;The head cannot clearly remagnetize tiny areas → the data is distorted.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;(B) MAMR head&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SNR&lt;/strong&gt;: Higher by &lt;strong&gt;7 dB&lt;/strong&gt; (a significant improvement!).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reason&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Microwaves from the &lt;strong&gt;STO (Spin Torque Oscillator)&lt;/strong&gt; help &lt;strong&gt;re-magnetize the bits more cleanly&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;This reduces "smearing" of the signal and suppresses noise.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. How do microwaves improve SNR?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Accurate recording&lt;/strong&gt;&lt;br&gt;
Microwaves &lt;strong&gt;locally reduce the coercivity&lt;/strong&gt; (the resistance of a ferromagnet to demagnetization) of the disk material.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The head can record &lt;strong&gt;smaller and clearer bits&lt;/strong&gt; without errors.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Improved reading&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Because the bits are recorded more clearly, it is easier for the head to recognize them → less noise.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stability&lt;/strong&gt;&lt;br&gt;
MAMR does not overheat the disk (unlike HAMR), so there is no additional thermal noise.  &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Why is +7 dB important?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Increase in SNR by 3 dB = doubling of information capacity&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;+7 dB ≈ 5 times better signal-to-noise ratio&lt;/strong&gt; → You can increase the recording density &lt;strong&gt;without data loss&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
If a conventional head worked with a density of &lt;strong&gt;1 Tbit/in2&lt;/strong&gt;, then MAMR allows you to rise to &lt;strong&gt;1.5–2 Tbit/in2&lt;/strong&gt; with the same reliability.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;5. Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MAMR doesn't just "slightly improve" recording — it makes it fundamentally better&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;+7 dB SNR&lt;/strong&gt; means that HDDs with MAMR will be able to:

&lt;ul&gt;
&lt;li&gt;store more data,&lt;/li&gt;
&lt;li&gt;read it faster,&lt;/li&gt;
&lt;li&gt;work longer without errors.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Results of evaluation of overwrite performance of MAMR read and write heads
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypyfnfz4b8vj4467ytdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypyfnfz4b8vj4467ytdi.png" alt=" " width="554" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This figure shows the changes in overwrite performance over a range of write current (Iw). It indicates that, when the STO is on, MAMR provides a roughly 10 dB higher overwrite performance than a conventional magnetic recording method without an STO. This result indicates that the microwave field emitted by an STO helps to achieve good write performance, demonstrating the feasibility of MAMR. As a result of the foregoing, MAMR is considered to be a promising next-generation high-density HDD recording technology capable of overcoming the trilemma associated with high-Ku recording media.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Evaluation of a prototype HDD with MAMR read/write heads and media
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftms18xldi3slwhkptvtn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftms18xldi3slwhkptvtn.png" alt=" " width="603" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Results of long-term reliability tests of prototype MAMR HDDs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Figure 5 compares the changes in the bit error rate (BER) of the MAMR and conventional HDDs over time. The MAMR media were made of materials similar to those used for conventional recording media. Figure 5 presents the results obtained at a recording density close to the maximum recording density of existing HDDs. It shows that MAMR with an STO provides a significant reduction in the BER. Generally, an STO with higher drive current generates a microwave field with higher intensity and thus provides higher energy assist for MAMR, resulting in a greater reduction in the BER. However, excessive current could degrade reliability because of Joule heating and electromigration within the STO. In the example shown in Figure 5, the BER of MAMR remains unchanged for up to 1,000 hours of write operations without any STO degradation, the possibility of which had been a concern. From this evaluation, we have also obtained information about adequate STO drive conditions. (from the &lt;a href="https://toshiba.semicon-storage.com/content/dam/toshiba-ss-v3/master/en/company/technical-review/pdf/technical-review-microwave-assisted-magnetic-recording-technology_e.pdf" rel="noopener noreferrer"&gt;article&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; MAMR is a smart way to extend the life of HDD technology, but it is still fighting for a place in the market; let's see what happens next.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>An overview of the security of lower-level TCP/IP protocols</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Thu, 05 Jun 2025 12:43:36 +0000</pubDate>
      <link>https://dev.to/dima853/an-overview-of-the-security-of-lower-level-tcpip-protocols-4fnf</link>
      <guid>https://dev.to/dima853/an-overview-of-the-security-of-lower-level-tcpip-protocols-4fnf</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This section examines the protocols at the lower levels of the TCP/IP stack, focusing on their vulnerabilities and security issues. The focus is on IP, ARP and TCP.&lt;/p&gt;

&lt;h2&gt;
  
  
  2.1 Basic Protocols
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1.1 IP Protocol (Internet Protocol)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Main function&lt;/strong&gt;: A packet multiplexer that adds an IP header to messages of higher-level protocols&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Features&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Unreliable datagram service (no guarantees of delivery, order, uniqueness)&lt;/li&gt;
&lt;li&gt;Header checksum covers only the header, not the data&lt;/li&gt;
&lt;li&gt;Lack of source-address authentication (IP spoofing is possible)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
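The header checksum mentioned above is the standard RFC 1071 one's-complement sum. It only detects accidental corruption of the header itself (never the payload), and a spoofer can simply recompute it after forging the source address. A minimal sketch of the algorithm:

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum: one's-complement sum of 16-bit words,
   folded to 16 bits and inverted. It covers only the bytes passed in
   (for IPv4, the 20+ byte header), not the packet payload. */
uint16_t ip_checksum(const void *data, size_t len) {
    const uint16_t *w = (const uint16_t *)data;
    uint32_t sum = 0;
    while (len > 1) {          /* sum full 16-bit words */
        sum += *w++;
        len -= 2;
    }
    if (len == 1)              /* a trailing odd byte is zero-padded */
        sum += *(const uint8_t *)w;
    while (sum >> 16)          /* fold carries back into the low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;
}
```

A correctly checksummed header sums to 0xFFFF including its own checksum field, so running `ip_checksum` over a valid header returns 0 — which is also exactly the recomputation a spoofing tool performs after rewriting `saddr`.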

&lt;p&gt;&lt;strong&gt;Example of IP spoofing in C&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Example of creating a RAW socket with IP address substitution&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;sock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;AF_INET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SOCK_RAW&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;IPPROTO_RAW&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="n"&gt;packet&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;4096&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="c1"&gt;// Filling in the IP header with a fake source IP&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;iphdr&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;ip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;iphdr&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="n"&gt;packet&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;saddr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;inet_addr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"192.168.1.100"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Spoofed&lt;/span&gt;
&lt;span class="n"&gt;ip&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;daddr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;inet_addr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"10.0.0.1"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// ... the remaining fields&lt;/span&gt;
&lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;sendto&lt;/span&gt; &lt;span class="nf"&gt;header&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sock&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;packet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;tot_len&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;sockaddr&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;dest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dest&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fragmentation&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Packets can fragment when the MTU is exceeded&lt;/li&gt;
&lt;li&gt;Security issues: bypassing filters, overlapping fragments with different contents&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IP addresses (IPv4)&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;32-bit, divided into network and host parts (CIDR)&lt;/li&gt;
&lt;li&gt;Broadcast addresses (all 0s or all 1s in the host part)&lt;/li&gt;
&lt;li&gt;Directed broadcast traffic can be used for attacks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
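The host-part rule above is plain bit masking; a small sketch (the helper names are mine, not from any standard API) showing how the network and directed-broadcast addresses fall out of a CIDR prefix length:

```c
#include <stdint.h>

/* For an IPv4 address in host byte order and a CIDR prefix length:
   network address   = prefix bits kept, host bits all 0;
   directed broadcast = prefix bits kept, host bits all 1. */
uint32_t host_mask(int prefix_len) {
    /* guard: shifting a 32-bit value by 32 is undefined in C */
    return prefix_len >= 32 ? 0 : 0xFFFFFFFFu >> prefix_len;
}

uint32_t network_addr(uint32_t addr, int prefix_len) {
    return addr & ~host_mask(prefix_len);
}

uint32_t directed_broadcast(uint32_t addr, int prefix_len) {
    return addr | host_mask(prefix_len);
}
```

For 192.168.1.100/24 (0xC0A80164) this gives network 192.168.1.0 and directed broadcast 192.168.1.255 — the kind of address that smurf-style amplification attacks target.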

&lt;h3&gt;
  
  
  2.1.2 ARP (Address Resolution Protocol)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Purpose&lt;/strong&gt;: Converting IP addresses to MAC addresses for Ethernet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operating mechanism&lt;/strong&gt;: Broadcast requests and responses with caching of results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vulnerabilities&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;ARP spoofing: spoofing ARP responses to redirect traffic&lt;/li&gt;
&lt;li&gt;Lack of authentication in the protocol&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example of ARP spoofing&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Example of sending a fake ARP response&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;arp_packet&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;uint16_t&lt;/span&gt; &lt;span class="n"&gt;htype&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Hardware type&lt;/span&gt;
    &lt;span class="kt"&gt;uint16_t&lt;/span&gt; &lt;span class="n"&gt;ptype&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Protocol type&lt;/span&gt;
&lt;span class="c1"&gt;// ... other fields of the ARP header&lt;/span&gt;
    &lt;span class="n"&gt;u_char&lt;/span&gt; &lt;span class="n"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// Sender's MAC (forged)&lt;/span&gt;
&lt;span class="n"&gt;u_char&lt;/span&gt; &lt;span class="n"&gt;spa&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// Sender's IP (forged)&lt;/span&gt;
&lt;span class="n"&gt;u_char&lt;/span&gt; &lt;span class="n"&gt;tha&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// Target's MAC&lt;/span&gt;
    &lt;span class="n"&gt;u_char&lt;/span&gt; &lt;span class="n"&gt;tpa&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// IP of the target&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;arp_packet&lt;/span&gt; &lt;span class="n"&gt;pkt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// Filling with fake&lt;/span&gt;
&lt;span class="n"&gt;memcpy&lt;/span&gt; &lt;span class="nf"&gt;data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pkt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attacker_mac&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;memcpy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pkt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;spa&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;victim_ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="n"&gt;send_packet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;pkt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2.1.3 TCP (Transmission Control Protocol)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Main function&lt;/strong&gt;: Providing a reliable duplex connection over an unreliable IP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mechanisms&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Sequence numbers for ordering and acknowledgment&lt;/li&gt;
&lt;li&gt;Congestion control and retransmission&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Establishing a connection (Three-way handshake)
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Client → Server: SYN (ISNc)&lt;/li&gt;
&lt;li&gt;Server → Client: SYN-ACK (ISNs, ACK(ISNc+1))&lt;/li&gt;
&lt;li&gt;Client → Server: ACK(ISNs+1)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example in C&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Simplified example of TCP handshake&lt;/span&gt;
&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;sock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;AF_INET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SOCK_STREAM&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;sockaddr_in&lt;/span&gt; &lt;span class="n"&gt;serv_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;serv_addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sin_family&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AF_INET&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;serv_addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sin_port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;htons&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;inet_pton&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;AF_INET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"192.168.1.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;serv_addr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sin_addr&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// SYN&lt;/span&gt;
&lt;span class="n"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sock&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;sockaddr&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;serv_addr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;serv_addr&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="c1"&gt;// The kernel completes the handshake automatically&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  TCP vulnerabilities:
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SYN flood&lt;/strong&gt;: Sending many SYNs without ever completing the handshake&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fills the queue of half-open connections&lt;/li&gt;
&lt;li&gt;Mitigation: SYN cookies&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The ISN predictability attack&lt;/strong&gt;: If the initial sequence number (ISN) is predictable, connection spoofing is possible&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mitigation: Cryptographically strong ISN generation (RFC 1948)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Visualization of ISN&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Good implementations: a diffuse point cloud (FreeBSD 4.6)&lt;/li&gt;
&lt;li&gt;Poor implementations: clear patterns (Windows NT 4.0, IRIX 6.5)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Privileged ports (below 1024)&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;On UNIX, only root can bind to them&lt;/li&gt;
&lt;li&gt;An unreliable authentication scheme, since the convention is not universal&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Closing the connection
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Each direction of the connection is closed independently (half-close)&lt;/li&gt;
&lt;li&gt;FIN packets initiate the closure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security conclusions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;IP layer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No source authentication&lt;/li&gt;
&lt;li&gt;Spoofing and fragmentation attacks are possible&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ARP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vulnerable to spoofing&lt;/li&gt;
&lt;li&gt;Mitigations: static ARP entries, ARP inspection&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;TCP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vulnerable to spoofing when the ISN is predictable&lt;/li&gt;
&lt;li&gt;SYN flooding can cause DoS&lt;/li&gt;
&lt;li&gt;Privileged ports are an unreliable authentication mechanism&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Recommendations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use cryptographic authentication (do not rely on IP/TCP mechanisms)&lt;/li&gt;
&lt;li&gt;Filter directed broadcast traffic&lt;/li&gt;
&lt;li&gt;Deploy protection against SYN floods&lt;/li&gt;
&lt;li&gt;Ensure correct ISN generation (RFC 1948)&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Introduction to DDD || Eric Evans</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Mon, 26 May 2025 10:30:08 +0000</pubDate>
      <link>https://dev.to/dima853/introduction-to-ddd-eric-evans-4ca7</link>
      <guid>https://dev.to/dima853/introduction-to-ddd-eric-evans-4ca7</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Summary: Domain Models in software development&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhe3hudaprl2spctug4vx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhe3hudaprl2spctug4vx.png" alt=" " width="544" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Models as a simplification of reality&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;An example of an 18th-century Chinese map&lt;/strong&gt;:
China is depicted in the center; other countries are shown only schematically.

&lt;ul&gt;
&lt;li&gt;This reflected China's isolationist policy, but it did not help to interact with the outside world.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conclusion&lt;/strong&gt;: A model is &lt;strong&gt;a simplified representation of reality&lt;/strong&gt; that focuses on the important aspects and ignores the unnecessary ones.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. What is a Domain?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Domain&lt;/strong&gt; is the area of the user's activity that the program addresses.

&lt;ul&gt;
&lt;li&gt;Examples:

&lt;ul&gt;
&lt;li&gt;Airline ticketing: real people, planes, flights.&lt;/li&gt;
&lt;li&gt;Accounting: money, invoices, taxes.&lt;/li&gt;
&lt;li&gt;A version control system: software development.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A domain is usually &lt;strong&gt;not directly connected to computers&lt;/strong&gt;, but it requires a deep understanding of the subject area.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Why do we need domain models?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: The amount of domain knowledge is huge and complex.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution&lt;/strong&gt;: &lt;strong&gt;Model&lt;/strong&gt; is:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Structured simplification&lt;/strong&gt; of knowledge.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Abstraction&lt;/strong&gt;, which helps to focus on the task.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A tool&lt;/strong&gt; to combat information overload.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4. What is the Domain Model?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;It's not just a diagram or a code&lt;/strong&gt;, but the &lt;strong&gt;idea&lt;/strong&gt; that they convey.
&lt;/li&gt;
&lt;li&gt;** Not "realism"**, but a useful abstraction (like cinema— not real life, but interpretation).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Example&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The documentary film &lt;strong&gt;edits reality&lt;/strong&gt; to convey meaning.
&lt;/li&gt;
&lt;li&gt;Similarly, the domain model &lt;strong&gt;selects the important&lt;/strong&gt; and discards the unimportant.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;5. How is the Domain Model created?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Domain analysis&lt;/strong&gt; (communication with experts).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structuring knowledge&lt;/strong&gt; (highlighting key entities and relationships).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Abstraction&lt;/strong&gt; (ignoring unnecessary details).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Capturing the model&lt;/strong&gt; (in the form of diagrams, code, or descriptions).
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;6. Examples of models&lt;/strong&gt;
&lt;/h4&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Real world&lt;/th&gt;
&lt;th&gt;Model in the program&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Passenger, Plane, flight&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;class Passenger&lt;/code&gt;, &lt;code&gt;class Flight&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Money, accounts, transactions&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;Account&lt;/code&gt;, &lt;code&gt;Transaction&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;7. Conclusion&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Domain Model&lt;/strong&gt; is &lt;strong&gt;a deliberately simplified representation of knowledge&lt;/strong&gt; about a domain.
&lt;/li&gt;
&lt;li&gt;It helps:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Understand&lt;/strong&gt; complex processes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communicate&lt;/strong&gt; between developers and experts.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create software&lt;/strong&gt; that solves users' real-world problems.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key point&lt;/strong&gt;: A good model is not one that is "realistic", but one that is &lt;strong&gt;useful&lt;/strong&gt; for solving a specific problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Summary: The role of the model in Domain-Driven Design (DDD)&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Three key model functions in DDD&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The model shapes the design of the system, and the design clarifies the model&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model is closely tied to the implementation, which keeps it relevant.&lt;/li&gt;
&lt;li&gt;The code can be interpreted through the lens of the model (helpful during maintenance and evolution).&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Example&lt;/em&gt;: If the model includes the entity &lt;code&gt;Order&lt;/code&gt;, the code will have a class &lt;code&gt;Order&lt;/code&gt; with the corresponding methods.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The model is the basis of a shared team language (Ubiquitous Language)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers and domain experts speak the same language.&lt;/li&gt;
&lt;li&gt;There is no need to "translate" the requirements.&lt;/li&gt;
&lt;li&gt;The language helps to refine the model itself.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Example&lt;/em&gt;: The term "order cancellation" in the experts' speech → the method &lt;code&gt;Order.cancel()&lt;/code&gt; in the code.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;A model is distilled knowledge of the domain&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The team agrees on how to structure knowledge about the subject area.&lt;/li&gt;
&lt;li&gt;The model captures the important concepts and their relationships.&lt;/li&gt;
&lt;li&gt;Experience with early versions of the software helps to improve the model.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
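&lt;p&gt;As an illustration of the &lt;code&gt;Order.cancel()&lt;/code&gt; example, a minimal sketch in C (the language of the other code samples in this feed). The rule that shipped orders cannot be cancelled is an invented illustration, not a rule from the text:&lt;/p&gt;

```c
/* The experts' term "order cancellation" becomes an operation on the
   model element itself, so the domain rule lives in the model. */
enum order_state { ORDER_NEW, ORDER_SHIPPED, ORDER_CANCELLED };

struct Order { enum order_state state; };

/* Returns 1 if the cancellation was applied, 0 if the (invented)
   domain rule forbids it; the model, not the caller, enforces it. */
int order_cancel(struct Order *o) {
    if (o->state == ORDER_SHIPPED)
        return 0;                  /* shipped orders stay as they are */
    o->state = ORDER_CANCELLED;
    return 1;
}
```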

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Why is the model the "heart" of the software?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;The main task of software is &lt;strong&gt;to solve the problems of the subject area&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Complex domains require deep understanding:

&lt;ul&gt;
&lt;li&gt;Developers must immerse themselves in the business logic.&lt;/li&gt;
&lt;li&gt;Technical skills (frameworks, algorithms) are secondary without an understanding of the domain.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk&lt;/strong&gt;: Developers often focus on technology while ignoring the domain, and end up creating complex but useless systems.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Example: A story from Monty Python&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;On the set of &lt;em&gt;Monty Python and the Holy Grail&lt;/em&gt;, one shot didn't turn out to be funny.
&lt;/li&gt;
&lt;li&gt;The actors changed the scene — it became funny, but the editor used the old version because the sleeve was visible in the new one.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analogy in development&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Editor = a developer who takes care of the technical details but loses the point.
&lt;/li&gt;
&lt;li&gt;Director = team leader who returns the focus to the &lt;strong&gt;domain&lt;/strong&gt; (as a funny scene).
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;4. Advantages of deep domain work&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complexity is interesting&lt;/strong&gt;:
Creating a clear model in a chaotic domain is an intellectual challenge.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modeling skills make a developer more valuable&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The ability to identify entities, aggregates, and constraints is useful in any project.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Systematic approaches&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Tactical DDD (Entity, Value Object, Aggregate).&lt;/li&gt;
&lt;li&gt;Strategic DDD (Bounded Context, Context Map).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;5. Conclusion&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A good model&lt;/strong&gt; is not just documentation, but a &lt;strong&gt;decision-making tool&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DDD&lt;/strong&gt; helps:

&lt;ul&gt;
&lt;li&gt;Connect the code with the business logic.&lt;/li&gt;
&lt;li&gt;Avoid "technical narcissism" (when the technology matters more than the task).&lt;/li&gt;
&lt;li&gt;Create software that genuinely solves user problems.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key quote&lt;/strong&gt;:&lt;br&gt;
&lt;em&gt;"The heart of software is its ability to solve problems in a subject area. Everything else is auxiliary."&lt;/em&gt;  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Additional concepts to explore&lt;/strong&gt;:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Tactical DDD: Entity vs Value Object, Aggregates, Repositories.
&lt;/li&gt;
&lt;li&gt;Strategic DDD: Bounded Context, Anti-Corruption Layer.
&lt;/li&gt;
&lt;li&gt;Practices: Event Storming, Domain Storytelling.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Knowledge Extraction in Domain-Driven Design || DDD Eric Evans</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Mon, 26 May 2025 10:27:28 +0000</pubDate>
      <link>https://dev.to/dima853/knowledge-extraction-in-domain-driven-design-ddd-eric-evans-1ml8</link>
      <guid>https://dev.to/dima853/knowledge-extraction-in-domain-driven-design-ddd-eric-evans-1ml8</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Summary: Knowledge Extraction in Domain-Driven Design (Chapter 1)&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. Problem: Ignorance of the subject area&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The author, a software developer, was faced with the task of creating a program for designing &lt;strong&gt;printed circuit boards (PCBs)&lt;/strong&gt;, having no knowledge of electronics.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Attempt 1&lt;/strong&gt;: Get a ready-made specification from the experts → failure.

&lt;ul&gt;
&lt;li&gt;The experts proposed primitive solutions (for example, sorting ASCII files) that did not solve the real problems.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Problem&lt;/strong&gt;: The gap between technical knowledge and domain knowledge.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;2. The process of "Knowledge Crunching"&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Method&lt;/strong&gt;: Collaborative modeling through:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dialogue with experts&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The author drew diagrams (informal UML); the experts corrected them.&lt;/li&gt;
&lt;li&gt;Example:

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Expert&lt;/em&gt;: "These are not chips, but &lt;em&gt;component instances&lt;/em&gt;."
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Developer&lt;/em&gt;: Clarified the terms (for example, "ref-des" = reference designator = component instance).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Focusing on a single function&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;They worked through &lt;strong&gt;"probe simulation"&lt;/strong&gt;: analysis of signal delays in the circuit.&lt;/li&gt;
&lt;li&gt;They simplified the model: ignored the physics of the chips and focused on &lt;strong&gt;the topology of the connections&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterative refinement of the model&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developer's questions&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;"How is the signal transmitted beyond the Pin?" → Found out: the component &lt;em&gt;"pushes"&lt;/em&gt; it onward.&lt;/li&gt;
&lt;li&gt;"What counts as a 'hop'?" → Crossing a Net = 1 hop.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result&lt;/strong&gt;: A model of objects (Net, Pin, Component) with behavior.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;3. Creating a prototype&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Goal&lt;/strong&gt;: To test the model in practice.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prototype features&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;No UI, no persistence: just the logic.&lt;/li&gt;
&lt;li&gt;Tests plus output to the console.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effect&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The experts &lt;strong&gt;saw&lt;/strong&gt; the model working, and the dialogue became more concrete.&lt;/li&gt;
&lt;li&gt;The code and the model &lt;strong&gt;evolved together&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
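&lt;p&gt;Such a logic-only prototype might look like the following C sketch. &lt;code&gt;Net&lt;/code&gt;, &lt;code&gt;Pin&lt;/code&gt;, &lt;code&gt;Component&lt;/code&gt; and the one-hop-per-Net rule come from the summary above; the field names and wiring are invented:&lt;/p&gt;

```c
/* A signal entering a Pin crosses the attached Net (1 hop), reaches
   the far Pin, and is "pushed" onward by that pin's Component. */
struct Net;
struct Component;

struct Pin {
    struct Net *net;              /* the Net this pin connects to          */
    struct Component *owner;      /* the Component this pin belongs to     */
};

struct Component {
    struct Pin *out;              /* where the component pushes the signal */
};

struct Net {
    struct Pin *sink;             /* the far pin reached across the Net    */
};

/* Follow a signal from a pin; crossing a Net counts as one hop. */
int probe_hops(struct Pin *p, int max_hops) {
    int hops = 0;
    while (hops != max_hops) {
        if (p == 0) break;
        if (p->net == 0) break;          /* nothing more to cross        */
        hops++;                          /* crossing a Net = 1 hop       */
        p = p->net->sink;                /* arrive at the far pin        */
        if (p == 0) break;
        if (p->owner == 0) break;        /* no component to push further */
        p = p->owner->out;               /* the component pushes it on   */
    }
    return hops;
}
```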

&lt;h4&gt;
  
  
  &lt;strong&gt;4. The final model&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Key entities&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Net&lt;/strong&gt; (connection).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pin&lt;/strong&gt; (component contact).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Component&lt;/strong&gt; (the component type, which defines how the signal is "pushed" on).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Excluded&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Physical parameters of the components (they did not affect the task).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages of the model&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Eliminated synonyms (for example, "ref-des" = "component instance").&lt;/li&gt;
&lt;li&gt;Gave the team &lt;strong&gt;a single language&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Allowed new developers to understand the logic quickly.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;5. Lessons&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A model is born in dialogue&lt;/strong&gt;: you cannot simply take a ready-made specification from the experts.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Focus on specific scenarios&lt;/strong&gt; — for example, "probe simulation".
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A prototype without "excess"&lt;/strong&gt; helps to test ideas quickly.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The evolution of the model and the code&lt;/strong&gt; — they must change together.
&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Quote&lt;/strong&gt;:&lt;br&gt;
&lt;em&gt;"A model is a concise knowledge that excludes irrelevant details, but preserves the essence of the problem."&lt;/em&gt;  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Additional concepts&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ubiquitous Language&lt;/strong&gt; is a common language for developers and experts.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tactical DDD&lt;/strong&gt; — Entity (Component), Value Object (Pin), Aggregates.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iteration&lt;/strong&gt;: the model is not a religion; it can be changed as you learn.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;: The success of the project depends on &lt;strong&gt;deep immersion in the domain&lt;/strong&gt;, not only on technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ingredients of Effective Modeling
&lt;/h2&gt;

&lt;p&gt;Certain things we did led to the success I just described.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Binding the model and the implementation&lt;/strong&gt;&lt;br&gt;
That crude prototype forged the essential link early, and it was maintained through all subsequent iterations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cultivating a language based on the model&lt;/strong&gt;&lt;br&gt;
At first, the engineers had to explain elementary PCB issues to me, and I had to explain what a class diagram meant. But as the project proceeded, any of us could take terms straight out of the model, organize them into sentences consistent with the structure of the model, and be unambiguously understood without translation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Developing a knowledge-rich model&lt;/strong&gt;&lt;br&gt;
The objects had behavior and enforced rules. The model wasn't just a data schema; it was integral to solving a complex problem. It captured knowledge of various kinds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Distilling the model&lt;/strong&gt;&lt;br&gt;
Important concepts were added to the model as it became more complete, but equally important, concepts were dropped when they didn't prove useful or central. When an unneeded concept was tied to one that was needed, a new model was found that distinguished the essential concept so that the other could be dropped.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Brainstorming and experimenting&lt;/strong&gt;&lt;br&gt;
The language, combined with sketches and a brainstorming attitude, turned our discussions into laboratories of the model, in which hundreds of experimental variations could be exercised, tried, and judged. As the team went through scenarios, the spoken expressions themselves provided a quick viability test of a proposed model, as the ear could quickly detect either the clarity and ease or the awkwardness of expression.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;It is the creativity of brainstorming and massive experimentation, leveraged through a model-based language and disciplined by the feedback loop through implementation, that makes it possible to find a knowledge-rich model and distill it. This kind of knowledge crunching turns the knowledge of the team into valuable models.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ddd</category>
      <category>design</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>2.3 ARCHITECTURES VERSUS MIDDLEWARE</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Sat, 17 May 2025 18:53:46 +0000</pubDate>
      <link>https://dev.to/dima853/23-architectures-versus-middleware-55op</link>
      <guid>https://dev.to/dima853/23-architectures-versus-middleware-55op</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;The role of Middleware in distributed systems&lt;/strong&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Architectural Styles and Middleware&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Chapter 2.3 "Architectures vs Middleware"&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The Middleware Role&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Middleware is an intermediate layer between applications and distributed platforms. Its key task is to ensure &lt;strong&gt;transparency of distribution&lt;/strong&gt;, hiding from applications the complexities associated with data distribution, processing and management.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;The connection of Middleware with architectural styles&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In practice, middleware often implements a specific &lt;strong&gt;architectural style&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Object-oriented style&lt;/strong&gt;: for example, CORBA (Common Object Request Broker Architecture), where interaction is built through remote objects.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-oriented style&lt;/strong&gt;: as in TIB/Rendezvous, where components exchange asynchronous event messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt; of this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplify application design through standardized templates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limited flexibility. For example, CORBA initially supported only remote method calls (RPC), which later required the addition of other patterns (such as messaging), complicating the system.&lt;/li&gt;
&lt;li&gt;The risk of "bloating" middleware due to the addition of new features.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Adapting Middleware to the needs of applications&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;To solve the problem of flexibility, two approaches have been proposed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Specialized versions of middleware&lt;/strong&gt; for different classes of applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configurable systems&lt;/strong&gt;, where &lt;strong&gt;mechanisms&lt;/strong&gt; (basic functionality) and &lt;strong&gt;policies&lt;/strong&gt; (configurable rules of behavior) are separated. This allows you to adapt middleware without rewriting the code.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Interceptors Technology&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;One of the ways to configure middleware is to use &lt;strong&gt;interceptors&lt;/strong&gt; — software modules that "wedge" into the standard execution flow to add specific logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An example from object-oriented systems&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Object &lt;code&gt;A&lt;/code&gt; calls the method &lt;code&gt;B.do_something(value)&lt;/code&gt;, where &lt;code&gt;B&lt;/code&gt; is located on a remote machine.&lt;/li&gt;
&lt;li&gt;The call is converted into a universal query &lt;code&gt;invoke(B, &amp;amp;do_something, value)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The request is sent via the OS network interface.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How interceptors help&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;At the query level&lt;/strong&gt;: If object &lt;code&gt;B&lt;/code&gt; is replicated, the interceptor can automatically forward the call to all replicas without requiring changes to either the &lt;code&gt;A&lt;/code&gt; code or the underlying middleware.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;At the message level&lt;/strong&gt;: if the &lt;code&gt;value&lt;/code&gt; parameter is a large amount of data, the interceptor can split it into parts to improve transmission reliability, unnoticed by the main system.&lt;/li&gt;
&lt;/ul&gt;
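&lt;p&gt;A rough C sketch of the request-level case: the &lt;code&gt;invoke&lt;/code&gt; signature, the replica ids, and all function names are invented for illustration. The point is that installing the interceptor only swaps a function pointer; neither the caller nor the base dispatch changes:&lt;/p&gt;

```c
/* Hypothetical middleware hook: a request-level interceptor sits
   between the client stub and the generic invoke() entry point. */
typedef int (*invoke_fn)(int object_id, const char *method, int value);

int invocations = 0;                       /* visible for testing */

/* The generic middleware entry point: one low-level request. */
int base_invoke(int object_id, const char *method, int value) {
    (void)method; (void)value;
    invocations++;
    return object_id;
}

/* Replication interceptor: object B is replicated, so one logical
   call from A fans out to every replica. */
int replica_ids[3] = { 7, 8, 9 };
int replicating_invoke(int object_id, const char *method, int value) {
    int i;
    (void)object_id;                        /* replaced by replica ids */
    for (i = 0; i != 3; i++)
        base_invoke(replica_ids[i], method, value);
    return 0;
}

/* The stub calls through this pointer, so installing or removing the
   interceptor is just swapping the pointer. */
invoke_fn current_invoke = replicating_invoke;
```

&lt;p&gt;Resetting &lt;code&gt;current_invoke&lt;/code&gt; to &lt;code&gt;base_invoke&lt;/code&gt; removes the interception without touching any caller.&lt;/p&gt;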

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transparency for the application.&lt;/li&gt;
&lt;li&gt;Minimal changes to middleware.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Problems&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The complexity of implementing universal interceptors (as shown in Schmidt et al., 2000).&lt;/li&gt;
&lt;li&gt;The balance between flexibility and ease of management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Middleware, based on architectural styles, simplifies development, but requires adaptation mechanisms (for example, interceptors). Modern systems tend to separate policies and mechanisms in order to maintain flexibility without complicating the code.&lt;/p&gt;




&lt;h4&gt;
  
  
  &lt;strong&gt;Graphical representation&lt;/strong&gt;
&lt;/h4&gt;


&lt;p&gt;&lt;strong&gt;Example of interceptor operation&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Object A] → [Calling B.do_something()] → [Request Interceptor] →  
[Invoke for replicas B] → [Message Interceptor] → [Network]  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key terms&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Interceptor&lt;/strong&gt;: a module that wedges into the standard invocation flow to add specific logic.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution Transparency&lt;/strong&gt;: hiding the complexities of distribution from applications.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policy-Mechanism Separation&lt;/strong&gt;: separating configurable rules of behavior from the basic functionality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.3.2 General approaches to adaptive software&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Adapting middleware through interceptors&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Interceptors allow middleware to be adapted to the changing operating conditions of distributed systems, such as:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mobility&lt;/strong&gt; (changing the location of nodes).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instability of QoS&lt;/strong&gt; (network connection quality).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Equipment failures&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low battery&lt;/strong&gt; (in mobile devices).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of making applications responsible for responding to changes, middleware takes over this task.  &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Three approaches to creating adaptive software&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;McKinley et al. (2004) identify three main methods:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Separation of concerns&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The traditional approach: separating the core functionality from "extra" aspects (security, reliability, performance).&lt;/li&gt;
&lt;li&gt;Problem: many aspects (such as security) cannot be isolated in a separate module.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aspect-oriented programming (AOP)&lt;/strong&gt; attempts to solve this, but it does not yet scale to large distributed systems (Filman et al., 2005).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Computational reflection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ability of a program to inspect and change its own behavior at run time (Kon et al., 2002).&lt;/li&gt;
&lt;li&gt;It is supported in languages (for example, Java) and in some middleware systems.&lt;/li&gt;
&lt;li&gt;However, reflective middleware has not yet proven effective in large distributed systems (Blair et al., 2004).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Component-based design&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adaptation through the composition of components (static or dynamic).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic component replacement&lt;/strong&gt; requires complex dependency management (Yellin, 2003).
&lt;/li&gt;
&lt;li&gt;Problem: components often turn out to be more tightly coupled than they appear.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2.3.3 Discussion&lt;/strong&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Middleware Issues&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complexity and cumbersomeness&lt;/strong&gt; due to attempts to ensure transparency of distribution.
&lt;/li&gt;
&lt;li&gt;The conflict between &lt;strong&gt;universality&lt;/strong&gt; (transparency) and &lt;strong&gt;specialization&lt;/strong&gt; (for specific applications).
&lt;/li&gt;
&lt;li&gt;Example: the code size of some middleware solutions grew by &lt;strong&gt;50% in 4 years&lt;/strong&gt;, while the number of files &lt;strong&gt;tripled&lt;/strong&gt; (Zhang &amp;amp; Jacobsen, 2004).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
&lt;strong&gt;Is adaptability needed?&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Arguments for&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Distributed systems cannot be stopped for updates, so dynamic replacement of components is required.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Arguments against&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Many changes (attacks, equipment failures) can be anticipated in advance, and the complexity of adaptive solutions may outweigh their advantages.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Alternative approach&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Instead of rebuilding on the fly, you can:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Provide adaptation policies in advance&lt;/strong&gt; (for example, reallocation of resources).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate the response to changes&lt;/strong&gt; without human intervention.
&lt;/li&gt;
&lt;/ul&gt;
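&lt;p&gt;A toy sketch of what such predefined policies might look like. The event names and settings here are invented for illustration; a real system would monitor its environment and apply the matching rule without human intervention.&lt;/p&gt;

```python
# Predefined adaptation policies: instead of reconfiguring middleware
# on the fly, register the reactions up front and let a monitor apply
# them automatically when the corresponding event is observed.

POLICIES = {
    "low_battery":   lambda sys: sys.update(polling_interval=60),
    "link_degraded": lambda sys: sys.update(compression=True),
    "node_failed":   lambda sys: sys.update(active_replicas=2),
}

def react(system_state, event):
    """Apply the pre-registered policy for an observed event, if any."""
    policy = POLICIES.get(event)
    if policy is not None:
        policy(system_state)
    return system_state

state = dict(polling_interval=5, compression=False, active_replicas=3)
react(state, "low_battery")
print(state)
```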




&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Middleware adaptability is an important but challenging goal. Current approaches (AOP, reflection, components) are not yet mature. The most practical route may be predefined adaptation policies rather than on-the-fly reconfiguration.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>3.1 Prediction methods for conditional jumps</title>
      <dc:creator>dima853</dc:creator>
      <pubDate>Thu, 08 May 2025 18:00:52 +0000</pubDate>
      <link>https://dev.to/dima853/31-prediction-methods-for-conditional-jumps-l53</link>
      <guid>https://dev.to/dima853/31-prediction-methods-for-conditional-jumps-l53</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;1. What is a saturating 2-bit counter?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is a &lt;strong&gt;two-bit saturating counter&lt;/strong&gt; used in &lt;strong&gt;branch prediction algorithms&lt;/strong&gt;. It helps the processor predict whether a conditional branch (for example, an &lt;code&gt;if-else&lt;/code&gt; statement or a loop condition) will be taken (&lt;code&gt;taken&lt;/code&gt;) or not (&lt;code&gt;not taken&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. How does the saturating 2-bit counter work?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The counter can be in &lt;strong&gt;4 states&lt;/strong&gt; encoded with two bits:  &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Prediction&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;00&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Strongly not taken&lt;/td&gt;
&lt;td&gt;The transition will not be completed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;01&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Weakly not taken&lt;/td&gt;
&lt;td&gt;Rather not executed, but may change&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;10&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Weakly taken&lt;/td&gt;
&lt;td&gt;Rather completed, but may change&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;11&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Strongly taken&lt;/td&gt;
&lt;td&gt;The transition will be completed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
&lt;strong&gt;Counter update rules:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If the branch &lt;strong&gt;is taken (&lt;code&gt;taken&lt;/code&gt;)&lt;/strong&gt; → the counter &lt;strong&gt;increases by 1 (but not above 11)&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;If the branch &lt;strong&gt;is not taken (&lt;code&gt;not taken&lt;/code&gt;)&lt;/strong&gt; → the counter &lt;strong&gt;decreases by 1 (but not below 00)&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The counter is in state &lt;code&gt;01&lt;/code&gt; (weakly not taken):

&lt;ul&gt;
&lt;li&gt;If the branch &lt;strong&gt;is taken&lt;/strong&gt; → it moves to &lt;code&gt;10&lt;/code&gt; (weakly taken).
&lt;/li&gt;
&lt;li&gt;If the branch &lt;strong&gt;is not taken&lt;/strong&gt; → it moves to &lt;code&gt;00&lt;/code&gt; (strongly not taken).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
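&lt;p&gt;The state machine above can be written out as a short sketch (Python, for illustration only):&lt;/p&gt;

```python
# A saturating 2-bit counter: states 0..3 correspond to
# 00 strongly-not-taken, 01 weakly-not-taken, 10 weakly-taken, 11 strongly-taken.

class TwoBitCounter:
    def __init__(self, state=0):
        self.state = state  # 0..3

    def predict(self):
        # States 2 (binary 10) and 3 (binary 11) predict "taken".
        return self.state // 2 == 1

    def update(self, taken):
        if taken:
            self.state = min(self.state + 1, 3)  # saturate at 11
        else:
            self.state = max(self.state - 1, 0)  # saturate at 00

# A loop that runs 9 times and exits on the 10th iteration:
# the counter "learns" taken, so only the first two predictions
# and the final loop exit miss.
ctr = TwoBitCounter()
misses = 0
for taken in [True] * 9 + [False]:
    if ctr.predict() != taken:
        misses += 1
    ctr.update(taken)
print(misses)  # 3 mispredictions out of 10
```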

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Why exactly 2 bits?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
A &lt;strong&gt;1-bit counter&lt;/strong&gt; (0 or 1) flips its prediction too abruptly and is sensitive to noise (frequent switching).
&lt;/li&gt;
&lt;li&gt;
A &lt;strong&gt;2-bit counter&lt;/strong&gt; adds hysteresis: flipping between &lt;code&gt;taken&lt;/code&gt; and &lt;code&gt;not taken&lt;/code&gt; requires &lt;strong&gt;two consecutive mispredictions&lt;/strong&gt;, which makes the algorithm more robust to occasional deviations.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Where is it used in processors?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Such counters are used in:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branch History Tables (BHT)&lt;/strong&gt; – store prediction states for different branch addresses.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global and local prediction schemes&lt;/strong&gt; (for example, in the &lt;strong&gt;Tournament Predictor&lt;/strong&gt; algorithm in some processors).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Example of work&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Suppose there is a loop that runs 9 times and exits on the 10th iteration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The first iterations&lt;/strong&gt;: the counter starts at &lt;code&gt;00&lt;/code&gt; (strongly not taken). While the loop keeps running, the counter gradually climbs to &lt;code&gt;11&lt;/code&gt; (strongly taken).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The last iteration (loop exit)&lt;/strong&gt;: the branch is not taken, so the counter drops to &lt;code&gt;10&lt;/code&gt; (weakly taken). If the loop never runs again, the counter may eventually fall to &lt;code&gt;00&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;6. Optimizations related to saturating counters&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt;: Works well for &lt;strong&gt;stable branches&lt;/strong&gt; (for example, loops with a large number of iterations).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Problems&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inefficient for random branches&lt;/strong&gt; (if transitions are unpredictable, the counter will fluctuate frequently).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning delay&lt;/strong&gt;: It takes several runs for the counter to "learn" the correct prediction.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Saturating 2-bit counter&lt;/strong&gt; is a simple but effective branch prediction mechanism used in many processors. It provides a balance between stability and adaptability, reducing the number of prediction errors (mispredictions).  &lt;/p&gt;

&lt;h2&gt;
  
  
  Adaptive two-level predictor
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw2kbvoplx11dd4re2e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw2kbvoplx11dd4re2e1.png" alt=" " width="657" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. What is Adaptive Two-Level Predictor?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is an advanced branch prediction algorithm that uses &lt;strong&gt;two levels of information&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The first level&lt;/strong&gt; is the history of previous transitions (branch history).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The second level&lt;/strong&gt; is the pattern history table (PHT), where each history pattern has its own saturating counter.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach allows &lt;strong&gt;to take into account correlations between successive transitions&lt;/strong&gt;, which is especially useful for loops and complex conditional constructions.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. How does Two-Level Predictor work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.1. Main components&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Branch History Register (BHR)&lt;/strong&gt;&lt;br&gt;
A register that stores the most recent branch outcomes (for example, &lt;code&gt;taken = 1&lt;/code&gt;, &lt;code&gt;not taken = 0&lt;/code&gt;).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;BHR = 1011&lt;/code&gt; means that the last 4 branch outcomes were &lt;code&gt;T, NT, T, T&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pattern History Table (PHT)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A table of saturating counters (usually 2-bit), where each counter corresponds to a specific pattern of the BHR.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For example, if &lt;code&gt;BHR = 1011&lt;/code&gt;, the processor looks up the PHT entry for this pattern and uses it for the prediction.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Global/Local History&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Global History (Global Two-Level)&lt;/strong&gt; – one BHR shared by all branches.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local History (Local Two-Level)&lt;/strong&gt; – a separate BHR for each branch address.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.2. Work example&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Suppose we have a loop that runs 3 times and exits on the 4th iteration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Initialization&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;BHR = 0000&lt;/code&gt; (with a 4-bit history).
&lt;/li&gt;
&lt;li&gt;In the PHT, all counters start at &lt;code&gt;00&lt;/code&gt; (strongly not taken).
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;First iterations (branch taken)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;BHR&lt;/code&gt; shifts in a &lt;code&gt;1&lt;/code&gt; each time: &lt;code&gt;0001 → 0011 → 0111 → 1111&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;For each pattern, the PHT counter is updated towards &lt;code&gt;taken&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Last iteration (branch not taken)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;BHR = 1111&lt;/code&gt; → the prediction is &lt;code&gt;taken&lt;/code&gt; (but the branch is actually &lt;code&gt;not taken&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;The counter for &lt;code&gt;1111&lt;/code&gt; is decremented, so the prediction may change next time.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
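&lt;p&gt;The BHR + PHT mechanics above can be sketched as a small GAg-style predictor (Python, illustrative only; the history length and the alternating test pattern are arbitrary choices):&lt;/p&gt;

```python
# GAg-style two-level predictor: a single global Branch History Register
# (BHR) selects one saturating 2-bit counter in a Pattern History Table
# (PHT). Counter values 2 and 3 mean "predict taken".

class TwoLevelPredictor:
    def __init__(self, history_bits=2):
        self.history_bits = history_bits
        self.bhr = 0                          # global history register
        self.pht = [0] * (2 ** history_bits)  # one 2-bit counter per pattern

    def predict(self):
        return self.pht[self.bhr] // 2 == 1

    def update(self, taken):
        ctr = self.pht[self.bhr]
        self.pht[self.bhr] = min(ctr + 1, 3) if taken else max(ctr - 1, 0)
        # Shift the new outcome into the history register.
        self.bhr = (self.bhr * 2 + (1 if taken else 0)) % (2 ** self.history_bits)

# An alternating branch T, NT, T, NT ...: a plain 2-bit counter keeps
# oscillating, but here each history pattern gets its own counter, so
# after a short warm-up every prediction is correct.
p = TwoLevelPredictor(history_bits=2)
hits = 0
for taken in [True, False] * 20:
    if p.predict() == taken:
        hits += 1
    p.update(taken)
print(hits)  # 37 of 40: only the first few warm-up predictions miss
```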




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Why is Two-Level Predictor better than a simple saturating counter?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Takes the context of branches into account&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;A simple 2-bit counter "forgets" previous outcomes, while the Two-Level Predictor remembers &lt;strong&gt;sequences of branch outcomes&lt;/strong&gt; and predicts based on patterns.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effective for loops and periodic branching&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;For example, if a branch alternates &lt;code&gt;T, NT, T, NT&lt;/code&gt;, the BHR helps predict the next outcome.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Varieties of Two-Level Predictors&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;GAg (Global History, Global PHT)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One shared BHR for all branches: saves resources, but is less accurate for individual branches.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PAg (Per-Address History, Global PHT)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A separate BHR for each branch address: more accurate, but requires more memory.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;PAp (Per-Address History, Per-Address PHT)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each branch has its own PHT: maximum accuracy, but the highest hardware cost.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;5. Optimizations and challenges&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5.1. Advantages&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;✅ Better accuracy than saturating counters.&lt;br&gt;&lt;br&gt;
✅ Effective for cycles and repetitive patterns.  &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5.2. Problems&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;❌ &lt;strong&gt;Requires more memory&lt;/strong&gt; (BHR + PHT).&lt;br&gt;&lt;br&gt;
❌ &lt;strong&gt;Learning delay&lt;/strong&gt; (it takes several iterations to set up).&lt;br&gt;&lt;br&gt;
❌ &lt;strong&gt;Conflicts in PHT&lt;/strong&gt; (different branches may use the same counter).  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;6. Where is it used?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intel Pentium Pro/II/III&lt;/strong&gt; – two-level predictors were used.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modern processors&lt;/strong&gt; (AMD Zen, Intel Core) – combine two-level predictors with other methods (for example, &lt;strong&gt;TAGE predictor&lt;/strong&gt;).
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adaptive Two-Level Predictor&lt;/strong&gt; is a powerful branch prediction engine that takes into account the history of transitions to improve accuracy. It is especially useful in loops and complex conditional scenarios, but requires additional hardware resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. The main idea of the Tournament Predictor&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The processor uses &lt;strong&gt;two (or more) different predictors&lt;/strong&gt; and dynamically selects which one is best suited for the current branch.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The main predictor&lt;/strong&gt; (for example, gshare) works well for branches with global correlation.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An alternative predictor&lt;/strong&gt; (for example, bimodal) is effective for simple branches.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta predictor&lt;/strong&gt; (selector) – decides which of the two predictors to use.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. How does the Tournament Predictor work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.1. Structure&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Two predictors&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;P1&lt;/strong&gt; (for example, gshare) – takes the global history into account.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;P2&lt;/strong&gt; (for example, bimodal) – a simple 2-bit counter.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta-predictor table&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Stores &lt;strong&gt;2-bit counters&lt;/strong&gt; that decide which predictor (P1 or P2) to trust.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2.2. The algorithm of operation&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;When predicting a branch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;P1 and P2 each produce a prediction.
&lt;/li&gt;
&lt;li&gt;The meta-predictor chooses which one to use.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the branch resolves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If &lt;strong&gt;the selected predictor was wrong&lt;/strong&gt; and the alternative was right, the meta-counter shifts towards the alternative.
&lt;/li&gt;
&lt;li&gt;If both were right or both were wrong, the meta-counter is left unchanged.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Why is Tournament Predictor effective?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adaptability&lt;/strong&gt;: If one predictor does not work well for a particular branch, the system automatically switches to another.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versatility&lt;/strong&gt;: Copes well with different types of branches (loops, random conditional jumps).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reducing the number of mispredictions&lt;/strong&gt;: The combination of methods gives a more stable result.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Example of work&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Suppose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;P1 (gshare)&lt;/strong&gt; predicts &lt;code&gt;taken&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;P2 (bimodal)&lt;/strong&gt; predicts &lt;code&gt;not taken&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta-predictor&lt;/strong&gt; tends towards P1 (for example, the counter &lt;code&gt;10&lt;/code&gt; = weakly prefer P1).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scenarios:&lt;/strong&gt;  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;If the actual outcome is &lt;code&gt;taken&lt;/code&gt; (P1 is right, P2 is wrong)&lt;/strong&gt; → the meta-predictor strengthens its trust in P1 (&lt;code&gt;10&lt;/code&gt; → &lt;code&gt;11&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If the actual outcome is &lt;code&gt;not taken&lt;/code&gt; (P1 is wrong, P2 is right)&lt;/strong&gt; → the meta-predictor reduces its trust in P1 (&lt;code&gt;10&lt;/code&gt; → &lt;code&gt;01&lt;/code&gt;).
&lt;/li&gt;
&lt;/ol&gt;
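&lt;p&gt;The selection and update logic of these scenarios can be sketched as pure functions (a minimal illustration; a real design indexes a table of meta-counters by branch address rather than using a single value):&lt;/p&gt;

```python
# Tournament selection sketch: a 2-bit meta-counter chooses between
# predictor P1 and P2. Values 0 and 1 prefer P2; values 2 and 3 prefer P1.

def tournament_predict(meta, p1_prediction, p2_prediction):
    """Pick the prediction of whichever predictor the meta-counter trusts."""
    return p1_prediction if meta // 2 == 1 else p2_prediction

def tournament_update(meta, p1_correct, p2_correct):
    """Return the new meta-counter after one resolved branch."""
    if p1_correct and not p2_correct:
        return min(meta + 1, 3)   # shift trust towards P1
    if p2_correct and not p1_correct:
        return max(meta - 1, 0)   # shift trust towards P2
    return meta                   # both right or both wrong: unchanged

# Scenario from the text: meta = 2 ("weakly prefer P1"),
# P1 predicts taken, P2 predicts not taken.
meta = 2
chosen = tournament_predict(meta, True, False)            # follows P1: taken
meta_if_taken = tournament_update(meta, True, False)      # 10 -> 11
meta_if_not_taken = tournament_update(meta, False, True)  # 10 -> 01
print(chosen, meta_if_taken, meta_if_not_taken)
```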




&lt;h2&gt;
  
  
  &lt;strong&gt;5. Where is it used?&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intel Pentium 4 (NetBurst)&lt;/strong&gt; – used Tournament Predictor.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modern processors (AMD Zen, Intel Core)&lt;/strong&gt; – use sophisticated hybrid circuits (for example, &lt;strong&gt;TAGE + Loop Predictor&lt;/strong&gt;).
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;6. Advantages and disadvantages&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;✅ &lt;strong&gt;High accuracy&lt;/strong&gt; due to adaptive selection.&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Versatility&lt;/strong&gt; – Suitable for different branching patterns.&lt;br&gt;&lt;br&gt;
❌ &lt;strong&gt;Complexity&lt;/strong&gt; – requires additional tables and logic.&lt;br&gt;&lt;br&gt;
❌ &lt;strong&gt;Learning delay&lt;/strong&gt; – The meta-predictor needs time to adjust.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tournament Predictor&lt;/strong&gt; is a powerful hybrid algorithm that dynamically selects the better prediction method for each branch. It is especially useful in modern processors, where minimizing mispredictions is critical.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Summary: Branch prediction mechanisms in processors&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Method&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Description&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;How does it work?&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Where is it used?&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Saturating 2-bit Counter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The simplest predictor based on a 2-bit saturation counter.&lt;/td&gt;
&lt;td&gt;- 4 states: &lt;code&gt;00&lt;/code&gt; (strongly NT), &lt;code&gt;01&lt;/code&gt; (weakly NT), &lt;code&gt;10&lt;/code&gt; (weakly T), &lt;code&gt;11&lt;/code&gt; (strongly T).&lt;br&gt;- Increases with &lt;code&gt;taken&lt;/code&gt;, decreases with &lt;code&gt;not taken&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;- Easy to implement.&lt;br&gt;- Resistant to noise (hysteresis).&lt;/td&gt;
&lt;td&gt;- Slowly adapts.&lt;br&gt;- Predicts complex branches poorly.&lt;/td&gt;
&lt;td&gt;A basic component of many predictors.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Two-Level Predictor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Uses the branch history register (BHR) and a pattern history table (PHT).&lt;/td&gt;
&lt;td&gt;- &lt;strong&gt;BHR&lt;/strong&gt; stores the latest branch outcomes.&lt;br&gt;- &lt;strong&gt;PHT&lt;/strong&gt; contains 2-bit counters for each BHR pattern.&lt;/td&gt;
&lt;td&gt;- Takes into account the context (better for loops).&lt;br&gt;- More accurate than a 2-bit counter.&lt;/td&gt;
&lt;td&gt;- Requires more memory.&lt;br&gt;- Conflicts in PHT.&lt;/td&gt;
&lt;td&gt;Intel P6 (Pentium Pro/II/III), AMD K8.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tournament Predictor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hybrid predictor combining two methods (for example, gshare + bimodal).&lt;/td&gt;
&lt;td&gt;- &lt;strong&gt;Meta-predictor&lt;/strong&gt; selects which of the two predictors to use.&lt;br&gt;- Updated based on errors.&lt;/td&gt;
&lt;td&gt;- Adaptability.&lt;br&gt;- High accuracy for different types of branches.&lt;/td&gt;
&lt;td&gt;- Complex logic.&lt;br&gt;- Additional hardware costs.&lt;/td&gt;
&lt;td&gt;Intel Pentium 4, modern hybrid circuits.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Key terms&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BHR (Branch History Register)&lt;/strong&gt; is a register that stores recent branch outcomes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PHT (Pattern History Table)&lt;/strong&gt; is a table of counters corresponding to BHR patterns.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meta-predictor&lt;/strong&gt; is a mechanism for choosing between two predictors in Tournament Predictor.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The evolution of predictors&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Saturating Counter&lt;/strong&gt; → &lt;strong&gt;Two-Level&lt;/strong&gt; → &lt;strong&gt;Tournament&lt;/strong&gt; → &lt;strong&gt;TAGE (modern CPUs)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The more complex the algorithm, the higher the accuracy, but the higher the resource consumption.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>programming</category>
      <category>cpu</category>
    </item>
  </channel>
</rss>
