<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Igor Artamonov</title>
    <description>The latest articles on DEV Community by Igor Artamonov (@splix).</description>
    <link>https://dev.to/splix</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1137537%2F8faaed56-323c-43d2-9dfd-2d3f76ccc529.jpeg</url>
      <title>DEV Community: Igor Artamonov</title>
      <link>https://dev.to/splix</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/splix"/>
    <language>en</language>
    <item>
      <title>Performance of Lock in Coroutines</title>
      <dc:creator>Igor Artamonov</dc:creator>
      <pubDate>Tue, 18 Mar 2025 19:03:27 +0000</pubDate>
      <link>https://dev.to/splix/performance-of-lock-in-coroutines-51bc</link>
      <guid>https://dev.to/splix/performance-of-lock-in-coroutines-51bc</guid>
      <description>&lt;p&gt;Recently, while having a work conversation, I heard a thesis from a colleague that “&lt;em&gt;In Kotlin coroutines you have to use Kotlin’s Mutex to synchronize modification of an object from different threads,&lt;/em&gt;” which seems to be a common misconception, and here I want to explain why it’s not always right. With the numbers.&lt;/p&gt;

&lt;p&gt;So, basically, I see that it’s common to think: “&lt;em&gt;if you use coroutines and inside those coroutines you have a shared object which you modify from multiple coroutines, then to synchronize the access to it you have to use Kotlin’s Mutex&lt;/em&gt;”&lt;/p&gt;

&lt;p&gt;The docs imply that, and they also say that you should never have blocking code in your coroutines. But “blocking code” is a very confusing term. It’s supposed to mean code that keeps a thread occupied doing nothing (i.e., waiting) for a time long enough that something else could have run instead.&lt;/p&gt;

&lt;p&gt;So a lock is definitely blocking code, right? Sometimes. But that doesn’t automatically mean you should avoid it, because the key point is “long enough.” It means the waiting time must exceed the cost of switching the context to another operation, like copying blocks of stack/memory, which in turn pollutes the CPU caches, possibly migrates the work to another core, etc.&lt;/p&gt;

&lt;p&gt;All of those are comparatively slow operations (from about 1 ns to access data in the current CPU’s L1 cache, to hundreds or thousands of nanoseconds to copy memory to another place). Note, though, that this is still about a hundred times faster than an IO operation.&lt;/p&gt;

&lt;p&gt;So the rule of thumb is to use coroutines for IO operations and standard primitives for non-IO operations, unless you have a very specific case. And shared-object access is nothing like that, because system locks are very fast and memory-efficient.&lt;/p&gt;

&lt;p&gt;And, by the way, even switching a coroutine context probably relies on a lock or two somewhere inside the coroutines library, so a JVM lock is most likely unavoidable anyway.&lt;/p&gt;

&lt;p&gt;Okay, enough theory and assumptions. Let’s benchmark it and see if it makes sense.&lt;/p&gt;

&lt;p&gt;I have a very simple idea of a code that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fully utilizes the CPU while executing many short, independent tasks&lt;/li&gt;
&lt;li&gt;does no IO&lt;/li&gt;
&lt;li&gt;has an object that synchronizes state between threads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The “task” is calculating a SHA3 hash of an input composed of the current task index and the thread index. Each task runs in its own thread. It’s a fully deterministic operation that does the exact same amount of work regardless of the order of execution. As the synchronization step, I compare the result to the last known value and keep the lowest one.&lt;/p&gt;

&lt;p&gt;The code for the task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fun cpuTask(i: Int, rail: Int): String {
    val sha3 = SHA3.Digest256()
    return Hex.toHexString(sha3.digest("CPU/$rail/$i".toByteArray()))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Called from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class State {
    var current: String? = null

    fun update(next: String) {
        if (current == null || current!! &amp;gt; next) {
            current = next
        }
    }
}

val state = State()
val lock = ReentrantLock()

runBlocking {
    //
    // run 25K threads (coroutines)
    for (rail in 1..25_000) {
        launch(Dispatchers.Default) {
            var prev: String? = null

            //
            // Execute a CPU bound task 1K times in each thread
            // resulting in ~10 updates to the shared state
            for (i in 1..1000) {
                //
                // Run a task
                val next = cpuTask(i, rail)

                //
                // Check if it's better for the current thread.
                // Only in this example, to avoid too many locks. 
                // Version with "Always" suffix doesn't have this check
                if (prev == null || prev &amp;gt; next) {
                    prev = next

                    //
                    // Update the shared state
                    // Here it's a JVM Lock, but there is a version with Mutex
                    lock.withLock {       
                         state.update(next)
                    }

                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two ways to do the synchronization, though. One is to always compare with the shared value (the benchmarks with the “Always” suffix). The other is a two-step process: first compare with the best value for the current thread, and only that best is compared to the “shared best.” The two-step way dramatically decreases the number of lock acquisitions, and having both lets us compare the effect of the lock itself.&lt;/p&gt;
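&lt;p&gt;For illustration, the “Always” strategy can be sketched like this (a simplified sketch, not the exact benchmark code; the class name is mine):&lt;/p&gt;

```kotlin
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

// "Always" strategy: every candidate value acquires the lock,
// instead of first comparing against a per-thread best.
class AlwaysLockedState {
    private val lock = ReentrantLock()
    var current: String? = null
        private set

    fun update(next: String) {
        lock.withLock {
            val cur = current
            // keep the lowest value seen so far
            if (cur == null || cur > next) {
                current = next
            }
        }
    }
}
```

&lt;p&gt;The two-step version puts a per-thread &lt;code&gt;prev&lt;/code&gt; check in front of this call, so most candidates never reach the lock at all.&lt;/p&gt;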

&lt;p&gt;I implemented the same logic in different ways to compare the performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Coroutines + Java Lock&lt;/li&gt;
&lt;li&gt;Coroutines + Coroutines Mutex&lt;/li&gt;
&lt;li&gt;Plain Java Threads + Java Lock&lt;/li&gt;
&lt;li&gt;Reactor with No Lock&lt;/li&gt;
&lt;li&gt;Plain Java Threads with No Lock&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The last one is the baseline for comparing the others, because it’s expected to be the most efficient implementation.&lt;/p&gt;
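&lt;p&gt;As a rough sketch of the “Coroutines + Coroutines Mutex” variant (simplified: made-up task strings instead of SHA3, and smaller counts), assuming kotlinx.coroutines is on the classpath:&lt;/p&gt;

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Shared state, now guarded by the coroutines Mutex instead of a JVM lock.
class MutexState {
    private val mutex = Mutex()
    var current: String? = null
        private set

    // suspend: a waiting coroutine is parked instead of blocking its thread
    suspend fun update(next: String) {
        mutex.withLock {
            val cur = current
            if (cur == null || cur > next) {
                current = next
            }
        }
    }
}

val state = MutexState()

fun main() = runBlocking {
    for (rail in 1..100) {
        launch(Dispatchers.Default) {
            for (i in 1..100) {
                state.update("task-$rail-$i")
            }
        }
    }
}
```

&lt;p&gt;The only structural difference from the Lock version is that &lt;code&gt;update&lt;/code&gt; became a &lt;code&gt;suspend&lt;/code&gt; function guarded by &lt;code&gt;Mutex.withLock&lt;/code&gt;.&lt;/p&gt;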

&lt;p&gt;I ran this code on the same machine with nothing else running on it, using Ubuntu 22.04 / Intel i7 8 cores / OpenJDK Java 21. I ran 25,000 tasks (with parallelism limited to the CPU count), each doing 1,000 calculations. I ran each variant 12 times in random order, so the benchmark should not be significantly affected by preexisting memory state, JVM warm-up, or similar.&lt;/p&gt;

&lt;p&gt;The first part is the basic benchmarks:&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjn2dfblezwvdba4ns91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjn2dfblezwvdba4ns91.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we see “&lt;em&gt;Threads No Locks&lt;/em&gt;,” our baseline, at 5620 ms.&lt;/p&gt;

&lt;p&gt;The other standard JVM and Reactor-based implementations are not very different from each other. They take about 100 ms longer to execute, which is below the standard deviation, so we can’t really say they are slower, but they are clearly in the same bucket. It’s safe to say that on average all of them have the same performance.&lt;/p&gt;

&lt;p&gt;The code above doesn't do much locking, because it selects the best value per thread before taking the lock. Now, what if we &lt;em&gt;always&lt;/em&gt; lock the state and compare inside?&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mjqawuvf0ih0wxha75d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mjqawuvf0ih0wxha75d.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Examples with the “&lt;em&gt;Always&lt;/em&gt;” suffix demonstrate the effect of the unnecessary locks (i.e., when each value is compared to the global value). Compared to that, the optimal code from the previous screenshot is 25%+ faster.&lt;/p&gt;

&lt;p&gt;And now the Coroutines Mutex instead of JVM Lock:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75ahr4y2xcmey91eoiwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75ahr4y2xcmey91eoiwp.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we see that the Coroutines Mutex doubles the execution time. That’s not just a minor performance issue.&lt;/p&gt;

&lt;p&gt;Note that this is the “optimal” implementation, which doesn’t use the Mutex too often, like the very first example with Lock.&lt;/p&gt;

&lt;p&gt;What if we &lt;em&gt;always&lt;/em&gt; synchronize with the Mutex? Will it be the same 25% loss of performance like with the others?&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtjpi3t4q42dsme860tj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtjpi3t4q42dsme860tj.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Oh, no! It actually makes it 5 times slower than the previous one! Or almost 10 times slower than the JVM Lock.&lt;/p&gt;

&lt;p&gt;This is the cost of using a coroutine primitive for a non-IO operation.&lt;/p&gt;

&lt;p&gt;The final table looks like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m6j6ylre1okxpij12t0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m6j6ylre1okxpij12t0.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use coroutines and related primitives, including Mutex, only for IO-bound tasks;&lt;/li&gt;
&lt;li&gt;Use Mutex if you lock across other coroutines (i.e., you call a suspend function inside the lock), though if you do everything right, you won’t have non-IO coroutines anyway;&lt;/li&gt;
&lt;li&gt;Use standard JVM primitives, including Lock, for CPU/memory-bound tasks with no coroutines inside the lock;&lt;/li&gt;
&lt;li&gt;And, of course, avoid long tasks under a lock, regardless of the type of lock.&lt;/li&gt;
&lt;/ul&gt;
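&lt;p&gt;To illustrate the second point, here is a minimal sketch of a case where Mutex is the right choice, because the critical section itself suspends (the &lt;code&gt;delay&lt;/code&gt; is a stand-in for a real IO call):&lt;/p&gt;

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// When the critical section suspends (e.g., an IO call), a JVM lock would
// pin the thread for the whole wait; the Mutex parks only the coroutine.
val mutex = Mutex()
var cache: String? = null

suspend fun fetchOnce(): String = mutex.withLock {
    if (cache == null) {
        delay(10) // stand-in for a real IO call
        cache = "fetched"
    }
    cache!!
}

fun main() = runBlocking {
    println(fetchOnce())
}
```

&lt;p&gt;With a JVM Lock here, the thread would be pinned for the whole wait, which is exactly the situation where the coroutine primitive earns its cost.&lt;/p&gt;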

</description>
      <category>kotlin</category>
      <category>programming</category>
      <category>java</category>
    </item>
    <item>
      <title>Extending NUC with External U.2 SSD</title>
      <dc:creator>Igor Artamonov</dc:creator>
      <pubDate>Wed, 25 Sep 2024 16:33:58 +0000</pubDate>
      <link>https://dev.to/splix/extending-nuc-with-external-u2-ssd-2e63</link>
      <guid>https://dev.to/splix/extending-nuc-with-external-u2-ssd-2e63</guid>
      <description>&lt;p&gt;Recently, I upgraded an Intel NUC by adding external SSDs as a replacement for the internal M.2 SSDs. Before I started, I wasn’t fully sure if it could be done because there isn’t much information on this topic. So, I’m writing down my experience in the hope that it will help someone if they decide to do the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  First of all, why and what
&lt;/h2&gt;

&lt;p&gt;I have an Intel NUC 12th Enthusiast, which has 3 slots for internal M.2 SSDs. It gives me about 6TB of disk space when I use 3 x 2TB Samsung SSDs. I run an Ethereum node on this machine, which alone consumes more than 4TB, and it’s growing. I wanted to run a couple of different implementations, so I needed much more space for that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qj8xb9n5wgscwkcnqp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qj8xb9n5wgscwkcnqp4.png" alt="Intel NUC 12th Enthusiast" width="631" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Technically, I could upgrade those 2TB SSDs to 4TB versions, but larger SSDs are not so practical because, to my understanding, the quality/price ratio goes down, and a 2TB SSD is the sweet spot for M.2. Also, I have already worn out a couple of those SSDs, so I wanted to install larger and more durable ones, i.e., enterprise-grade SSDs with a 7.68TB capacity, which only come as external drives.&lt;/p&gt;

&lt;p&gt;Here, I describe how I successfully connected two U.2 7.68TB SSDs to an Intel NUC, what the challenges were, and why it may not work for everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hardware
&lt;/h2&gt;

&lt;p&gt;There are a few protocols for drives, and one of them is NVMe, which usually works on top of PCIe. That’s because PCIe is the most performant interface. There are a few ways to connect to PCIe. In addition to the more common large slot on the motherboard (which I don’t have with Intel NUC), there’s the M.2 slot. Yes, M.2 is a PCIe slot, just a very small one. So technically, it should be possible to connect any PCIe device through M.2.&lt;/p&gt;

&lt;p&gt;Enterprise 2.5” NVMe drives may come with a U.2 connection, which is also a PCIe connection. (For reference, the U.2 connector is known as SFF-8639.) So it seems like it should work, because I’m connecting one PCIe device to the motherboard through a PCIe interface.&lt;/p&gt;

&lt;p&gt;But how do you connect a U.2 drive to an M.2 slot?&lt;/p&gt;

&lt;p&gt;Usually, you use a SAS-style cable to connect those drives. The connectors come from the SAS ecosystem, but here the cable simply carries the PCIe lanes, so NVMe works without a performance loss. On the drive side, the cable connects to U.2 directly (SFF-8639) and then goes as mini-SAS (SFF-8643) to the motherboard.&lt;/p&gt;

&lt;p&gt;How do you connect this mini-SAS to a NUC? It turns out you can buy an adapter from mini-SAS to M.2.&lt;/p&gt;

&lt;p&gt;So first, you &lt;strong&gt;buy a SAS cable&lt;/strong&gt; (U.2 to mini-SAS / SFF-8639 to SFF-8643). Any brand should work, I guess.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0byq1g3b3l0ev6gi1ken.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0byq1g3b3l0ev6gi1ken.jpg" alt="SAS to mini-SAS cable" width="679" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, you have to &lt;strong&gt;buy an adapter from mini-SAS to M.2&lt;/strong&gt;. In my case, it was a StarTech M2E4SFF8643 because I didn’t trust no-name brands. But probably any other adapter would work, and there are plenty on Amazon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futrsglax0eejmxgw7q4k.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Futrsglax0eejmxgw7q4k.jpg" alt="mini-SAS to M.2" width="425" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One problem with this StarTech adapter is that it comes as an M.2 2260, and this “xx60” means it’s 60mm long. Usually, you have either 42mm or 80mm drives. Those are the most common sizes, and the NUC has mountings only for those two sizes. So, you may want to &lt;strong&gt;buy an extension&lt;/strong&gt; or DIY one from a plastic piece to mount it in the standard 80mm slot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkt9m22fuzmtca0ym29g8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkt9m22fuzmtca0ym29g8.jpg" alt="M.2 Extender" width="340" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A note about these M.2 adapters: there are apparently many types, not just for SAS. Another option is OCuLink (SFF-8612), which is commonly used to connect an external full-size PCIe x16 device, usually a GPU. You could use it for drives as well, which I guess could scale to more drives, but that would be a more complex approach.&lt;/p&gt;

&lt;p&gt;The third thing you need to &lt;strong&gt;buy is a Power Supply Unit&lt;/strong&gt; (PSU). With SAS cables, you must have an external power source. Standard M.2 is powered directly from the motherboard, but with SAS, this option isn’t available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6707cz98vmi7i6iy76a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6707cz98vmi7i6iy76a.jpg" alt="PSU" width="425" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I forgot initially is that you cannot simply turn on most PSUs because they are designed to be turned on using the power button on the computer case. To turn it on manually, you connect two pins on its main cable. Google "PSU paper clip test" to understand it better. You could use a paper clip or a short wire, or, to make it more durable, you can &lt;strong&gt;buy a jumper clip&lt;/strong&gt; that connects the correct pins. There are plenty of them on Amazon as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cdxifuyao9n6qs37vxd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4cdxifuyao9n6qs37vxd.jpg" alt="Jumper Clip" width="522" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Part
&lt;/h2&gt;

&lt;p&gt;Once I connected everything and turned it on, I noticed a weird issue: I didn’t see the drives in the BIOS, or, weirder, I saw them semi-randomly. On a power reset, I might see one of the drives, both, or none. Initially, I thought I had a hardware issue.&lt;/p&gt;

&lt;p&gt;But when I tried to boot from a USB drive and checked the &lt;code&gt;lsblk&lt;/code&gt;, I found both drives even though I hadn’t seen them in the BIOS. That’s weird, and I still haven’t figured out the reason. Maybe it’s a bug in my BIOS version, or maybe there are some incompatibilities between expected SATA and actual SAS. Maybe it's specific to Intel NUC. &lt;em&gt;BTW, if you have any idea what the cause is, please let me know in the comments.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After a few experiments, I became convinced that they work perfectly fine in Linux. But it still doesn’t make much sense to install an OS on them because BIOS cannot boot from them as it doesn’t see the drives.&lt;/p&gt;

&lt;p&gt;I guess a workaround is resetting the NUC until it sees any of the drives to boot. Inconvenient, but it might work. Or you could install and boot the OS from an external USB flash drive, mounting the main filesystem from those drives once it’s booted. Alternatively, you could do the same using a network boot.&lt;/p&gt;

&lt;p&gt;Remember that I have 3 slots but only 2 drives? So, my solution was to put a standard M.2 SSD in the 3rd slot and install an OS on that drive. When it boots, it successfully mounts the other drives and uses them for larger databases.&lt;/p&gt;

&lt;p&gt;It all works fine now, but in my case, it required 3 M.2 slots, as one was populated with a standard M.2 SSD to boot. Keep this in mind, as it may not work for everyone if you don’t have enough M.2 slots. But for this particular case, with Intel NUC 12, it’s definitely possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;That’s all you need for the hardware part:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;U.2 SSD drive(s)&lt;/li&gt;
&lt;li&gt;SAS U.2 to mini-SAS cable(s)&lt;/li&gt;
&lt;li&gt;Mini-SAS to M.2 adapter(s)&lt;/li&gt;
&lt;li&gt;M.2 2260 to M.2 2280 extender(s)&lt;/li&gt;
&lt;li&gt;PSU&lt;/li&gt;
&lt;li&gt;24-pin jumper&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You need a cable, adapter, and extender for each drive. And, obviously, a single PSU can power them all.&lt;/p&gt;

&lt;p&gt;You may want to leave one M.2 slot for a standard M.2 SSD to install an OS on that drive.&lt;/p&gt;

&lt;p&gt;Also, keep in mind that once you connect everything together, it will be a bunch of cables and hardware. You won’t be able to put the cover back on the NUC case, so it’s going to sit exposed somewhere on the shelf. And you might not realize how hot it gets without proper ventilation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fpnd5pf8xx1w0rea7ue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fpnd5pf8xx1w0rea7ue.png" alt="Caution Hot" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>hardware</category>
      <category>minipc</category>
      <category>homelab</category>
    </item>
    <item>
      <title>The Why Behind the Code</title>
      <dc:creator>Igor Artamonov</dc:creator>
      <pubDate>Tue, 22 Aug 2023 20:25:24 +0000</pubDate>
      <link>https://dev.to/splix/the-why-behind-the-code-2bb1</link>
      <guid>https://dev.to/splix/the-why-behind-the-code-2bb1</guid>
      <description>&lt;p&gt;Do you know why microservices architecture is so popular? Because it’s easier to rewrite the whole service rather than inspect the code to find the bug causing crashes in production. &lt;/p&gt;

&lt;p&gt;That might be a joke, but I’m sure you'll agree that looking through the code to fix a bug or add a feature is something we sometimes prefer to avoid. Especially when we struggle to understand why this code even exists. I mean, it’s easy to understand what the code does, as it’s right there on the screen. But often, we don’t understand why it’s written that way. Is it a workaround for something? Does it fix some problem in a rare case? What happens if I delete this part of the code? And how can we decide?&lt;/p&gt;

&lt;p&gt;People who've worked with me know my strong opinion on making Pull Requests, especially regarding code commit messages. In short, I believe that the commit message must answer the question “&lt;em&gt;Why&lt;/em&gt;” the commit was made, with the understanding that the code answers the “&lt;em&gt;What&lt;/em&gt;” it does. Most developers don’t share this view, and here I’m explaining my stance and trying to persuade you to adopt the same approach.&lt;/p&gt;

&lt;p&gt;In fact, I wasn’t always so meticulous about commit messages. Many years ago, I found it perfectly acceptable to leave just a message like &lt;code&gt;fixing bugs&lt;/code&gt;. And that’s what I typically see when I open someone else's code.&lt;/p&gt;

&lt;p&gt;Here's a list of messages that represent what we usually see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;add CanUpdate to DbConnection interface&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fix for when size is 0&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fix use_react to support absolute paths&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fix web_base.py&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fix show history output&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;fix dependency&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These messages, taken randomly from public GitHub repos, represent common types of commit descriptions. They seem like short summaries of what was done in the commit.&lt;/p&gt;

&lt;p&gt;Such commit messages are like email subjects, which for most people are simply recognizable labels that help distinguish between emails in a list.&lt;/p&gt;

&lt;p&gt;While these commit messages could be used as reference points for standup calls or weekly reports, they have no use after those meetings.&lt;/p&gt;

&lt;p&gt;However, after years in software development, I've learned the hard way when commit messages truly come into play. They are a last-resort tool, something you refer to when worried that a code change might break something else, or perhaps after a change has unexpectedly caused a crash elsewhere. That's when you run a &lt;code&gt;git blame&lt;/code&gt; to understand the reasons behind the existing code. Sometimes, you're the original committer, but you might not recall the reasons for writing the code that particular way.&lt;/p&gt;

&lt;p&gt;Let’s accept that self-explanatory code is a unicorn and code comments are almost never enough to understand it. Moreover, even if you are in that rare situation, with clean code and good comments, none of it can explain a code change as a whole. It cannot explain the relationship between different parts of the code. To link these parts and answer the questions, the commit message should be used. It’s the only medium suited for this purpose.&lt;/p&gt;

&lt;p&gt;Think of the commit message as a medium to communicate with a future developer, or even a future version of yourself. It's a message that can explain multiple changes across several files, providing the rationale for those changes.&lt;/p&gt;

&lt;p&gt;I hope my point is clear and resonates with experienced developers. But how should one write these commit messages?&lt;/p&gt;

&lt;p&gt;For me, any code change is in response to a problem. Every commit (a “&lt;em&gt;solution&lt;/em&gt;”) addresses something specific in mind (a “&lt;em&gt;problem&lt;/em&gt;”). &lt;/p&gt;

&lt;p&gt;Even if you're just updating a dependency, you're likely addressing concerns about outdated, incompatible, buggy, or slow code in the current version. Or you see maintaining code that depends on an outdated library as a problem in itself. The update solves that problem by upgrading to a newer version. If you're refactoring, you likely see issues with the current code's readability, maintainability, flexibility, etc.&lt;/p&gt;

&lt;p&gt;So here's my suggestion: state both the problem and solution in the commit message. Like, &lt;code&gt;Problem: what I wanted to solve; Solution: how I solved it&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;I’m not an inventor of this approach, of course, but an advocate for it.&lt;/p&gt;

&lt;p&gt;When you adopt this approach, you might find that it simplifies drafting commit messages. You start recognizing that you always approached commits as answers to problems but struggled to express both the problem and the solution in a single phrase. It’s okay to use two distinct phrases, and it’s often easier to write them that way.&lt;/p&gt;

&lt;p&gt;Here is an example of what I suggest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Problem: egress/ingress naming for log config is too confusing
Solution: rename to request/access logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach is more informative than a vaguer message like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;renamed egress/ingress logs config to request/access logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the latter case, you might just shorten it to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rename logs config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it stops providing as much information as the original. It also no longer answers &lt;em&gt;Why?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It's very tempting to put just the &lt;em&gt;solution&lt;/em&gt; part, describing &lt;em&gt;what&lt;/em&gt; is inside the commit. I've made this mistake many times, and I often see others make it too. When developers are asked to write commit messages as &lt;em&gt;problem+solution&lt;/em&gt;, they start writing just the &lt;em&gt;solution: what's inside the commit&lt;/em&gt;. That's not helpful!&lt;/p&gt;

&lt;p&gt;I agree, though, that some commits are straightforward enough that their intent is clear. In such cases, I think it's acceptable to omit the obvious part. For example, when upgrading an outdated library, seeing &lt;code&gt;problem: netty 4.1 is outdated&lt;/code&gt; in a &lt;code&gt;git blame&lt;/code&gt; says enough about the solution.&lt;/p&gt;

&lt;p&gt;Like in my commit above regarding config names, the “&lt;em&gt;solution&lt;/em&gt;” part might have been redundant. However, almost 50 files were modified in that commit, so I chose to provide a comprehensive description to manage expectations.&lt;/p&gt;

&lt;p&gt;So what I suggest is adopting this approach. Just think about the problem you solved and what the solution is. There's no need to push very hard and be formal, but always try to answer “&lt;em&gt;Why&lt;/em&gt;” the commit was made, not what is inside it. Put both in the commit message, without trying to save characters. You’ll notice how it improves your process and way of thinking, and it makes the code easier to maintain in the future.&lt;/p&gt;

&lt;p&gt;PS Please share your opinion: &lt;a href="https://twitter.com/splix/status/1694084487221522583"&gt;https://twitter.com/splix/status/1694084487221522583&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
