<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stanislav Kozlovski</title>
    <description>The latest articles on DEV Community by Stanislav Kozlovski (@kozlovski).</description>
    <link>https://dev.to/kozlovski</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F27098%2Fc12e323f-a1de-419d-9ed1-504c6c3b4933.jpeg</url>
      <title>DEV Community: Stanislav Kozlovski</title>
      <link>https://dev.to/kozlovski</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kozlovski"/>
    <language>en</language>
    <item>
      <title>One-shotting a Diskless Kafka in Python</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Wed, 29 Apr 2026 16:19:55 +0000</pubDate>
      <link>https://dev.to/kozlovski/one-shotting-a-diskless-kafka-in-python-4f0m</link>
      <guid>https://dev.to/kozlovski/one-shotting-a-diskless-kafka-in-python-4f0m</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Talk is cheap, show me the code - Linus Torvalds&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In 2026, code is cheap too - design is what matters.&lt;/p&gt;

&lt;p&gt;StreamNative recently open-sourced a &lt;a href="https://github.com/lakestream-io/leaderless-log-protocol/" rel="noopener noreferrer"&gt;formally-verified protocol&lt;/a&gt; for implementing a leaderless log. Their announcement blog sent a message similar to the opening quote (h/t &lt;a href="https://x.com/@sijieg" rel="noopener noreferrer"&gt;@sijieg&lt;/a&gt;) - that in the age of AI coding harnesses, what matters more is the design/protocol of a system rather than its particular implementation.&lt;/p&gt;

&lt;p&gt;I wanted to put that to the test, so I took their protocol, took &lt;a href="https://stanislavkozlovski.medium.com/oxia-a-modern-cloud-native-zookeeper-replacement-d68198a8427c" rel="noopener noreferrer"&gt;a linearizable metadata store&lt;/a&gt; (which the protocol requires) and got cracking:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone git@github.com:oxia-db/oxia-client-python.git
git clone https://github.com/oxia-db/oxia oxia-server
git clone https://github.com/lakestream-io/leaderless-log-protocol

/code/diskless-python-kafka &lt;span class="o"&gt;(&lt;/span&gt;main&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls  

&lt;/span&gt;leaderless-log-protocol oxia-client-python      oxia-server

/code/diskless-python-kafka &lt;span class="o"&gt;(&lt;/span&gt;main&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$ &lt;/span&gt;codex &lt;span class="c"&gt;# the magic begins&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  The One Shot
&lt;/h1&gt;

&lt;p&gt;My prompt was simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Using the Oxia python client (in this folder), and a running Oxia server (again in this folder), please implement a leaderless log protocol python agent for writing data. (only writing. no compaction yet). Use the leaderless-log-protocol spec in the folder here. In particular, the 1-leaderless-log-protocol.md should tell you all you need to know. The 0-coordination-delegated-pattern.md can share info on Oxia/the coordination store. Implement everything in one single file.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This was enough to implement a working leaderless log distributed system (with just its write functionality). Two prompts later, I implemented the read path and the compaction path.&lt;/p&gt;

&lt;p&gt;But it wasn't optimal - the published leaderless log specification only details how to ensure correctness for a single partition. It doesn't detail how to batch many topic partitions into a single mixed WAL S3 object for cost efficiency (what WarpStream and every other Diskless Kafka do).&lt;/p&gt;

&lt;p&gt;Preserving correctness while batching and following the protocol wasn't hard though.&lt;/p&gt;

&lt;p&gt;The core system was implemented, more or less, within one &lt;strong&gt;5-hour usage limit of Codex ($20 plan)&lt;/strong&gt; with gpt-5.4 xhigh.&lt;/p&gt;

&lt;p&gt;I then started spending tokens on "productionizing" it. A load-testing harness, an observability stack and subsequent performance optimizations. This took me around 2-3 days of hacking, and a lot more tokens from parallel Codex sessions.&lt;/p&gt;

&lt;p&gt;Here's how my terminal looked:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ms4jillnd9wxhi2hnif.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ms4jillnd9wxhi2hnif.jpeg" alt="It's important to work on unrelated stuff in parallel so as to limit the eventual merge conflicts." width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's important to work on unrelated stuff in parallel so as to limit the eventual merge conflicts.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Diskless Works
&lt;/h1&gt;

&lt;p&gt;The leaderless log protocol will be familiar to anybody who's read about Diskless Kafka before. The key differentiator from regular Kafka is that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;no leaders exist&lt;/strong&gt;: every broker accepts writes for &lt;strong&gt;every&lt;/strong&gt; partition&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mixed-partition segment files:&lt;/strong&gt; each broker buffers data and then unloads it all in one big fat blob on S3 that contains multi-partition data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;compaction is critical:&lt;/strong&gt; eventually, a compaction process splits that big blob into &lt;strong&gt;per-partition&lt;/strong&gt; blobs optimized for sequential reads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key benefits of this architecture are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;cost&lt;/strong&gt; - it can be &lt;a href="https://topicpartition.io/blog/kip-1150-diskless-topics-in-apache-kafka#the-bottom-line" rel="noopener noreferrer"&gt;90% cheaper&lt;/a&gt; in high throughput situations because no inter-AZ network fees are incurred.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;operational simplicity&lt;/strong&gt; - because brokers are stateless (all data is in S3), they're easier to manage and scale.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's what my write path looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client
  |
  | POST /produce
  v
+---------------------------+
| HTTP broker               |
| topic_partitions[]        |
+---------------------------+
  |
  | aggregate + batch
  | flush at 8 MiB or 500 ms
  v
+---------------------------+
| LeaderlessLogWriter       |
|                           |
+---------------------------+
  |
  | 1) write one shared WAL blob
  +-------------------------------&amp;gt; S3
  |                                 llog/wal-shared/{uuid}
  |
  | 2) for each partition:
  |    reserve offsets + persist sparse-index
  v
+----------------------------------------------+
| Oxia                                         |
| orders[0] offsets 1..2 -&amp;gt; shared WAL object  |
| orders[1] offsets 1..1 -&amp;gt; same WAL object    |
+----------------------------------------------+
  |
  | 3) respond with per-partition offsets/results
  v
client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;An &lt;strong&gt;HTTP Python broker&lt;/strong&gt; accepts incoming POST /produce requests whose payload is a simple JSON map of partition name to a list of records for that partition.&lt;/li&gt;
&lt;li&gt;The broker buffers requests until it either accumulates &lt;strong&gt;8 MiB&lt;/strong&gt; of pending data, or &lt;strong&gt;500 ms&lt;/strong&gt; have elapsed since the first buffered request. When either triggers, it begins to commit the data.&lt;/li&gt;
&lt;li&gt;First, it commits the mixed topic-partition data to S3 in one big 8 MiB blob. The data is durably persisted in S3 at this point - but it doesn't have offsets applied yet.&lt;/li&gt;
&lt;li&gt;Then, for each partition, it goes to Oxia (the distributed key-value metadata store) and persists the offsets there. This "seals" our S3 file as a legitimate part of the log - the metadata now points to it.&lt;/li&gt;
&lt;li&gt;The broker responds to the client's produce request.&lt;/li&gt;
&lt;/ol&gt;
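&lt;p&gt;The buffering in step 2 can be sketched roughly like this - a minimal in-memory illustration of the two flush triggers, not the actual broker code (the &lt;code&gt;ProduceBuffer&lt;/code&gt; name and record shape are hypothetical):&lt;/p&gt;

```python
import time

# Hypothetical sketch of the broker's flush triggers (step 2):
# flush when pending data reaches 8 MiB, or 500 ms after the first request.
MAX_BYTES = 8 * 1024 * 1024
MAX_AGE_S = 0.5

class ProduceBuffer:
    def __init__(self):
        self.pending = []        # list of (partition, record_bytes) tuples
        self.pending_bytes = 0
        self.first_ts = None     # time of the first buffered request

    def add(self, partition, record_bytes):
        if self.first_ts is None:
            self.first_ts = time.monotonic()
        self.pending.append((partition, record_bytes))
        self.pending_bytes += len(record_bytes)

    def should_flush(self):
        if not self.pending:
            return False
        age = time.monotonic() - self.first_ts
        return self.pending_bytes >= MAX_BYTES or age >= MAX_AGE_S

    def drain(self):
        # hand back the mixed multi-partition batch destined for one WAL blob
        batch, self.pending = self.pending, []
        self.pending_bytes, self.first_ts = 0, None
        return batch
```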

&lt;p&gt;Step 4) is more complex than it looks, and is critical in ensuring safety of the distributed protocol. Let me expand on it:&lt;/p&gt;

&lt;h2&gt;
  
  
  The Oxia Offset Commit
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Oxia&lt;/strong&gt; is the distributed strongly-consistent key-value store we chose as our metadata store (&lt;a href="https://stanislavkozlovski.medium.com/oxia-a-modern-cloud-native-zookeeper-replacement-d68198a8427c" rel="noopener noreferrer"&gt;article here&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The offset assignment in Oxia consists of multiple steps. A single &lt;code&gt;meta/control&lt;/code&gt; key (per partition) acts as the centralized sequencer -- it says what the latest offset is.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;meta/control&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"log_state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"OPEN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sequence_counter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pending"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When a writer goes to commit a new batch of offsets for a partition (after the mixed multi-partition S3 blob has been persisted), it increments the offset counter AND populates the &lt;code&gt;pending&lt;/code&gt; field to reference the latest mixed S3 blob that holds those records:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"log_state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"OPEN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sequence_counter"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;73&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"pending"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"start_offset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;48&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"end_offset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;72&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"msg_count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"data_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3://bucket/llog/wal-shared/abc123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is done with a &lt;a href="https://en.wikipedia.org/wiki/Compare-and-swap" rel="noopener noreferrer"&gt;Compare-and-Swap (CAS)&lt;/a&gt; write to Oxia.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Oxia &lt;a href="https://oxia-db.github.io/docs/features/versioning" rel="noopener noreferrer"&gt;assigns versions for EVERY write operation&lt;/a&gt;, which lets you achieve &lt;strong&gt;strongly-consistent&lt;/strong&gt; conditional updates via compare and swap operations.&lt;/p&gt;
&lt;/blockquote&gt;
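&lt;p&gt;The CAS step can be illustrated against a toy versioned store. To be clear, &lt;code&gt;VersionedStore&lt;/code&gt; here is a stand-in for Oxia's versioned writes, not its actual Python client API:&lt;/p&gt;

```python
# Toy versioned KV store standing in for Oxia (NOT the real client API).
class VersionedStore:
    def __init__(self):
        self.data = {}  # key maps to (value, version)

    def get(self, key):
        return self.data.get(key, (None, -1))

    def put_if_version(self, key, value, expected_version):
        _, version = self.get(key)
        if version != expected_version:
            return False  # someone else won the race; caller retries
        self.data[key] = (value, version + 1)
        return True

def commit_offsets(store, key, msg_count, data_key):
    """CAS the per-partition sequencer: bump the counter, record pending."""
    while True:
        control, version = store.get(key)
        if control["pending"] is not None:
            raise RuntimeError("previous writer crashed; recovery needed first")
        start = control["sequence_counter"]
        new = {
            "log_state": "OPEN",
            "sequence_counter": start + msg_count,
            "pending": {
                "start_offset": start,
                "end_offset": start + msg_count - 1,
                "msg_count": msg_count,
                "data_key": data_key,
            },
        }
        # only succeeds if nobody touched meta/control since we read it
        if store.put_if_version(key, new, version):
            return new["pending"]
```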

&lt;p&gt;The next step for that writer is to move the pending data to the &lt;code&gt;index/&lt;/code&gt; key hierarchy in Oxia (for that partition). That is where the definitive [record-offset -&amp;gt; S3] data location mapping is stored. An entry in that key space looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;key:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/index/&lt;/span&gt;&lt;span class="mi"&gt;00000000000000000072&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;hint:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;00000000000000000072&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;end&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;offset&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"WAL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"msg_count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3://bucket/llog/wal-shared/blob-c"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"encoding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bytes-batch-v1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"byte_offset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"byte_length"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;12000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"created_at_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1760000002000&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;orders/partitions/0&lt;/code&gt; - denotes partition-0 of the orders topic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;00000000000000000072&lt;/code&gt; - the numeric suffix of the key, is the END offset of the records in that index entry&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;data_key&lt;/code&gt; - denotes the full S3 path for that blob file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;byte_offset/byte_length&lt;/code&gt; - denotes the exact location &lt;strong&gt;inside&lt;/strong&gt; the S3 blob file where the records are consecutively laid out. Since a read may only want a single record from that blob file, it would be inefficient to have it read the whole blob to get the record. Instead, this mapping allows for &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/range-get-olap.html" rel="noopener noreferrer"&gt;byte-ranged GETs&lt;/a&gt; to S3 that download those particular records and not a byte more.&lt;/li&gt;
&lt;/ul&gt;
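&lt;p&gt;For example, the index entry above translates into an HTTP Range header of &lt;code&gt;bytes=2048-14047&lt;/code&gt; (the range is inclusive on both ends). A sketch, assuming boto3 - the actual call is shown as a comment since it needs live credentials and a real bucket:&lt;/p&gt;

```python
def s3_byte_range(byte_offset, byte_length):
    """Build the inclusive HTTP Range header for a byte-ranged S3 GET."""
    return "bytes={}-{}".format(byte_offset, byte_offset + byte_length - 1)

# With boto3 this would be used roughly as:
#   s3 = boto3.client("s3")
#   resp = s3.get_object(
#       Bucket="bucket",
#       Key="llog/wal-shared/blob-c",
#       Range=s3_byte_range(2048, 12000),  # downloads exactly those 12000 bytes
#   )
```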

&lt;p&gt;After it's written there, the &lt;code&gt;pending&lt;/code&gt; field of &lt;code&gt;meta/control&lt;/code&gt; gets deleted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Offset Summary
&lt;/h2&gt;

&lt;p&gt;So again, the path is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;write the index entry into &lt;code&gt;meta/control.pending&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;write the index entry into &lt;code&gt;index/{END_OFFSET}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;delete the &lt;code&gt;pending&lt;/code&gt; field of &lt;code&gt;meta/control&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These 3 steps are not atomic. The writer process can fail in the middle of any step.&lt;/p&gt;

&lt;p&gt;The key safety property which guarantees the data stays consistent is the following - a writer NEVER overwrites &lt;code&gt;meta/control.pending&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It only writes into it if it's empty (which we can guarantee via the CAS write).&lt;/p&gt;

&lt;p&gt;If it is NOT empty, that implies that a previous writer process failed to complete the steps. The new writer takes up this responsibility and &lt;a href="https://github.com/lakestream-io/leaderless-log-protocol/blob/7567a40ff918d9a04321fd7421acef227d3a3f39/examples/s3-queue/impl/src/coordination/atomic_increment.rs#L44-L47" rel="noopener noreferrer"&gt;performs steps [2, 3] itself before it writes its own index entry&lt;/a&gt;.&lt;/p&gt;
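&lt;p&gt;The recovery rule can be sketched over a plain dict (again a stand-in for the per-partition Oxia keys, not the real API): a new writer first completes any leftover pending entry, then runs steps 1-3 for its own write.&lt;/p&gt;

```python
# Plain-dict stand-in for the per-partition Oxia keys (NOT the real API).
def recover_if_pending(store):
    """If a previous writer died mid-commit, finish its steps 2 and 3."""
    pending = store["meta/control"]["pending"]
    if pending is not None:
        store["index/%020d" % pending["end_offset"]] = pending  # step 2
        store["meta/control"]["pending"] = None                 # step 3

def commit(store, entry):
    recover_if_pending(store)                              # take over crashed work
    store["meta/control"]["pending"] = entry               # step 1
    store["index/%020d" % entry["end_offset"]] = entry     # step 2
    store["meta/control"]["pending"] = None                # step 3
```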

&lt;h1&gt;
  
  
  The Read Path
&lt;/h1&gt;

&lt;p&gt;Now that we have our files stored in S3 and our metadata stored in Oxia, reads can be performed from literally any broker. Our brokers are completely stateless.&lt;/p&gt;

&lt;p&gt;When a broker receives a request to fetch starting offset &lt;code&gt;40&lt;/code&gt; from partition &lt;code&gt;0&lt;/code&gt; of topic &lt;code&gt;orders&lt;/code&gt;, it deterministically knows that the place to figure out which S3 file stores that data is somewhere in Oxia under the key space of &lt;code&gt;llog/orders/partitions/0/index/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;But which exact key is it? As you may have noticed, our indexing is sparse.&lt;/p&gt;

&lt;p&gt;Assuming our batch size is 50 records per index (i.e. the mixed S3 blob had each partition store 50 records in it), Oxia may hold two index keys (per partition) for a hundred records. In this example, they would denote two end offsets - 50 and 100:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/index/&lt;/span&gt;&lt;span class="mi"&gt;00000000000000000050&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;S&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;S&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;byte&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;etc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/index/&lt;/span&gt;&lt;span class="mi"&gt;00000000000000000100&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;S&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;S&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;byte&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;offset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;etc&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now assume a Fetch request comes in for offsets 40-60 (desiring data from both index entries).&lt;/p&gt;

&lt;p&gt;The reader issues a so-called &lt;a href="https://oxia-db.github.io/oxia-client-java/apidocs/latest/io/oxia/client/api/options/GetOption.html#ComparisonCeiling" rel="noopener noreferrer"&gt;Ceiling Get&lt;/a&gt; to Oxia. This gets the key-value entry whose key is the &lt;strong&gt;lowest one&lt;/strong&gt; that is &lt;strong&gt;above or equal&lt;/strong&gt; to the supplied parameter. In other words:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;ceiling_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; 50
&lt;/span&gt;&lt;span class="nf"&gt;ceiling_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; 50
&lt;/span&gt;&lt;span class="nf"&gt;ceiling_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; 50
&lt;/span&gt;&lt;span class="nf"&gt;ceiling_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;51&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; 100
&lt;/span&gt;&lt;span class="nf"&gt;ceiling_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; 100
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡(remember this behavior because it's critical to how compaction works)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because all keys hold end offsets, our reader - wanting offsets 40-60 - issues &lt;code&gt;ceiling_get(40)&lt;/code&gt; and knows that the entry it receives - end offset 50 - holds at least &lt;strong&gt;some&lt;/strong&gt; of the records it wants. When it realizes that entry ends at record 50, it issues a ceiling get of 51 and gets the next index entry - 100.&lt;/p&gt;

&lt;p&gt;Knowing both S3 file locations, the reader performs &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/range-get-olap.html" rel="noopener noreferrer"&gt;byte-ranged GETs&lt;/a&gt; to fetch that data.&lt;/p&gt;
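&lt;p&gt;The walk over the sparse index can be sketched with &lt;code&gt;bisect&lt;/code&gt;, which gives exactly the ceiling-get behavior over a sorted list of end offsets (an in-memory illustration - the real thing is a key lookup in Oxia, not a local list):&lt;/p&gt;

```python
import bisect

def ceiling_get(end_offsets, offset):
    """Lowest end offset above or equal to `offset` (None past the last one)."""
    i = bisect.bisect_left(end_offsets, offset)
    return end_offsets[i] if i != len(end_offsets) else None

def index_entries_for(end_offsets, start, end):
    """Walk the sparse index, collecting every entry covering offsets start..end."""
    entries, cursor = [], start
    while end >= cursor:
        hit = ceiling_get(end_offsets, cursor)
        if hit is None:
            break  # requested range runs past the last committed offset
        entries.append(hit)
        cursor = hit + 1  # first offset not covered by the entry we just took
    return entries
```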

&lt;p&gt;Easy peasy!&lt;/p&gt;

&lt;h1&gt;
  
  
  Compaction
&lt;/h1&gt;

&lt;p&gt;Last but definitely not least - compaction. If you haven't yet noticed, this data model can result in pretty slow and expensive reads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Oxia will accumulate &lt;strong&gt;a lot&lt;/strong&gt; of index keys&lt;/li&gt;
&lt;li&gt;S3 will accumulate &lt;strong&gt;a lot&lt;/strong&gt; of small files&lt;/li&gt;
&lt;li&gt;Readers who want a lot of consecutive record data need to scan multiple Oxia keys and read from multiple S3 files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just to crunch some numbers - assume our cluster has 10 brokers, assume we persist two WAL blobs a second per broker (the default 500ms per batch), and assume a mixed WAL blob has just ~20 partitions' worth of data -- that's:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;34,560,000 sparse index key entries a day&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;1,728,000 S3 files a day&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each partition would have 1,728,000 index key entries per day alone. Assuming each partition in a mixed WAL blob has ~200 records in it, each index entry itself would also just point to 200 records.&lt;/p&gt;

&lt;p&gt;If we could compact each S3 file to instead store, say, 100,000 records per partition and each index entry to denote 5000 records, we'd go down to a more manageable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3456 S3 files per partition a day&lt;/li&gt;
&lt;li&gt;69,120 index entries per partition a day&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;69,120&lt;/strong&gt; S3 files a day&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1,382,400&lt;/strong&gt; sparse index key entries a day&lt;/li&gt;
&lt;/ul&gt;
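&lt;p&gt;Here's the arithmetic behind those figures, using the stated assumptions (10 brokers, 2 WAL blobs per second each, ~20 partitions and ~200 records per partition per blob):&lt;/p&gt;

```python
# Assumptions from the text above.
BROKERS, BLOBS_PER_SEC, PARTITIONS_PER_BLOB = 10, 2, 20
RECORDS_PER_PARTITION_PER_BLOB = 200
SECONDS_PER_DAY = 86_400

# Before compaction:
wal_blobs_per_day = BROKERS * BLOBS_PER_SEC * SECONDS_PER_DAY    # 1,728,000 S3 files
index_entries_per_day = wal_blobs_per_day * PARTITIONS_PER_BLOB  # 34,560,000 keys

# After compaction: 100,000 records per S3 file, 5,000 records per index entry.
records_per_partition_per_day = wal_blobs_per_day * RECORDS_PER_PARTITION_PER_BLOB
files_per_partition = records_per_partition_per_day // 100_000   # 3,456
entries_per_partition = records_per_partition_per_day // 5_000   # 69,120
total_files = files_per_partition * PARTITIONS_PER_BLOB          # 69,120
total_entries = entries_per_partition * PARTITIONS_PER_BLOB      # 1,382,400
```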

&lt;p&gt;So how can we do that?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compaction Path
&lt;/h2&gt;

&lt;p&gt;The Compactor is a separate service that reads and mutates Oxia/S3. It never needs to talk to the brokers that serve reads/writes, because compaction is asynchronous and locking is guaranteed through Oxia. The compactor is therefore free to scale separately without interfering with the brokers.&lt;/p&gt;

&lt;p&gt;The Compactor works on one partition at a time. To ensure other compactors don't step on each other, it claims a so-called &lt;a href="https://oxia-db.github.io/docs/features/ephemerals" rel="noopener noreferrer"&gt;Ephemeral Record&lt;/a&gt; in Oxia - this acts as a lightweight distributed lock.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/meta/compactor-claim&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"compactor_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"compactor-1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"claimed_at_ms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1760000010000&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 An ephemeral record is one whose lifecycle is tied to a particular client. It stays alive as long as the client heartbeats. If the client dies, the record is deleted by Oxia.&lt;/p&gt;
&lt;/blockquote&gt;
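&lt;p&gt;The claim semantics can be modeled as a lease that survives only while its owner heartbeats. This &lt;code&gt;EphemeralLock&lt;/code&gt; is a toy model of the idea - in reality Oxia handles the expiry itself via client sessions:&lt;/p&gt;

```python
import time

# Toy lease-based lock modeling Oxia's ephemeral-record semantics:
# the record survives only while its owner keeps heartbeating.
class EphemeralLock:
    def __init__(self, ttl_s):
        self.ttl_s = ttl_s
        self.owner, self.deadline = None, 0.0

    def try_claim(self, compactor_id, now=None):
        now = time.monotonic() if now is None else now
        if self.owner is not None and self.deadline > now:
            return self.owner == compactor_id  # lease is still held
        self.owner, self.deadline = compactor_id, now + self.ttl_s
        return True

    def heartbeat(self, compactor_id, now=None):
        now = time.monotonic() if now is None else now
        if self.owner == compactor_id:
            self.deadline = now + self.ttl_s  # keep the claim alive
```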

&lt;p&gt;The Compactor keeps a compaction cursor per partition, denoting up to what offset it has compacted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/meta/compaction-cursor&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"offset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;🤫 This single offset implies we do a one-pass compaction only, which can be inefficient. A better implementation would support multiple passes of compaction, creating ever-larger files with each pass (up to a limit).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Starting from the last compacted offset, it starts reading &lt;code&gt;/index&lt;/code&gt; entries for that partition and its record data from S3. It groups up many such records into a newly-created single partition-exclusive blob file and uploads it to S3.&lt;/p&gt;
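&lt;p&gt;Collapsing many small WAL-backed entries into one compacted entry can be sketched as follows (the shapes mirror the index entries shown earlier; the &lt;code&gt;merge_index_entries&lt;/code&gt; helper is hypothetical, not the actual compactor code):&lt;/p&gt;

```python
def merge_index_entries(entries, compacted_data_key, start_offset):
    """Collapse consecutive WAL index entries into one COMPACTED entry.

    `entries` are dicts like {"end_offset": ..., "msg_count": ...},
    sorted by end offset and contiguous starting at `start_offset`.
    """
    return {
        "type": "COMPACTED",
        "start_offset": start_offset,
        "end_offset": entries[-1]["end_offset"],
        "msg_count": sum(e["msg_count"] for e in entries),
        "data_key": compacted_data_key,  # the new partition-exclusive S3 blob
    }
```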

&lt;p&gt;It then creates a single &lt;code&gt;/compaction&lt;/code&gt; key entry in Oxia to persist its progress:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/meta/compaction&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"WRITING_COMPACTED_INDEX"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start_offset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"end_offset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3://leaderless-log-wal/llog/orders/partitions/0/data/compacted/8b8e9c9df7d94d5f8f2b7b6d3e6a1234"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;^&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;newly-compacted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;S&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;file&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;meta/compaction&lt;/code&gt; key acts as the single source of truth for the current ongoing compaction: if the key holds data, a compaction is in progress; if it's empty, no compaction is happening right now.&lt;/p&gt;

&lt;p&gt;At this point, we've compacted the data into a new read-optimized file in S3.&lt;/p&gt;

&lt;p&gt;The next step is to overwrite the metadata - our &lt;code&gt;/index&lt;/code&gt; entries. Those still point to the old mixed S3 blobs when they should actually point to the new compacted file.&lt;/p&gt;

&lt;p&gt;Instead of naively overwriting every index key entry at this stage, the protocol &lt;strong&gt;only overwrites the max end offset index entry&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/index/&lt;/span&gt;&lt;span class="mi"&gt;00000000000000000100&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"COMPACTED"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"data_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3://leaderless-log-wal/llog/orders/partitions/0/data/compacted/8b8e9c9df7d94d5f8f2b7b6d3e6a1234"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;^&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;newly-compacted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;S&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;file&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rest of the index entries will be deleted.&lt;/p&gt;

&lt;p&gt;Remember - readers issue Ceiling GETs to find the index entry whose end offset covers their requested offset -- and our many index entries just got merged into one big entry. So naturally, we're left with one big (compacted) index entry whose end offset is the largest offset it contains.&lt;/p&gt;
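&lt;p&gt;To illustrate why keeping only that one entry works, here's a toy sketch (my own, not from the repo) that mimics the Ceiling GET with a sorted list:&lt;/p&gt;

```python
from bisect import bisect_left

# Toy index: maps an entry's end offset to the S3 file holding its records.
def ceiling_get(index: dict[int, str], offset: int) -> str:
    """Find the first index entry whose end offset is >= `offset`
    (a stand-in for Oxia's Ceiling GET; real keys are zero-padded strings)."""
    keys = sorted(index)
    i = bisect_left(keys, offset)
    if i == len(keys):
        raise KeyError(offset)
    return index[keys[i]]

# Before compaction: three mixed WAL files, keyed by their end offsets.
before = {10: "wal/a", 50: "wal/b", 100: "wal/c"}
# After compaction: offsets 1..100 merged into one entry at the max end offset.
after = {100: "compacted/8b8e"}

assert ceiling_get(before, 42) == "wal/b"
assert ceiling_get(after, 42) == "compacted/8b8e"  # the same lookup still resolves
```

&lt;p&gt;A reader asking for offset 42 still finds a covering entry after compaction - it just points to the new file.&lt;/p&gt;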

&lt;p&gt;Before they get deleted, the state update has to be persisted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/meta/compaction&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DELETING_OLD"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💭 It's important to durably persist progress. Were the compaction node to die, the failover to a new compactor would be faster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The compactor then deletes all the old index entries for that partition from Oxia.&lt;/p&gt;

&lt;p&gt;Once the old index entries are deleted, the compaction state is advanced again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/meta/compaction&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"UPDATING_CURSOR"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the compaction cursor is updated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/meta/compaction-cursor&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"offset"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;101&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then the &lt;code&gt;meta/compaction&lt;/code&gt; record is deleted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llog/orders/partitions/&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;/meta/compaction&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;NULL&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
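&lt;p&gt;Condensed, the whole sequence looks like a tiny state machine. This is a hedged sketch of my own - &lt;code&gt;kv&lt;/code&gt; is a plain dict standing in for Oxia, and the helper names are illustrative, not the repo's:&lt;/p&gt;

```python
# Sketch of the compactor's state progression; `kv` is a dict standing in for Oxia.
COMPACTION = "llog/orders/partitions/0/meta/compaction"
CURSOR = "llog/orders/partitions/0/meta/compaction-cursor"

def run_compaction(kv: dict, start: int, end: int, data_key: str) -> None:
    # 1. Persist intent: a compaction is now ongoing.
    kv[COMPACTION] = {"state": "WRITING_COMPACTED_INDEX",
                      "start_offset": start, "end_offset": end,
                      "data_key": data_key}
    # 2. Overwrite only the max-end-offset index entry to point at the new file.
    kv[f"llog/orders/partitions/0/index/{end:020d}"] = {
        "type": "COMPACTED", "data_key": data_key}
    # 3. Persist progress, then delete the stale index entries.
    kv[COMPACTION]["state"] = "DELETING_OLD"
    for off in range(start, end):
        kv.pop(f"llog/orders/partitions/0/index/{off:020d}", None)
    # 4. Advance the cursor past the compacted range...
    kv[COMPACTION]["state"] = "UPDATING_CURSOR"
    kv[CURSOR] = {"offset": end + 1}
    # 5. ...and clear the compaction key: no compaction is ongoing anymore.
    del kv[COMPACTION]

kv = {f"llog/orders/partitions/0/index/{o:020d}": {"type": "WAL"}
      for o in (10, 50, 100)}
run_compaction(kv, 1, 100, "s3://bucket/compacted/8b8e")
assert COMPACTION not in kv
assert kv[CURSOR] == {"offset": 101}
```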






&lt;h1&gt;
  
  
  The Golden Age of Programming 💛
&lt;/h1&gt;

&lt;p&gt;The funny thing is that I did not come up with these paths, nor did I implement them.&lt;/p&gt;

&lt;p&gt;I retroactively learned how it works in detail.&lt;/p&gt;

&lt;p&gt;I pointed my agent to the &lt;a href="https://github.com/lakestream-io/leaderless-log-protocol/blob/main/1-leaderless-log-protocol.md" rel="noopener noreferrer"&gt;battle-tested, formally-verified protocol&lt;/a&gt; shared by &lt;a href="https://streamnative.io/" rel="noopener noreferrer"&gt;StreamNative&lt;/a&gt;, and it implemented everything without burdening me with complex distributed-systems problems.&lt;/p&gt;

&lt;p&gt;It was the subsequent prompts, asking it to explain things, that helped me learn.&lt;/p&gt;

&lt;p&gt;It is extremely fun to toy around with AI coding when you know what you're doing. The key thing is to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;have a strong foundation&lt;/strong&gt; in the domain you're working on -- in this case, understand distributed systems at some decent level&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;have enough experience&lt;/strong&gt; so as to have proper intuition on where the AI may have screwed up or done something inefficient&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 The most fun I had was during our iteration over the system's performance. I was aiming to hit a simple 32 MB/s write rate on a single broker. I couldn't.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, I simply didn't have enough clients sending enough data to reach 32 MB/s per broker (duh...). So I added more (192). Throughput didn't budge but latency grew (285ms → 2074ms). Hm...&lt;/li&gt;
&lt;li&gt;Second, I thought we were overloading Oxia with too many requests. Since the number of Oxia operations scales with the number of partitions (around 3-5 ops/s per partition), I figured 128 partitions (up to 910 ops/s) was a tad too much -&amp;gt; &lt;strong&gt;lowered partitions to 32&lt;/strong&gt;. Got some improvement, esp. around latency. (2.6 MB/s -&amp;gt; 4.24 MB/s &lt;strong&gt;(up 61%)&lt;/strong&gt;; 1997ms -&amp;gt; 786ms &lt;strong&gt;(down 61%)&lt;/strong&gt;); Still low though. Can't be it.&lt;/li&gt;
&lt;li&gt;Oxia exhibited decent latency (max ~5ms per op), so it didn't make sense that the metadata step was slow. &lt;strong&gt;The issue was dumber than I thought&lt;/strong&gt;. Thanks to Python &amp;amp; the AI, the Oxia metadata requests were all SERIAL - the code would send hundreds of requests one after another, always waiting for the previous one to finish. Parallelisation fixed that: ~7.41 MB/s and 523ms - good progress. The bottleneck moved to the client.&lt;/li&gt;
&lt;li&gt;Increased the number of HTTP clients again. The way the test was structured, each client would send at most one request at a time. With the given latency per request, and the size of the request - 192 requests in flight weren't enough to reach the target throughput. Increased it to 512. Much higher throughput! (18MB/s, up 162%). But latency also went up - &lt;strong&gt;890ms&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Another dumb server bottleneck - lock contention. The path that checks if a partition exists was using &lt;strong&gt;the same lock&lt;/strong&gt; as the write lock, meaning each request was blocked on the one currently writing. That made no sense. Splitting it out into its own lock finally gave us a real perf boost - 28MB/s and 181ms (yes, latency went down &lt;strong&gt;80%&lt;/strong&gt;). That particular stage (locking) had been taking 532ms... we got it down to &lt;strong&gt;0.09ms&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
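&lt;p&gt;The serial-vs-parallel fix from step 3 is easy to reproduce in miniature. Below, &lt;code&gt;oxia_put&lt;/code&gt; is a hypothetical stand-in that simulates one ~10ms metadata round trip:&lt;/p&gt;

```python
import asyncio
import time

async def oxia_put(key: str) -> None:
    await asyncio.sleep(0.01)  # stand-in for one ~10ms Oxia metadata round trip

async def serial(keys):
    for key in keys:           # each request waits for the previous one to finish
        await oxia_put(key)

async def parallel(keys):
    await asyncio.gather(*(oxia_put(k) for k in keys))  # all requests in flight at once

keys = [f"partition-{p}" for p in range(32)]

t0 = time.perf_counter()
asyncio.run(serial(keys))
t_serial = time.perf_counter() - t0      # ~32 * 10ms

t0 = time.perf_counter()
asyncio.run(parallel(keys))
t_parallel = time.perf_counter() - t0    # ~10ms total

assert t_parallel < t_serial / 4
```

&lt;p&gt;Same number of requests, same per-request latency - the only difference is how many are in flight at once.&lt;/p&gt;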

&lt;blockquote&gt;
&lt;p&gt;📱 All these steps were done &lt;strong&gt;through my phone&lt;/strong&gt; in a park. 🌲 When you've got the testing harness right (export results in agent-readable JSON) and you've got a decent intuition of where the system may be slow -- querying the agent is a piece of cake.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having the AI automate all these tedious and ultra-boring processes was a godsend. I could get 100x more done in a day than I would have pre-AI.&lt;/p&gt;

&lt;p&gt;Through this AI coding exercise, I also found &lt;a href="https://github.com/oxia-db/oxia/pull/1021" rel="noopener noreferrer"&gt;a small shard placement bug&lt;/a&gt; in Oxia that I fixed, and &lt;a href="https://github.com/oxia-db/oxia-client-python/pull/6" rel="noopener noreferrer"&gt;a feature gap&lt;/a&gt; in the Python client that also got fixed.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Results
&lt;/h1&gt;

&lt;p&gt;Testing this on real S3 and EC2, I got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100 MB/s writes&lt;/li&gt;
&lt;li&gt;100 MB/s reads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydhb3y1zcl0kokdh69iz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydhb3y1zcl0kokdh69iz.jpeg" alt=" " width="800" height="106"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;the cluster-wide data in and data out throughput rates&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;inside a single EC2 instance running 5 brokers, Oxia &amp;amp; compactors.&lt;/p&gt;

&lt;p&gt;All for less than &lt;strong&gt;$0.60/hour&lt;/strong&gt; of S3 API costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaw19re0i1f4kjux5ryv.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaw19re0i1f4kjux5ryv.jpeg" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The cost deflation of this architecture is &lt;strong&gt;real&lt;/strong&gt;. The equivalent would have cost at least &lt;strong&gt;$16.4/hour&lt;/strong&gt; of cross-AZ network costs in AWS.&lt;/p&gt;

&lt;p&gt;But it doesn't come entirely for free. Hitting the real S3 meant much higher latencies than what the local MinIO gave me:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqhtel6x5a4nykr38oe2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqhtel6x5a4nykr38oe2.jpeg" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2vxjwn5uijzl74acmeo.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2vxjwn5uijzl74acmeo.jpeg" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Average writes for 10MB objects took ~200ms, whereas the p99 went up into the multi-second range.&lt;/p&gt;

&lt;p&gt;And herein lies the big tradeoff that this leaderless log architecture brings - higher &lt;strong&gt;end-to-end&lt;/strong&gt; latency.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;end-to-end latency&lt;/strong&gt; - the time from the moment an event is published by a producer application to the moment it is read by a consumer application. This is the latency metric Kafka users care about; the rest is marketing fluff.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With this type of diskless, leaderless architecture, you inevitably incur significantly higher latency than regular Kafka (20-30x). In order of significance, these steps account for most of it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;S3 PUTs - &lt;strong&gt;200-2500ms&lt;/strong&gt;; S3 Standard simply isn't designed for consistent low latency. Using S3 Express is more complex and incurs a ton more costs&lt;/li&gt;
&lt;li&gt;Batching - &lt;strong&gt;100-500ms&lt;/strong&gt;; In order to save on S3 API costs and keep that &lt;strong&gt;$0.60/hour&lt;/strong&gt; run rate, you have to send fewer PUT requests. The only way to do that is to batch the data. This helps reduce the number of small files too&lt;/li&gt;
&lt;li&gt;Metadata Store - &lt;strong&gt;10-150ms&lt;/strong&gt;; The metadata store can become a hot component as it's literally in every critical path of the system (write, read, compact)&lt;/li&gt;
&lt;/ol&gt;
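&lt;p&gt;Point 2 is the classic cost/latency dial. A minimal sketch of such a batcher (my own illustration - the thresholds and names are made up):&lt;/p&gt;

```python
import time

class Batcher:
    """Accumulate records until a size or age threshold is hit, then flush
    them as one PUT. Fewer PUTs = lower S3 API cost, at the price of latency."""
    def __init__(self, max_bytes: int = 8 * 1024 * 1024, max_age_s: float = 0.25):
        self.max_bytes, self.max_age_s = max_bytes, max_age_s
        self.buf, self.size, self.opened = [], 0, None
        self.puts = 0  # counts simulated S3 PUT requests

    def append(self, record: bytes) -> None:
        if self.opened is None:
            self.opened = time.monotonic()
        self.buf.append(record)
        self.size += len(record)
        if self.size >= self.max_bytes or time.monotonic() - self.opened >= self.max_age_s:
            self.flush()

    def flush(self) -> None:
        if self.buf:
            self.puts += 1  # one S3 PUT for the whole batch
            self.buf, self.size, self.opened = [], 0, None

b = Batcher(max_bytes=1000)
for _ in range(100):
    b.append(b"x" * 100)  # 100 records of 100 bytes each
b.flush()
assert b.puts == 10       # 100 records collapsed into 10 PUTs
```

&lt;p&gt;Larger batches mean fewer PUTs, but each record can wait up to &lt;code&gt;max_age_s&lt;/code&gt; before it's durable - exactly the latency listed above.&lt;/p&gt;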

&lt;p&gt;Frankly, it is impossible to get consistently-low, &amp;lt;100ms e2e latency with this architecture.&lt;/p&gt;

&lt;p&gt;This is why I believe the future is in the engines that support both types of topics - the classically-replicated-on-disk Kafka topics and the new diskless variant:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71eryuvjfdm9dvy75uba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71eryuvjfdm9dvy75uba.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;An overview (as of April 2026) of which engines support different topic profiles. Coming soon to open source Apache Kafka too.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  👋 Parting Words
&lt;/h2&gt;

&lt;p&gt;Thanks to StreamNative for publishing the leaderless log protocol. It does not give you the full diskless Kafka secret sauce, as key things need to be implemented on top of it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;batch writes/reads&lt;/li&gt;
&lt;li&gt;caching for reads&lt;/li&gt;
&lt;li&gt;garbage collection of the mixed S3 log segments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9hf5ubnkpud7vugnfwk.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9hf5ubnkpud7vugnfwk.jpeg" alt=" " width="800" height="71"&gt;&lt;/a&gt;&lt;br&gt;
my manual GC results (deleted the whole bucket)&lt;/p&gt;

&lt;p&gt;But those are implementation details that are solvable - not correctness constraints. The core distributed system protocol is there for any motivated engineer (or AI agent) to see and build on top of.&lt;/p&gt;

&lt;p&gt;I'm sure I could iterate on it and do a lot more, but this is where I'm officially closing the token gate and concluding this experiment. If you want to continue, the repo is &lt;a href="https://github.com/stanislavkozlovski/diskless-kafka-in-python" rel="noopener noreferrer"&gt;diskless-kafka-in-python&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And if you found this article informative, share it with your network. 🌞&lt;/p&gt;

&lt;p&gt;Thanks for reading. ~Stan&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>kafka</category>
      <category>openai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Kotlin - Unit Testing Classes Without Leaking Public API!</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Sat, 05 Dec 2020 08:34:23 +0000</pubDate>
      <link>https://dev.to/kozlovski/kotlin-unit-testing-classes-without-leaking-public-api-5dp9</link>
      <guid>https://dev.to/kozlovski/kotlin-unit-testing-classes-without-leaking-public-api-5dp9</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Kotlin is an amazing new up-and-coming language. It's &lt;a href="https://blog.jetbrains.com/kotlin/category/releases/" rel="noopener noreferrer"&gt;very actively developed&lt;/a&gt; and has a ton of features that make it very appealing.&lt;br&gt;
It's been steadily gaining market share ever since Google &lt;a href="https://techcrunch.com/2017/05/17/google-makes-kotlin-a-first-class-language-for-writing-android-apps/" rel="noopener noreferrer"&gt;added Android Development support for it&lt;/a&gt; (back in 2017) and &lt;a href="https://techcrunch.com/2019/05/07/kotlin-is-now-googles-preferred-language-for-android-app-development/" rel="noopener noreferrer"&gt;made it the preferred language for such development&lt;/a&gt; exactly one year ago (May 2019)&lt;/p&gt;

&lt;p&gt;For any Java developer coming into the language, spoiler alert, there is one big surprise that awaits you --- the &lt;code&gt;package-private&lt;/code&gt; visibility modifier is missing. It doesn't exist.&lt;/p&gt;
&lt;h1&gt;
  
  
  Access Modifiers
&lt;/h1&gt;

&lt;p&gt;An &lt;em&gt;access modifier&lt;/em&gt; is the way to set the accessibility of a class/method/variable in object-oriented languages. It's instrumental in facilitating &lt;a href="https://medium.com/free-code-camp/a-short-overview-of-object-oriented-software-design-c7aa0a622c83" rel="noopener noreferrer"&gt;proper encapsulation of components&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Java
&lt;/h2&gt;

&lt;p&gt;In Java, we have four access modifiers for any methods inside a class:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;void doSomething()&lt;/code&gt; --- the default modifier, which we call package-private. Any such declarations are only visible within the same package (hence the name - private to the package)&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;private void doSomething()&lt;/code&gt; --- the private modifier, declarations of which are visible only within the same class&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;protected void doSomething()&lt;/code&gt; --- declarations only visible within the package and in all subclasses.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;public void doSomething()&lt;/code&gt; --- declarations are visible everywhere.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This wide choice of options allows us to tightly control what our class exposes. It also gives us greater control over what constitutes a unit of testable code and what doesn't. See the following crude example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;doer.something&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * Does something.
 */&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;JSomethingDoer&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;motivationLevel&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="cm"&gt;/**
   * @param motivationLevel a 0-100 integer, denoting how motivated the #{@link JSomethingDoer} is.
   */&lt;/span&gt;
  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;JSomethingDoer&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;motivationLevel&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;motivationLevel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;motivationLevel&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;maybeDoSecondLittleThing&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Doing one thing."&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// package-private for testing&lt;/span&gt;
  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;maybeDoSecondLittleThing&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;feelingAmbitious&lt;/span&gt;&lt;span class="o"&gt;())&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;doSecondLittleThing&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;doSecondLittleThing&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"We're overachievers, doing a second thing!!!"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="nf"&gt;feelingAmbitious&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;motivationLevel&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can only directly test three of the five methods above. Because &lt;code&gt;doSecondLittleThing()&lt;/code&gt; and &lt;code&gt;feelingAmbitious()&lt;/code&gt; are private, our unit tests literally cannot call those methods, hence we cannot test them directly.&lt;/p&gt;

&lt;p&gt;The only way to test them is through the package-private method &lt;code&gt;maybeDoSecondLittleThing()&lt;/code&gt; , which can be called in our unit tests because they're in the same package as the class they're testing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;doer.something&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.junit.Before&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;org.junit.Test&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;JSomethingDoerTest&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;JSomethingDoer&lt;/span&gt; &lt;span class="n"&gt;doer&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

  &lt;span class="nd"&gt;@Before&lt;/span&gt;
  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;setUp&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;JSomethingDoer&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;11&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="nd"&gt;@Test&lt;/span&gt;
  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Test_doSomething&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;doSomething&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="c1"&gt;// assert&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="nd"&gt;@Test&lt;/span&gt;
  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Test_doOneLittleThing&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="c1"&gt;// assert&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="nd"&gt;@Test&lt;/span&gt;
  &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;Test_maybeDoSecondLittleThing&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;maybeDoSecondLittleThing&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="c1"&gt;// assert&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The package-private modifier comes in very handy in this example, because it allows us to test the &lt;code&gt;maybeDoSecondLittleThing()&lt;/code&gt; method directly!&lt;br&gt;
Testing would be more cumbersome if we had to exercise all code paths of &lt;code&gt;maybeDoSecondLittleThing()&lt;/code&gt; by calling the public &lt;code&gt;doSomething()&lt;/code&gt; method.&lt;/p&gt;

&lt;p&gt;The package-private modifier also ensures that we don't leak any such internal methods to packages outside of this one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpu6947srdf61w4rel4ha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpu6947srdf61w4rel4ha.png" alt="Java IDE Example" width="702" height="286"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Notice the different package &lt;code&gt;doer&lt;/code&gt; in the Main class.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Kotlin
&lt;/h2&gt;

&lt;p&gt;Kotlin is a bit weirder in this regard. It also has four modifiers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;private fun doSomething()&lt;/code&gt; --- vanilla private modifier&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;protected fun doSomething()&lt;/code&gt; --- similar to Java's protected, but without the package visibility --- only visible within this class and its subclasses.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;public fun doSomething()&lt;/code&gt; --- visible everywhere&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These three are pretty standard. I want to focus on the fourth one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;internal fun doSomething()&lt;/code&gt; --- Internal declarations are visible anywhere inside &lt;em&gt;the same module&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is as good as public inside the same module, in that sense. Many small projects have no need for multiple modules, but they're likely to have multiple packages.&lt;/p&gt;

&lt;p&gt;There is basically no way to limit access to a Kotlin method to only be within the same package.&lt;/p&gt;

&lt;p&gt;What does this mean? It means that we either need to make our methods private, thus making our testing life harder, or make them public, thus exposing unnecessary API to components in the same module.&lt;/p&gt;

&lt;p&gt;Our same example from above, again in the &lt;code&gt;doer.something&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="nn"&gt;doer.something&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * Does something.
 * @param motivationLevel a 0-100 integer, denoting how motivated the #[SomethingDoer] is.
 */&lt;/span&gt;
&lt;span class="k"&gt;open&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SomethingDoer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;motivationLevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;maybeDoSecondLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Doing one thing."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;maybeDoSecondLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;feelingAmbitious&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;doSecondLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;doSecondLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"We're overachievers, doing a second thing!!!"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;feelingAmbitious&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;motivationLevel&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;90&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But this time, from the &lt;code&gt;doer&lt;/code&gt; package, we can call every method:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx9livjk8obum3yj925hv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx9livjk8obum3yj925hv.png" alt="IDE Kotlin Example" width="605" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  How do we solve it?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Solution 1 --- Refactor!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9hw2gmbot9yrsuqxfln6.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9hw2gmbot9yrsuqxfln6.jpeg" alt="Us refactoring our code" width="800" height="1067"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems to be &lt;a href="https://stackoverflow.com/questions/34571/how-do-i-test-a-private-function-or-a-class-that-has-private-methods-fields-or" rel="noopener noreferrer"&gt;commonly believed&lt;/a&gt; that we shouldn't test methods that are/should be private. The idea being that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  all private methods are reachable by the public methods&lt;/li&gt;
&lt;li&gt;  the public methods expose the interface/contract of a class&lt;/li&gt;
&lt;li&gt;  private methods are implementation details, and we want to test the class' functionality, not implementation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While that makes sense in theory, reality is not as black-and-white.&lt;/p&gt;

&lt;p&gt;Regardless, you should treat any need to test non-public methods as a code smell.&lt;br&gt;
Re-evaluate your design and consider refactoring the logic so that you get the same test coverage by testing public interface contracts rather than implementation details.&lt;/p&gt;

&lt;p&gt;In some cases, though, it is cleaner and more maintainable to get full test coverage by testing each tiny private method thoroughly, rather than exercising every conditional branch through the public methods or creating unnecessary boilerplate (e.g. additional classes).&lt;/p&gt;
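&lt;p&gt;To make the refactoring option concrete, here is one hypothetical way the running example could be reshaped: pull the decision logic behind &lt;code&gt;feelingAmbitious&lt;/code&gt; into a tiny collaborator with a public contract, so tests exercise an interface instead of an implementation detail (the &lt;code&gt;Ambition&lt;/code&gt; class is made up for this sketch):&lt;/p&gt;

```kotlin
// The threshold check becomes the public contract of a tiny class,
// testable without reaching into SomethingDoer's internals.
class Ambition(private val motivationLevel: Int) {
    fun feelingAmbitious(): Boolean = motivationLevel > 90
}

class SomethingDoer(private val ambition: Ambition) {
    fun doSomething() {
        println("Doing one thing.")
        if (ambition.feelingAmbitious()) {
            println("We're overachievers, doing a second thing!!!")
        }
    }
}

fun main() {
    // The previously-private branch condition is now directly assertable
    println(Ambition(99).feelingAmbitious())
    println(Ambition(90).feelingAmbitious())
}
```

&lt;p&gt;The trade-off is exactly the boilerplate mentioned above: one more class to name, wire and maintain.&lt;/p&gt;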
&lt;h2&gt;
  
  
  Solution 2 --- Reflection
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foxdmfkppf6kqn3df6gfs.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foxdmfkppf6kqn3df6gfs.jpeg" alt="Reflection" width="800" height="1000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An alternative solution is to keep the methods &lt;code&gt;private&lt;/code&gt; and use the language's reflection features to call into said private methods.&lt;/p&gt;

&lt;p&gt;Add a dependency on the &lt;code&gt;kotlin-reflect&lt;/code&gt; library and code away!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;  &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;callPrivate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;objectInstance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methodName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;vararg&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;?):&lt;/span&gt; &lt;span class="nc"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;privateMethod&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;KFunction&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;?&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt;
        &lt;span class="n"&gt;objectInstance&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;members&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="nd"&gt;@find&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;methodName&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nc"&gt;KFunction&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;?&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;argList&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toMutableList&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;argList&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nc"&gt;ArrayList&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;objectInstance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;argArr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;argList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toArray&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;privateMethod&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="n"&gt;javaMethod&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;trySetAccessible&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;!!&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;privateMethod&lt;/span&gt;&lt;span class="o"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;apply&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(*&lt;/span&gt;&lt;span class="n"&gt;argArr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;NoSuchMethodException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Method $methodName does not exist in ${objectInstance::class.qualifiedName}"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nc"&gt;IllegalAccessException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Method $methodName could not be made accessible"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Woah.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx96nezs3swf8amucziik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fx96nezs3swf8amucziik.png" alt="Code and tests" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, our &lt;code&gt;callPrivate()&lt;/code&gt; test helper is the star of the show! It allows us to call into private methods, albeit in a funky way.&lt;br&gt;
We have to refer to the method by its name as a string, so we lose all compile-time type checking. The result is fragile code that is also hard to read.&lt;/p&gt;
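&lt;p&gt;For what it's worth, the same stringly-typed lookup can be done with plain Java reflection, without the &lt;code&gt;kotlin-reflect&lt;/code&gt; dependency --- a sketch reusing the example's names (and sharing all the same fragility):&lt;/p&gt;

```kotlin
class SomethingDoer(private val motivationLevel: Int) {
    private fun feelingAmbitious(): Boolean = motivationLevel > 90
}

fun main() {
    val doer = SomethingDoer(99)
    // Look the private method up by name and force it accessible ---
    // a typo in the string only blows up at runtime
    val method = SomethingDoer::class.java.getDeclaredMethod("feelingAmbitious")
    method.isAccessible = true
    println(method.invoke(doer))
}
```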

&lt;p&gt;We can leverage a &lt;a href="https://docs.oracle.com/javase/8/docs/jdk/api/javac/tree/com/sun/source/util/Plugin.html" rel="noopener noreferrer"&gt;compiler plugin&lt;/a&gt;, like &lt;a href="https://github.com/manifold-systems/manifold" rel="noopener noreferrer"&gt;Manifold&lt;/a&gt;, to get type-safe meta-programming. But one could argue that takes it a bit too far, as it introduces a large dependency that does a lot more meta-programming behind the scenes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution 3 --- Open the class, protect the methods and test a wrapper!
&lt;/h2&gt;

&lt;p&gt;A third, perhaps preferable way to solve this is with a bit of subclassing work.&lt;/p&gt;

&lt;p&gt;First, we need to make our class &lt;a href="https://blog.mindorks.com/understanding-open-keyword-in-kotlin" rel="noopener noreferrer"&gt;open for extension&lt;/a&gt; via the &lt;code&gt;open&lt;/code&gt; keyword. Then, we need to make our private methods &lt;code&gt;protected&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;open&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SomethingDoer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;motivationLevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;maybeDoSecondLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;protected&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Doing one thing."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we can create a subclass inside the test class whose sole goal is to expose the protected method directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;SomethingDoerTest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TestSomethingDoer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;motivationLevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;SomethingDoer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;motivationLevel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;testDoOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;doOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;lateinit&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="py"&gt;doer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;SomethingDoer&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;lateinit&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="py"&gt;doerWrapper&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;TestSomethingDoer&lt;/span&gt;

  &lt;span class="nd"&gt;@Before&lt;/span&gt;
  &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;setUp&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SomethingDoer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;doerWrapper&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TestSomethingDoer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;99&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nd"&gt;@Test&lt;/span&gt;
  &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;doSomething&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="c1"&gt;// assert&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nd"&gt;@Test&lt;/span&gt;
  &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;Test_doOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;doerWrapper&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;testDoOneLittleThing&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="c1"&gt;// assert&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is yet another imperfect solution: it adds boilerplate and leaves the method not entirely private.&lt;/p&gt;

&lt;p&gt;It is better than the reflection approach in that it is type-safe, and therefore less prone to breakage and significantly easier to maintain.&lt;/p&gt;
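&lt;p&gt;If creating a named wrapper per class feels heavy, the same trick works inline with an object expression --- a sketch under the same assumptions (the class must be &lt;code&gt;open&lt;/code&gt; and the method &lt;code&gt;protected&lt;/code&gt;):&lt;/p&gt;

```kotlin
open class SomethingDoer(private val motivationLevel: Int) {
    protected fun doOneLittleThing() {
        println("Doing one thing.")
    }
}

fun main() {
    // An anonymous subclass that exposes the protected member for the test's
    // benefit; a local val keeps the anonymous type, so the extra member is callable
    val testDoer = object : SomethingDoer(99) {
        fun testDoOneLittleThing() = doOneLittleThing()
    }
    testDoer.testDoOneLittleThing()
}
```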

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;In this short article we went over the access modifiers of both Java and Kotlin and saw where Kotlin falls short. We proposed some solutions to Kotlin's lack of a package-private modifier in relation to testing, all with sufficient code examples!&lt;/p&gt;

&lt;p&gt;Speaking of which, the code for this article lives on GitHub --- &lt;a href="https://github.com/stanislavkozlovski/unit-test-kt" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;This problem has been hit by many, unsurprisingly. It makes sense given that most people come from the same background --- we miss our good old Java features 😭&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Kotlin Discussion Forum --- &lt;a href="https://discuss.kotlinlang.org/t/kotlin-to-support-package-protected-visibility/1544" rel="noopener noreferrer"&gt;https://discuss.kotlinlang.org/t/kotlin-to-support-package-protected-visibility/1544&lt;/a&gt;; &lt;a href="https://discuss.kotlinlang.org/t/another-call-for-package-private-visibility/9577/3" rel="noopener noreferrer"&gt;https://discuss.kotlinlang.org/t/another-call-for-package-private-visibility/9577&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  KT Issue on the matter --- &lt;a href="https://youtrack.jetbrains.com/issue/KT-29227" rel="noopener noreferrer"&gt;https://youtrack.jetbrains.com/issue/KT-29227&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  Another take on Medium --- &lt;a href="https://medium.com/virtuslab/on-the-missing-package-private-or-why-java-is-better-than-kotlin-in-this-regard-4a1c9ecbe40c" rel="noopener noreferrer"&gt;https://medium.com/virtuslab/on-the-missing-package-private-or-why-java-is-better-than-kotlin-in-this-regard-4a1c9ecbe40c&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kotlin</category>
      <category>jvm</category>
      <category>java</category>
      <category>testing</category>
    </item>
    <item>
      <title>Working With Multithreaded Ruby Part II</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Mon, 16 Oct 2017 07:19:09 +0000</pubDate>
      <link>https://dev.to/kozlovski/working-with-multithreaded-ruby-part-ii-5e3</link>
      <guid>https://dev.to/kozlovski/working-with-multithreaded-ruby-part-ii-5e3</guid>
      <description>&lt;p&gt;We're back at it with another edition of Multithreaded Ruby where we'll continue to dive into concurrency using our beloved language!&lt;br&gt;
Today, I'm going to introduce you to a famous multi-process synchronization problem called the &lt;a href="https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem" rel="noopener noreferrer"&gt;Producer-Consumer&lt;/a&gt; problem and we're going to look at Ruby's &lt;code&gt;ConditionVariable&lt;/code&gt; class.&lt;/p&gt;
&lt;h1&gt;
  
  
  Back to Deadlock
&lt;/h1&gt;

&lt;p&gt;A paragraph into the new article and we're at deadlocks again? Well yes, they're pretty prevalent and we did not actually touch on a solution to the problem last time.&lt;br&gt;
Let's bring back the deadlock example we used in &lt;a href="https://dev.to/enether/working-with-multithreaded-ruby-part-i-cj3"&gt;Part I&lt;/a&gt;, modified just a tiny bit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="n"&gt;item_accessories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="n"&gt;item_acc_lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;item_lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;item_acc_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# pretend to work on item_accessories&lt;/span&gt;
    &lt;span class="n"&gt;item_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;# pretend to work on item&lt;/span&gt;
      &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Worked on accessories, then on item'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;item_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# pretend to work on item&lt;/span&gt;
    &lt;span class="n"&gt;item_acc_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;# pretend to work on item_accessories&lt;/span&gt;
      &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Worked on item, then on accessories'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby item_worker.rb
/Users/enether/.rvm/rubies/ruby-2.4.1/lib/ruby/2.4.0/thwait.rb:112:in &lt;span class="sb"&gt;`&lt;/span&gt;pop&lt;span class="s1"&gt;': No live threads left. Deadlock? (fatal)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No surprise here: thread &lt;code&gt;a&lt;/code&gt; takes hold of &lt;code&gt;item_acc_lock&lt;/code&gt;, thread &lt;code&gt;b&lt;/code&gt; takes hold of &lt;code&gt;item_lock&lt;/code&gt;, and each then blocks forever, waiting for the lock the other holds. So how could we avoid this?&lt;br&gt;
What if we had a way to temporarily release one of the locks at a specific point in our program where we could afford to do so? That way, the other thread could take the lock, do its thing and give it back for the original thread to finish its work.&lt;/p&gt;
&lt;h1&gt;
  
  
  Enter ConditionVariable
&lt;/h1&gt;

&lt;p&gt;ConditionVariable is a Ruby class which lets you block a thread until another thread signals that it is OK to continue. It is a way to say: "I can give up my lock at this exact point --- wake me up when you are done with it." It is an ideal way to synchronize our &lt;code&gt;a&lt;/code&gt; and &lt;code&gt;b&lt;/code&gt; threads here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="n"&gt;item_accessories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="n"&gt;item_acc_lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;item_lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;cv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ConditionVariable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;item_acc_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# pretend to work on item_accessories&lt;/span&gt;
    &lt;span class="c1"&gt;# At this point, we've just finished work on item_accessories and we're at a window where we&lt;/span&gt;
    &lt;span class="c1"&gt;#   might not care if item_accessories changes. So: let somebody else take it and give it back&lt;/span&gt;
    &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item_acc_lock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Temporarily sleeps the thread and releases the lock&lt;/span&gt;
    &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Gained back access to item_acc_lock'&lt;/span&gt;  &lt;span class="c1"&gt;# on this line, item_acc_lock is re-acquired &lt;/span&gt;
    &lt;span class="n"&gt;item_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;# pretend to work on item&lt;/span&gt;
      &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Worked on accessories, then on item'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;item_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# pretend to work on item&lt;/span&gt;
    &lt;span class="n"&gt;item_acc_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;# pretend to work on item_accessories&lt;/span&gt;
      &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Worked on item, then on accessories'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;signal&lt;/span&gt;
    &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"I'm still working, but I'm finished with item_acc_lock"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby synchronized_item_worker.rb
Worked on item, &lt;span class="k"&gt;then &lt;/span&gt;on accessories
I&lt;span class="s1"&gt;'m still working, but I'&lt;/span&gt;m finished with item_acc_lock
Gained back access to item_acc_lock
Worked on accessories, &lt;span class="k"&gt;then &lt;/span&gt;on item
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What we achieved here is a sort of synchronization: we can now be sure that the &lt;code&gt;b&lt;/code&gt; thread will always reach its &lt;code&gt;cv.signal&lt;/code&gt; line before the &lt;code&gt;a&lt;/code&gt; thread starts working on &lt;code&gt;item_accessories&lt;/code&gt;.&lt;br&gt;
Here is a picture visualizing the process:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FYYeNI3X.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FYYeNI3X.png" width="800" height="584"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;You might want to open this in another tab - &lt;a href="http://bit.ly/2xG4HZz" rel="noopener noreferrer"&gt;High-Resolution&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It is worth noting that the &lt;code&gt;ConditionVariable#signal&lt;/code&gt; method wakes up only one of the threads waiting on the variable. This means that if we have two threads waiting on a &lt;code&gt;ConditionVariable&lt;/code&gt; and its &lt;code&gt;signal&lt;/code&gt; method is called only once, the thread that is not woken up will wait on the &lt;code&gt;ConditionVariable&lt;/code&gt; forever, resulting in a deadlock.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;cv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ConditionVariable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;signal&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby cv_pitfall.rb
/Users/enether/.rvm/rubies/ruby-2.4.1/lib/ruby/2.4.0/thwait.rb:112:in &lt;span class="sb"&gt;`&lt;/span&gt;pop&lt;span class="s1"&gt;': No live threads left. Deadlock? (fatal)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In such a scenario, you need to either call &lt;code&gt;signal&lt;/code&gt; as many times as there are waiting threads or use another method - &lt;code&gt;ConditionVariable#broadcast&lt;/code&gt; - which wakes up every thread that is waiting on the condition variable.&lt;/p&gt;
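&lt;p&gt;To make the difference concrete, here is a small sketch (not from the original examples) that mirrors the deadlocking snippet above but uses &lt;code&gt;broadcast&lt;/code&gt; - both waiters wake up and the program exits cleanly:&lt;/p&gt;

```ruby
lock = Mutex.new
cv = ConditionVariable.new
waiters = []

# Two threads park themselves on the same condition variable.
2.times do
  waiters.push(Thread.new {
    lock.synchronize { cv.wait(lock) }
  })
end

waker = Thread.new {
  sleep 1  # crude, but gives both waiters ample time to start waiting
  # Unlike signal, broadcast wakes *every* thread waiting on cv,
  # so neither waiter is left behind to deadlock.
  lock.synchronize { cv.broadcast }
}

(waiters + [waker]).each(&:join)
puts 'Both waiters woke up'
```

&lt;p&gt;(&lt;code&gt;Thread#join&lt;/code&gt; is used here instead of &lt;code&gt;ThWait&lt;/code&gt; purely to keep the snippet dependency-free; &lt;code&gt;ThWait.all_waits&lt;/code&gt; would behave the same.)&lt;/p&gt;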

&lt;h1&gt;
  
  
  Producer-Consumer
&lt;/h1&gt;

&lt;p&gt;The producer-consumer problem involves at least two threads: one representing a &lt;em&gt;producer&lt;/em&gt; and one representing a &lt;em&gt;consumer&lt;/em&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Producer - Sole job is to create an item and put it into the &lt;em&gt;buffer&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Consumer - Sole job is to take the item from the &lt;em&gt;buffer&lt;/em&gt; and process it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a sample implementation in Ruby, where the &lt;code&gt;tasks&lt;/code&gt; array acts as the buffer, with an artificial limit of at most 2 items in it at once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;mutex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="c1"&gt;# producer&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
          &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"Task :)"&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;


&lt;span class="c1"&gt;# consumer&lt;/span&gt;
&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;nil&lt;/span&gt;
      &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
          &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shift&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;nil?&lt;/span&gt;
        &lt;span class="mi"&gt;100000&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
          &lt;span class="c1"&gt;# Simulating task execution's CPU work&lt;/span&gt;
          &lt;span class="c1"&gt;# also doing it outside the mutex so we don't block the tasks array (other producer might want to take a task as well)&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Producer-Consumer problem&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As the name implies, something is not quite right with the code above. Finding problems in concurrent code is hard, so I'm going to give you a couple of minutes to figure out what is wrong.&lt;/p&gt;

&lt;p&gt;...&lt;br&gt;
...&lt;br&gt;
...&lt;/p&gt;

&lt;p&gt;Okay, if you managed to figure it out - great, if not - don't fret, multithreaded programming is unintuitive.&lt;br&gt;
The problem with the code above is that it &lt;strong&gt;wastes time&lt;/strong&gt;. You see, since we can't control when the OS switches threads or which thread it switches to, there is the possibility of leaving a consumer thread and entering a producer's when there is no reason to. &lt;br&gt;
Imagine all our consumer threads are currently executing a task and our &lt;code&gt;tasks&lt;/code&gt; array is full &lt;em&gt;(has two elements)&lt;/em&gt;. If the OS then context-switches to a producer thread, that thread would only &lt;strong&gt;waste time&lt;/strong&gt;: since the &lt;code&gt;tasks&lt;/code&gt; array is full, the producer would loop merely to check that the array is full, doing nothing else and stealing precious CPU time from our consumer threads.&lt;/p&gt;

&lt;p&gt;Let's track exactly how many useless iterations this thing does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;mutex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="n"&gt;to_exit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;false&lt;/span&gt;
&lt;span class="n"&gt;times_tasks_added&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="n"&gt;times_time_wasted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="n"&gt;executed_tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

&lt;span class="c1"&gt;# consumer&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;to_exit&lt;/span&gt;  &lt;span class="c1"&gt;# a way to stop execution&lt;/span&gt;

      &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
          &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"Task :)"&lt;/span&gt;
          &lt;span class="n"&gt;times_tasks_added&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;
          &lt;span class="c1"&gt;# time here is absolutely wasted&lt;/span&gt;
          &lt;span class="n"&gt;times_time_wasted&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="c1"&gt;# producer&lt;/span&gt;
&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;to_exit&lt;/span&gt;  &lt;span class="c1"&gt;# a way to stop execution&lt;/span&gt;
      &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;nil&lt;/span&gt;
      &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
          &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shift&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;nil?&lt;/span&gt;
        &lt;span class="mi"&gt;100000&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
          &lt;span class="c1"&gt;# Simulating CPU work&lt;/span&gt;
          &lt;span class="c1"&gt;# also doing it outside the mutex so we don't block the tasks array (other producer might want to take a task as well)&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
        &lt;span class="n"&gt;executed_tasks&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;executed_tasks&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;  &lt;span class="c1"&gt;# don't loop forever&lt;/span&gt;
          &lt;span class="n"&gt;to_exit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Total tasks added: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;times_tasks_added&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Total times we branched out into the useless else statement: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;times_time_wasted&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here are the results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby squander_of_time.rb
Total tasks added: 102
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 1633
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby squander_of_time.rb
Total tasks added: 102
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 848282
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby squander_of_time.rb
Total tasks added: 102
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 356418
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We know that thread context-switching is non-deterministic, and these results further prove it. Sometimes we do as few as 1633 useless executions of the &lt;code&gt;else&lt;/code&gt; branch, and sometimes as many as &lt;strong&gt;848k&lt;/strong&gt; - over 8000 times more than the roughly 100 iterations that actually added a task!&lt;br&gt;
This is bad because it will cause your program to run slower at some times and behave seemingly normally at others.&lt;br&gt;
It's worth noting that I personally found it hard to spot the problem in this code even though it was written specifically to illustrate it. Imagine how hard it would be to spot such a thing in an established codebase!&lt;/p&gt;

&lt;p&gt;This slow-down is not acceptable, so let's fix it. Thinking it through, the issue boils down to producer threads waking up when there is nothing useful for them to do. &lt;br&gt;
What would be perfect is the ability to control when a producer thread resumes - specifically, resuming it only once a task is removed from the &lt;code&gt;tasks&lt;/code&gt; buffer, so we're sure there is room to add another one.&lt;/p&gt;
&lt;h1&gt;
  
  
  ConditionVariable to the rescue!
&lt;/h1&gt;

&lt;p&gt;The fix is simple: we put in a condition variable which gives up the &lt;code&gt;mutex&lt;/code&gt; lock whenever we detect that there is no point in the producer continuing to loop. We also need to tell the producer it can resume once an empty spot opens up in the &lt;code&gt;tasks&lt;/code&gt; buffer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;mutex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="n"&gt;to_exit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;false&lt;/span&gt;
&lt;span class="n"&gt;times_tasks_added&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="n"&gt;times_time_wasted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="n"&gt;executed_tasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="n"&gt;cv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ConditionVariable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="c1"&gt;# consumer&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;to_exit&lt;/span&gt;  &lt;span class="c1"&gt;# a way to stop execution&lt;/span&gt;

      &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
          &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"Task :)"&lt;/span&gt;
          &lt;span class="n"&gt;times_tasks_added&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;
          &lt;span class="n"&gt;times_time_wasted&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
          &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# no need to continue looping in such a case, only continue after it makes sense&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="c1"&gt;# producer&lt;/span&gt;
&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;to_exit&lt;/span&gt;  &lt;span class="c1"&gt;# a way to stop execution&lt;/span&gt;
      &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;nil&lt;/span&gt;
      &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
          &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shift&lt;/span&gt;
          &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;signal&lt;/span&gt;  &lt;span class="c1"&gt;# one new task can now be added&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;nil?&lt;/span&gt;
        &lt;span class="mi"&gt;100000&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
          &lt;span class="c1"&gt;# Simulating CPU work&lt;/span&gt;
          &lt;span class="c1"&gt;# also doing it outside the mutex so we don't block te tasks array (other producer might want to take a task as well)&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
        &lt;span class="n"&gt;executed_tasks&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;executed_tasks&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;  &lt;span class="c1"&gt;# don't loop forever&lt;/span&gt;
          &lt;span class="n"&gt;to_exit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Total tasks added: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;times_tasks_added&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Total times we branched out into the useless else statement: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;times_time_wasted&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby saver_of_time.rb
Total tasks added: 101
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 50
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby saver_of_time.rb
Total tasks added: 100
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 45
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby saver_of_time.rb
Total tasks added: 100
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 42
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Woo, performance!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In reality, it's worth noting that the previous example and this one run in about the same time. 400k useless iterations sounds like a lot, but our computers are fast enough that we never notice the inefficiency. Regardless, I hope this example managed to clearly illustrate the problem.&lt;/p&gt;
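&lt;p&gt;As an aside (not part of the original walkthrough): Ruby's standard library already packages this exact pattern. &lt;code&gt;SizedQueue&lt;/code&gt; is a thread-safe buffer with a fixed capacity whose &lt;code&gt;push&lt;/code&gt; blocks while the queue is full and whose &lt;code&gt;pop&lt;/code&gt; blocks while it is empty, so neither side ever busy-loops. A minimal sketch of our two-slot buffer built on it:&lt;/p&gt;

```ruby
# SizedQueue blocks producers at capacity and consumers when empty,
# doing the mutex + condition variable bookkeeping for us.
buffer = SizedQueue.new(2)  # at most 2 items, like our tasks array
consumed = 0

producer = Thread.new do
  5.times { buffer.push("Task :)") }  # blocks whenever the buffer is full
  buffer.push(:done)                  # sentinel so the consumer knows to stop
end

consumer = Thread.new do
  loop do
    task = buffer.pop                 # blocks whenever the buffer is empty
    break if task == :done
    consumed += 1
  end
end

[producer, consumer].each(&:join)
puts "Consumed #{consumed} tasks"
```

&lt;p&gt;In production code, reaching for &lt;code&gt;SizedQueue&lt;/code&gt; (or &lt;code&gt;Queue&lt;/code&gt; for an unbounded buffer) is usually preferable to hand-rolling the mutex and condition variable dance above.&lt;/p&gt;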

&lt;h1&gt;
  
  
  Further optimization
&lt;/h1&gt;

&lt;p&gt;Do we even need to enter the &lt;code&gt;else&lt;/code&gt; branch at all? Could we not put &lt;code&gt;cv.wait&lt;/code&gt; inside the block which adds the tasks, calling &lt;code&gt;wait&lt;/code&gt; whenever the buffer becomes full? We can, and that way the producer should never enter the &lt;code&gt;else&lt;/code&gt; block: it would only be resumed to add a task, and it goes back to sleep whenever the buffer fills up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;to_exit&lt;/span&gt;  &lt;span class="c1"&gt;# a way to stop execution&lt;/span&gt;

  &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"Task :)"&lt;/span&gt;
      &lt;span class="n"&gt;times_tasks_added&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# no need to continue looping in such a case, only continue after it makes sense&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
      &lt;span class="n"&gt;times_time_wasted&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby no_time_wasted.rb
Total tasks added: 100.
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 16
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby no_time_wasted.rb
Total tasks added: 102
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 13
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby no_time_wasted.rb
Total tasks added: 101
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 372021
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;What the hell?&lt;/em&gt; We still entered the &lt;code&gt;else&lt;/code&gt; block, and we even got back to the previous levels of needless execution!&lt;/p&gt;

&lt;p&gt;Concurrent programming is hard. We're using two producer threads here, and when one fills up the &lt;code&gt;tasks&lt;/code&gt; array it releases the mutex. The other producer thread then gets resumed (remember, we release the mutex only when &lt;code&gt;tasks&lt;/code&gt; is full) and enters the &lt;code&gt;else&lt;/code&gt; branch, since the &lt;code&gt;tasks&lt;/code&gt; buffer is full.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fgati0Q7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2Fgati0Q7.png" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;You might want to open this in another tab - &lt;a href="http://bit.ly/2y0RTf6" rel="noopener noreferrer"&gt;High-Resolution&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Okay, well the simplest thing to do is put a &lt;code&gt;wait&lt;/code&gt; back where we had one. This should limit the useless calls as much as possible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;to_exit&lt;/span&gt;  &lt;span class="c1"&gt;# a way to stop execution&lt;/span&gt;

  &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="n"&gt;tasks&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"Task :)"&lt;/span&gt;
      &lt;span class="n"&gt;times_tasks_added&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
        &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
      &lt;span class="n"&gt;times_time_wasted&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
      &lt;span class="n"&gt;cv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby no_time_wasted_fixed.rb
Total tasks added: 101
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 5
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby no_time_wasted_fixed.rb
Total tasks added: 101
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 8
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby no_time_wasted_fixed.rb
Total tasks added: 101
Total &lt;span class="nb"&gt;times &lt;/span&gt;we branched out into the useless &lt;span class="k"&gt;else &lt;/span&gt;statement: 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Well, I'm afraid this is about as much as we can do with the &lt;code&gt;ConditionVariable&lt;/code&gt;. The reason we still enter the &lt;code&gt;else&lt;/code&gt; block a couple of times is most likely the one depicted in the image above.&lt;br&gt;
Although there is one other possibility:&lt;/p&gt;

&lt;h1&gt;
  
  
  Spurious Wakeups
&lt;/h1&gt;

&lt;p&gt;A &lt;em&gt;spurious wakeup&lt;/em&gt; is when a &lt;code&gt;ConditionVariable&lt;/code&gt; gets woken up without being signaled. This might sound absurd, but it is allowed because it can boost performance in some cases. According to David R. Butenhof's Programming with POSIX Threads (ISBN 0-201-63392-2): &lt;em&gt;"Spurious wakeups may sound strange, but on some multiprocessor systems, making condition wakeup completely predictable might substantially slow all condition variable operations."&lt;/em&gt;&lt;br&gt;
Spurious wakeups also encourage robust multithreaded code, since they force you to handle such cases. This is why it is &lt;strong&gt;strongly recommended&lt;/strong&gt; that you always call your &lt;code&gt;ConditionVariable&lt;/code&gt;'s &lt;code&gt;wait&lt;/code&gt; inside a loop which re-checks the appropriate condition &lt;em&gt;(as we do with &lt;code&gt;if tasks.length &amp;lt; 2&lt;/code&gt;)&lt;/em&gt;.&lt;br&gt;
Here is an interesting discussion on the topic: &lt;a href="https://groups.google.com/forum/#!topic/comp.programming.threads/MnlYxCfql4w" rel="noopener noreferrer"&gt;comp.programming.threads&lt;/a&gt;&lt;/p&gt;
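To make that recommendation concrete, here is a minimal bounded-buffer sketch (a toy example of my own, not code from this article) where both sides re-check their condition in a `while` loop around `cv.wait`, so a spurious wakeup simply sends the thread back to sleep:

```ruby
mutex = Mutex.new
cv = ConditionVariable.new
tasks = []

producer = Thread.new do
  5.times do
    mutex.synchronize do
      # Re-check the condition on every wakeup: a spurious wakeup (or
      # another thread getting there first) sends us right back to sleep.
      cv.wait(mutex) while tasks.length >= 2
      tasks << 'Task :)'
      cv.broadcast  # wake the consumer if it is waiting on an empty buffer
    end
  end
end

consumer = Thread.new do
  5.times do
    mutex.synchronize do
      cv.wait(mutex) while tasks.empty?
      tasks.shift
      cv.broadcast  # wake the producer if it is waiting on a full buffer
    end
  end
end

[producer, consumer].each(&:join)
puts "Tasks left in the buffer: #{tasks.length}"
```

Since both producer and consumer wait on the same condition variable here, the sketch uses `broadcast` rather than `signal`, so a wakeup can never be "lost" on the wrong kind of waiter.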

&lt;p&gt;I personally could not tell whether a producer thread was woken up spuriously or simply got scheduled when a previous producer went to sleep. I did, however, dig through the &lt;a href="https://github.com/ruby/ruby" rel="noopener noreferrer"&gt;MRI&lt;/a&gt; code to verify that &lt;code&gt;cv.wait&lt;/code&gt; is vulnerable to spurious wakeups.&lt;br&gt;
Here is the way it gets called - &lt;a href="https://github.com/ruby/ruby/blob/ruby_2_4/thread_sync.c#L1180" rel="noopener noreferrer"&gt;rb_condvar_wait&lt;/a&gt; -&amp;gt; &lt;a href="https://github.com/ruby/ruby/blob/ruby_2_4/thread_sync.c#L1157" rel="noopener noreferrer"&gt;do_sleep&lt;/a&gt; -&amp;gt; &lt;a href="https://github.com/ruby/ruby/blob/ruby_2_4/thread_sync.c#L471" rel="noopener noreferrer"&gt;mutex_sleep&lt;/a&gt; -&amp;gt; &lt;a href="https://github.com/ruby/ruby/blob/ruby_2_4/thread_sync.c#L436" rel="noopener noreferrer"&gt;rb_mutex_sleep&lt;/a&gt; -&amp;gt; &lt;a href="https://github.com/ruby/ruby/blob/ruby_2_4/thread_sync.c#L421" rel="noopener noreferrer"&gt;rb_mutex_sleep_forever&lt;/a&gt; -&amp;gt; &lt;a href="https://github.com/ruby/ruby/blob/ruby_2_4/thread.c#L1170" rel="noopener noreferrer"&gt;rb_thread_sleep_deadly_allow_spurious_wakeup&lt;/a&gt; -&amp;gt; &lt;a href="https://github.com/ruby/ruby/blob/ruby_2_4/thread.c#L1073" rel="noopener noreferrer"&gt;sleep_forever&lt;/a&gt;&lt;br&gt;
It seems to boil down to calling the &lt;code&gt;sleep_forever&lt;/code&gt; function which calls &lt;code&gt;native_sleep&lt;/code&gt; in a loop. After the code exits from the sleep, Ruby checks if the thread was woken up on purpose (in &lt;code&gt;RUBY_VM_CHECK_INTS_BLOCKING(th)&lt;/code&gt;) and schedules it to be interrupted if so. Since ours isn't, it likely enters the &lt;code&gt;if (!spurious_check)&lt;/code&gt; block and &lt;code&gt;break&lt;/code&gt;s the loop, effectively stopping the sleep.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;We touched on a couple of important topics in multithreaded programming.&lt;br&gt;
We learned about the precious &lt;code&gt;ConditionVariable&lt;/code&gt; class and more specifically how it allows you to pause threads at will and schedule a resume when you decide to.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ConditionVariable#wait(mutex)&lt;/code&gt; - puts the current thread to sleep, releases the given mutex for the time being and resumes after a signal (or, rarely, a spurious wakeup)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ConditionVariable#signal&lt;/code&gt; - allows &lt;strong&gt;one&lt;/strong&gt; thread waiting on the given condition variable to resume&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ConditionVariable#broadcast&lt;/code&gt; - allows &lt;strong&gt;all&lt;/strong&gt; threads waiting on the given condition variable to resume&lt;/li&gt;
&lt;/ul&gt;
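As a quick, self-contained illustration of the difference between the last two (a toy sketch of my own, not code from the article): three threads wait on the same condition variable, and a single `broadcast` resumes them all, where `signal` would have resumed only one.

```ruby
mutex = Mutex.new
cv = ConditionVariable.new
ready = false

waiters = Array.new(3) do |i|
  Thread.new do
    mutex.synchronize do
      # Always guard the wait with a condition check
      cv.wait(mutex) until ready
    end
    "waiter #{i} resumed"
  end
end

sleep 0.1  # crude way to let all three threads block on cv.wait
mutex.synchronize do
  ready = true
  cv.broadcast  # cv.signal here would resume only one waiter
end

puts waiters.map(&:value)
```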

&lt;p&gt;We dabbled in the producer-consumer problem, tried to optimize it on our own, explored further concurrency pitfalls and, in the end, learned about spurious wakeups.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>concurrency</category>
      <category>multithreaded</category>
    </item>
    <item>
      <title>Working with Multithreaded Ruby Part I</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Tue, 26 Sep 2017 06:47:26 +0000</pubDate>
      <link>https://dev.to/kozlovski/working-with-multithreaded-ruby-part-i-cj3</link>
      <guid>https://dev.to/kozlovski/working-with-multithreaded-ruby-part-i-cj3</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Multithreaded Ruby is a niche topic in our community, and unsurprisingly so. Most Ruby applications are web servers built on Rails or Sinatra; those are single-threaded frameworks, and developers on such projects rarely need to know about threads, &lt;a href="http://edgeguides.rubyonrails.org/active_job_basics.html" rel="noopener noreferrer"&gt;as the framework usually has your back&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Even if you do not use it directly, some basic knowledge of multithreading and its core concepts in an interpreted language like Ruby will surely come in handy throughout your career.&lt;/p&gt;

&lt;p&gt;I assume you know about the &lt;strong&gt;GIL&lt;/strong&gt; (Global Interpreter Lock). In case you don't know what it is, you can read my article &lt;a href="https://dev.to/enether/rubys-gil-in-a-nutshell"&gt;Ruby's GIL in a nutshell&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  GIL != the end of the world
&lt;/h1&gt;

&lt;p&gt;Even though it limits parallelism, Ruby's GIL does not completely stop it. As we know, it exists to guard the interpreter's internal state. As such, &lt;strong&gt;it only applies to Ruby operations&lt;/strong&gt;. In our normal day-to-day code there are a lot of operations that are not the job of Ruby's interpreter to handle.&lt;/p&gt;

&lt;p&gt;A good example is I/O operations. While waiting for an external service to load something, there is no need to hold the GIL, as the external service cannot harm our internal state.&lt;br&gt;
Ruby's PostgreSQL library is written in C, and its method call for a DB query releases the GIL. The following example demonstrates this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'pg'&lt;/span&gt;

&lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;

&lt;span class="n"&gt;first_sleep&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Starting sleep 1'&lt;/span&gt;
  &lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;PG&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;dbname: &lt;/span&gt;&lt;span class="s1"&gt;'test'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'SELECT pg_sleep(1);'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Finished sleep 1'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;second_sleep&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Starting sleep 2'&lt;/span&gt;
  &lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;PG&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;dbname: &lt;/span&gt;&lt;span class="s1"&gt;'test2'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'SELECT pg_sleep(1);'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Finished sleep 2'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;random&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'In a random thread'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;first_sleep&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;second_sleep&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Time it took: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we spin up two threads, open connections to two different databases and run a one-second sleep query in each. Without parallelism, this should take at least 2 seconds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby async_pg.rb
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Starting &lt;span class="nb"&gt;sleep &lt;/span&gt;2
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Starting &lt;span class="nb"&gt;sleep &lt;/span&gt;1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; In a random thread
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Finished &lt;span class="nb"&gt;sleep &lt;/span&gt;2
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Finished &lt;span class="nb"&gt;sleep &lt;/span&gt;1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Time it took: 1.074824
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;But it runs in 1 second!&lt;/em&gt;&lt;br&gt;
This proves that the PostgreSQL query does not hold the GIL and lets the other thread take control. Not only does it not lock the interpreter, it actually runs the query in parallel with the other one; that is the only way two one-second sleep queries could finish in roughly one second!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FvgIMgo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FvgIMgo9.png" width="661" height="651"&gt;&lt;/a&gt;&lt;/p&gt;
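For contrast, a CPU-bound version of the same experiment (a sketch of my own, not from the article) gets no such speedup on MRI: pure-Ruby computation holds the GIL the whole time, so two threads take roughly as long as running the work twice sequentially.

```ruby
def cpu_work
  # Pure-Ruby number crunching: the GIL stays held throughout,
  # so on MRI two threads cannot run this in parallel
  1_000_000.times { |i| i * i }
end

start = Time.now
threads = Array.new(2) { Thread.new { cpu_work } }
threads.each(&:join)
puts "CPU-bound, 2 threads: #{Time.now - start}"
```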
&lt;h1&gt;
  
  
  Reminder: The GIL does not protect you
&lt;/h1&gt;

&lt;p&gt;A problem can occur when two or more threads access shared data and try to change it. This is called a &lt;strong&gt;race condition&lt;/strong&gt;.&lt;br&gt;
Because Ruby's thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the algorithm and &lt;em&gt;seemingly&lt;/em&gt; out of your control.&lt;br&gt;
It is thus possible for two threads to modify data in a sequence that produces an unexpected outcome.&lt;/p&gt;

&lt;p&gt;Here is an example of the so-called check-and-act race condition, where you check a variable's value and then act based on it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_money&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Sending $&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# Simulate network call sending of money PS: This is I/O, so you know Ruby releases GIL here&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;


&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;money_is_sent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;false&lt;/span&gt;

&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;th&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;money_is_sent&lt;/span&gt;
      &lt;span class="n"&gt;send_money&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
      &lt;span class="n"&gt;money_is_sent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;th&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;


&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We obviously want to send the money only once, but running the code shows that this is not the case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby balling.rb
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Sending &lt;span class="nv"&gt;$10&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Sending &lt;span class="nv"&gt;$10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what happens here:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fi.imgur.com%2FicZANVe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fi.imgur.com%2FicZANVe.png" width="760" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you saw, what looks like straightforward code can end up producing a huge problem (losing us money!) when executed concurrently. It is up to you to make your code thread-safe.&lt;/p&gt;
&lt;h1&gt;
  
  
  How to protect yourself
&lt;/h1&gt;

&lt;p&gt;So how could we avoid such race conditions?&lt;br&gt;
Simple: you can take the same approach as the Ruby core team and introduce your own lock (kind of like the GIL), scoped to a single block of code.&lt;br&gt;
This is called a Mutex (mutual exclusion), and it helps you synchronize access to blocks of code by acting like a gatekeeper.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_money&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Sending $&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;amount&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# Simulate network call sending of money&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;money_is_sent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;false&lt;/span&gt;

&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;th&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;money_is_sent&lt;/span&gt;
        &lt;span class="n"&gt;send_money&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
        &lt;span class="n"&gt;money_is_sent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;  
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="n"&gt;threads&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;th&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;


&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;threads&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We define a &lt;code&gt;Mutex&lt;/code&gt; and call its &lt;code&gt;synchronize&lt;/code&gt; method. When we enter the block passed to &lt;code&gt;synchronize&lt;/code&gt;, the mutex gets locked. If another thread tries to enter code guarded by &lt;code&gt;lock.synchronize&lt;/code&gt;, it will see that the lock is held and pause until it is released.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby balling_on_a_budget.rb
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Sending &lt;span class="nv"&gt;$10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
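Conceptually, `synchronize` is equivalent to an explicit lock/unlock wrapped in an `ensure` (a small sketch to show the equivalence; `Mutex#lock`, `#unlock` and `#locked?` are real stdlib methods):

```ruby
lock = Mutex.new

# The long-hand form: acquire, run the critical section, and always
# release the lock, even if the block raises
lock.lock
begin
  result = 1 + 1  # stand-in for a critical section
ensure
  lock.unlock
end

# The equivalent short-hand form we used above
result2 = nil
lock.synchronize { result2 = 1 + 1 }

puts result == result2
```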



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F2vNor1C.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F2vNor1C.png" width="800" height="968"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Be sure to note that &lt;code&gt;lock.synchronize&lt;/code&gt; only prevents a thread from being interrupted by others wanting to execute code wrapped by the same &lt;code&gt;lock&lt;/code&gt; object!&lt;br&gt;
Creating two different locks obviously will not work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;money_is_sent&lt;/span&gt;
        &lt;span class="n"&gt;send_money&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
        &lt;span class="n"&gt;money_is_sent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
      &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby lock_city.rb
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Sending &lt;span class="nv"&gt;$10&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Sending &lt;span class="nv"&gt;$10&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;yeah, no way&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Mutexes are not perfect
&lt;/h1&gt;

&lt;p&gt;Now that we know about these locks, we need to pay attention to how we use them. They offer protection, but they can also backfire on you if used incorrectly.&lt;br&gt;
It is possible to end up in a so-called &lt;strong&gt;deadlock&lt;/strong&gt; (sounds scary, doesn't it?). A deadlock is a situation where one thread holds mutex &lt;strong&gt;A&lt;/strong&gt; and waits for mutex &lt;strong&gt;B&lt;/strong&gt; to be released, while the thread holding mutex &lt;strong&gt;B&lt;/strong&gt; is waiting for mutex &lt;strong&gt;A&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thread'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'thwait'&lt;/span&gt;

&lt;span class="n"&gt;first_lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;second_lock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mutex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;

&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;first_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# essentially forces a context switch&lt;/span&gt;
    &lt;span class="n"&gt;second_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Locked #1 then #2'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Thread&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;second_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;  &lt;span class="c1"&gt;# essentially forces a context switch&lt;/span&gt;
    &lt;span class="n"&gt;first_lock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;synchronize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s1"&gt;'Locked #2 then #1'&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="no"&gt;ThWait&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all_waits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby dead_lock.rb
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /Users/enether/.rvm/rubies/ruby-2.4.1/lib/ruby/2.4.0/thwait.rb:112:in &lt;span class="sb"&gt;`&lt;/span&gt;pop&lt;span class="s1"&gt;': No live threads left. Deadlock? (fatal)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fi.imgur.com%2FV2mHPXx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fi.imgur.com%2FV2mHPXx.png" width="564" height="489"&gt;&lt;/a&gt;&lt;br&gt;
They are both holding what the other thread wants and waiting for what the other thread has.&lt;br&gt;
Of course, this is a contrived example and you will rarely use two mutexes in exactly this way, but the pitfall is general: any time you acquire multiple locks in inconsistent orders, a deadlock is possible.&lt;/p&gt;
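&lt;p&gt;The standard way to avoid this pitfall is to always acquire locks in one fixed, global order. Here is a minimal Ruby sketch of the same two-thread program, changed only so that both threads take the locks in the same order (I have swapped &lt;code&gt;ThWait&lt;/code&gt; for plain &lt;code&gt;join&lt;/code&gt; calls, which are equivalent here):&lt;/p&gt;

```ruby
first_lock = Mutex.new
second_lock = Mutex.new

# Both threads take first_lock before second_lock, so no thread can
# ever hold one lock while waiting on a thread that holds the other.
a = Thread.new do
  first_lock.synchronize do
    sleep 1  # still forces a context switch
    second_lock.synchronize { puts 'Locked #1 then #2' }
  end
end

b = Thread.new do
  first_lock.synchronize do
    sleep 1
    second_lock.synchronize { puts 'Locked #1 then #2 as well' }
  end
end

[a, b].each(&:join)  # completes instead of deadlocking
```

&lt;p&gt;With the inconsistent ordering from the snippet above this program dies with the "No live threads left" error; with a fixed ordering the second thread simply waits its turn.&lt;/p&gt;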

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;We saw that even with the GIL you can still run tasks asynchronously (I/O and native libraries), and confirmed that the GIL won't save you from thread-unsafe code.&lt;br&gt;
You learned about the most common pitfall - the check-then-act race condition - and we introduced a way of handling it with our own little GIL-esque lock (a Mutex), only to see that even that can backfire in a deadlock.&lt;/p&gt;

&lt;p&gt;I hope I've managed to showcase how tricky multithreaded programming can turn out to be and how it can introduce problems you would never encounter when programming synchronously.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>concurrency</category>
      <category>multithreaded</category>
    </item>
    <item>
      <title>Crystal - the Ruby you've never heard of</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Mon, 18 Sep 2017 12:19:29 +0000</pubDate>
      <link>https://dev.to/enether/crystal---the-ruby-youve-never-heard-of</link>
      <guid>https://dev.to/enether/crystal---the-ruby-youve-never-heard-of</guid>
      <description>&lt;h1&gt;
  
  
  What?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://crystal-lang.org/" rel="noopener noreferrer"&gt;Crystal&lt;/a&gt; is a new, elegant, multi-paradigm programming language that is productive and fast. It has &lt;del&gt;Ruby's&lt;/del&gt; a Ruby-inspired syntax and compiles to native code. It is actually unreal how similar to Ruby this language looks like.&lt;br&gt;
This language combines efficient code with developer productivity, adds full OOP, a great concurrency model and a compiler that holds your hand.&lt;/p&gt;

&lt;p&gt;This article gives you a short overview, a direct performance comparison with Ruby, and a look at some things that set Crystal apart. It is advised you know at least some Ruby before reading on.&lt;/p&gt;
&lt;h1&gt;
  
  
  Starting with the fun stuff - a performant example
&lt;/h1&gt;

&lt;p&gt;Let's get a feel for how performant Crystal actually is.&lt;br&gt;
I wrote an &lt;a href="https://en.wikipedia.org/wiki/AA_tree" rel="noopener noreferrer"&gt;AA Tree&lt;/a&gt; in both &lt;a href="https://github.com/Enether/crystal-aa-tree/blob/master/AA_Tree.cr" rel="noopener noreferrer"&gt;Crystal&lt;/a&gt; and &lt;a href="https://github.com/Enether/crystal-aa-tree/blob/ruby-conversion/AA_Tree.rb" rel="noopener noreferrer"&gt;Ruby&lt;/a&gt;.&lt;br&gt;
&lt;em&gt;Note: Code quality might not be top-notch. Some lines of Crystal code are intentionally written more explicitly&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We are going to be running this code to benchmark each implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;elements_count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ARGV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_i&lt;/span&gt;  &lt;span class="c1"&gt;# first command line argument&lt;/span&gt;
&lt;span class="n"&gt;root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;AANode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;value: &lt;/span&gt;&lt;span class="n"&gt;elements_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;level: &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tree&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;AATree&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;

&lt;span class="n"&gt;elements_count&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="no"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Tree should not contain &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tree&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contains?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;tree&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="no"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Tree should contain &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;tree&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contains?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;elements_count&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="no"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Tree should contain &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;tree&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contains?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;tree&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="no"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Tree should not contain &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tree&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;contains?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Time it took: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; seconds."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This benchmark adds numbers to our tree (which keeps them sorted internally) and then removes them one by one, checking whether the tree contains the given number before and after each addition and deletion.&lt;/p&gt;

&lt;p&gt;The code snippet above is actually Crystal code.&lt;br&gt;
Like I said, these languages are identical at first glance. Rewriting the code from Crystal to Ruby took me a total of &lt;a href="https://github.com/Enether/crystal-aa-tree/pull/1/files" rel="noopener noreferrer"&gt;50 line changes&lt;/a&gt; for a 360 line file. &lt;em&gt;&lt;a href="https://github.com/Enether/crystal-aa-tree/pull/2/files" rel="noopener noreferrer"&gt;27 if you were greedy&lt;/a&gt;&lt;/em&gt;&lt;br&gt;
It is worth noting that those changes are simply removing &lt;code&gt;.as()&lt;/code&gt; method calls and type annotations.&lt;/p&gt;
&lt;h4&gt;
  
  
  Okay, they look identical but how much faster is Crystal?
&lt;/h4&gt;

&lt;p&gt;Let's build the executable and start testing&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;crystal build AA_Tree.cr &lt;span class="nt"&gt;-o&lt;/span&gt; crystal_tree &lt;span class="nt"&gt;--release&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 100 elements&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;./crystal_tree 100
Time it took: 0.0006560 seconds.
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby AA_Tree.rb 100
Time it took: 0.00172 seconds.

&lt;span class="c"&gt;# 10K elements&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;./crystal_tree 10000
Time it took: 0.0044000 seconds.
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby AA_Tree.rb 10000
Time it took: 0.288619 seconds.

&lt;span class="c"&gt;# 100K elements&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;./crystal_tree 100000
Time it took: 0.0498230 seconds.
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby AA_Tree.rb 100000
Time it took: 3.414404 seconds.

&lt;span class="c"&gt;# 1 million elements&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;./crystal_tree 1000000
Time it took: 0.5007820 seconds.
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby AA_Tree.rb 1000000
Time it took: 39.370083 seconds.

&lt;span class="c"&gt;# 10 million elements&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;./crystal_tree 100000000
Time it took: 5.6283920 seconds.
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;ruby AA_Tree.rb 100000000
&lt;span class="c"&gt;# Still running&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, Crystal runs laps around Ruby: judging by the 1 million element run, it is roughly 80 times faster (0.5 vs 39.4 seconds).&lt;/p&gt;

&lt;h1&gt;
  
  
  Quirks and differences to Ruby
&lt;/h1&gt;

&lt;p&gt;Despite the similarities, there are substantial differences from Ruby; here we will highlight the most obvious and interesting ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types, type checking and type unions
&lt;/h2&gt;

&lt;p&gt;The most apparent difference is that Crystal uses and mostly enforces types for variables. It has great type inference - if you do not explicitly define a variable's type, the compiler figures it out itself.&lt;br&gt;
Crystal's typing is a sort of mix between static and dynamic typing. It allows you to reassign a variable to a different type&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello"&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; String&lt;/span&gt;
&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; Int32&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;but it also allows you to enforce a variable's type&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;String&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello"&lt;/span&gt;  &lt;span class="c1"&gt;# a should be a string and only a string!&lt;/span&gt;
&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt; &lt;span class="c1"&gt;# error:  type must be String, not (Int32 | String)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Type Unions
&lt;/h3&gt;

&lt;p&gt;Were you wondering what the &lt;code&gt;(Int32 | String)&lt;/code&gt; type was in the error message above?&lt;br&gt;
This is a so-called type union, which is a set of multiple types.&lt;br&gt;
If we were to enforce &lt;code&gt;a&lt;/code&gt; to be a union of &lt;code&gt;Int32&lt;/code&gt; and &lt;code&gt;String&lt;/code&gt;, the compiler would allow us to assign either type to that variable as it knows to expect both.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Int32&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="no"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello"&lt;/span&gt;
&lt;span class="c1"&gt;# Completely okay&lt;/span&gt;

&lt;span class="c1"&gt;# But if we were to try to assign another type to it&lt;/span&gt;
&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; type must be (Int32 | String), not (Bool | Int32 | String)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Type Inference and Type Checking
&lt;/h3&gt;

&lt;p&gt;The compiler can figure out the type of a variable itself in most cases. The type inference algorithm is deliberately built to work when the type of the variable is obvious to a human reader, rather than digging deep to deduce an obscure specific type.&lt;/p&gt;

&lt;p&gt;In cases where multiple types are possible (e.g. different branches assign different types), the compiler gives the variable a union type. Crystal code &lt;strong&gt;won't compile&lt;/strong&gt; if one of the possible types does not support a method invoked on the variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;
  &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"String"&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; String&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;
  &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; Int32&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; (String | Int32)&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;camelcase&lt;/span&gt;  &lt;span class="c1"&gt;# =&amp;gt; undefined method 'camelcase' for Int32 (compile-time type is (Int32 | String))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the way the compiler protects you from silly mistakes with mismatched types, something that is really common in dynamic languages. It's like having your very own programming assistant!&lt;/p&gt;

&lt;p&gt;The compiler is smart enough to figure out when a variable is obviously of a given type&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;
  &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"String"&lt;/span&gt;
&lt;span class="k"&gt;elsif&lt;/span&gt; &lt;span class="nb"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.75&lt;/span&gt;
  &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;
  &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;nil&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; (String | Int32 | Nil)&lt;/span&gt;
&lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;nil?&lt;/span&gt;
  &lt;span class="c1"&gt;# a is not nil for sure&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# =&amp;gt; (String | Int32)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_a?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# =&amp;gt; String&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
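&lt;p&gt;For contrast, Ruby offers the very same &lt;code&gt;is_a?&lt;/code&gt; check, but it happens only at runtime - nothing rejects code that calls the wrong method on the wrong type until that line actually executes. A quick Ruby sketch of the same branching (my example, not from the Crystal docs):&lt;/p&gt;

```ruby
# In Ruby the equivalent is_a? check happens only at runtime;
# the safety comes from our branching, not from a compiler.
results = [42, "String"].map do |a|
  if a.is_a?(String)
    a.upcase   # only reached when a really is a String
  else
    a + 1      # only reached for the Integer
  end
end

puts results.inspect  # => [43, "STRING"]
```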



&lt;p&gt;There are also ways to assert to the compiler that a variable has the appropriate type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;String&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;camelcase&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This asserts that the &lt;code&gt;a&lt;/code&gt; variable is a String; if at runtime it is not, an exception is raised.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enforcing types
&lt;/h3&gt;

&lt;p&gt;As we said, we have the option to enforce a variable's type or leave it unrestricted.&lt;br&gt;
This holds true for a method's parameters as well. Let's define two methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generic_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Received &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;!"&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;string_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Received string &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;!"&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I assume you can already imagine what'll happen with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="n"&gt;generic_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;generic_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Hello"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;generic_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;generic_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kp"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;string_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Hello"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;string_receiver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# error!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is usually good practice not to enforce a parameter's type unless you need to, as that leads to more generic code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concurrency
&lt;/h2&gt;

&lt;p&gt;Its concurrency model is inspired by that of Go, namely &lt;a href="https://en.wikipedia.org/wiki/Communicating_sequential_processes" rel="noopener noreferrer"&gt;CSP&lt;/a&gt; (Communicating Sequential Processes).&lt;br&gt;
It uses lightweight threads (called fibers) whose execution is managed by the runtime scheduler, not the operating system. Communication between fibers is done through channels, which can be either unbuffered or buffered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FcwjUVtg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FcwjUVtg.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Pictured: A lot of fibers that communicate with each other through channels&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Crystal currently runs in a single thread, but the roadmap includes multithreading. This means there is currently no support for parallelism (except for process forking), though that is subject to change.&lt;br&gt;
Because only a single thread executes your code at the moment, accessing and modifying a variable from different fibers will work just fine. However, once multiple threads are introduced into the language, such code might break. That's why the recommended mechanism for communicating data is channels.&lt;/p&gt;
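&lt;p&gt;An unbuffered Crystal channel behaves much like a thread-safe blocking queue: &lt;code&gt;receive&lt;/code&gt; blocks until someone sends. As a rough analogue (Ruby, not Crystal - my sketch), the same producer/consumer hand-off looks like this with Ruby's stdlib &lt;code&gt;Queue&lt;/code&gt;:&lt;/p&gt;

```ruby
# Rough Ruby analogue of fibers communicating over a channel:
# Queue#pop blocks until a value is pushed, like Channel#receive.
channel = Queue.new

producer = Thread.new do
  3.times { |i| channel << i }  # "send" three values
  channel << :done              # sentinel so the consumer can stop
end

received = []
while (msg = channel.pop) != :done
  received << msg
end

producer.join
puts received.inspect  # => [0, 1, 2]
```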
&lt;h2&gt;
  
  
  Metaprogramming
&lt;/h2&gt;

&lt;p&gt;Crystal has good support for metaprogramming through macros. A macro is a construct that generates code and pastes it into the program at compile time.&lt;/p&gt;

&lt;p&gt;Let's define our own version of Ruby's &lt;code&gt;attr_writer&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;macro&lt;/span&gt; &lt;span class="nf"&gt;attr_writer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;({{&lt;/span&gt;&lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;}})&lt;/span&gt;
        &lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Calling &lt;code&gt;attr_writer foo, Int32&lt;/code&gt; will evaluate to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;foo&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;Int32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="vi"&gt;@foo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;foo&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Greeter&lt;/span&gt;
    &lt;span class="kp"&gt;attr_writer&lt;/span&gt; &lt;span class="n"&gt;hello_msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;String&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hello_msg&lt;/span&gt;
        &lt;span class="vi"&gt;@hello_msg&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;


&lt;span class="n"&gt;gr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Greeter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;span class="n"&gt;gr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hello_msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hello World"&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;gr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hello_msg&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; Hello World&lt;/span&gt;
&lt;span class="n"&gt;gr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hello_msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;11&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; no overload matches 'Greeter#hello_msg=' with type Int32&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Crystal macros support iteration and conditionals and can access constants.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="no"&gt;MAX_LENGTH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="k"&gt;macro&lt;/span&gt; &lt;span class="nf"&gt;define_short_methods&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;names&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;names&lt;/span&gt; &lt;span class="p"&gt;%}&lt;/span&gt;
    &lt;span class="p"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;id&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;size&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="no"&gt;MAX_LENGTH&lt;/span&gt; &lt;span class="p"&gt;%}&lt;/span&gt;
        &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;
          &lt;span class="p"&gt;{{&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;
        &lt;span class="k"&gt;end&lt;/span&gt;
    &lt;span class="p"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;end&lt;/span&gt; &lt;span class="p"&gt;%}&lt;/span&gt;
  &lt;span class="p"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;end&lt;/span&gt; &lt;span class="p"&gt;%}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt;&lt;span class="nf"&gt;ine_short_methods&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;foo&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; 0&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;bar&lt;/span&gt; &lt;span class="c1"&gt;# =&amp;gt; 1&lt;/span&gt;
&lt;span class="c1"&gt;# puts hello =&amp;gt; undefined local variable or method 'hello'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Miscellaneous
&lt;/h1&gt;

&lt;p&gt;Crystal has taken a lot of cool features from other languages and provides various syntax sugar that is oh so sweet!&lt;/p&gt;

&lt;h3&gt;
  
  
  Initializing class instance variables directly in a method
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="vi"&gt;@name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="vi"&gt;@age&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="vi"&gt;@gender&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="vi"&gt;@nationality&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;is equal to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;gender&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nationality&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="vi"&gt;@name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;name&lt;/span&gt;
  &lt;span class="vi"&gt;@age&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;age&lt;/span&gt;
  &lt;span class="vi"&gt;@gender&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gender&lt;/span&gt;
  &lt;span class="vi"&gt;@nationality&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nationality&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Implicit object notation
&lt;/h3&gt;

&lt;p&gt;Case statements support invoking methods on the given object without repeatedly specifying its name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;string_of_the_gods&lt;/span&gt;
&lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;size&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Long string"&lt;/span&gt;
&lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;size&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Normal String"&lt;/span&gt;
&lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;size&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
  &lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Short String"&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;{.&lt;/span&gt;&lt;span class="nf"&gt;even?&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;odd?&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="c1"&gt;# Matches if value1.even? &amp;amp;&amp;amp; value2.odd?&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  External keyword arguments
&lt;/h3&gt;

&lt;p&gt;My personal favorite - Crystal allows you to name a function's parameters one way for the outside world and another way for the method's body.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight crystal"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;increment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;by&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;number&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="n"&gt;increment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;by: &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Compiler
&lt;/h2&gt;

&lt;p&gt;As you saw earlier, the language compiles to a native executable. It nevertheless has a REPL-like tool similar to our beloved &lt;code&gt;irb&lt;/code&gt; - &lt;a href="https://github.com/crystal-community/icr" rel="noopener noreferrer"&gt;https://github.com/crystal-community/icr&lt;/a&gt;&lt;br&gt;
You can also run a file directly, without a separate compile-then-run step, via the &lt;code&gt;crystal&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; enether&lt;span class="nv"&gt;$ &lt;/span&gt;crystal AA_Tree.cr 200000
Time it took: 0.536102 seconds.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This runs a little slower because it skips the optimizations that the &lt;code&gt;--release&lt;/code&gt; build flag enables.&lt;/p&gt;

&lt;h2&gt;
  
  
  C Bindings
&lt;/h2&gt;

&lt;p&gt;You can write a performant library in Crystal and call it from your Ruby code: you bind the Crystal code to C, and Ruby can then use it through that C interface.&lt;br&gt;
I did not delve too deep into this, but apparently it is easy and you can do it without writing a &lt;a href="https://crystal-lang.org/docs/syntax_and_semantics/c_bindings/" rel="noopener noreferrer"&gt;single line of C&lt;/a&gt;. That is awesome!&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;If you write Ruby, picking up Crystal is natural, and you can quickly find yourself writing performance-critical software in it. I believe it has a lot of potential and can benefit not only our community but also non-Ruby programmers, as the syntax is just too easy to pass up. It is a joy to write and it runs blazingly fast - a unique combination that very few languages can boast.&lt;br&gt;
I hope I've sparked your interest with these short examples! I strongly encourage you to take a look at the language for yourself and let me know if I've missed something.&lt;/p&gt;

&lt;p&gt;Here are some resources to read further up on:&lt;br&gt;
&lt;a href="https://groups.google.com/forum/#!forum/crystal-lang" rel="noopener noreferrer"&gt;Google Group&lt;/a&gt;&lt;br&gt;
&lt;a href="https://gitter.im/crystal-lang/crystal" rel="noopener noreferrer"&gt;Gitter Chat&lt;/a&gt;&lt;br&gt;
&lt;a href="https://webchat.freenode.net/?channels=%23crystal-lang" rel="noopener noreferrer"&gt;IRC&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.reddit.com/r/crystal_programming/" rel="noopener noreferrer"&gt;Subreddit&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.reddit.com/r/crystal_programming/" rel="noopener noreferrer"&gt;Newsletter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>crystal</category>
      <category>compiled</category>
    </item>
    <item>
      <title>Ruby's GIL in a nutshell</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Mon, 11 Sep 2017 07:12:20 +0000</pubDate>
      <link>https://dev.to/enether/rubys-gil-in-a-nutshell</link>
      <guid>https://dev.to/enether/rubys-gil-in-a-nutshell</guid>
      <description>&lt;p&gt;Liquid syntax error: Unknown tag 'endcomment'&lt;/p&gt;
</description>
      <category>ruby</category>
      <category>gil</category>
      <category>concurrency</category>
    </item>
    <item>
      <title>Managing RESTful URLs in Django Rest Framework</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Mon, 14 Aug 2017 05:37:47 +0000</pubDate>
      <link>https://dev.to/enether/managing-restful-urls-in-django-rest-framework</link>
      <guid>https://dev.to/enether/managing-restful-urls-in-django-rest-framework</guid>
      <description>

&lt;p&gt;We've all been taught about RESTful API design. It does not take much to realize that these endpoints&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST /products/1/delete
POST /products/1/update
GET /products/1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;are inferior to&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DELETE /products/1
PUT /products/1
GET /products/1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You could also imagine that these multiple URLs per object stack up quickly: if we introduced more objects like &lt;code&gt;/merchants/&lt;/code&gt; or &lt;code&gt;/shops/&lt;/code&gt;, we'd soon be managing a lot of URLs, which gets confusing. Nobody wants to read a 100-line &lt;em&gt;urls.py&lt;/em&gt; file.&lt;/p&gt;

&lt;p&gt;But Django Rest Framework does not support mapping the same URL to different class-views based on the request method. How could we map this one URL with different methods in our &lt;em&gt;urls.py&lt;/em&gt; file?&lt;/p&gt;

&lt;p&gt;Let's first create the initial project with our bad URLs&lt;/p&gt;

&lt;h1&gt;
  
  
  Initial Project
&lt;/h1&gt;

&lt;p&gt;We'll have a dead simple project which allows us to interact with product objects. We want to be able to update a product, get information about it and delete it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;models.py&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.db import models
from rest_framework import serializers


class Product(models.Model):
    name = models.CharField(max_length=500)
    price = models.DecimalField(decimal_places=2, max_digits=5)
    stock = models.IntegerField()


class ProductSerializer(serializers.ModelSerializer):
    class Meta:
        model = Product
        fields = '__all__'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;views.py&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from rest_framework.generics import DestroyAPIView, UpdateAPIView, RetrieveAPIView

from restful_example.models import Product, ProductSerializer


class ProductDestroyView(DestroyAPIView):
    queryset = Product.objects.all()
    serializer_class = ProductSerializer


class ProductUpdateView(UpdateAPIView):
    queryset = Product.objects.all()
    serializer_class = ProductSerializer


class ProductDetailsView(RetrieveAPIView):
    queryset = Product.objects.all()
    serializer_class = ProductSerializer
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and our ugly &lt;em&gt;urls.py&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from django.conf.urls import url
from django.contrib import admin
from restful_example.views import ProductDestroyView, ProductUpdateView, ProductDetailsView

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^products/(?P&amp;lt;pk&amp;gt;\d+)/delete$', ProductDestroyView.as_view()),
    url(r'^products/(?P&amp;lt;pk&amp;gt;\d+)/update$', ProductUpdateView.as_view()),
    url(r'^products/(?P&amp;lt;pk&amp;gt;\d+)$', ProductDetailsView.as_view()),
]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Wait, don't forget tests!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;tests.py&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from rest_framework.test import APITestCase

from restful_example.models import Product, ProductSerializer


class ProductTests(APITestCase):
    def test_can_get_product_details(self):
        product = Product.objects.create(name='Apple Watch', price=500, stock=3)
        response = self.client.get(f'/products/{product.id}')
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.data, ProductSerializer(instance=product).data)

    def test_can_delete_product(self):
        product = Product.objects.create(name='Apple Watch', price=500, stock=3)
        response = self.client.delete(f'/products/{product.id}/delete')
        self.assertEqual(response.status_code, 204)
        self.assertEqual(Product.objects.count(), 0)

    def test_can_update_product(self):
        product = Product.objects.create(name='Apple Watch', price=500, stock=3)
        response = self.client.patch(f'/products/{product.id}/update', data={'name': 'Samsung Watch'})
        product.refresh_from_db()
        self.assertEqual(response.status_code, 200)
        self.assertEqual(product.name, 'Samsung Watch')

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Run the tests and you'll see that this works.&lt;/p&gt;

&lt;h1&gt;
  
  
  Becoming more RESTful
&lt;/h1&gt;

&lt;p&gt;Okay, now it's time to fix our URLs into more sensible ones. As we said, we want all of these views to point to one exact URL and differ only by the method they allow.&lt;/p&gt;

&lt;p&gt;The idea here is to have some sort of base view to which requests for the URL get sent. This view's job is to figure out, given the request method, which view should handle the request, and to forward it there.&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class BaseManageView(APIView):
    """
    The base class for ManageViews
        A ManageView is a view which is used to dispatch the requests to the appropriate views
        This is done so that we can use one URL with different methods (GET, PUT, etc)
    """
    def dispatch(self, request, *args, **kwargs):
        if not hasattr(self, 'VIEWS_BY_METHOD'):
            raise Exception('VIEWS_BY_METHOD static dictionary variable must be defined on a ManageView class!')
        if request.method in self.VIEWS_BY_METHOD:
            return self.VIEWS_BY_METHOD[request.method]()(request, *args, **kwargs)

        return Response(status=405)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This simple class requires us to inherit from it and define a class variable named &lt;code&gt;VIEWS_BY_METHOD&lt;/code&gt; - a dictionary mapping HTTP method names to their handler views.&lt;br&gt;
Using this base class, creating the ManageView class for our Product model is trivial:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ProductManageView(BaseManageView):
    VIEWS_BY_METHOD = {
        'DELETE': ProductDestroyView.as_view,
        'GET': ProductDetailsView.as_view,
        'PUT': ProductUpdateView.as_view,
        'PATCH': ProductUpdateView.as_view
    }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It is worth mentioning here that any &lt;a href="http://www.django-rest-framework.org/api-guide/permissions/"&gt;permission classes&lt;/a&gt; must be defined on the individual views and will not work if put on the &lt;code&gt;ManageView&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We need to quickly edit our &lt;strong&gt;urls.py&lt;/strong&gt;'s &lt;code&gt;urlpatterns&lt;/code&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^products/(?P&amp;lt;pk&amp;gt;\d+)$', ProductManageView.as_view()),  # this now points to the manage view
]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Let's test this new view as well&lt;br&gt;
&lt;strong&gt;tests.py&lt;/strong&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ProductManageViewTests(APITestCase):
    def test_method_pairing(self):
        self.assertEqual(len(ProductManageView.VIEWS_BY_METHOD.keys()), 4)  # we only support 4 methods
        self.assertEqual(ProductManageView.VIEWS_BY_METHOD['DELETE'], ProductDestroyView.as_view)
        self.assertEqual(ProductManageView.VIEWS_BY_METHOD['GET'], ProductDetailsView.as_view)
        self.assertEqual(ProductManageView.VIEWS_BY_METHOD['PUT'], ProductUpdateView.as_view)
        self.assertEqual(ProductManageView.VIEWS_BY_METHOD['PATCH'], ProductUpdateView.as_view)

    def test_non_supported_method_returns_405(self):
        product = Product.objects.create(name='Apple Watch', price=500, stock=3)
        response = self.client.post(f'/products/{product.id}')
        self.assertEqual(response.status_code, 405)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We change the previous tests' URLs to use the new one and see that they all pass:&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Creating test database for alias 'default'...
...
----------------------------------------------------------------------
Ran 3 tests in 0.037s

OK
Destroying test database for alias 'default'...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And voila, we have the same functionality behind one URL!&lt;/p&gt;
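Stripped of the DRF specifics, the pattern above is just a dictionary lookup keyed on the HTTP method. A minimal framework-free sketch (the handler functions and their return values here are illustrative, not DRF APIs):

```python
# Framework-free sketch of the dispatch-by-method idea.
# In the real code the dictionary values are `SomeView.as_view`
# callables from Django REST Framework; here they are plain factories.

class MethodNotAllowed(Exception):
    """Stands in for DRF's 405 Method Not Allowed response."""


def get_product(pk):
    # Illustrative handler; a real view would return a Response.
    return {"id": pk, "action": "retrieved"}


def delete_product(pk):
    return {"id": pk, "action": "deleted"}


class ProductManageView:
    # Maps each HTTP method to a factory returning its handler,
    # mirroring the VIEWS_BY_METHOD dictionary in the article.
    VIEWS_BY_METHOD = {
        "GET": lambda: get_product,
        "DELETE": lambda: delete_product,
    }

    def dispatch(self, method, *args, **kwargs):
        if method not in self.VIEWS_BY_METHOD:
            raise MethodNotAllowed(method)
        # Build the handler, then call it with the request arguments.
        return self.VIEWS_BY_METHOD[method]()(*args, **kwargs)


view = ProductManageView()
print(view.dispatch("GET", 1))     # {'id': 1, 'action': 'retrieved'}
print(view.dispatch("DELETE", 1))  # {'id': 1, 'action': 'deleted'}
```

An unsupported method simply raises, which in the DRF version translates to the 405 response.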

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;What we did was create a main view which dispatches requests to the appropriate views given the request method. I believe that this is the right way to handle multiple methods per URL in DRF when you want to have different &lt;em&gt;class-based&lt;/em&gt; views handling each method.&lt;/p&gt;

&lt;p&gt;This, however, is not optimal for our dead-simple example. For views that are largely generic (like ours, with no custom logic inside) or for function views, there are Routers and ViewSets.&lt;/p&gt;

&lt;h1&gt;
  
  
  Bonus - Routers and ViewSets
&lt;/h1&gt;

&lt;p&gt;Since DRF is awesome, it provides the ability to define all the basic operations on a model in just a few lines.&lt;br&gt;
With a ViewSet class, we get our create, read, update and destroy logic without writing any of it ourselves.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;views.py&lt;/em&gt;&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from rest_framework import viewsets
class ProductViewSet(viewsets.ModelViewSet):
    queryset = Product.objects.all()
    serializer_class = ProductSerializer
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;and we need to define the URL in our urls.py using a Router class&lt;/p&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from rest_framework.routers import DefaultRouter
router = DefaultRouter(trailing_slash=False)
router.register(r'products', restful_views.ProductViewSet)

urlpatterns = [
    url(r'^admin/', admin.site.urls),
]

urlpatterns += router.urls

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This viewset, believe it or not, has all the functionality we implemented above, plus extras like creating a Product (POST /products) and getting a list of all Products (GET /products).&lt;/p&gt;

&lt;p&gt;ViewSets also allow you to override the default views and add custom ones, but the custom handlers are function views tied to one URL, not class-based views.&lt;br&gt;
They are absolutely amazing for our example and other simple projects, but sub-par for managing multiple non-generic class-based views with custom logic inside them.&lt;/p&gt;


</description>
      <category>djangorestframework</category>
      <category>drf</category>
      <category>django</category>
      <category>python</category>
    </item>
    <item>
      <title>From Zero to Hero (How I became a professional developer in a year)</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Tue, 25 Jul 2017 05:39:50 +0000</pubDate>
      <link>https://dev.to/kozlovski/from-zero-to-hero-how-i-became-a-professional-developer-in-a-year</link>
      <guid>https://dev.to/kozlovski/from-zero-to-hero-how-i-became-a-professional-developer-in-a-year</guid>
      <description>&lt;p&gt;I had always thought that I would have a hard time finding a job as a programmer. When starting out, I was easily stifled by various problems. Thoughts that this was not the right thing for me were frequent. I kept going though, as I did not have a better alternative.&lt;/p&gt;

&lt;p&gt;Graduating from high school, I decided to ditch university in favor of coding bootcamps. This was just at a time when they were gaining popularity here in Bulgaria and they gave me hope I would be able to land a job quicker than going through the 4 years of university.&lt;br&gt;
I had a few key reasons for not enrolling in a university:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I did not want to waste 4 years attaining something that could have taken less. Yes, of course those 4 years are not 'wasted' and everything learnt there sooner or later pays off in a direct or indirect (as is the case with math) way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I did not want to be a financial burden on my parents for 4 more years. I knew that I could not sufficiently learn if I had to work simultaneously, so getting a job was out of the question.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I was unsure if this was the right thing for me and I was scared of studying for years just to figure out I didn’t like it once I got a taste of the professional world.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Beginning
&lt;/h2&gt;

&lt;p&gt;So I ventured out into the world of coding bootcamps, at first online, learning the very basics of programming. For this, I want to give a shout out to Software University (&lt;a href="http://www.softuni.bg" rel="noopener noreferrer"&gt;www.softuni.bg&lt;/a&gt;). Their introductory courses proved invaluable for a beginner like me. Two hours for a lecture on Arrays, live coding and numerous challenges on the material might seem like overkill now, but they proved to be immensely useful at the time. SoftUni has world-class introductory programming lessons and it’s a real shame that there are no English lectures, as the world would greatly benefit from them. Take it from somebody who tried to learn programming through other courses and tutorials: I have not been able to find an equally good beginner programming course online. Of the others that I have seen, most go through the material too quickly and do not give students enough problems to solve to fully grasp it. Some do not even have homework!&lt;/p&gt;

&lt;p&gt;Regardless, a couple of introductory courses are simply not enough. I quickly realized that one bootcamp was not going to cut it and at first decided that building up a portfolio to go alongside those certifications was the thing to do.&lt;br&gt;
I started simple, building a &lt;a href="https://github.com/Enether/python_wow" rel="noopener noreferrer"&gt;RPG console game&lt;/a&gt; in Python. Of course, this was a dead-simple application but I did grow it to around 5k lines of code, which, at the time, were a lot for me.&lt;br&gt;
This still was not enough and I ventured out to other courses (some online, some in-person). My workload grew and I found myself consistently spending 10+ hours to stay on top of everything. It felt good though, as I could actually see the progress.&lt;/p&gt;

&lt;p&gt;We were now close to the new year and I had a team project to do for a course. Working with a team was not easy - it was especially hard to divide the work. Knowing me, I did most of it just because I wanted to (you learn by doing, after all) and nobody else had anything against that. This is where we created a really silly Facebook clone for pets (&lt;a href="https://github.com/codeBusters-softuni/petbook" rel="noopener noreferrer"&gt;https://github.com/codeBusters-softuni/petbook&lt;/a&gt;).&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FBsZcDCs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FBsZcDCs.jpg" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;This was cute&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At this point, I felt reasonably confident in my skills and thought that landing an internship sometime in the new year was the next best step. It was time to surround myself with professionals if I wanted to continue growing. No offense, but most people in coding bootcamps are not that good. You can't blame the bootcamps though - the format simply attracts people who want the 'easy' way into an industry with high salaries. But I digress.&lt;br&gt;
I decided that in order to get in a good company, I'd need to know a fair amount of Data Structures and Algorithms. No problem, there were plenty of resources on the topic.&lt;br&gt;
Well, this material turned out to be the toughest yet. Suddenly, I was back to the feeling that maybe I’m not cut out for this. It all culminated when it took me two to three days to implement a Red-Black tree &lt;em&gt;(the classical kind, none of these left-leaning garbage variants)&lt;/em&gt;. This was around Christmas too, and as such I feel an obligatory xkcd is in order here:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgs.xkcd.com%2Fcomics%2Ftree.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgs.xkcd.com%2Fcomics%2Ftree.png" width="562" height="408"&gt;&lt;/a&gt;&lt;br&gt;
On the upside, finally grasping a structure/algorithm was rewarding enough to keep me going.&lt;/p&gt;

&lt;p&gt;Very quickly, a friend and I got addicted to the direct problem-solving aspect of algorithms and started competing in online programming competitions. Obviously, we did not score that well, but it was fun and it incentivized further learning. I felt I made significant progress in my algorithmic thinking here and I encourage everybody to give competitive programming a go.&lt;br&gt;
For the following months (December-March) I spent a fair amount of time solving problems, writing out data structures and learning algorithms.&lt;br&gt;
In retrospect, I'm very glad I spent some time on those topics as they built resiliency, logical thinking and confidence in my skills. I actually continue to take online courses on algorithms &lt;em&gt;(I recommend &lt;a href="https://www.coursera.org/learn/advanced-algorithms-and-complexity" rel="noopener noreferrer"&gt;this one&lt;/a&gt;)&lt;/em&gt; and I will get back into competitive programming some day.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FTLTMJHJ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FTLTMJHJ.png" height="747" width="704"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;My 'trophies'&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Inspired by the platform for algorithmic problems and competitions &lt;a href="https://www.hackerrank.com" rel="noopener noreferrer"&gt;Hackerrank&lt;/a&gt;, I wanted to build something similar but with features I felt were missing. In February I started my third project - Deadline. It was initially planned as a hackathon project, and while it won us tickets to the 2017 WebIt festival, both my partner and I knew we would continue to develop it &lt;em&gt;(and we still do - track its progress here: &lt;a href="https://github.com/two-man-army/deadline" rel="noopener noreferrer"&gt;deadline&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Time to work
&lt;/h2&gt;

&lt;p&gt;It was now around April and time to find a job. I felt confident I could land any internship, as I believed the main thing companies look for in an intern is motivation and potential. I had plenty of motivation and around eight months of non-stop GitHub activity to prove it.&lt;br&gt;
Since December I had wanted to take a shot at Uber, even though it seemed infeasible &lt;em&gt;(a year of bootcamps/courses and then a job at a top technology company employing some of the world’s most elite engineers? yeah, right)&lt;/em&gt;.&lt;br&gt;
Well, against all odds, I did get an invite to an interview. I was ecstatic and had fun throughout the whole process. Afterwards, I thought I had a fair chance of getting in, as I had done my best to show my motivation, personality and ambitions. The technical questions were not hard at all and I felt I did well on them.&lt;br&gt;
I did not get chosen. That was not a big surprise, though disappointing nonetheless. Still, I'm glad I strove for it, as the process helped me become a better engineer.&lt;/p&gt;

&lt;p&gt;With Uber behind me, it was time to continue my search. I wanted to join a company where the problems were interesting and where I would grow the most as an engineer. My next choice was VMware, but their recruiting process was insanely slow and I decided I had had enough of waiting. I started applying to other companies and landed a position as a Junior Ruby Developer at &lt;a href="https://sumup.co.uk/" rel="noopener noreferrer"&gt;SumUp&lt;/a&gt;. I had no previous experience with Ruby, but I made sure to quickly learn the basics, and at this point I feel more than comfortable writing it.&lt;/p&gt;

&lt;p&gt;Here we develop our own product, and that is something I value in a company; I did not want to work at an outsourcing firm.&lt;br&gt;
I love my current job and the colleagues are great, and even though I did not get to work at the place I initially wanted, I'm glad things turned out this way.&lt;br&gt;
I look forward to a long career with them, trying to make the biggest impact I can. They trusted me, and I will make sure to repay that trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the end, I feel I am better off for my decision to skip university for the time being. I can now afford to enroll in university out of desire and curiosity about the field, not out of seeming necessity. And if I do enroll, which I very well might, I will have a clearer vision of the material taught there and will appreciate it more, as I will be able to more easily see its applications.&lt;/p&gt;

&lt;p&gt;The moral of the story: you do not have to follow the beaten path to become successful in any endeavor, and sometimes straying from it can even be the better choice.&lt;br&gt;
Know that everything is possible with sufficient determination and work.&lt;/p&gt;

</description>
      <category>career</category>
      <category>development</category>
      <category>ruby</category>
      <category>python</category>
    </item>
    <item>
      <title>Hi, I'm Stanislav Kozlovski</title>
      <dc:creator>Stanislav Kozlovski</dc:creator>
      <pubDate>Sun, 23 Jul 2017 13:07:39 +0000</pubDate>
      <link>https://dev.to/kozlovski/hi-im-stanislav-kozlovski</link>
      <guid>https://dev.to/kozlovski/hi-im-stanislav-kozlovski</guid>
      <description>&lt;p&gt;I have been coding for 1.5 years.&lt;/p&gt;

&lt;p&gt;You can find me on GitHub as &lt;a href="https://github.com/Enether" rel="noopener noreferrer"&gt;Enether&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I live in Sofia, Bulgaria.&lt;/p&gt;

&lt;p&gt;I work for SumUp&lt;/p&gt;

&lt;p&gt;I mostly program in these languages: Ruby, Python, JavaScript.&lt;/p&gt;

&lt;p&gt;I am currently learning more about everything.&lt;/p&gt;

&lt;p&gt;Nice to meet you.&lt;/p&gt;

</description>
      <category>introduction</category>
    </item>
  </channel>
</rss>
