<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chen</title>
    <description>The latest articles on DEV Community by Chen (@chen).</description>
    <link>https://dev.to/chen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F132320%2Fb20c8894-e135-4d64-8020-0f4e272b46e5.jpg</url>
      <title>DEV Community: Chen</title>
      <link>https://dev.to/chen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chen"/>
    <language>en</language>
    <item>
      <title>Understanding Go’s Supercharged Map in v1.24</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Wed, 05 Mar 2025 20:30:31 +0000</pubDate>
      <link>https://dev.to/chen/understanding-gos-supercharged-map-in-v124-1gka</link>
      <guid>https://dev.to/chen/understanding-gos-supercharged-map-in-v124-1gka</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7nk1lt5980fa997yz5y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7nk1lt5980fa997yz5y.jpg" alt="Cover image by Janko Ferlič on Unsplash" width="640" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go 1.24 introduces a new map implementation, inspired by &lt;a href="https://abseil.io/about/design/swisstables" rel="noopener noreferrer"&gt;Google's Swiss Tables&lt;/a&gt;, which brings significant optimizations and performance enhancements to the language's built-in map type. While Go's previous map implementation was already efficient, the new design takes it a step further by introducing a clever approach to data organization and access.&lt;/p&gt;

&lt;p&gt;To help digest this complex change, let's use a relatable analogy that illustrates how the new map works and how it differs from the previous implementation: a library.&lt;br&gt;
Just as a library organizes books in a way that makes them easy to find and access, a map organizes data for efficient retrieval.&lt;/p&gt;

&lt;p&gt;This analogy will provide a high-level understanding of the key improvements without delving too deeply into technical details. For those interested in a more in-depth exploration, we'll reference additional resources throughout the explanation.&lt;/p&gt;
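One thing to keep in mind throughout: nothing changes at the language level. Ordinary map code like the snippet below behaves identically before and after 1.24; only the machinery behind it is new.

```go
package main

import "fmt"

func main() {
	// The ordinary built-in map; Go 1.24 only swaps its internals.
	library := map[string]string{
		"The Great Gatsby": "F. Scott Fitzgerald",
	}
	author, ok := library["The Great Gatsby"]
	fmt.Println(author, ok)
}
```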


&lt;h2&gt;
  
  
  &lt;strong&gt;The Library Analogy&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Think of Go’s map as a library designed to store books. Here’s how it works:&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;1. Tables Are Library Sections&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The map starts with one &lt;strong&gt;table&lt;/strong&gt;, which is like a section of the library. If this section gets too crowded, the library adds another section. Each table is divided into smaller units called &lt;strong&gt;groups&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;2. Groups Are Bookshelves&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each table is made up of multiple &lt;strong&gt;groups&lt;/strong&gt;, which are like bookshelves in the library. A group can hold up to 8 books (key-value pairs). These groups are the fundamental storage units in Go maps.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;3. Control Word: The Librarian's Cheat Sheet&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Each bookshelf has a label taped to it, called the &lt;strong&gt;control word&lt;/strong&gt;. This cheat sheet contains metadata about the books on that shelf:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It stores tiny "fingerprints" of each book's ID (derived from its hash).&lt;/li&gt;
&lt;li&gt;It marks whether slots on the shelf are empty, occupied, or deleted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This cheat sheet helps librarians quickly locate books without flipping through every slot.&lt;/p&gt;

&lt;p&gt;It can be pictured as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+---------------------+        +---------------------+
|         Map         |        |       Library       |
+---------------------+        +---------------------+
|      Table 0        |        |      Section 1      |
+---------------------+        +---------------------+
| Control Word (64b)  | &amp;lt;----&amp;gt; |     Shelf label     |
+---------------------+        +---------------------+
| Key 0  |  Value 0   |        | Book 1 | Location 1 |
| Key 1  |  Value 1   |        | Book 2 | Location 2 |
|        ...          |        |        ...          |
| Key 7  |  Value 7   |        | Book 8 | Location 8 |
+---------------------+        +---------------------+
| Control Word (64b)  | &amp;lt;----&amp;gt; |     Shelf label     |
+---------------------+        +---------------------+
| Key 0  |  Value 0   |        | Book 1 | Location 1 |
| Key 1  |  Value 1   |        | Book 2 | Location 2 |
|        ...          |        |        ...          |
| Key 7  |  Value 7   |        | Book 8 | Location 8 |
+---------------------+        +---------------------+
|        ...          |        |        ...          |
+---------------------+        +---------------------+
|      Table 1        |        |      Section 2      |
+---------------------+        +---------------------+
|        ...          |        |        ...          |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;How It Works: Storing and Retrieving Books&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s walk through an example of storing and retrieving a book in this library.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Storing a Book&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Say you want to store "The Great Gatsby" by F. Scott Fitzgerald in the map.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generate a Hash&lt;/strong&gt;&lt;br&gt;
The librarian generates a unique ID for &lt;code&gt;"The Great Gatsby"&lt;/code&gt; using a hash function, e.g., &lt;code&gt;0xf83c6f3a3c&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Find the Section and Bookshelf&lt;/strong&gt;&lt;br&gt;
The hash is split into two parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;H1&lt;/strong&gt;: Determines which section (table) and bookshelf (group) the book belongs to. (57 bits)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;H2&lt;/strong&gt;: A small fingerprint stored in the control word for quick identification. (7 bits)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;H1 says: "Go to Section 1, Bookshelf 3."&lt;/li&gt;
&lt;li&gt;H2 says: "Fingerprint is &lt;code&gt;3c&lt;/code&gt;."&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Place the Book&lt;/strong&gt;
The librarian places &lt;code&gt;"The Great Gatsby"&lt;/code&gt; into an available slot on Bookshelf 3 and updates the control word with &lt;code&gt;3c&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
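The H1/H2 split above can be sketched in Go as follows. This is a simplified illustration of the Swiss Table scheme described in the article, not the runtime's actual code; the exact bit assignment is an internal detail.

```go
package main

import "fmt"

// splitHash splits a 64-bit hash into the two parts described above:
// H1 (upper 57 bits) picks the table and group, and H2 (lower 7 bits)
// is the fingerprint recorded in the control word.
func splitHash(hash uint64) (h1 uint64, h2 uint8) {
	h1 = hash >> 7
	h2 = uint8(hash & 0x7F)
	return h1, h2
}

func main() {
	h1, h2 := splitHash(0xf83c6f3a3c)
	// H2 comes out as 0x3c, the fingerprint from the example.
	fmt.Printf("H1=%#x H2=%#x\n", h1, h2)
}
```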




&lt;h3&gt;
  
  
  &lt;strong&gt;Retrieving a Book&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now you want to retrieve the book from the map.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Find the Section and Bookshelf&lt;/strong&gt;&lt;br&gt;
The librarian uses H1 from the hash of &lt;code&gt;"The Great Gatsby"&lt;/code&gt; to go directly to Section 1, Bookshelf 3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check the Cheat Sheet (Control Word)&lt;/strong&gt;&lt;br&gt;
The librarian looks at the control word (&lt;code&gt;3c&lt;/code&gt;) to see if any slots match &lt;code&gt;"The Great Gatsby"&lt;/code&gt;'s fingerprint. If the fingerprint does not match, they know that slot doesn’t hold the desired book, saving time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Confirm and Return&lt;/strong&gt;&lt;br&gt;
If there’s a match, they compare keys directly to confirm it’s &lt;code&gt;"The Great Gatsby"&lt;/code&gt;. Once confirmed, they return its value (&lt;code&gt;10101..&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This process avoids unnecessary checks and minimizes memory lookups, making retrieval lightning-fast.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Handling Collisions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;What happens if multiple books generate the same H1 (i.e., they hash to the same group)? This scenario is known as a collision.&lt;/p&gt;

&lt;p&gt;In our library analogy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If a bookshelf is full, the librarian moves to the next available shelf in the same section.&lt;/li&gt;
&lt;li&gt;This is called &lt;strong&gt;linear probing&lt;/strong&gt;, where nearby groups are checked for free slots.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To ensure efficiency, if all shelves in a section are full, a new section (table) is added, and some books are redistributed between sections based on updated hash calculations to maintain optimal access.&lt;/p&gt;
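The probing walk described above can be sketched as a lookup over a toy control-word array. This is illustrative only: it assumes 0x80 marks an empty slot (safe here since fingerprints are 7 bits), while the real runtime uses SIMD comparisons and a more involved probe sequence.

```go
package main

import "fmt"

const emptySlot = 0x80 // assumed empty-slot marker for this sketch

// findCandidate scans groups starting from the one H1 selected, checking
// the 8 fingerprints of each group. Hitting an empty slot means the key
// cannot be further along, so probing stops early.
func findCandidate(ctrl [][8]uint8, start int, h2 uint8) (group, slot int, ok bool) {
	for i := 0; i < len(ctrl); i++ {
		g := (start + i) % len(ctrl) // move to the next shelf when needed
		for s := 0; s < 8; s++ {
			if ctrl[g][s] == h2 {
				return g, s, true // fingerprint match: compare full keys next
			}
			if ctrl[g][s] == emptySlot {
				return 0, 0, false // an empty slot ends the probe sequence
			}
		}
	}
	return 0, 0, false
}

func main() {
	ctrl := [][8]uint8{
		{0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x19}, // full group, no match
		{0x12, 0x34, 0x3c, emptySlot, emptySlot, emptySlot, emptySlot, emptySlot},
	}
	fmt.Println(findCandidate(ctrl, 0, 0x3c))
}
```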




&lt;h2&gt;
  
  
  &lt;strong&gt;Why This Design is Brilliant&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The new map implementation in Go 1.24 introduces several optimizations inspired by Swiss Table design:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Cache-Friendly Layout&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Keys and values are stored together in groups, improving cache locality (storing related items close together takes advantage of how memory access works).&lt;br&gt;
When looking for an item, both its key and value are likely loaded into memory at once.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Fast Probing with Metadata&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The control word allows fast rejection of irrelevant slots using SIMD operations (Single Instruction, Multiple Data: a single instruction processes multiple data points at once, boosting lookup performance).&lt;br&gt;
This means multiple slots can be checked simultaneously, speeding up lookups significantly.&lt;/p&gt;
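The effect can be imitated in portable Go with a word-wide bit trick: compare all 8 control bytes against a fingerprint at once instead of looping slot by slot. This only illustrates the idea; real SIMD instructions do the comparison exactly, while this SWAR formula can misfire on certain adjacent byte patterns.

```go
package main

import "fmt"

// matchByte returns a mask with the high bit set in every byte of ctrl
// that equals h2, examining all 8 slots in a handful of instructions.
func matchByte(ctrl uint64, h2 uint8) uint64 {
	const lo = 0x0101010101010101
	const hi = 0x8080808080808080
	x := ctrl ^ (lo * uint64(h2)) // matching bytes become zero
	return (x - lo) &^ x & hi     // classic "has a zero byte" trick
}

func main() {
	// Fingerprint 0x3c stored in slots 0 and 5 (slot 0 is the low byte).
	ctrl := uint64(0x80803c808080803c)
	fmt.Printf("%#016x\n", matchByte(ctrl, 0x3c))
}
```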

&lt;p&gt;The pre-1.24 implementation used a &lt;code&gt;tophash&lt;/code&gt; array that resembles this strategy, but it still required pointer chasing when overflow buckets were involved, reducing cache efficiency.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Memory Layout Example&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s a simplified version of how this might look in memory for a single group:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Control Word&lt;/th&gt;
&lt;th&gt;Key0&lt;/th&gt;
&lt;th&gt;Value0&lt;/th&gt;
&lt;th&gt;Key1&lt;/th&gt;
&lt;th&gt;Value1&lt;/th&gt;
&lt;th&gt;...&lt;/th&gt;
&lt;th&gt;Key7&lt;/th&gt;
&lt;th&gt;Value7&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;3c&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"The Great Gatsby"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;10101..&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;The control word (&lt;code&gt;3c&lt;/code&gt;) stores fingerprints for all 8 slots (here only one slot is occupied).&lt;/li&gt;
&lt;li&gt;Keys (&lt;code&gt;"The Great Gatsby"&lt;/code&gt;) and values (&lt;code&gt;10101..&lt;/code&gt;) are stored adjacently within each group for better performance.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Performance Gains&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The redesign brings significant improvements over older implementations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster lookups: Metadata allows skipping irrelevant slots quickly.&lt;/li&gt;
&lt;li&gt;Reduced memory overhead: group storage eliminates overflow-bucket pointers and allows a higher load factor.&lt;/li&gt;
&lt;li&gt;Better scalability: Incremental resizing avoids performance bottlenecks during growth.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With the optimizations, lookups are up to ~30% faster compared to previous versions.&lt;/li&gt;
&lt;li&gt;Memory usage is reduced by as much as ~28% compared to older versions of Go maps.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Go 1.24’s map is like a librarian who gets smarter with every update, finding books faster and using space more efficiently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sections (tables) expand as needed.&lt;/li&gt;
&lt;li&gt;Bookshelves (groups) keep related items close together.&lt;/li&gt;
&lt;li&gt;Cheat sheets (control words) help librarians find books faster without flipping through every slot.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This design balances speed, memory efficiency, and scalability beautifully—making Go maps one of the most optimized hash table implementations out there!&lt;/p&gt;

&lt;p&gt;Whether you're building high-performance systems or just curious about how things work under the hood, understanding these concepts can help you appreciate Go's thoughtful engineering even more.&lt;/p&gt;

&lt;p&gt;For a more detailed post about this implementation, check the official Go blog post &lt;a href="https://go.dev/blog/swisstable?utm_source=devopsian" rel="noopener noreferrer"&gt;Faster Go maps with Swiss Tables&lt;/a&gt;&lt;br&gt;
or ByteSizeGo &lt;a href="https://www.bytesizego.com/blog/go-124-swiss-table-maps" rel="noopener noreferrer"&gt;Swiss Table Maps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt; to very good explanations of maps prior to 1.24:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics?utm_source=devopsian" rel="noopener noreferrer"&gt;How the Go runtime implements maps efficiently&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://victoriametrics.com/blog/go-map/index.html?utm_source=devopsian" rel="noopener noreferrer"&gt;Go Maps Explained: How Key-Value Pairs Are Actually Stored&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>go</category>
      <category>datastructures</category>
      <category>programming</category>
      <category>learning</category>
    </item>
    <item>
      <title>Why I Switched from Makefile to Taskfile</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Tue, 05 Nov 2024 12:45:45 +0000</pubDate>
      <link>https://dev.to/chen/why-i-switched-from-makefile-to-taskfile-4d8l</link>
      <guid>https://dev.to/chen/why-i-switched-from-makefile-to-taskfile-4d8l</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by Kelly Sikkema on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Software projects involve several phases, including building, testing, and deploying code. &lt;br&gt;
For instance, compiling Go source code results in an executable, while frontend frameworks compile into HTML, CSS, and JavaScript files. &lt;br&gt;
Testing is crucial before merging changes or releasing new versions. Deployment scripts often ship software to production. &lt;br&gt;
Each phase requires different tools, typically command-line utilities with various flags and parameters. &lt;br&gt;
Automation tools simplify these processes, enhancing efficiency in daily workflows.&lt;/p&gt;
&lt;h2&gt;
  
  
  Makefile
&lt;/h2&gt;

&lt;p&gt;Makefiles are powerful tools that automate software project workflows. Initially developed for C programs, they now support diverse tasks like website generation and data processing.&lt;/p&gt;

&lt;p&gt;A Makefile contains directives for the &lt;code&gt;make&lt;/code&gt; utility to build or maintain programs and files. It defines tasks and their dependencies, ensuring efficient and reproducible builds.&lt;/p&gt;

&lt;p&gt;I won’t dive into Makefiles in this blog post, as I’m assuming the reader is familiar with the concept. If not, there is plenty of information on the internet (like this &lt;a href="https://makefiletutorial.com/" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt;, for example, or its &lt;a href="https://en.wikipedia.org/wiki/Make_%28software%29" rel="noopener noreferrer"&gt;Wikipedia page&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of Makefile:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated with the &lt;code&gt;make&lt;/code&gt; utility, available on most Linux/MacOS systems.&lt;/li&gt;
&lt;li&gt;A well-established tool with nearly 50 years of history.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the &lt;em&gt;main&lt;/em&gt; advantages I think Makefile has. However, Makefiles have limitations, particularly their syntax, which can be cumbersome for complex tasks.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why I Switched
&lt;/h2&gt;

&lt;p&gt;In one of my projects, I used a Makefile for tasks like running frontend/backend services and database migrations. Here's an example of a migration task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="nl"&gt;migrate-up&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="nv"&gt;GOOSE_DRIVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres &lt;span class="nv"&gt;GOOSE_DBSTRING&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"user=app host=localhost port=5432 dbname=my-app sslmode=disable user=app"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    goose &lt;span class="nt"&gt;-dir&lt;/span&gt; database/migrations up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I wanted to load environment variables from a &lt;code&gt;.env&lt;/code&gt; file by default but allow overrides with &lt;code&gt;ENV_FILE=.env.production&lt;/code&gt;. After struggling with Makefile syntax and solutions that didn't work, I sought alternatives.&lt;/p&gt;
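For the record, one Make approach that can get close is including the env file and exporting everything. This is a sketch of the idea, not a recommendation: `include` parses the file as Makefile syntax, so quoted values or values containing spaces (like the connection string above) trip it up, which is part of what made this painful.

```make
ENV_FILE ?= .env

# Load KEY=VALUE pairs from the env file and export them
# to the environment of every recipe.
include $(ENV_FILE)
export

migrate-up:
	goose -dir database/migrations up
```

Invocation would then be `make migrate-up` or `make migrate-up ENV_FILE=.env.production`.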

&lt;h2&gt;
  
  
  Introducing Taskfile
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://taskfile.dev" rel="noopener noreferrer"&gt;Taskfile&lt;/a&gt; is a Go-based task runner using YAML syntax for defining tasks. It simplifies project workflows by automating repetitive tasks like building, testing, and deploying code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits of Taskfile:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Readable YAML Syntax:&lt;/strong&gt; Easier to understand than Makefiles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single Binary:&lt;/strong&gt; Distributed as a single static executable with no external dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Platform Support:&lt;/strong&gt; Works on Linux, macOS, and Windows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's how I solved my problem using Taskfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;dotenv&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.env'&lt;/span&gt;

&lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;migrate-up&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cmds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;goose -dir database/migrations up&lt;/span&gt;

  &lt;span class="na"&gt;migrate-up-prod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;dotenv&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.env.production&lt;/span&gt;
    &lt;span class="na"&gt;cmds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo executing DB migration on PRODUCTION ..&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sleep &lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="c1"&gt;# allow time to cancel&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;goose -dir database/migrations up&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Taskfile's intuitive API allowed me to quickly implement a solution that was both functional and readable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Choosing the right tool can significantly impact productivity. While Makefile served its purpose initially, Taskfile offered a more elegant solution for my needs. Transitioning took less than 30 minutes and simplified my build process considerably.&lt;/p&gt;

&lt;p&gt;If you're seeking an easy-to-use build tool, consider giving Taskfile a try.&lt;/p&gt;

</description>
      <category>go</category>
      <category>productivity</category>
      <category>ux</category>
    </item>
    <item>
      <title>The value of API-First design on side-projects</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Fri, 12 Jul 2024 14:02:07 +0000</pubDate>
      <link>https://dev.to/chen/the-value-of-api-first-design-on-side-projects-12pl</link>
      <guid>https://dev.to/chen/the-value-of-api-first-design-on-side-projects-12pl</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Photo by Douglas Lopes on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Lately, I had a chance to try out the API-First design approach. I had never written an OpenAPI document before, so I had no real knowledge of its benefits. It always seemed like too much prep work.&lt;/p&gt;

&lt;p&gt;As developers, we often prefer writing code to writing documentation. We dive straight into coding, eager to see our project in action. However, I recently discovered a game-changing approach that has transformed my development process: API-First design. In this post, I'll share my experience implementing this method in a full-stack hobby project, highlighting how it streamlined my workflow and why it's worth considering for your next side project.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;tl;dr: &lt;strong&gt;It will force you to think about your users and how they use your API before writing any code.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’ve been working on a full-stack hobby project where my backend and frontend use different languages (Go and SvelteKit). I decided to give this approach a try and had my &lt;strong&gt;“aha” moment&lt;/strong&gt;. I wish I had done it before.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prioritizing Your Application's Foundation
&lt;/h2&gt;

&lt;p&gt;The API is how we expose our app's functionality. An API-first design approach prioritizes the development of APIs before implementing other parts of a software system. This method focuses on creating a well-designed, consistent, and user-friendly API that serves as the foundation for the entire application.&lt;/p&gt;

&lt;p&gt;This methodology places the API at the center of the development process, treating it as a first-class citizen rather than an afterthought. Your API comes first, then the implementation.&lt;/p&gt;

&lt;p&gt;With a written API specification, we can leverage code generation tools to create some boilerplate code. By defining objects in the specification, code-gen tools can generate the relevant structs, for both the frontend and backend (yes, even when the language used is different). This is a big time saver and it helps us to be consistent.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Open API?
&lt;/h2&gt;

&lt;p&gt;“&lt;strong&gt;&lt;em&gt;The OpenAPI Specification&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;(OAS) defines a standard, language-agnostic interface to RESTful APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection&lt;/em&gt;.”&lt;/p&gt;

&lt;p&gt;Simply put, it’s a &lt;strong&gt;contract&lt;/strong&gt; that describes your API types and endpoints. You list all your API endpoints, their HTTP methods, what they possibly return, and some description of what they do.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Now, what if I told you, you can use this document to improve and accelerate your dev experience?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once I had this document, that describes the contract between my API server and its clients, these are the things I could do:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generate my backend types (Go)&lt;/li&gt;
&lt;li&gt;Generate my frontend types (TypeScript)&lt;/li&gt;
&lt;li&gt;Generate client code for my server (also TypeScript)&lt;/li&gt;
&lt;li&gt;Generate a testing client with Insomnia or Postman&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a lot of boilerplate code I could save myself from writing. It ensures the frontend and backend types are synchronized since both are generated.&lt;/p&gt;

&lt;p&gt;Grab a 🍺, and let’s walk through an example.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Project Structure
&lt;/h2&gt;

&lt;p&gt;We will be using Go for the backend and some JS framework for the frontend, and a simple structure would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;app/
├─ api/ &lt;span class="nt"&gt;--&lt;/span&gt; the place &lt;span class="k"&gt;for &lt;/span&gt;the Swagger OpenAPI document
├─ client/ &lt;span class="nt"&gt;--&lt;/span&gt; the client-side code
├─ cmd/
│  ├─ app.go &lt;span class="nt"&gt;--&lt;/span&gt; thin main func that runs our API server
├─ internal/
│  ├─ api/
│  │  ├─ main.go &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="k"&gt;for &lt;/span&gt;code-gen
│  ├─ &lt;span class="nb"&gt;users&lt;/span&gt;/
│  │  ├─ handlers.go &lt;span class="nt"&gt;--&lt;/span&gt; implements the API contract
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
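The `internal/api/main.go` above typically contains little more than a go:generate directive. Here is a sketch assuming oapi-codegen is the generator; the tool, flags, and paths are illustrative, so check its documentation before copying.

```go
// Package api holds the types generated from the OpenAPI document;
// regenerate with `go generate ./...`.
package api

//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -generate types -package api -o api.gen.go ../../api/openapi.yaml
```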



&lt;h2&gt;
  
  
  Generate The APIs
&lt;/h2&gt;

&lt;p&gt;Let's create our API specification. It includes two endpoints and two structs: User and Error.&lt;br&gt;
Place this file under your &lt;code&gt;/api&lt;/code&gt; directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;openapi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3.0.3&lt;/span&gt;
&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Devopsian OpenAPI Example&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.1.0&lt;/span&gt;
  &lt;span class="na"&gt;contact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://devopsian.net&lt;/span&gt;
&lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost/v1"&lt;/span&gt;
&lt;span class="na"&gt;components&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schemas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
      &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;code&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;message&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;integer&lt;/span&gt;
          &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;int32&lt;/span&gt;
        &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;User&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
      &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;id&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;name&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;email&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
        &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
          &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;email&lt;/span&gt;

&lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;/user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get the current logged-in user&lt;/span&gt;
      &lt;span class="na"&gt;responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;200&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;user response&lt;/span&gt;
          &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;$ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#/components/schemas/User"&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;error&lt;/span&gt;
          &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;$ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#/components/schemas/Error"&lt;/span&gt;
  &lt;span class="na"&gt;/signup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;post&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Creates a new user&lt;/span&gt;
      &lt;span class="na"&gt;responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;200&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Creates a user&lt;/span&gt;
          &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;$ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#/components/schemas/User"&lt;/span&gt;
        &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;error&lt;/span&gt;
          &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;$ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;#/components/schemas/Error"&lt;/span&gt;
      &lt;span class="na"&gt;requestBody&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
              &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
                &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
                  &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;email&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Generate Server-Side Code
&lt;/h2&gt;

&lt;p&gt;To generate the server-side code, we need a code-generation tool. I use &lt;a href="https://github.com/oapi-codegen/oapi-codegen" rel="noopener noreferrer"&gt;oapi-codegen&lt;/a&gt;, which supports many popular HTTP libraries (echo, gin, etc.). At the time of writing, I used &lt;code&gt;oapi-codegen@v2.3.0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Add the following files to your &lt;code&gt;/internal/api&lt;/code&gt; directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// /internal/api/main.go&lt;/span&gt;
&lt;span class="c"&gt;//go:generate oapi-codegen --config cfg.yaml ../api/openapi3.yaml&lt;/span&gt;
&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;

&lt;span class="c"&gt;// make sure to install: &lt;/span&gt;
&lt;span class="c"&gt;// go install github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen@v2.3.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# /internal/api/cfg.yaml&lt;/span&gt;
&lt;span class="na"&gt;package&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api&lt;/span&gt;
&lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;server.gen.go&lt;/span&gt;
&lt;span class="na"&gt;generate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;models&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;echo-server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’m using the echo web framework; browse the library documentation if you want to use another one. Now run &lt;code&gt;go generate ./...&lt;/code&gt; and it will generate the interfaces (handlers) your web server has to implement to &lt;strong&gt;fulfill&lt;/strong&gt; this contract, including the &lt;strong&gt;types&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interface Implementation
&lt;/h3&gt;

&lt;p&gt;Now it’s time to write the implementation. We create a &lt;code&gt;users&lt;/code&gt; package where all the user-related API handlers, business logic, storage, and so on are defined. We will keep it simple and implement the handlers with static content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;UsersHandler&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;DB&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DB&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;UsersHandler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// load the user from the database and return it to the caller&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusOK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="n"&gt;Email&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;types&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Email&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"demo@devopsian.net"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
       &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;"DemoUser"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="s"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;UsersHandler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;PostSignup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PostSignupJSONBody&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusBadRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Code&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusBadRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"invalid request"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c"&gt;// save user in database&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NoContent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusOK&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DB&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;UsersHandler&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;UsersHandler&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;DB&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we create our web server entry point, which we define at &lt;code&gt;cmd/server.go&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Server&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UsersHandler&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;Server&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;UsersHandler&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;

    &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RegisterHandlers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;":8080"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that I explicitly pass &lt;code&gt;nil&lt;/code&gt; as the DB implementation in this example, because we don’t use it.&lt;/p&gt;

&lt;p&gt;That’s it. If the code compiles, &lt;em&gt;our server implements the API contract.&lt;/em&gt; Every endpoint defined in the spec has a handler. If I had missed one, &lt;strong&gt;it would have broken at compile time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How nice is that?&lt;/p&gt;

&lt;h2&gt;
  
  
  Generate a TypeScript Client
&lt;/h2&gt;

&lt;p&gt;It’s time to generate a client for our API. We use the same OpenAPI schema file to generate a TypeScript client. I won’t include a full frontend project in this example, but rather show how you can generate a client for an existing one.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;client/&lt;/code&gt; directory, install the &lt;a href="https://www.npmjs.com/package/openapi-typescript-codegen" rel="noopener noreferrer"&gt;code-generation&lt;/a&gt; tool for JS: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install openapi-typescript-codegen --save-dev&lt;/code&gt;. (This post was tested with v0.29.0)&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;client/api/&lt;/code&gt; directory, and let’s run the tool: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx openapi-typescript-codegen --input ../api/openapi3.yaml --output api/ --name ApiClient&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This will generate a set of TypeScript files. To use the client, we need to create an instance of it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// api.ts&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ApiClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./api/ApiClient&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ApiClient&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;

&lt;span class="c1"&gt;// client has all the methods of our API:&lt;/span&gt;
&lt;span class="c1"&gt;// - getUser()&lt;/span&gt;
&lt;span class="c1"&gt;// - postSignup(requestBody: {name?: string, email?: string})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;API-First design isn't just another development buzzword—it's a powerful approach that can significantly enhance your side projects.&lt;/p&gt;

&lt;p&gt;By prioritizing your API design before implementation, you gain clarity, consistency, and efficiency.&lt;/p&gt;

&lt;p&gt;The OpenAPI specification is a &lt;strong&gt;contract&lt;/strong&gt; between your frontend and backend, &lt;strong&gt;enabling automatic code generation&lt;/strong&gt; for types, clients, and even testing tools.&lt;/p&gt;

&lt;p&gt;This approach not only saves time but also ensures better synchronization between different parts of your application.&lt;/p&gt;

&lt;p&gt;While it may seem like extra work upfront, the long-term benefits—including improved development speed, reduced errors, and better API documentation—make it a valuable investment for any side project.&lt;/p&gt;

&lt;p&gt;If you haven't tried it yet, now might be the perfect time to give it a shot and experience these benefits firsthand.&lt;/p&gt;

</description>
      <category>api</category>
      <category>go</category>
      <category>typescript</category>
      <category>fullstack</category>
    </item>
    <item>
      <title>Inside EKS Networking: Decoding the Service IP Journey</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Mon, 25 Mar 2024 14:00:00 +0000</pubDate>
      <link>https://dev.to/chen/inside-eks-networking-decoding-the-service-ip-journey-4k1</link>
      <guid>https://dev.to/chen/inside-eks-networking-decoding-the-service-ip-journey-4k1</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Photo by Taylor Vick on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Have you ever found yourself deep in the trenches of Kubernetes networking, only to be surprised by a hidden quirk that challenges your understanding? In this blog post, I unravel the mysteries behind Kubernetes networking in Amazon EKS, shedding light on the intricate journey of a packet from the client through the NLB to an ingress controller pod.&lt;/p&gt;

&lt;p&gt;This blog is a story about a change in perception. While I was debugging a problem the other day, I was forced to question things I was certain I knew. Just when you’re sure you understand how Kubernetes works, you hit another road bump that challenges your understanding and sends you off researching; this article is the output of that research.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;Here is the minimal setup required to examine what’s presented here: an EKS cluster with the NGINX ingress controller deployed, and an NLB as the entry point to your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74gh3wfd3rt1mwtpih4e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74gh3wfd3rt1mwtpih4e.png" alt="The Setup" width="678" height="852"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I was debugging a service exposed through an ingress resource. Traffic was flowing from the NLB to my ingress controller, yet to my surprise, the ingress controller wasn’t listening on the target port. The AWS console showed those endpoints as ‘healthy’, meaning they responded to health check samples. But how is that even possible if there is no process listening on that port?&lt;/p&gt;

&lt;p&gt;I gotta say, this drove me nuts. I looked online for similar issues, but nobody mentioned this problem. So I started doing some research, looking for an answer to the question &lt;em&gt;“How does Kubernetes handle Service IPs under the hood, on EKS?”&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring Kubernetes Networking Magic
&lt;/h2&gt;

&lt;p&gt;Before I present the network flow of a packet, here are the assumptions I’m making. Kubernetes is a modular platform; this post applies to &lt;strong&gt;EKS&lt;/strong&gt; with the &lt;strong&gt;VPC-CNI&lt;/strong&gt; add-on running on v1.27. It was checked with an &lt;strong&gt;NLB&lt;/strong&gt;; other load balancer types might behave differently.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The next part is low-level. You should be familiar with &lt;strong&gt;iptables&lt;/strong&gt;. Two blog posts that cover this topic well are &lt;a href="https://iximiuz.com/en/posts/laymans-iptables-101/"&gt;Laymans iptables 101&lt;/a&gt; and &lt;a href="https://sudamtm.medium.com/iptables-a-comprehensive-guide-276b8604eff1"&gt;iptables — a comprehensive guide&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When a packet arrives, it first goes through the iptables &lt;strong&gt;PREROUTING&lt;/strong&gt; chain (a built-in chain):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@...] # iptables -t nat -nvL PREROUTING
Chain PREROUTING (policy ACCEPT 1987 packets, 119K bytes)
 pkts bytes target     prot opt in     out     source               destination         
 756M   5G KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This rule captures every incoming packet and forwards it to the “service portals”: the &lt;strong&gt;KUBE-SERVICES&lt;/strong&gt; chain (a custom chain), where it’s matched and directed to the &lt;strong&gt;relevant Kubernetes Service IP&lt;/strong&gt; (nginx, in our case).&lt;/p&gt;

&lt;p&gt;Incoming packets on a matching port are DNAT’ed to a Service IP. There’s a matching rule for our nginx instance. Incoming packets on its listening port &lt;strong&gt;32443&lt;/strong&gt; are sent to its Service IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iptables -t nat -nvL KUBE-SERVICES | grep nginx
0     0 KUBE-SVC-I66WCJWOLI45ORGK  tcp  --  *      *       0.0.0.0/0            172.20.152.36        /* ingress-nginx/ingress-nginx-controller:https-32443 cluster IP */ tcp dpt:32443 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our packet now has a destination Service IP in the cluster. But Service objects in Kubernetes are just a layer of abstraction; there is no actual Pod with that IP. It’s synthetic. The Kubernetes networking layer needs to translate this IP to the relevant Pods, which is done by mapping the Service IP to Endpoints. This, too, is handled by iptables.&lt;/p&gt;

&lt;p&gt;A dedicated KUBE-SVC-* chain is constructed for every Service object. The purpose of this chain is to &lt;strong&gt;translate the Service IP to Endpoints&lt;/strong&gt; (the actual Pods behind it), and it has an entry for every live Pod. This is where the kernel performs ‘load balancing’ between the Pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iptables -t nat -nvL KUBE-SVC-I66WCJWOLI45ORGK
Chain KUBE-SVC-I66WCJWOLI45ORGK (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-TQZ3GYVANOOYIJMO  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https-32443 -&amp;gt; 10.1.21.61:443 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-NEOTPBRCKXF3UWGX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https-32443 -&amp;gt; 10.1.36.6:443 */
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re not done with iptables just yet. As the output shows, there’s one more chain our packet has to go through; we’re getting close. Each Pod gets its own &lt;strong&gt;KUBE-SEP-&lt;/strong&gt;* (Service Endpoint) chain. If we check the rules of this chain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iptables -t nat -nvL KUBE-SEP-TQZ3GYVANOOYIJMO
Chain KUBE-SEP-TQZ3GYVANOOYIJMO (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.1.21.61         0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https-32443 */
   64  3840 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* ingress-nginx/ingress-nginx-controller:https-32443 */ tcp to:10.1.21.61:443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first rule marks outgoing packets for SNAT. Incoming traffic matches the second rule, whose target is &lt;strong&gt;DNAT&lt;/strong&gt; (Destination Network Address Translation). This rule &lt;strong&gt;rewrites&lt;/strong&gt; the TCP packet’s destination IP to the Pod’s IP, 10.1.21.61:443. From here, routing continues as normal, reaching the Pod on the relevant port.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;When a packet arrives, it first goes through the iptables PREROUTING chain, where it’s forwarded to the KUBE-SERVICES chain. This chain directs packets to the relevant Kubernetes Service IP. From there, iptables translates the Service IP to Endpoints using the KUBE-SVC-* and KUBE-SEP-* chains, ultimately reaching the correct Pod.&lt;/p&gt;

&lt;p&gt;In conclusion, understanding Kubernetes networking in EKS requires delving into iptables manipulations and Kubernetes Service abstractions. I hope this post sheds light on these concepts.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to Structure a Go Project: Start Simple, Refactor Later</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Sun, 18 Feb 2024 20:08:48 +0000</pubDate>
      <link>https://dev.to/chen/how-to-structure-a-go-project-start-simple-refactor-later-cp9</link>
      <guid>https://dev.to/chen/how-to-structure-a-go-project-start-simple-refactor-later-cp9</guid>
      <description>&lt;p&gt;Once upon a codebase, in a kingdom of endless debates, a lone developer pondered the age-old question: “What's the perfect project structure?” Spoiler alert: there isn't one. Let's embark on a quest to discover the charm of simplicity and the art of Go project structuring.&lt;/p&gt;

&lt;p&gt;The perennial question of which directory structure to use echoes across various social platforms every now and then. This subject has been discussed many times. I’ve been programming in Go for a couple of years now and have asked myself this question every time I start a new project. If you’re asking it yourself, here are my 2 cents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is project structure important?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"A well-organized directory structure is the scaffolding upon which a robust codebase stands tall."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A project structure is important because it affects &lt;em&gt;readability and maintainability.&lt;/em&gt; Think about your future self, six months from now, reading your code when it’s no longer fresh in your memory. If you have structured it well (and there are many ways to do so), it will be easier to jump back in. It will also be easier to &lt;em&gt;collaborate&lt;/em&gt;, as other developers can navigate the code more easily. Most of the time we read and maintain existing code rather than writing it from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the common pitfalls of project structure?
&lt;/h2&gt;

&lt;p&gt;There is no such thing as a perfect, one-size-fits-all project structure, so stop looking for one. It’s like asking for the perfect car: the answer depends on who you ask and what their standards and requirements are. &lt;em&gt;You won’t find a single answer&lt;/em&gt;, because like many things in computer science, it depends. Still, some general guidelines can be followed.&lt;/p&gt;

&lt;p&gt;Some of the common pitfalls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Over-engineering:&lt;/strong&gt; Trying to anticipate all possible scenarios and use cases before writing any code. This leads to unnecessary complexity and abstractions, which make your code harder to read and maintain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Under-engineering:&lt;/strong&gt; Writing all the code in a single package, without any structure or organization. This leads to a spaghetti codebase, which is harder to test and debug.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copy-pasta:&lt;/strong&gt; Blindly following the structure of another project, without giving it any thought or understanding the trade-offs behind it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to start simple and refactor later?
&lt;/h2&gt;

&lt;p&gt;My advice is to start simple and let the packages grow organically as you write the code. Don’t worry about creating the perfect packages and abstractions at this point. Just start by writing your code in a single package, &lt;code&gt;main&lt;/code&gt;, and see how it works. A simple directory tree is &lt;strong&gt;easier&lt;/strong&gt; to read, which in turn makes it easier to &lt;strong&gt;maintain&lt;/strong&gt; and to collaborate on with a team or other individuals.&lt;/p&gt;

&lt;p&gt;As you write more code, you’ll notice some patterns emerge and repetitions that you can extract to a different package. This will make your code more readable, maintainable, and testable.&lt;/p&gt;

&lt;p&gt;I find it much harder to get the abstractions right before writing code. How do you know which packages you need? Start writing your business logic. It is easier to refactor code &lt;em&gt;when you see it in front of you.&lt;/em&gt; It’s usually easier to extract some logic from a package than to refactor multiple packages because you got it wrong the first time.&lt;/p&gt;

&lt;p&gt;The key is to write code and refactor it later.&lt;/p&gt;

&lt;h2&gt;
  
  
  The cmd package pattern
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;A powerful pattern but make sure to ask yourself if you need it&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is a great pattern when &lt;em&gt;necessary.&lt;/em&gt; Are you writing a library, or an app with a single binary? Then you probably don’t need the &lt;code&gt;cmd&lt;/code&gt; package. Ask yourself what this abstraction buys you. Remember: keep it simple.&lt;br&gt;
This pattern is useful when your codebase compiles to multiple binaries, for example, a server and a CLI.&lt;/p&gt;

&lt;p&gt;Inside this package, you create sub-directories that each contain a &lt;code&gt;main.go&lt;/code&gt; file. Each sub-directory is compiled into its own binary; these are the entry points of your app. You should look into the &lt;a href="https://github.com/spf13/cobra"&gt;spf13/cobra: A Commander for modern Go CLI interactions&lt;/a&gt; project, a widely used library for creating powerful modern CLI applications. It blends well with &lt;a href="https://github.com/spf13/viper"&gt;spf13/viper: Go configuration with fangs&lt;/a&gt;, which handles loading configuration nicely.&lt;/p&gt;
&lt;h2&gt;
  
  
  The internal package
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;internal&lt;/code&gt; package has a special meaning in Go. Packages that reside under &lt;code&gt;internal/&lt;/code&gt; may not be imported by packages outside the source subtree in which they reside. Therefore, these are said to be &lt;em&gt;internal packages.&lt;/em&gt; The code placed here is internal to the project, and can’t be used outside of it.&lt;/p&gt;

&lt;p&gt;People used to group publicly importable packages under &lt;code&gt;pkg/&lt;/code&gt;, but I find it meaningless. &lt;a href="https://dave.cheney.net/2019/10/06/use-internal-packages-to-reduce-your-public-api-surface"&gt;A directory that exists only to hold other packages is a potential code smell.&lt;/a&gt;&lt;br&gt;
Instead, give your packages descriptive names. Try to describe &lt;strong&gt;what&lt;/strong&gt; they do, while &lt;a href="https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common"&gt;avoiding ambiguous names like utils or common.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The config package
&lt;/h2&gt;

&lt;p&gt;The config package is responsible for creating and managing the configuration object that your app depends on. It simplifies the process of importing configurations from a single source and accessing them from anywhere in your code base. It also avoids the circular dependency problem that can arise from importing configurations from multiple places. &lt;/p&gt;

&lt;p&gt;Let the config package handle your app's settings, harmonizing inputs from flags, environment variables, or files into a cohesive configuration object.&lt;/p&gt;
&lt;h2&gt;
  
  
  The API package
&lt;/h2&gt;

&lt;p&gt;The API package defines the interface of your app with the outside world. It contains the schema definition (openapi) and the models (which can be generated from the schema) that represent the data types and endpoints of your app. Any struct that is used by more than one package, or that is exposed to the client or to the storage layer, belongs here; only structs internal to a single package stay out of it.&lt;/p&gt;

&lt;p&gt;The API package is the protocol that enables communication between different components of your app.&lt;/p&gt;
&lt;h2&gt;
  
  
  Controllers, handlers, and storage
&lt;/h2&gt;

&lt;p&gt;Now, let's delve into structuring controllers, handlers, and storage. Two approaches stand out:&lt;/p&gt;
&lt;h4&gt;
  
  
  Approach 1: Storage and Handlers (or controllers) packages
&lt;/h4&gt;

&lt;p&gt;With the first approach, you write your storage and service/controller layers in separate packages where the service imports the storage. This allows you to decouple your business logic from your data access layer and use different storage implementations (postgres, sqlite, redis, etc.) without changing your service code. Here’s a simple view of this approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
└── project/
    ├── api/
    │   ├── models.go
    │   └── openapi3.yaml
    ├── cmd/
    │   ├── server/
    │   │   └── main.go
    │   └── cli/
    │       └── main.go
    └── internal/
        ├── config/
        │   └── config.go
        ├── storage/
        │   └── postgres/
        │       ├── users.go
        │       └── ...
        ├── handlers/
        │   ├── users.go
        │   ├── users_test.go
        │   ├── storage.go
        │   └── ..
        └── server/
            ├── server.go
            ├── server_test.go
            └── ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
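&lt;p&gt;To make the decoupling concrete, here is a compact sketch of how the handlers package can depend on a storage &lt;em&gt;interface&lt;/em&gt; rather than a concrete database. All names are illustrative; in the tree above, the interface would be declared in &lt;code&gt;handlers/storage.go&lt;/code&gt; and the real implementation would live under &lt;code&gt;storage/postgres/&lt;/code&gt;, with the in-memory version standing in for it here:&lt;/p&gt;

```go
package main

import "fmt"

// User mirrors the shared model from the api package (inlined for brevity).
type User struct {
	ID   int
	Name string
}

// UserStorage is the interface the handlers package declares; postgres,
// sqlite, or an in-memory fake can all satisfy it.
type UserStorage interface {
	GetUser(id int) (User, error)
}

// memoryStorage is a stand-in for a real internal/storage/postgres package.
type memoryStorage struct{ users map[int]User }

func (m memoryStorage) GetUser(id int) (User, error) {
	u, ok := m.users[id]
	if !ok {
		return User{}, fmt.Errorf("user %d not found", id)
	}
	return u, nil
}

// Handler holds its dependencies; swapping the storage implementation
// requires no changes to handler code.
type Handler struct{ store UserStorage }

func (h Handler) UserName(id int) (string, error) {
	u, err := h.store.GetUser(id)
	if err != nil {
		return "", err
	}
	return u.Name, nil
}

func main() {
	h := Handler{store: memoryStorage{users: map[int]User{1: {ID: 1, Name: "chen"}}}}
	name, _ := h.UserName(1)
	fmt.Println(name) // chen
}
```

&lt;p&gt;The same interface also makes the handlers easy to unit-test: the test just injects a fake storage instead of a database.&lt;/p&gt;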



&lt;h4&gt;
  
  
  Approach 2: Domain Entity Package
&lt;/h4&gt;

&lt;p&gt;With this approach, you encapsulate &lt;em&gt;logic by functionality&lt;/em&gt; so that both your service and storage code reside in the same package. This follows the principle of &lt;em&gt;domain-driven design&lt;/em&gt;, where you model your code around the business domain and its entities. Each package represents a domain entity (such as &lt;em&gt;user, post, comment,&lt;/em&gt; etc.) and contains all the code related to it (such as internal structs, methods, handlers, queries, etc.)&lt;/p&gt;

&lt;p&gt;This is a topic for a separate blog post I might do later on, but in the meantime, for more details about this type of architecture, check out this &lt;a href="https://threedots.tech/post/ddd-lite-in-go-introduction/"&gt;introduction to DDD&lt;/a&gt; or watch &lt;a href="https://www.youtube.com/watch?v=oL6JBUk6tj0"&gt;Kat Zien's great talk from GopherCon 2018&lt;/a&gt;. Here’s a simple view of this approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
└── project/
    ├── api/
    │   ├── models.go
    │   └── openapi3.yaml
    ├── cmd/
    │   ├── server/
    │   │   └── main.go
    │   └── cli/
    │       └── main.go
    └── internal/
        ├── config/
        │   └── config.go
        ├── &lt;span class="nb"&gt;users&lt;/span&gt;/
        │   ├── postgres/
        │   │   └── db.go
        │   ├── handlers.go
        │   ├── handlers_test.go
        │   ├── storage.go
        │   └── ..
        └── server/
            ├── server.go
            ├── server_test.go
            └── ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, there is no single ‘perfect’ way to structure your Go project. There isn’t a one-size-fits-all. There are multiple ways, and it depends on your use-case, preferences, and the trade-offs you’re willing to take. Experiment based on your needs and requirements.&lt;/p&gt;

&lt;p&gt;Embrace simplicity initially. Write code in a single package, observe emerging patterns, and refactor as needed. The key here is to write code first, and ‘perfect’ structure later.&lt;/p&gt;

&lt;p&gt;I hope this post gave you some insights and ideas on how to structure your next project. Happy coding!&lt;/p&gt;

</description>
      <category>go</category>
      <category>learning</category>
    </item>
    <item>
      <title>This is how you want to manage your Terraform modules</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Sun, 08 Jan 2023 07:21:57 +0000</pubDate>
      <link>https://dev.to/chen/this-is-how-you-want-to-manage-your-terraform-modules-5d12</link>
      <guid>https://dev.to/chen/this-is-how-you-want-to-manage-your-terraform-modules-5d12</guid>
      <description>&lt;p&gt;What if I told you there is a way to manage all your private terraform modules, in a mono-repo, with independent versioning, without using git tags? After researching for a proper open-source tool, I found the right one for the job.&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;A private registry is needed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I looked for a way to manage private terraform modules like public ones.&lt;br&gt;
In my company, we write tailor-made modules that describe our infrastructure. These modules have to be private.&lt;br&gt;
Terraform &lt;a href="https://developer.hashicorp.com/terraform/registry/modules/use#module-versions" rel="noopener noreferrer"&gt;recommends&lt;/a&gt; giving each module its own git repository, yet this carries the burden of managing and syncing multiple repositories.&lt;/p&gt;

&lt;p&gt;The second option suggested is to use a mono-repo, and reference the module's version using &lt;a href="https://developer.hashicorp.com/terraform/language/modules/sources#selecting-a-revision" rel="noopener noreferrer"&gt;git tags&lt;/a&gt;. At first, this seemed alright,&lt;br&gt;
but there was a drawback — you had to give up the terraform version syntax. You can no longer use the convenient &lt;code&gt;version = "~&amp;gt; 1.0.0"&lt;/code&gt; module parameter.&lt;/p&gt;

&lt;p&gt;It seemed to me that there had to be a better way.&lt;/p&gt;
&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;When it comes down to managing my infrastructure, I prefer the mono-repo approach. Since our terraform modules are small configuration blocks,&lt;br&gt;
it makes sense. It is much easier to find a module in a single repository rather than searching multiple repositories for a module.&lt;br&gt;
Our module release workflow looked something like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our repository was structured with sub-directories per module&lt;/li&gt;
&lt;li&gt;We used Git tags for module versioning in the form of &lt;em&gt;&lt;strong&gt;moduleName-vX.Y.Z&lt;/strong&gt;&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Updates to module versions were &lt;em&gt;hand-delivered&lt;/em&gt; to clients (even minor patches)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our root modules (where we execute terraform plan and apply) reference modules using git tags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  source = "https://github.com/my-org/my-repo?ref=s3-v1.0.0"

  .. module inputs ..
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This has worked well for a while. But once we had released a new module version and wanted to use it, there was no convenient way to apply that.&lt;/p&gt;

&lt;p&gt;We had to go through our root modules using this module and update their references. (We could write a simple script, but we decided not to. More on that later)&lt;/p&gt;

&lt;p&gt;The problem with this approach is that you can’t use the &lt;code&gt;version&lt;/code&gt; argument in the terraform block.&lt;br&gt;
This argument lets you specify a &lt;a href="https://developer.hashicorp.com/terraform/language/expressions/version-constraints" rel="noopener noreferrer"&gt;range of acceptable versions&lt;/a&gt; instead of a hard-coded one.&lt;/p&gt;

&lt;p&gt;For example, you can provide version constraints such as &lt;code&gt;version = "~&amp;gt; 1.2.0, &amp;lt; 2.0"&lt;/code&gt; which allows incrementing the "patch" automatically every time you run &lt;code&gt;terraform init&lt;/code&gt;.&lt;br&gt;
 No need to manually update a patch release, and no need to write a bash script. Terraform can manage that reliably for us,&lt;br&gt;
with a better API, if we can only discover a way to use this feature.&lt;/p&gt;

&lt;p&gt;Manually managing our tags was &lt;strong&gt;chaotic&lt;/strong&gt;. It is not an easy task to manage independent module versions in a mono-repo using the Git tags approach.&lt;/p&gt;

&lt;p&gt;A simpler approach is to release a version for the whole repository. That means every new release includes all the modules together.&lt;br&gt;
This couples the modules into a single artifact, which is easier to manage, but at the cost of development velocity.&lt;br&gt;
You cannot release minor patches just for your module, no matter how small the code change is.&lt;/p&gt;
&lt;h2&gt;
  
  
  Towards a better future
&lt;/h2&gt;

&lt;p&gt;The structure and workflows we had in place worked, just not as well or as easily as we wanted.&lt;br&gt;
What we needed was to be able to use terraform versioning syntax while maintaining our mono-repo.&lt;br&gt;
To achieve that, we need to treat our terraform &lt;em&gt;modules as artifacts&lt;/em&gt;: something we can archive, version, and release independently.&lt;/p&gt;

&lt;p&gt;After doing some research, we found an open-source project called &lt;strong&gt;&lt;a href="https://github.com/valentindeaconu/terralist" rel="noopener noreferrer"&gt;Terralist.&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terralist is a private Terraform registry for providers and modules following the published HashiCorp protocols.&lt;br&gt;
It provides a secure way to distribute your confidential modules and providers. That looked like a project that might help us to solve our problem, so we decided to try it out.&lt;/p&gt;

&lt;p&gt;So we maintain our mono-repo as it is. We created a job in our CI system that archives a single terraform module and uploads it to Terralist.&lt;br&gt;
Since it's a private registry, clients need to authenticate to download the modules. After authenticating (using &lt;code&gt;terraform login &amp;lt;terralist-url&amp;gt;&lt;/code&gt;),&lt;br&gt;
we can use our private modules just as we use public ones. Yes, that means we can use the versioning syntax I was talking about.&lt;/p&gt;

&lt;p&gt;Here is how we use our modules nowadays&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; terraform {
     source = "https://my-terralist.com/my-org/s3/aws"
     version = "~&amp;gt; 1.0.0"

     .. module inputs ..
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This client code will retrieve automatic patch updates on every execution from our CI system (if you execute terraform locally, you would need to re-run &lt;code&gt;terraform init&lt;/code&gt;).&lt;br&gt;
If we make larger code changes to one of our modules, for example, something that might break the API,&lt;br&gt;
we would increment the &lt;strong&gt;major or minor&lt;/strong&gt; version, so it wouldn't impact our clients.&lt;/p&gt;
&lt;h3&gt;
  
  
  Terralist
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Terralist&lt;/em&gt; has an API that lets you upload a module from a git repository. It means you don't need to archive the module yourself,&lt;br&gt;
just point Terralist to its location.&lt;/p&gt;

&lt;p&gt;Terralist will clone the repository and create the artifact for you. Let's walk through an example.&lt;/p&gt;

&lt;p&gt;I have a &lt;code&gt;demo&lt;/code&gt; module, which resides in my &lt;code&gt;terraform.git&lt;/code&gt; repository under the &lt;code&gt;modules/demo&lt;/code&gt; directory. In the module itself, we keep a &lt;code&gt;version.tf&lt;/code&gt;&lt;br&gt;
with the &lt;strong&gt;major and minor&lt;/strong&gt; versions of the module. These values change only &lt;em&gt;when we introduce a change that breaks our existing API&lt;/em&gt;&lt;br&gt;
(e.g., adding a new mandatory parameter without defaults). In such cases, we update the major or minor &lt;strong&gt;manually&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;patch&lt;/strong&gt; number in the semver represents a safe change, such as a bug fix or non-breaking changes to the module.&lt;br&gt;
This value is &lt;em&gt;calculated based on the build number&lt;/em&gt; of our CI. We don't really care about its value, because the version constraint&lt;br&gt;
applied is in the form of &lt;code&gt;version = "~&amp;gt; 1.0.0"&lt;/code&gt;. This automatically updates to the latest patch on every execution.&lt;/p&gt;

&lt;p&gt;As part of our module build process, we &lt;em&gt;read&lt;/em&gt; the values of &lt;code&gt;version.tf&lt;/code&gt; and append the patch. Then we upload the module to Terralist using this API call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST registry.example.com/v1/api/modules/demo/aws/1.0.10/upload &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer x-api-key:&lt;/span&gt;&lt;span class="nv"&gt;$TERRALIST_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{ "download_url": "https://github.com/example-org/terraform/archive/refs/heads/master.zip//modules/demo" }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The module version is composed of &lt;code&gt;major.minor&lt;/code&gt; coming from the &lt;code&gt;version.tf&lt;/code&gt; file, and the &lt;code&gt;.patch&lt;/code&gt; is the build ID, which is incremented with every run and guaranteed to be unique.&lt;/p&gt;

&lt;p&gt;Pay close attention to the &lt;strong&gt;double //&lt;/strong&gt; -- this instructs Terralist to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download an archived repository&lt;/li&gt;
&lt;li&gt;Extract it locally&lt;/li&gt;
&lt;li&gt;Make an archive only from the path &lt;code&gt;modules/demo&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Upload it to the registry (basically upload it to S3 and update the registry database)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's an elegant solution: I tell Terralist where my module is and what version it is tagged with, and it takes care of everything else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;There are various ways to manage your infrastructure code.&lt;/p&gt;

&lt;p&gt;It depends on multiple factors, such as team size, how much you are willing to spend on 3rd-party tools, and your company policy, to name a few.&lt;br&gt;
As of today, I've been using Terraform for more than 3 years, relying solely on the open-source ecosystem. That means my team and I manage everything related to our infrastructure (this might change soon, as our infrastructure has grown).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Terralist&lt;/em&gt; was a helpful addition to our stack; it solved a problem we had in our existing workflow, with minimal effort and no code changes to our existing modules.&lt;br&gt;
We just had to upload and version them to the new registry.&lt;br&gt;
Now, we can release each module independently and decide if we want our clients to automatically upgrade their module version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Happy Terraforming.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For further reading, the HashiCorp documentation contains a lot of good information. Check out the links below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform/language/modules/syntax" rel="noopener noreferrer"&gt;Terraform Module Blocks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform/registry/modules" rel="noopener noreferrer"&gt;Finding and using modules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform/tutorials/modules" rel="noopener noreferrer"&gt;Reuse configuration with modules (tutorial)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>softwaredevelopment</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Practical unit-testing web client in Go</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Fri, 28 May 2021 11:24:29 +0000</pubDate>
      <link>https://dev.to/chen/practical-unit-testing-web-client-in-go-1o2m</link>
      <guid>https://dev.to/chen/practical-unit-testing-web-client-in-go-1o2m</guid>
<description>&lt;p&gt;I've started a Go project, a lightweight Hashicorp Vault&lt;sup id="fnref1"&gt;1&lt;/sup&gt; client with no dependencies and a simple API (for the user). Part of the reason I use no 3rd-party modules is that I want to better understand Go's internals and structure, and improve my skills. In this post, I'll walk through a &lt;em&gt;practical&lt;/em&gt; example of how I ended up testing it.&lt;/p&gt;

&lt;p&gt;The project's name is &lt;strong&gt;&lt;a href="https://github.com/canidam/libvault"&gt;libvault&lt;/a&gt;&lt;/strong&gt;, and it's my &lt;em&gt;first open-source&lt;/em&gt; project.&lt;/p&gt;

&lt;p&gt;Vault is a secret manager service with a web API and CLI. Applications can communicate with it over HTTP. I find the CLI easy to use; however, the official Vault library is a bit more complicated. It felt like a Swiss Army knife when all I needed was a simple kitchen knife. I decided to implement a light version that covers basic functionality while maintaining a simple API.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I'm not going to cover the basics in this post. The intention of it is to be &lt;em&gt;practical&lt;/em&gt; and provide &lt;em&gt;real-life, working&lt;/em&gt; examples. The opinions here are mine. It worked for me, still, it doesn't mean it would work for you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;I removed ALL the error handling from the code for brevity. Please make sure you handle your errors.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Go standard library provides a really good testing package. You can manage without external frameworks or 3rd party packages. In my scenario, I need to test a web client that I wrote. The technique I found useful is &lt;em&gt;mock testing.&lt;/em&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Mock testing is an approach to unit testing that lets you make assertions about how the code under test is interacting with other system modules. In mock testing, the dependencies are replaced with objects that simulate the behaviour of the real ones. ... Such a service can be replaced with a mock object. ~ Wikipedia&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I had to choose which component to mock:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Client&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Webserver&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At a high level, mocking the client means creating a new struct that implements the &lt;em&gt;interface&lt;/em&gt; you are testing (mocking the interface). Then, provide your mocking client to the code under test. A good library with examples is &lt;a href="https://github.com/stretchr/testify#mock-package"&gt;testify&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I didn't find it useful for my use case, as I need to mock the server side. I prefer not to modify any code on the client if I can avoid it. This results in more &lt;strong&gt;reliable tests&lt;/strong&gt; for my package. So I've chosen the second option. Read on to see how.&lt;/p&gt;

&lt;h2&gt;
  
  
  The httptest package
&lt;/h2&gt;

&lt;p&gt;Go is very friendly to web services; it has a &lt;a href="https://golang.org/pkg/net/http/httptest/"&gt;utility package (httptest)&lt;/a&gt; for testing an http server. You can simply start a webserver in your &lt;em&gt;testing code&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;First I had to fetch &lt;strong&gt;an exact response&lt;/strong&gt; from a &lt;strong&gt;real webserver&lt;/strong&gt; (Vault server), then I could easily mock it using this package. For example, when querying the &lt;code&gt;/v1/auth/approle/login&lt;/code&gt; endpoint, the response looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"request_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"de7c8097-1a38-50a6-b971-fe1836840e45"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"lease_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"renewable"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that I have the content, I find it easier to save it to a file (rather than put it inside the code). Go &lt;code&gt;testing&lt;/code&gt; package has another cool feature:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The go tool will ignore a directory named "testdata", making it available&lt;br&gt;
to hold ancillary data needed by the tests.&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I created a directory named &lt;code&gt;testdata&lt;/code&gt; inside my project and placed the JSON content in a file - &lt;em&gt;approleExample.json&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start the mocking server
&lt;/h2&gt;

&lt;p&gt;I'll start with code, followed by an explanation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"io"&lt;/span&gt;
    &lt;span class="s"&gt;"net/http"&lt;/span&gt;
    &lt;span class="s"&gt;"net/http/httptest"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;TestClientLogin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;mux&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServeMux&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;httptest&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mux&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;mux&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HandleFunc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/v1/auth/approle/login"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ResponseWriter&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;// request validation logic&lt;/span&gt;
        &lt;span class="o"&gt;...&lt;/span&gt;
        &lt;span class="c"&gt;// read json response&lt;/span&gt;
        &lt;span class="n"&gt;jsonPayload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;ioutil&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ReadFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"testdata/approleExample.json"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Header&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Content-Type"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"application/json"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WriteHeader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jsonPayload&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="c"&gt;// initalize new client pointing to the testing server&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;NewClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;// error handling&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// test logic&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're familiar with the Go http package, this code is pretty self-explanatory. This is another advantage of using the standard library - &lt;em&gt;you have one less thing to learn&lt;/em&gt; when you want to contribute.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;mux&lt;/code&gt;&lt;/strong&gt; is of &lt;code&gt;ServeMux&lt;/code&gt; type. Which is &lt;em&gt;an HTTP request multiplexer&lt;/em&gt;. It matches the URL of each incoming request (&lt;code&gt;/v1/auth/approle/login&lt;/code&gt;) against a list of registered patterns and calls the handler for the pattern that most closely matches the URL (our mux.HandleFunc function body).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;ts&lt;/code&gt;&lt;/strong&gt; is of &lt;code&gt;Server&lt;/code&gt; type. A Server is an HTTP server listening on a system-chosen port on the local loopback interface, for use in end-to-end HTTP tests. Note that I use &lt;code&gt;httptest.NewServer&lt;/code&gt; to initialize it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;mux.HandleFunc(..)&lt;/code&gt;&lt;/strong&gt; defines a &lt;em&gt;path&lt;/em&gt; and an &lt;em&gt;handler (a function)&lt;/em&gt; to call. The content inside describes our server's response.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;client, err := NewClient(ts.URL)&lt;/code&gt;&lt;/strong&gt; creates a new Client (my Vault client), providing it the test server URL to work with.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The elegance of this pattern is that &lt;em&gt;every&lt;/em&gt; test case has its own test webserver with all the relevant configurations. The mocked content, the test logic, etc. are all &lt;em&gt;implemented inside the test itself.&lt;/em&gt;&lt;br&gt;
This really makes life easier when &lt;em&gt;debugging or reviewing a test case's logic&lt;/em&gt; or coverage.&lt;/p&gt;

&lt;p&gt;We can improve this further and make the code clearer and more concise. It includes some boilerplate: creating the mux, creating the server, and reading the JSON content. We just need to refactor these elements out (into a &lt;code&gt;setup()&lt;/code&gt; function and a &lt;code&gt;readJson(path string)&lt;/code&gt; helper, for example), then call &lt;code&gt;setup()&lt;/code&gt; in every test case. I'll leave that to the reader to decide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;There are numerous articles about the importance of software testing. &lt;em&gt;Go makes it easier&lt;/em&gt;. You &lt;strong&gt;should&lt;/strong&gt; always write tests for your packages; they have too many advantages to skip. However, many people skip them anyway, presumably because tests don't add any functionality to the code itself. &lt;em&gt;Don't be one of those people.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Go standard library provides great tools, and you should use them. Personally, I prefer it over other dependencies.&lt;/p&gt;

&lt;p&gt;In this post, I gave a practical example of how you can unit-test a web client by &lt;em&gt;mocking&lt;/em&gt; the server it talks to.&lt;/p&gt;

&lt;p&gt;The takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Always&lt;/strong&gt; write tests. They are too valuable to give up and very easy to do with Go.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mock&lt;/strong&gt; the part your code connects to; don't mock &lt;em&gt;your&lt;/em&gt; code. If you write a server, mock the client, and vice versa. It makes your tests much more reliable.

&lt;ol&gt;
&lt;li&gt;If you mock a server, get a &lt;em&gt;real&lt;/em&gt; server response and save it to a file&lt;/li&gt;
&lt;li&gt;Do that for every API you would like to mock&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Use the &lt;strong&gt;&lt;code&gt;testdata&lt;/code&gt;&lt;/strong&gt; directory to hold the ancillary data your tests need&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the next article, I will cover how to test with a TLS (HTTPS) web server and self-signed certificates, adding more examples. You can also find more examples in the source code.&lt;/p&gt;

&lt;p&gt;Feedback and comments are welcome.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://www.vaultproject.io/"&gt;https://www.vaultproject.io/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;&lt;a href="https://golang.org/pkg/cmd/go/internal/test/"&gt;https://golang.org/pkg/cmd/go/internal/test/&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>go</category>
      <category>code</category>
      <category>testing</category>
    </item>
    <item>
      <title>How to share persistent storage volumes in Swarm</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Tue, 23 Feb 2021 18:07:22 +0000</pubDate>
      <link>https://dev.to/chen/how-to-share-persistent-storage-volumes-in-swarm-4e00</link>
      <guid>https://dev.to/chen/how-to-share-persistent-storage-volumes-in-swarm-4e00</guid>
      <description>&lt;p&gt;Docker &lt;a href="https://docs.docker.com/engine/swarm/key-concepts/" rel="noopener noreferrer"&gt;swarm&lt;/a&gt; is an orchestration tool, similar to Kubernetes, but simpler to set up and manage.&lt;br&gt;
A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles.&lt;/p&gt;

&lt;p&gt;The docker swarm feature is embedded in the Docker Engine (using &lt;a href="https://github.com/docker/swarmkit/" rel="noopener noreferrer"&gt;swarmkit&lt;/a&gt;). This means you don't need to install extra packages to use it; you just need Docker. If you decide to put it in place, one of the open problems to address is persistent storage.&lt;/p&gt;

&lt;p&gt;In the old days, many processes could share a local disk on the host. It was a common pattern to have one process that writes files and another that consumes them. The two processes lived happily on the same host, working together without problems. But it's another story in the containerized world. When containers are managed by an orchestrator, you can't really tell where your container will be scheduled to run. It won't have a permanent host, nor do you want to pin it to a specific host, as you'd lose many of the orchestrator's benefits (resiliency, fail-over, etc.).&lt;/p&gt;
&lt;h2&gt;
  
  
  The Problem: Data Persistency on the Swarm cluster
&lt;/h2&gt;

&lt;p&gt;Suppose you have two services that need to share a disk, or a service that requires data persistency, such as Redis. What are your options?&lt;/p&gt;

&lt;p&gt;One easy option is to use &lt;em&gt;&lt;a href="https://docs.docker.com/storage/volumes/" rel="noopener noreferrer"&gt;docker volumes&lt;/a&gt;&lt;/em&gt;. It's a good option if you run on a single node because when you create a new docker volume, it resides on the host it was created on.&lt;/p&gt;

&lt;p&gt;But what happens when it runs on a cluster? Good question. Let's walk through an example, using one from the Docker documentation (&lt;a href="https://docs.docker.com/compose/gettingstarted/" rel="noopener noreferrer"&gt;Get Started with Docker Compose&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Here are the files I'm using for this example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;devopsian: ~/demo
➜ tree
&lt;span class="nb"&gt;.&lt;/span&gt;
 |-app.py
 |-Dockerfile
 |-docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;redis&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6379&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_hit_count&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;retries&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;incr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hits&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;exceptions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;ConnectionError&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;retries&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;
            &lt;span class="n"&gt;retries&lt;/span&gt; &lt;span class="o"&gt;-=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
            &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hello&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_hit_count&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Hello World! I have been seen {} times.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Dockerfile&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.7-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /code&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; FLASK_APP=app.py&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; FLASK_RUN_HOST=0.0.0.0&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk add &lt;span class="nt"&gt;--no-cache&lt;/span&gt; gcc musl-dev linux-headers
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; flask redis
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 5000&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["flask", "run"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# docker-compose.yml&lt;/span&gt;
version: "3.7"
services:
  app:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
    volumes:
      - "redisdata:/data"

volumes:
  redisdata:
    driver: "local"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Suppose you have a small cluster of 3 nodes (their roles in the cluster don't matter for this example). You run your docker-compose file with your app and a Redis instance, and define a volume for Redis. The first time you run docker-compose, docker creates the volume and mounts it to your service on startup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxr3ct209ulbxkg7pviq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxr3ct209ulbxkg7pviq.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you want to deploy a new version of your service. The cluster decides to schedule the Redis instance on a different node than before. What will happen to your volume?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tujnqlexq63qnl4zu04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tujnqlexq63qnl4zu04.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the volume doesn't exist on the node the service now runs on, docker creates a new volume with the same name on the new node. The previous volume, with all the data, still exists, but it resides on the old node, and the Redis service has no access to it. Redis now has a new, empty volume to use. &lt;strong&gt;You end up in an inconsistent state&lt;/strong&gt;. This solution doesn't work for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Elastic File System (EFS)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use .. It is built to &lt;em&gt;scale on-demand&lt;/em&gt; to petabytes without disrupting applications, &lt;em&gt;growing and shrinking automatically&lt;/em&gt; as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;EFS provides an NFS volume you can mount at &lt;strong&gt;runtime&lt;/strong&gt;. The swarm cluster schedules the service on one of its nodes and mounts the NFS volume into the container.&lt;/p&gt;

&lt;p&gt;With these configurations, if the cluster decides to move our Redis service to a different node (due to a failure, deployment, etc.), the mount (and hence the data) will move to the new node too. I won't go through how to create an EFS volume; you can find the steps in the &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/efs/latest/ug/getting-started.html" rel="noopener noreferrer"&gt;AWS tutorial&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To mount the volume, we need to install the &lt;em&gt;nfs-common&lt;/em&gt; package on the swarm nodes. On Ubuntu, you can install it with &lt;code&gt;sudo apt-get install nfs-common&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, we will update the &lt;em&gt;volumes&lt;/em&gt; definition in our docker-compose file with the new driver type and the EFS address. It looks as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# docker-compose-with-efs.yml&lt;/span&gt;
version: "3.7"
services:
  app:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
    volumes:
      - "redis-efs:/data"

volumes:
  redis-efs:
    driver: local
    driver_opts:
      type: nfs
      o: addr=fs-1224ea45.efs.us-east-1.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
      device: "fs-1224ea45.efs.us-east-1.amazonaws.com:/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we run our app now, the Redis data volume is &lt;em&gt;persistent&lt;/em&gt;: it follows the service to whatever node Redis runs on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz7n7klb7jqy8gpy42or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz7n7klb7jqy8gpy42or.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This solves our original problem.&lt;/em&gt; We now have a way to persist data across our cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Costs
&lt;/h2&gt;

&lt;p&gt;This blog post uses AWS as the cloud infrastructure. The closest AWS service to compare EFS against is EBS volumes. I assume you're familiar with them.&lt;/p&gt;

&lt;p&gt;You should use the AWS pricing calculators to tailor the price to your use case.&lt;/p&gt;

&lt;p&gt;These are not apples to apples, but let's do a quick price comparison between the two, for a 100GB volume in the EU region.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://aws.amazon.com/ebs/pricing/" rel="noopener noreferrer"&gt;EBS&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Charged for&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;$0.088/GB-month × 100 GB&lt;/td&gt;
&lt;td&gt;$8.80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IOPS&lt;/td&gt;
&lt;td&gt;3K - included&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Throughput&lt;/td&gt;
&lt;td&gt;125MB/s - included&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can increase the throughput or IOPS, with additional costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://aws.amazon.com/efs/pricing/" rel="noopener noreferrer"&gt;EFS&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;With EFS, the charges are different (shocking, eh?). You pay for the &lt;em&gt;"Standard Storage Class"&lt;/em&gt;, which is designed for active file system workloads.&lt;/p&gt;

&lt;p&gt;You pay a different price for the &lt;em&gt;"Infrequent Access Storage Class"&lt;/em&gt; (IA), which is cost-optimized for files accessed less frequently. For this comparison, let's estimate that 50% of the data is frequently accessed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Charged for&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Standard Storage&lt;/td&gt;
&lt;td&gt;$0.33/GB-month × 50 GB&lt;/td&gt;
&lt;td&gt;$16.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IA Storage&lt;/td&gt;
&lt;td&gt;$0.025/GB-month × 50 GB&lt;/td&gt;
&lt;td&gt;$1.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Throughput&lt;/td&gt;
&lt;td&gt;2.5MB/s included, $6.60 per additional MB/s&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The throughput part is kinda &lt;strong&gt;tricky&lt;/strong&gt;. You get 50KB/s per Standard Storage GB (50GB × 50KB/s = 2.5MB/s) for &lt;em&gt;write operations&lt;/em&gt; and 150KB/s per GB (50GB × 150KB/s = 7.5MB/s) for &lt;em&gt;read operations&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;As you can see, there is a big performance gap between EBS and EFS. If you need high throughput, EFS might get very expensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The problem I wrote about in this blog is a known one when running microservices on a managed cluster using an orchestration tool. As with many problems, there is more than one solution, each with its trade-offs.&lt;/p&gt;

&lt;p&gt;I showed that EFS is a convenient, easy solution you can use to solve the shared storage problem. Yet, it is not the right solution for everything. The performance is limited, and it can get very expensive if your app requires high throughput. In those cases, you would want to use something else.&lt;/p&gt;

&lt;p&gt;If you can &lt;em&gt;compromise&lt;/em&gt; on the &lt;em&gt;throughput&lt;/em&gt;, EFS becomes an attractive option to manage your cluster shared storage.&lt;/p&gt;

&lt;p&gt;When you define your NFS volumes inside your docker-compose file, they become part of the service definition; the service carries the infrastructure it uses. I find this a good pattern when dealing with microservices.&lt;br&gt;
If you ever need to move the service to another cluster, the NFS mount is one less thing you need to remember to set up.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>aws</category>
      <category>orchestration</category>
    </item>
    <item>
      <title>Docker healthcheck experiments with Go web app</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Fri, 12 Feb 2021 12:33:32 +0000</pubDate>
      <link>https://dev.to/chen/docker-healthcheck-experiments-with-go-web-app-3c3p</link>
      <guid>https://dev.to/chen/docker-healthcheck-experiments-with-go-web-app-3c3p</guid>
      <description>&lt;p&gt;One good way to monitor your container status is to use Docker's HEALTHCHECK feature.&lt;/p&gt;

&lt;p&gt;As part of testing what actions follow an unhealthy container, I had to experiment with how this works. I wrote a small Go app that replies to &lt;code&gt;/health&lt;/code&gt; requests, and the status it responds with is configurable.&lt;/p&gt;

&lt;p&gt;The webserver listens on &lt;strong&gt;$PORT&lt;/strong&gt; (defaults to 8080). To change its status, simply make an API call to one of the supported actions. You can either exec into the container to make the call,&lt;br&gt;
or, if you have mapped a port on the host, use that.&lt;/p&gt;

&lt;p&gt;Supported actions are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/sabotage&lt;/strong&gt; will make it respond with 500.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/timeout&lt;/strong&gt; will make it respond after 20s.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/recover&lt;/strong&gt; will return it to a healthy state, with a 200 response code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find the source code &lt;a href="https://github.com/canidam/docker-go-healthcheck/tree/master/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;From the Docker &lt;a href="https://docs.docker.com/engine/reference/builder/#healthcheck"&gt;docs&lt;/a&gt; -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;The HEALTHCHECK instruction has two forms:

    HEALTHCHECK &lt;span class="o"&gt;[&lt;/span&gt;OPTIONS] CMD &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;check container health by running a &lt;span class="nb"&gt;command &lt;/span&gt;inside the container&lt;span class="o"&gt;)&lt;/span&gt;
    HEALTHCHECK NONE &lt;span class="o"&gt;(&lt;/span&gt;disable any healthcheck inherited from the base image&lt;span class="o"&gt;)&lt;/span&gt;

The HEALTHCHECK instruction tells Docker how to &lt;span class="nb"&gt;test &lt;/span&gt;a container to check that it is still working. 
This can detect cases such as a web server that is stuck &lt;span class="k"&gt;in &lt;/span&gt;an infinite loop and unable 
to handle new connections, even though the server process is still running.

When a container has a healthcheck specified, it has a health status &lt;span class="k"&gt;in &lt;/span&gt;addition to its normal status. 
This status is initially starting. Whenever a health check passes, it becomes healthy &lt;span class="o"&gt;(&lt;/span&gt;whatever state 
it was previously &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; After a certain number of consecutive failures, it becomes unhealthy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The options that can appear before CMD are:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--interval=DURATION (default: 30s)
--timeout=DURATION (default: 30s)
--start-period=DURATION (default: 0s)
--retries=N (default: 3)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can configure the HEALTHCHECK settings in the &lt;strong&gt;Dockerfile&lt;/strong&gt; or the &lt;strong&gt;docker-compose.yml&lt;/strong&gt;. In my example,&lt;br&gt;
I use it in the Dockerfile.&lt;/p&gt;
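&lt;p&gt;For reference, a HEALTHCHECK instruction for an app like this could look as follows (the probe command, intervals, and port are assumptions; the repo's Dockerfile may use different values):&lt;/p&gt;

```dockerfile
# Probe /health every 5s; after 3 consecutive failures
# the container is marked unhealthy.
HEALTHCHECK --interval=5s --timeout=3s --retries=3 \
  CMD wget -q -O /dev/null http://localhost:8080/health || exit 1
```

&lt;p&gt;This assumes &lt;code&gt;wget&lt;/code&gt; is available in the image; use whatever HTTP client your base image ships with.&lt;/p&gt;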

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone the repo and build the image: &lt;code&gt;docker build . -t go-healthchecker&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bring up the container, wait until it gets healthy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7sLIiWZ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/canidam/docker-go-healthcheck/blob/master/media/up.gif%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7sLIiWZ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/canidam/docker-go-healthcheck/blob/master/media/up.gif%3Fraw%3Dtrue" alt="Run the container" width="880" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To change the healthcheck response, connect to the container and update its status&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9KPJzojp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/canidam/docker-go-healthcheck/blob/master/media/sabotage.gif%3Fraw%3Dtrue" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9KPJzojp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://github.com/canidam/docker-go-healthcheck/blob/master/media/sabotage.gif%3Fraw%3Dtrue" alt="Sabotage" width="880" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a future blog post, I will share how you can use the &lt;strong&gt;HEALTHCHECK&lt;/strong&gt; feature to control docker-compose service startup in a more fine-grained way, e.g. "how to make service X start before service Y".&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>go</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How can Stackoverflow make you a better developer</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Fri, 05 Feb 2021 14:53:52 +0000</pubDate>
      <link>https://dev.to/chen/how-can-stackoverflow-make-you-a-better-developer-16jo</link>
      <guid>https://dev.to/chen/how-can-stackoverflow-make-you-a-better-developer-16jo</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"Time is the most valuable resource because you cannot get more of it."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There isn't a single developer who doesn't use Stackoverflow. Most of us use it daily. How many times a day have you googled an error and landed on Stackoverflow? It is such a valuable resource that it improves productivity every single day. Consider the amount of debugging time it has saved you. Yet, the majority of the developers I know don't even own an account, let alone try to answer other people's questions.&lt;/p&gt;

&lt;p&gt;It is so easy and convenient to find a solution to the problem you're facing. In this post, I'll share the benefits I've found in being a contributor on Stackoverflow.&lt;/p&gt;

&lt;p&gt;Most experienced developers will probably stop reading right now. But if you're looking to improve your coding and problem-solving skills, I hope you will own an account (and contribute) after reading this post.&lt;/p&gt;

&lt;p&gt;If you're not familiar with where it started, the site was introduced by Jeff Atwood on his blog &lt;a href="https://blog.codinghorror.com/introducing-stackoverflow-com"&gt;codinghorror&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Stackoverflow is sort of like the anti-experts-exchange (minus the nausea-inducing sleaze and quasi-legal search engine gaming) meets Wikipedia meets programming Reddit. It is by programmers, for programmers, with the ultimate intent of collectively increasing the sum total of good programming knowledge in the world. No matter what programming language you use, or what operating system you call home. Better programming is our goal."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Learn something new
&lt;/h2&gt;

&lt;p&gt;When I started learning Python about five years ago, I built my own curriculum from a few books and online courses. I put in the time and effort (about four months) to practice. Like most of us, when I faced challenges I couldn't solve by myself, I used Google and found an answer on .. you guessed it, Stackoverflow. After a couple of iterations of this scenario, I thought to myself,&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"If I find answers to my questions on Stackoverflow, let's take a peek at the most upvoted questions tagged with Python and see what it reveals".&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It was like a &lt;strong&gt;goldmine&lt;/strong&gt; for me. Clear, refined questions with detailed answers and examples, for free. No subscription, no money-back guarantee, plain and simple. I started checking all of them, reading all the answers thoroughly to nail down the subject in question. These are practical questions with high-quality answers from experienced professionals.&lt;/p&gt;

&lt;p&gt;Next time you plan to learn something new, give it a try. Go to Stackoverflow and look for the most upvoted questions and answers. If you haven't ever done it before, you're gonna be amazed.&lt;/p&gt;

&lt;p&gt;I then challenged myself. I rolled up my sleeves and decided I would try to &lt;em&gt;answer questions.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why would I spend my precious time solving other people's problems?
&lt;/h2&gt;

&lt;p&gt;That's a damn good question. When I took the challenge, I was &lt;strong&gt;selfish&lt;/strong&gt;. I wanted to gain confidence in my knowledge. I liked solving problems with the things I had learned. I looked for new questions that I had a solution for and tried to help using what I had learned. I did this to improve myself as a &lt;strong&gt;programmer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I feel the time I spent was paid back double. It made me a better programmer and problem-solver, and contributed to my professional career.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Knowledge is like money: to be of value it must circulate, and in circulating it can increase in quality and, hopefully, in value" -- &lt;strong&gt;Louis L'Amour&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why YOU should contribute
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Improve your coding and problem-solving skills&lt;/em&gt;&lt;/strong&gt;: I can't stress enough the value of spending time on this site, reading through popular questions, and trying to solve them yourself. Answering other people's questions strengthens your knowledge of, and confidence in, the topic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Improve your debugging capabilities&lt;/em&gt;&lt;/strong&gt;: It's one of the most valuable skills of a software engineer. By helping other people, you practice debugging their code, which is harder than debugging your own&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;If you don't find the question you're looking for - ask&lt;/em&gt;&lt;/strong&gt;: Let the community help, and help other people in the future who encounter the same problem you did&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Learn something new, solve a new problem, dig deeper into a topic you're familiar with, and expand your knowledge&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Use your Stackoverflow profile to promote yourself. By accurately answering questions, you demonstrate your experience in a topic&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where to start?
&lt;/h2&gt;

&lt;p&gt;This blog post aimed to get you started. First and foremost, if you don't have an account, &lt;strong&gt;sign up&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I &lt;strong&gt;recommend&lt;/strong&gt; checking these two threads, &lt;a href="https://meta.stackoverflow.com/questions/318295/how-do-i-contribute-to-stack-overflow"&gt;“How do I contribute to Stackoverflow”&lt;/a&gt; and &lt;a href="https://meta.stackoverflow.com/questions/252149/how-does-a-new-user-get-started-on-stack-overflow"&gt;“How does a new user get started”&lt;/a&gt;. Spend the time reading to understand how this platform works and how to use it.&lt;/p&gt;

&lt;p&gt;Whether you're learning something new or want to improve your coding skills, try to participate and answer questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search for tags of interest to see only those kinds of questions (e.g., python, go, linux)&lt;/li&gt;
&lt;li&gt;Spend ~30-60 minutes a day for 14 days. It doesn't need to be contiguous; you can check in while you're waiting for a compilation or deployment to finish&lt;/li&gt;
&lt;li&gt;Make it a &lt;strong&gt;habit&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's all it takes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I have found actively participating on Stackoverflow to be of great value, both for my development skills and for my career. Beginners and experienced developers alike will benefit from it if used correctly. I tried to share my own experience and what works for me, and I encourage you to share your knowledge. And have fun.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Stay humble, be kind.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>productivity</category>
      <category>beginners</category>
      <category>code</category>
    </item>
    <item>
      <title>If you don't use a secret management tool, you're doing it wrong</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Wed, 27 Jan 2021 20:19:17 +0000</pubDate>
      <link>https://dev.to/chen/if-you-don-t-use-a-secret-management-tool-you-re-doing-it-wrong-3d4b</link>
      <guid>https://dev.to/chen/if-you-don-t-use-a-secret-management-tool-you-re-doing-it-wrong-3d4b</guid>
      <description>&lt;p&gt;Secrets management refers to the tools and methods for managing digital authentication credentials (secrets), including passwords, keys, APIs, and tokens for use in applications, services, privileged accounts, and other sensitive parts of the IT ecosystem.&lt;/p&gt;

&lt;p&gt;While secrets management is applicable across an entire enterprise, the terms secrets and secrets management are used most commonly in IT in the context of DevOps environments, tools, and processes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Three may keep a secret, if two of them are dead.”&lt;br&gt;
― Benjamin Franklin, Poor Richard's Almanack&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Managing and sharing secrets is a complicated task. There are various environments, with many services, where each needs to authenticate itself.&lt;/p&gt;

&lt;p&gt;If you're working in the Operations team, you have surely come across a secret committed to a git repository at some point. Even if the repository is private, this is a big no-no. If someone accidentally makes the repository public, or pushes a file with secrets to a public repository, there's no turning back. The secret will be out there in the wild.&lt;/p&gt;

&lt;p&gt;Before I proceed, I want to ask you a few questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Where does your organization keep its sensitive data? Is it in an encrypted file? A database? A shared tool such as 1Password?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Who can access it?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How easy is it to add a new secret, or update an existing one?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Do you manage your secrets, or do they manage you?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hashicorp Vault
&lt;/h2&gt;

&lt;p&gt;Hashicorp Vault is a robust open-source secret management tool. It serves as a secret repository with access control lists, auditing, and TTL-based access to the secrets. It also supports a variety of authentication mechanisms and storage backends.&lt;/p&gt;

&lt;p&gt;Vault keeps your secrets encrypted at rest and in transit. It has a simple API to communicate with and great documentation. The website presents two main use-cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secrets Management&lt;/strong&gt;: centrally store, access, and distribute dynamic secrets such as tokens, passwords, certificates, encryption keys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Protection&lt;/strong&gt;: keep application data secure with centralized key management and simple APIs for data encryption.&lt;/p&gt;

&lt;p&gt;It gives you a central place to safely store secrets. You define who can access what, with an audit trail of the operations. Short-lived tokens reduce the impact in case a secret gets exposed. And it has great APIs you can leverage to automate processes.&lt;/p&gt;

&lt;p&gt;Vault takes security seriously. Its architecture is a little complex, so let's quickly go over the components and explain them in simple terms.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Secrets Paths&lt;/em&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Inside Vault, secrets are kept under file-system-like paths. Every secret is structured as a JSON object with key-value pairs. These entries are versioned and keep their history (see the KV secrets engine).&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Policies&lt;/em&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Vault uses policies to define fine-grained access control. A policy is made of a list of paths (with wildcard support). A policy is then attached to a token and determines what secrets that token can access and what actions it can perform (create, read, update, delete).&lt;/p&gt;
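&lt;p&gt;As an illustration, a minimal policy could look like the following sketch (the paths and capabilities are hypothetical; note that the KV v2 engine prefixes secret paths with &lt;code&gt;data/&lt;/code&gt;):&lt;/p&gt;

```hcl
# Read-only access to everything under secret/data/myapp/
path "secret/data/myapp/*" {
  capabilities = ["read", "list"]
}

# Full management of the team's own secrets
path "secret/data/team-a/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
```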

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Authentication and authorization&lt;/em&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We use the term authentication to refer to the first phase of the login process. For example, the user provides the correct username and password to authenticate. If the credentials match, the user is authenticated.&lt;/p&gt;

&lt;p&gt;The authorization part comes next. It defines what an authenticated user has access to.&lt;/p&gt;

&lt;p&gt;To perform operations on the Vault, a user needs to authenticate first.&lt;br&gt;
Vault refers to this as &lt;em&gt;authentication backends&lt;/em&gt;.&lt;br&gt;
It is a list of &lt;a href="https://www.vaultproject.io/docs/auth"&gt;authentication methods&lt;/a&gt; a user can use to authenticate.&lt;/p&gt;

&lt;p&gt;The simplest option is a token, where you perform the login operation using a pre-defined token to gain access. This is a bad idea. There are better options.&lt;/p&gt;

&lt;p&gt;Vault has &lt;strong&gt;SSO (OIDC)&lt;/strong&gt; integration. People in the organization can sign in using their AzureAD or Google accounts. Once you set it up, everyone who has an account can access Vault! You don't need to manage credentials for each person individually.&lt;/p&gt;

&lt;p&gt;Once a user logs in, the policies attached to their account or to their groups in the organization are applied to their token. This defines what they can do inside Vault.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;em&gt;The Flow&lt;/em&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Except for token-based authentication, all other methods work the same way. You first provide your credentials (username/password, SSO, etc.) to perform a login action. In exchange, Vault returns a &lt;em&gt;temporary token&lt;/em&gt; for you to use in subsequent API calls. The temporary token is defined by its attributes: its lifetime, whether it is renewable, and so on.&lt;/p&gt;

&lt;p&gt;When using the UI or the CLI, these things happen behind the scenes. Yet, if you're using the HTTP API, you'll need to perform the login action yourself and grab the token returned by Vault.&lt;/p&gt;
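&lt;p&gt;To make that flow concrete, here is a minimal Python sketch of the exchange. The response shape follows Vault's documented login output; the helper names and sample values are my own:&lt;/p&gt;

```python
def login_path(method: str, username: str) -> str:
    """Build the login endpoint for a username-based auth method."""
    return f"/v1/auth/{method}/login/{username}"


def extract_client_token(login_response: dict) -> str:
    """Pull the temporary token out of a Vault login response."""
    return login_response["auth"]["client_token"]


# A trimmed-down example of what Vault returns on a successful login:
sample_response = {
    "auth": {
        "client_token": "hvs.example-token",
        "renewable": True,
        "lease_duration": 2764800,  # the token's lifetime, in seconds
    }
}

token = extract_client_token(sample_response)
# The token is then sent in the X-Vault-Token header on subsequent calls.
headers = {"X-Vault-Token": token}
```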

&lt;p&gt;This is a new era. Developers can now read and add secrets by themselves and share them with Operations seamlessly. This increases the velocity of both teams and improves security as a side effect. For example, when employees leave the company, their access is easily revoked, and short-lived tokens can be used instead of static passwords.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;em&gt;CI/CD pipelines&lt;/em&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The deployment platform should also access Vault. During the &lt;em&gt;build&lt;/em&gt; and &lt;em&gt;deployment&lt;/em&gt; processes, the platform can get the secrets using the &lt;a href="https://www.vaultproject.io/docs/auth/approle"&gt;approle&lt;/a&gt; authentication method. The approle method gives you a way to configure access for applications rather than people. Its tokens should have a short time-to-live so they're useless after the deployment process finishes.&lt;/p&gt;
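&lt;p&gt;As a hypothetical sketch, the approle login is a single API call that exchanges a role_id/secret_id pair for a short-lived token (the endpoint path follows Vault's approle documentation; the values are made up):&lt;/p&gt;

```python
# Sketch of the approle login call a CI/CD pipeline would make.
APPROLE_LOGIN_PATH = "/v1/auth/approle/login"


def approle_login_payload(role_id: str, secret_id: str) -> dict:
    """Request body for the approle login; Vault answers with a short-lived token."""
    return {"role_id": role_id, "secret_id": secret_id}


payload = approle_login_payload("demo-role-id", "demo-secret-id")
```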

&lt;p&gt;We can take it a step further. Now that we have a centralized secrets store with an API, we can build a Jenkins job that generates secrets and pushes them to Vault. This formalizes the procedure and prevents simple human mistakes, such as weak passwords.&lt;/p&gt;
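&lt;p&gt;For example, the generation step of such a job could draw from a cryptographic random source instead of a human-invented password. A minimal sketch (the length and alphabet are arbitrary choices of mine):&lt;/p&gt;

```python
import secrets
import string


def generate_password(length: int = 32) -> str:
    """Generate a strong random password suitable for pushing to Vault."""
    alphabet = string.ascii_letters + string.digits + "-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))


password = generate_password()
```

The `secrets` module is designed for security-sensitive randomness, unlike `random`, which is predictable.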

&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Increase teams productivity&lt;/em&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As the developer team grows, time is going to be wasted here without an appropriate tool. Nobody likes to wait to test something just because they don't have the password yet. How much time is spent this way? It can be solved by using the right tool for the job.&lt;/p&gt;

&lt;p&gt;Having an interface to read and add secrets would remove the toil of secrets maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Summary&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Using a secrets management tool has numerous benefits, but I presume most of them are unseen. On the surface, it eases the process of adding or updating a secret and manages unified access to them, by both Dev and Ops. Replacing a secret becomes an easy task for Operations.&lt;/p&gt;

&lt;p&gt;It provides all this while keeping high-security standards and auditing. It presents an API to automate the daunting toil of managing them yourself using a less sophisticated mechanism such as files or a database.&lt;/p&gt;

&lt;p&gt;It takes your secrets to the next level, and I haven't touched the more advanced capabilities this tool has (dynamic secrets and AWS authentication method, for example). If you are intrigued, go check the documentation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The new capabilities introduced to automate processes shouldn't be considered lightly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;How do you manage your secrets?&lt;/p&gt;

</description>
      <category>devops</category>
      <category>infrastructure</category>
      <category>secrets</category>
      <category>security</category>
    </item>
    <item>
      <title>SRE in layman’s terms (4 core concepts)</title>
      <dc:creator>Chen</dc:creator>
      <pubDate>Mon, 02 Mar 2020 18:50:03 +0000</pubDate>
      <link>https://dev.to/chen/sre-in-layman-s-terms-4-core-concepts-hoe</link>
      <guid>https://dev.to/chen/sre-in-layman-s-terms-4-core-concepts-hoe</guid>
      <description>&lt;p&gt;There are job titles in the industry that requires prior knowledge in order to understand them. What are their responsibilities are.&lt;/p&gt;

&lt;p&gt;I oftentimes find myself trying to explain what I do for a living to people outside the tech industry. &lt;/p&gt;

&lt;p&gt;How do you explain SRE then? In this post, I’ll try to describe it in simple terms.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;SREs are developers with operations responsibilities. They are in charge of the production environment, keeping it up and running.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Service
&lt;/h2&gt;

&lt;p&gt;A business, by definition, sells a product or a service.&lt;/p&gt;

&lt;p&gt;Many of them these days run their business online. You can order almost anything through the internet. Google and Amazon are very popular world-scale services. They are available for you (almost) no matter where you are.&lt;/p&gt;

&lt;p&gt;You, the client, consume a &lt;strong&gt;service&lt;/strong&gt;. You use Google to search for interesting stuff or things you need (a nice restaurant). You read the news at your favourite news site or shop online at Amazon. &lt;/p&gt;

&lt;p&gt;These companies &lt;strong&gt;serve&lt;/strong&gt; you through the Internet. They seem to be online 24/7, 365 days a year. Pause for a second, and think about it. Isn’t that magical? It’s like a store that is always open, but easier to access: you don’t need to leave your house to enter.&lt;/p&gt;

&lt;p&gt;Now that we have defined what an online service is, we can cover the 4 core concepts SREs are usually accountable for. I say usually, because responsibilities may differ between companies.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Design a reliable and resilient system
&lt;/h2&gt;

&lt;p&gt;What does this mean anyway? &lt;/p&gt;

&lt;p&gt;As SREs, we design the infrastructure for the product. We decide which hardware to use, do the capacity planning with room to grow as needed, etc.&lt;/p&gt;

&lt;p&gt;One requirement is to make it reliable and resilient so that service downtime is minimized as much as possible. &lt;br&gt;
We try to eliminate any single points of failure along the road (from hardware to application). Always have redundancy in your infrastructure, so if something fails - be it hardware, network or software - the system can quickly recover from it.&lt;/p&gt;

&lt;p&gt;SREs know things break down. They are the ones who get called when something critical is not working.&lt;br&gt;
It is our job to recognize possible failures along the way and mitigate them ahead of time when that's possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Monitor and alert
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Online services&lt;/strong&gt; are composed of multiple applications or features. Today, many applications run on &lt;a href="https://en.wikipedia.org/wiki/Distributed_computing"&gt;distributed systems&lt;/a&gt;. We need visibility into what’s going on.&lt;/p&gt;

&lt;p&gt;To meet this need, we use monitoring systems that expose the service’s &lt;em&gt;health&lt;/em&gt;. These usually look like the control-room dashboards from the movies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LdcZn3cp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ky4h6jzro40p635i2xwq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LdcZn3cp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ky4h6jzro40p635i2xwq.jpg" alt="SpaceX Control Center" width="880" height="512"&gt;&lt;/a&gt;&lt;br&gt;
Image by &lt;a href="https://pixabay.com/users/SpaceX-Imagery-885857/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=693251"&gt;SpaceX-Imagery&lt;/a&gt; from &lt;a href="https://pixabay.com/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=693251"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using these systems we define &lt;em&gt;alerts&lt;/em&gt; - for example, we know how to recognize &lt;strong&gt;unhealthy&lt;/strong&gt; patterns in the application, or hardware failures. The alert system sends us notifications when things break (by email, sms, phone, etc.), instead of having someone watch the dashboards all day long and yell :)&lt;/p&gt;
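&lt;p&gt;As an illustration, in a Prometheus-style setup such an alert is just a short rule (this fragment is a made-up example; the rule name, threshold, and labels are mine):&lt;/p&gt;

```yaml
groups:
  - name: example-alerts
    rules:
      - alert: InstanceDown
        expr: up == 0          # the target stopped answering scrapes
        for: 5m                # only fire after 5 minutes of silence
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```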

&lt;h2&gt;
  
  
  3. Pursue product features velocity
&lt;/h2&gt;

&lt;p&gt;Once in a while, these services release new features and security updates, like a change to the UI or the addition of new buttons. These changes require a &lt;strong&gt;software update&lt;/strong&gt; that happens behind the scenes, most of the time without user interruption.&lt;/p&gt;

&lt;p&gt;Changes to the system introduce some risk, but they also introduce new features and bug fixes clients are waiting for. This leads us to -&amp;gt; &lt;em&gt;deployment strategy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;We define a software deployment strategy so that software updates (installations of new software) succeed, and so that when they don't, we recover quickly.&lt;br&gt;
With every change we make to the system, we always keep in mind &lt;em&gt;“how do we recover from this if something goes wrong?”&lt;/em&gt;&lt;br&gt;
We then combine the two procedures (software deployment and recovery) into a “playbook”, which can be thought of as a task list to execute. Last, we &lt;em&gt;automate this&lt;/em&gt; to ease the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Automate everything
&lt;/h2&gt;

&lt;p&gt;This concept is, in my opinion, the most important one. Once we formalize a procedure in our daily work, if we repeat it we want to automate it. This allows us to spend our time on more important domains (research, development) rather than doing the task repeatedly. Let’s abstract that.&lt;/p&gt;

&lt;p&gt;When we have a “problem” or a “task” on our desk, we prefer to solve it only once. This is made possible by coding it. So, as SREs we always prefer to code things rather than performing them manually, even though this requires more of our time when solving the problem the first time.&lt;/p&gt;

&lt;p&gt;For example, using our monitoring and alert systems, we get notified when things do not work as expected. We can use these notifications to trigger code that handles the issue. A simple example: if the application crashes (becomes unavailable), start it again automatically. That brings the service back up for customers, while we can debug what happened later.&lt;/p&gt;
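&lt;p&gt;As a concrete sketch, on a single machine even a systemd unit can implement this restart-on-crash behaviour (the unit name and binary path here are hypothetical):&lt;/p&gt;

```ini
# myapp.service (fragment): restart the process automatically if it crashes
[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
RestartSec=5
```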

&lt;h2&gt;
  
  
  Sum up - in layman’s terms
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;System reliability - simply make sure the service is online and serving customers. Grow the infrastructure as needed, while keeping things on budget. A lot happens behind the scenes to make it so.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Monitoring - we design, develop and integrate tools that give us visibility into what’s going on in the system. Multiple graphs and counters help us know the status.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate everything - this goes without saying. If you have automated a task, you would only spend 'thinking' time on the 'problem' once.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I had to describe the SRE role in a short paragraph, it would probably be this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;SRE is responsible for keeping the service up and providing the ability to release software faster while reducing the risk involved, using tools and deployment strategies. In order to achieve that, we write code 🧞&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I hope that on the next occasion you meet someone with an SRE title, you will know a little better what their role is all about.&lt;/p&gt;

&lt;p&gt;Cover Image by &lt;a href="https://pixabay.com/users/GregoryButler-331410/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=389274"&gt;GregoryButler&lt;/a&gt; from &lt;a href="https://pixabay.com/?utm_source=link-attribution&amp;amp;utm_medium=referral&amp;amp;utm_campaign=image&amp;amp;utm_content=389274"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sre</category>
      <category>devops</category>
      <category>engineering</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
