<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergey Podgorny</title>
    <description>The latest articles on DEV Community by Sergey Podgorny (@sergey-muc).</description>
    <link>https://dev.to/sergey-muc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F6338%2F39eeb591-facd-4937-9dea-1c7dbf0f2fb0.jpg</url>
      <title>DEV Community: Sergey Podgorny</title>
      <link>https://dev.to/sergey-muc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sergey-muc"/>
    <language>en</language>
    <item>
      <title>💾 Why I Chose SQLite for My Startup — The Most Underrated Database You're Probably Ignoring</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Mon, 20 Oct 2025 16:15:00 +0000</pubDate>
      <link>https://dev.to/kinsly/why-i-chose-sqlite-for-my-startup-the-most-underrated-database-youre-probably-ignoring-21eo</link>
      <guid>https://dev.to/kinsly/why-i-chose-sqlite-for-my-startup-the-most-underrated-database-youre-probably-ignoring-21eo</guid>
      <description>&lt;p&gt;For years, I ignored SQLite. I assumed it was only good for toy apps, quick experiments, or maybe some desktop utilities. Like many developers, I immediately jumped to &lt;strong&gt;Postgres&lt;/strong&gt; or &lt;strong&gt;MySQL&lt;/strong&gt; for any "serious" project. I even paid for a managed &lt;strong&gt;AWS RDS&lt;/strong&gt; instance, believing I was preparing for scale.&lt;/p&gt;

&lt;p&gt;But over time, I learned something that changed how I build small and medium-sized systems:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;👉 &lt;em&gt;For 90% of applications, SQLite is not only enough — it's often better.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My entire application runs on a single VPS, serving just a few requests per second. Most startups never reach the mythical "millions of requests per minute". For this scale, running a full database server is like renting a truck to deliver a pizza.&lt;/p&gt;




&lt;h2&gt;🧩 Why SQLite Is So Powerful&lt;/h2&gt;

&lt;p&gt;SQLite is a &lt;strong&gt;serverless&lt;/strong&gt;, &lt;strong&gt;file-based&lt;/strong&gt;, &lt;strong&gt;self-contained&lt;/strong&gt; database engine. It's just a single file (e.g., &lt;code&gt;data.db&lt;/code&gt;) that your app reads and writes to — no network connections, no daemons, no setup.&lt;/p&gt;

&lt;p&gt;Yet it supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full &lt;strong&gt;ACID transactions&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Foreign keys&lt;/strong&gt;, &lt;strong&gt;indexes&lt;/strong&gt;, &lt;strong&gt;views&lt;/strong&gt;, &lt;strong&gt;triggers&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Concurrency (especially in &lt;strong&gt;WAL mode&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gigabytes of data&lt;/strong&gt; and &lt;strong&gt;millions of rows&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And it's &lt;strong&gt;blazing fast&lt;/strong&gt; when used correctly.&lt;/p&gt;




&lt;h2&gt;⚡ The Secret Sauce: WAL Mode&lt;/h2&gt;

&lt;p&gt;By default, SQLite uses a &lt;em&gt;rollback journal&lt;/em&gt;: before each transaction it copies the original pages into a small journal file, writes updates directly to the main database file, and uses the journal to undo the transaction if anything goes wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WAL (Write-Ahead Logging)&lt;/strong&gt; changes this strategy completely.&lt;/p&gt;

&lt;p&gt;Instead of touching the main DB directly, all writes are appended to a separate &lt;code&gt;.db-wal&lt;/code&gt; file. Readers keep reading from the main file while writers add to the WAL log. Later, the changes are merged automatically.&lt;/p&gt;

&lt;p&gt;That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Readers and writers don't block each other.&lt;/li&gt;
&lt;li&gt;Concurrent operations become much faster.&lt;/li&gt;
&lt;li&gt;Crashes don't corrupt your main data file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;WAL mode can &lt;strong&gt;significantly boost performance&lt;/strong&gt; in multi-threaded or multi-request apps, like Go web servers, where reads and writes happen at the same time.&lt;/p&gt;

&lt;p&gt;But if your app is tiny and writes only occasionally, the default mode (&lt;code&gt;DELETE&lt;/code&gt;) is perfectly fine. You can always switch later with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;PRAGMA&lt;/span&gt; &lt;span class="n"&gt;journal_mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;WAL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Other journal modes include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;DELETE&lt;/code&gt; (the default on most builds): when a transaction starts, SQLite writes a rollback journal file (&lt;code&gt;data.db-journal&lt;/code&gt;) and deletes it after the commit. It's the safest and most widely compatible mode, and it's what you get if you don't set &lt;code&gt;journal_mode&lt;/code&gt; at all.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TRUNCATE&lt;/code&gt;: same as DELETE, but instead of removing the journal file, it just truncates it to zero bytes and reuses it.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PERSIST&lt;/code&gt;: same idea, but keeps the journal file contents and just overwrites a header.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;MEMORY&lt;/code&gt;: journal is only in RAM → faster, but data can be lost on crash.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;OFF&lt;/code&gt;: no journaling at all → fastest, but if the app or OS crashes mid-write, the database can be corrupted.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;WAL&lt;/code&gt;: the special one we just discussed.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;⚙️ What Are PRAGMAs?&lt;/h2&gt;

&lt;p&gt;In SQLite, &lt;strong&gt;PRAGMAs&lt;/strong&gt; are configuration switches that control database behavior — kind of like settings in a config file, but stored inside the DB engine itself.&lt;/p&gt;

&lt;p&gt;You can enable them either by running SQL commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;PRAGMA&lt;/span&gt; &lt;span class="n"&gt;foreign_keys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;PRAGMA&lt;/span&gt; &lt;span class="n"&gt;synchronous&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;NORMAL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or pass them directly in the Go connection string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;dsn&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"file:data.db?_pragma=busy_timeout(10000)&amp;amp;_pragma=foreign_keys(ON)"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are a few useful ones:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;busy_timeout(10000)&lt;/code&gt; tells SQLite: &lt;em&gt;if the database is locked, wait up to 10 seconds before giving up&lt;/em&gt; (instead of failing immediately with &lt;code&gt;database is locked&lt;/code&gt;). Useful when multiple goroutines or processes may try to write at the same time, and safer than retrying manually in web apps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;journal_mode(WAL)&lt;/code&gt; enables Write-Ahead Logging instead of the default rollback journal, so reads and writes can proceed concurrently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;journal_size_limit(200000000)&lt;/code&gt; sets a cap on the WAL file size (here: ~200 MB). Normally, the WAL file can grow until checkpointed (merged back into the main DB). This prevents it from growing forever. If your DB is small, it will never hit this size anyway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;foreign_keys(ON)&lt;/code&gt; enables foreign key constraints (&lt;code&gt;OFF&lt;/code&gt; by default in SQLite). If you define relations like &lt;code&gt;user_id REFERENCES users(id)&lt;/code&gt;, this ensures referential integrity. It is always good practice if you use relationships.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;temp_store(MEMORY)&lt;/code&gt; tells SQLite to keep temporary data (like for &lt;code&gt;ORDER BY&lt;/code&gt;, &lt;code&gt;GROUP BY&lt;/code&gt;, and indexes while querying) in RAM instead of on disk. Faster for queries, but uses more memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cache_size(-16000)&lt;/code&gt; sets the page cache size. A negative number means the size is in KiB instead of pages, so &lt;code&gt;-16000&lt;/code&gt; is roughly &lt;code&gt;16 MB&lt;/code&gt; of memory SQLite keeps for speeding up queries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;synchronous(NORMAL)&lt;/code&gt; controls how careful SQLite is about flushing data to disk.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FULL&lt;/code&gt; (default): every write is forced to disk immediately → safest, but slowest.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NORMAL&lt;/code&gt;: doesn't flush on every step, but still durable enough for most apps (data is safe unless the OS/hardware crashes at a very unlucky moment).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;OFF&lt;/code&gt;: fastest, but risk of corruption on crash.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ℹ️ &lt;code&gt;NORMAL&lt;/code&gt; is a common compromise in web apps: faster inserts, still quite safe.&lt;/p&gt;

&lt;p&gt;Not all are required — start simple and tune only when needed.&lt;/p&gt;




&lt;h2&gt;🧠 When You Should (and Shouldn't) Use SQLite&lt;/h2&gt;

&lt;p&gt;SQLite is perfect for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small to medium web apps&lt;/li&gt;
&lt;li&gt;APIs on a single server&lt;/li&gt;
&lt;li&gt;Prototypes, MVPs, side projects&lt;/li&gt;
&lt;li&gt;Command-line or desktop tools&lt;/li&gt;
&lt;li&gt;Local caching layers or analytics snapshots&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But you'll hit limits if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need multiple servers writing to the same DB file&lt;/li&gt;
&lt;li&gt;You expect thousands of writes per second&lt;/li&gt;
&lt;li&gt;You require fine-grained access control&lt;/li&gt;
&lt;li&gt;You need replication or clustering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's when Postgres or MySQL makes sense. Until then, SQLite saves you time, money, and complexity.&lt;/p&gt;




&lt;h2&gt;🐹 Example: Using SQLite with Go&lt;/h2&gt;

&lt;p&gt;Let's set up a connection in Go using the excellent cgo-free driver &lt;code&gt;modernc.org/sqlite&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ go get modernc.org/sqlite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;
    &lt;span class="s"&gt;"database/sql"&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"time"&lt;/span&gt;

    &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="s"&gt;"modernc.org/sqlite"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// Open a database file (or ":memory:" for in-memory)&lt;/span&gt;
    &lt;span class="n"&gt;dsn&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"file:data.db?_pragma=busy_timeout(5000)&amp;amp;_pragma=journal_mode(WAL)"&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"sqlite"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"open: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c"&gt;// Configure pool: keep a small pool for sqlite; tune for your workload&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetMaxOpenConns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;// readers can have several&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetMaxIdleConns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetConnMaxLifetime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Ping to verify connection&lt;/span&gt;
    &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cancel&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;cancel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PingContext&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"ping: %v"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"DB open, WAL enabled and busy_timeout set (via DSN)."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SQLite allows many concurrent readers but only one writer at a time. A common pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;one DB instance&lt;/strong&gt; restricted to a single connection (&lt;code&gt;SetMaxOpenConns(1)&lt;/code&gt;) for all writes.&lt;/li&gt;
&lt;li&gt;Use a separate DB instance with a larger pool for readers.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// writer&lt;/span&gt;
&lt;span class="n"&gt;writerDSN&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"file:test.db?_pragma=busy_timeout(5000)&amp;amp;_pragma=journal_mode(WAL)"&lt;/span&gt;
&lt;span class="n"&gt;writerDB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"sqlite"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;writerDSN&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;writerDB&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetMaxOpenConns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// single writer connection&lt;/span&gt;
&lt;span class="n"&gt;writerDB&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetMaxIdleConns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;// readers&lt;/span&gt;
&lt;span class="n"&gt;readerDSN&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"file:test.db?_pragma=busy_timeout(5000)"&lt;/span&gt;
&lt;span class="n"&gt;readerDB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"sqlite"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;readerDSN&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;readerDB&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetMaxOpenConns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// allow concurrent readers&lt;/span&gt;
&lt;span class="n"&gt;readerDB&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetMaxIdleConns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This reduces &lt;code&gt;SQLITE_BUSY&lt;/code&gt; errors when multiple goroutines try to write; separating the writer and reader pools this way is a widely recommended pattern for SQLite in Go services.&lt;/p&gt;




&lt;h2&gt;🧭 Navigating SQLite from the Terminal&lt;/h2&gt;

&lt;p&gt;SQLite also comes with a lightweight console client, &lt;code&gt;sqlite3&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Open your database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sqlite3 data.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside, these dot-commands make life easier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.help             # list all available commands
.databases        # show loaded databases (usually just `main` pointing to your file)
.tables           # list all tables
.schema           # show schema of all tables
.schema table     # show schema of a specific table
.headers on       # turn on column headers in query output
.mode column      # format results in a nicely aligned table
.mode line        # show each row vertically, one field per line (great for wide tables)
.mode list        # results as plain text separated by | (default mode)
.width 20 30 15   # set column widths when using .mode column
.nullvalue NULL   # choose how NULLs are displayed
.quit or .exit    # leave the CLI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sqlite&amp;gt; .headers on
sqlite&amp;gt; .mode column
sqlite&amp;gt; SELECT * FROM users;
id  email             created_at
--  ----------------  ---------------------
1   test@example.com  2025-10-07 14:30:00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you learn &lt;code&gt;.headers on&lt;/code&gt; and &lt;code&gt;.mode column&lt;/code&gt;, the CLI feels surprisingly pleasant.&lt;/p&gt;

&lt;p&gt;If you prefer certain settings to be enabled by default, put them in an initialization file. If &lt;code&gt;~/.sqliterc&lt;/code&gt; exists, &lt;code&gt;sqlite3&lt;/code&gt; reads it at startup to configure the interactive environment. This file should generally contain only dot-commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.headers on
.mode column
.nullvalue NULL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;🧩 Conclusion&lt;/h2&gt;

&lt;p&gt;SQLite is like the pocketknife of databases — tiny, portable, and incredibly capable when used properly. For many apps, it's a &lt;strong&gt;smarter default&lt;/strong&gt; than running a full database server.&lt;/p&gt;

&lt;p&gt;For Kinsly's current stage, SQLite is the perfect choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It's blazing fast, even on cheap VPS hardware&lt;/li&gt;
&lt;li&gt;It simplifies deployment&lt;/li&gt;
&lt;li&gt;It requires zero configuration&lt;/li&gt;
&lt;li&gt;It saves money (no RDS bills)&lt;/li&gt;
&lt;li&gt;It's easy to move or back up (just copy the file)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the time comes to scale, migration will be straightforward.&lt;br&gt;
Until then, SQLite lets me focus on what matters: building the product, not managing infrastructure.&lt;/p&gt;

&lt;p&gt;So before spinning up another managed Postgres instance, consider starting with SQLite. You might be surprised how far it takes you.&lt;/p&gt;

</description>
      <category>database</category>
      <category>sqlite</category>
      <category>backend</category>
      <category>devops</category>
    </item>
    <item>
      <title>🚀 How I Deployed My Startup's Server Without Kubernetes or Docker (Yet)</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Tue, 07 Oct 2025 14:00:00 +0000</pubDate>
      <link>https://dev.to/kinsly/how-i-deployed-my-startups-server-without-kubernetes-or-docker-yet-14h5</link>
      <guid>https://dev.to/kinsly/how-i-deployed-my-startups-server-without-kubernetes-or-docker-yet-14h5</guid>
      <description>&lt;p&gt;Almost every article these days talks about setting up Kubernetes clusters, writing Terraform scripts, or diving deep into AWS-managed services. While those tools are powerful, I believe that for most small projects, they are unnecessary overhead.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;&lt;a href="https://getkinsly.com" rel="noopener noreferrer"&gt;Kinsly&lt;/a&gt;&lt;/strong&gt;, my still-small project, I chose a simpler and more old-school approach. It's lighter, uses less memory, has fewer layers of abstraction, and remains secure. In this post, I'll walk through how I deployed my server, why I avoided AWS/GCP at this stage, and the exact setup I used, from SSH security to deployment automation.&lt;/p&gt;




&lt;h2&gt;🌍 Choosing the Right Infrastructure&lt;/h2&gt;

&lt;p&gt;For years, I favored &lt;strong&gt;DigitalOcean&lt;/strong&gt;: simple, affordable, and reliable. But for this first phase, I went even more cost-effective and chose &lt;strong&gt;Contabo VPS&lt;/strong&gt;. Although their SLA looks low on paper, performance in practice has been surprisingly stable.&lt;/p&gt;

&lt;p&gt;Unlike AWS or GCP, where you can easily drown in unnecessary configuration, here I get just what I need: a straightforward server that I fully control. And importantly, I can move it anywhere at any time, without being locked into provider-specific services.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Lesson&lt;/strong&gt;: Keep deployments &lt;strong&gt;provider-agnostic&lt;/strong&gt; so you can move quickly when needed.&lt;/p&gt;




&lt;h2&gt;🔐 Securing Access with SSH&lt;/h2&gt;

&lt;p&gt;The first thing I did was lock down access. By default, servers allow SSH login with a password. That's a major attack vector: bots constantly attempt brute-force logins with common credentials. An SSH key is far more secure.&lt;/p&gt;

&lt;p&gt;I created a secure SSH key locally with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ssh-keygen -t ed25519 -f ~/.ssh/deploy_server.pem -C "deploy@gitlab"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I uploaded the public key to the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This command is the easiest way to install your key.
$ ssh-copy-id -i ~/.ssh/deploy_server.pem deploy@[target-ip]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, I hardened the SSH configuration on the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo vim /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside this file, I made sure the following lines were set. This completely disables password logins and ensures only key-based access is allowed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PermitRootLogin prohibit-password
# Disallow password authentication entirely.
PasswordAuthentication no
# Ensure public key authentication is enabled.
PubkeyAuthentication yes
# Also disable challenge-response auth, which can sometimes be a password fallback.
ChallengeResponseAuthentication no
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I applied the changes by restarting the SSH service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo systemctl restart sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From now on, only valid SSH keys can connect. This instantly shuts down most automated attacks.&lt;/p&gt;

&lt;p&gt;⚠️ &lt;strong&gt;IMPORTANT&lt;/strong&gt;: Before you close your current terminal session, open a new terminal and confirm you can still log in with your SSH key. If you don't, you could lock yourself out of your own server!&lt;/p&gt;




&lt;h2&gt;🌐 Cloudflare for Domains &amp;amp; Security&lt;/h2&gt;

&lt;p&gt;I registered my domain through &lt;strong&gt;Cloudflare&lt;/strong&gt;. Unlike GoDaddy or Namecheap, Cloudflare sells domains at cost price (no markup) and forces you to use its DNS — this turned out to be a blessing. DNS hosting is free and comes with Cloudflare Proxy.&lt;/p&gt;

&lt;p&gt;With just the free plan, I got:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hidden server IP (Cloudflare Proxy)&lt;/li&gt;
&lt;li&gt;Free DDoS protection&lt;/li&gt;
&lt;li&gt;Global caching across regions for speed&lt;/li&gt;
&lt;li&gt;Basic analytics&lt;/li&gt;
&lt;li&gt;Built-in security filtering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvspmmmxa3246skb1fcgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvspmmmxa3246skb1fcgi.png" alt="Cloudflare request analytics" width="800" height="726"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 In the past, I even paid AWS money just to register DNS records. Cloudflare gives more for free.&lt;/p&gt;




&lt;h2&gt;⚔️ Hardening the Server Firewall&lt;/h2&gt;

&lt;p&gt;As soon as I installed nginx, bots started probing for vulnerabilities (checking for &lt;code&gt;/wp-login.php&lt;/code&gt;, &lt;code&gt;.git&lt;/code&gt; folders, etc.). To prevent direct access to my VPS, I restricted requests to Cloudflare IPs only.&lt;/p&gt;

&lt;p&gt;The best way to enforce this is with &lt;strong&gt;&lt;code&gt;ufw&lt;/code&gt; (Uncomplicated Firewall)&lt;/strong&gt;: unwanted traffic is dropped at the firewall before it ever reaches nginx. It's a one-time setup, though you should plan to refresh the IP list once or twice a year, as Cloudflare occasionally adds new ranges.&lt;/p&gt;

&lt;p&gt;First, always allow SSH, or you'll lock yourself out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ufw allow ssh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ufw allow 22/tcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then allow traffic from all of Cloudflare's IPs on port 443 (HTTPS). You can get the latest lists from &lt;a href="https://cloudflare.com/ips" rel="noopener noreferrer"&gt;cloudflare.com/ips&lt;/a&gt;. You can run a simple loop to add all the current IPv4 and IPv6 addresses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For IPv4
$ for ip in $(curl -s https://www.cloudflare.com/ips-v4); do sudo ufw allow from $ip to any port 443 proto tcp; done

# For IPv6
$ for ip in $(curl -s https://www.cloudflare.com/ips-v6); do sudo ufw allow from $ip to any port 443 proto tcp; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After specifically allowing Cloudflare IPs, you can now safely deny all other traffic on ports 80 and 443.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ufw deny http
$ sudo ufw deny https
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This forces all web traffic through Cloudflare's secure proxy, effectively locking the back door to the server.&lt;/p&gt;

&lt;p&gt;Then, if it's not already enabled, turn on the firewall. It will ask for confirmation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ufw enable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify your rules are in place.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ufw status verbose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a list showing that port 22 is allowed from anywhere, port 443 is allowed from Cloudflare's IPs, and ports 80/443 are denied from anywhere else.&lt;/p&gt;

&lt;p&gt;You can also do this within nginx itself, though it's slightly less efficient as nginx has to process the connection before denying it.&lt;/p&gt;

&lt;p&gt;You would create a file (&lt;code&gt;/etc/nginx/snippets/cloudflare-ips.conf&lt;/code&gt;) containing &lt;code&gt;allow&lt;/code&gt; rules for all of Cloudflare's IPs, and then include it in your server block, followed by &lt;code&gt;deny all;&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# IPv4 Ranges
&lt;/span&gt;&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;173&lt;/span&gt;.&lt;span class="m"&gt;245&lt;/span&gt;.&lt;span class="m"&gt;48&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;20&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;103&lt;/span&gt;.&lt;span class="m"&gt;21&lt;/span&gt;.&lt;span class="m"&gt;244&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;22&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;103&lt;/span&gt;.&lt;span class="m"&gt;22&lt;/span&gt;.&lt;span class="m"&gt;200&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;22&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;103&lt;/span&gt;.&lt;span class="m"&gt;31&lt;/span&gt;.&lt;span class="m"&gt;4&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;22&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;141&lt;/span&gt;.&lt;span class="m"&gt;101&lt;/span&gt;.&lt;span class="m"&gt;64&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;18&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;108&lt;/span&gt;.&lt;span class="m"&gt;162&lt;/span&gt;.&lt;span class="m"&gt;192&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;18&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;190&lt;/span&gt;.&lt;span class="m"&gt;93&lt;/span&gt;.&lt;span class="m"&gt;240&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;20&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;188&lt;/span&gt;.&lt;span class="m"&gt;114&lt;/span&gt;.&lt;span class="m"&gt;96&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;20&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;197&lt;/span&gt;.&lt;span class="m"&gt;234&lt;/span&gt;.&lt;span class="m"&gt;240&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;22&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;198&lt;/span&gt;.&lt;span class="m"&gt;41&lt;/span&gt;.&lt;span class="m"&gt;128&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;17&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;162&lt;/span&gt;.&lt;span class="m"&gt;158&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;15&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;104&lt;/span&gt;.&lt;span class="m"&gt;16&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;13&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;104&lt;/span&gt;.&lt;span class="m"&gt;24&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;14&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;172&lt;/span&gt;.&lt;span class="m"&gt;64&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;13&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;131&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;72&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;/&lt;span class="m"&gt;22&lt;/span&gt;;

&lt;span class="c"&gt;# IPv6 Ranges
&lt;/span&gt;&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;2400&lt;/span&gt;:&lt;span class="n"&gt;cb00&lt;/span&gt;::/&lt;span class="m"&gt;32&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;2606&lt;/span&gt;:&lt;span class="m"&gt;4700&lt;/span&gt;::/&lt;span class="m"&gt;32&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;2803&lt;/span&gt;:&lt;span class="n"&gt;f800&lt;/span&gt;::/&lt;span class="m"&gt;32&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;2405&lt;/span&gt;:&lt;span class="n"&gt;b500&lt;/span&gt;::/&lt;span class="m"&gt;32&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;2405&lt;/span&gt;:&lt;span class="m"&gt;8100&lt;/span&gt;::/&lt;span class="m"&gt;32&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="n"&gt;c0f&lt;/span&gt;:&lt;span class="n"&gt;f248&lt;/span&gt;::/&lt;span class="m"&gt;32&lt;/span&gt;;
&lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="n"&gt;a06&lt;/span&gt;:&lt;span class="m"&gt;98&lt;/span&gt;&lt;span class="n"&gt;c0&lt;/span&gt;::/&lt;span class="m"&gt;29&lt;/span&gt;;

&lt;span class="n"&gt;deny&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt;;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet then needs to be included in the corresponding &lt;code&gt;server&lt;/code&gt; block.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;server&lt;/span&gt; {
    &lt;span class="c"&gt;# ...
&lt;/span&gt;    &lt;span class="n"&gt;include&lt;/span&gt; /&lt;span class="n"&gt;etc&lt;/span&gt;/&lt;span class="n"&gt;nginx&lt;/span&gt;/&lt;span class="n"&gt;snippets&lt;/span&gt;/&lt;span class="n"&gt;cloudflare&lt;/span&gt;-&lt;span class="n"&gt;ips&lt;/span&gt;.&lt;span class="n"&gt;conf&lt;/span&gt;;
    &lt;span class="c"&gt;# ...
&lt;/span&gt;}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🏗️ A Simple, Minimal Stack
&lt;/h2&gt;

&lt;p&gt;My architecture is intentionally simple: instead of React or Angular SSR, the first version of Kinsly runs on a &lt;strong&gt;static HTML page&lt;/strong&gt;. Form submissions go to a small &lt;strong&gt;Go service&lt;/strong&gt; that listens only on the local loopback interface (&lt;code&gt;127.0.0.1&lt;/code&gt;), so it's &lt;em&gt;not exposed&lt;/em&gt; to the outside world. Nginx acts as a reverse proxy, forwarding public requests from &lt;code&gt;api.getkinsly.com&lt;/code&gt; to this private port. This is a secure and standard pattern.&lt;/p&gt;

&lt;p&gt;I also considered using Unix sockets, but TCP on localhost was good enough.&lt;/p&gt;

&lt;p&gt;Here's a look at the core of my Nginx config for the API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# /etc/nginx/sites-available/api.getkinsly.com
&lt;/span&gt;&lt;span class="n"&gt;server&lt;/span&gt; {
    &lt;span class="n"&gt;server_name&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;.&lt;span class="n"&gt;getkinsly&lt;/span&gt;.&lt;span class="n"&gt;com&lt;/span&gt;;

    &lt;span class="n"&gt;listen&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt; &lt;span class="n"&gt;ssl&lt;/span&gt; &lt;span class="n"&gt;http2&lt;/span&gt;;
    &lt;span class="c"&gt;# ... (all my SSL and security headers go here) ...
&lt;/span&gt;
    &lt;span class="n"&gt;location&lt;/span&gt; / {
        &lt;span class="c"&gt;# These two lines enable efficient keep-alive connections to the backend.
&lt;/span&gt;        &lt;span class="n"&gt;proxy_http_version&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;.&lt;span class="m"&gt;1&lt;/span&gt;;
        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt; &lt;span class="n"&gt;Connection&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;;

        &lt;span class="c"&gt;# Pass essential client information to the Go application.
&lt;/span&gt;        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt; &lt;span class="n"&gt;Host&lt;/span&gt; $&lt;span class="n"&gt;host&lt;/span&gt;;
        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;-&lt;span class="n"&gt;Real&lt;/span&gt;-&lt;span class="n"&gt;IP&lt;/span&gt; $&lt;span class="n"&gt;remote_addr&lt;/span&gt;;
        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;-&lt;span class="n"&gt;Forwarded&lt;/span&gt;-&lt;span class="n"&gt;For&lt;/span&gt; $&lt;span class="n"&gt;proxy_add_x_forwarded_for&lt;/span&gt;;
        &lt;span class="n"&gt;proxy_set_header&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;-&lt;span class="n"&gt;Forwarded&lt;/span&gt;-&lt;span class="n"&gt;Proto&lt;/span&gt; $&lt;span class="n"&gt;scheme&lt;/span&gt;;

        &lt;span class="c"&gt;# The actual proxy pass to the local Go service.
&lt;/span&gt;        &lt;span class="n"&gt;proxy_pass&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;://&lt;span class="m"&gt;127&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;0&lt;/span&gt;.&lt;span class="m"&gt;1&lt;/span&gt;:&lt;span class="m"&gt;8001&lt;/span&gt;;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For now, Kinsly just needs to collect form data. Deploying a full Postgres or MySQL server at this stage would be pure over-engineering, so I deliberately chose &lt;strong&gt;SQLite&lt;/strong&gt;: it's file-based, requires zero configuration, is incredibly fast, and is perfect for the simple task of collecting waiting-list emails.&lt;/p&gt;

&lt;p&gt;Later, when scaling requires it, migrating to Postgres or MySQL will be straightforward.&lt;/p&gt;
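&lt;p&gt;To illustrate the zero-configuration point: a single &lt;code&gt;sqlite3&lt;/code&gt; invocation creates the database file and schema on first use. The table and column names below are invented for the example, not taken from Kinsly:&lt;/p&gt;

```shell
# The database file and table spring into existence on first use; there is
# no server to install or configure. Schema is illustrative only.
DB="$(mktemp -d)/waitlist.db"
sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS waitlist (
  id INTEGER PRIMARY KEY,
  email TEXT NOT NULL UNIQUE,
  created_at TEXT DEFAULT (datetime('now'))
);"
sqlite3 "$DB" "INSERT OR IGNORE INTO waitlist (email) VALUES ('someone@example.com');"
sqlite3 "$DB" "SELECT count(*) FROM waitlist;"   # prints 1
```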




&lt;h2&gt;
  
  
  ⚙️ Keeping the App Alive with systemd
&lt;/h2&gt;

&lt;p&gt;My Go application runs as a &lt;code&gt;systemd&lt;/code&gt; service, ensuring it's always on. The service is configured to run as the non-privileged &lt;code&gt;www-data&lt;/code&gt; user for security.&lt;/p&gt;

&lt;p&gt;Example service file (&lt;code&gt;/etc/systemd/system/kinsly-api.service&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Kinsly API Service
# Start this service only after the network is available.
After=network.target

[Service]
# The user and group the service will run as.
# Running as a dedicated, non-sudo user is a critical security best practice.
User=www-data
Group=www-data

# The command to start the application. Make sure the binary is executable (chmod +x binary)
ExecStart=/opt/kinsly/current/kinsly-api
Restart=always
# Wait 5 seconds before restarting to prevent rapid-fire restarts.
RestartSec=5

# Set environment variables if your application needs them.
Environment="APPLICATION_MODE=prod"

[Install]
# This allows the service to be enabled to start on boot.
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, if the app crashes or the server reboots, &lt;code&gt;systemd&lt;/code&gt; automatically restarts it.&lt;/p&gt;




&lt;h2&gt;
  
  
  👤 Creating a Safe Deploy User
&lt;/h2&gt;

&lt;p&gt;My GitLab CI/CD pipeline, however, connects as a separate &lt;code&gt;deploy&lt;/code&gt; user that does not have full &lt;code&gt;sudo&lt;/code&gt; access. This creates a problem: how does the CI/CD pipeline restart the service after a new deployment?&lt;/p&gt;

&lt;p&gt;The solution is to grant the &lt;code&gt;deploy&lt;/code&gt; user permission to run only that one specific command. This is done via the &lt;code&gt;sudoers&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;The ONLY safe way to edit the sudoers file is with &lt;code&gt;visudo&lt;/code&gt;. It performs a syntax check on save to prevent you from breaking &lt;code&gt;sudo&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo visudo -f /etc/sudoers.d/deploy-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I added this single line to the bottom of the file. This allows the &lt;code&gt;deploy&lt;/code&gt; user to restart the &lt;code&gt;kinsly-api&lt;/code&gt; service without a password.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;deploy&lt;/span&gt; &lt;span class="n"&gt;ALL&lt;/span&gt;=(&lt;span class="n"&gt;ALL&lt;/span&gt;) &lt;span class="n"&gt;NOPASSWD&lt;/span&gt;: /&lt;span class="n"&gt;usr&lt;/span&gt;/&lt;span class="n"&gt;bin&lt;/span&gt;/&lt;span class="n"&gt;systemctl&lt;/span&gt; &lt;span class="n"&gt;restart&lt;/span&gt; &lt;span class="n"&gt;kinsly&lt;/span&gt;-&lt;span class="n"&gt;api&lt;/span&gt;.&lt;span class="n"&gt;service&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a secure and granular way to enable deployment automation without giving away &lt;code&gt;root&lt;/code&gt; access. The CI/CD pipeline can now run &lt;code&gt;sudo systemctl restart kinsly-api.service&lt;/code&gt;, and it's the only &lt;code&gt;sudo&lt;/code&gt; command it's allowed to execute.&lt;/p&gt;

&lt;p&gt;ℹ️ Make sure the directive &lt;code&gt;@includedir /etc/sudoers.d&lt;/code&gt; is enabled in the main &lt;code&gt;/etc/sudoers&lt;/code&gt; configuration file. If it's not there, add it using the &lt;code&gt;sudo visudo&lt;/code&gt; command (without the &lt;code&gt;-f&lt;/code&gt; argument).&lt;/p&gt;

&lt;p&gt;Also, ensure correct file ownership &amp;amp; permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo chown root:root /etc/sudoers.d/deploy-app
$ sudo chmod 0440 /etc/sudoers.d/deploy-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Validate sudoers syntax, so you don't lock yourself out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo visudo -c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🔄 Deployment via GitLab CI
&lt;/h2&gt;

&lt;p&gt;Deployment is automated:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GitLab CI builds a Go binary.&lt;/li&gt;
&lt;li&gt;It packages the binary + static HTML into a &lt;code&gt;.tar.gz&lt;/code&gt; archive.&lt;/li&gt;
&lt;li&gt;The archive is uploaded to the server and unpacked into a new release folder &lt;code&gt;/opt/kinsly/releases/&amp;lt;timestamp&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A symlink &lt;code&gt;current&lt;/code&gt; → &lt;code&gt;&amp;lt;latest_release&amp;gt;&lt;/code&gt; is updated.&lt;/li&gt;
&lt;li&gt;The service is restarted.&lt;/li&gt;
&lt;li&gt;Old releases are pruned (keep the last 5 for rollback).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is simple, reliable, and avoids the need for Docker at this stage.&lt;/p&gt;
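&lt;p&gt;Steps 3 to 6 can be sketched in a few lines of shell. The paths follow the release layout above, but this sketch runs against a temporary directory and stubs out the actual archive extraction:&lt;/p&gt;

```shell
# Sketch of the release flow: unpack into a timestamped folder, repoint the
# "current" symlink, prune old releases. BASE stands in for /opt/kinsly.
BASE="$(mktemp -d)"
RELEASE="$BASE/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"
# In the real pipeline: tar -xzf kinsly.tar.gz -C "$RELEASE"
touch "$RELEASE/kinsly-api"
# -n replaces the symlink itself rather than descending into it.
ln -sfn "$RELEASE" "$BASE/current"
# Keep only the 5 newest releases; anything older is pruned.
ls -1dt "$BASE"/releases/* | tail -n +6 | xargs -r rm -rf
```

&lt;p&gt;Rolling back is then just repointing &lt;code&gt;current&lt;/code&gt; at a previous release folder and restarting the service.&lt;/p&gt;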

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx99lh6mltjlj4gvhmk1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx99lh6mltjlj4gvhmk1s.png" alt="GitLab pipelines triggered by tag creation" width="727" height="387"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🍪 Frontend Challenges: Analytics, Consent &amp;amp; Captcha
&lt;/h2&gt;

&lt;p&gt;Even with a simple site, there are complexities. To use &lt;strong&gt;Google Analytics&lt;/strong&gt;, I needed a cookie consent banner to comply with EU law. I chose &lt;strong&gt;axept.io&lt;/strong&gt;, a free and Google-certified service. &lt;/p&gt;

&lt;p&gt;I also added a free captcha (&lt;strong&gt;Altcha&lt;/strong&gt;) to keep bots from spamming my form with fake emails. Without such protection, bots could flood the form and get my outgoing emails flagged as spam. While not perfect, it filters out the majority of malicious requests.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛡️ Final Security Checks
&lt;/h2&gt;

&lt;p&gt;Before going live, I checked open ports with &lt;code&gt;nmap&lt;/code&gt; and online scanners to ensure only nginx was exposed. My Go app, database, and system processes remain inaccessible from outside.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ nmap -sS -Pn -T5 -p- [target-ip]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmttu2yx5xy5i4e2u2tw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmttu2yx5xy5i4e2u2tw7.png" alt="Scanned open ports on the server" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Conclusion
&lt;/h2&gt;

&lt;p&gt;This setup may sound "basic" compared to a Kubernetes cluster on AWS, but for an early-stage project, it's &lt;strong&gt;exactly what I need&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimal cost&lt;/li&gt;
&lt;li&gt;Maximum control&lt;/li&gt;
&lt;li&gt;Easy portability&lt;/li&gt;
&lt;li&gt;Strong enough security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As the project grows, I will likely containerize services and introduce Docker orchestration, but for now, this lean approach lets me move fast without unnecessary complexity. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It reminds me that sometimes the most elegant solution is the simplest one.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>server</category>
      <category>security</category>
      <category>sysadmin</category>
      <category>devops</category>
    </item>
    <item>
      <title>Will AI steal software engineering job?</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Thu, 29 Feb 2024 11:08:00 +0000</pubDate>
      <link>https://dev.to/sergey-muc/will-ai-steal-software-engineering-job-1cha</link>
      <guid>https://dev.to/sergey-muc/will-ai-steal-software-engineering-job-1cha</guid>
      <description>&lt;p&gt;The role of artificial intelligence (AI) in software engineering has sparked both curiosity and concern. The question on many minds is: Will AI steal software engineering jobs? Let's delve into this topic, explore the nuances, and decipher what the future might hold for software developers. Spoiler alert: it's not all doom and gloom.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Power of AI in Code Generation
&lt;/h3&gt;

&lt;p&gt;AI has proven its ability to generate code swiftly and efficiently. It can help developers work more efficiently and ship more features with less effort.&lt;/p&gt;

&lt;p&gt;While concerns about job security arise, it's essential to note that the demand for skilled software engineers is far from diminishing. In fact, there's a surplus of open positions waiting to be filled, indicating that AI could complement, rather than replace, human developers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Surviving Layoff Rollercoasters
&lt;/h3&gt;

&lt;p&gt;Did you hear about those big tech layoffs recently? Yeah, it stung. But before you start blaming AI, let's think for a second. It happens every decade or so, and is likely influenced by external factors, such as the aftermath of the Covid-19 pandemic.&lt;/p&gt;

&lt;p&gt;Junior positions, however, will most likely become harder to obtain, since AI now solves many of the problems that were traditionally left to newcomers. The overall demand for experienced developers remains high.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbrk5eh5ysb7al89r7fg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbrk5eh5ysb7al89r7fg.png" alt="Open tech jobs chart" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Evolution of Open-Source Culture
&lt;/h3&gt;

&lt;p&gt;Remember the dark ages when we had to code everything from scratch? Thanks to open source, there are millions of lines of reusable code out there, and the amount keeps growing. Before open-source culture became popular, developers had to write all of this code themselves; now they can simply use it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0d47pzx54h73tlwin1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0d47pzx54h73tlwin1v.png" alt="Open-source project logos" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result, the number of developers has not decreased; quite the opposite, there are more software developers than ever.&lt;/p&gt;

&lt;p&gt;However, the emphasis is shifting toward a deeper knowledge of programming and application architecture so that AI cannot easily replace the developer. Basic tasks may be automated, but developers with profound expertise will always be in demand. Creating HTML forms and being able to change the color of a button will soon not be enough.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Fate of Stack Overflow
&lt;/h3&gt;

&lt;p&gt;Will AI make platforms like Stack Overflow obsolete? Not quite. While AI may streamline certain processes, the essence of platforms like Stack Overflow as a community-driven hub for problem-solving is likely to persist. Trivial questions like "What's wrong with my code?" might decrease, but the platform will evolve into a more robust forum, fostering in-depth discussions and collaborative problem-solving.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksdw0mgjmzrcb90x40kn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksdw0mgjmzrcb90x40kn.png" alt="What is Wrong With my Code question on StackOverflow" width="752" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prompt Engineering and Programming Languages
&lt;/h3&gt;

&lt;p&gt;Prompt engineering sounds like the cool kid in town, right? It is a rising trend that could redefine certain developer roles, especially in content management system (CMS) development. However, it's unlikely to replace programming languages entirely. I believe that prompt engineering will become a necessary skill for a good and effective developer.&lt;/p&gt;

&lt;p&gt;The industry's shift towards languages with simpler syntax, such as Go, Kotlin, and Dart, doesn't signal the demise of traditional languages. Instead, it emphasizes the importance of staying adaptable and acquiring diverse skill sets.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Future of Software Developers and Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, the fear of being replaced by AI is not unfounded, but it's crucial to recognize the positive aspects. The role of a software developer is unlikely to vanish in the next 5-10 years. To stay relevant, developers should aspire to be as proficient as machines while possessing in-depth knowledge of algorithms, data structures, and system design. Soft skills, including effective communication and understanding of business needs, will remain pivotal for success in the evolving landscape of software engineering.&lt;/p&gt;

&lt;p&gt;Change is inevitable, and AI is part of our coding journey. Let's roll with it, stay curious, and keep our dev skills sharp. The future's uncertain, but one thing's for sure: we're not going anywhere if we embrace the change and bring our A-game. 💻✨&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How We Transformed Our Daily Meetings for the Better</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Mon, 13 Nov 2023 10:00:17 +0000</pubDate>
      <link>https://dev.to/ottonova/how-we-transformed-our-daily-meetings-for-the-better-18ie</link>
      <guid>https://dev.to/ottonova/how-we-transformed-our-daily-meetings-for-the-better-18ie</guid>
      <description>&lt;p&gt;In the world of teamwork, our daily stand-ups play a crucial role in our collective success.  We wanted to make things more efficient, and that's when we found something awesome in the videos from "&lt;a href="https://www.developmentthatpays.com/" rel="noopener noreferrer"&gt;Development That Pays&lt;/a&gt;": Walking the Board.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;We've all been there: the endless cycle of daily stand-ups where the team updates on what they did yesterday, what they're working on today, and any blockers they're facing.&lt;/p&gt;

&lt;p&gt;While it seems like a straightforward approach, it has its drawbacks. The individual spotlight often overshadows the collaborative nature of the team event, with team members more focused on crafting their own stories than actively listening to others. This realization prompted us to seek a more effective alternative.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Discovery
&lt;/h3&gt;

&lt;p&gt;While browsing YouTube, I discovered "Development That Pays". It proved to be a goldmine of helpful videos. Two videos, in particular, caught my attention: &lt;em&gt;Daily Stand-up: You're Doing It Wrong!&lt;/em&gt; and &lt;em&gt;Agile Daily Standup - How To Walk the Board (aka Walk the Wall)&lt;/em&gt;. These videos challenged our traditional approach and introduced an alternative — Walking the Board.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/H02BlTXpcto"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/316qdj10j9M"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  The "Walking the Board" Concept
&lt;/h3&gt;

&lt;p&gt;Walking the Board, also known as Walking the Wall, offered a refreshing perspective. Instead of individual updates, the team starts with the ticket on the top right of the board. The team member assigned to that ticket provides an update, and we move on to the next ticket. This method shifts the focus from individuals to the work itself and changes the stand-up from a chain of individual updates to a &lt;em&gt;&lt;strong&gt;team event&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This approach transforms the stand-up into a collaborative journey across the board, ensuring that the spotlight remains on the work itself, not on individual narratives. No more struggling to come up with a good story, the cards on the board provide the agenda. It's a game-changer that keeps everyone focused on the work. It's a shift from "&lt;em&gt;What did I do?&lt;/em&gt;" to "&lt;em&gt;What is the status of the work on the board?&lt;/em&gt;"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7aiub4fjr67uc80r8o3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7aiub4fjr67uc80r8o3u.png" alt="Board overview" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How To Walk the Board
&lt;/h3&gt;

&lt;p&gt;Starting at the top right of the board makes &lt;em&gt;financial sense&lt;/em&gt; — the items closest to being live are discussed first. If you're familiar with the concept of net present value, you'll understand that income now is more valuable than income later, and income tomorrow is more valuable than income next week.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmkgopn3c0b39mio3mam.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmkgopn3c0b39mio3mam.png" alt="Concept of net present value chart" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The second reason for starting at the right is purely practical: we are going to move the cards across the board from left to right. By starting at the right, we create space for cards to move into.&lt;/p&gt;

&lt;p&gt;Also, the "Development That Pays" video emphasized the importance of &lt;strong&gt;moments of glory&lt;/strong&gt; — allowing team members to move their own cards and take pride in their progress. It's not just about updating the board; it's about actively participating in the collective journey.&lt;/p&gt;

&lt;p&gt;At the end of the meeting, the board and the team's understanding of its current "shape" are up to date. Team members can also share topics not related to the board, for example tasks that aren't listed on it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Success Story: Implementing Walking the Board
&lt;/h3&gt;

&lt;p&gt;Inspired by these videos, we decided to give "Walking the Board" a shot. The transformation was remarkable! Our daily stand-ups became more than just updates — they turned into collaborative sessions centered around the work on the board.&lt;/p&gt;

&lt;h4&gt;
  
  
  ELMO Rule
&lt;/h4&gt;

&lt;p&gt;To keep discussions concise, we &lt;a href="https://www.seriousscrum.com/page/elmo" rel="noopener noreferrer"&gt;introduced the "ELMO" rule&lt;/a&gt;. ELMO stands for &lt;em&gt;Enough, Let's Move On&lt;/em&gt;. If a discussion is going off-topic or taking too long, &lt;strong&gt;anyone&lt;/strong&gt; can simply say "ELMO". This signals that we'll discuss it later, outside our daily stand-up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw8betonyoasrvg65nxe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw8betonyoasrvg65nxe.jpg" alt="ELMO" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Secret Order
&lt;/h4&gt;

&lt;p&gt;We established a specific order for leading the board walk and our other meetings. This rotation not only enhances a sense of responsibility but also encourages shared leadership among the team.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Result and Summary
&lt;/h3&gt;

&lt;p&gt;Since adopting Walking the Board in the summer of 2020, our meetings have changed a lot. We have shifted away from giving individual updates. Instead, our focus is entirely on the work on the board. This change has made our stand-ups more productive and collaborative, as we're now centered on the tasks at hand rather than individual narratives.&lt;/p&gt;

&lt;p&gt;We switched from traditional stand-ups to "Walking the Board" because we wanted our meetings to be more efficient, and the "Development That Pays" videos played a key role in inspiring that change: they showed us what wasn't working with the old format and the benefits of a more team-focused approach. Today, Walking the Board is a regular part of our daily routine, keeping our meetings focused, productive, and collaborative. If you're looking to improve your stand-ups, those videos are definitely worth exploring.&lt;/p&gt;

</description>
      <category>agile</category>
      <category>meetings</category>
      <category>collaboration</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Migrating from Self-Managed RabbitMQ to Cloud-Native AWS Amazon MQ: A Technical Odyssey</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Sat, 14 Oct 2023 15:04:00 +0000</pubDate>
      <link>https://dev.to/ottonova/migrating-from-self-managed-rabbitmq-to-cloud-native-aws-amazon-mq-a-technical-odyssey-1376</link>
      <guid>https://dev.to/ottonova/migrating-from-self-managed-rabbitmq-to-cloud-native-aws-amazon-mq-a-technical-odyssey-1376</guid>
      <description>&lt;p&gt;In the ever-evolving world of cloud-native solutions, it can be a daunting task to maintain message brokers. For a while, our team was responsible for a self-managed RabbitMQ instance. While this worked well initially, we encountered challenges in terms of maintenance, version updates, and data recovery. This led us to explore Amazon MQ, a fully managed message broker service offered by AWS.&lt;/p&gt;

&lt;p&gt;In this article, we'll discuss the advantages of both self-managed RabbitMQ and Amazon MQ, the reasons behind our migration, and the hurdles we faced during the transition. Our journey offers insights for other developers considering a similar migration path.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Self-Managed RabbitMQ Era
&lt;/h3&gt;

&lt;p&gt;Our experience with self-managed RabbitMQ was characterized by control, high availability, and the responsibility to ensure data integrity. Here are some of the advantages of this approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Total Control&lt;/strong&gt;&lt;br&gt;
Running your own RabbitMQ server gives you complete control over configuration, security, and updates. You can fine-tune the setup to meet your specific requirements: ideal for organizations with complex or unique messaging needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Availability&lt;/strong&gt;&lt;br&gt;
It's worth noting that our instance was running on AWS EC2, whose SLA guarantees only 99.99%, yet in practice we achieved a remarkable uptime of 99.999% with our self-managed RabbitMQ setup. Downtime was almost non-existent, ensuring a reliable message flow through our system. High availability is crucial for many mission-critical applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Recovery&lt;/strong&gt;&lt;br&gt;
Ironically, data recovery was a challenge with our self-managed RabbitMQ. In the event of a crash, we lacked confidence in our ability to restore data fully. This vulnerability was one of the first signs that we needed a managed alternative.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Shift to Amazon MQ
&lt;/h3&gt;

&lt;p&gt;As time passed, it became apparent that managing RabbitMQ was no longer sustainable for our team. Here are the primary reasons that drove us to explore Amazon MQ as an alternative:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Skills Gap&lt;/strong&gt;&lt;br&gt;
Our team lacked in-house experts dedicated to managing RabbitMQ, which posed a risk to our operations. As RabbitMQ versions evolved, staying up-to-date became increasingly challenging. This skill gap urged us to consider Amazon MQ, a fully managed solution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Integration&lt;/strong&gt;&lt;br&gt;
As an AWS service, Amazon MQ seamlessly integrated with our existing AWS infrastructure, providing us with a more cohesive and consistent cloud environment. It allowed us to leverage existing AWS services and tools, which resulted in a smooth migration process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managed Service&lt;/strong&gt;&lt;br&gt;
The promise of offloading the operational burden to AWS was enticing. Amazon MQ handles tasks like patching, maintenance, and scaling. This allows our team to focus on more strategic initiatives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Security&lt;/strong&gt;&lt;br&gt;
One key advantage of switching to Amazon MQ is its strong foundation on AWS infrastructure. This not only ensures robust security practices but also means that regular updates are applied to the system. It gives us confidence that any potential vulnerabilities are actively monitored and managed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Amazon MQ Experience
&lt;/h3&gt;

&lt;p&gt;While the move to Amazon MQ presented numerous benefits, we also encountered some challenges that are worth noting:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbpa07bdqinvjbdk5gqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbpa07bdqinvjbdk5gqr.png" alt="Downtime logs for RabbitMQ instance" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SLA Guarantees&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/amazon-mq/sla/" rel="noopener noreferrer"&gt;Amazon MQ's service level agreement (SLA)&lt;/a&gt; guarantees 99.9% availability. This is generally acceptable for many businesses but was a step down from our self-managed RabbitMQ's 99.999% uptime. While the difference might seem small, it translates into noticeably more potential downtime, a trade-off we had to accept.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Configuration&lt;/strong&gt;&lt;br&gt;
Amazon MQ abstracts away many configuration details, which simplifies management for most users. However, this simplicity comes at the cost of fine-grained control. For organizations with highly specialized requirements, this might be a drawback.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Considerations&lt;/strong&gt;&lt;br&gt;
Amazon MQ is a managed service, which means there are associated costs. While the managed service helps reduce operational overhead, it's crucial to factor in the cost implications when migrating.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What do three nines (99.9) really mean?
&lt;/h3&gt;

&lt;p&gt;Here are my calculations according to the Amazon MQ SLA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;if the monthly downtime is &lt;a href="https://uptime.is/99.9" rel="noopener noreferrer"&gt;lower than ~43 minutes&lt;/a&gt; (99.9%+ uptime), you pay the full monthly fee&lt;/li&gt;
&lt;li&gt;if the monthly downtime is &lt;a href="https://uptime.is/99" rel="noopener noreferrer"&gt;between ~43 minutes and ~7 hours&lt;/a&gt; (99.0–99.9% uptime), AWS issues a 10% service credit, so you effectively pay 90%&lt;/li&gt;
&lt;li&gt;if the monthly downtime is &lt;a href="https://uptime.is/95" rel="noopener noreferrer"&gt;between ~7 hours and ~1.5 days&lt;/a&gt; (95.0–99.0% uptime), the credit is 25%, so you pay 75%&lt;/li&gt;
&lt;li&gt;and if the monthly downtime exceeds ~1.5 days (below 95.0% uptime), the credit is 100%, so that month is free&lt;/li&gt;
&lt;/ul&gt;
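&lt;p&gt;The math behind these tiers is straightforward. Here is a small sketch (a hypothetical helper, not part of any AWS tooling) that computes the allowed monthly downtime for each availability level, assuming a 30-day month:&lt;/p&gt;

```javascript
// Allowed monthly downtime for a given availability percentage.
// Hypothetical helper for illustration; assumes a 30-day month.
const MINUTES_PER_MONTH = 30 * 24 * 60; // 43,200 minutes

function maxMonthlyDowntimeMinutes(availabilityPercent) {
  return MINUTES_PER_MONTH * (1 - availabilityPercent / 100);
}

console.log(maxMonthlyDowntimeMinutes(99.9)); // ~43 minutes
console.log(maxMonthlyDowntimeMinutes(99.0)); // ~432 minutes, i.e. ~7 hours
console.log(maxMonthlyDowntimeMinutes(95.0)); // ~2160 minutes, i.e. ~36 hours
```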

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Our migration from self-managed RabbitMQ to Amazon MQ represented a shift in the way we approach message brokers. While Amazon MQ offered many benefits, such as reduced operational burden and seamless AWS integration, it came with some trade-offs, including a lower SLA guarantee and less granular control.&lt;/p&gt;

&lt;p&gt;Ultimately, the decision to migrate should be based on your organization's specific needs, resources, and objectives. For us, the trade-offs were acceptable given the advantages of a managed service within our AWS ecosystem.&lt;/p&gt;

&lt;p&gt;The path to a cloud-native solution isn't always straightforward, but it can lead to more streamlined operations and a greater focus on innovation rather than infrastructure management. Understanding the pros and cons of both approaches is vital for an informed decision about your messaging infrastructure.&lt;/p&gt;

&lt;p&gt;As technology continues to evolve, it's essential to stay adaptable and leverage the right tools and services to meet your business needs. In our case, the migration to Amazon MQ allowed us to do just that.&lt;/p&gt;

</description>
      <category>rabbitmq</category>
      <category>aws</category>
      <category>amazonmq</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Understanding GraphQL: A Guide for Backend Developers</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Wed, 11 Oct 2023 09:56:00 +0000</pubDate>
      <link>https://dev.to/larapulse/understanding-graphql-a-guide-for-backend-developers-10b4</link>
      <guid>https://dev.to/larapulse/understanding-graphql-a-guide-for-backend-developers-10b4</guid>
      <description>&lt;p&gt;GraphQL is a powerful query language for APIs that provides a flexible and efficient way to request and manipulate data from a server. If you're a backend developer looking to learn about GraphQL, this guide will walk you through the basics, implementation, and its relevance to your work.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is GraphQL?
&lt;/h3&gt;

&lt;p&gt;GraphQL was developed by Facebook and has gained popularity as an alternative to RESTful APIs. It provides the following key features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Query Language:&lt;/strong&gt;&lt;br&gt;
GraphQL uses a schema to define the types of data available and the relationships between them. Clients can send queries to request exactly the data they need, and they receive a JSON response containing only that data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single Endpoint:&lt;/strong&gt;&lt;br&gt;
GraphQL typically has a single endpoint for all data operations, simplifying the API surface and allowing clients to request various data in a single request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Strongly Typed:&lt;/strong&gt;&lt;br&gt;
GraphQL APIs are strongly typed, meaning they have a defined schema with clear data types. Clients can introspect the schema to understand what data is available and how to query it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time Data:&lt;/strong&gt;&lt;br&gt;
GraphQL can be used to request real-time data through subscriptions, allowing clients to receive updates when data changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
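&lt;p&gt;As a sketch of the first point, consider a hypothetical &lt;code&gt;getUser&lt;/code&gt; query: the client lists exactly the fields it needs, and the JSON response mirrors that shape and nothing more:&lt;/p&gt;

```graphql
# Client query: request only the fields you need
query {
  getUser(userId: "42") {
    name
    email
  }
}

# Response (JSON) mirrors the query shape, nothing more:
# {
#   "data": {
#     "getUser": { "name": "Ada", "email": "ada@example.com" }
#   }
# }
```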

&lt;h3&gt;
  
  
  Implementing GraphQL
&lt;/h3&gt;

&lt;p&gt;Here's a step-by-step guide to implementing GraphQL in your backend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Define a Schema:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create a schema that describes the types of data your API will expose. You'll specify object types, queries, mutations (for write operations), and potentially subscriptions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight graphql"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;!&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;!):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Mutation&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;updateUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;!,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;UserInput&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Set Up Resolvers:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For each field in your schema, implement a resolver function that determines how to fetch that field's data. A resolver can be as simple as a database lookup or involve more complex operations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;resolvers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;Query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;info&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Fetch user data based on 'args.userId'&lt;/span&gt;
      &lt;span class="c1"&gt;// Return the user data&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;Mutation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;updateUser&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;info&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Update user data based on 'args.userId' and 'args.input'&lt;/span&gt;
      &lt;span class="c1"&gt;// Return the updated user data&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;User&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Resolver for the "name" field&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Resolver for the "email" field&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Middleware:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Set up middleware for handling tasks like authentication, authorization, and caching.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Server Implementation:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose a GraphQL server library or framework that fits your tech stack. Popular options include Apollo Server, Express GraphQL, and GraphQL Yoga for JavaScript/Node.js.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Testing and Documentation:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Write unit tests for your resolvers and document your GraphQL schema so that clients can understand how to use it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Integration with Frontend:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Implement the GraphQL client on your frontend to send queries and mutations to the GraphQL server.&lt;/p&gt;
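&lt;p&gt;Under the hood, most GraphQL clients simply send an HTTP POST with a JSON body containing &lt;code&gt;query&lt;/code&gt; and &lt;code&gt;variables&lt;/code&gt; keys to the single endpoint. A minimal sketch in plain JavaScript (the endpoint URL is a placeholder):&lt;/p&gt;

```javascript
// Minimal GraphQL client sketch: a plain HTTP POST, no client library needed.
// The endpoint URL below is a placeholder for your own server.
const query = `
  query GetUser($userId: ID!) {
    getUser(userId: $userId) {
      name
      email
    }
  }
`;

// Build the fetch options for a GraphQL request.
function buildRequest(userId) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables: { userId } }),
  };
}

// Usage (commented out so the sketch stays self-contained):
// fetch('https://example.com/graphql', buildRequest('42'))
//   .then((res) => res.json())
//   .then(({ data }) => console.log(data.getUser));
```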

&lt;h3&gt;
  
  
  Resolvers in GraphQL
&lt;/h3&gt;

&lt;p&gt;Resolvers are central to GraphQL, as they determine how to fetch the data for specific fields. Each field in your schema typically has its own resolver function, and these functions give you several ways to optimize data fetching:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If multiple fields can be efficiently fetched in a single query from the same data source, you can use a single resolver to retrieve all the necessary data.&lt;/li&gt;
&lt;li&gt;GraphQL clients request multiple fields in a single query, and the resolver functions are invoked only for the fields requested in that query.&lt;/li&gt;
&lt;li&gt;Resolvers allow you to tailor the database query to fetch only the data that corresponds to the requested fields.&lt;/li&gt;
&lt;/ul&gt;
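&lt;p&gt;To make the second point concrete, here is a toy illustration (not a real GraphQL engine) of the idea that field resolvers run only for the fields a client actually requested:&lt;/p&gt;

```javascript
// Toy illustration of per-field resolvers; not a real GraphQL engine.
const user = { id: '42', name: 'Ada', email: 'ada@example.com' };

const userResolvers = {
  name: (u) => u.name,
  email: (u) => u.email,
};

// Invoke only the resolvers for the requested fields.
function resolveFields(source, resolvers, requestedFields) {
  const result = {};
  for (const field of requestedFields) {
    result[field] = resolvers[field](source);
  }
  return result;
}

console.log(resolveFields(user, userResolvers, ['name']));
// { name: 'Ada' } (the email resolver was never invoked)
```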

&lt;h3&gt;
  
  
  Do You Need GraphQL?
&lt;/h3&gt;

&lt;p&gt;The decision to implement GraphQL depends on your project's requirements. You might benefit from GraphQL if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your project involves complex data fetching, with data from multiple sources.&lt;/li&gt;
&lt;li&gt;You want to avoid versioning your API, as GraphQL allows clients to request only the data they need.&lt;/li&gt;
&lt;li&gt;Real-time data updates are necessary through subscriptions.&lt;/li&gt;
&lt;li&gt;Frontend and backend teams work in parallel, as GraphQL enables frontend developers to specify their data requirements.&lt;/li&gt;
&lt;li&gt;Mobile apps are part of your project, as GraphQL's ability to fetch only the necessary data is advantageous for mobile apps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, if your project has simple data retrieval needs or is small in scope, you may not necessarily need to implement GraphQL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;GraphQL is a powerful tool for backend developers to create flexible and efficient APIs. Its ability to allow clients to request exactly the data they need and its strong typing make it a valuable addition to your API toolkit. By understanding GraphQL's principles and implementing it effectively, you can improve data retrieval and enhance your development process.&lt;/p&gt;

&lt;p&gt;Start exploring GraphQL today, and you'll find it to be a valuable addition to your backend development toolkit.&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>api</category>
      <category>backend</category>
    </item>
    <item>
      <title>10 Lesser-Known Tools and Websites to Spice Up Your Developer Toolbox</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Tue, 10 Oct 2023 14:51:00 +0000</pubDate>
      <link>https://dev.to/sergey-muc/10-lesser-known-tools-and-websites-to-spice-up-your-developer-toolbox-20pa</link>
      <guid>https://dev.to/sergey-muc/10-lesser-known-tools-and-websites-to-spice-up-your-developer-toolbox-20pa</guid>
      <description>&lt;p&gt;As a software engineer, you're no stranger to the tried-and-true tools and platforms that are essential for your day-to-day work. But what if you're looking for something a bit more unconventional, something to add a touch of creativity or a spark of inspiration to your coding journey? Look no further! We've curated a list of 10 lesser-known tools and websites that can make your job more interesting and enjoyable.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Carbon
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtdcjrfmnjgy9qd6ip61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwtdcjrfmnjgy9qd6ip61.png" alt="Carbon preview" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://carbon.now.sh/" rel="noopener noreferrer"&gt;Carbon&lt;/a&gt; allows you to create stunning and customizable code screenshots with syntax highlighting. Whether you want to share code snippets on social media or enhance your documentation, Carbon is a handy tool to have in your arsenal.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. DevHints
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpunk6twoo7eexklz83jo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpunk6twoo7eexklz83jo.png" alt="DevHints preview" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devhints.io/" rel="noopener noreferrer"&gt;DevHints&lt;/a&gt; is your cheat sheet and quick reference repository for various programming languages, frameworks, and tools. It's the perfect resource for quick syntax lookups without the need to dive deep into documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. RegExr
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqqaxq4yk7of4cm51hhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqqaxq4yk7of4cm51hhz.png" alt="RegExr preview" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://regexr.com/" rel="noopener noreferrer"&gt;RegExr&lt;/a&gt; simplifies working with regular expressions. This online tool provides a visual interface for building and testing regex patterns in real-time, making regex less intimidating.&lt;/p&gt;

&lt;p&gt;But that's not all! Here are a couple of bonus resources to complement your regex journey:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Regexper&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1e0ox1i28tdp1focflv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx1e0ox1i28tdp1focflv.png" alt="Regexper preview for uuid pattern" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://regexper.com/" rel="noopener noreferrer"&gt;Regexper&lt;/a&gt; takes your regular expressions to the next level. It generates interactive, visually appealing diagrams that help you understand your regex patterns. With Regexper, you can see your regex patterns come to life, making complex expressions easier to grasp.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- iHateRegex&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50qn9l0kwlo6snul5wem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50qn9l0kwlo6snul5wem.png" alt="iHateRegex preview for email pattern" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ihateregex.io/" rel="noopener noreferrer"&gt;iHateRegex&lt;/a&gt; is your ally in conquering regular expressions. It offers a collection of regex patterns for common use cases and provides explanations and examples for each. Whether you're a regex novice or a seasoned pro, iHateRegex can save you time and frustration by offering pre-built solutions and guidance.&lt;/p&gt;

&lt;p&gt;With RegExr, Regexper, and iHateRegex in your toolkit, you'll have everything you need to master regular expressions efficiently and effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Excalidraw
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1kp4vv2g4lptnfd674e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1kp4vv2g4lptnfd674e.png" alt="Excalidraw preview" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://excalidraw.com/" rel="noopener noreferrer"&gt;Excalidraw&lt;/a&gt; is a collaborative virtual whiteboard where you can sketch diagrams, flowcharts, and wireframes. It's perfect for brainstorming and discussing design ideas with your team.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. JQ
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc13cff4s073r5f7rsby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc13cff4s073r5f7rsby.png" alt="JQ command line output" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stedolan.github.io/jq/" rel="noopener noreferrer"&gt;JQ&lt;/a&gt; is a lightweight and powerful command-line JSON processor. It's a time-saving tool for manipulating and extracting data from JSON files effortlessly.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Quicktype
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhfgb3lmctwupvwjboid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhfgb3lmctwupvwjboid.png" alt="Quicktype preview" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://quicktype.io/" rel="noopener noreferrer"&gt;Quicktype&lt;/a&gt; automates the generation of code from JSON data. It's a real timesaver when dealing with complex JSON structures in your applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Wappalyzer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz9vhl71ow40ebr6nm50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz9vhl71ow40ebr6nm50.png" alt="Wappalyzer tech stack preview for larapulse blog" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.wappalyzer.com/" rel="noopener noreferrer"&gt;Wappalyzer&lt;/a&gt; is a browser extension that identifies the technologies and frameworks used on a website. It's a valuable tool for competitive analysis and staying up-to-date with web technologies.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Hacker Typer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8qzsrgx988lco22f873.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8qzsrgx988lco22f873.gif" alt="Hacker Typer preview" width="600" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hackertyper.net/" rel="noopener noreferrer"&gt;Hacker Typer&lt;/a&gt; is pure fun! It simulates typing like a Hollywood hacker. Use it to entertain your colleagues during presentations or meetings.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. The Useless Web
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7hm1mox6janl3iwecvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7hm1mox6janl3iwecvg.png" alt="The Useless Web preview" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://theuselessweb.com/" rel="noopener noreferrer"&gt;The Useless Web&lt;/a&gt; takes you on a journey to random, quirky, and entirely useless websites. It's a delightful way to take a quick break and have a laugh when you need to reset your mind.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. A Soft Murmur
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F837mkaqaf50ne0grxcx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F837mkaqaf50ne0grxcx8.png" alt="A Soft Murmur preview" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://asoftmurmur.com/" rel="noopener noreferrer"&gt;A Soft Murmur&lt;/a&gt; lets you create custom ambient sounds to improve your focus or relax while working. Mix sounds like rain, thunder, and birdsong to create your perfect background noise.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bonus Tools:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Lorem Ipsum&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpan9spjhqulrafemw108.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpan9spjhqulrafemw108.png" alt="Lorem Ipsum preview" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.lipsum.com/" rel="noopener noreferrer"&gt;Lorem Ipsum&lt;/a&gt; is the classic Lorem Ipsum text generator, perfect for generating placeholder text in your projects.&lt;/p&gt;

&lt;p&gt;You may also want to consider alternatives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://baconipsum.com/" rel="noopener noreferrer"&gt;Bacon Ipsum&lt;/a&gt; adds a delicious twist to placeholder text. Instead of the standard Lorem Ipsum, it generates text with a meaty theme. If you're feeling hungry for creative content, give Bacon Ipsum a try.&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://www.cupcakeipsum.com/" rel="noopener noreferrer"&gt;Cupcake Ipsum&lt;/a&gt; serves up a sweet alternative to Lorem Ipsum. It generates text with a confectionery theme, perfect for adding a touch of whimsy to your design mockups. Cupcake Ipsum is a delightful way to sprinkle some fun into your projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Lorem Ipsum, Bacon Ipsum, and Cupcake Ipsum at your disposal, you can choose the perfect placeholder text to match the mood of your design or project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lorem Picsum&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33skqll6tklbjfe4i3bn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33skqll6tklbjfe4i3bn.png" alt="Lorem Picsum preview" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://picsum.photos/" rel="noopener noreferrer"&gt;Lorem Picsum&lt;/a&gt; provides random placeholder images, adding visual appeal to your designs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;These tools and websites may not be the most conventional choices for developers, but they can certainly add a touch of creativity, convenience, or fun to your work. So go ahead, explore, and enjoy the journey of discovering new and exciting ways to enhance your development experience!&lt;/p&gt;

</description>
      <category>fun</category>
      <category>codingtools</category>
      <category>productivityhacks</category>
      <category>developerresources</category>
    </item>
    <item>
      <title>CPU Cache Basics</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Mon, 09 Oct 2023 09:21:00 +0000</pubDate>
      <link>https://dev.to/larapulse/cpu-cache-basics-57ej</link>
      <guid>https://dev.to/larapulse/cpu-cache-basics-57ej</guid>
      <description>&lt;p&gt;CPU caches are the unsung heroes of modern computing, silently speeding up your computer's performance. These small but incredibly fast memory storage areas play a vital role in ensuring that your CPU can access frequently used data and instructions with lightning speed. In this article, we'll explore the world of CPU caches, uncovering their design, optimization strategies, and their indispensable role in enhancing the performance of software and systems.&lt;/p&gt;

&lt;p&gt;Here's what you should know as a software engineer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;L1 Cache (Level 1 Cache):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L1 cache is the smallest but fastest cache located closest to the CPU cores.&lt;/li&gt;
&lt;li&gt;It's divided into two parts: L1i (instruction cache) and L1d (data cache). L1i stores instructions, and L1d stores data.&lt;/li&gt;
&lt;li&gt;The purpose of L1 cache is to store the most frequently used instructions and data to speed up the CPU's operations.&lt;/li&gt;
&lt;li&gt;It has low latency (the time it takes to access data) and is usually separate for each CPU core.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;L2 Cache (Level 2 Cache):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L2 cache is larger than L1 cache but slightly slower.&lt;/li&gt;
&lt;li&gt;It is shared among CPU cores in many multi-core processors.&lt;/li&gt;
&lt;li&gt;Its role is to store additional frequently used data and instructions that couldn't fit in L1 cache.&lt;/li&gt;
&lt;li&gt;L2 cache is still faster than accessing data from RAM (main memory).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;L3 Cache (Level 3 Cache, if available):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L3 cache is even larger but slightly slower than L2 cache.&lt;/li&gt;
&lt;li&gt;It is shared across all CPU cores in a multi-core processor.&lt;/li&gt;
&lt;li&gt;L3 cache acts as a backup storage for frequently used data and instructions that couldn't fit in L1 or L2 cache.&lt;/li&gt;
&lt;li&gt;Having an L3 cache can help reduce bottlenecks when multiple CPU cores are accessing memory simultaneously.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How it works:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When the CPU needs data or instructions, it first checks if they are in the L1 cache.&lt;/li&gt;
&lt;li&gt;If the needed information is in the L1 cache (or L2 cache if not found in L1), it's called a cache hit, and the CPU can quickly retrieve it.&lt;/li&gt;
&lt;li&gt;If the data is not in the cache, it's called a cache miss. In this case, the CPU has to fetch the data from slower main memory (RAM), which takes more time.&lt;/li&gt;
&lt;li&gt;The goal of the cache hierarchy is to reduce the number of cache misses by storing the most frequently used data and instructions in the faster, smaller caches.&lt;/li&gt;
&lt;/ul&gt;
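&lt;p&gt;The lookup order above can be sketched as a toy model (the cache contents and per-level cycle counts below are illustrative assumptions, not real hardware values):&lt;/p&gt;

```python
# A toy model of the L1 -> L2 -> L3 -> RAM lookup order described above.
# Latencies are illustrative placeholders, not measurements of any real CPU.

LATENCY_CYCLES = {"L1": 2, "L2": 7, "L3": 25, "RAM": 80}

def lookup(address, caches):
    """Walk the hierarchy; return (level where found, cycles spent)."""
    spent = 0
    for level in ("L1", "L2", "L3"):
        spent += LATENCY_CYCLES[level]
        if address in caches.get(level, set()):
            return level, spent                       # cache hit
    return "RAM", spent + LATENCY_CYCLES["RAM"]       # miss at every level

# Each level holds everything the smaller levels hold, plus more.
caches = {"L1": {0x10}, "L2": {0x10, 0x20}, "L3": {0x10, 0x20, 0x30}}
print(lookup(0x10, caches))  # found immediately in L1
print(lookup(0x40, caches))  # miss everywhere, falls through to RAM
```

Note how a full miss pays for every level it checked on the way down, which is exactly why reducing cache misses matters.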

&lt;h3&gt;
  
  
  When it is used:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;L1, L2, and L3 caches are used constantly as the CPU executes programs.&lt;/li&gt;
&lt;li&gt;They are especially beneficial for speeding up frequently executed code and data access patterns.&lt;/li&gt;
&lt;li&gt;The cache hardware manages what gets stored in the cache, so as a software engineer, you generally don't interact with it directly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd14ya22r75jwo7mvn25h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd14ya22r75jwo7mvn25h.png" alt="CPU cache schema" width="463" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimization principles
&lt;/h3&gt;

&lt;p&gt;Caches like L1, L2, and L3 are &lt;em&gt;&lt;strong&gt;managed by the hardware&lt;/strong&gt;&lt;/em&gt;, and as a software engineer, &lt;em&gt;you don't have direct control over which programs or data are stored in them&lt;/em&gt;. However, you can follow certain programming and optimization principles to increase the likelihood that your program benefits from cache usage. Here's how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Locality of Reference:&lt;/strong&gt; Caches work best when your program exhibits good locality of reference. There are two types of locality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Temporal Locality:&lt;/strong&gt; This means that if you access a piece of data once, you're likely to access it again in the near future. To leverage temporal locality, try to reuse data that you've recently accessed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spatial Locality:&lt;/strong&gt; This refers to the tendency to access data located near recently accessed data. To benefit from spatial locality, try to access data in a sequential or predictable pattern.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cache-Friendly Data Structures:&lt;/strong&gt; Use data structures and algorithms that are cache-friendly. For example, when iterating over an array, processing elements that are stored close to each other in memory is more cache-efficient than jumping around in memory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cache Line Awareness:&lt;/strong&gt; Cache systems typically work with fixed-size cache lines (e.g., 64 bytes). Be aware of this when designing your data structures. If you only need a small portion of a cache line, avoid loading the entire line to reduce cache pollution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compiler and Compiler Flags:&lt;/strong&gt; Compilers can optimize code to improve cache locality. Use compiler flags (e.g., &lt;strong&gt;-O2&lt;/strong&gt; or &lt;strong&gt;-O3&lt;/strong&gt; in GCC) to enable optimizations. Additionally, understand how your compiler optimizes code for your target architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Profiling and Benchmarking:&lt;/strong&gt; Use profiling tools to analyze cache behavior in your program. Tools like perf (on Linux) or performance analyzers in integrated development environments (IDEs) can help you identify cache-related issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Thread Affinity:&lt;/strong&gt; If you're working with multi-threaded programs, consider using thread affinity techniques to bind threads to specific CPU cores. This can help minimize cache contention between threads.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
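&lt;p&gt;Principles 1 and 2 can be illustrated with a sketch of row-major versus column-major traversal (a toy example; the flat list and the matrix size are arbitrary choices):&lt;/p&gt;

```python
# Spatial locality: a matrix stored row-major and traversed in row order
# touches memory sequentially; traversing it column-first jumps around.

N = 500
matrix = list(range(N * N))  # flat row-major storage: element (i, j) at i*N + j

def sum_row_major(m, n):
    # inner loop walks adjacent indices: cache-friendly access pattern
    return sum(m[i * n + j] for i in range(n) for j in range(n))

def sum_col_major(m, n):
    # inner loop strides by n elements each step: cache-hostile pattern
    return sum(m[i * n + j] for j in range(n) for i in range(n))

assert sum_row_major(matrix, N) == sum_col_major(matrix, N)
```

In lower-level languages (C, Rust, and so on) the row-major version is typically several times faster on large matrices; in CPython, object overhead masks much of the effect, so treat this purely as an illustration of the access pattern.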

&lt;h3&gt;
  
  
  Cache sizes
&lt;/h3&gt;

&lt;p&gt;Regarding the sizes of cache levels, they can vary widely depending on the CPU architecture. However, here's a rough estimate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L1 Cache: Typically ranges from 16KB to 128KB per core.&lt;/li&gt;
&lt;li&gt;L2 Cache: Can range from 256KB to 1MB per core or be shared among multiple cores.&lt;/li&gt;
&lt;li&gt;L3 Cache: Usually shared among multiple cores and can range from 2MB to 32MB or more in high-end processors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep in mind that these numbers can change with different CPU models and generations. You can usually find the specific cache sizes for your CPU in its documentation or by checking the manufacturer's website. Understanding cache sizes can help you make informed decisions when optimizing your code for specific hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cache latencies
&lt;/h3&gt;

&lt;p&gt;Let's compare the latencies of different memory levels, including CPU caches and RAM (main memory):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;L1 Cache Latency:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L1 cache is the fastest and has the lowest latency among all memory levels.&lt;/li&gt;
&lt;li&gt;Typical latency ranges from 1 to 3 cycles, which is extremely fast.&lt;/li&gt;
&lt;li&gt;Accessing data from L1 cache is significantly faster than any other memory level.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;L2 Cache Latency:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L2 cache has slightly higher latency compared to L1 cache.&lt;/li&gt;
&lt;li&gt;Typical latency ranges from 4 to 10 cycles, depending on the CPU architecture.&lt;/li&gt;
&lt;li&gt;It is still much faster than accessing RAM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;L3 Cache Latency:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L3 cache has higher latency compared to L2 and L1 caches.&lt;/li&gt;
&lt;li&gt;Typical latency ranges from 10 to 40 cycles, depending on the CPU and cache design.&lt;/li&gt;
&lt;li&gt;While slower than L1 and L2, it is still much faster than RAM.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;RAM (Main Memory) Latency:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accessing data from RAM is significantly slower than accessing any level of cache.&lt;/li&gt;
&lt;li&gt;RAM latency can vary widely, but it typically ranges from 60 to 100 cycles or more.&lt;/li&gt;
&lt;li&gt;RAM access times can be one to two orders of magnitude slower than L1 cache access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Assuming a CPU clock speed of 3 GHz (3 billion cycles per second):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L1 cache access time:

&lt;ul&gt;
&lt;li&gt;Fastest case (1 cycle): 1 / (3 × 10^9) s ≈ 0.33 nanoseconds&lt;/li&gt;
&lt;li&gt;Slowest case (3 cycles): 3 / (3 × 10^9) s = 1 nanosecond&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;L2 cache access time:

&lt;ul&gt;
&lt;li&gt;Fastest case (4 cycles): 4 / (3 × 10^9) s ≈ 1.33 nanoseconds&lt;/li&gt;
&lt;li&gt;Slowest case (10 cycles): 10 / (3 × 10^9) s ≈ 3.33 nanoseconds&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;L3 cache access time:

&lt;ul&gt;
&lt;li&gt;Fastest case (10 cycles): 10 / (3 × 10^9) s ≈ 3.33 nanoseconds&lt;/li&gt;
&lt;li&gt;Slowest case (40 cycles): 40 / (3 × 10^9) s ≈ 13.33 nanoseconds&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;RAM access time:

&lt;ul&gt;
&lt;li&gt;Fastest case (60 cycles): 60 / (3 × 10^9) s = 20 nanoseconds&lt;/li&gt;
&lt;li&gt;Slowest case (100 cycles): 100 / (3 × 10^9) s ≈ 33.33 nanoseconds&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd25ip368jj9z5tcp566t.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd25ip368jj9z5tcp566t.jpeg" alt="Latency numbers you should know - from bytebytego" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To put these numbers into perspective, accessing data from L1 cache can be up to 10 times faster than accessing the same data from L2 cache, and as much as 100 times faster than accessing it from RAM.&lt;/p&gt;

&lt;p&gt;Efficient use of CPU caches is crucial for optimizing software performance because minimizing cache misses and utilizing cache-friendly algorithms can help reduce the impact of slower RAM access times. This is why understanding cache behavior and optimizing for cache locality is a key consideration in high-performance computing and software development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;The importance of CPU caches in modern computing cannot be overstated. These small, high-speed memory storage areas play a pivotal role in enhancing software performance by reducing the latency of data access. Cache-aware programming, which involves optimizing code and data structures to maximize cache utilization, has a profound impact on software performance.&lt;/p&gt;

&lt;p&gt;In summary, caches like L1, L2, and L3 are crucial for optimizing CPU performance by reducing memory access times. As a software engineer, understanding the basics of how caches work can help you write more efficient code, such as optimizing data access patterns and minimizing cache thrashing (when cache contents change frequently). However, the specifics of cache management are typically handled by the hardware and the operating system.&lt;/p&gt;

</description>
      <category>cpu</category>
      <category>cache</category>
      <category>ram</category>
    </item>
    <item>
      <title>Dealing with RabbitMQ exchange types</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Tue, 20 Apr 2021 14:53:05 +0000</pubDate>
      <link>https://dev.to/larapulse/dealing-with-rabbitmq-exchange-types-44ck</link>
      <guid>https://dev.to/larapulse/dealing-with-rabbitmq-exchange-types-44ck</guid>
      <description>&lt;p&gt;Exchanges control the routing of messages to queues.  Each exchange type defines a specific routing algorithm which the server uses to determine which bound queues a published message should be routed to.&lt;/p&gt;

&lt;p&gt;RabbitMQ provides four types of exchanges: &lt;em&gt;Direct&lt;/em&gt;, &lt;em&gt;Fanout&lt;/em&gt;, &lt;em&gt;Topic&lt;/em&gt;, and &lt;em&gt;Headers&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fanout Exchanges
&lt;/h3&gt;

&lt;p&gt;The Fanout exchange type routes messages to all bound queues indiscriminately.  If a routing key is provided, it will simply be ignored.  The following illustrates how the fanout exchange type works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8xfbjebhhvdo1q3jaygq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8xfbjebhhvdo1q3jaygq.png" alt="Fanout Exchange" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Fanout exchange type is useful for facilitating the &lt;a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern" rel="noopener noreferrer"&gt;publish-subscribe pattern&lt;/a&gt;. When using the fanout exchange type, different queues can be declared to handle messages in different ways.  For instance, a message indicating a customer order has been placed might be received by one queue whose consumers fulfil the order, another whose consumers update a read-only history of orders, and yet another whose consumers record the order for reporting purposes.&lt;/p&gt;
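&lt;p&gt;The behaviour can be sketched with a minimal in-memory model (a toy illustration, not the RabbitMQ client API; the queue names are hypothetical):&lt;/p&gt;

```python
# A minimal in-memory model of a fanout exchange: every bound queue receives
# a copy of each published message, and the routing key is ignored.

class FanoutExchange:
    def __init__(self):
        self.queues = {}

    def bind(self, queue_name):
        self.queues.setdefault(queue_name, [])

    def publish(self, message, routing_key=None):
        # routing_key is accepted but deliberately ignored, as in AMQP fanout
        for queue in self.queues.values():
            queue.append(message)

orders = FanoutExchange()
for q in ("fulfilment", "order-history", "reporting"):
    orders.bind(q)
orders.publish({"order_id": 42})  # all three queues get a copy
```

With a real broker you would declare the exchange through a client library instead, e.g. pika's &lt;code&gt;channel.exchange_declare(exchange='orders', exchange_type='fanout')&lt;/code&gt;.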

&lt;h3&gt;
  
  
  Direct Exchanges
&lt;/h3&gt;

&lt;p&gt;The Direct exchange type routes messages with a routing key equal to the routing key declared by the binding queue.  The following illustrates how the direct exchange type works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnoathf8v0slv1309zss1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnoathf8v0slv1309zss1.png" alt="Direct Exchange" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Direct exchange type is useful when you would like to distinguish messages published to the same exchange using a simple string identifier.  This is the type of exchange that was used in our Hello World example.  As discussed in part 3 of our series, every queue is automatically bound to a default exchange using a routing key equal to the queue name.  This default exchange is declared as a Direct exchange.  In our example, the queue named "hello-world-queue" was bound to the default exchange with a routing key of "hello-world-queue", so publishing a message to the default exchange (identified with an empty string) routed the message to the queue named "hello-world-queue".&lt;/p&gt;

&lt;h3&gt;
  
  
  Topic Exchanges
&lt;/h3&gt;

&lt;p&gt;The Topic exchange type routes messages to queues whose routing key matches all, or a portion of a routing key. With topic exchanges, messages are published with routing keys containing a series of words separated by a dot (e.g. "&lt;code&gt;word1.word2.word3&lt;/code&gt;").  Queues binding to a topic exchange supply a matching pattern for the server to use when routing the message.  Patterns may contain an asterisk ("&lt;code&gt;*&lt;/code&gt;") to match a word in a specific position of the routing key, or a hash ("&lt;code&gt;#&lt;/code&gt;") to match zero or more words.  For example, a message published with a routing key of "&lt;code&gt;honda.civic.navy&lt;/code&gt;" would match queues bound with "&lt;code&gt;honda.civic.navy&lt;/code&gt;", "&lt;code&gt;*.civic.*&lt;/code&gt;", "&lt;code&gt;honda.#&lt;/code&gt;", or "&lt;code&gt;#&lt;/code&gt;", but would not match "&lt;code&gt;honda.accord.navy&lt;/code&gt;", "&lt;code&gt;honda.accord.silver&lt;/code&gt;", "&lt;code&gt;*.accord.*&lt;/code&gt;", or "&lt;code&gt;ford.#&lt;/code&gt;".  The following illustrates how the topic exchange type works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh9u0kz7ms0p49jatp0ea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh9u0kz7ms0p49jatp0ea.png" alt="Topic Exchange" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The topic exchange type is useful for directing messages based on multiple categories (e.g. product type and shipping preference), or for routing messages originating from multiple sources (e.g. logs containing an application name and severity level).&lt;/p&gt;
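&lt;p&gt;The wildcard rules above can be sketched as a small helper (a simplified model of AMQP topic matching; it does not cover every edge case, such as "&lt;code&gt;honda.#&lt;/code&gt;" also matching the bare key "&lt;code&gt;honda&lt;/code&gt;"):&lt;/p&gt;

```python
import re

def topic_matches(pattern, routing_key):
    """Simplified AMQP topic match: '*' = exactly one word,
    '#' = one or more words (real AMQP also allows zero)."""
    parts = []
    for word in pattern.split("."):
        if word == "#":
            parts.append(r"[^.]+(?:\.[^.]+)*")   # a run of dot-separated words
        elif word == "*":
            parts.append(r"[^.]+")               # exactly one word
        else:
            parts.append(re.escape(word))        # literal word
    return re.fullmatch(r"\.".join(parts), routing_key) is not None

print(topic_matches("*.civic.*", "honda.civic.navy"))  # True
print(topic_matches("ford.#", "honda.civic.navy"))     # False
```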

&lt;h3&gt;
  
  
  Headers Exchanges
&lt;/h3&gt;

&lt;p&gt;The Headers exchange type routes messages based upon a matching of message headers to the expected headers specified by the binding queue.  It is similar to the topic exchange type in that more than one criterion can be specified as a filter, but it differs in that the criteria are expressed in the message headers rather than the routing key, may occur in any order, and may be matched as either any or all of the specified headers.  The following illustrates how the headers exchange type works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzvr327v08wkwtlc26wz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzvr327v08wkwtlc26wz2.png" alt="Headers Exchange" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Headers exchange type is useful for directing messages that may contain a subset of known criteria whose order is not fixed. It also provides a more convenient way of matching on complex types (e.g. a serialized object) as the matching criteria.&lt;/p&gt;
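&lt;p&gt;The any/all matching can be sketched as follows (a simplified model; RabbitMQ expresses this choice through the &lt;code&gt;x-match&lt;/code&gt; binding argument, and the header names and values here are hypothetical):&lt;/p&gt;

```python
def headers_match(binding, message_headers):
    """Simplified headers-exchange match. The binding's 'x-match' argument
    selects 'all' (every pair must match) or 'any' (at least one must)."""
    mode = binding.get("x-match", "all")
    pairs = [(k, v) for k, v in binding.items() if k != "x-match"]
    hits = [message_headers.get(k) == v for k, v in pairs]
    return all(hits) if mode == "all" else any(hits)

binding = {"x-match": "any", "format": "pdf", "type": "report"}
print(headers_match(binding, {"format": "pdf"}))   # True: one header matches
print(headers_match(binding, {"format": "jpeg"}))  # False: nothing matches
```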




&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;That wraps up our introduction to each of the exchange types.  Next time, we’ll walk through an example which demonstrates declaring a direct exchange explicitly and take a look at the push API.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Interested in reading more about digital literacy? Follow &lt;a href="https://blog.larapulse.com/digital-literacy" rel="noopener noreferrer"&gt;this link&lt;/a&gt; to find more articles 😉&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>rabbitmq</category>
      <category>exchange</category>
      <category>messaging</category>
      <category>event</category>
    </item>
    <item>
<title>Event-driven architecture over standard client-server approach</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Fri, 16 Apr 2021 14:32:08 +0000</pubDate>
      <link>https://dev.to/larapulse/event-driven-architecture-over-standard-client-server-aproach-3naf</link>
      <guid>https://dev.to/larapulse/event-driven-architecture-over-standard-client-server-aproach-3naf</guid>
      <description>&lt;p&gt;Just 20 years ago, most of the business was offline and web development was just starting to gain traction. But gradually everything develops, the whole business is trying to go online. And along with it, web development is actively developing, there are more and more services. Web development is an integral part of any modern business.&lt;/p&gt;

&lt;p&gt;To keep a business running fast, we need to consider all the third-party services it depends on. First, let's take a look at the standard architecture we're used to.&lt;/p&gt;

&lt;h3&gt;
  
  
  Standard Client-Server architecture
&lt;/h3&gt;

&lt;p&gt;With a standard synchronous architecture, most changes have a large number of dependencies, sometimes on slow and unpredictable systems. This slows down interactions and increases the error rate. Maintenance is also troublesome, as there is usually a large number of possible side effects to consider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp5yvl9ip76vmuhm2nyar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fp5yvl9ip76vmuhm2nyar.png" alt="Synchronous client-server architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  👍 Advantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Simple to understand&lt;/li&gt;
&lt;li&gt;Easier to debug and test&lt;/li&gt;
&lt;li&gt;Better data consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  👎 Disadvantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;High rate of client-visible errors&lt;/li&gt;
&lt;li&gt;Slow&lt;/li&gt;
&lt;li&gt;If one thing fails, everything fails; this does not work well for complex business processes&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Event-Driven architecture
&lt;/h3&gt;

&lt;p&gt;In an event-driven architecture, when a service performs some piece of work that other services might be interested in, that service produces an &lt;em&gt;event&lt;/em&gt; — a record of the completed action. Other services consume those events so that they can perform any of their tasks needed as a result of the event.&lt;/p&gt;

&lt;p&gt;By decoupling the core change from its side effects, performance increases significantly and error rates go down.&lt;/p&gt;

&lt;p&gt;Secondary concerns can be handled independently, which makes maintenance easier and even allows involving different teams and/or technologies best suited for the respective tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6zpskswejolmj3wecw0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6zpskswejolmj3wecw0y.png" alt="Event-driven architecture" width="800" height="575"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h4&gt;
  
  
  👍 Advantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Responsive&lt;/li&gt;
&lt;li&gt;Resilient &lt;/li&gt;
&lt;li&gt;Scales as a whole as well as only where needed &lt;/li&gt;
&lt;li&gt;Development can be distributed between multiple teams&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  👎 Disadvantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Extra work needed to keep data consistency&lt;/li&gt;
&lt;li&gt;Steeper learning curve&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Event-driven architecture has been gaining popularity lately, and for good reason: services grow, become larger, and acquire new dependencies. Decoupling responsibilities can be a solution in this case and improve performance, but you shouldn't adopt the architecture without a detailed understanding of all its pros and cons. For small and simple projects, a standard synchronous architecture may be a much better solution.&lt;/p&gt;

&lt;p&gt;To read more, follow these links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bryanavery.co.uk/what-is-event-driven-microservice-architecture/" rel="noopener noreferrer"&gt;What is Event-Driven microservice architecture?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.infoq.com/articles/realtime-event-driven-ecosystem/" rel="noopener noreferrer"&gt;The Challenges of Building a Reliable Real-Time Event-Driven Ecosystem&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Interested in reading more about APIs? Follow &lt;a href="https://blog.larapulse.com/api" rel="noopener noreferrer"&gt;this link&lt;/a&gt; to find more articles 😉&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>request</category>
      <category>architecture</category>
      <category>events</category>
      <category>server</category>
    </item>
    <item>
      <title>Top commands I am using in my daily work</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Wed, 09 Dec 2020 15:37:17 +0000</pubDate>
      <link>https://dev.to/sergey-muc/top-commands-i-am-using-in-my-daily-work-305m</link>
      <guid>https://dev.to/sergey-muc/top-commands-i-am-using-in-my-daily-work-305m</guid>
      <description>&lt;p&gt;Nowadays developers should handle a lot of different operations, especially on their own machine. To simplify the work and monitoring of processes in the system and in the network, I picked up several tools to make work in the terminal more comfortable.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: &lt;em&gt;this article is not a comprehensive list of the tools I use every day; it is about useful &lt;code&gt;top&lt;/code&gt;-like tools that help me do my job better and work more efficiently.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;htop&lt;/code&gt;: Linux Process Monitoring
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;htop&lt;/code&gt; is a far more advanced, interactive, real-time Linux process-monitoring tool. It is very similar to the Linux &lt;code&gt;top&lt;/code&gt; command, but offers extra features such as a user-friendly interface for managing processes, shortcut keys, vertical and horizontal views of the processes, and much more. &lt;code&gt;htop&lt;/code&gt; is a third-party tool and isn't included in Linux systems by default, so you need to install it manually.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frxiqo26a8aghpo3i458r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frxiqo26a8aghpo3i458r.png" alt="htop - Process monitoring tool" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;it shows a frequently updated list of the processes running on a computer, normally ordered by the amount of CPU usage;&lt;/li&gt;
&lt;li&gt;it provides a full list of running processes, instead of only the top resource-consuming ones;&lt;/li&gt;
&lt;li&gt;it uses colour and gives visual information about processor, swap and memory status;&lt;/li&gt;
&lt;li&gt;it can also display the processes as a tree;&lt;/li&gt;
&lt;li&gt;it provides a convenient, visual, cursor-controlled interface for sending signals to processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;ctop&lt;/code&gt;: interface for container metrics
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;ctop&lt;/code&gt; provides a concise and condensed overview of real-time metrics for multiple containers, as well as a single container view for inspecting a specific container.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It works mostly the same as &lt;code&gt;docker stats&lt;/code&gt;, but it has a better interface, a single-container view, and logging pages.&lt;/p&gt;
&lt;/blockquote&gt;
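&lt;p&gt;&lt;code&gt;ctop&lt;/code&gt; ships as a single static binary; a typical setup looks like this (the release URL and version below are illustrative; check the &lt;code&gt;bcicen/ctop&lt;/code&gt; releases page for the current one):&lt;/p&gt;

```shell
# macOS (Homebrew)
brew install ctop

# Linux: grab the static binary from the GitHub releases page
sudo wget -O /usr/local/bin/ctop \
  https://github.com/bcicen/ctop/releases/download/v0.7.7/ctop-0.7.7-linux-amd64
sudo chmod +x /usr/local/bin/ctop

# run it; -a limits the list to active containers only
ctop -a
```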

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F06vhatgsjgj8gtg8qse0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F06vhatgsjgj8gtg8qse0.png" alt="Ctop - Top-like interface for container metrics" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;it shows a frequently updated list of the containers running on a computer, along with their metrics;&lt;/li&gt;
&lt;li&gt;it comes with built-in support for Docker and runC;&lt;/li&gt;
&lt;li&gt;it uses colour and gives visual information about CPU, memory and network usage;&lt;/li&gt;
&lt;li&gt;it shows logs per instance;&lt;/li&gt;
&lt;li&gt;it offers full-screen, menu-driven operation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fglhwk3ofpt20fzkx2b7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fglhwk3ofpt20fzkx2b7f.png" alt="Ctop - Instance logging" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;mytop&lt;/code&gt;: real-time MySQL threads and performance monitoring
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;mytop&lt;/code&gt; is an open-source, command-line tool for monitoring MySQL performance. &lt;code&gt;mytop&lt;/code&gt; connects to a MySQL server and periodically runs the &lt;code&gt;show processlist&lt;/code&gt; and &lt;code&gt;show global status&lt;/code&gt; commands, then summarizes the information in a useful format. Using &lt;code&gt;mytop&lt;/code&gt;, we can monitor MySQL threads, queries, and uptime in real time, as well as see which user is running queries on which database, which queries are slow, and more. All this information can be used to optimize MySQL server performance.&lt;/p&gt;
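&lt;p&gt;A typical invocation looks like this (user, password, and database are placeholders):&lt;/p&gt;

```shell
# connect to a local MySQL server and monitor one database
mytop -u dbuser -p secret -d mydb -h localhost

# same, but refreshing every 2 seconds instead of the default delay
mytop -u dbuser -p secret -d mydb -s 2
```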

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwyc75c2zrn2pfj2buyae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fwyc75c2zrn2pfj2buyae.png" alt="Mytop - monitoring queries and processes" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Features
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;mytop&lt;/code&gt; provides a command-line shell interface to monitor real-time MySQL/MariaDB threads, queries per second, the process list, and database performance, giving the database administrator insight into how to better optimize the server to handle heavy load.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;mytop&lt;/code&gt; display screen is broken into two parts. The top 4 lines (the header) contain summary information about your MySQL server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The first line identifies the hostname of the server and the version of MySQL it is running. The right side shows the &lt;code&gt;uptime&lt;/code&gt; of the MySQL server process in &lt;code&gt;days+hours:minutes:seconds&lt;/code&gt; format (much like FreeBSD's top), as well as the current time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The second line displays the total number of queries the server has processed, the average number of queries per second, the real-time number of queries per second, and the number of slow queries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third line deals with threads. Versions of MySQL before 3.23.x didn't expose this information, so there you'll see all zeros.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;And the fourth line displays key buffer efficiency (how often keys are read from the buffer rather than disk) and the number of bytes that MySQL has sent and received.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The second part of the display lists as many threads as can fit on screen. By default, they are sorted according to their idle time (least idle first).&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Interested in reading more about Development Tools? Follow &lt;a href="https://blog.larapulse.com/dev-tools" rel="noopener noreferrer"&gt;this link&lt;/a&gt; to find more articles 😉&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>metrics</category>
      <category>tools</category>
      <category>monitoring</category>
      <category>performance</category>
    </item>
    <item>
      <title>How and why we updated RabbitMQ queues on production</title>
      <dc:creator>Sergey Podgorny</dc:creator>
      <pubDate>Sat, 05 Dec 2020 13:16:49 +0000</pubDate>
      <link>https://dev.to/ottonova/how-and-why-we-updated-rabbitmq-queues-on-production-2h76</link>
      <guid>https://dev.to/ottonova/how-and-why-we-updated-rabbitmq-queues-on-production-2h76</guid>
      <description>&lt;p&gt;In this article, I would like to share with you and the whole internet our experience of dealing with RabbitMQ Live updates. You will learn some details about our architecture and use cases. Let's start from the simplest... Why do we need RabbitMQ in our business?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5v187fajkej1sydbgon1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5v187fajkej1sydbgon1.png" alt="Backend with synchronous tasks processing" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Our Architecture
&lt;/h3&gt;

&lt;p&gt;As a health insurance company, our business depends on many different third-party services to analyze risks, process claim documents, collect monthly payments, etc. All these processes take some time, so to keep our services fast and autonomous from each other, we use asynchronous processing for tasks that can be done in the background. This approach speeds up responses and lets us do more in the background, e.g. sending emails, creating policies, verifying acceptance, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsqqwauyd4ohrf5dmlrvi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsqqwauyd4ohrf5dmlrvi.png" alt="Backend with synchronous tasks processing" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whenever a client expresses some intent to the API by making a request to it, this intent can create follow-up tasks. These tasks do not need to be handled synchronously, i.e. they do not need to be handled while processing the initial request. Instead, we put a message about this intent onto the message queue where it can be picked up asynchronously by another process and handled independently from the original request.&lt;/p&gt;
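&lt;p&gt;As a toy sketch of this pattern (an in-process &lt;code&gt;queue.Queue&lt;/code&gt; stands in for RabbitMQ here; the intent payloads are made up):&lt;/p&gt;

```python
import json
import queue

# Stand-in for the message broker; in production this is a RabbitMQ queue.
tasks = queue.Queue()

def handle_request(intent):
    """Request handler: enqueue the follow-up task and answer immediately."""
    tasks.put(json.dumps(intent))
    return "202 Accepted"

def worker_drain():
    """Background worker: pick up queued intents and handle them later."""
    handled = []
    while not tasks.empty():
        handled.append(json.loads(tasks.get()))
    return handled

status = handle_request({"type": "send_welcome_email", "user_id": 42})
print(status)          # the client gets its response right away
print(worker_drain())  # the follow-up task is processed independently
```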

&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;But with great opportunities comes great responsibility&lt;/em&gt;. Message processing is critical for our business. Some messages could &lt;a href="https://www.rabbitmq.com/dlx.html#:~:text=The%20reason%20is%20a%20name,allowed%20queue%20length%20was%20exceeded" rel="noopener noreferrer"&gt;expire without being consumed, or be rejected as inconsistent with the queue's restricted arguments&lt;/a&gt;. In theory, this should not happen, or only in very rare cases. But since we work with customer data, we do not want to lose important messages. To keep dead messages saved in the message broker without letting them get stuck in the original queue, we use the &lt;code&gt;dead-letter&lt;/code&gt; feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F67kadgxlehzgiovfpjlz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F67kadgxlehzgiovfpjlz.png" alt="An old dead-letter implementation" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Messages are published to an exchange and can be routed to multiple queues depending on the routing key. As you can see from the image above, we used the same dead-letter scheme as for the original queues, so dead messages may end up in the wrong dead-letter queues. This is not very critical if you pick up dead messages manually (considering that they are rare), but it is nevertheless strange to find these messages in the wrong place.&lt;/p&gt;

&lt;p&gt;To solve this problem, we need to add a new argument to the queue properties: &lt;code&gt;x-dead-letter-routing-key&lt;/code&gt;, and it should be unique per queue. As a unique value for the routing key, we can use the queue name itself. This idea brought our team one step closer to a good solution: &lt;em&gt;we don't need a dead-letter exchange anymore&lt;/em&gt; 🎉. To simplify things, we can use the default nameless exchange &lt;code&gt;""&lt;/code&gt; with the dead-letter queue name as the routing key, and it will forward the message directly to the proper queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv46cx1alae08819bk3ln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv46cx1alae08819bk3ln.png" alt="Dead-letter implementation with proper routing" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;
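&lt;p&gt;With a client library such as pika, the declaration would look roughly like this (a sketch: the queue names and the &lt;code&gt;.dead-letter&lt;/code&gt; suffix convention are illustrative, not necessarily ours):&lt;/p&gt;

```python
def dead_letter_args(queue_name):
    # Dead letters go through the default nameless exchange ("") with the
    # dead-letter queue's own name as the routing key, so each message is
    # forwarded directly to its matching dead-letter queue.
    return {
        "x-dead-letter-exchange": "",
        "x-dead-letter-routing-key": queue_name + ".dead-letter",
    }

# With pika (assuming an open channel), the declarations would be roughly:
# channel.queue_declare(queue="policy-events.dead-letter", durable=True)
# channel.queue_declare(queue="policy-events", durable=True,
#                       arguments=dead_letter_args("policy-events"))
print(dead_letter_args("policy-events"))
```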

&lt;p&gt;Unfortunately, doing all of this is not as easy as writing or talking about it 😒. To maintain the consistency and stability of the message broker, RabbitMQ does not allow changing the arguments of already-existing queues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment preparation
&lt;/h3&gt;

&lt;p&gt;So, RabbitMQ does not allow you to change queue arguments at runtime; the only possible way is to remove the queues and re-create them with updated arguments. But doing this naively is not possible in production, as we might lose messages in the window after the old queues are removed but before the new ones exist. To solve this problem, we need to introduce temporary queues to handle these messages while the old queues are removed. For a simple system, this is possible with 4 releases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create temporary queues, but do not handle messages from them for now.&lt;/li&gt;
&lt;li&gt;Switch to the new queues and remove the old queues. At this step, we already have properly configured queues, but the names are different. To return to the old names, we need to do the same steps again.&lt;/li&gt;
&lt;li&gt;Create new queues with old names, but with updated arguments. Do not consume messages from them for now.&lt;/li&gt;
&lt;li&gt;Switch to the new queues with updated arguments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4bc6nf9kzl2bhowi8smw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F4bc6nf9kzl2bhowi8smw.png" alt="4 steps to update queue arguments" width="780" height="1165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4 releases is quite a few, right? This requires not only a lot of small work, but also attention to make sure everything goes right every time. How can we reduce the number of releases? 🤔&lt;/p&gt;

&lt;p&gt;The simplest thing we can do is agree to rename the queues. This halves the number of releases, since we no longer need to rename them back. This was acceptable to us, and we even got more out of it as we improved the message handling process. But that's a completely different story 😉.&lt;/p&gt;

&lt;p&gt;What else can we do? Enabling consumers and message handling in the new queues right away reduces the release count to just one, but we have to accept the risk of duplicated messages while the new queues already exist and the old ones are still being processed.&lt;/p&gt;

&lt;p&gt;At this point, a teammate stopped me, because I had not taken our deployment process into account. We use a &lt;em&gt;blue-green deployment process&lt;/em&gt;: you run multiple instances of the same thing, and when you deploy, you take one down, upgrade it, bring it back up, then take the other one down to upgrade. This guarantees that something is always up. In our case, this means there is always a consumer running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcy9jz6yqz8mxozymwzwp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcy9jz6yqz8mxozymwzwp.png" alt="Blue-green deployment" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, messages can definitely be duplicated if we deploy during business hours. A deployment takes several minutes, which means that both old and new queues will be active for several minutes.&lt;/p&gt;

&lt;p&gt;Time to analyze and decide: is it safe to deploy the application at night (and do we really want to do that 🙂), when the message flow is low, or is it worth introducing a third-party service like Redis to check whether a message has already been processed by some consumer, old or new?&lt;/p&gt;
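&lt;p&gt;Such a check could be sketched like this (an in-memory set stands in for Redis &lt;code&gt;SET NX&lt;/code&gt;; an illustration, not our production code):&lt;/p&gt;

```python
class MessageDeduplicator:
    """Ensures a message consumed from both the old and the new queue is
    handled only once. Production would use Redis SET NX with a TTL
    instead of an in-process set, so that all consumers share the state."""

    def __init__(self):
        self._seen = set()

    def should_process(self, message_id):
        if message_id in self._seen:
            return False  # already handled by another consumer
        self._seen.add(message_id)
        return True

dedup = MessageDeduplicator()
print(dedup.should_process("msg-1"))  # first delivery: handle it
print(dedup.should_process("msg-1"))  # duplicate: skip it
```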

&lt;h3&gt;
  
  
  Release
&lt;/h3&gt;

&lt;p&gt;The easiest way to check the load on our message broker is to look at the number of log entries by day of the week and time. Since we are a narrowly focused company working only in Germany, we have a very low message load from late evening to early morning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvr5s2t4u9w27zzuoiy3o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvr5s2t4u9w27zzuoiy3o.png" alt="amqp logs count per datetime" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The load is not as high as it could be, so we can accept the risk that some messages may be duplicated; even if this happens, their number will be extremely small and we can resolve them manually. This saves the resources and time that would be required for two releases.&lt;/p&gt;

&lt;p&gt;After trying to release after midnight, we found out that we couldn't do it at night. Some of our third-party services are not available then, so the container simply cannot be booted. Well, it was worth trying once; now we know it for sure. Nighttime is for sleeping 😴.&lt;/p&gt;

&lt;p&gt;But we can still do it late in the evening or early in the morning. We just have to pay attention to the RabbitMQ load.&lt;/p&gt;

&lt;p&gt;Late in the evening:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsxtcj3ckdjp02boeqve2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsxtcj3ckdjp02boeqve2.png" alt="amqp logs count late in the evening" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Early in the morning:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv0svv72a7t6t8fqfipuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv0svv72a7t6t8fqfipuk.png" alt="amqp logs count early in the morning" width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We made the decision to press the release button early in the morning after a good night's sleep. This time everything went fine and there were no duplicates.&lt;/p&gt;

&lt;p&gt;It was not an easy problem to solve, but it was worth it. Along the way, our team and I learned a lot of interesting things about message consumption and deployment processes. Now the system is even better than before, with correct queue settings and decoupled message handling 😎.&lt;/p&gt;




&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;RabbitMQ does not allow renaming queues or changing queue arguments;&lt;/li&gt;
&lt;li&gt;to change something in a queue, you have to remove it and re-create it;&lt;/li&gt;
&lt;li&gt;to re-create it safely, you need to use temporary queues;&lt;/li&gt;
&lt;li&gt;a stable system usually runs multiple instances, so be aware of messages duplicated between old and new queues;&lt;/li&gt;
&lt;li&gt;if your business is tied to one timezone and is not highly loaded at night, it can be acceptable to have a few duplicated messages instead of over-engineering your consumers.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rabbitmq</category>
      <category>amqp</category>
      <category>production</category>
      <category>release</category>
    </item>
  </channel>
</rss>
