<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Asaduzzaman Pavel</title>
    <description>The latest articles on DEV Community by Asaduzzaman Pavel (@iampavel).</description>
    <link>https://dev.to/iampavel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3870504%2Ff171273b-3688-43dd-a1c2-1f5414ed8891.png</url>
      <title>DEV Community: Asaduzzaman Pavel</title>
      <link>https://dev.to/iampavel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iampavel"/>
    <language>en</language>
    <item>
      <title>The NixOS Tools That Actually Make a Difference</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Mon, 20 Apr 2026 06:37:55 +0000</pubDate>
      <link>https://dev.to/iampavel/the-nixos-tools-that-actually-make-a-difference-49ip</link>
      <guid>https://dev.to/iampavel/the-nixos-tools-that-actually-make-a-difference-49ip</guid>
      <description>&lt;h1&gt;
  
  
  The NixOS Tools That Actually Make a Difference
&lt;/h1&gt;

&lt;p&gt;I run NixOS on three machines and I have the same set of tools on all of them. Not because I followed a guide, but because I hit the same friction points on each one and eventually fixed them the same way. This is that list.&lt;/p&gt;




&lt;h2&gt;
  
  
  comma — Run Software Without Installing It
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/nix-community/comma" rel="noopener noreferrer"&gt;&lt;strong&gt;comma&lt;/strong&gt;&lt;/a&gt; is the one I show people first. You put a &lt;code&gt;,&lt;/code&gt; in front of any command, and it finds the right package in nixpkgs, pulls it in temporarily, and runs it. Nothing gets installed permanently. Nothing pollutes your profile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;, cowsay &lt;span class="s2"&gt;"it just works"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Under the hood it wraps &lt;code&gt;nix shell -c&lt;/code&gt; and &lt;code&gt;nix-index&lt;/code&gt; together. It also caches results: once you pick a derivation for a command, it remembers, and once the path is evaluated by Nix, subsequent calls are nearly instant. The benchmarks in the repo show cache level 2 is about 159x faster than cache level 1. That number is real.&lt;/p&gt;

&lt;p&gt;It requires a &lt;code&gt;nix-index&lt;/code&gt; database to know where files live in nixpkgs. The companion project &lt;a href="https://github.com/nix-community/nix-index-database" rel="noopener noreferrer"&gt;nix-index-database&lt;/a&gt; provides pre-generated databases on a regular schedule, so you can skip building one yourself.&lt;/p&gt;
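&lt;p&gt;As a sketch, wiring the pre-built database into a flake-based NixOS config looks roughly like this (option and module names follow the nix-index-database README; verify them against the current version before copying):&lt;br&gt;
&lt;/p&gt;

```nix
{
  inputs.nix-index-database.url = "github:nix-community/nix-index-database";
  inputs.nix-index-database.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { nixpkgs, nix-index-database, ... }: {
    nixosConfigurations.myHost = nixpkgs.lib.nixosSystem {
      modules = [
        nix-index-database.nixosModules.nix-index
        # also wraps comma so it reads the pre-built database
        { programs.nix-index-database.comma.enable = true; }
      ];
    };
  };
}
```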




&lt;h2&gt;
  
  
  nix-index — Find What Package Owns a File
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/nix-community/nix-index" rel="noopener noreferrer"&gt;&lt;strong&gt;nix-index&lt;/strong&gt;&lt;/a&gt; builds a local database by indexing the binary caches. Then you query it with &lt;code&gt;nix-locate&lt;/code&gt; to find which nixpkgs package contains a specific file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;nix-locate &lt;span class="s1"&gt;'bin/hello'&lt;/span&gt;
hello.out    29,488 x /nix/store/bdjyhh70...-hello-2.10/bin/hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The part most people actually use is the &lt;code&gt;command-not-found&lt;/code&gt; integration. When you type a command that is not installed, your shell tells you the exact package to install instead of just printing an error. Coming from a traditional distro, this single feature makes NixOS feel less hostile.&lt;/p&gt;
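&lt;p&gt;With Home Manager, turning this on is a few lines (a sketch; &lt;code&gt;programs.nix-index.enable&lt;/code&gt; installs the tool and hooks the &lt;code&gt;command-not-found&lt;/code&gt; handler into your shell):&lt;br&gt;
&lt;/p&gt;

```nix
# home.nix -- assumes you build the database yourself or pull in
# nix-index-database for a pre-built one
programs.nix-index = {
  enable = true;
  enableBashIntegration = true;  # on by default; shown for clarity
  enableZshIntegration = true;
};
```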

&lt;p&gt;Building the index takes around five minutes. Or skip it and use &lt;code&gt;nix-index-database&lt;/code&gt; for a pre-built one.&lt;/p&gt;




&lt;h2&gt;
  
  
  nurl — Generate Nix Fetcher Calls from URLs
&lt;/h2&gt;

&lt;p&gt;Writing a fetcher call by hand means figuring out the right fetcher, the right attributes, and then computing a hash by actually downloading the source. It is annoying every single time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nix-community/nurl" rel="noopener noreferrer"&gt;&lt;strong&gt;nurl&lt;/strong&gt;&lt;/a&gt; takes a URL and an optional revision and prints a ready-to-use fetcher call with the hash filled in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;nurl https://github.com/nix-community/patsh v0.2.0
fetchFromGitHub &lt;span class="o"&gt;{&lt;/span&gt;
  owner &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nix-community"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  repo &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"patsh"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  tag &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v0.2.0"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nb"&gt;hash&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sha256-7HXJspebluQeejKYmVA7sy/F3dtU1gc4eAbKiPexMMA="&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It supports &lt;code&gt;fetchFromGitHub&lt;/code&gt;, &lt;code&gt;fetchFromGitLab&lt;/code&gt;, &lt;code&gt;fetchFromGitea&lt;/code&gt;, &lt;code&gt;fetchFromSourcehut&lt;/code&gt;, &lt;code&gt;fetchCrate&lt;/code&gt;, &lt;code&gt;fetchPypi&lt;/code&gt;, &lt;code&gt;fetchhg&lt;/code&gt;, &lt;code&gt;fetchsvn&lt;/code&gt;, and more. It avoids slow fixed-output derivations when faster alternatives exist. And it is the foundation that &lt;code&gt;nix-init&lt;/code&gt; builds on.&lt;/p&gt;




&lt;h2&gt;
  
  
  nix-init — Generate Full Nix Packages from URLs
&lt;/h2&gt;

&lt;p&gt;If nurl handles the fetching, &lt;a href="https://github.com/nix-community/nix-init" rel="noopener noreferrer"&gt;&lt;strong&gt;nix-init&lt;/strong&gt;&lt;/a&gt; handles everything else. Point it at a URL and it generates a complete package expression with hash prefetching, dependency inference, and license detection. All through interactive prompts with fuzzy tab completion.&lt;/p&gt;

&lt;p&gt;Supported builders: &lt;code&gt;stdenv.mkDerivation&lt;/code&gt;, &lt;code&gt;buildRustPackage&lt;/code&gt;, &lt;code&gt;buildPythonApplication&lt;/code&gt;, &lt;code&gt;buildPythonPackage&lt;/code&gt;, and &lt;code&gt;buildGoModule&lt;/code&gt;. For Rust projects it infers the cargo hash automatically. For Python it picks up dependencies from PyPI metadata.&lt;/p&gt;

&lt;p&gt;The docs are honest about this: the output will probably need tweaks, and you should verify the license and description yourself. But it does the 80% that nobody wants to do manually, which is why packaging contributions happen at all.&lt;/p&gt;




&lt;h2&gt;
  
  
  nh — A Better CLI for NixOS, Home Manager, and Darwin
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/nix-community/nh" rel="noopener noreferrer"&gt;&lt;strong&gt;nh&lt;/strong&gt;&lt;/a&gt; reimplements &lt;code&gt;nixos-rebuild&lt;/code&gt;, &lt;code&gt;home-manager switch&lt;/code&gt;, and &lt;code&gt;darwin-rebuild&lt;/code&gt; with better output and a few genuinely useful additions.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Without nh&lt;/th&gt;
&lt;th&gt;With nh&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NixOS&lt;/td&gt;
&lt;td&gt;&lt;code&gt;nixos-rebuild switch --flake .#myHost&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;nh os switch . -H myHost&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Darwin&lt;/td&gt;
&lt;td&gt;&lt;code&gt;darwin-rebuild switch --flake .#myHost&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;nh darwin switch . -H myHost&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Home Manager&lt;/td&gt;
&lt;td&gt;&lt;code&gt;home-manager switch --flake .#myHome&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;nh home switch . -c myHome&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The shorter syntax is nice but not the point. When you run &lt;code&gt;nh os switch&lt;/code&gt;, you get a build tree from &lt;code&gt;nix-output-monitor&lt;/code&gt;, a diff of what is actually changing in your derivations, and a confirmation prompt before anything gets activated. You stop switching blind.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nh clean&lt;/code&gt; extends &lt;code&gt;nix-collect-garbage&lt;/code&gt; with gcroot cleanup, profile targeting, and time-based retention like &lt;code&gt;--keep-since 4d&lt;/code&gt;. A NixOS module in nixpkgs can wire this up as an automatic service.&lt;/p&gt;
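&lt;p&gt;The nixpkgs module looks roughly like this (a sketch; the path is hypothetical and the exact &lt;code&gt;programs.nh&lt;/code&gt; options depend on your nixpkgs revision):&lt;br&gt;
&lt;/p&gt;

```nix
# configuration.nix
programs.nh = {
  enable = true;
  flake = "/home/me/nixos-config";  # hypothetical path; lets you omit the flake argument
  clean = {
    enable = true;                  # runs nh clean on a schedule
    extraArgs = "--keep-since 4d --keep 3";
  };
};
```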




&lt;h2&gt;
  
  
  statix — Lint Your Nix Code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/oppiliappan/statix" rel="noopener noreferrer"&gt;&lt;strong&gt;statix&lt;/strong&gt;&lt;/a&gt; checks Nix files for antipatterns and fixes the ones it can.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;statix check tests/c.nix
&lt;span class="o"&gt;[&lt;/span&gt;W04] Warning: Assignment instead of inherit from
   ╭─[tests/c.nix:2:3]
   │
 2 │   mtl &lt;span class="o"&gt;=&lt;/span&gt; pkgs.haskellPackages.mtl&lt;span class="p"&gt;;&lt;/span&gt;
   ·   ─────────── This assignment is better written with inherit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;statix fix &lt;span class="nt"&gt;--dry-run&lt;/span&gt; tests/c.nix
-  mtl &lt;span class="o"&gt;=&lt;/span&gt; pkgs.haskellPackages.mtl&lt;span class="p"&gt;;&lt;/span&gt;
+  inherit &lt;span class="o"&gt;(&lt;/span&gt;pkgs.haskellPackages&lt;span class="o"&gt;)&lt;/span&gt; mtl&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It catches redundant pattern bindings, collapsible &lt;code&gt;let...in&lt;/code&gt; blocks, manual inherit patterns, unnecessary boolean comparisons, unquoted URIs, and more. The full list is available via &lt;code&gt;statix list&lt;/code&gt;. Specific lints can be disabled in a &lt;code&gt;statix.toml&lt;/code&gt; at the project root, and it respects &lt;code&gt;.gitignore&lt;/code&gt; by default.&lt;/p&gt;

&lt;p&gt;Plug it into your editor and forget about it.&lt;/p&gt;




&lt;h2&gt;
  
  
  nix-direnv — Fast, Persistent Dev Shells
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/nix-community/nix-direnv" rel="noopener noreferrer"&gt;&lt;strong&gt;nix-direnv&lt;/strong&gt;&lt;/a&gt; replaces direnv's built-in &lt;code&gt;use_nix&lt;/code&gt; and &lt;code&gt;use_flake&lt;/code&gt;. The built-in version re-evaluates on every shell entry, which gets slow. nix-direnv caches the result and only rebuilds when your &lt;code&gt;shell.nix&lt;/code&gt; or &lt;code&gt;flake.nix&lt;/code&gt; actually changes.&lt;/p&gt;

&lt;p&gt;The part I care about more: it pins the shell derivation in your Nix &lt;code&gt;gcroots&lt;/code&gt;. Garbage collection will not delete your build dependencies. I lost hours of cached downloads to a &lt;code&gt;nix-collect-garbage&lt;/code&gt; run before I started using this.&lt;/p&gt;

&lt;p&gt;Setup with Home Manager:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nix"&gt;&lt;code&gt;&lt;span class="nv"&gt;programs&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;direnv&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nv"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nv"&gt;nix-direnv&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add &lt;code&gt;use flake&lt;/code&gt; or &lt;code&gt;use nix&lt;/code&gt; to &lt;code&gt;.envrc&lt;/code&gt;, run &lt;code&gt;direnv allow&lt;/code&gt;, and your environment loads automatically on &lt;code&gt;cd&lt;/code&gt;. Works with classic &lt;code&gt;shell.nix&lt;/code&gt; and flake-based &lt;code&gt;devShells&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  flake-parts — The Module System for Your flake.nix
&lt;/h2&gt;

&lt;p&gt;Every non-trivial &lt;code&gt;flake.nix&lt;/code&gt; eventually becomes a mess. Packages for multiple systems, devShells, CI checks, a NixOS module. You end up with 300 lines of &lt;code&gt;forEachSystem&lt;/code&gt; repetition and nobody wants to touch the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hercules-ci/flake-parts" rel="noopener noreferrer"&gt;&lt;strong&gt;flake-parts&lt;/strong&gt;&lt;/a&gt; brings the NixOS module system to flakes. You split the configuration into focused files and flake-parts assembles them. It handles the &lt;code&gt;system&lt;/code&gt; boilerplate and makes it easier for others to pull your flake's outputs into theirs.&lt;/p&gt;

&lt;p&gt;Migration is gradual. Wrap your existing &lt;code&gt;outputs&lt;/code&gt; in &lt;code&gt;mkFlake&lt;/code&gt; and move things over piece by piece:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nix"&gt;&lt;code&gt;&lt;span class="nv"&gt;outputs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;inputs&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;flake-parts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt;
  &lt;span class="nv"&gt;flake-parts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;mkFlake&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="kn"&gt;inherit&lt;/span&gt; &lt;span class="nv"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;systems&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"x86_64-linux"&lt;/span&gt; &lt;span class="s2"&gt;"aarch64-linux"&lt;/span&gt; &lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="c"&gt;# perSystem and other modules go here&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Projects using it include &lt;code&gt;nixd&lt;/code&gt;, &lt;code&gt;argo-workflows&lt;/code&gt;, and &lt;code&gt;hyperswitch&lt;/code&gt;. It has become a standard approach for flakes that need to stay maintainable.&lt;/p&gt;
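&lt;p&gt;Inside &lt;code&gt;mkFlake&lt;/code&gt;, a typical &lt;code&gt;perSystem&lt;/code&gt; block then replaces the hand-rolled &lt;code&gt;forEachSystem&lt;/code&gt; loops (a sketch; the package choices are placeholders):&lt;br&gt;
&lt;/p&gt;

```nix
# goes inside the mkFlake attrset, alongside `systems`
perSystem = { pkgs, ... }: {
  # pkgs is already instantiated once per entry in `systems`
  packages.default = pkgs.hello;
  devShells.default = pkgs.mkShell {
    packages = [ pkgs.statix pkgs.nurl ];
  };
};
```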




&lt;h2&gt;
  
  
  hjem — Home File Management Without the Overhead
&lt;/h2&gt;

&lt;p&gt;If you have ever wanted Home Manager's &lt;code&gt;home.file&lt;/code&gt; without the rest of Home Manager, &lt;a href="https://github.com/feel-co/hjem" rel="noopener noreferrer"&gt;&lt;strong&gt;hjem&lt;/strong&gt;&lt;/a&gt; is worth knowing about.&lt;/p&gt;

&lt;p&gt;Hjem ("home" in Danish) is a NixOS module that manages files in &lt;code&gt;$HOME&lt;/code&gt;. That is mostly it. No program modules, no environment activation scripts, no opinion about your shell. You declare files and it links them into place using &lt;a href="https://github.com/feel-co/smfh" rel="noopener noreferrer"&gt;smfh&lt;/a&gt;, an atomic file linker backed by systemd services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nix"&gt;&lt;code&gt;&lt;span class="nv"&gt;hjem&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nv"&gt;users&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;k1ng&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;enable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="s2"&gt;".foo"&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"bar"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="s2"&gt;".config/something.json"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;generator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;lib&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;generators&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;toJSON&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
        &lt;span class="nv"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nv"&gt;foo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;bar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hello"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is multi-user by design. Each key under &lt;code&gt;hjem.users&lt;/code&gt; maps to a user in &lt;code&gt;users.users&lt;/code&gt;, and hjem refuses to manage homes for users that do not exist.&lt;/p&gt;

&lt;p&gt;The honest comparison to Home Manager: hjem does far less. It does not abstract away program configuration. There is no &lt;code&gt;programs.git.enable&lt;/code&gt; equivalent. The project is explicit that application-specific modules are out of scope, though a separate project called &lt;a href="https://github.com/snugnug/hjem-rum" rel="noopener noreferrer"&gt;hjem-rum&lt;/a&gt; is building that layer on top.&lt;/p&gt;

&lt;p&gt;I think that constraint is actually the point. Home Manager is large, occasionally opinionated in ways you did not ask for, and has a reputation for activation scripts that sometimes conflict with NixOS's own. Hjem is small enough that you can read the entire codebase in an afternoon. If all you want is declarative dotfile management wired into your NixOS config without pulling in a second module framework, hjem is the cleaner answer. It is still young at around 330 stars, so I would not call it battle-tested, but it is actively maintained and the scope is intentionally narrow enough that "mostly feature-complete" is a realistic claim.&lt;/p&gt;




&lt;h2&gt;
  
  
  NixOS Options Search — Keep This Tab Open
&lt;/h2&gt;

&lt;p&gt;Not something you install: &lt;a href="https://search.nixos.org/options" rel="noopener noreferrer"&gt;&lt;strong&gt;search.nixos.org/options&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It searches the entire set of NixOS module options. Type, default value, which module declares it. When you cannot remember if it is &lt;code&gt;services.nginx.enable&lt;/code&gt; or &lt;code&gt;programs.nginx.enable&lt;/code&gt;, this is where you go. It is faster than reading source, and honestly faster than grepping nixpkgs.&lt;/p&gt;

&lt;p&gt;It also covers Home Manager options and has a package search tab. I have it open on every machine I work on.&lt;/p&gt;




&lt;p&gt;None of these are required. NixOS runs fine without them. But each one fixes something that is genuinely annoying without it, and after a while you stop noticing the friction because you stopped having it.&lt;/p&gt;

</description>
      <category>nixos</category>
      <category>nix</category>
      <category>linux</category>
      <category>devtools</category>
    </item>
    <item>
      <title>TimescaleDB Continuous Aggregates: What I Got Wrong (and How to Fix It)</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Sat, 18 Apr 2026 03:50:55 +0000</pubDate>
      <link>https://dev.to/iampavel/timescaledb-continuous-aggregates-what-i-got-wrong-and-how-to-fix-it-43nh</link>
      <guid>https://dev.to/iampavel/timescaledb-continuous-aggregates-what-i-got-wrong-and-how-to-fix-it-43nh</guid>
      <description>&lt;p&gt;I thought continuous aggregates would solve my crypto data performance problems. They did, for about three weeks. Then the wheels came off, and I spent two months untangling the mess I'd made.&lt;/p&gt;

&lt;p&gt;If you're using TimescaleDB as your time-series database and thinking about continuous aggregates, read this first. I wish someone had warned me.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;I was storing OHLCV candlestick data (Open, High, Low, Close, Volume) for crypto pairs in TimescaleDB, a time-series database built on PostgreSQL, with millions of rows per day and queries aggregating across hours and days. Query performance on the raw hypertable was crawling, sometimes 30 seconds for a week's worth of aggregated data.&lt;/p&gt;

&lt;p&gt;Continuous aggregates seemed like the obvious fix. In TimescaleDB, a continuous aggregate is a PostgreSQL materialized view that refreshes itself incrementally in the background: pre-compute the aggregations, query the materialized view instead of raw data, and let TimescaleDB handle the rest. Clean, simple, automatic. It was none of those things.&lt;/p&gt;

&lt;h2&gt;
  
  
  The First Mistake: Wrong Time Bucket Granularity
&lt;/h2&gt;

&lt;p&gt;I created an hourly aggregate because I figured "hourly should be enough for most queries." It wasn't. Users wanted 4-hour candles, 6-hour candles, sometimes 2-hour. The hourly buckets were useless for them.&lt;/p&gt;

&lt;p&gt;Here's what I didn't understand upfront: you can't query a continuous aggregate at a finer granularity than what you defined. An hourly aggregate gives you hourly data. If you want 15-minute buckets, you're hitting the raw hypertable anyway, which defeats the whole point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- This works fine -- same granularity as the aggregate&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;time_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'1 hour'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;close&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;candlesticks_1h&lt;/span&gt; &lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- This has to go to the raw table -- 15-min is finer than what we materialized&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;time_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'15 minutes'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;close&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;candlesticks_raw&lt;/span&gt; &lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix was creating multiple aggregates at different granularities. And that led straight to my next problem.&lt;/p&gt;

&lt;p&gt;Worth noting: since TimescaleDB v2.9, you can stack continuous aggregates on top of each other. So you could create a 15-minute aggregate, then build an hourly aggregate on top of it, then a daily on top of that. I didn't know this at the time and ended up doing things the hard way.&lt;/p&gt;
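&lt;p&gt;A stacked version against my schema would look something like this (a sketch; requires TimescaleDB 2.9+, and only columns that roll up cleanly, like high, low, and volume, are shown):&lt;br&gt;
&lt;/p&gt;

```sql
-- hourly candles built from the 15-minute aggregate instead of raw data
CREATE MATERIALIZED VIEW candlesticks_1h_stacked
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 hour', bucket) AS bucket,
    pair,
    MAX(high)   AS high,
    MIN(low)    AS low,
    SUM(volume) AS volume
FROM candlesticks_15m
GROUP BY 1, pair;
```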

&lt;h2&gt;
  
  
  The Refresh Policy Nightmare
&lt;/h2&gt;

&lt;p&gt;With three aggregate views (15-minute, hourly, daily), the refresh policies started stepping on each other. Data would appear in one view but not another. Sometimes queries returned different results depending on which view the planner hit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- The configuration that caused headaches&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;add_continuous_aggregate_policy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'candlesticks_15m'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;start_offset&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'30 minutes'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;end_offset&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'5 minutes'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;schedule_interval&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'5 minutes'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;add_continuous_aggregate_policy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'candlesticks_1h'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;start_offset&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'2 hours'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;end_offset&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'30 minutes'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;schedule_interval&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;INTERVAL&lt;/span&gt; &lt;span class="s1"&gt;'15 minutes'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;start_offset&lt;/code&gt; and &lt;code&gt;end_offset&lt;/code&gt; parameters define the continuous aggregate refresh policy window, specifically what range of data gets recomputed each time the job runs. Get these wrong and you either have stale data (offsets too large) or a gap where queries return nothing (the refresh job simply hasn't caught up yet).&lt;/p&gt;

&lt;p&gt;I spent three days wondering why the last hour of data was missing. The answer: &lt;code&gt;end_offset&lt;/code&gt; was 30 minutes but the refresh job ran every 15. The window had materialized data up to 30 minutes ago, and the real-time aggregation that would fill that gap was disabled in my config. Once I understood the interplay between the offsets and the schedule interval, it clicked, but the documentation doesn't make this obvious.&lt;/p&gt;
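&lt;p&gt;The fix for that gap is one statement. Real-time aggregation unions the materialized rows with a live query over the not-yet-materialized tail:&lt;br&gt;
&lt;/p&gt;

```sql
-- serve materialized data plus freshly computed rows for the window
-- the refresh job hasn't covered yet
ALTER MATERIALIZED VIEW candlesticks_1h
    SET (timescaledb.materialized_only = false);
```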

&lt;h2&gt;
  
  
  Chunk Explosion
&lt;/h2&gt;

&lt;p&gt;This one actually killed performance. Each continuous aggregate creates its own internal materialization hypertable, which means its own set of chunks. With raw data, hourly aggregates, and daily aggregates all running, I had three independent chunk hierarchies to manage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Checking disk usage per table (returns one row per chunk)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;chunks_detailed_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'candlesticks_raw'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;      &lt;span class="c1"&gt;-- 847 rows returned&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;chunks_detailed_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'candlesticks_1h'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;        &lt;span class="c1"&gt;-- 892 rows returned&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;chunks_detailed_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'candlesticks_daily'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;    &lt;span class="c1"&gt;-- 412 rows returned&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: &lt;code&gt;chunks_detailed_size()&lt;/code&gt; returns one row per chunk with size information, so the row count above is effectively your chunk count. When I saw nearly 900 chunks on a single aggregate view, I knew something had gone sideways.&lt;/p&gt;

&lt;p&gt;Query performance started degrading rather than improving. The database was spending more time on chunk exclusion (figuring out which chunks were relevant to a query) than actually reading data. Too many small chunks and the metadata overhead overwhelms the actual work.&lt;/p&gt;

&lt;p&gt;The fix was TimescaleDB chunk compression: compressing older chunks with a compression policy and adjusting the chunk time interval so chunks weren't being created too frequently. This isn't in the quick-start guide, but it absolutely should be.&lt;/p&gt;
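&lt;p&gt;Concretely, that fix was a handful of statements (a sketch; tune the intervals and the &lt;code&gt;segmentby&lt;/code&gt; column to your own query patterns):&lt;br&gt;
&lt;/p&gt;

```sql
-- wider chunks for data going forward
SELECT set_chunk_time_interval('candlesticks_raw', INTERVAL '1 day');

-- compress chunks older than a week, segmented by trading pair
ALTER TABLE candlesticks_raw SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'pair'
);
SELECT add_compression_policy('candlesticks_raw', INTERVAL '7 days');
```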

&lt;h2&gt;
  
  
  Query Flexibility
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody talks about much: continuous aggregates are rigid by design. The GROUP BY you define at creation time is the GROUP BY you're stuck with. Adding a new grouping dimension later means dropping the materialized view and rebuilding from scratch, including re-running the backfill.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Original aggregate&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;MATERIALIZED&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt; &lt;span class="n"&gt;candlesticks_1h&lt;/span&gt;
&lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timescaledb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;continuous&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="n"&gt;time_bucket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'1 hour'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;pair&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;close&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;close&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;MAX&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;high&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;high&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;MIN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;low&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;low&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;candlesticks_raw&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pair&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few weeks after shipping this, a stakeholder asked: "Can we group by exchange too?" The answer was either rebuild everything and backfill, or query raw data. Both defeated the purpose.&lt;/p&gt;

&lt;p&gt;There's no &lt;code&gt;ALTER MATERIALIZED VIEW&lt;/code&gt; that lets you change the query definition or add columns. You can tweak settings like &lt;code&gt;materialized_only&lt;/code&gt;, but the underlying query is immutable.&lt;/p&gt;

&lt;p&gt;That said, the hierarchical aggregate approach I mentioned earlier does offer some flexibility here. If I'd planned things out with stacked aggregates (raw → 15m → 1h → daily), I could have added exchange grouping at one layer without rebuilding everything. Hindsight is 20/20.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Would Do Differently
&lt;/h2&gt;

&lt;p&gt;I'm not saying continuous aggregates are the wrong tool for crypto data. When the conditions are right, they work well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your query patterns are fixed and known upfront&lt;/li&gt;
&lt;li&gt;You don't need to add grouping dimensions later&lt;/li&gt;
&lt;li&gt;Your refresh windows are carefully tested before going live&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But for crypto data specifically, where new trading pairs get added constantly and query requirements change, I'd now approach it differently:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start with the raw hypertable and good indexes before jumping to aggregates. This sounds boring, but TimescaleDB's chunk exclusion is fast. You might not need aggregates as much as you think.&lt;/li&gt;
&lt;li&gt;Use hierarchical continuous aggregates (available since TimescaleDB v2.9) to build a chain from fine to coarse granularity: raw → 15m → 1h → daily. This reuses computation at each level, improves query performance at every tier, and leaves you more room to adapt if requirements change.&lt;/li&gt;
&lt;li&gt;Plan for chunk compression from day one. Compress older chunks with a TimescaleDB compression policy; set it up early and don't let chunks pile up unmanaged.&lt;/li&gt;
&lt;li&gt;Manually call &lt;code&gt;refresh_continuous_aggregate&lt;/code&gt; to validate your aggregate design before setting up automated policies. Debugging a broken policy after the fact is painful.&lt;/li&gt;
&lt;li&gt;Limit the total number of aggregates. More views means more chunks, more refresh jobs, and more things to get out of sync.&lt;/li&gt;
&lt;/ol&gt;
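
&lt;p&gt;The hierarchical approach in sketch form: a daily rollup defined on top of the hourly aggregate from the earlier example (requires TimescaleDB v2.9+; I've only rolled up &lt;code&gt;high&lt;/code&gt; and &lt;code&gt;low&lt;/code&gt; here, since averaging an average for &lt;code&gt;close&lt;/code&gt; would be wrong):&lt;/p&gt;

```sql
-- A continuous aggregate defined on another continuous aggregate
CREATE MATERIALIZED VIEW candlesticks_1d
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', bucket) AS day,
    pair,
    MAX(high) AS high,
    MIN(low)  AS low
FROM candlesticks_1h
GROUP BY day, pair;
```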

&lt;p&gt;The documentation frames continuous aggregates as a "set it and forget it" solution. They're not. They're a commitment to a specific query shape, and that shape is hard to change later. Go in with your eyes open.&lt;/p&gt;

</description>
      <category>timescaledb</category>
      <category>postgres</category>
      <category>timeseries</category>
      <category>database</category>
    </item>
    <item>
      <title>When Exchanges Lie: Outlier Detection Across 150+ Crypto Data Sources</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Wed, 15 Apr 2026 01:41:00 +0000</pubDate>
      <link>https://dev.to/iampavel/when-exchanges-lie-outlier-detection-across-150-crypto-data-sources-5493</link>
      <guid>https://dev.to/iampavel/when-exchanges-lie-outlier-detection-across-150-crypto-data-sources-5493</guid>
      <description>&lt;p&gt;A few years ago I was working on a global market data platform. The job was straightforward on paper: integrate as many cryptocurrency exchanges as possible, aggregate their price and volume data, serve it reliably. We got to around 150 exchanges. That's when things got interesting.&lt;/p&gt;

&lt;p&gt;I expected noise. APIs go down, timestamps drift, small exchanges have thin order books. That's just how it is. What I did not expect was how many exchanges were not just noisy, but actively wrong in ways that were hard to call accidental.&lt;/p&gt;

&lt;p&gt;Here's what I ran into.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Web Traffic Problem
&lt;/h2&gt;

&lt;p&gt;The first signal that something was off had nothing to do with price data. It was &lt;a href="https://en.wikipedia.org/wiki/Alexa_Internet" rel="noopener noreferrer"&gt;Alexa rankings&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're not familiar, Alexa was a web traffic ranking service. It ranked websites by how much traffic they received. Imperfect, but external, independent, and hard to game without actual users. We started cross-referencing exchange-reported volume against their Alexa rank as a basic sanity check.&lt;/p&gt;

&lt;p&gt;The pattern was immediate. Exchanges claiming hundreds of millions in daily volume sometimes had Alexa ranks in the millions, putting them somewhere between a niche hobbyist blog and a small regional news site. Real exchanges with real users have real web traffic. Fake volume does not come with fake visitors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I handled it:&lt;/strong&gt; Computed a volume-to-traffic ratio for every exchange and compared it against the median ratio across all exchanges. Anything beyond a few standard deviations from that median got a reduced trust weight. Not a hard ban, just enough drag that a lying exchange could not move the aggregate on its own.&lt;/p&gt;

&lt;p&gt;It became one of our most reliable filters. Not because it was precise, but because it was orthogonal. An exchange can manipulate its own API. It cannot manufacture a credible web presence overnight.&lt;/p&gt;

&lt;p&gt;Alexa shut down in May 2022. If I were building this today I'd use &lt;a href="https://www.similarweb.com/" rel="noopener noreferrer"&gt;SimilarWeb&lt;/a&gt; for the same purpose. Same principle: use external signals the exchange doesn't control.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stale Data Served Fresh
&lt;/h2&gt;

&lt;p&gt;Some exchanges would return data with a current timestamp but the underlying numbers hadn't changed in minutes. Sometimes longer.&lt;/p&gt;

&lt;p&gt;This one could be a bug. Caching misconfiguration, a stuck worker, a failed background job. But it happened too consistently on too many exchanges to feel entirely accidental. At the very least, it was a bug they had no interest in fixing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I handled it:&lt;/strong&gt; Each time I polled an exchange, I hashed the price and volume payload. If the hash matched the previous several responses but the timestamp had changed, I marked it stale. After enough consecutive stale responses, I pulled that exchange out of the live feed entirely until fresh data came through. Simple, cheap, and it worked. Real markets move. If your data is not moving, it's not real.&lt;/p&gt;

&lt;h2&gt;
  
  
  Price Hallucination
&lt;/h2&gt;

&lt;p&gt;This is different from a price slightly off due to liquidity differences. Every exchange has its own order book, so minor variance is expected and fine.&lt;/p&gt;

&lt;p&gt;Price hallucination is when an exchange quotes a price that has no relationship to what's happening anywhere else. Not slightly off. Structurally wrong. BTC trading at $30,000 when every other exchange has it at $43,000, and it's been that way for hours.&lt;/p&gt;

&lt;p&gt;These prices never corrected through arbitrage because there was no real liquidity behind them. Nobody was actually trading at those prices. The exchange was just publishing a number.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I handled it:&lt;/strong&gt; Median absolute deviation across all exchanges for the same trading pair. &lt;a href="https://en.wikipedia.org/wiki/Median_absolute_deviation" rel="noopener noreferrer"&gt;MAD&lt;/a&gt; is the right tool here because it is resistant to manipulation. To move the median you need to control the majority of your data sources, which is hard when you are pulling from 150 places. Anything more than three times the MAD from the median got excluded from that price calculation entirely. The threshold sounds arbitrary but in practice it was conservative enough to catch real hallucinations while leaving legitimate variance alone.&lt;/p&gt;
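
&lt;p&gt;The core of the filter fits in a few lines. A Go sketch with made-up numbers (the real system weighted by exchange trust as well):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// median of a copy of xs (xs is left untouched).
func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// madFilter drops any price further than k median-absolute-deviations
// from the cross-exchange median.
func madFilter(prices []float64, k float64) []float64 {
	med := median(prices)
	devs := make([]float64, len(prices))
	for i, p := range prices {
		devs[i] = math.Abs(p - med)
	}
	mad := median(devs)
	var kept []float64
	for _, p := range prices {
		if math.Abs(p-med) > k*mad {
			continue // structurally wrong quote; exclude it
		}
		kept = append(kept, p)
	}
	return kept
}

func main() {
	// Four honest BTC quotes and one hallucinated one.
	prices := []float64{43100, 43080, 43150, 43120, 30000}
	fmt.Println(madFilter(prices, 3)) // [43100 43080 43150 43120]
}
```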

&lt;h2&gt;
  
  
  Ghost Liquidity
&lt;/h2&gt;

&lt;p&gt;Order books that look healthy until you try to use them.&lt;/p&gt;

&lt;p&gt;On paper: $2M sitting on the bid, $2M on the ask, tight spread. Looks like a functioning market. In practice: the moment a real order touches that book, the liquidity vanishes. Bids and asks that were sitting there seconds ago are gone.&lt;/p&gt;

&lt;p&gt;This one is particularly cynical because it's designed specifically to fool aggregators and ranking tools that look at order book depth as a quality signal. The order book is theater. It exists to look good in screenshots and API responses, not to fill trades.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I handled it:&lt;/strong&gt; Real order books fluctuate constantly. Prices shift, sizes change, levels appear and disappear. If an exchange's top order book levels had not changed in a meaningful window during active market hours, that was a flag. Combined this with a spread-to-volume ratio check. Deep books with suspiciously low spreads on low-volume pairs do not add up. Either signal alone could be a false positive. Together they are reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Crawler-Aware APIs
&lt;/h2&gt;

&lt;p&gt;This was the one that genuinely impressed me, in a grim sort of way.&lt;/p&gt;

&lt;p&gt;Some exchanges were serving different data depending on who was asking. Known data aggregator IP ranges got clean, reasonable-looking data. Other IPs got inflated numbers. The exchange had modeled the fact that aggregators exist, figured out how to identify them, and was gaming the system accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I handled it:&lt;/strong&gt; Rotating residential proxies for verification polling. Periodically I'd re-fetch the same data from a different IP and compare the responses. Persistent divergence above a threshold meant the exchange was blacklisted from aggregation entirely. Not downweighted. Gone. There's no good-faith explanation for an exchange that shows different prices to different clients. That's not a bug you fix, it's a policy you enforce.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Actually Is
&lt;/h2&gt;

&lt;p&gt;Outlier detection across many exchanges is not primarily a statistics problem. It is a trust problem. The standard approaches (Z-scores, IQR, median absolute deviation) are useful, but they assume your outliers are noise. Some of your outliers are lies, and lies require a different mindset.&lt;/p&gt;

&lt;p&gt;Here's what actually worked:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use external signals.&lt;/strong&gt; Web traffic, app store rankings, social presence. Anything the exchange doesn't control and can't fake cheaply. &lt;a href="https://www.similarweb.com/" rel="noopener noreferrer"&gt;SimilarWeb&lt;/a&gt; is the practical option today.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Weight by reputation over time.&lt;/strong&gt; Exchanges that flag consistently get less weight. Build a scoring layer that updates as you collect data. Reputation should be earned and lost dynamically, not set once at integration time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consensus over mean.&lt;/strong&gt; If 140 exchanges broadly agree and 10 do not, the 10 are your problem. The median is much harder to manipulate than the mean.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Watch for perfection.&lt;/strong&gt; Real trading data is messy. If an exchange reports exactly $10,000,000 in volume, or produces price charts that look suspiciously smooth, that's a red flag. Markets do not round like that.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Treat divergence as intent.&lt;/strong&gt; Noise is random. These patterns were consistent, directional, and self-serving. Once you start seeing that, you stop debugging and start filtering.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Most of this was figured out the hard way. You integrate a hundred exchanges optimistically, you start noticing things that don't add up, you dig in, and eventually you build a picture of what the data ecosystem actually looks like versus what it claims to be.&lt;/p&gt;

&lt;p&gt;You can't unsee it once you've seen it.&lt;/p&gt;

</description>
      <category>crypto</category>
      <category>data</category>
      <category>outlierdetection</category>
      <category>exchanges</category>
    </item>
    <item>
      <title>Twenty Years Since My First PHP Script</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Tue, 14 Apr 2026 00:02:11 +0000</pubDate>
      <link>https://dev.to/iampavel/twenty-years-since-my-first-php-script-c74</link>
      <guid>https://dev.to/iampavel/twenty-years-since-my-first-php-script-c74</guid>
      <description>&lt;p&gt;I wrote my first PHP script 20 years ago. It was a forum: the kind where users could register, post threads, reply to each other. Looking back at the code now is genuinely uncomfortable. SQL queries sitting inside HTML. No password hashing. Variables called &lt;code&gt;$x&lt;/code&gt; and &lt;code&gt;$temp&lt;/code&gt; and, at one point I am not proud of, &lt;code&gt;$temp2&lt;/code&gt;. But it worked, and 16-year-old me thought that was basically wizardry.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Got Wrong
&lt;/h2&gt;

&lt;p&gt;Pretty much everything, if I am being honest.&lt;/p&gt;

&lt;p&gt;I concatenated user input straight into SQL queries because I had no idea what SQL injection was. Nobody had told me, and I had not thought to ask. Passwords went in as plain text because hashing seemed like something "real" developers did, not me. PHP and HTML lived in the same file because the idea of separating them had never occurred to me. Why would you?&lt;/p&gt;

&lt;p&gt;Here is roughly what a typical page looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;
&lt;span class="nb"&gt;session_start&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nv"&gt;$user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$_SESSION&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'user'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="cp"&gt;?&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;
&lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$_GET&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="nv"&gt;$result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;mysql_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"SELECT * FROM posts WHERE id = &lt;/span&gt;&lt;span class="nv"&gt;$id&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$row&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;mysql_fetch_array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$result&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;h2&amp;gt;"&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="nv"&gt;$row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'title'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;/h2&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;p&amp;gt;"&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="nv"&gt;$row&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'content'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;/p&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="cp"&gt;?&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No escaping anywhere. No validation. Just raw &lt;code&gt;$_GET&lt;/code&gt; values dumped straight into a query. The fact it ran at all was luck, not skill.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Did Not Know I Did Not Know
&lt;/h2&gt;

&lt;p&gt;Security was not even a concept in my head. I knew passwords should probably be hidden from other users, but I did not think about what "hidden" actually meant in practice. XSS, CSRF, session fixation: I had never heard any of those terms. The forum got hacked twice in its first six months. Both times I had no real understanding of how it had happened. I changed some things, crossed my fingers, and kept going.&lt;/p&gt;

&lt;p&gt;Version control did not exist in my world either. I edited files directly on the server over FTP. If something broke, the fix was to stare at the code and try to remember what I had touched. I once overwrote the entire user authentication system with an older version and did not notice for three days. The backup was my memory, which, it turns out, is not a reliable backup strategy.&lt;/p&gt;

&lt;p&gt;As for learning resources: PHP.net, a few forums including PHPBuilder, and other people's source code. YouTube had technically launched in 2005, but it barely mattered: internet connections were so slow that streaming video was a joke. A two-minute clip could take the better part of an hour to buffer, assuming it loaded at all. Most people I knew were still on dial-up or early ADSL that struggled to hold a connection.&lt;/p&gt;

&lt;p&gt;Stack Overflow did not exist yet (it launched in 2008), and neither did GitHub. Subversion existed, as did CVS, but nobody in my circle was using them for personal projects: version control felt like something for big teams at real companies. You figured things out by reading whatever messy code you could find and trying stuff until it stopped throwing errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Function Names Still Annoy Me
&lt;/h2&gt;

&lt;p&gt;PHP's function naming has always been a bit of a disaster, and it was worse in 2006. &lt;code&gt;mysql_fetch_array&lt;/code&gt; versus &lt;code&gt;mysql_fetch_assoc&lt;/code&gt;: similar names, different behavior, and I mixed them up constantly. &lt;code&gt;htmlspecialchars&lt;/code&gt; versus &lt;code&gt;htmlentities&lt;/code&gt;. Why are both of those necessary? The one that really got me was &lt;code&gt;strpos&lt;/code&gt;. It returns &lt;code&gt;false&lt;/code&gt; if the substring is not found, but returns &lt;code&gt;0&lt;/code&gt; if it is found at position zero. So &lt;code&gt;if (strpos($str, 'foo'))&lt;/code&gt; silently fails when the match is right at the start. You have to use &lt;code&gt;=== false&lt;/code&gt; to be safe. I spent an embarrassing number of hours on that specific bug.&lt;/p&gt;

&lt;p&gt;The language did not help beginners understand this stuff. It still has rough edges, honestly, but at least now there are tools like PHPStan and Psalm that catch a lot of it before you ship.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Would Tell 2006 Me
&lt;/h2&gt;

&lt;p&gt;Do not panic about making it perfect. Your code is going to be rough regardless, and that is fine. Write it, ship it, break it, fix it. That loop is where the actual learning happens.&lt;/p&gt;

&lt;p&gt;Do panic about SQL injection, though. Not later. Now. Use prepared statements. They look scarier than they are. Learn them before you get hacked a third time.&lt;/p&gt;

&lt;p&gt;Stop naming variables &lt;code&gt;$temp&lt;/code&gt;. I know it feels fine in the moment, but three weeks later you will have four of them and no idea what any of them hold. Two extra seconds on a real name (&lt;code&gt;$userId&lt;/code&gt;, &lt;code&gt;$postContent&lt;/code&gt;) saves a lot of confusion later.&lt;/p&gt;

&lt;p&gt;FTP is not a backup. Learn version control, something like Git or SVN. I know it seems like overkill: a whole version control system, just for you, just for a hobby forum? But it is not overkill. These tools are free, they run fine on your local machine, and the basic workflow is not that complicated once you get past the initial setup. You will understand exactly why it matters the first time you accidentally overwrite something important and have nothing to go back to. That moment will come. It is better to be ready for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where It Ended Up
&lt;/h2&gt;

&lt;p&gt;That forum is long gone: I took it down sometime around 2008, when the server bill stopped feeling worth it for 40 active users. But I think about it sometimes. Every bad decision in that codebase taught me something. The hack that I could not explain pushed me to finally understand how injection attacks actually worked. The lost authentication system made version control feel urgent rather than optional.&lt;/p&gt;

&lt;p&gt;Twenty years on, I am still writing code. The stack has changed completely. But the basic rhythm has not: build something, watch it break in ways you did not expect, understand why, do better next time. The forum was a mess. It was also probably the most educational thing I have ever built.&lt;/p&gt;

</description>
      <category>php</category>
      <category>webdev</category>
      <category>programming</category>
      <category>career</category>
    </item>
    <item>
      <title>Atomic Operations in Go</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Sun, 12 Apr 2026 18:25:10 +0000</pubDate>
      <link>https://dev.to/iampavel/atomic-operations-in-go-2e0g</link>
      <guid>https://dev.to/iampavel/atomic-operations-in-go-2e0g</guid>
      <description>&lt;p&gt;I used to think &lt;code&gt;sync.Mutex&lt;/code&gt; was the only way to make Go concurrency safe, until I had to trace a performance cliff in a high-throughput websocket server. Turns out, parking and waking goroutines has a cost. Atomic operations in Go let you bypass the scheduler entirely and lean directly on the CPU for lock-free programming.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hardware behind atomic operations in Go
&lt;/h2&gt;

&lt;p&gt;You already know &lt;code&gt;counter++&lt;/code&gt; is actually a read, an add, and a write. If two goroutines do it at once, you lose an increment. Data race.&lt;/p&gt;

&lt;p&gt;Atomics fix this using CPU-level instructions (like &lt;code&gt;LOCK XADD&lt;/code&gt; on x86). The CPU locks the memory bus for that specific address just long enough to do the read-modify-write. No scheduler, no kernel context switch. It just happens.&lt;/p&gt;

&lt;p&gt;One of the most frustrating things about the &lt;code&gt;sync/atomic&lt;/code&gt; package pre-Go 1.19 was the lack of type safety. You'd pass raw pointers to &lt;code&gt;int64&lt;/code&gt; values and hope you got the type right, and on 32-bit architectures a 64-bit value that wasn't 8-byte aligned would panic at runtime. The typed API fixed this, mostly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The modern API
&lt;/h2&gt;

&lt;p&gt;Go 1.19 gave us typed wrappers. I almost never use the raw &lt;code&gt;atomic.AddInt64&lt;/code&gt; functions anymore. The typed API is just cleaner and prevents stupid pointer mistakes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;counter&lt;/span&gt; &lt;span class="n"&gt;atomic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Int64&lt;/span&gt;

&lt;span class="n"&gt;counter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;counter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;counter&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c"&gt;// 10&lt;/span&gt;

&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;ptr&lt;/span&gt; &lt;span class="n"&gt;atomic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Pointer&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;ptr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Timeout&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full set of wrappers available now:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;atomic.Int32&lt;/code&gt; / &lt;code&gt;atomic.Int64&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Signed integer counters&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;atomic.Uint32&lt;/code&gt; / &lt;code&gt;atomic.Uint64&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Unsigned integer counters&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;atomic.Uintptr&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Pointer-sized unsigned integer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;atomic.Bool&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Boolean flag&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;atomic.Pointer[T]&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Generic pointer to any type&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;atomic.Value&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Any type, as long as the concrete type is consistent&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Which brings me to atomic.Value
&lt;/h3&gt;

&lt;p&gt;I assumed &lt;code&gt;atomic.Value&lt;/code&gt; was just a generic bucket, but it has a nasty trap: the concrete type you store the first time locks it in forever.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="n"&gt;atomic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;
&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"a"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c"&gt;// This panics!&lt;/span&gt;
&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}{&lt;/span&gt;&lt;span class="s"&gt;"bad"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The documentation mentions this, but it's easy to miss until your app crashes in production because someone tried to store a &lt;code&gt;nil&lt;/code&gt; error interface when it previously held a concrete error type. I've been bitten by this exact thing. If you know the type upfront, &lt;code&gt;atomic.Pointer[T]&lt;/code&gt; is infinitely better.&lt;/p&gt;

&lt;h2&gt;
  
  
  A practical look at CAS loops
&lt;/h2&gt;

&lt;p&gt;Compare-and-Swap (CAS) is the weirdest pattern to get used to if you're coming from mutexes. You don't lock; you try to update, and if someone else beat you to it, you loop and try again.&lt;/p&gt;

&lt;p&gt;Here's how I actually use it for rate limiting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;RateLimiter&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;count&lt;/span&gt;  &lt;span class="n"&gt;atomic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Int64&lt;/span&gt;
    &lt;span class="n"&gt;maxRPS&lt;/span&gt; &lt;span class="kt"&gt;int64&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;RateLimiter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Allow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;current&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;current&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;maxRPS&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c"&gt;// Did anyone change count since we loaded it?&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CompareAndSwap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c"&gt;// Yes, they did. Loop and try again.&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This blew my mind the first time I wrote it. The loop isn't spinning endlessly; it retries only when another goroutine changed the counter between the &lt;code&gt;Load&lt;/code&gt; and the &lt;code&gt;CompareAndSwap&lt;/code&gt;, which is exactly the window where a retry is needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Config hot-reloads without pausing
&lt;/h2&gt;

&lt;p&gt;Replacing a config struct while the app is running is where &lt;code&gt;atomic.Pointer[T]&lt;/code&gt; shines. A &lt;code&gt;sync.RWMutex&lt;/code&gt; works, but every reader has to interact with the lock state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Server&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="n"&gt;atomic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Pointer&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Server&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;UpdateConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cfg&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Server&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;handleRequest&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;cfg&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c"&gt;// Always gets a complete, consistent config&lt;/span&gt;
    &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Timeout&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I think this might just be my favorite use case. You do all the heavy lifting of parsing and validating the new config off to the side, and the actual update is a single atomic pointer swap. No readers block. No partial reads.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to just use a Mutex
&lt;/h2&gt;

&lt;p&gt;If your update touches more than one variable, or involves complex conditional logic spanning multiple fields, stop trying to be clever.&lt;/p&gt;

&lt;p&gt;I spent two days trying to coordinate three &lt;code&gt;atomic.Int64&lt;/code&gt; counters to track queue states without locks. It was a buggy, unreadable mess. A simple &lt;code&gt;sync.Mutex&lt;/code&gt; solved it in five minutes. Atomics are for single, isolated values. If state A depends on state B, wrap them in a mutex.&lt;/p&gt;

</description>
      <category>go</category>
      <category>concurrency</category>
      <category>performance</category>
    </item>
    <item>
      <title>Home Assistant Works Great Until It Doesn't: 10 Years of Lessons</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Sun, 12 Apr 2026 00:38:58 +0000</pubDate>
      <link>https://dev.to/iampavel/home-assistant-works-great-until-it-doesnt-10-years-of-lessons-3c6e</link>
      <guid>https://dev.to/iampavel/home-assistant-works-great-until-it-doesnt-10-years-of-lessons-3c6e</guid>
      <description>&lt;p&gt;My bathroom light turned on at 3 AM last Tuesday. Not because of a motion sensor. Because a template sensor I'd written eighteen months earlier, something that checked whether the sun had set and combined it with a binary input, started evaluating differently after an update changed how &lt;code&gt;trigger_time_only&lt;/code&gt; behaves in time-based templates. The logic was identical. The behavior wasn't. I debugged YAML until 4 AM.&lt;/p&gt;

&lt;p&gt;That's what &lt;a href="https://www.home-assistant.io/" rel="noopener noreferrer"&gt;Home Assistant&lt;/a&gt; ownership actually looks like.&lt;/p&gt;

&lt;p&gt;I got into home automation the way a lot of people my age did: by building something that barely worked and being completely obsessed with it anyway. My first attempt was an Arduino driving a relay to control a lamp. It worked about 60% of the time and had a floating pin that occasionally turned the lamp on at random. I considered this a success. So when I stumbled onto Home Assistant in 2016, by then already three years old and with a serious community behind it, it felt like everything I'd been groping toward.&lt;/p&gt;

&lt;p&gt;I want to be upfront about my bias: this was always a hobby first. I did not approach it the way a normal person approaches their thermostat. I approached it the way someone approaches a long-running side project they will never fully finish. That colors everything that follows.&lt;/p&gt;

&lt;h2&gt;
  
  
  When the vendor pulls the rug
&lt;/h2&gt;

&lt;p&gt;Nobody warns you about this one clearly enough. Companies change their APIs, sometimes overnight, and your devices stop working without any notice to you.&lt;/p&gt;

&lt;p&gt;My smart thermostat ran flawlessly for two years. Then the manufacturer pushed a "platform upgrade" and the Home Assistant integration broke. The maintainer was a volunteer with a day job and a kid. It took three weeks to get a fix merged. I had no programmatic thermostat control for three weeks in February. I kept the heat running manually and bought a dumb programmable thermostat as backup that same week. Best $30 I spent on this system.&lt;/p&gt;

&lt;p&gt;This pattern repeats constantly. MyQ garage doors had their local API locked down. Various Tuya-based plugs changed their cloud handshake. A weather service I used switched authentication methods. The "local control" promise is real for many devices, but even local devices often phone home for firmware updates that can change behavior or add cloud-required authentication where there was none before. The only real protection is choosing devices with well-documented local protocols: Zigbee, Z-Wave, proper Matter implementations. Treat anything that needs a cloud handshake as temporary.&lt;/p&gt;

&lt;h2&gt;
  
  
  2.4 GHz in a dense apartment is not a solvable problem
&lt;/h2&gt;

&lt;p&gt;I live in an apartment. A 2.4 GHz scan shows somewhere around forty-seven neighboring networks, depending on the day.&lt;/p&gt;

&lt;p&gt;My ESP32-based sensors drop offline unpredictably. Not due to bad code, not due to bad hardware. The 2.4 GHz band is saturated. I have a bedroom sensor that loses connectivity for hours at a stretch when my downstairs neighbor apparently runs something particularly RF-noisy. I never nailed down exactly what. Could be a microwave, could be a baby monitor, could be a cheap router blasting on a wide channel.&lt;/p&gt;

&lt;p&gt;I tried everything in the ESPHome toolkit: static IP reservations (in the router, not the firmware), &lt;code&gt;wifi_reboot_timeout&lt;/code&gt;, &lt;code&gt;fast_connect: true&lt;/code&gt;, placing the access point as close as physically possible. The ESPHome community was genuinely helpful, but the conclusion was always the same: 2.4 GHz in dense housing is a contested shared resource. There's no fix. You manage it, you don't solve it.&lt;/p&gt;

&lt;p&gt;For critical sensors, anything feeding an automation I'd actually notice failing, I switched to Zigbee or Z-Wave. WiFi ESP32 devices now handle non-critical monitoring only: temperature logging, power sensing, things where an hour of missed readings doesn't matter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zigbee's real interference problem
&lt;/h2&gt;

&lt;p&gt;Zigbee is supposed to solve the WiFi reliability problem. Separate protocol, mesh self-healing, lower power, designed for exactly this use case. In a detached house with a single router, it probably works beautifully.&lt;/p&gt;

&lt;p&gt;The catch: Zigbee also runs on 2.4 GHz, and its channel numbering is independent of WiFi's. WiFi channels 1, 6, and 11, the standard non-overlapping set most routers use, overlap directly with Zigbee channels 11 through 24. If your coordinator is sitting on Zigbee channel 11 and your neighbor's router is blasting on WiFi channel 1, they're fighting over the same frequencies. Zigbee loses, because it transmits at much lower power.&lt;/p&gt;

&lt;p&gt;The right move, which I didn't do when I set up my network, is to pick a Zigbee channel &lt;em&gt;before&lt;/em&gt; pairing any devices, based on a scan of the local WiFi environment. Zigbee channels 25 and 26 sit above the standard WiFi interference window entirely, though not all devices support channel 26, and channel 25 can catch the upper sideband of WiFi channel 11. Channel 15 or 20 are reasonable middle choices if your neighbors are mostly on WiFi channels 1 and 6. The problem is that changing your Zigbee channel after the fact requires re-pairing every device on the network. I've done this twice. It's a full weekend project with forty devices.&lt;/p&gt;

&lt;p&gt;Even with a well-chosen channel, I still get ghost dropouts. A sensor shows "unavailable" for forty minutes, then comes back with no log entry explaining why. Aqara, IKEA Tradfri, and Third Reality devices all exhibit this to varying degrees. I've learned to set a &lt;code&gt;considered_unavailable&lt;/code&gt; timeout of several hours on non-critical sensors so I'm not getting 2 AM notifications about a bathroom motion sensor that'll be back by morning.&lt;/p&gt;
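&lt;p&gt;In Zigbee2MQTT terms, that tolerance looks roughly like the &lt;code&gt;availability&lt;/code&gt; block below. Option names and defaults have shifted between Z2M releases, so treat this as a sketch and verify against the docs for your version:&lt;/p&gt;

```yaml
# Zigbee2MQTT configuration.yaml -- sketch; check your release's docs
availability:
  enabled: true
  active:
    timeout: 10      # minutes; mains-powered routers are pinged actively
  passive:
    timeout: 1500    # minutes (25 h); battery sensors only flagged after this
```

&lt;p&gt;The long passive timeout is the piece that stops a sleepy battery sensor from paging you at 2 AM.&lt;/p&gt;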

&lt;p&gt;My coordinator is a Sonoff ZBDongle-E. One thing that genuinely helped was mounting it on a USB extension cable away from the PC's USB 3.0 ports. USB 3.0 is a known source of 2.4 GHz interference due to its switching frequency. That's the kind of thing you find buried in a Zigbee2MQTT GitHub issue at midnight.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack is genuinely large
&lt;/h2&gt;

&lt;p&gt;My current setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.proxmox.com/en/" rel="noopener noreferrer"&gt;Proxmox&lt;/a&gt; - host os&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.home-assistant.io/installation/" rel="noopener noreferrer"&gt;Home Assistant Core&lt;/a&gt;: the main application&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.home-assistant.io/integrations/recorder/" rel="noopener noreferrer"&gt;MariaDB&lt;/a&gt;: for long-term history (SQLite bogs down past a few weeks with many sensors)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.zigbee2mqtt.io/" rel="noopener noreferrer"&gt;Zigbee2MQTT&lt;/a&gt;: I switched from ZHA after the 2023.6 incident; Z2M's device support and debug tooling is better&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://mosquitto.org/" rel="noopener noreferrer"&gt;Mosquitto&lt;/a&gt;: the MQTT broker Z2M talks through&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://esphome.io/" rel="noopener noreferrer"&gt;ESPHome&lt;/a&gt;: for custom ESP32/ESP8266 devices&lt;/li&gt;
&lt;li&gt;Several HACS integrations, including one I should probably have replaced by now&lt;/li&gt;
&lt;li&gt;Node-RED, for automations complex enough that Home Assistant's native automation editor becomes painful&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every single component in that list has broken at some point. Not all at once, usually, but in combination. When a Zigbee device goes unavailable, I now run through a checklist: is the Z2M add-on running? Is the MQTT broker running? Is the coordinator physically connected (it works loose sometimes)? Is it just this device or all of them? Is this a Z2M issue or a Home Assistant entity issue? That diagnostic process takes five to ten minutes before I even start fixing anything.&lt;/p&gt;

&lt;p&gt;Some automations that actually run reliably:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Power monitoring&lt;/strong&gt; via &lt;a href="https://docs.iotawatt.com/" rel="noopener noreferrer"&gt;IotaWatt&lt;/a&gt; on my breaker panel. I get notifications when the dryer cycle ends and alerts when the AC runs for more than ninety minutes straight. IotaWatt has a local HTTP API and has never needed a cloud service. It's the most reliable thing in my stack.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Motion-based lighting&lt;/strong&gt; throughout the apartment. Bathroom lights on when occupied, hallway at 20% brightness after 9 PM. These work about 95% of the time. The other 5% is Zigbee flakiness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alexa voice control&lt;/strong&gt; via the Home Assistant Alexa integration. I exposed my instance via a reverse proxy and custom skill.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Printer head maintenance&lt;/strong&gt;: if I haven't printed in two days, HA triggers a test page via the Brother integration. Genuinely useful and has saved one printhead.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One automation I gave up on entirely: &lt;strong&gt;presence detection&lt;/strong&gt;. I tried the companion app with GPS zones, Bluetooth beacon tracking with an ESP32 in monitor mode running room-assistant, WiFi device tracking via the Unifi integration, and a combination approach. Every method had failure modes that made it unsuitable for driving "lights off when nobody's home" logic. The GPS approach turned the lights off when my phone's location services throttled. The Bluetooth approach worked until it didn't, with no clear pattern. I now use motion sensors with a 30-minute timeout as a proxy for occupancy. It's a compromise, but it's reliable.&lt;/p&gt;
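&lt;p&gt;The motion-timeout proxy is just a &lt;code&gt;for:&lt;/code&gt; duration on a state trigger. A sketch of that automation, with made-up entity IDs:&lt;/p&gt;

```yaml
# Home Assistant automation -- occupancy proxy; entity IDs are hypothetical
- alias: "Lights off after 30 minutes without motion"
  trigger:
    - platform: state
      entity_id: binary_sensor.living_room_motion
      to: "off"
      for: "00:30:00"
  action:
    - service: light.turn_off
      target:
        entity_id: light.living_room
```

&lt;p&gt;The worst failure mode is lights turning off on a motionless person reading on the couch, which is annoying but recoverable, unlike GPS presence turning the whole apartment dark.&lt;/p&gt;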

&lt;h2&gt;
  
  
  The 2023.6 weekend
&lt;/h2&gt;

&lt;p&gt;A few months ago a DNS misconfiguration meant Home Assistant couldn't resolve the MQTT broker by hostname. I'd never set up a local DNS entry; I'd been relying on the Supervisor's internal hostname resolution, which changed behavior in an OS update. Fixed it by hardcoding the IP. Minor, but the kind of thing where you need to know what DNS resolution is to even begin troubleshooting your coffee maker.&lt;/p&gt;

&lt;p&gt;That was minor compared to June 2023. I updated to 2023.6.1 on a Saturday morning. Within an hour, every Zigbee device in the apartment showed "unavailable." The ZHA integration had a breaking dependency change in how it handled the &lt;code&gt;zigpy&lt;/code&gt; library version. The GitHub issue documenting it existed, but it wasn't in the release notes in a way that would have caught my attention. The community forum was immediately full of people with the same problem.&lt;/p&gt;

&lt;p&gt;I spent most of that weekend factory-resetting and re-pairing forty devices. Some pairings required removing the device from the network first via the coordinator, then resetting the physical device, then re-interviewing it. Aqara devices have a reset procedure that involves pressing a button a specific number of times within a specific window. I looked up that procedure for maybe fifteen devices over the course of two days. My wife asked at some point why I was doing this instead of being present on the weekend. I didn't have a good answer.&lt;/p&gt;

&lt;p&gt;That incident is part of why I switched from ZHA to Zigbee2MQTT. Z2M stores its device database independently of Home Assistant, so a HA update can't invalidate your pairings. The separation has been worth the added configuration complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The time cost is the real cost
&lt;/h2&gt;

&lt;p&gt;Home Assistant is free software. The hardware is not expensive. Time is the real currency here.&lt;/p&gt;

&lt;p&gt;I've tracked my troubleshooting hours fairly carefully for the past few years. I started logging them after the 2023.6 incident because I was curious. It averages out to roughly two hours a week, but that average hides a very skewed distribution. Most weeks are zero. Weeks with major updates or unusual failures are ten-plus hours. Over ten years, conservatively, I've spent more than 500 hours keeping this system operational. That's a significant fraction of a year of working time, for a system that still fails in ways I can't predict.&lt;/p&gt;

&lt;p&gt;The community framing is that this time buys you privacy, local control, and flexibility unavailable anywhere else. That's true. What's less often said is that it also buys you a part-time job as sysadmin for your apartment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Living with people who didn't sign up for this
&lt;/h2&gt;

&lt;p&gt;My wife and daughter have no patience for my hobby. When the bathroom light turns off mid-shower because the motion sensor lost its Zigbee connection, I hear about it. When the "goodnight" routine fails to run because MQTT had a hiccup, I hear about it. My daughter learned to flip the hallway switch twice to get the light to stay on: the first flip turns it on, the second forces Home Assistant to register the state change so the automation does not turn it back off. She figured this out at age eight. No eight-year-old should have to debug state machines to use the bathroom at night.&lt;/p&gt;

&lt;p&gt;My parents visited once. My dad stood in the living room and said, pretty reasonably, "turn on the lights." Nothing happened. Alexa requires the exact invocation; you can't just talk at the room. He looked at me like I'd made something worse than what existed before. He still mentions it.&lt;/p&gt;

&lt;p&gt;The system works beautifully when everyone using it understands its quirks and limitations. That's not most people. Most people want the light to turn on when they flip the switch, and they don't want to learn which switch in which mode causes which state drift in the automation engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the alternatives actually compare
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Approximate Cost&lt;/th&gt;
&lt;th&gt;Reliability&lt;/th&gt;
&lt;th&gt;Customization&lt;/th&gt;
&lt;th&gt;Privacy&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Home Assistant&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free (hardware separate)&lt;/td&gt;
&lt;td&gt;6/10&lt;/td&gt;
&lt;td&gt;10/10&lt;/td&gt;
&lt;td&gt;10/10&lt;/td&gt;
&lt;td&gt;Hobbyists willing to maintain a stack&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hubitat&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~$130–150&lt;/td&gt;
&lt;td&gt;8/10&lt;/td&gt;
&lt;td&gt;7/10&lt;/td&gt;
&lt;td&gt;8/10&lt;/td&gt;
&lt;td&gt;Local control without managing a server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SmartThings&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~$100&lt;/td&gt;
&lt;td&gt;7/10&lt;/td&gt;
&lt;td&gt;6/10&lt;/td&gt;
&lt;td&gt;4/10&lt;/td&gt;
&lt;td&gt;Samsung ecosystem, accepts cloud dependency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Apple HomeKit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Built into iOS&lt;/td&gt;
&lt;td&gt;9/10&lt;/td&gt;
&lt;td&gt;5/10&lt;/td&gt;
&lt;td&gt;9/10&lt;/td&gt;
&lt;td&gt;Apple users who want basics to just work&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon Alexa&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~$50&lt;/td&gt;
&lt;td&gt;7/10&lt;/td&gt;
&lt;td&gt;4/10&lt;/td&gt;
&lt;td&gt;2/10&lt;/td&gt;
&lt;td&gt;Voice-first, not privacy-focused&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Home Assistant wins on customization and privacy. It loses on reliability and on what I'd call operational overhead: the amount of ongoing attention the system requires to stay functional. Hubitat is the closest thing to Home Assistant for people who want local control without maintaining a multi-service stack. HomeKit, if you're in the Apple ecosystem and your use case is relatively simple, has dramatically better day-to-day reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who should actually run this
&lt;/h2&gt;

&lt;p&gt;Run Home Assistant if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Troubleshooting is something you find engaging rather than draining&lt;/li&gt;
&lt;li&gt;You can reliably spare a few hours most weeks, and sometimes an entire weekend&lt;/li&gt;
&lt;li&gt;Privacy and local control are genuinely non-negotiable for you&lt;/li&gt;
&lt;li&gt;You're treating this as a hobby, not a utility&lt;/li&gt;
&lt;li&gt;You have reasonable Linux/networking knowledge. Not expert level, but enough to know what a YAML syntax error looks like and what DNS does&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skip it if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want the lights to reliably work&lt;/li&gt;
&lt;li&gt;You live with people who didn't opt into a temperamental system&lt;/li&gt;
&lt;li&gt;You'd rather spend free time on other things&lt;/li&gt;
&lt;li&gt;You need presence detection to function&lt;/li&gt;
&lt;li&gt;Technology failing unexpectedly is something you find genuinely stressful&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where I've landed
&lt;/h2&gt;

&lt;p&gt;I'm not migrating off Home Assistant. The IotaWatt integration alone is worth staying for, and I've built enough custom logic over the years that rebuilding it elsewhere would be its own project. But I've significantly dialed back my ambitions for what the system should do.&lt;/p&gt;

&lt;p&gt;Nothing safety-critical runs through Home Assistant. Every light has a physical switch that works independently of network state. The thermostat has a manual override. When the smart garage opener died, I went back to the original remote and didn't replace it with anything smart.&lt;/p&gt;

&lt;p&gt;If I were starting from scratch today, I'd buy far fewer devices and be much more selective about protocols. Everything on the critical path would be Zigbee or Z-Wave with known-good local support, deployed on a carefully chosen channel, with the coordinator on a USB extension cable away from noisy USB 3.0 ports. I'd skip presence detection entirely. I'd probably use a Home Assistant Green instead of building my own box. The $99 hardware is worth not debugging SD card corruption.&lt;/p&gt;

&lt;p&gt;The honest version is this: the automated home as it exists in practice is still a project, not a product. For some people that's the appeal. For most people, it's a reason to buy smart bulbs that work with HomeKit and leave it at that.&lt;/p&gt;

</description>
      <category>homeautomation</category>
      <category>homeassistant</category>
      <category>iot</category>
      <category>zigbee</category>
    </item>
    <item>
      <title>The Only Prometheus Metrics I Actually Alert On</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Sat, 11 Apr 2026 16:40:43 +0000</pubDate>
      <link>https://dev.to/iampavel/the-only-prometheus-metrics-i-actually-alert-on-41fm</link>
      <guid>https://dev.to/iampavel/the-only-prometheus-metrics-i-actually-alert-on-41fm</guid>
      <description>&lt;p&gt;I used to instrument everything. Every function call, every cache hit, every database query. My Prometheus instance was ingesting somewhere north of 50,000 samples per second across three services, and I thought that density meant rigor. Then our checkout flow went down at 3 AM during a sale event, and I spent twenty minutes scrolling through dashboards before I found the single metric that mattered: connection pool exhaustion on the payments database. It had been queuing for six minutes before queries started timing out. I had a metric for it. I just wasn't alerting on it.&lt;/p&gt;

&lt;p&gt;That's the gap this post is about. Not the theoretical list of things you &lt;em&gt;could&lt;/em&gt; track, but what I've found worth waking up for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden Signals are a starting point, not an answer
&lt;/h2&gt;

&lt;p&gt;Google's four Golden Signals (latency, traffic, errors, saturation) point you in the right direction. But they're underspecified enough that following them naively leads to useless alerts.&lt;/p&gt;

&lt;p&gt;Latency without percentiles tells you nothing actionable. If your p99 is 2 seconds and your mean is 50ms, the mean is actively misleading. Users hit the tail, not the average. I track p50, p95, p99, and max. The gap between p95 and p99 is often the most interesting number. A large gap usually means a specific slow path (a missing index, a lock contention, an N+1 query) rather than a general performance problem.&lt;/p&gt;

&lt;p&gt;Errors need to distinguish user-visible failures from internal retries. A database timeout that triggers a retry and eventually succeeds is not the same failure mode as a 500 returned to the user, but both increment error counters in most client libraries by default. I split by severity: critical for user-visible failures, warning for degraded-but-functional, and info for retried-successfully events. Only critical fires a page.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually instrument in HTTP services
&lt;/h2&gt;

&lt;p&gt;I instrument at the edge, a single middleware or handler wrapper, so every endpoint gets consistent labels automatically. Three metrics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;http_requests_total{method, path, status}&lt;/span&gt;
&lt;span class="s"&gt;http_request_duration_seconds{method, path, status, le}&lt;/span&gt;
&lt;span class="s"&gt;http_in_flight_requests{method, path}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;http_requests_total&lt;/code&gt; gives you rate and error ratio. The histogram gives you latency at any percentile. In-flight requests catch saturation before it shows up in latency. By the time requests slow down, you've usually been saturated for a while.&lt;/p&gt;
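&lt;p&gt;From those three metrics, the queries I graph and alert on look something like this (windows and the recording-rule question are up to you):&lt;/p&gt;

```promql
# p99 latency per path over the last 5 minutes
histogram_quantile(0.99,
  sum by (le, path) (rate(http_request_duration_seconds_bucket[5m])))

# user-visible error ratio
sum(rate(http_requests_total{status=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))

# saturation: current in-flight requests per endpoint
sum by (path) (http_in_flight_requests)
```

&lt;p&gt;Everything else I look at during an incident is a drill-down from one of these three.&lt;/p&gt;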

&lt;p&gt;The mistake I made early on: I used the full request path as a label, so &lt;code&gt;/users/12345/profile&lt;/code&gt; and &lt;code&gt;/users/67890/profile&lt;/code&gt; became separate time series. At any meaningful user count, cardinality explodes and Prometheus runs out of memory. Sanitize paths before labeling. Replace ID segments with a placeholder like &lt;code&gt;/users/{id}/profile&lt;/code&gt;. This is obvious in retrospect but I've seen it sink three separate teams' setups.&lt;/p&gt;

&lt;p&gt;For gRPC, same pattern, but I add &lt;code&gt;grpc_code&lt;/code&gt; as a label. gRPC status codes are more expressive than HTTP codes (RESOURCE_EXHAUSTED vs UNAVAILABLE vs DEADLINE_EXCEEDED all have different remediation paths), and I reference them directly in alert conditions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database metrics: the failure mode that sneaks up on you
&lt;/h2&gt;

&lt;p&gt;Connection pool exhaustion is the failure mode that hurts most and is hardest to detect early. By the time queries are timing out, you've been at capacity for minutes. These come from the application, not the database server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight prometheus"&gt;&lt;code&gt;&lt;span class="n"&gt;db_pool_connections_active&lt;/span&gt;
&lt;span class="n"&gt;db_pool_connections_idle&lt;/span&gt;
&lt;span class="n"&gt;db_pool_connections_wait_count_total&lt;/span&gt;
&lt;span class="n"&gt;db_query_duration_seconds&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;le&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The wait count is your leading indicator. I alert when &lt;code&gt;rate(db_pool_connections_wait_count_total[5m]) &amp;gt; 0&lt;/code&gt;, but I treat it as a warning, not a page. Brief queuing can happen under bursty traffic without indicating a real problem. Sustained queuing (for more than a few minutes) usually means an undersized pool or a connection leak. The alert tells me to look; I don't automatically assume the worst.&lt;/p&gt;
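&lt;p&gt;As a Prometheus rule, that warning looks roughly like this; the &lt;code&gt;for:&lt;/code&gt; clause is what separates sustained queuing from a harmless burst (the five-minute value is my choice, not a standard):&lt;/p&gt;

```yaml
# Prometheus rule file -- sketch; thresholds are illustrative
groups:
  - name: db-pool
    rules:
      - alert: DBPoolQueuing
        expr: rate(db_pool_connections_wait_count_total[5m]) > 0
        for: 5m          # only fire on sustained queuing
        labels:
          severity: warning
        annotations:
          summary: "Connections are queuing for the database pool"
```

&lt;p&gt;Routing it at &lt;code&gt;severity: warning&lt;/code&gt; means it lands in Slack during business hours instead of paging anyone.&lt;/p&gt;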

&lt;p&gt;Query duration histograms need custom buckets. Default client library buckets assume sub-second operations, but a report query or a schema migration step might legitimately take 30 seconds. I use: 0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 30.0. If your query durations don't fit this range, adjust. The buckets should bracket your actual distribution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server metrics: what node_exporter exports vs. what matters
&lt;/h2&gt;

&lt;p&gt;Node exporter exports 700+ metrics by default in current versions (and more with optional collectors enabled). Most are noise for application operators. The ones I use consistently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;node_cpu_seconds_total&lt;/code&gt;: CPU utilization via &lt;code&gt;rate()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;node_memory_MemAvailable_bytes&lt;/code&gt;: not MemFree; available includes reclaimable cache and gives a realistic picture of OOM risk&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;node_filesystem_avail_bytes&lt;/code&gt;: disk space&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;node_load1&lt;/code&gt;: secondary signal, never primary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I care about CPU, I filter by mode. &lt;code&gt;iowait&lt;/code&gt; indicates a disk-bound process; &lt;code&gt;steal&lt;/code&gt; in virtualized environments indicates you're competing with other tenants for CPU time. User and system time being high is expected. Iowait and steal are the modes that suggest something is wrong upstream of your application.&lt;/p&gt;
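&lt;p&gt;The per-mode filter is a one-liner in PromQL; the thresholds below are my rules of thumb, not standards:&lt;/p&gt;

```promql
# fraction of CPU time spent waiting on disk, per instance
avg by (instance) (rate(node_cpu_seconds_total{mode="iowait"}[5m])) > 0.10

# steal time: fighting other tenants for the hypervisor's CPU
avg by (instance) (rate(node_cpu_seconds_total{mode="steal"}[5m])) > 0.05
```

&lt;p&gt;Sustained steal in particular is worth alerting on in cloud environments, because nothing inside your application logs will ever explain it.&lt;/p&gt;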

&lt;h2&gt;
  
  
  The Alertmanager config I wish I'd started with
&lt;/h2&gt;

&lt;p&gt;My first Alertmanager config routed everything to a single Slack channel. During a cascading failure, I received 400 messages in 10 minutes and missed the root cause entirely. The cascade was loud enough to bury the signal. Here's the structure I've settled on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;route&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group_by&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;alertname'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cluster'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;severity'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;group_wait&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
  &lt;span class="na"&gt;group_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
  &lt;span class="na"&gt;repeat_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;12h&lt;/span&gt;
  &lt;span class="na"&gt;receiver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;default'&lt;/span&gt;
  &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;severity = critical&lt;/span&gt;
      &lt;span class="na"&gt;receiver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pagerduty&lt;/span&gt;
      &lt;span class="na"&gt;continue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;severity = warning&lt;/span&gt;
      &lt;span class="na"&gt;receiver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;slack&lt;/span&gt;
      &lt;span class="na"&gt;group_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TD
    A[Prometheus fires alerts] --&amp;gt; B[Alertmanager]
    B --&amp;gt; C[Alert Processing]
    C --&amp;gt; D[Deduplication]
    C --&amp;gt; E[Grouping by alertname, cluster, severity]
    C --&amp;gt; F[Apply Silences]
    D --&amp;gt; G[Route to Receivers]
    E --&amp;gt; G
    F --&amp;gt; G
    G --&amp;gt;|severity=critical| H[PagerDuty]
    G --&amp;gt;|severity=critical + continue=true| I[Slack]
    G --&amp;gt;|severity=warning| J[Slack batched 30m]
    G --&amp;gt;|severity=info| K[Slack batched 30m]
    H --&amp;gt; L[Wake someone up]
    I --&amp;gt; M[Channel context]
    J --&amp;gt; N[Look in business hours]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Critical alerts hit PagerDuty immediately. Warnings batch into 30-minute windows. This prevents alert fatigue while keeping urgent pages urgent.&lt;/p&gt;

&lt;p&gt;Alert fatigue is real. When everything pages, nothing pages. Teams start muting notifications because they can't sleep, and then the alerts that matter get buried too. Better to have three alerts you actually respond to than thirty you've learned to ignore.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;continue: true&lt;/code&gt; on the critical route matters. Without it, a matching route stops processing entirely, so critical alerts would never reach Slack. I want them in both places: PagerDuty wakes someone up, Slack gives the channel context.&lt;/p&gt;

&lt;p&gt;A practical warning: Alertmanager's matching syntax changed meaningfully between versions. Before v0.22, you use &lt;code&gt;match:&lt;/code&gt; and &lt;code&gt;match_re:&lt;/code&gt; as maps. From v0.22 onward, the recommended syntax is &lt;code&gt;matchers:&lt;/code&gt; with a list format. In practice, many deployments run mixed versions. The docs you find may not match what you've deployed. Check your version first.&lt;/p&gt;
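&lt;p&gt;For reference, here is the same route expressed in both syntaxes (illustrative only; check your deployed version before copying either):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pre-v0.22 map syntax
- match:
    severity: critical
  receiver: pagerduty

# v0.22+ list syntax
- matchers:
    - severity = critical
  receiver: pagerduty
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
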

&lt;h2&gt;
  
  
  Recording rules: when complexity pays off
&lt;/h2&gt;

&lt;p&gt;Recording rules pre-compute expensive queries and make dashboards and alerts faster. I use them in two situations: when a query takes more than a second to evaluate, and when I reference the same complex expression in multiple alerts or dashboards. Outside those two cases, they add complexity without payoff.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http_rates&lt;/span&gt;
    &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
    &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;record&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;job:http_requests:rate5m&lt;/span&gt;
        &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sum(rate(http_requests_total[5m])) by (job)&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;record&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;job:http_errors:rate5m&lt;/span&gt;
        &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sum(rate(http_requests_total{status=~"5.."}[5m])) by (job)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With those recorded, the error ratio alert simplifies to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HighErrorRate&lt;/span&gt;
  &lt;span class="na"&gt;expr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;job:http_errors:rate5m / job:http_requests:rate5m &amp;gt; &lt;/span&gt;&lt;span class="m"&gt;0.01&lt;/span&gt;
  &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without recording rules, that division across all time series can time out on high-cardinality metrics. Whether you need recording rules at all depends on your scale. At small cardinality, raw queries work fine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alert thresholds and avoiding false positives
&lt;/h2&gt;

&lt;p&gt;Every alert I write includes a &lt;code&gt;for:&lt;/code&gt; duration. An instant threshold crossing is almost always a deployment blip, a brief GC pause, or a transient spike. Not something worth paging about. I use 5 minutes for error rate and latency, 10 minutes for saturation and capacity, and 0 minutes only for complete outages or security events.&lt;/p&gt;

&lt;p&gt;I include a &lt;code&gt;severity&lt;/code&gt; label in every alert. Without it, Alertmanager can't route correctly. Critical means wake someone up. Warning means look at it during business hours. Info means log it.&lt;/p&gt;
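&lt;p&gt;Put together, a typical rule looks like this (the alert name, metric, and 500ms threshold are examples, not recommendations):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- alert: HighLatency
  expr: histogram_quantile(0.99, sum by (le, job) (rate(http_request_duration_seconds_bucket[5m]))) &amp;gt; 0.5
  for: 5m              # survives deploy blips and GC pauses
  labels:
    severity: warning  # routes to Slack, not PagerDuty
  annotations:
    summary: "p99 latency above 500ms for {{ $labels.job }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
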

&lt;h2&gt;
  
  
  What I stopped monitoring
&lt;/h2&gt;

&lt;p&gt;I don't alert on memory usage anymore. High memory usage isn't a problem; OOM is. For containers, alert on &lt;code&gt;container_oom_events_total&lt;/code&gt; from cAdvisor. For VMs or bare metal, watch the OOM killer logs. Memory pressure without OOM is usually just efficient caching.&lt;/p&gt;
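&lt;p&gt;A minimal version of the container rule, assuming cAdvisor metrics are being scraped:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- alert: ContainerOOMKilled
  expr: increase(container_oom_events_total[10m]) &amp;gt; 0
  labels:
    severity: critical
  annotations:
    summary: "OOM kill in {{ $labels.container }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
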

&lt;p&gt;I don't alert on disk I/O wait directly. I alert on latency increases that correlate with high iowait. The user pain is the latency, not the disk activity.&lt;/p&gt;

&lt;p&gt;I don't use predictive disk fill alerts ("disk will fill in 4 hours"). They generate false positives during batch jobs and log rotations without giving you actionable signal. Instead, I alert at 85% capacity with a 1-hour &lt;code&gt;for:&lt;/code&gt; duration. This is a simpler threshold, but it assumes you're monitoring write rates separately for services where fill speed matters. If a service can fill a disk in under an hour, 85% may not give you enough runway, and you'd want to tighten the threshold or add a rate-based rule.&lt;/p&gt;
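&lt;p&gt;The 85% rule, expressed against node_exporter's filesystem metrics (the fstype exclusions are one common choice, not the only one):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- alert: DiskNearlyFull
  expr: |
    (1 - node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
       / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"}) &amp;gt; 0.85
  for: 1h
  labels:
    severity: warning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
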

&lt;h2&gt;
  
  
  Grafana: dashboards for humans under pressure
&lt;/h2&gt;

&lt;p&gt;When I get paged at 3 AM, I need to understand what's broken in under 30 seconds. That's the design constraint. My main dashboard has four panels: request rate (QPS), error rate percentage, p99 latency, and active alerts by severity. Everything else lives on sub-dashboards, linked from variable-based drill-downs.&lt;/p&gt;

&lt;p&gt;A few things I've found useful beyond the basics: variable-based filtering lets you scope a dashboard to a single service or cluster without duplicating it. Template variables with &lt;code&gt;$job&lt;/code&gt; and &lt;code&gt;$instance&lt;/code&gt; selectors give you one dashboard that works across all your services. I also keep a separate dashboard for saturation signals: connection pool utilization, thread pool depth, queue length. These are leading indicators I want to check when latency starts rising but errors haven't followed yet.&lt;/p&gt;
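&lt;p&gt;A sketch of what variable-based filtering looks like in a panel query; Grafana substitutes the dashboard's selections before the query reaches Prometheus (the metric name here is a generic stand-in):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Error rate panel scoped by the $job and $instance template variables
sum(rate(http_requests_total{job=~"$job", instance=~"$instance", status=~"5.."}[5m]))
  / sum(rate(http_requests_total{job=~"$job", instance=~"$instance"}[5m]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
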

&lt;p&gt;For the panels themselves, I use recording rule data rather than raw metrics. Dashboards render faster and the difference in precision doesn't matter for human-readable trend graphs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The cardinality trap
&lt;/h2&gt;

&lt;p&gt;High cardinality is the most common way to break Prometheus. Each unique label combination creates a new time series. With 1,000 pods each exporting 100 metrics across 10 label combinations, you're already at 1 million series, and Prometheus's memory usage grows roughly linearly with series count.&lt;/p&gt;

&lt;p&gt;My constraints: no more than 5 labels per metric, no unbounded labels (user IDs, session IDs, request IDs, UUIDs), and no single metric with more than ~1,000 unique label combinations in practice.&lt;/p&gt;

&lt;p&gt;I use &lt;code&gt;prometheus_tsdb_head_series&lt;/code&gt; to watch my own cardinality. When it grows unexpectedly between deploys, I track down the culprit with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;topk(10, count by (__name__)({job="myjob"}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows which metric names have the most series. From there, a &lt;code&gt;count by (label_name)&lt;/code&gt; query on the offending metric usually surfaces the high-cardinality label immediately.&lt;/p&gt;
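&lt;p&gt;The follow-up query, with &lt;code&gt;http_requests_total&lt;/code&gt; and &lt;code&gt;path&lt;/code&gt; standing in for whatever your top offender and suspect label turn out to be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# How many distinct values does this label contribute?
count(count by (path) (http_requests_total))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
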

</description>
      <category>prometheus</category>
      <category>grafana</category>
      <category>monitoring</category>
      <category>sre</category>
    </item>
    <item>
      <title>The Meta Tags That Matter (And the JSON-LD That Gets You Cited)</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Sat, 11 Apr 2026 10:44:15 +0000</pubDate>
      <link>https://dev.to/iampavel/the-meta-tags-that-matter-and-the-json-ld-that-gets-you-cited-1ogj</link>
      <guid>https://dev.to/iampavel/the-meta-tags-that-matter-and-the-json-ld-that-gets-you-cited-1ogj</guid>
<description>&lt;p&gt;I used to treat meta tags as an afterthought. Slap a &lt;code&gt;&amp;lt;title&amp;gt;&lt;/code&gt; on the page, maybe add a description, call it done. Then I watched my carefully written articles get buried in search results while thin content from bigger sites ranked above me. The difference? They spoke Google's language; I wasn't using structured data.&lt;/p&gt;

&lt;p&gt;JSON-LD structured data and proper meta tags aren't just "nice to have" anymore. They're how you tell search engines and AI systems what your content actually means. Without them, you're trusting algorithms to guess correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Meta Tags Do
&lt;/h2&gt;

&lt;p&gt;Meta tags are HTML elements that describe your page to machines. Humans never see them directly, but they determine how your page appears in search results, social shares, and AI summaries.&lt;/p&gt;

&lt;p&gt;There are three kinds that matter: search engine metadata, social sharing metadata, and structured data.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Non-Negotiable Search Tags
&lt;/h3&gt;

&lt;p&gt;Every page needs these, full stop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Your Article Title — Site Name&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt;
    &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"description"&lt;/span&gt;
    &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"A compelling 150-160 character summary that includes your primary keyword."&lt;/span&gt;
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;link&lt;/span&gt; &lt;span class="na"&gt;rel=&lt;/span&gt;&lt;span class="s"&gt;"canonical"&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"https://example.com/your-page"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The title should be under 60 characters or Google truncates it. The description won't directly affect your ranking, but it absolutely affects whether people click. And the canonical tag prevents duplicate content issues when the same content lives at multiple URLs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Social Sharing: Open Graph and Twitter Cards
&lt;/h3&gt;

&lt;p&gt;Without Open Graph tags, social platforms just guess what to show. You'll get random images and truncated titles.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Open Graph (Facebook, LinkedIn, etc.) --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:title"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"Your Article Title"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:description"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"Description for social shares"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:image"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"https://example.com/og-image.jpg"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:url"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"https://example.com/your-page"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:type"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"article"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

&lt;span class="c"&gt;&amp;lt;!-- Twitter/X Cards --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:card"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"summary_large_image"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:title"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"Your Article Title"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:description"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"Description for Twitter"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:image"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"https://example.com/twitter-image.jpg"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use &lt;code&gt;summary_large_image&lt;/code&gt; for Twitter if you have a good featured image. The regular &lt;code&gt;summary&lt;/code&gt; card is smaller and less engaging. And yes, you still need to call them Twitter Cards — that's what the specification is named even though the company is now X.&lt;/p&gt;

&lt;h2&gt;
  
  
  JSON-LD: Structured Data That Gets You Featured
&lt;/h2&gt;

&lt;p&gt;JSON-LD (JavaScript Object Notation for Linked Data) is the format Google recommends for structured data. It's a script block that tells search engines exactly what entities are on your page.&lt;/p&gt;
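&lt;p&gt;The embedding itself is just a script tag, typically in the &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt; (a minimal skeleton; real schemas carry far more fields, as below):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script type="application/ld+json"&amp;gt;
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Your Article Title"
}
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
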

&lt;p&gt;What got me interested: pages with proper structured data get featured snippets, rich results, and knowledge panel entries. Rotten Tomatoes &lt;a href="https://developers.google.com/search/case-studies/rotten-tomatoes" rel="noopener noreferrer"&gt;saw a 25% higher click-through rate&lt;/a&gt; after adding structured data. The Food Network &lt;a href="https://developers.google.com/search/case-studies/food-network" rel="noopener noreferrer"&gt;got a 35% increase in visits&lt;/a&gt;. It actually works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multiple Image Aspect Ratios
&lt;/h3&gt;

&lt;p&gt;Most people just slap one image in their structured data. But Google recommends multiple aspect ratios — 16:9 for rich results, 4:3 for standard displays, and 1:1 for square crops.&lt;/p&gt;

&lt;p&gt;My BlogPosting schema handles this automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;siteBaseUrl&lt;/span&gt;&lt;span class="p"&gt;}${&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;coverImage&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Original (typically 16:9)&lt;/span&gt;
    &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;siteBaseUrl&lt;/span&gt;&lt;span class="p"&gt;}${&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;coverImage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\.(&lt;/span&gt;&lt;span class="sr"&gt;png|jpg|jpeg|webp&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;$/i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;-1x1.$1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// 1:1 square&lt;/span&gt;
    &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;siteBaseUrl&lt;/span&gt;&lt;span class="p"&gt;}${&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;coverImage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;\.(&lt;/span&gt;&lt;span class="sr"&gt;png|jpg|jpeg|webp&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;$/i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;-4x3.$1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="c1"&gt;// 4:3 standard&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the post doesn't have a cover image, it falls back to dynamically generated OG images with the same aspect ratio options. Your content looks right whether it appears in Google's rich results, Twitter cards, or LinkedIn shares.&lt;/p&gt;

&lt;h3&gt;
  
  
  ProfilePage Schema
&lt;/h3&gt;

&lt;p&gt;For my homepage, I use Google's ProfilePage structured data — a relatively new schema type specifically designed for personal websites and author pages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;generateProfilePageSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;postsCount&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@context&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://schema.org&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ProfilePage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;dateCreated&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2024-01-15&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;dateModified&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2026-04-11&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;mainEntity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Person&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Your Name&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;jobTitle&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Software Engineer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://example.com/photo.jpg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// 1:1 headshot&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://example.com/og-16x9.jpg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// 16:9 banner&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://example.com/og-4x3.jpg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="c1"&gt;// 4:3 alternative&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="na"&gt;sameAs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://github.com/username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://linkedin.com/in/username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://twitter.com/username&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="c1"&gt;// This part is interesting:&lt;/span&gt;
            &lt;span class="na"&gt;agentInteractionStatistic&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;InteractionCounter&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;interactionType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://schema.org/WriteAction&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;userInteractionCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;postsCount&lt;/span&gt; &lt;span class="c1"&gt;// e.g., 42&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;agentInteractionStatistic&lt;/code&gt; with &lt;code&gt;WriteAction&lt;/code&gt; tells Google how many posts you've written. When someone searches for your name, they might see "42 posts" in your knowledge panel. That's social proof in search results.&lt;/p&gt;

&lt;h3&gt;
  
  
  BlogPosting Schema: Author and Publisher
&lt;/h3&gt;

&lt;p&gt;Google has specific guidelines for authorship markup. You need both an author (the Person who wrote it) and a publisher (the Organization responsible for it). Even on a personal blog, you're both:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;generateBlogPostingSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@context&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://schema.org&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;BlogPosting&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;headline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Person&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;authorName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;siteBaseUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;sameAs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;authorSameAs&lt;/span&gt; &lt;span class="c1"&gt;// Links to GitHub, LinkedIn, etc.&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Organization&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;authorName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;siteBaseUrl&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;datePublished&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2026-04-11T08:00:00+00:00&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;dateModified&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;2026-04-11T10:00:00+00:00&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c1"&gt;// ISO 8601 duration format for reading time&lt;/span&gt;
        &lt;span class="na"&gt;timeRequired&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;PT8M&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="c1"&gt;// 8 minutes&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;sameAs&lt;/code&gt; array matters — it connects your content to your established presence on GitHub, LinkedIn, Twitter, etc. Google uses these E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) signals for ranking.&lt;/p&gt;

&lt;h3&gt;
  
  
  FAQ Schema for "People Also Ask"
&lt;/h3&gt;

&lt;p&gt;If you want to capture featured snippets, FAQ schema is your best bet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;generateFAQSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FAQItem&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@context&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://schema.org&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;FAQPage&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;mainEntity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Question&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;question&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;acceptedAnswer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Answer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;answer&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep answers to 40-60 words. Longer answers get truncated or ignored by Google's snippet extraction. I store FAQ items in the post frontmatter and render them both visually on the page and invisibly in the JSON-LD.&lt;/p&gt;
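&lt;p&gt;That word budget is easy to enforce at build time. A small guard over the frontmatter items might look like this (the &lt;code&gt;FAQItem&lt;/code&gt; shape mirrors what &lt;code&gt;generateFAQSchema&lt;/code&gt; consumes; the helper itself is hypothetical, not part of the article's codebase):&lt;/p&gt;

```typescript
// Mirrors the FAQItem shape used by generateFAQSchema (assumption).
interface FAQItem {
    question: string;
    answer: string;
}

// Hypothetical build-time guard: flag FAQ answers that exceed the
// ~60-word budget that snippet extraction tends to truncate.
function oversizedAnswers(items: FAQItem[], maxWords = 60): FAQItem[] {
    return items.filter(
        (item) => item.answer.trim().split(/\s+/).length > maxWords
    );
}
```

&lt;p&gt;Run it in the build step and fail loudly, so over-long answers never reach the JSON-LD.&lt;/p&gt;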

&lt;h2&gt;
  
  
  Putting It All Together: The Complete SEO Component
&lt;/h2&gt;

&lt;p&gt;I combine everything in a SvelteKit SEO component using &lt;code&gt;$derived&lt;/code&gt; to reactively build metadata from props, then render it server-side so crawlers see it immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight svelte"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;lang=&lt;/span&gt;&lt;span class="s"&gt;"ts"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;breadcrumbs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;readingTime&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;$props&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// Reactively generate schemas whenever props change&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;blogPostingSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;$derived&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nf"&gt;generateBlogPostingSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;breadcrumbSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;$derived&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nx"&gt;breadcrumbs&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nf"&gt;generateBreadcrumbSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;breadcrumbs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;faqSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;$derived&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nf"&gt;generateFAQSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Combine all structured data into one array&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;allStructuredData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;$derived&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
        &lt;span class="p"&gt;...(&lt;/span&gt;&lt;span class="nx"&gt;blogPostingSchema&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;blogPostingSchema&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]),&lt;/span&gt;
        &lt;span class="p"&gt;...(&lt;/span&gt;&lt;span class="nx"&gt;breadcrumbSchema&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;breadcrumbSchema&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]),&lt;/span&gt;
        &lt;span class="p"&gt;...(&lt;/span&gt;&lt;span class="nx"&gt;faqSchema&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;faqSchema&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;
    &lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;svelte:head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;pageTitle&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"description"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;link&lt;/span&gt; &lt;span class="na"&gt;rel=&lt;/span&gt;&lt;span class="s"&gt;"canonical"&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;canonicalUrl&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

    &lt;span class="c"&gt;&amp;lt;!-- Open Graph with multiple images --&amp;gt;&lt;/span&gt;
    &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;#if&lt;/span&gt; &lt;span class="nx"&gt;ogImages&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;#each&lt;/span&gt; &lt;span class="nx"&gt;ogImages&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:image"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:image:width"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;property=&lt;/span&gt;&lt;span class="s"&gt;"og:image:height"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;/each&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;/if&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;&amp;lt;!-- Twitter with reading time labels --&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:card"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"summary_large_image"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:label1"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"Written by"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:data1"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;authorName&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:label2"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"Est. reading time"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"twitter:data2"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;readingTimeStr&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;

    &lt;span class="c"&gt;&amp;lt;!-- JSON-LD rendered server-side --&amp;gt;&lt;/span&gt;
    &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;#each&lt;/span&gt; &lt;span class="nx"&gt;allStructuredData&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
        &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;@html&lt;/span&gt; &lt;span class="s2"&gt;`&amp;lt;script type="application/ld+json"&amp;gt;${JSON.stringify(data)}&amp;lt;/script&amp;gt;`&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="k"&gt;/each&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/svelte:head&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;{@html}&lt;/code&gt; tag is necessary because Svelte escapes script content by default. Since the data is server-generated, this is safe.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the Component
&lt;/h3&gt;

&lt;p&gt;In your &lt;code&gt;+page.svelte&lt;/code&gt;, pass the data from your loader:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight svelte"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script&amp;gt;&lt;/span&gt;
    &lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;SEO&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@components/SEO/index.svelte&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;$props&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;SEO&lt;/span&gt;
    &lt;span class="na"&gt;title=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;description=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;description&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;post=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;faq=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;faq&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="na"&gt;readingTime=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;readingTime&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. The component generates all the meta tags, OG images, and JSON-LD automatically. Since it renders in &lt;code&gt;svelte:head&lt;/code&gt; during SSR, crawlers see the complete markup without waiting for JavaScript.&lt;/p&gt;

&lt;p&gt;When you build for Cloudflare Workers (&lt;code&gt;pnpm build&lt;/code&gt;), the output in &lt;code&gt;.svelte-kit/cloudflare/&lt;/code&gt; contains static HTML with all the JSON-LD already injected. No client-side rendering required — search engine crawlers get the structured data on the first request.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A note on tooling:&lt;/strong&gt; Google Search Console's Rich Results Test has a caching layer that can make results inconsistent — run it twice and you may get different output. More importantly, the &lt;a href="https://schema.org" rel="noopener noreferrer"&gt;schema.org documentation&lt;/a&gt; lists hundreds of properties, but only a fraction are recognized by Google. Use the &lt;a href="https://developers.google.com/search/docs/appearance/structured-data/search-gallery" rel="noopener noreferrer"&gt;Google Search Central documentation&lt;/a&gt; as your source of truth — it's what actually triggers rich results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  AI SEO: Why This Matters More Now
&lt;/h2&gt;

&lt;p&gt;Traditional SEO gets you ranked. AI SEO gets you &lt;strong&gt;cited&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;ChatGPT, Perplexity, and Google's AI Overviews pull from structured data when generating answers. If your content isn't marked up correctly, AI systems can't extract it cleanly. They'll cite your competitors instead.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://arxiv.org/abs/2311.09735" rel="noopener noreferrer"&gt;Princeton's Generative Engine Optimization research&lt;/a&gt;, pages with proper citations and statistics get cited 40% more often by AI systems. Pages with clear structure and schema markup see 30-40% higher visibility. The same JSON-LD that gets you rich snippets also makes your content extractable for AI answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Tools
&lt;/h2&gt;

&lt;p&gt;Don't just add markup and hope. Test it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://search.google.com/test/rich-results" rel="noopener noreferrer"&gt;Google Rich Results Test&lt;/a&gt;&lt;/strong&gt; — Shows which features your page qualifies for. Paste a URL or raw HTML.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://validator.schema.org/" rel="noopener noreferrer"&gt;Schema.org Validator&lt;/a&gt;&lt;/strong&gt; — Catches syntax errors and missing required fields. More lenient than Google's tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://developers.facebook.com/tools/debug/" rel="noopener noreferrer"&gt;Facebook Sharing Debugger&lt;/a&gt;&lt;/strong&gt; — Forces a scrape of your OG tags and shows exactly what Facebook sees.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://cards-dev.twitter.com/validator" rel="noopener noreferrer"&gt;Twitter Card Validator&lt;/a&gt;&lt;/strong&gt; — Same for Twitter cards. Shows card type, image, and preview.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run the Rich Results Test twice. Google's cache can lag, and the second run is often more accurate than the first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Image Dimensions
&lt;/h2&gt;

&lt;p&gt;Get the dimensions wrong and you'll end up with blurry crops or platforms ignoring your images entirely.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Optimal Size&lt;/th&gt;
&lt;th&gt;Aspect Ratio&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Twitter/X&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1200×600 or 1200×1200&lt;/td&gt;
&lt;td&gt;2:1 or 1:1&lt;/td&gt;
&lt;td&gt;Large summary card prefers 2:1, but 1:1 also works&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Facebook&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1200×630&lt;/td&gt;
&lt;td&gt;1.91:1&lt;/td&gt;
&lt;td&gt;If you only make one OG image, make this size&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1200×627&lt;/td&gt;
&lt;td&gt;1.91:1&lt;/td&gt;
&lt;td&gt;Nearly identical to Facebook's size; in practice either works&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1200×675&lt;/td&gt;
&lt;td&gt;16:9&lt;/td&gt;
&lt;td&gt;For rich results and Discover feed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Minimums to avoid rejection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Twitter: 144×144 (they upscale, but it looks bad)&lt;/li&gt;
&lt;li&gt;Facebook: 200×200 (anything smaller gets ignored)&lt;/li&gt;
&lt;li&gt;Google rich results: 120×120 absolute minimum, 600×600 recommended&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>seo</category>
      <category>jsonld</category>
      <category>metatags</category>
      <category>structureddata</category>
    </item>
    <item>
      <title>NixOS vs Traditional Linux: Why I Made the Switch and What I Learned</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Thu, 09 Apr 2026 22:39:09 +0000</pubDate>
      <link>https://dev.to/iampavel/nixos-vs-traditional-linux-why-i-made-the-switch-and-what-i-learned-4j3k</link>
      <guid>https://dev.to/iampavel/nixos-vs-traditional-linux-why-i-made-the-switch-and-what-i-learned-4j3k</guid>
      <description>&lt;p&gt;After years of distro-hopping and watching every Linux installation slowly fall apart, I finally found my home in NixOS. What started as curiosity about a "weird functional Linux distribution" has turned into genuine enthusiasm for what might be the future of operating systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Breaking Point
&lt;/h2&gt;

&lt;p&gt;Like many Linux users, I had grown tired of a system I couldn't trust. I didn't need a dramatic failure to convince me. I simply wanted a Linux distribution where I could describe my entire setup in code rather than accumulating changes through countless imperative commands. Plus, knowing I could roll back instantly if I messed up a configuration gave me the confidence to experiment freely.&lt;/p&gt;

&lt;p&gt;The traditional Linux model of imperative package management, where you run commands that mutate your system state, had failed me one too many times. I needed reproducibility, reliability, and the confidence that my system would work the same way today as it did yesterday.&lt;/p&gt;

&lt;h2&gt;
  
  
  Discovering the Nix Way
&lt;/h2&gt;

&lt;p&gt;NixOS approaches system configuration from a radically different angle. Instead of imperatively installing packages and modifying configuration files scattered across your filesystem, you declare your entire system in a single file (or set of files), much like dotfiles or Infrastructure as Code (IaC) tools such as Ansible. The Nix package manager then builds your system from this declarative specification. With the introduction of Flakes, the process has become even more manageable: I can pin exact versions of dependencies and manage my configurations deterministically across different machines.&lt;/p&gt;

&lt;p&gt;This means your system is reproducible. You can take your &lt;code&gt;configuration.nix&lt;/code&gt; file or your Flake to any machine, much like a dotfiles repo or an Ansible playbook, and rebuild an identical system. No more "works on my machine" problems. No more forgotten configuration tweaks that break when you need to set up a new development environment.&lt;/p&gt;
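&lt;p&gt;To make the declarative idea concrete, here is a minimal sketch of what a &lt;code&gt;configuration.nix&lt;/code&gt; might contain. The specific options are illustrative, not my actual setup:&lt;/p&gt;

```nix
# Minimal illustrative configuration.nix: the whole system is
# described here, and `nixos-rebuild switch` makes it real.
{ config, pkgs, ... }:

{
  networking.hostName = "workstation";
  time.timeZone = "Europe/London";

  # System-wide packages, declared rather than installed imperatively
  environment.systemPackages = with pkgs; [
    git
    neovim
    htop
  ];

  services.openssh.enable = true;
}
```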

&lt;h2&gt;
  
  
  What I Actually Like About It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Atomic Upgrades and Rollbacks
&lt;/h3&gt;

&lt;p&gt;Every system rebuild in NixOS creates a new generation. If an update breaks something, you can roll back to any previous generation instantly. This isn't just theoretical: I've used this feature countless times when experimenting with new configurations or testing newer software.&lt;/p&gt;
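&lt;p&gt;Concretely, the rollback workflow is a couple of commands (the profile path shown is the NixOS default):&lt;/p&gt;

```shell
# List the system generations accumulated by past rebuilds
sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

# Revert to the previous generation in one step
sudo nixos-rebuild switch --rollback
```

&lt;p&gt;Older generations also appear in the boot menu, so even a system that won't reach a login prompt can be rolled back from GRUB.&lt;/p&gt;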

&lt;h3&gt;
  
  
  Isolation and Reproducibility
&lt;/h3&gt;

&lt;p&gt;Development environments in NixOS are truly isolated. I can have multiple versions of Node.js, Python, or any other runtime available simultaneously without conflicts. Each project can specify its exact dependencies, and Nix ensures they don't interfere with each other or with the base system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration as Code
&lt;/h3&gt;

&lt;p&gt;My entire system configuration lives in a &lt;a href="https://github.com/k1ng440/dotfiles.nix" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. I can see exactly what changed between any two points in time, collaborate on configurations with teammates, and maintain separate branches for different use cases (work laptop, home desktop, and server setup).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Learning Curve
&lt;/h2&gt;

&lt;p&gt;I won't lie. NixOS has a steep learning curve.&lt;/p&gt;

&lt;p&gt;The documentation is genuinely terrible. For a long time I didn't even know the official NixOS wiki (wiki.nixos.org) existed; it turns out to be a fork of the unofficial, often outdated wiki (nixos.wiki), which is frequently the first result on Google and adds confusion with its mix of accurate and misleading information. You'll find yourself piecing together information from unofficial Discord channels (shoutout to the server owner NobbZ for their help), YouTube videos from creators like Vimjoyer and EmergentMind, the NixOS manual, the Nixpkgs manual, GitHub issues, and random blog posts just to accomplish basic tasks.&lt;/p&gt;

&lt;p&gt;The official documentation often shows you what to do but rarely explains why or provides context for beginners. Want to understand how overlays work? Good luck finding a comprehensive explanation that doesn't assume you're already familiar with advanced concepts. Need to configure a service? The options are documented, but figuring out how they interact or what a minimal working configuration looks like often requires diving into the source code. Even Flakes and Home Manager, while powerful for managing configurations declaratively, add their own complexity, with sparse official guides that can leave newcomers struggling.&lt;/p&gt;

&lt;p&gt;But here's the thing: once you understand the core concepts and how to structure projects, it all makes sense. The initial investment in learning pays dividends in system reliability and maintainability. I spent more time in my first month with NixOS reading documentation than I had in years of using other distributions, but I've spent far less time dealing with broken systems since then.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Advantages in Daily Use
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Development Workflows
&lt;/h3&gt;

&lt;p&gt;Using &lt;code&gt;nix-shell&lt;/code&gt;, I can drop into any development environment instantly. Need to quickly test something with Node.js 14 and a specific set of npm packages? One command gives you an isolated environment with exactly those dependencies. Working on a legacy project that requires an older version of PostgreSQL? No problem, it won't interfere with the newer version you use for other projects. Flakes make this even easier by letting me define reproducible development environments with pinned dependencies in a single &lt;code&gt;flake.nix&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;If you're a developer wondering whether Nix plays well with your stack, yes it does. I use it with SvelteKit, Node.js, Go, Python, and PHP, and it just works. With &lt;a href="https://iampavel.dev/blog/nix-direnv-dev-environments" rel="noopener noreferrer"&gt;nix-direnv&lt;/a&gt;, your dev environment activates automatically when you enter a project directory. No manual shell switching, no version conflicts, no "it worked yesterday" moments.&lt;/p&gt;
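&lt;p&gt;As a sketch of what that looks like in practice (the nixpkgs channel and package versions here are illustrative choices, not my exact setup):&lt;/p&gt;

```nix
# flake.nix: a pinned, reproducible dev shell for a Node.js project
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # Entered with `nix develop`, or automatically via nix-direnv
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nodejs_20 pkgs.pnpm ];
      };
    };
}
```

&lt;p&gt;With nix-direnv, a one-line &lt;code&gt;.envrc&lt;/code&gt; containing &lt;code&gt;use flake&lt;/code&gt; activates this shell every time you &lt;code&gt;cd&lt;/code&gt; into the project.&lt;/p&gt;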

&lt;h3&gt;
  
  
  Server Management
&lt;/h3&gt;

&lt;p&gt;For servers, NixOS is a game-changer. Your entire server configuration is versioned and reproducible. Deploying the same configuration to multiple servers is trivial. Rolling back a problematic deployment is instant. The days of manually configuring servers and hoping you remember all the steps are over.&lt;/p&gt;

&lt;h3&gt;
  
  
  Package Management
&lt;/h3&gt;

&lt;p&gt;The Nix package repository is massive and remarkably up-to-date. Binary caches mean you rarely need to compile from source, but when you do need a custom build, Nix makes it straightforward to override package definitions or create your own.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Miss (And Don't Miss)
&lt;/h2&gt;

&lt;p&gt;I occasionally miss the simplicity of &lt;code&gt;apt install&lt;/code&gt; or &lt;code&gt;pacman -S&lt;/code&gt; for quick one-off installations. In NixOS, even temporary software installs require a bit more thought, whether you want something available in your shell, user profile, or system configuration.&lt;/p&gt;

&lt;p&gt;That said, I can still run something temporarily with &lt;a href="https://github.com/nix-community/comma" rel="noopener noreferrer"&gt;comma&lt;/a&gt;, a small tool that lets you run any package from nixpkgs without installing it. For example, instead of &lt;code&gt;nix run nixpkgs#cowsay&lt;/code&gt;, you just type &lt;code&gt;, cowsay&lt;/code&gt;. It leaves no trace and no clutter.&lt;/p&gt;

&lt;p&gt;What I don't miss is the anxiety around system updates on traditional distributions. I don't miss hunting down scattered config files under &lt;code&gt;/etc&lt;/code&gt;. I don't miss the slow accumulation of entropy that gradually breaks things. And I definitely don't miss the "hope and pray" approach to system maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Community
&lt;/h2&gt;

&lt;p&gt;The NixOS community is small but incredibly knowledgeable and helpful. The quality of discourse is high, and there's a strong culture of documenting solutions and sharing configurations. The ecosystem around Nix is growing rapidly, with tools like Home Manager for user-level configuration management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking Forward
&lt;/h2&gt;

&lt;p&gt;NixOS has fundamentally changed how I think about computing environments. The confidence that comes from having a completely reproducible, version-controlled system configuration is liberating. I can experiment fearlessly, knowing I can always roll back. I can maintain complex development environments without the fear of conflicts or bitrot.&lt;/p&gt;

&lt;p&gt;Is NixOS for everyone? Probably not. If you just want a simple desktop that works out of the box and you're happy with the defaults, Linux Mint, Ubuntu or Pop!_OS might serve you better. But if you're a developer, system administrator, or anyone who values reproducibility and reliability over simplicity, NixOS might just change your life.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;If you're curious or just getting started, here are some links I found useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://nixos.org/guides/nix-pills/" rel="noopener noreferrer"&gt;Nix Pills&lt;/a&gt; - classic introduction to Nix&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nixos.org/manual/nixos/stable/" rel="noopener noreferrer"&gt;NixOS Manual&lt;/a&gt; - official system documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://wiki.nixos.org" rel="noopener noreferrer"&gt;NixOS Wiki&lt;/a&gt; - official NixOS wiki&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nix.dev" rel="noopener noreferrer"&gt;nix.dev&lt;/a&gt; - practical guide for using Nix in real projects&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://zero-to-nix.com/" rel="noopener noreferrer"&gt;Zero to Nix&lt;/a&gt; - beginner-friendly intro to the ecosystem&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/@vimjoyer" rel="noopener noreferrer"&gt;Vimjoyer (YouTube)&lt;/a&gt; - tutorials and walkthroughs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/@Emergent_Mind" rel="noopener noreferrer"&gt;EmergentMind (YouTube)&lt;/a&gt; - deep dives into Nix concepts&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://discord.com/invite/RbvHtGa" rel="noopener noreferrer"&gt;NixOS Discord (unofficial)&lt;/a&gt; - active and helpful community&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/k1ng440/dotfiles.nix" rel="noopener noreferrer"&gt;My Configuration&lt;/a&gt; - my personal NixOS setup&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nixos</category>
      <category>linux</category>
    </item>
    <item>
      <title>Notes on Testing: Why I Prefer Testcontainers Over Mocks</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Thu, 09 Apr 2026 22:38:52 +0000</pubDate>
      <link>https://dev.to/iampavel/notes-on-testing-why-i-prefer-testcontainers-over-mocks-53ce</link>
      <guid>https://dev.to/iampavel/notes-on-testing-why-i-prefer-testcontainers-over-mocks-53ce</guid>
      <description>&lt;p&gt;I've wasted entire Fridays writing "perfect" mocks for my database layer. I'd spend hours defining exactly what &lt;code&gt;GetByID&lt;/code&gt; should return, only to have the app crash in production because of a missing comma in my SQL or a misunderstood Postgres constraint. That's the problem: mocks don't test your code, they test your assumptions about your code. And usually, your assumptions are wrong.&lt;/p&gt;

&lt;p&gt;That's why I've mostly moved to Testcontainers. I'd rather wait ten seconds for a real Docker container to spin up than spend ten minutes faking a database behavior that I'm only 80% sure about anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mocks are just lies we tell ourselves
&lt;/h2&gt;

&lt;p&gt;When you mock a database repository, you're essentially saying: "When I call &lt;code&gt;GetByID&lt;/code&gt;, return this static struct." It's fast, sure, but it completely ignores reality. It doesn't catch syntax errors, it doesn't catch unique constraint violations, and it definitely doesn't catch the weird way your specific version of Postgres handles JSONB columns.&lt;/p&gt;

&lt;p&gt;I've seen too many projects where the tests were 100% green but the application was broken in its core because the mock didn't account for a simple foreign key constraint. With a real container, the database does the work for you.&lt;/p&gt;
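&lt;p&gt;To make that concrete, here's a deliberately naive sketch (the types and names are mine, purely for illustration). The hand-rolled mock happily "creates" the same email twice; a repository backed by a real database would hit the &lt;code&gt;UNIQUE&lt;/code&gt; constraint on the second insert:&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

// mockUserRepo is a hand-rolled fake. Note everything it does NOT know about:
// UNIQUE constraints, foreign keys, SQL syntax. It just records calls.
type mockUserRepo struct{ users map[string]bool }

func (m mockUserRepo) Create(email string) error {
	m.users[email] = true // no uniqueness check: the mock cannot fail the way Postgres would
	return nil
}

// dbUserRepo stands in for a real database-backed repository,
// which enforces the constraint the mock silently ignores.
type dbUserRepo struct{ users map[string]bool }

func (r dbUserRepo) Create(email string) error {
	if r.users[email] {
		return errors.New(`duplicate key value violates unique constraint "users_email_key"`)
	}
	r.users[email] = true
	return nil
}

func main() {
	mock := mockUserRepo{users: map[string]bool{}}
	fmt.Println(mock.Create("test@example.com")) // no error
	fmt.Println(mock.Create("test@example.com")) // still no error: the lie

	db := dbUserRepo{users: map[string]bool{}}
	fmt.Println(db.Create("test@example.com")) // no error
	fmt.Println(db.Create("test@example.com")) // the constraint error the mock never produces
}
```

&lt;p&gt;A test asserting no error against the mock stays green forever; the same assertion against the real thing catches the duplicate. That's the whole argument in a dozen lines.&lt;/p&gt;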

&lt;h2&gt;
  
  
  The CI Headache (The Real Gripe)
&lt;/h2&gt;

&lt;p&gt;Look, I love Testcontainers, but setting them up in a CI environment like GitHub Actions is an absolute pain in the neck. You end up down a rabbit hole of Docker-in-Docker (DinD) configurations, permission issues, and mounting &lt;code&gt;/var/run/docker.sock&lt;/code&gt; into a runner that really doesn't want you to have that much power.&lt;/p&gt;

&lt;p&gt;There's always that one morning where the CI pipeline just hangs indefinitely because the runner ran out of disk space while trying to pull the &lt;code&gt;postgres:16-alpine&lt;/code&gt; image for the hundredth time. It's a trade-off. I'm trading "clean" CI for "reliable" code, but don't let anyone tell you it's a smooth setup. It's a battle.&lt;/p&gt;
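&lt;p&gt;For what it's worth, GitHub-hosted &lt;code&gt;ubuntu-latest&lt;/code&gt; runners ship with a working Docker daemon, so a plain workflow usually gets you there without any DinD ceremony; the real fights start on self-hosted or containerized runners. A minimal sketch (the workflow name, Go version, and timeout are assumptions, adjust to taste):&lt;/p&gt;

```yaml
# .github/workflows/test.yml -- minimal sketch for Testcontainers-based Go tests
name: tests
on: [push]

jobs:
  test:
    # GitHub-hosted Ubuntu runners already expose /var/run/docker.sock,
    # so Testcontainers can start Postgres without Docker-in-Docker
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      # Generous timeout: the first postgres:16-alpine pull can be slow
      - run: go test ./... -timeout 10m
```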

&lt;h2&gt;
  
  
  How I actually use it in Go
&lt;/h2&gt;

&lt;p&gt;I don't restart the container for every single test. That would be insane. I spin up one instance of Postgres at the start of the test suite, then hand each test a fresh database or schema. The snippet below inlines the container into a single test just to keep the example self-contained:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;TestRepository_CreateUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c"&gt;// Real Postgres 16 container&lt;/span&gt;
    &lt;span class="n"&gt;pgContainer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;postgres&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RunContainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;testcontainers&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"postgres:16-alpine"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;postgres&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithWaitStrategy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ForLog&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"database system is ready to accept connections"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
                &lt;span class="n"&gt;WithOccurrence&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;
                &lt;span class="n"&gt;WithStartupTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Second&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;pgContainer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Terminate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// Now we test against a REAL database&lt;/span&gt;
    &lt;span class="n"&gt;connStr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;pgContainer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ConnectionString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"sslmode=disable"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"postgres"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;connStr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;repo&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;repository&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewUserRepository&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c"&gt;// This will ACTUALLY fail if my SQL is broken&lt;/span&gt;
    &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Email&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"test@example.com"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="n"&gt;assert&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NoError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Confidence Over Speed
&lt;/h2&gt;

&lt;p&gt;Yes, it's slower. But I'd rather have a test suite that takes two minutes and actually tells me the truth than a suite that takes two seconds and lies to my face. When I'm using my &lt;a href="https://dev.to/blog/go-project-structure"&gt;hand-crafted SQL approach&lt;/a&gt;, I need to know that my queries are valid. Testcontainers is the only way I've found to get that confidence without actually deploying to a staging environment.&lt;/p&gt;

&lt;p&gt;It's not perfect. The local Docker-on-Mac/Windows slowness is real, and the CI setup is a constant fight, but I'm never going back to heavy mocking for my data layer. It's just not worth the risk of a 3 AM production page.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>go</category>
      <category>backend</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying SvelteKit to Cloudflare Workers for Free</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Thu, 09 Apr 2026 22:33:35 +0000</pubDate>
      <link>https://dev.to/iampavel/deploying-sveltekit-to-cloudflare-workers-for-free-354d</link>
      <guid>https://dev.to/iampavel/deploying-sveltekit-to-cloudflare-workers-for-free-354d</guid>
      <description>&lt;p&gt;For years, deploying a full-stack web app meant either paying for a VPS or wrestling with complex container orchestration. Cloudflare Workers changes that. You can run SvelteKit with server-side rendering, API routes, and edge caching, all without spending a dime on the generous free tier.&lt;/p&gt;

&lt;p&gt;This guide walks through the exact setup I use for this site. No theoretical fluff, just working configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you get for free
&lt;/h2&gt;

&lt;p&gt;Cloudflare's free tier is surprisingly capable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;100,000 requests/day&lt;/strong&gt; — enough for most personal projects and small sites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10ms CPU time per request&lt;/strong&gt; — SvelteKit runs comfortably within this&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1 GB of KV storage&lt;/strong&gt; — for simple config or session data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom domains&lt;/strong&gt; — connect your own domain at no extra cost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compare this to Vercel's hobby tier (limited to 10s serverless function duration) or Netlify's build minute limits. Workers' V8 isolate model means faster cold starts and more predictable pricing if you ever need to scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 18+ installed&lt;/li&gt;
&lt;li&gt;A Cloudflare account (free tier works fine)&lt;/li&gt;
&lt;li&gt;A SvelteKit project ready to deploy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don't have a SvelteKit project yet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpx sv create my-app
&lt;span class="nb"&gt;cd &lt;/span&gt;my-app
pnpm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Install the adapter
&lt;/h2&gt;

&lt;p&gt;Cloudflare Workers runs JavaScript in V8 isolates, not Node.js. SvelteKit needs an adapter to bridge this gap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm add &lt;span class="nt"&gt;-D&lt;/span&gt; @sveltejs/adapter-cloudflare
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;adapter-cloudflare&lt;/code&gt; package handles both Workers and Pages deployment. For new Workers Static Assets (the modern approach), this is the adapter you want, not the older &lt;code&gt;adapter-cloudflare-workers&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure svelte.config.js
&lt;/h2&gt;

&lt;p&gt;Replace the default adapter in your &lt;code&gt;svelte.config.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;adapter&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@sveltejs/adapter-cloudflare&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;vitePreprocess&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@sveltejs/vite-plugin-svelte&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="cm"&gt;/** @type {import('@sveltejs/kit').Config} */&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;preprocess&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;vitePreprocess&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;kit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;adapter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;adapter&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="c1"&gt;// See below for options&lt;/span&gt;
            &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;platformProxy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="na"&gt;configPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="na"&gt;persist&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="na"&gt;fallback&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;plaintext&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/*&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="na"&gt;exclude&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;files&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;build&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;redirects&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key options explained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;config&lt;/code&gt;: Path to your Wrangler config file (we'll create this next)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;platformProxy&lt;/code&gt;: Controls how local bindings are emulated during development&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fallback&lt;/code&gt;: 'plaintext' gives you a simple 404 page; use 'spa' if you need client-side routing for unmatched paths&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;routes.exclude&lt;/code&gt;: Tells Cloudflare which requests can bypass the Worker and serve static assets directly. This saves you invocation costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3: Create wrangler.jsonc
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;wrangler.jsonc&lt;/code&gt; file in your project root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json-doc"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-sveltekit-app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".svelte-kit/cloudflare/_worker.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"compatibility_flags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"nodejs_als"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nodejs_compat"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"compatibility_date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-09-23"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"assets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"binding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ASSETS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"directory"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".svelte-kit/cloudflare"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"routes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"pattern"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"yourdomain.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"custom_domain"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What each field does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt;: Your Worker's identifier in the Cloudflare dashboard&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main&lt;/code&gt;: The entry point SvelteKit generates. Don't change this.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;compatibility_flags&lt;/code&gt;: &lt;code&gt;nodejs_als&lt;/code&gt; is required for SvelteKit's async context; &lt;code&gt;nodejs_compat&lt;/code&gt; helps with NPM packages that use Node.js APIs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;compatibility_date&lt;/code&gt;: Cloudflare's runtime version. Bump this periodically for new features.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;assets&lt;/code&gt;: Tells Workers where your static files live&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;routes&lt;/code&gt;: Connects custom domains (you can add this later if you don't have a domain yet)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 4: Build and deploy
&lt;/h2&gt;

&lt;p&gt;Build your app locally first to verify everything works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a &lt;code&gt;.svelte-kit/cloudflare/&lt;/code&gt; directory containing your optimized app.&lt;/p&gt;

&lt;p&gt;Now install Wrangler CLI and deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm add &lt;span class="nt"&gt;-g&lt;/span&gt; wrangler
wrangler login
wrangler deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the first deploy, Cloudflare gives you a &lt;code&gt;*.workers.dev&lt;/code&gt; subdomain. Free hosting, no domain required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Connect a custom domain (optional)
&lt;/h2&gt;

&lt;p&gt;If you have a domain managed by Cloudflare:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Workers &amp;amp; Pages in your Cloudflare dashboard&lt;/li&gt;
&lt;li&gt;Select your Worker&lt;/li&gt;
&lt;li&gt;Click "Triggers" → "Add Custom Domain"&lt;/li&gt;
&lt;li&gt;Enter your domain&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If your DNS is elsewhere, add a CNAME record pointing to your &lt;code&gt;*.workers.dev&lt;/code&gt; subdomain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common gotchas
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Node.js compatibility
&lt;/h3&gt;

&lt;p&gt;Not all npm packages work in Workers. Anything relying on &lt;code&gt;fs&lt;/code&gt;, native bindings, or certain Node.js internals will fail. Check the &lt;a href="https://developers.cloudflare.com/workers/runtime-apis/" rel="noopener noreferrer"&gt;Cloudflare Workers runtime API docs&lt;/a&gt; before adding heavy dependencies.&lt;/p&gt;

&lt;p&gt;If you hit issues, try adding the &lt;code&gt;nodejs_compat&lt;/code&gt; flag (shown in the config above). This enables polyfills for common Node.js modules.&lt;/p&gt;

&lt;h3&gt;
  
  
  Environment variables
&lt;/h3&gt;

&lt;p&gt;Don't use &lt;code&gt;process.env&lt;/code&gt; in Workers. Instead, use SvelteKit's built-in &lt;code&gt;$env&lt;/code&gt; modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// +page.server.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;SECRET_API_KEY&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;$env/static/private&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Use SECRET_API_KEY here&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Cloudflare-specific bindings (KV, Durable Objects), access them via the &lt;code&gt;platform&lt;/code&gt; object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// hooks.js or +server.js&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resolve&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;// env.MY_KV_NAMESPACE, env.MY_DURABLE_OBJECT, etc.&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Worker size limits
&lt;/h3&gt;

&lt;p&gt;The final Worker bundle must stay under Cloudflare's size limit (currently 1MB gzipped on the free plan, higher on paid plans). If your build fails with a size error:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check for large dependencies bundled on the server side&lt;/li&gt;
&lt;li&gt;Move heavy libraries to client-only imports if possible&lt;/li&gt;
&lt;li&gt;Use dynamic imports for code splitting&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Testing locally
&lt;/h2&gt;

&lt;p&gt;You can test the production build locally with Wrangler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm build
wrangler dev .svelte-kit/cloudflare/_worker.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This runs the exact same code that deploys to production, including platform bindings if you've configured them in &lt;code&gt;wrangler.jsonc&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Actions for CI/CD
&lt;/h2&gt;

&lt;p&gt;For automatic deployments on push:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/deploy.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm install&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm build&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpx wrangler deploy&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;CLOUDFLARE_API_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.CLOUDFLARE_API_TOKEN }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a Cloudflare API token with "Cloudflare Workers" edit permissions and add it as &lt;code&gt;CLOUDFLARE_API_TOKEN&lt;/code&gt; in your repository secrets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not Cloudflare Pages?
&lt;/h2&gt;

&lt;p&gt;Both Workers and Pages can host SvelteKit, but there are differences:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Workers&lt;/th&gt;
&lt;th&gt;Pages&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Static assets&lt;/td&gt;
&lt;td&gt;Via &lt;code&gt;assets&lt;/code&gt; binding&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serverless functions&lt;/td&gt;
&lt;td&gt;Single &lt;code&gt;_worker.js&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;/functions&lt;/code&gt; directory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Request limits&lt;/td&gt;
&lt;td&gt;100k/day free&lt;/td&gt;
&lt;td&gt;100k/day free&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build output&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.svelte-kit/cloudflare&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.svelte-kit/cloudflare&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I prefer Workers because the &lt;code&gt;adapter-cloudflare&lt;/code&gt; generates a single &lt;code&gt;_worker.js&lt;/code&gt; file that handles routing, SSR, and static assets: one entry point, one mental model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring your usage
&lt;/h2&gt;

&lt;p&gt;The Cloudflare dashboard shows your request volume and CPU time. Keep an eye on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Requests&lt;/strong&gt;: Free tier = 100k/day. At ~10k requests/day you're using a tenth of the quota, so there's 10x headroom.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU time&lt;/strong&gt;: SvelteKit usually runs under 5ms per request unless you're doing heavy computation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're approaching limits, consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prerendering more pages at build time&lt;/li&gt;
&lt;li&gt;Caching API responses at the edge&lt;/li&gt;
&lt;li&gt;Using KV for frequently-read, rarely-written data&lt;/li&gt;
&lt;/ul&gt;
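
&lt;p&gt;Prerendering is usually the biggest lever. As a sketch (the route is a made-up example), marking a page as prerenderable turns it into static HTML at build time, so serving it does no SSR work per request:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/routes/about/+page.js
// Rendered once at build time and shipped as a static asset,
// instead of being server-rendered on every request.
export const prerender = true;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;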

&lt;h2&gt;
  
  
  Why I use this setup
&lt;/h2&gt;

&lt;p&gt;Deploying SvelteKit to Cloudflare Workers gives you a production-grade hosting stack at zero cost. The edge runtime means faster response times for global visitors, and the free tier is genuinely usable, not just a teaser to get you hooked.&lt;/p&gt;

&lt;p&gt;The setup is minimal: install an adapter, add a config file, run two commands. The rest is just SvelteKit doing what it does best.&lt;/p&gt;
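
&lt;p&gt;For reference, the whole pipeline condenses to a few commands (sketched here with pnpm, matching the workflow above; swap in your package manager of choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install the Cloudflare adapter
pnpm add -D @sveltejs/adapter-cloudflare

# Build to .svelte-kit/cloudflare, then deploy
pnpm build
pnpx wrangler deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;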

</description>
      <category>sveltekit</category>
      <category>cloudflare</category>
      <category>deployment</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>SSH Config: From Spaghetti to Sanity</title>
      <dc:creator>Asaduzzaman Pavel</dc:creator>
      <pubDate>Thu, 09 Apr 2026 22:33:19 +0000</pubDate>
      <link>https://dev.to/iampavel/ssh-config-from-spaghetti-to-sanity-279d</link>
      <guid>https://dev.to/iampavel/ssh-config-from-spaghetti-to-sanity-279d</guid>
      <description>&lt;p&gt;For a long time, my "workflow" was just a series of &lt;code&gt;Ctrl+R&lt;/code&gt; searches in my shell history, hoping to find that one specific IP address or long-forgotten identity file flag. It was "SSH Spaghetti." It worked, but it was friction-heavy.&lt;/p&gt;

&lt;p&gt;I realized I needed a better way when our team moved behind a Bastion host (Jump Box). Suddenly, every time I wanted to check a log or run a quick query on an internal DB, I had to SSH into the Bastion, authenticate again, and then SSH into the internal server. It was exhausting. It broke my flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  ProxyJump: making the infrastructure transparent
&lt;/h2&gt;

&lt;p&gt;When I discovered &lt;code&gt;ProxyJump&lt;/code&gt;, it felt like magic. I could define my Bastion and my internal servers once, and suddenly I was back to a single command: &lt;code&gt;ssh internal-db&lt;/code&gt;. SSH tunnels through the bastion and authenticates to each hop behind the scenes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host bastion
  HostName 1.2.3.4
  User admin
  IdentityFile ~/.ssh/id_ed25519

Host internal-db
  HostName 10.0.1.50
  User postgres
  ProxyJump bastion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Instant connections with multiplexing
&lt;/h2&gt;

&lt;p&gt;The second major shift was discovering multiplexing (via &lt;code&gt;ControlMaster&lt;/code&gt;). I often have three terminal windows open for the same server: one for &lt;code&gt;htop&lt;/code&gt;, one for logs, and one for a shell. I was used to the 2-3 second delay of the SSH handshake every time I opened a new tab.&lt;/p&gt;

&lt;p&gt;Setting up connection reuse changed everything. The first connection took its usual time, but the second and third were instantaneous. It made the remote server feel like it was running locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  The orphaned socket headache
&lt;/h3&gt;

&lt;p&gt;...And then I realized that if the master connection hangs or the server reboots while I have multiple tabs open, the &lt;code&gt;ControlPath&lt;/code&gt; socket gets orphaned. You'll try to SSH back in and get some cryptic error about the socket already being in use, or worse, it'll just hang indefinitely. I ended up having to write a tiny shell alias just to &lt;code&gt;rm -rf ~/.ssh/sockets/*&lt;/code&gt; when things get weird. It's a trade-off I'm willing to make, but it's not the "fire and forget" solution people claim it is.&lt;/p&gt;
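
&lt;p&gt;For what it's worth, OpenSSH has built-in control commands for poking at the master socket, which are a slightly gentler first step than deleting it outright:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ask the master connection whether it's still alive
ssh -O check internal-db

# Tell the master to exit cleanly (this removes its socket)
ssh -O exit internal-db

# Last resort, when the master is truly wedged
rm ~/.ssh/sockets/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;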

&lt;h2&gt;
  
  
  The IdentityFile trap
&lt;/h2&gt;

&lt;p&gt;I once spent an hour debugging why I couldn't SSH into a new staging box. Turns out, because I had too many keys in my &lt;code&gt;ssh-agent&lt;/code&gt;, the server was rejecting me for "Too many authentication failures" before it even got to the right key.&lt;/p&gt;

&lt;p&gt;The fix is &lt;code&gt;IdentitiesOnly yes&lt;/code&gt;. It tells SSH to only use the key specified for that host, rather than trying every key in your agent like a brute-force attacker. It's one of those things I think everyone should just have in their &lt;code&gt;Host *&lt;/code&gt; defaults, but I'm not 100% sure if it breaks some edge cases with hardware tokens.&lt;/p&gt;
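
&lt;p&gt;The pairing that makes &lt;code&gt;IdentitiesOnly&lt;/code&gt; useful is an explicit &lt;code&gt;IdentityFile&lt;/code&gt; per host, so SSH offers exactly one key (host name, IP, and key path below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host staging
  HostName 10.0.2.20
  User deploy
  IdentityFile ~/.ssh/staging_ed25519
  IdentitiesOnly yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;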

&lt;h2&gt;
  
  
  Keep-alives (a bit of a guess)
&lt;/h2&gt;

&lt;p&gt;I use &lt;code&gt;ServerAliveInterval 60&lt;/code&gt; and &lt;code&gt;ServerAliveCountMax 3&lt;/code&gt;, but honestly, those numbers are a total guess based on what seemed to work for my home office's flaky Wi-Fi. It stops those annoying "Broken Pipe" disconnects during my morning coffee breaks, but I've also seen it keep a "zombie" connection alive for 10 minutes after I've closed my laptop lid.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host *
  ServerAliveInterval 60
  ServerAliveCountMax 3
  IdentitiesOnly yes
  ControlMaster auto
  ControlPath ~/.ssh/sockets/%r@%h:%p
  ControlPersist 10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The real win here isn't saving a few seconds. It's removing the mental overhead of "getting there." Now, whether I'm jumping through a triple-bastion setup or just checking a dev box, the experience is identical. It's the "commute" of my workday, and I finally stopped dreading it.&lt;/p&gt;

</description>
      <category>ssh</category>
      <category>productivity</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
