<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vivian Voss</title>
    <description>The latest articles on DEV Community by Vivian Voss (@vivian-voss).</description>
    <link>https://dev.to/vivian-voss</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3841501%2F2405ae59-aa07-4eb1-80f2-1c3517691538.png</url>
      <title>DEV Community: Vivian Voss</title>
      <link>https://dev.to/vivian-voss</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vivian-voss"/>
    <language>en</language>
    <item>
      <title>ssh-agent</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Tue, 14 Apr 2026 05:50:34 +0000</pubDate>
      <link>https://dev.to/vivian-voss/ssh-agent-4078</link>
      <guid>https://dev.to/vivian-voss/ssh-agent-4078</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij78m1jbmmrs0pg2mqu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fij78m1jbmmrs0pg2mqu6.png" alt="A young developer with cat-ear headphones, leans back in her chair with arms behind her head, completely relaxed. Her left monitor shows a dark terminal with ssh-agent commands, her right monitor shows server connections with green " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Technical Beauty — Episode 31&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You have typed your passphrase four times this morning. Once to pull from GitHub. Once to deploy to staging. Once to SSH into production. Once because you mistyped it the third time and had to start over. By lunch, you will have typed it twelve more times, and by the end of the week you will have created a key without a passphrase because life, one feels, is too short.&lt;/p&gt;

&lt;p&gt;Congratulations. Your private key is now a plaintext file on disk. Anyone who reads &lt;code&gt;~/.ssh/id_ed25519&lt;/code&gt; owns every server you can reach. This is not a hypothetical. This is a Tuesday.&lt;/p&gt;

&lt;p&gt;In 1995, a password-sniffing attack hit the network at Helsinki University of Technology. Tatu Ylonen, a researcher there, decided this was unacceptable and wrote SSH that same year. As part of that implementation, he wrote ssh-agent: a process that holds your private keys in memory and signs authentication challenges on your behalf. The key never leaves the process. Not to the client. Not to the server. Not to the wire. Never.&lt;/p&gt;

&lt;p&gt;Thirty-one years later, that agent is still the authentication backbone of modern software delivery. It has been rewritten, hardened, extended, and maintained by the OpenSSH team (forked from Ylonen's ssh 1.2.12 in 1999, first stable release with OpenBSD 2.6 on 1 December 1999). The &lt;code&gt;ssh-agent.c&lt;/code&gt; source file now stands at revision 1.324, with contributions from Markus Friedl, Aaron Campbell, and Theo de Raadt, among others. The design, however, has not fundamentally changed. It did not need to.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Design
&lt;/h2&gt;

&lt;p&gt;The entire agent is 2,624 lines of C in a single file: &lt;code&gt;ssh-agent.c&lt;/code&gt;. Key storage, socket management, the full agent protocol, PKCS#11 smart card support, FIDO/U2F hardware key support, agent forwarding with destination constraints, and locking. In one file. Shorter than most React components one has had the pleasure of reviewing.&lt;/p&gt;

&lt;p&gt;The API is a Unix domain socket. When the agent starts, it creates a socket, sets its permissions to owner-only (&lt;code&gt;umask(0177)&lt;/code&gt;), and prints shell commands that set two environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;SSH_AUTH_SOCK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/ssh-XXXXXXXXXX/agent.12345
&lt;span class="nv"&gt;SSH_AGENT_PID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the entire interface. No configuration file. No service manager. No daemon registration. No YAML. The socket exists. Programs that know &lt;code&gt;SSH_AUTH_SOCK&lt;/code&gt; can talk to it. Programs that do not, cannot. &lt;code&gt;ls -la $SSH_AUTH_SOCK&lt;/code&gt; tells you everything you need to know about your authentication state. One does find this rather refreshing.&lt;/p&gt;
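
&lt;p&gt;A sketch of how little a script needs in order to interrogate that state. The helper name is mine, not part of OpenSSH:&lt;/p&gt;

```shell
# agent_status: hypothetical helper -- the entire authentication state
# is one environment variable pointing at one Unix socket.
agent_status() {
    if [ -z "${SSH_AUTH_SOCK:-}" ]; then
        echo "no agent: SSH_AUTH_SOCK is unset"
        return 1
    elif [ -S "$SSH_AUTH_SOCK" ]; then
        echo "agent socket present: $SSH_AUTH_SOCK"
        return 0
    else
        echo "stale agent socket: $SSH_AUTH_SOCK"
        return 2
    fi
}
```

&lt;p&gt;With a live agent, &lt;code&gt;ssh-add -l&lt;/code&gt; over that socket lists the loaded keys; without one, every client in the session falls back to prompting for passphrases.&lt;/p&gt;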

&lt;p&gt;The protocol is a packetised request-response exchange with five core operations:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Client sends&lt;/th&gt;
&lt;th&gt;Agent returns&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;List keys&lt;/td&gt;
&lt;td&gt;&lt;code&gt;REQUEST_IDENTITIES&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;List of public keys&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sign data&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;SIGN_REQUEST&lt;/code&gt; (data + key reference)&lt;/td&gt;
&lt;td&gt;Signature bytes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Add key&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;ADD_IDENTITY&lt;/code&gt; (private key + constraints)&lt;/td&gt;
&lt;td&gt;Success/failure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Remove key&lt;/td&gt;
&lt;td&gt;&lt;code&gt;REMOVE_IDENTITY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Success/failure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lock/unlock&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;LOCK&lt;/code&gt; / &lt;code&gt;UNLOCK&lt;/code&gt; (passphrase)&lt;/td&gt;
&lt;td&gt;Success/failure&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Five operations. The IETF draft describing the protocol (draft-ietf-sshm-ssh-agent, progressing to RFC) is shorter than most framework tutorials. The entire specification fits in your head, which is, one suspects, rather the point.&lt;/p&gt;
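
&lt;p&gt;On the wire, each side sends length-prefixed packets. A rough sketch of the list operation, with message numbers as assigned in the agent protocol draft:&lt;/p&gt;

```
client -> agent:
    uint32  length                               # bytes that follow
    byte    SSH_AGENTC_REQUEST_IDENTITIES  (11)

agent -> client:
    uint32  length
    byte    SSH_AGENT_IDENTITIES_ANSWER    (12)
    uint32  nkeys
    nkeys x (string key_blob, string comment)
```

&lt;p&gt;Signing has the same shape: &lt;code&gt;SIGN_REQUEST&lt;/code&gt; (13) carries the public key blob and the data to sign; the answer (14) carries only the signature bytes.&lt;/p&gt;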

&lt;h2&gt;
  
  
  The Elegance
&lt;/h2&gt;

&lt;p&gt;The authentication flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your SSH client connects to a server.&lt;/li&gt;
&lt;li&gt;The server sends a challenge (data to be signed).&lt;/li&gt;
&lt;li&gt;The client forwards the challenge to the agent via the Unix socket.&lt;/li&gt;
&lt;li&gt;The agent signs the challenge with the private key using &lt;code&gt;sshkey_sign()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The agent returns only the signature.&lt;/li&gt;
&lt;li&gt;The client sends the signature to the server.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step four is where the beauty lives. The agent calls the signing function internally. The result, a sequence of signature bytes, is sent back to the client. The private key material never crosses a process boundary. It is never serialised to any output channel. It exists in exactly one place: the agent's memory, in an in-memory linked list of &lt;code&gt;Identity&lt;/code&gt; structures.&lt;/p&gt;

&lt;p&gt;When the agent process terminates, the operating system reclaims the memory. The keys are gone. No cleanup script. No cache file to purge. No "secure delete" to trust. No residual state. The process was the vault, and the vault is demolished.&lt;/p&gt;

&lt;p&gt;The agent also monitors its parent process. If the parent shell dies (detected by &lt;code&gt;getppid()&lt;/code&gt; returning 1), the agent cleans up its socket and exits. No orphan processes accumulating in your process table. One does appreciate software that tidies up after itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Constraints
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;ssh-add&lt;/code&gt; is the interface for managing keys in the agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-add ~/.ssh/id_ed25519          &lt;span class="c"&gt;# Add a key (passphrase prompt)&lt;/span&gt;
ssh-add &lt;span class="nt"&gt;-t&lt;/span&gt; 3600                     &lt;span class="c"&gt;# Key expires after 1 hour&lt;/span&gt;
ssh-add &lt;span class="nt"&gt;-c&lt;/span&gt;                          &lt;span class="c"&gt;# Confirm each signing operation&lt;/span&gt;
ssh-add &lt;span class="nt"&gt;-l&lt;/span&gt;                          &lt;span class="c"&gt;# List loaded key fingerprints&lt;/span&gt;
ssh-add &lt;span class="nt"&gt;-D&lt;/span&gt;                          &lt;span class="c"&gt;# Delete all keys&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-t&lt;/code&gt; flag stores an absolute expiry time in the identity's &lt;code&gt;death&lt;/code&gt; field. After that time, the key is automatically removed. No cron job. No external timer. The constraint is part of the key's identity.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;-c&lt;/code&gt; flag requires interactive confirmation (via &lt;code&gt;ssh-askpass&lt;/code&gt;) before every signing operation. The agent calls &lt;code&gt;confirm_key()&lt;/code&gt; before executing &lt;code&gt;sshkey_sign()&lt;/code&gt;. You see who is asking, and you decide whether to sign. For forwarded agents, this is the difference between convenience and a security incident.&lt;/p&gt;

&lt;p&gt;The locking mechanism (&lt;code&gt;ssh-add -x&lt;/code&gt; or the &lt;code&gt;LOCK&lt;/code&gt; protocol message) makes all keys inaccessible until the agent is unlocked (&lt;code&gt;ssh-add -X&lt;/code&gt;) with the correct passphrase. The passphrase is stored as a salted hash, not plaintext. For stepping away from your desk, this is rather more civilised than terminating the agent and re-adding all keys when you return.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Security
&lt;/h2&gt;

&lt;p&gt;The agent takes security seriously at the implementation level:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Anti-tracing:&lt;/strong&gt; &lt;code&gt;platform_disable_tracing(0)&lt;/code&gt; at startup prevents other processes from attaching a debugger to read key material from agent memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privilege dropping:&lt;/strong&gt; &lt;code&gt;setegid(getgid())&lt;/code&gt; and &lt;code&gt;setgid(getgid())&lt;/code&gt; at startup. The agent runs with minimal privileges.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory hygiene:&lt;/strong&gt; &lt;code&gt;sshkey_free()&lt;/code&gt; for key material. &lt;code&gt;freezero()&lt;/code&gt; for PINs (zeroes memory before freeing). &lt;code&gt;explicit_bzero()&lt;/code&gt; for lock password hashes, which prevents the compiler from optimising away the zeroing because the variable is "no longer used."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Socket permissions:&lt;/strong&gt; Created with &lt;code&gt;umask(0177)&lt;/code&gt;. Owner-only access. The directory is created with &lt;code&gt;mkdtemp()&lt;/code&gt; for additional protection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not features listed on a marketing page. These are manners. The kind of quiet, disciplined engineering that distinguishes software built by people who understand what they are protecting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent Forwarding
&lt;/h2&gt;

&lt;p&gt;Agent forwarding (&lt;code&gt;ssh -A&lt;/code&gt; or &lt;code&gt;ForwardAgent yes&lt;/code&gt;) allows a remote machine to use your local agent. The remote &lt;code&gt;sshd&lt;/code&gt; creates a Unix socket, sets &lt;code&gt;SSH_AUTH_SOCK&lt;/code&gt;, and tunnels requests back through the SSH connection to your local agent. You can hop from server to server without copying keys.&lt;/p&gt;

&lt;p&gt;The risk: anyone with root on the remote machine can access the forwarded socket while your session is active. They can use your agent to authenticate to any server your keys can reach.&lt;/p&gt;

&lt;p&gt;The mitigations are characteristically minimal. &lt;code&gt;ssh-add -c&lt;/code&gt; requires confirmation per use. &lt;code&gt;ProxyJump&lt;/code&gt; avoids forwarding entirely by routing through an intermediate host without giving it agent access. Recent OpenSSH versions add destination constraints: keys can be restricted to specific hosts, so a compromised jump host cannot use the forwarded agent to reach arbitrary targets.&lt;/p&gt;
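
&lt;p&gt;The &lt;code&gt;ProxyJump&lt;/code&gt; pattern as a config sketch (hostnames are placeholders):&lt;/p&gt;

```
# ~/.ssh/config
Host internal
    HostName internal.example.com
    ProxyJump jump.example.com
    # No ForwardAgent: jump.example.com relays the encrypted
    # connection but never receives a socket to the local agent.
```

&lt;p&gt;Authentication to both hosts happens from the local machine; the intermediate host sees ciphertext and nothing else.&lt;/p&gt;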

&lt;p&gt;The design philosophy is consistent: provide the mechanism, make the risks visible, let the operator decide. No hand-holding. No default that pretends to be safe. Honest tooling for people who read the man page.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Proof
&lt;/h2&gt;

&lt;p&gt;Every CI/CD system on earth uses ssh-agent. GitHub Actions has a dedicated &lt;code&gt;webfactory/ssh-agent&lt;/code&gt; action. GitLab's official documentation recommends &lt;code&gt;eval $(ssh-agent -s)&lt;/code&gt; as the standard pattern for pipeline SSH key management. Jenkins, CircleCI, Buildkite, Travis CI: all use the agent protocol.&lt;/p&gt;

&lt;p&gt;macOS integrated ssh-agent into the system keychain with Leopard in 2007. &lt;code&gt;ssh-add --apple-use-keychain&lt;/code&gt; stores passphrases in Keychain, bridging the Unix tool with Apple's credential infrastructure. GNOME and KDE auto-start agent-compatible daemons. Windows 10 added an OpenSSH agent as a system service in 2018. 1Password and Bitwarden now implement the agent protocol natively.&lt;/p&gt;

&lt;p&gt;An API designed by a Finnish researcher in 1995, implemented in a single C file, communicating over a Unix socket named by an environment variable, is the authentication backbone of modern software delivery. One does find this rather beautiful.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Principle
&lt;/h2&gt;

&lt;p&gt;ssh-agent is 2,624 lines of C. One file. One socket. One environment variable. No configuration file. No YAML. No cloud dependency. No subscription. The private key never leaves the process, the process never outlives the session, and the session never trusts more than it must.&lt;/p&gt;

&lt;p&gt;Tatu Ylonen wrote it because typing passphrases was tedious. The best Unix tools often begin this way: someone finds a task annoying, writes a small programme to solve it, and designs it with enough discipline that it still works thirty-one years later. No rewrite. No framework migration. No breaking changes.&lt;/p&gt;

&lt;p&gt;Technical beauty emerges from reduction. ssh-agent reduced authentication to five operations, one socket, and the guarantee that the secret never leaves the room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/ssh-agent" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ssh</category>
      <category>security</category>
      <category>unix</category>
      <category>devops</category>
    </item>
    <item>
      <title>periodic</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Mon, 13 Apr 2026 06:03:35 +0000</pubDate>
      <link>https://dev.to/vivian-voss/periodic-nn1</link>
      <guid>https://dev.to/vivian-voss/periodic-nn1</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9luj21o3c2vz7qnpr0rh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9luj21o3c2vz7qnpr0rh.png" alt="A young developer with cat-ear headphones, stands between two server racks with her arms crossed. The left rack is buried under colourful sticky notes, tangled cables, and alarm clocks: cron job chaos. The right rack is clean with a single green status light and an email icon above it: one daily report, everything accounted for."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Unix Way — Episode 12&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The problem: you need scheduled maintenance. Disk cleanup, log rotation, security audits, database backups, certificate renewal checks. Every server has them. Every sysadmin has cron jobs for them. And every sysadmin has, at some point, discovered a critical cron job that died silently three months ago because nobody checked the output.&lt;/p&gt;

&lt;p&gt;cron is a scheduler. It runs commands at specified times. It does not care what those commands do, whether they succeed, whether they produce output, or whether anyone reads that output. It is a clock with a trigger. Nothing more.&lt;/p&gt;

&lt;p&gt;The real problem is not scheduling. Scheduling is trivial. The problem is knowing what happened. And cron, by design, does not answer that question.&lt;/p&gt;

&lt;p&gt;Scatter your jobs across per-user crontabs and root's crontab, and within six months you have a system where nobody knows what runs when, what produces output, what has been silently failing, or who added the job in the first place. This is not a theoretical scenario. This is every production server one has ever inherited.&lt;/p&gt;

&lt;h2&gt;
  
  
  FreeBSD: periodic(8)
&lt;/h2&gt;

&lt;p&gt;FreeBSD solved this problem in the 1990s with &lt;code&gt;periodic(8)&lt;/code&gt;, a framework that sits on top of cron and provides structure, configuration, and output management for recurring system tasks.&lt;/p&gt;

&lt;p&gt;The architecture is straightforward. Scripts live in directories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/periodic/daily/
/etc/periodic/weekly/
/etc/periodic/monthly/
/etc/periodic/security/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each script is a self-contained shell script that performs one task and exits with a status code:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exit Code&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;th&gt;Output Handling&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;Nothing notable&lt;/td&gt;
&lt;td&gt;Shown only if &lt;code&gt;daily_show_success="YES"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Notable information&lt;/td&gt;
&lt;td&gt;Shown only if &lt;code&gt;daily_show_info="YES"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Configuration warning&lt;/td&gt;
&lt;td&gt;Shown only if &lt;code&gt;daily_show_badconfig="YES"&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;gt;2&lt;/td&gt;
&lt;td&gt;Critical, always shown&lt;/td&gt;
&lt;td&gt;Cannot be suppressed&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;One configuration file controls the entire system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# /etc/periodic.conf&lt;/span&gt;
&lt;span class="nv"&gt;daily_clean_tmps_enable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YES"&lt;/span&gt;
&lt;span class="nv"&gt;daily_backup_passwd_enable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YES"&lt;/span&gt;
&lt;span class="nv"&gt;daily_status_security_enable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YES"&lt;/span&gt;
&lt;span class="nv"&gt;daily_accounting_enable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YES"&lt;/span&gt;
&lt;span class="nv"&gt;daily_show_success&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"NO"&lt;/span&gt;
&lt;span class="nv"&gt;daily_show_info&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YES"&lt;/span&gt;
&lt;span class="nv"&gt;daily_show_badconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YES"&lt;/span&gt;
&lt;span class="nv"&gt;daily_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/var/log/daily.log"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable a task: set it to &lt;code&gt;"YES"&lt;/code&gt;. Disable it: &lt;code&gt;"NO"&lt;/code&gt;. Change the output destination: set &lt;code&gt;daily_output&lt;/code&gt; to a file path or an email address. One file. Every task. Every setting.&lt;/p&gt;

&lt;p&gt;The default configuration in &lt;code&gt;/etc/defaults/periodic.conf&lt;/code&gt; documents every available option. Your overrides go in &lt;code&gt;/etc/periodic.conf&lt;/code&gt;. The pattern is identical to &lt;code&gt;rc.conf&lt;/code&gt;: system defaults are never modified, local changes are applied on top.&lt;/p&gt;

&lt;p&gt;The output is collected from all scripts, filtered by severity, and delivered as a single report. If &lt;code&gt;daily_output&lt;/code&gt; is an email address, you receive one email every morning with the complete state of your system: which temp files were cleaned, whether the password database was backed up, what the security check found, and whether anything went wrong. You read one email with your morning coffee and know the state of the machine.&lt;/p&gt;

&lt;p&gt;Adding a custom task is trivial: write a shell script, make it executable, drop it into &lt;code&gt;/etc/periodic/daily/&lt;/code&gt;. That is the entire API. No registration, no configuration reload, no daemon restart. The framework discovers it on the next run.&lt;/p&gt;
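
&lt;p&gt;A sketch of such a script, following the exit-code convention from the table above. The task and the name &lt;code&gt;check_backups&lt;/code&gt; are illustrative; an installed script would end with &lt;code&gt;check_backups; exit $?&lt;/code&gt;:&lt;/p&gt;

```shell
# Hypothetical body for /etc/periodic/daily/100.check-backups.
# periodic(8) convention: 0 nothing notable, 1 notable info,
# 2 configuration warning, greater than 2 critical (always shown).
check_backups() {
    backup_dir="${1:-/var/backups}"
    echo ""
    echo "Checking backups in ${backup_dir}:"
    if [ ! -d "$backup_dir" ]; then
        echo "  directory missing (check your configuration)"
        return 2
    fi
    if [ -z "$(find "$backup_dir" -type f -mtime -1 2>/dev/null)" ]; then
        echo "  no backup written in the last 24 hours"
        return 3
    fi
    echo "  backups are current"
    return 0
}
```

&lt;p&gt;Drop it in the directory, make it executable, and its output appears in the next morning's report, filtered by the same severity rules as every stock script.&lt;/p&gt;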

&lt;p&gt;&lt;code&gt;newsyslog(8)&lt;/code&gt; already knows about &lt;code&gt;/var/log/daily.log&lt;/code&gt;, &lt;code&gt;/var/log/weekly.log&lt;/code&gt;, and &lt;code&gt;/var/log/monthly.log&lt;/code&gt;. If those files exist, they are rotated automatically. The logging of your maintenance framework is itself maintained. One does appreciate the recursion.&lt;/p&gt;

&lt;p&gt;cron still does the scheduling. Three lines in &lt;code&gt;/etc/crontab&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="m"&gt;0&lt;/span&gt;  &lt;span class="m"&gt;2&lt;/span&gt;  *  *  *  &lt;span class="n"&gt;root&lt;/span&gt;  &lt;span class="n"&gt;periodic&lt;/span&gt; &lt;span class="n"&gt;daily&lt;/span&gt;
&lt;span class="m"&gt;0&lt;/span&gt;  &lt;span class="m"&gt;3&lt;/span&gt;  *  *  &lt;span class="m"&gt;6&lt;/span&gt;  &lt;span class="n"&gt;root&lt;/span&gt;  &lt;span class="n"&gt;periodic&lt;/span&gt; &lt;span class="n"&gt;weekly&lt;/span&gt;
&lt;span class="m"&gt;0&lt;/span&gt;  &lt;span class="m"&gt;5&lt;/span&gt;  &lt;span class="m"&gt;1&lt;/span&gt;  *  *  &lt;span class="n"&gt;root&lt;/span&gt;  &lt;span class="n"&gt;periodic&lt;/span&gt; &lt;span class="n"&gt;monthly&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;cron triggers periodic. periodic runs the scripts. The scripts report their status. The framework collects the output. The administrator reads one email. The separation of concerns is, one must note, rather elegant.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenBSD: daily(8)
&lt;/h2&gt;

&lt;p&gt;OpenBSD takes a characteristically minimal approach. Three shell scripts ship with the base system: &lt;code&gt;/etc/daily&lt;/code&gt;, &lt;code&gt;/etc/weekly&lt;/code&gt;, and &lt;code&gt;/etc/monthly&lt;/code&gt;. These are system scripts. You never modify them.&lt;/p&gt;

&lt;p&gt;Your additions go into &lt;code&gt;/etc/daily.local&lt;/code&gt;, &lt;code&gt;/etc/weekly.local&lt;/code&gt;, and &lt;code&gt;/etc/monthly.local&lt;/code&gt;. These local scripts run first, before the system scripts, which makes it convenient to define variables, perform cleanup, or prepare state that the system scripts depend on.&lt;/p&gt;
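
&lt;p&gt;A minimal &lt;code&gt;/etc/daily.local&lt;/code&gt; sketch (the check itself is illustrative); whatever it prints is folded into the nightly mail:&lt;/p&gt;

```shell
# Flag the root filesystem when it crosses 90% capacity.
echo ""
echo "Local checks:"
usage=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$usage" -gt 90 ]; then
    echo "  WARNING: / is at ${usage}% capacity"
else
    echo "  / at ${usage}% capacity"
fi
```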

&lt;p&gt;The daily script performs a comprehensive set of checks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removes scratch and junk files from &lt;code&gt;/tmp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Purges accounting records and reports processes killed by &lt;code&gt;pledge(2)&lt;/code&gt; or &lt;code&gt;unveil(2)&lt;/code&gt; violations&lt;/li&gt;
&lt;li&gt;Checks daemon status: lists any daemons enabled in &lt;code&gt;rc.conf.local&lt;/code&gt; that are not actually running&lt;/li&gt;
&lt;li&gt;Reports which file systems need to be dumped&lt;/li&gt;
&lt;li&gt;Runs the &lt;code&gt;security(8)&lt;/code&gt; check script&lt;/li&gt;
&lt;li&gt;Optionally backs up the root file system to &lt;code&gt;/altroot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Optionally runs &lt;code&gt;calendar(1)&lt;/code&gt; and file system checks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The security integration is notable. OpenBSD's daily maintenance does not merely check disk space and rotate logs. It reports on pledge and unveil violations: processes that attempted to exceed their declared capabilities or access files outside their declared scope. Security is not an add-on, not an agent, not a third-party tool. It is a shell script that runs every morning and tells you who misbehaved.&lt;/p&gt;

&lt;p&gt;Output is mailed to root. OpenBSD's documentation strongly recommends that root's mail be aliased to a real user account. Otherwise, root's mail simply accumulates until the partition runs out of space. One does note the pragmatism.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux: systemd timers
&lt;/h2&gt;

&lt;p&gt;Linux offers systemd timers as the modern alternative to cron. Each scheduled task requires two files: a &lt;code&gt;.service&lt;/code&gt; unit file defining what to run, and a &lt;code&gt;.timer&lt;/code&gt; unit file defining when to run it.&lt;/p&gt;

&lt;p&gt;A daily cleanup requires a service file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# /etc/systemd/system/cleanup.service
&lt;/span&gt;&lt;span class="nn"&gt;[Unit]&lt;/span&gt;
&lt;span class="py"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;Daily cleanup&lt;/span&gt;

&lt;span class="nn"&gt;[Service]&lt;/span&gt;
&lt;span class="py"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;oneshot&lt;/span&gt;
&lt;span class="py"&gt;ExecStart&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/usr/local/bin/cleanup.sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And a timer file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# /etc/systemd/system/cleanup.timer
&lt;/span&gt;&lt;span class="nn"&gt;[Unit]&lt;/span&gt;
&lt;span class="py"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;Run cleanup daily&lt;/span&gt;

&lt;span class="nn"&gt;[Timer]&lt;/span&gt;
&lt;span class="py"&gt;OnCalendar&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;daily&lt;/span&gt;
&lt;span class="py"&gt;Persistent&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;true&lt;/span&gt;

&lt;span class="nn"&gt;[Install]&lt;/span&gt;
&lt;span class="py"&gt;WantedBy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;timers.target&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then enable and start it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;systemctl daemon-reload
systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; cleanup.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For ten maintenance tasks, this produces twenty files and ten reload-and-enable sequences. The output goes to the journal, retrievable via &lt;code&gt;journalctl -u cleanup.service&lt;/code&gt;. There is no collected daily report. There is no severity filtering. There is no single email summarising the state of the system. Each task lives in its own universe.&lt;/p&gt;

&lt;p&gt;systemd timers do offer features that cron does not: monotonic timers (relative to boot), persistent timers (catch up missed runs), calendar expressions with second-level granularity, and dependency ordering via the unit system. These are genuine capabilities. Whether the complexity is justified for "run this script every morning" is, one suspects, a matter of taste. And file count tolerance.&lt;/p&gt;
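
&lt;p&gt;A few &lt;code&gt;OnCalendar&lt;/code&gt; forms, for flavour; each can be verified with &lt;code&gt;systemd-analyze calendar&lt;/code&gt; before it ever fires:&lt;/p&gt;

```
# 00:00 every day
OnCalendar=daily
# 02:30 every day
OnCalendar=*-*-* 02:30:00
# weekdays at 09:00
OnCalendar=Mon..Fri 09:00
# every 15 minutes
OnCalendar=*:0/15
```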

&lt;h2&gt;
  
  
  The Point
&lt;/h2&gt;

&lt;p&gt;cron tells you when. periodic tells you what happened.&lt;/p&gt;

&lt;p&gt;The difference between a scheduler and a maintenance framework is the difference between "the job ran" and "the job ran, succeeded, found nothing unusual, and here is the proof." One of those lets you sleep. The other requires you to check.&lt;/p&gt;

&lt;p&gt;FreeBSD built a framework: structured directories, a single configuration file, severity-coded output, collected daily reports. OpenBSD built simplicity: three scripts, three local overrides, security baked into the daily routine. Linux built a general-purpose process management system and asked it to also handle cron jobs.&lt;/p&gt;

&lt;p&gt;All three work. The BSDs understood, decades ago, that the problem was never scheduling. It was accountability. One configuration file. One email. One morning coffee. Rather civilised, really.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/periodic" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>freebsd</category>
      <category>linux</category>
      <category>devops</category>
      <category>sysadmin</category>
    </item>
    <item>
      <title>Why the Cloud Is the Default</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sun, 12 Apr 2026 07:00:00 +0000</pubDate>
      <link>https://dev.to/vivian-voss/why-the-cloud-is-the-default-1k1b</link>
      <guid>https://dev.to/vivian-voss/why-the-cloud-is-the-default-1k1b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvw6rmuui4xmo8cwe6cz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvw6rmuui4xmo8cwe6cz.png" alt="A developer with cat-ear headphones, sits on the ground leaning against a small server rack, looking up thoughtfully at dark clouds raining dollar signs and padlocks. Pen-and-ink style, mostly monochrome. The text reads: " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;On Second Thought — Episode 03&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In February 2011, US Chief Information Officer Vivek Kundra published the Federal Cloud Computing Strategy under the Obama administration's 25-Point IT Reform Plan. The mandate was explicit: agencies must evaluate a cloud computing option first, and come up with a damn good reason not to use it. Kundra claimed $20 billion of the government's $80 billion IT budget could move to cloud.&lt;/p&gt;

&lt;p&gt;By 2014, Gartner's Magic Quadrant showed AWS with more than five times the cloud IaaS compute capacity of the next fourteen providers combined. By 2018, "on-premise" had become a dirty word in enterprise IT. The axiom was established.&lt;/p&gt;

&lt;p&gt;Not discovered. Not proven. Established. By a government memo, an analyst report, and three poster-child migrations from companies (Netflix, Spotify, Airbnb) whose elastic-demand requirements bear no resemblance to the vast majority of software running in production today.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Axiom
&lt;/h2&gt;

&lt;p&gt;"Of course we use the cloud. Everyone does." Nobody questions this in a board meeting. Nobody loses a promotion for recommending AWS. CTOs choose it not because it is optimal, but because it is defensible. If something goes wrong with AWS, it is AWS's fault. If something goes wrong with your own infrastructure, it is your fault. This is not engineering. This is career insurance.&lt;/p&gt;

&lt;p&gt;The startup ecosystem reinforces this. AWS Activate offers $1,000 to $100,000 in free credits to new companies. Incubators and accelerators distribute AWS credits as standard onboarding. The credits expire after twelve to twenty-four months. By then, your architecture is built on AWS services, your team knows AWS tooling, your monitoring assumes CloudWatch, and your deployment pipeline assumes CodeDeploy. Migration cost exceeds staying cost. The business model is identical to the IBM mainframe playbook of the 1970s: make switching costs higher than the cost of staying. The technology changed. The economics did not.&lt;/p&gt;

&lt;p&gt;Harvard Business School research from 2018 documented the effect: after AWS launched in 2006, first-round VC funding for cloud-benefiting startups dropped 20% because infrastructure costs fell dramatically. VCs responded by funding more startups with less diligence. The cloud did not just change infrastructure. It changed who gets funded and how. And it locked in AWS as the default infrastructure for an entire generation of companies that never evaluated the alternative.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cost
&lt;/h2&gt;

&lt;p&gt;Flexera's 2025 State of the Cloud Report found that 27% of all cloud spend is wasted. At $675 billion in global cloud infrastructure spending, that is $182 billion per year evaporating into unused resources, over-provisioned instances, and forgotten development environments. Two-thirds of organisations report waste from idle or underused resources.&lt;/p&gt;

&lt;p&gt;The utilisation numbers are worse. The median EC2 instance runs at 7 to 12% CPU utilisation. Kubernetes clusters average 10% CPU and 20% memory utilisation. You are paying for eight to fourteen times the compute you actually use. In no other industry would this be considered acceptable. In cloud computing, it is considered normal.&lt;/p&gt;

&lt;p&gt;37signals, the company behind Basecamp and HEY, left AWS in 2023. David Heinemeier Hansson documented the entire process publicly. The hardware investment: approximately $700,000 in Dell servers, fully recouped during the first year. The storage migration: 10 petabytes moved from S3 to Pure Storage, with an upfront cost of $1.5 million and annual operating costs under $200,000 (replacing $1.3 million per year in S3 charges alone). The total annual savings: $2 million. The five-year projection, revised upward from the original $7 million: over $10 million. With faster hardware and considerably more storage. AWS reportedly comped a quarter-million-dollar egress bill. One does appreciate the parting gift.&lt;/p&gt;

&lt;p&gt;Dropbox moved 90% of its customer data off AWS to custom colocation in 2015 and 2016. The investment: $53 million in its own data centres. The savings: $75 million over two years. The return on investment was achieved before the infrastructure was fully operational.&lt;/p&gt;

&lt;p&gt;Ahrefs, the SEO analytics company, never went to cloud. They run 850 servers in a Singapore colocation. Monthly cost per server: $1,550 on-premises versus $17,557 for the AWS equivalent. AWS would cost 11.3 times more. Over 2.5 years, the estimated savings: $400 million. Ahrefs' total revenue for 2020 to 2022 was $257.5 million. The cloud would not have reduced their margin. It would have eliminated their company.&lt;/p&gt;

&lt;p&gt;GEICO, Warren Buffett's insurance subsidiary, spent a decade migrating over 600 applications to Microsoft Azure. Costs ballooned to 2.5 times expectations. Reliability declined. In 2024, they announced repatriation of at least 50% of workloads to an OpenStack-based private cloud, projecting 50% savings per compute core and 60% per gigabyte of storage. Completion: 2029. A decade to get in, half a decade to get out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Sovereignty Problem
&lt;/h2&gt;

&lt;p&gt;The CLOUD Act, signed into US federal law in March 2018, allows US law enforcement to compel American technology companies to hand over data stored anywhere in the world. If your data is hosted in Frankfurt or Paris, and the infrastructure is managed by AWS, Azure, or Google Cloud, it can legally be accessed by US authorities. This directly conflicts with GDPR Article 48, which requires international agreements for third-country data access. Cloud providers face an impossible choice: comply with US warrants and breach European law, or refuse and face US legal penalties.&lt;/p&gt;

&lt;p&gt;Europe's response was Gaia-X, a federated cloud initiative launched six years ago. What happened: US hyperscalers lobbied to be included. Once inside, they, in the words of Nextcloud founder Frank Karlitschek, "flooded it with documents and regulations." Founding member Scaleway withdrew in 2021. Day-one member Agdatahub received EUR 4.8 million in funding, then went into liquidation. Karlitschek called it a "paper monster." A fundamental failure of strategy, vision, and political will.&lt;/p&gt;

&lt;p&gt;The NIS2 Directive, transposed in October 2024, now requires critical-sector organisations to assess cybersecurity risks from cloud providers. DORA, effective January 2025, forces financial institutions to manage ICT risk, including cloud dependency. France's "Doctrine Cloud" mandates government data stays in French-controlled facilities. The regulatory environment is turning, slowly. The infrastructure, however, remains American.&lt;/p&gt;

&lt;h2&gt;
  
  
  The $1,000 Test
&lt;/h2&gt;

&lt;p&gt;For $1,000 per month on AWS, you get approximately four mid-tier instances (8 vCPU, 16 GB each) with no bandwidth budget. Egress costs extra. Storage costs extra. Monitoring costs extra.&lt;/p&gt;

&lt;p&gt;For $1,000 per month on Hetzner, you get seven dedicated AX102 servers. 16 cores each. 128 GB RAM each. 20 TB bandwidth included per server. Totals: 112 cores, 896 GB RAM, 140 TB bandwidth. Independent benchmarks show Hetzner delivering 76% better multi-core performance than AWS and 11 times more IOPS.&lt;/p&gt;

&lt;p&gt;The price difference is not a rounding error. It is a factor of seven to ten. For workloads that do not require elastic scaling (which is most of them), the cloud is not a premium for convenience. It is a tax on the assumption that you had no other choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Question
&lt;/h2&gt;

&lt;p&gt;86% of CIOs now plan to move some workloads back from public cloud, according to Barclays' CIO Survey. The highest figure ever recorded, up from 43% in late 2020. The axiom is cracking.&lt;/p&gt;

&lt;p&gt;Modern servers handle 500,000 HTTP requests per second. PostgreSQL delivers 70,000 IOPS. A single well-configured machine handles 50,000 concurrent users with proper caching. The vast majority of software in production does not need elastic scale. It needs reliability, predictable costs, and control over its own data.&lt;/p&gt;

&lt;p&gt;The cloud was never the only answer. It was the only answer nobody got fired for choosing. The career insurance premium was paid by every company that did not question the default.&lt;/p&gt;

&lt;p&gt;On second thought: what if the default is wrong?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/why-the-cloud-is-the-default" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>infrastructure</category>
      <category>devops</category>
      <category>selfhosting</category>
    </item>
    <item>
      <title>The Feature Creep</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sat, 11 Apr 2026 05:00:00 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-feature-creep-3n20</link>
      <guid>https://dev.to/vivian-voss/the-feature-creep-3n20</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo26kmtmk8xir74nnu9ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo26kmtmk8xir74nnu9ty.png" alt="A developer with cat-ear headphones, holds a simple pocket knife next to a bloated giant Swiss Army knife bursting with a calendar, email, camera, shopping cart, spreadsheet, social media icon, and a dental insurance card. Left side in red: " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Beta Stories — Episode 07&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Notion launched in 2016 as a note-taking app with a clever block-based editor. By 2018, it had databases and project management. By 2023, AI. By 2024, a calendar. By 2025, an email client. 180 feature updates shipped in 2024 alone. The desktop app, an Electron wrapper, consumes 200 MB on disk. A note-taking app that now sends your email and manages your calendar. One does wonder when it will offer dental insurance.&lt;/p&gt;

&lt;p&gt;This is the industry's favourite disease. And it has two symptoms that appear contradictory but share the same root cause.&lt;/p&gt;

&lt;h2&gt;
  
  
  Symptom One: Adding What Nobody Asked For
&lt;/h2&gt;

&lt;p&gt;Jira was created in 2002 by two engineers in Sydney with a $10,000 credit card. A bug tracker. Simple, effective, named after Godzilla. Twenty-four years later, it serves Software Teams, DevOps, Product Managers, IT, HR, and Marketing through 3,000 marketplace plugins. Teams reportedly spend more time configuring their workflow tool than doing the work it tracks. Linear, which explicitly positions itself as "the anti-Jira," loads in about half the time. One does note the marketing angle with a certain sympathy.&lt;/p&gt;

&lt;p&gt;Slack launched in February 2014 as a team chat application. By 2017, a single workspace consumed 130 MB to 960 MB of RAM. By 2019, users reported 5 GB consumption. The desktop app ran each workspace in its own Electron webview, complete with its own DOM state, JavaScript engine, and GPU resources. In 2019, Slack rebuilt the application, claiming 50% less memory. The application still consumes 200 to 500 MB on disk. For text messages.&lt;/p&gt;

&lt;p&gt;Microsoft Teams consumed 5 to 6 GB of RAM in its Electron incarnation. The 2023 WebView2 rewrite claimed "2x faster, 50% less memory, 70% less disk space." Independent benchmarking confirmed the claims. Teams now idles at approximately 1 GB. Progress, certainly. Though one does note that "only 1 GB for a chat application" is a sentence that would have been considered satire in 2005.&lt;/p&gt;

&lt;p&gt;Evernote may be the most instructive case. It launched as a simple note-taking application using 200 to 300 MB of RAM. Over the years, it added business card scanning, reminders, a presentation mode, a work chat feature, and, memorably, a marketplace selling branded backpacks and socks. The version 10 rewrite consumed over 1 GB with four notes open. The app fell from market dominance to near-irrelevance, overtaken by Notion (which is, of course, busy repeating the same trajectory at a higher velocity).&lt;/p&gt;

&lt;p&gt;iTunes is the poster child. Apple's music player accumulated video playback, TV shows, podcasts, app management, ebook purchases, iPhone synchronisation, iCloud integration, a radio service, and social features. Apple killed it in 2019 with macOS Catalina, splitting it into three separate applications. The WWDC presentation was openly critical of the bloat. One does appreciate a company acknowledging its own mistakes, even if it took fourteen years.&lt;/p&gt;

&lt;h2&gt;
  
  
  Symptom Two: Removing What Everyone Used
&lt;/h2&gt;

&lt;p&gt;Google has discontinued 299 products and services. Sixty apps. Two hundred and fourteen services. Twenty-five hardware products. The kill rate averaged twelve per year early on, rising to twenty-two per year between 2011 and 2021. 2019 was the bloodbath: over twenty-five products eliminated in a single year.&lt;/p&gt;

&lt;p&gt;Google Reader, killed in July 2013, generated 100,000 petition signatures on Change.org within 48 hours, representing 24% of the platform's total traffic. Google proceeded regardless. Feedly gained 500,000 new users in 48 hours. The users were there. The will to serve them was not.&lt;/p&gt;

&lt;p&gt;macOS Lion replaced "Save As" with "Duplicate" in 2011. Apple's support forums received hundreds of complaints per week. Users downgraded their operating systems. Apple partially reversed course in Mountain Lion, hiding "Save As" behind Option+Shift+Command+S. Four keys to save a file. Progress.&lt;/p&gt;

&lt;p&gt;Windows 11, released in October 2021, removed taskbar drag-and-drop, taskbar repositioning, icon ungrouping, small taskbar icons, and the full right-click context menu. A GitHub project providing a drag-and-drop workaround accumulated 1,500 stars and 173 forks. Microsoft restored partial drag-and-drop support in September 2022, nearly a full year later. Ungrouping returned in May 2023. Taskbar repositioning has not returned at all. The upgrade removed more than it added.&lt;/p&gt;

&lt;p&gt;Reddit priced its API at $12,000 per 50 million requests in June 2023, translating to approximately $20 million per year for Apollo, a beloved third-party client with 1.5 million monthly active users. Apollo shut down on 30 June. So did Sync, BaconReader, and Reddit is Fun. Over 8,800 subreddits went dark in protest, representing a collective subscriber count of 2.8 billion. Reddit's CEO called the developer's objections "blackmail." The developer published audio recordings disproving the claim. Reddit proceeded regardless.&lt;/p&gt;

&lt;p&gt;Firefox killed its entire XUL extension ecosystem with Firefox 57 in November 2017. Approximately 15,000 legacy extensions, 75% of all add-ons, became incompatible overnight. DownThemAll, Greasemonkey, Firebug, ScrapBook: gone. The browser lost 46 million users over the following three years, declining from 256 million in 2016 to below 200 million by 2020. Mozilla's rationale was sound (security, performance), but the execution was a masterclass in alienating your most loyal users.&lt;/p&gt;

&lt;p&gt;Sonos redesigned its mobile application in May 2024, removing sleep timers, alarms, volume control, and accessibility options. The company's stock dropped 13%. Revenue declined 16% in fiscal Q4 2024. Approximately 100 employees were laid off. The CEO, Patrick Spence, resigned in January 2025. The company called the redesign "courageous." One does rather admire the vocabulary.&lt;/p&gt;

&lt;p&gt;Skype, once the default for video calling with 405 million users in 2008, added Snapchat-style stories, flashy emoji themes, and colourful chat boxes in a 2017 redesign. App store ratings dropped from 3.5 to 1.5 stars. Usage declined to 36 million by 2023. Microsoft shut it down in May 2025, twenty-two years after launch. The features nobody wanted did not save it. The features everyone relied on were already gone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Mechanism
&lt;/h2&gt;

&lt;p&gt;Adding features is how product managers justify the next sprint. Adding features is how startups justify the next funding round. Adding features is how enterprise sales closes the next contract ("Yes, we have that checkbox too"). The incentive structure rewards addition. Nobody gets promoted for keeping a feature count stable.&lt;/p&gt;

&lt;p&gt;Removing features is how engineering teams reduce maintenance cost. Removing features is how companies push users toward newer, more profitable products (Google Reader died so Google+ could fail in peace). Removing features is how redesigns happen: strip the interface, call it "modern," ship it before anyone notices what is missing.&lt;/p&gt;

&lt;p&gt;Neither decision involves asking the person using the software. The adding is driven by business metrics. The removing is driven by engineering metrics. The user is in neither equation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Signal
&lt;/h2&gt;

&lt;p&gt;VLC has been a media player since 2001. No account required. No cloud sync. No AI assistant. No subscription. It plays everything, on every platform, and it has never once tried to send your email. The traffic cone icon has not changed. The feature set has barely changed. The software works. One does find this rather refreshing.&lt;/p&gt;

&lt;p&gt;SQLite has maintained backwards compatibility since 2004 and runs on an estimated four billion devices. It did not achieve this by adding features every quarter. It achieved this by being finished. The file format has not changed in twenty-two years. The API is stable. The documentation is complete. The software does what it does, and it does not aspire to do more.&lt;/p&gt;

&lt;p&gt;These are not legacy projects clinging to relevance. They are proof that restraint is a feature. The decision not to add a calendar, an email client, or a marketplace selling branded backpacks is itself a design decision, and it is the one that ages best.&lt;/p&gt;

&lt;p&gt;Notion now sends your email. Skype added Snapchat stories and shut down eight years later. Google Reader served millions and was killed because it did not serve the company's social media ambitions. Windows 11 removed features that users relied on daily and spent two years putting some of them back.&lt;/p&gt;

&lt;p&gt;One pattern adds until the product collapses under its own weight. The other removes until the users collapse under their frustration. Both end the same way: the user leaves.&lt;/p&gt;

&lt;p&gt;The best software is not the one with the most features. It is the one that never added the wrong ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/the-feature-creep" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>software</category>
      <category>productdesign</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Certification Industrial Complex</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Fri, 10 Apr 2026 05:00:00 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-certification-industrial-complex-4i7d</link>
      <guid>https://dev.to/vivian-voss/the-certification-industrial-complex-4i7d</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadsoawo2w2nxndfn58o9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadsoawo2w2nxndfn58o9.png" alt="The Pitch versus The Invoice: a balance scale comparing six marketing promises of IT certifications (industry-recognised credentials, career advancement, top employers) against eleven actual costs (expiring exams, SAFe's thirteen cert levels, non-certified earning more since 2007, research showing no job performance correlation, CompTIA sold to private equity). The scale tips heavily toward the invoice side." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Invoice — Episode 17&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;"Get certified! Industry-recognised credentials! Advance your career! Prove your expertise!"&lt;br&gt;
Splendid. Let us examine what you are actually paying for.&lt;/p&gt;

&lt;p&gt;In late 2024, CompTIA's certification business was acquired by Thoma Bravo and H.I.G. Capital, two private equity firms. The nonprofit membership organisation kept its 501(c)(6) tax-exempt status under a new name. The certification arm, the part that generates $168 million per year, became a separate for-profit entity. The quiet part, said aloud: certifications are a product, and you are the customer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Renewal Invoice
&lt;/h2&gt;

&lt;p&gt;Cisco CCNP: $700 in exam fees. Expires in three years. Recertification: retake the exams or pay $300 plus Continuing Education courses at $99 to $500 each. AWS Solutions Architect Professional: $300. Expires in three years. Retake at full price. Your knowledge does not expire every thirty-six months. Your certification does. That is not education. That is a subscription.&lt;/p&gt;

&lt;p&gt;SAFe, the Scaled Agile Framework, offers thirteen certification levels for a single methodology. Not three. Not five. Thirteen. Initial training: $500 to $3,500. Annual renewal: $195 to $995 per person, depending on level. Two million professionals have been certified. If even a quarter maintain active renewals at an average of $295, that is approximately $147 million per year flowing into one framework's ecosystem. One does admire the arithmetic, if not the pedagogy.&lt;/p&gt;

&lt;p&gt;ITIL Foundation: $680 to $850. PeopleCert, which now owns ITIL, requires 20 CPD points annually and a $129-per-year subscription to maintain certification. PMP: $555 for the exam, $139 per year for PMI membership, and 60 Professional Development Units per cycle. Commercial PDU providers charge $25 to $100 per unit, totalling roughly $3,000 per three-year cycle. Your project management certification costs more to maintain than many of the tools you manage projects with.&lt;/p&gt;

&lt;p&gt;A team of ten engineers holding mixed Cisco, AWS, and SAFe certifications can easily spend $50,000 to $100,000 per year on certification maintenance alone. Not on training. Not on learning. On keeping the certificates valid.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Competence Invoice
&lt;/h2&gt;

&lt;p&gt;Foote Partners has tracked IT skills pay premiums across 3,305 employers since 2007. Their finding: non-certified technical skills have earned nearly 2% more of base salary than certified ones for seventeen consecutive years. In 2023, the average pay premium for certifications declined nearly 1% overall, hitting its lowest point since 2015. The market is telling you something. Rather clearly.&lt;/p&gt;

&lt;p&gt;Academic research on Microsoft MCP certification found it "does not predict job performance outcomes." Despite MCP-certified professionals reporting significantly higher job competencies than non-certified peers, the certification itself had no predictive power for actual performance on the job. A separate meta-analysis concluded that estimates of the "causal impact of certification on long-term labour market outcomes are not significantly different from zero," though certification can help with initial job-finding. It opens the door. It does not prepare you for what is behind it.&lt;/p&gt;

&lt;p&gt;Christian Espinosa, a cybersecurity author, coined the term "cybersecurity paper tigers": people holding penetration testing certifications who cannot explain what nmap is. He reports: "Certifications often only require passing multiple-choice questions that people can memorise, which does not translate to the real world. Incidents do not present people with four options to avert a breach."&lt;/p&gt;

&lt;p&gt;Caveon, a test security company, estimates that 15 to 25 percent of IT certification exams show indicators of cheating. The certificate proves you sat in a room. Not necessarily that you learned anything in it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gatekeeping Invoice
&lt;/h2&gt;

&lt;p&gt;"Must have three AWS-certified architects." A single line in a Request for Proposals that eliminates every competitor whose team uses alternative technology. The certification is not measuring competence. It is filtering for vendor loyalty. If your entire team holds Cisco certifications, switching to Juniper means writing off years of investment. The certs are worthless outside the vendor's ecosystem. That is not a side effect. That is the design.&lt;/p&gt;

&lt;p&gt;SAFe in enterprise procurement is the purest expression of this pattern. Large organisations, particularly government contractors, require "SAFe-certified teams" in their statements of work. The framework's owner, Scaled Agile Inc., sells the certification. The certification creates the procurement requirement. The requirement sells more certifications. A self-reinforcing loop generating $25 million per year.&lt;/p&gt;

&lt;p&gt;Kent Beck, creator of Extreme Programming and co-author of the Agile Manifesto, called the certification structure "dishonest," "a pyramid scheme," and "cancer." Robert C. Martin described agile certifications as "a complete joke and utter absurdity." Martin Fowler offered his assessment at a GOTO conference: "SAFe stands for Shitty Agile For Enterprises." Ken Schwaber, co-creator of Scrum, published "unSAFe at any speed" in 2013. Ten of the seventeen Agile Manifesto co-authors advise against adopting SAFe.&lt;/p&gt;

&lt;p&gt;The framework persists regardless. The invoices must be paid.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Scale Invoice
&lt;/h2&gt;

&lt;p&gt;Pearson VUE delivers 21 million certification exams annually across 20,000 test centres in 180 countries. Contract renewal rate: 99 percent. Revenue: approximately $537 million per year.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Entity&lt;/th&gt;
&lt;th&gt;Annual Revenue&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pearson VUE&lt;/td&gt;
&lt;td&gt;~$537M&lt;/td&gt;
&lt;td&gt;Exam delivery&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CompTIA&lt;/td&gt;
&lt;td&gt;~$168M&lt;/td&gt;
&lt;td&gt;Certification (now PE-owned)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scaled Agile Inc.&lt;/td&gt;
&lt;td&gt;~$25M&lt;/td&gt;
&lt;td&gt;Framework + certification&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PMI&lt;/td&gt;
&lt;td&gt;Not disclosed&lt;/td&gt;
&lt;td&gt;Membership + certification&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IT Training Market (total)&lt;/td&gt;
&lt;td&gt;~$79B&lt;/td&gt;
&lt;td&gt;Training + certification&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cybersecurity Cert Market&lt;/td&gt;
&lt;td&gt;~$3.9B&lt;/td&gt;
&lt;td&gt;Growing to $7.5B by 2030&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The total IT training and certification market was valued at $79 billion in 2025, projected to reach $107 billion by 2033. Private equity is not acquiring CompTIA because it believes in professional development. It is acquiring a recurring revenue stream with a 99 percent renewal rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Alternative
&lt;/h2&gt;

&lt;p&gt;Read the documentation. Build something. Contribute to open source. Write about what you learned. Publish your code. A GitHub profile with real commits, real pull requests, and real code reviews tells an employer more than a certificate that expires next March.&lt;/p&gt;

&lt;p&gt;The best engineers one has worked with held zero certifications. They held opinions, backed by experience, tested in production. They could explain not just what a tool does but why it was designed that way and when not to use it. No renewal fee required.&lt;/p&gt;

&lt;p&gt;Microsoft, to its credit, now offers free online renewal assessments for Azure certifications. It is the only major vendor that does not charge for recertification. One does note this with a certain respect, and a certain awareness of what it implies about everyone else.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;The certification industrial complex operates on three revenue streams: initial fees ($100 to $3,500 per exam), recurring renewals ($129 to $995 per person per year), and the invisible cost: vendor lock-in. Once your team is certified in technology X, switching to technology Y means writing off the investment and starting over. The real product is not knowledge. It is switching costs.&lt;/p&gt;

&lt;p&gt;When a nonprofit sells its certification business to private equity, the transaction is not a change in strategy. It is a confession. The certifications were always a product. The professionals were always the customers. The knowledge was always incidental.&lt;/p&gt;

&lt;p&gt;Pearson VUE delivers 21 million exams per year. CompTIA generates $168 million. Scaled Agile generates $25 million. The industry does not sell knowledge. It sells permission: permission to apply, permission to bid, permission to call yourself qualified.&lt;/p&gt;

&lt;p&gt;When private equity acquires the exam, the product is you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/certification-industrial-complex" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>programming</category>
      <category>certification</category>
      <category>industry</category>
    </item>
    <item>
      <title>Rust Says No</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Thu, 09 Apr 2026 07:00:16 +0000</pubDate>
      <link>https://dev.to/vivian-voss/rust-says-no-chm</link>
      <guid>https://dev.to/vivian-voss/rust-says-no-chm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsmii6xc4l5i3cm5398n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsmii6xc4l5i3cm5398n.png" alt="A young developer with cat-ear headphones, sits at her desk surrounded by monitors showing Rust code. Ferris the crab mascot watches from the desk. Papers marked with red X symbols are scattered around — the compiler said no. The text reads: " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;By Design — Episode 03&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You have written this bug. You have returned null from a function because the real value was not available yet. You have caught an exception three levels up and logged "something went wrong" without knowing what. You have pushed to production, and at 3 AM the on-call phone rang because a pointer pointed to memory that no longer existed.&lt;/p&gt;

&lt;p&gt;Every language you have ever used let you compile that code. Every single one said: "Looks fine to me."&lt;/p&gt;

&lt;p&gt;In 2006, Graydon Hoare walked up twenty-one flights of stairs to his apartment in Vancouver because his elevator had crashed. The software, written in C, had a memory bug. The elevator ran on code that could access freed memory, dereference null pointers, and corrupt its own state. Hoare lived on the 21st floor. It was not the first time the elevator had crashed. It was, however, the last time he accepted it as normal.&lt;/p&gt;

&lt;p&gt;He started writing a programming language that evening. He named it after the rust fungus: an organism, as MIT Technology Review later noted, "over-engineered for survival." Rather fitting.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Complaint
&lt;/h2&gt;

&lt;p&gt;"Rust is too hard. The borrow checker fights you. There is no garbage collector. No null. No exceptions. No inheritance. Everything is immutable by default. Why does this language say no to everything?"&lt;/p&gt;

&lt;p&gt;One does hear this. Usually from someone whose last production outage was caused by one of those features.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Design: No Garbage Collector
&lt;/h2&gt;

&lt;p&gt;You know the garbage collector pause that spiked your latency? The one you tuned, and tuned, and tuned, and then it spiked again?&lt;/p&gt;

&lt;p&gt;Memory in Rust is managed by ownership. Every value has exactly one owner. When the owner goes out of scope, the value is freed. No garbage collector runs in the background. No stop-the-world pauses. No GC tuning. No memory leaked because a reference was held somewhere you forgot about.&lt;/p&gt;
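&lt;p&gt;A minimal sketch of the ownership rule (the variable names are illustrative, not from any real codebase):&lt;/p&gt;

```rust
fn main() {
    let s = String::from("hello"); // s owns the heap allocation
    let t = s;                     // ownership moves to t; s is no longer valid
    // println!("{}", s);          // compile error: use of moved value `s`
    println!("{}", t);             // fine: t is the sole owner
}                                  // t goes out of scope here: the String is freed, no GC involved
```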

&lt;p&gt;Discord documented this in a now-famous blog post. Their Read States service, written in Go, experienced latency spikes every two minutes. The Go garbage collector paused to scan the entire LRU cache. The spikes were not caused by a massive amount of garbage, but by the GC scanning live data to determine what was free. The cache was mostly alive. The scan was mostly pointless. The latency was entirely real.&lt;/p&gt;

&lt;p&gt;They rewrote it in Rust. The latency dropped from milliseconds to microseconds. The spikes disappeared. Not because the Rust code was more clever. Because there was no garbage collector to pause. One does rather appreciate a solution that works by removing the problem.&lt;/p&gt;

&lt;p&gt;The trade-off is real: you must think about ownership. Every value must have a clear owner. Every borrow must have a clear lifetime. The compiler enforces this at compile time. The cost is paid during development, when your IDE is open and your coffee is warm. The reward is paid in production, when neither is true.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Design: No Null
&lt;/h2&gt;

&lt;p&gt;You know the null check you forgot last Tuesday? The one that worked fine in testing because the test data always had a value?&lt;/p&gt;

&lt;p&gt;In 2009, Tony Hoare (no relation to Graydon) stood before an audience at QCon London and called null his "billion dollar mistake." He had invented it in 1965 for ALGOL W, and he described it as the source of "innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years." One does note that the estimate was conservative even then.&lt;/p&gt;

&lt;p&gt;Rust has no null. Instead, it has &lt;code&gt;Option&amp;lt;T&amp;gt;&lt;/code&gt;: a value is either &lt;code&gt;Some(value)&lt;/code&gt; or &lt;code&gt;None&lt;/code&gt;. The compiler forces you to handle both cases. You cannot call a method on a value that might not exist without first checking whether it exists. The billion-dollar mistake is prevented by the type system, at compile time, with zero runtime cost.&lt;/p&gt;
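&lt;p&gt;A sketch of what this looks like in practice; the &lt;code&gt;find_user&lt;/code&gt; function below is hypothetical:&lt;/p&gt;

```rust
// Hypothetical lookup: the return type itself says "this may not exist".
fn find_user(id: u32) -> Option<String> {
    if id == 1 { Some(String::from("vivian")) } else { None }
}

fn main() {
    // The compiler forces both cases to be handled before the value is used.
    match find_user(1) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }
    // Or handle the missing case explicitly with a default:
    let name = find_user(42).unwrap_or_else(|| String::from("anonymous"));
    println!("{}", name); // prints "anonymous"
}
```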

&lt;p&gt;This is not convenience. It is the elimination of a defect class. Every NullPointerException your Java code has ever thrown, every segfault from dereferencing null in C, every "undefined is not a function" in JavaScript: Rust's type system makes them structurally impossible in safe code. Not unlikely. Not caught by tests. Impossible.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Design: No Exceptions
&lt;/h2&gt;

&lt;p&gt;You know the try-catch that silently swallowed an error? The one where the catch block logged a message nobody read, and the application continued in an undefined state for forty minutes before someone noticed?&lt;/p&gt;

&lt;p&gt;Functions that can fail in Rust return &lt;code&gt;Result&amp;lt;T, E&amp;gt;&lt;/code&gt;: either &lt;code&gt;Ok(value)&lt;/code&gt; or &lt;code&gt;Err(error)&lt;/code&gt;. The failure path is visible in the function's type signature. Every caller knows that the function can fail. Every caller must handle the failure or explicitly propagate it with the &lt;code&gt;?&lt;/code&gt; operator.&lt;/p&gt;

&lt;p&gt;There are no hidden throws. No &lt;code&gt;catch&lt;/code&gt; blocks that swallow errors silently. No stack unwinding that crosses function boundaries invisibly. No "everything is fine" until a try-catch three levels up catches something it does not understand and logs "An error occurred." One does wonder how many production incidents have begun with that exact log message.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;?&lt;/code&gt; operator makes error propagation ergonomic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;read_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;contents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;read_to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;toml&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each &lt;code&gt;?&lt;/code&gt; either unwraps the success or returns the error to the caller. The code reads linearly. The error handling is explicit. The types tell the truth. Quite the novelty.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Design: No Inheritance
&lt;/h2&gt;

&lt;p&gt;You know the six-level class hierarchy where nobody remembers what the grandparent overrides? The one where changing the parent class broke seventeen subclasses in ways that only appeared in integration testing?&lt;/p&gt;

&lt;p&gt;Rust has no class inheritance. No &lt;code&gt;extends&lt;/code&gt;. No &lt;code&gt;class Dog extends Animal&lt;/code&gt;. No hierarchy where the behaviour of your object depends on decisions made four levels above by someone who left the company in 2019.&lt;/p&gt;

&lt;p&gt;Instead, Rust uses traits: shared behaviour defined as interfaces, implemented by types. A struct can implement as many traits as it needs. Traits compose. They do not cascade.&lt;/p&gt;
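&lt;p&gt;A sketch with hypothetical types: one trait, implemented independently by two entirely unrelated structs:&lt;/p&gt;

```rust
// Hypothetical names throughout: one trait, two unrelated implementors.
trait Describe {
    fn describe(&self) -> String;
}

struct Dog { name: String }
struct Server { port: u16 }

impl Describe for Dog {
    fn describe(&self) -> String { format!("dog named {}", self.name) }
}

impl Describe for Server {
    fn describe(&self) -> String { format!("server on port {}", self.port) }
}

fn main() {
    // No hierarchy, no base class: each type states exactly
    // which behaviour it implements, and nothing cascades.
    let d = Dog { name: String::from("Rex") };
    let s = Server { port: 8080 };
    println!("{}", d.describe());
    println!("{}", s.describe());
}
```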

&lt;p&gt;There is no diamond problem. No fragile base class. No "I changed the parent class and now everything behaves differently." The relationship between types is explicit: this type implements this behaviour. Full stop. One does rather miss problems one never has.&lt;/p&gt;

&lt;p&gt;This is composition over inheritance, enforced by the language. Not a design pattern you choose to follow when you remember. A constraint the compiler imposes whether you remember or not.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Design: No Implicit Mutability
&lt;/h2&gt;

&lt;p&gt;In Rust, all variables are immutable by default. To make something mutable, you write &lt;code&gt;let mut&lt;/code&gt;. The act of changing state becomes a conscious, visible decision in the code. Not a default. A declaration.&lt;/p&gt;

&lt;p&gt;Combined with the borrow checker's rule that you cannot have a mutable reference and an immutable reference to the same data at the same time, this eliminates data races at compile time. Not by detecting them at runtime. Not by crashing when they occur. By making them structurally impossible to express.&lt;/p&gt;
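&lt;p&gt;A minimal illustration of both rules; the commented-out lines are the ones the compiler would reject:&lt;/p&gt;

```rust
fn main() {
    let x = 5;              // immutable by default
    // x = 6;               // compile error: cannot assign twice to immutable variable
    let mut y = x;          // mutability is an explicit, visible declaration
    y += 1;

    let mut data = vec![1, 2, 3];
    let first = &data[0];   // immutable borrow begins
    // data.push(4);        // compile error: cannot borrow `data` as mutable
                            //   while the immutable borrow `first` is still live
    println!("{} {}", first, y);
    data.push(4);           // the immutable borrow ended above; mutation is now allowed
    assert_eq!(data, vec![1, 2, 3, 4]);
}
```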

&lt;p&gt;Every concurrent bug you have ever debugged, every "it works on my machine" that vanished under load, every race condition that appeared once in ten thousand runs: Rust's type system prevents them by refusing to compile code that could produce them. The compiler does not trust you. It is, one must concede, entirely justified.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trade-Off
&lt;/h2&gt;

&lt;p&gt;Let us be honest. The learning curve is real and well-documented.&lt;/p&gt;

&lt;p&gt;The borrow checker will reject code that compiles without complaint in every other language you know. It will reject code that is, in fact, correct. It will reject code that has no bug. It will reject code because it cannot &lt;em&gt;prove&lt;/em&gt; the code has no bug, and Rust has decided that "cannot prove safe" is the same as "unsafe."&lt;/p&gt;

&lt;p&gt;This is frustrating. It feels adversarial. It feels like the compiler is wrong and you are right and the code is fine and why will it not just compile.&lt;/p&gt;

&lt;p&gt;It is not wrong. It is conservative. And in the gap between "probably correct" and "provably correct" lie the bugs you would have shipped to production, discovered at 3 AM, and spent three days debugging whilst questioning your career choices.&lt;/p&gt;

&lt;p&gt;The Rust community is honest about this cost. The first three months are painful. The compiler's error messages are unusually good (it tells you what went wrong and often suggests the fix), but the frequency of those messages during learning is high. You will argue with the compiler. You will lose. You will, eventually, realise that losing to the compiler is considerably cheaper than losing to production.&lt;/p&gt;

&lt;p&gt;The reward: once it compiles, it works. Not always. But with a frequency and reliability that other systems languages do not match. The category of bugs that Rust eliminates (use-after-free, null dereference, data races, buffer overflows) accounts for approximately 70% of all security vulnerabilities in C and C++ codebases, according to research from Microsoft and Google's Project Zero. Seventy percent. One does find that number rather difficult to ignore.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Proof
&lt;/h2&gt;

&lt;p&gt;The Linux kernel accepted Rust as a supported language in December 2025. It is no longer experimental. Dave Airlie, maintainer of the DRM subsystem, stated that the DRM project was approximately one year away from requiring Rust and disallowing C for new drivers. The kernel contains 34 million lines of C and 25 thousand lines of Rust. The transition has begun. One does note the ratio with a certain quiet patience.&lt;/p&gt;

&lt;p&gt;Microsoft has rewritten 188,000 lines of Windows kernel and DirectWrite code in Rust, with a stated ambition to eliminate C and C++ from its entire codebase by 2030. When Microsoft, a company not traditionally associated with radical architectural courage, decides to rewrite its kernel in a new language, one does pay attention.&lt;/p&gt;

&lt;p&gt;Discord's Go-to-Rust migration eliminated garbage collector latency spikes and reduced response time from milliseconds to microseconds. The blog post has become required reading in systems engineering circles. Quite deservedly.&lt;/p&gt;

&lt;p&gt;Cloudflare built Infire, a custom LLM inference engine, in Rust, achieving 7% faster inference than vLLM. AWS, Google, and Meta all run Rust in production at significant scale.&lt;/p&gt;

&lt;p&gt;Android 16 ships with Rust-built components in the kernel (ashmem memory allocator). Millions of devices run Rust in production without their owners knowing or caring. Which is, of course, exactly how infrastructure should work.&lt;/p&gt;

&lt;p&gt;45% of enterprises now run Rust in non-trivial production workloads. The Stack Overflow Developer Survey has named Rust the most admired language for nine consecutive years, with an 83% admiration rate in 2024. Not because it is easy. Not because the learning curve is gentle. Because the elevator stops crashing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Principle
&lt;/h2&gt;

&lt;p&gt;Every feature a language grants is a failure mode it accepts. Every convenience added is a bug category normalised. Every "yes" comes with a cost that compounds over decades, paid not by the language designer but by every developer who inherits the codebase.&lt;/p&gt;

&lt;p&gt;The courage to say no is the rarest engineering virtue. Rust proves that a language can refuse convenience, refuse familiar patterns, refuse the features that every other language considers mandatory, and win adoption not despite the refusal but because of it.&lt;/p&gt;

&lt;p&gt;The borrow checker is not a barrier. It is a boundary. And boundaries, applied with discipline, are what separate systems that survive from systems that merely ship.&lt;/p&gt;

&lt;p&gt;Graydon Hoare's elevator still works. One rather suspects it is no longer written in C.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Graydon Hoare stepped down from Rust in 2013. The language he started in frustration on a stairwell is now in the Linux kernel, the Windows kernel, and the infrastructure of every major cloud provider. The fungus, it turns out, was indeed over-engineered for survival.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/rust-says-no" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>programming</category>
      <category>systemdesign</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Page Transition You Never Had to Build</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Wed, 08 Apr 2026 08:10:36 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-page-transition-you-never-had-to-build-561a</link>
      <guid>https://dev.to/vivian-voss/the-page-transition-you-never-had-to-build-561a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tez76l45yuicivv1fj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8tez76l45yuicivv1fj3.png" alt="Split scene: oversized industrial tools (jackhammer, crane, welding gear, chains) struggle to connect two small pages, while the Developer holds a tiny paintbrush with three elegant glowing strokes already bridging the gap. Three lines of CSS replace entire frameworks." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Stack Patterns — Episode 11&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;"We need a single-page application because users expect smooth transitions between pages."&lt;/p&gt;

&lt;p&gt;One has heard this argument rather a lot over the past decade. It justified React. It justified Vue Router. It justified Framer Motion (32KB minified). It justified Barba.js. It justified entire application architectures built around the premise that clicking a link should not, under any circumstances, feel like clicking a link.&lt;/p&gt;

&lt;p&gt;400KB of JavaScript so that a heading could fade whilst the URL changed. Marvellous.&lt;/p&gt;

&lt;p&gt;The browser does it now. In CSS. Three lines.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;Both pages include this CSS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="k"&gt;@view-transition&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="py"&gt;navigation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the entire opt-in. Click a link that navigates to another page on the same origin. The browser takes a screenshot of the old page, loads the new page, takes a screenshot of the new page, and cross-fades between them. No JavaScript. No framework. No hydration. No virtual DOM diffing the entire document tree so that a title can slide thirty pixels to the left.&lt;/p&gt;

&lt;p&gt;The default transition is a cross-fade. It works immediately. For many sites, this is sufficient, and one does find sufficiency rather underrated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Named Transitions
&lt;/h2&gt;

&lt;p&gt;Want a specific element to animate from its old position to its new one? One CSS property:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nt"&gt;h1&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;view-transition-name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;heading&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same &lt;code&gt;view-transition-name&lt;/code&gt; on the corresponding element on both pages. The browser snapshots the old position, snapshots the new position, and animates between them. The element does not need to be the same DOM node. It does not need to be the same component. It does not need to exist in the same JavaScript context. It merely needs the same name.&lt;/p&gt;

&lt;p&gt;This is the key insight: the browser matches elements across pages by name, not by identity. Two entirely separate HTML documents, served by any backend in any language, connected only by a CSS property. The elegance is rather difficult to overstate.&lt;/p&gt;

&lt;p&gt;You can name as many elements as you wish:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;       &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;view-transition-name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;heading&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nc"&gt;.hero&lt;/span&gt;    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;view-transition-name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;hero&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nt"&gt;nav&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;view-transition-name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;navigation&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nc"&gt;.sidebar&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;view-transition-name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;sidebar&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each named element transitions independently. Unnamed elements participate in the default cross-fade.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom Animations
&lt;/h2&gt;

&lt;p&gt;The API exposes CSS pseudo-elements for both the old and new states:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nd"&gt;::view-transition-old&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;heading&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;animation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.3s&lt;/span&gt; &lt;span class="n"&gt;ease-out&lt;/span&gt; &lt;span class="n"&gt;slide-out&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nd"&gt;::view-transition-new&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;heading&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;animation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.3s&lt;/span&gt; &lt;span class="n"&gt;ease-in&lt;/span&gt; &lt;span class="n"&gt;slide-in&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The old state and the new state are separate snapshots rendered as replaced elements. You animate them independently. The browser composites them. The result is buttery smooth because it happens in the compositor thread, not in JavaScript.&lt;/p&gt;

&lt;p&gt;You can slide, scale, rotate, clip, or apply any CSS animation you would normally use. The full power of CSS animations and keyframes is available. No library API to learn. No React hooks to chain. Just CSS.&lt;/p&gt;
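&lt;p&gt;The &lt;code&gt;slide-out&lt;/code&gt; and &lt;code&gt;slide-in&lt;/code&gt; animations referenced above are ordinary CSS keyframes you define yourself; a minimal sketch (the pixel values are illustrative):&lt;/p&gt;

```css
/* Hypothetical keyframes for the named transition above */
@keyframes slide-out {
  to   { transform: translateX(-30px); opacity: 0; }
}
@keyframes slide-in {
  from { transform: translateX(30px); opacity: 0; }
}
```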

&lt;h2&gt;
  
  
  The Cost We Paid
&lt;/h2&gt;

&lt;p&gt;Let us be honest about what the industry traded for smooth page transitions.&lt;/p&gt;

&lt;p&gt;We abandoned server-rendered HTML. We shipped 300KB JavaScript runtimes to the client. We broke the back button and spent engineering hours rebuilding it in JavaScript. We reinvented routing in userland because the browser's native navigation was "not smooth enough." We lost native browser caching. We invented hydration to fix the problem we created by removing HTML in the first place. We built entire state management libraries (Redux, Zustand, Jotai, Pinia, the list grows quarterly) so the client could remember what the server already knew.&lt;/p&gt;

&lt;p&gt;We broke accessibility. Screen readers that worked perfectly with server-rendered pages now had to contend with JavaScript-mutated DOMs, focus management nightmares, and route changes that announced nothing. We broke SEO. Google had to build a headless Chrome renderer just to index SPA content, and for years, sites that relied on client-side rendering ranked lower simply because the crawler could not see the content.&lt;/p&gt;

&lt;p&gt;We broke the browser's loading indicator. Users no longer knew whether a page was loading or frozen. So we built skeleton screens to simulate the loading indicator we had removed. Marvellous.&lt;/p&gt;

&lt;p&gt;All so a heading could slide.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Replaces
&lt;/h2&gt;

&lt;p&gt;The View Transition API does not replace everything. Framer Motion and GSAP remain valuable for complex interactive animations: drag-and-drop, spring physics, gesture-driven sequences. Those are runtime interactions, not page transitions. The distinction matters.&lt;/p&gt;

&lt;p&gt;But for page-to-page navigation transitions? The use case that justified SPAs for millions of websites that are, fundamentally, collections of pages linked together with anchor tags?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Solution&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;JavaScript Required&lt;/th&gt;
&lt;th&gt;Works Without JS&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Barba.js&lt;/td&gt;
&lt;td&gt;7.5KB min&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Framer Motion&lt;/td&gt;
&lt;td&gt;32KB min&lt;/td&gt;
&lt;td&gt;Yes (React)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SPA router&lt;/td&gt;
&lt;td&gt;Entire framework&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;View Transition API&lt;/td&gt;
&lt;td&gt;0KB&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Graceful fallback&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Zero kilobytes. Because it is the browser. The runtime you already shipped.&lt;/p&gt;

&lt;h2&gt;
  
  
  Browser Support
&lt;/h2&gt;

&lt;p&gt;Here is where honesty matters, and where many articles about this API get rather conveniently vague.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full cross-document support:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chrome 126+ (desktop and Android)&lt;/li&gt;
&lt;li&gt;Edge 126+&lt;/li&gt;
&lt;li&gt;Safari 18.2+ (including iOS Safari)&lt;/li&gt;
&lt;li&gt;Opera 112+&lt;/li&gt;
&lt;li&gt;Samsung Internet 29+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Partial or no cross-document support:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Firefox 146+: supports same-document view transitions (Level 1), but cross-document transitions remain behind a flag (&lt;code&gt;dom.viewTransitions.enabled&lt;/code&gt; in Nightly only as of April 2026)&lt;/li&gt;
&lt;li&gt;Opera Mini, KaiOS Browser, UC Browser: no support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Global coverage:&lt;/strong&gt; 87.82% of users (CanIUse, April 2026). That is high, but it is not universal. Firefox's absence from the cross-document specification is notable, and for teams whose audience includes a significant Firefox share, this is a legitimate consideration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it does not matter as much as you think:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For browsers that do not support the API: nothing breaks. The user sees a normal page load. No error. No fallback code. No polyfill. No broken layout. No JavaScript exception. The page loads exactly as it would have loaded before the API existed.&lt;/p&gt;

&lt;p&gt;This is progressive enhancement in its purest form. The enhanced experience is literally free for supporting browsers (no additional bytes shipped), and the baseline experience is what every user had before. You are not degrading the experience for unsupported browsers. You are enhancing it for supported ones. The risk is zero. The cost is three lines of CSS.&lt;/p&gt;

&lt;p&gt;One does rather appreciate an API that fails by doing nothing wrong.&lt;/p&gt;

&lt;p&gt;Compare this with the SPA approach: if your JavaScript bundle fails to load (network timeout, CDN outage, ad blocker, corporate proxy), your users see a blank white page. The "enhanced" experience degrades to nothing. The View Transition API degrades to a normal page load. One does note the asymmetry.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Constraints
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Same-origin only.&lt;/strong&gt; Cross-document view transitions work for same-origin navigations only. You cannot animate a transition from your site to an external URL. This is a security constraint, not a limitation: the browser needs access to both documents to snapshot them. If your site links to external domains, those navigations simply behave normally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unique names per page.&lt;/strong&gt; Each &lt;code&gt;view-transition-name&lt;/code&gt; must be unique within a single document. Two elements on the same page cannot share a name. This is rarely a problem in practice, but it means you cannot, for example, animate all list items with the same name. Each needs its own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance consideration.&lt;/strong&gt; The browser captures raster screenshots of named elements. Naming dozens of elements on a complex page could impact transition performance. In practice, naming three to five key elements (header, hero image, navigation, main content area) produces smooth results. Naming everything is rather missing the point of selective enhancement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No cross-origin.&lt;/strong&gt; Worth stating twice. If your architecture relies on navigating between subdomains (&lt;code&gt;app.example.com&lt;/code&gt; to &lt;code&gt;docs.example.com&lt;/code&gt;), these are cross-origin navigations and will not trigger view transitions. The same-origin policy applies strictly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Same-Document Transitions
&lt;/h2&gt;

&lt;p&gt;For single-page applications, the same-document View Transition API (available since Chrome 111, Safari 18+, and Firefox 146+) provides equivalent functionality using &lt;code&gt;document.startViewTransition()&lt;/code&gt;. This variant has broader browser support because it landed earlier.&lt;/p&gt;

&lt;p&gt;But the cross-document variant is the interesting one, because it means multi-page applications, the architecture the web was built for, can now match the perceived smoothness of SPAs. Without being SPAs. Without the JavaScript. Without the complexity. Without breaking the back button, the cache, the loading indicator, accessibility, SEO, or the fundamental contract between browser and server.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Point
&lt;/h2&gt;

&lt;p&gt;For a decade, "smooth page transitions" was the argument that justified client-side routing, framework adoption, and the entire SPA architecture for websites that were, at their core, pages linked together with anchor tags. The browser lacked the capability, so we built it in JavaScript. Reasonably enough.&lt;/p&gt;

&lt;p&gt;The browser has the capability now. Three lines of CSS. Zero JavaScript. Progressive enhancement built in. 87.82% global support and climbing. Firefox is the last holdout for cross-document, and the same-document variant already works there.&lt;/p&gt;

&lt;p&gt;The argument no longer applies. The question is no longer "should we use an SPA for transitions?" The question is: "what were the other reasons?" One does suspect the list is rather shorter than expected.&lt;/p&gt;

&lt;p&gt;One does wonder how many SPAs will be reconsidered. One does rather suspect the answer is: not enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/view-transitions" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>css</category>
      <category>webdev</category>
      <category>frontend</category>
      <category>javascript</category>
    </item>
    <item>
      <title>DTrace</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Tue, 07 Apr 2026 05:48:27 +0000</pubDate>
      <link>https://dev.to/vivian-voss/dtrace-oni</link>
      <guid>https://dev.to/vivian-voss/dtrace-oni</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F968a2tb0rw25e7r44g2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F968a2tb0rw25e7r44g2o.png" alt="Administrator observes a transparent server rack with glowing blue syscall pathways and gold data streams, while an opaque dark server sits unobserved behind her. Full observability versus blind guessing: logging is guessing, DTrace is asking."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Technical Beauty — Episode 30&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In 2001, Bryan Cantrill was debugging a production system at Sun Microsystems. The system was misbehaving. The logs said nothing useful. The metrics showed nothing abnormal. By every observable measure, the system was fine. Except it was not fine. It was doing something wrong, and nobody could see what.&lt;/p&gt;

&lt;p&gt;Cantrill had helped build an entirely synthetic system: every instruction, every data structure, every byte was placed there by human beings. And yet the system could not answer the most basic question: what are you doing right now?&lt;/p&gt;

&lt;p&gt;That bothered him rather a lot.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Debugging production systems has always offered two options, both terrible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option one:&lt;/strong&gt; add logging statements, rebuild, redeploy, wait for the issue to recur. This works in development. In production, it means restarting a database serving ten thousand connections to add a printf. The cure is worse than the disease. And if you guessed wrong about where to log, you restart again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option two:&lt;/strong&gt; attach a debugger. Stop the process, inspect memory, step through code. This works beautifully on a developer's laptop. On a production trading system processing four million transactions per hour, stopping the process is not debugging. It is an outage.&lt;/p&gt;

&lt;p&gt;Unix has had syscall tracing since the 1990s. truss on Solaris and FreeBSD. strace on Linux, written by Paul Kranenburg for SunOS in 1991, ported to Linux by Branko Lankester. Both trace system calls by intercepting them via ptrace. Both work by stopping the traced process for every single syscall, switching context to the tracer, recording the call, and resuming. The overhead is brutal. Running strace on a production web server is rather like performing surgery whilst repeatedly switching off the lights.&lt;/p&gt;

&lt;p&gt;strace traces one process. It cannot follow what happens inside the kernel. It cannot correlate across services. It tells you what happened, but not why. And it makes everything slower in the process of telling you.&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;DTrace does not stop anything.&lt;/p&gt;

&lt;p&gt;Bryan Cantrill, Mike Shapiro, and Adam Leventhal designed and built DTrace at Sun Microsystems. The idea dates to the late 1990s. The implementation was first integrated into Solaris in September 2003. It shipped with Solaris 10 in January 2005. Development took approximately four years of focused engineering.&lt;/p&gt;

&lt;p&gt;The tagline: "Concise answers to arbitrary questions about the system."&lt;/p&gt;

&lt;p&gt;DTrace works by compiling probe scripts, written in a purpose-built language called D, into safe bytecode that is injected directly into the running kernel. When a probe fires, the bytecode executes in kernel context: no context switch, no process stop, no overhead beyond the probe itself.&lt;/p&gt;

&lt;p&gt;When a probe is not enabled, the overhead is zero. Not low. Not negligible. Zero. The original machine instruction runs unmodified. The probe point does not exist until you enable it. This is not an optimisation. It is the architecture.&lt;/p&gt;

&lt;h2&gt;The Language&lt;/h2&gt;

&lt;p&gt;The D language is deliberately constrained. It cannot allocate memory. It cannot loop infinitely. It cannot dereference invalid pointers. It cannot modify kernel state. It cannot crash the system.&lt;/p&gt;

&lt;p&gt;These are not limitations. They are the reason DTrace is safe for production. The language was designed so that any valid D program is guaranteed not to harm the system it observes. Safety is not a runtime check. It is a compile-time invariant.&lt;/p&gt;

&lt;p&gt;A DTrace one-liner to count system calls by process on a running FreeBSD server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dtrace &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'syscall:::entry { @[execname] = count(); }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No recompilation. No restart. No risk. The answer appears whilst the system continues serving traffic.&lt;/p&gt;

&lt;h2&gt;The Proof&lt;/h2&gt;

&lt;p&gt;In 2006, DTrace won the Wall Street Journal's Technology Innovation Award, Gold. Not for a consumer product. Not for an app. For a kernel instrumentation framework that lets you ask a running operating system what it is doing. One does find that rather encouraging about the state of technology journalism, at least in 2006.&lt;/p&gt;

&lt;p&gt;FreeBSD integrated DTrace in 2008. It ships in base. No packages to install, no modules to compile. macOS has included DTrace since Leopard (2007): every Mac ships with a kernel-level dynamic tracing framework that most users will never know exists. illumos, the open-source continuation of OpenSolaris, inherited DTrace from its birthplace.&lt;/p&gt;

&lt;p&gt;On Linux, DTrace's influence is unmistakable. eBPF (extended Berkeley Packet Filter) and its higher-level interface bpftrace provide similar capabilities: safe bytecode execution in the kernel, dynamic instrumentation, production-safe tracing. Brendan Gregg, who wrote the definitive DTrace book, now writes the definitive eBPF material. The ideas travelled. The architecture was validated by imitation.&lt;/p&gt;

&lt;h2&gt;The Philosophy&lt;/h2&gt;

&lt;p&gt;Cantrill articulated the core insight clearly: "We had created an entirely synthetic system, yet we could not ask ourselves what the software was doing. There was a real lack of observability in the system."&lt;/p&gt;

&lt;p&gt;Observability is not logging. Logging records what the developer anticipated might go wrong. Observability answers questions the developer never thought to ask. The difference is the difference between a searchlight and the sun.&lt;/p&gt;

&lt;p&gt;DTrace does not require you to predict your questions in advance. You do not instrument your code for DTrace. You do not add tracing libraries. You do not configure exporters. The probes exist at every function boundary, every syscall, every I/O operation. You simply ask.&lt;/p&gt;
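
&lt;p&gt;To pick a question nobody pre-instrumented for: which files is each process opening, right now? A sketch; on systems where programs favour &lt;code&gt;openat&lt;/code&gt; over &lt;code&gt;open&lt;/code&gt;, the probe name needs widening:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dtrace -n 'syscall::open:entry { printf("%s %s", execname, copyinstr(arg0)); }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;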

&lt;p&gt;This is what makes DTrace beautiful: it treats the running system as something that should be fully transparent to the operator. Not partially visible through pre-configured dashboards. Not indirectly observable through aggregated metrics. Directly, completely, safely observable. In production. Under load. Right now.&lt;/p&gt;

&lt;h2&gt;The Point&lt;/h2&gt;

&lt;p&gt;Twenty-three years after its first integration, DTrace remains the standard against which production tracing tools are measured. Its core design has not changed because it did not need to. Zero overhead when disabled. Safe by construction. Concise answers to arbitrary questions.&lt;/p&gt;

&lt;p&gt;Bryan Cantrill is now CEO of Oxide Computer Company, building rack-scale computers with the same philosophy: the system should be fully observable, fully debuggable, and fully understood. The principle survived the company that created it. One does find that rather beautiful.&lt;/p&gt;

&lt;p&gt;DTrace. Ask the system. It will answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/dtrace" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>unix</category>
      <category>freebsd</category>
      <category>observability</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Philosophy</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Mon, 06 Apr 2026 06:42:04 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-philosophy-843</link>
      <guid>https://dev.to/vivian-voss/the-philosophy-843</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufanw98g2fv1zwhn96fl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufanw98g2fv1zwhn96fl.png" alt="A massive ancient tree representing Unix as a living system. Its root system glows with luminescent blue data veins, while blue fruits in the crown represent the output. Claudine monitors the results on a tablet, connected to the tree through faint green circuit traces: one tool, one job, composed through pipes." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Unix Way — Episode 11&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In 1978, Doug McIlroy, head of the Bell Labs Computing Sciences Research Centre and inventor of the Unix pipe, wrote four sentences that have outlasted every framework, every methodology, and every architectural manifesto published since:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new 'features'."&lt;/p&gt;

&lt;p&gt;"Expect the output of every program to become the input to another, as yet unknown, program."&lt;/p&gt;

&lt;p&gt;"Design and build software, even operating systems, to be tried early, ideally within weeks."&lt;/p&gt;

&lt;p&gt;"Use tools in preference to unskilled help to lighten a programming task."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Eleven episodes into this series, and one does feel it is rather time to explain why it is called what it is called.&lt;/p&gt;

&lt;h2&gt;The Principles&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Do one thing and do it well.&lt;/strong&gt; grep searches. sort sorts. awk transforms. cut extracts. None of them does the other's job. None of them needs to. Composition replaces integration: small tools, connected by text streams, solving problems their authors never anticipated. Rob Pike and Brian Kernighan wrote in 1984: "The power of a system comes more from the relationships among programs than from the programs themselves." The pipe is not a feature. It is the architecture.&lt;/p&gt;
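
&lt;p&gt;Composition in practice, as a sketch. Assuming an &lt;code&gt;access.log&lt;/code&gt; in the common log format, with the client address in the first field:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Top five client addresses: cut extracts, sort orders,
# uniq counts, sort ranks, head trims.
cut -d ' ' -f 1 access.log | sort | uniq -c | sort -rn | head -n 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;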

&lt;p&gt;&lt;strong&gt;Text as the universal interface.&lt;/strong&gt; Not JSON. Not Protocol Buffers. Not YAML with its implicit type coercion. Plain text, readable by humans and machines alike, piped from one process to the next. Peter Salus summarised McIlroy's philosophy in three lines: "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface." A format that has not required a version number in fifty years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Silence is golden.&lt;/strong&gt; A successful command returns nothing. Only failure speaks. The exit code carries the meaning: zero for success, non-zero for failure. In a world of verbose logging frameworks, notification services, and dashboard metrics for metrics, one does rather miss the quiet confidence of &lt;code&gt;$? = 0&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The filesystem as organisational principle.&lt;/strong&gt; Directories for structure, files for content, paths for relationships, permissions for access control. No schema registry. No configuration server. The hierarchy was always there. chmod predates your IAM provider by four decades. Last week's By Design episode on CSV made the case: the filesystem handles hierarchy so the format does not need to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fail noisily.&lt;/strong&gt; Eric Raymond's Rule of Repair: when you must fail, fail in a way that is easy to diagnose. A Unix tool that encounters an error writes to stderr and exits with a non-zero code. It does not return HTTP 200 with an error buried in a JSON body. It does not silently swallow exceptions. It fails, clearly, and lets the operator decide what to do. Quite the contrast to modern systems where "everything is fine" is the default response to catastrophe.&lt;/p&gt;
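
&lt;p&gt;Both principles, silence and noise, in one sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/bin/sh
# Silence is golden: a successful cp prints nothing at all.
cp /etc/hosts /tmp/hosts.bak
status=$?
if [ "$status" -ne 0 ]; then
    # Rule of Repair: diagnose on stderr, signal via exit code.
    echo "backup failed: cp exited with status $status" &amp;gt;&amp;amp;2
    exit "$status"
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;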

&lt;h2&gt;The Proof&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt; embodies these principles in its architecture more faithfully than perhaps any other operating system in active development. The base system is one coherent unit: kernel, userland, and documentation, all developed together in a single source tree. Third-party software lives strictly in &lt;code&gt;/usr/local/&lt;/code&gt;, with its own &lt;code&gt;/usr/local/etc/&lt;/code&gt; for configuration. The separation is clean and inviolable. Each service is a small shell script in &lt;code&gt;rc.d&lt;/code&gt;, using shared functions from &lt;code&gt;rc.subr&lt;/code&gt;, inspectable and composable. One package system: &lt;code&gt;pkg&lt;/code&gt;. One ports tree. One source of truth.&lt;/p&gt;
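
&lt;p&gt;The shape of such a script, for a hypothetical daemon called &lt;code&gt;myd&lt;/code&gt; (the name and path are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/bin/sh
#
# PROVIDE: myd
# REQUIRE: DAEMON

. /etc/rc.subr

name=myd
rcvar=myd_enable
command="/usr/local/sbin/${name}"

load_rc_config $name
: ${myd_enable:=NO}
run_rc_command "$1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;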

&lt;p&gt;FreeBSD's man pages are written in &lt;code&gt;mdoc(7)&lt;/code&gt; semantic markup and are widely regarded as among the most comprehensive and well-maintained documentation in the industry. Documentation is not an afterthought. It is a first-class artefact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;macOS&lt;/strong&gt; has been certified UNIX by The Open Group since October 2007 (Mac OS X 10.5 Leopard). Every Mac ships with a POSIX-compliant userland: sh, grep, awk, sed, make, and the rest. macOS 26 Tahoe holds current certification for both Intel and Apple Silicon. The tools are there. Most users never open Terminal. One does wonder what they think the machine is for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenBSD&lt;/strong&gt; took "do one thing well" to its logical extreme: build the most secure general-purpose operating system ever built, because it refused to do anything else first. Two remote holes in the default install in nearly thirty years. That is not a feature list. That is a philosophy applied with discipline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NetBSD&lt;/strong&gt; applies the same principles to portability: one codebase running on over sixty hardware architectures, from mainframes to toasters. The code is clean enough to port because the abstractions are honest enough to adapt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;illumos&lt;/strong&gt;, the open-source continuation of Sun's OpenSolaris, carries the SVR4 Unix heritage forward. ZFS, DTrace, and Zones were born here: filesystem integrity, dynamic tracing, and operating system virtualisation, each doing one thing, each doing it well.&lt;/p&gt;

&lt;h2&gt;The Departure&lt;/h2&gt;

&lt;p&gt;Linux is not UNIX. It is Unix-like. The Linux kernel itself is remarkably well-disciplined: Linus Torvalds' rule, "We don't break userspace," has held for decades. The kernel syscall interface is stable, predictable, and trustworthy.&lt;/p&gt;

&lt;p&gt;But above the kernel, many distributions are quietly walking away from the principles that made Unix what it is.&lt;/p&gt;

&lt;p&gt;systemd has absorbed init, logging (journald), DNS resolution (resolved), NTP (timesyncd), network configuration (networkd), device management (udevd), and an EFI boot manager into a single project. Its creator, Lennart Poettering, openly states: "systemd's sources do not contain a single line of code originating from original UNIX." He describes this as evolution. Others describe it differently.&lt;/p&gt;

&lt;p&gt;Logging moved from plain text in &lt;code&gt;/var/log/&lt;/code&gt; to binary journals. One cannot grep a binary file. One cannot tail it. One cannot pipe it to awk. The universal interface was replaced with a proprietary query tool. Users report 40x slower query performance compared to grep on plain text. One does rather find that instructive.&lt;/p&gt;

&lt;p&gt;Package management fragmented: apt, snap, flatpak, pip, npm, all managing overlapping software on the same machine. Compare FreeBSD: one package system, one ports tree, one source of truth.&lt;/p&gt;

&lt;p&gt;John Goerzen, a long-time Debian developer, put it plainly: "I used to be able to say Linux was clean, logical, well put-together, and organised. I cannot really say this anymore."&lt;/p&gt;

&lt;p&gt;The Devuan project forked Debian entirely to preserve "Init Freedom" without systemd entanglement. They described systemd as "an intrusive monolith." Rather un-Unix, that.&lt;/p&gt;

&lt;h2&gt;The Point&lt;/h2&gt;

&lt;p&gt;The most reliable systems in computing follow these principles. SQLite: one file, one job, backwards-compatible since 2004. CSV: one format, no opinion, no runtime. ZFS: one filesystem, complete integrity, snapshots and checksums built in. cron: one scheduler, fifty years, never broke.&lt;/p&gt;

&lt;p&gt;The philosophy is not nostalgia. It is engineering. Small, composable, transparent, silent when correct, loud when broken. The systems that follow it outlast the systems that ignore it. Every time.&lt;/p&gt;

&lt;p&gt;Rob Pike's Rule 5: "Data dominates. If you have chosen the right data structures and organised things well, the algorithms will almost always be self-evident."&lt;/p&gt;

&lt;p&gt;Dennis Ritchie said it best: "UNIX is basically a simple operating system, but you have to be a genius to understand the simplicity."&lt;/p&gt;

&lt;p&gt;Eleven episodes in. This is why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/the-unix-philosophy" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>unix</category>
      <category>freebsd</category>
      <category>philosophy</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why We Teach Tools Instead of Foundations</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sun, 05 Apr 2026 07:03:43 +0000</pubDate>
      <link>https://dev.to/vivian-voss/why-we-teach-tools-instead-of-foundations-132h</link>
      <guid>https://dev.to/vivian-voss/why-we-teach-tools-instead-of-foundations-132h</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w5wsaecifdsm0nz5hai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w5wsaecifdsm0nz5hai.png" alt="Pen and ink illustration in Heinrich Zille style. A developer contemplates a tall Jenga-like tower of stacked blocks: solid foundation blocks at the bottom (circuits, binary code, network symbols), a missing and unstable middle section with gaps, and a precarious top section of modern tool blocks (clouds, containers, dashboards) tilting dangerously. The foundations were built, the middle was skipped, the tools were stacked on top anyway." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;On Second Thought — Episode 02&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the 1990s, a computer science degree taught you C, operating systems, compiler theory, networking, and formal languages. You graduated understanding how a machine works from transistor to process. The tools were thin, the understanding was deep, and nobody asked whether you had a Kubernetes certification because Kubernetes did not exist and neither did the problem it solves.&lt;/p&gt;

&lt;p&gt;In 2026, a computer science degree teaches you React. AWS has embedded its certifications into four-year university programmes at Western Governors University and Purdue University Global. The Kubernetes certification saw 250,000 enrolments last year, growing 49% annually. The coding bootcamp market is worth $2.65 billion and projected to reach $14 billion by 2032. React dominates bootcamp curricula for the fourth consecutive year, with popularity up 116% versus 2023.&lt;/p&gt;

&lt;p&gt;Alan Kay said it in 2004: "Most undergraduate degrees in computer science these days are basically Java vocational training." Twenty-two years later, one merely needs to replace "Java" with "React." The observation has aged rather better than the curricula.&lt;/p&gt;

&lt;h2&gt;The Axiom&lt;/h2&gt;

&lt;p&gt;Learn the tool, learn the craft. The industry treats proficiency with a framework as evidence of engineering competence. A Kubernetes certification proves you can operate Kubernetes. It does not prove you understand what a container actually is: a process in a namespace with resource limits, running on a kernel that was doing isolation long before Docker wrote a CLI for it.&lt;/p&gt;
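
&lt;p&gt;The point can be made without Docker at all. On Linux, as root, a sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# A "container" with no Docker in sight: a shell in fresh
# PID and mount namespaces, with /proc remounted to match.
unshare --pid --fork --mount-proc sh -c 'ps ax'
# ps sees only this shell and itself. That is the cage.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;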

&lt;p&gt;The certification market is worth $76 billion and growing. Cybersecurity certifications alone: $3.9 billion. The CKA (Certified Kubernetes Administrator) appears in 54% of Kubernetes-related job postings. The credential has become the proxy for competence. The proxy, however, measures familiarity with an interface, not understanding of a system.&lt;/p&gt;

&lt;h2&gt;The Origin&lt;/h2&gt;

&lt;p&gt;Universities once taught foundations because there were no tools to teach. When the only languages were C and Lisp, you had no choice but to understand memory, pointers, and recursion. The tools were thin. The understanding was mandatory.&lt;/p&gt;

&lt;p&gt;Then the abstraction layers arrived. And with them, the vendors. AWS Academy operates in 6,800 institutions across 120 countries, with a $100 million commitment to education equity. Microsoft offers $100 per year in Azure credits per student. Google partners with 317 universities through Internet2. The curriculum follows the sponsorship. Not because universities are corrupt, but because funding shapes focus, and focus shapes graduates.&lt;/p&gt;

&lt;p&gt;Operating systems, compiler theory, and networking have moved from required to elective in many programmes. Web development, cloud computing, and AI/ML have taken their place. The shift is not conspiratorial. It is economic. Graduates who know React get hired faster than graduates who know how a compiler works. The market rewards the tool, not the understanding.&lt;/p&gt;

&lt;p&gt;One does not blame the universities. One merely notes that it is rather difficult to teach fundamentals when the funding comes from companies selling abstractions.&lt;/p&gt;

&lt;h2&gt;The Cost&lt;/h2&gt;

&lt;p&gt;MIT, one of the finest computer science programmes on earth, launched "The Missing Semester" in 2020. A course teaching shell tools, version control, text editors, and command-line automation. The course exists because MIT observed that its own students "lack knowledge of tools available to them" and "often perform repetitive tasks by hand." The standard CS curriculum, they wrote, "is missing critical topics about the computing ecosystem."&lt;/p&gt;

&lt;p&gt;If MIT's students cannot use grep, one does wonder about the rest.&lt;/p&gt;

&lt;p&gt;The abstraction ceiling is real and well-documented. Developers who use Docker cannot explain namespaces and cgroups. Developers who deploy to Kubernetes cannot configure iptables. Developers who write React cannot explain what the browser does with their code after the build step. Docker, as one engineer put it, "feels like magic until your container gets OOMKilled or you can't reach a port you swore was open. Then you realise you aren't running a mini-virtual machine; you're just running a process in a very fancy cage."&lt;/p&gt;

&lt;p&gt;When the abstraction breaks, and it always does, they have no layer beneath to fall back to. The foundation was never taught. It was skipped. And skipping foundations does not save time. It borrows it, at interest.&lt;/p&gt;

&lt;p&gt;HackerRank's 2025 report found that only 22% of developers are given time for learning and upskilling. 48% must find the time themselves. Employers are "unsure early-career developers can code without heavy AI assistance." The abstraction ceiling is not just a technical problem. It is becoming a hiring problem.&lt;/p&gt;

&lt;h2&gt;The Question&lt;/h2&gt;

&lt;p&gt;Dijkstra wrote: "Computer science is no more about computers than astronomy is about telescopes." He meant that the discipline is about thought, not machinery. One does wonder what he would make of a curriculum built entirely around the telescopes.&lt;/p&gt;

&lt;p&gt;He also wrote: "Universities should not be afraid of teaching radical novelties; on the contrary, it is their calling to welcome the opportunity to do so. Their willingness to do so is our main safeguard against dictatorships, be they of the proletariat, of the scientific establishment, or of the corporate elite."&lt;/p&gt;

&lt;p&gt;One does note the phrase "corporate elite" with a certain quiet interest.&lt;/p&gt;

&lt;p&gt;What if we taught the protocol before the framework? The syscall before the container? The language before the library? What if proficiency meant understanding what happens beneath the abstraction, not merely operating the abstraction itself?&lt;/p&gt;

&lt;p&gt;250,000 Kubernetes certification enrolments last year. One does wonder how many of those candidates could explain what a process is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/why-we-teach-tools" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>computerscience</category>
      <category>education</category>
      <category>webdev</category>
      <category>career</category>
    </item>
    <item>
      <title>AI Code Generation: The Hallucination Tax</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sun, 05 Apr 2026 06:20:50 +0000</pubDate>
      <link>https://dev.to/vivian-voss/ai-code-generation-the-hallucination-tax-1jdb</link>
      <guid>https://dev.to/vivian-voss/ai-code-generation-the-hallucination-tax-1jdb</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpylcqx5qzfm1g2ofdpms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpylcqx5qzfm1g2ofdpms.png" alt="Split illustration comparing two developers using AI code generation. Left (green): developer with clean vanilla code on screen, fewer hallucinations, reviewable in minutes. Right (red): developer overwhelmed by framework code, multiple error-filled monitors, 19.6% fake packages, debugging takes longer than writing. Title: AI Code Generation." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Performance-Fresser — Episode 20&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;"AI will write your code! 55% faster! Ship in half the time!"&lt;/p&gt;

&lt;p&gt;METR ran a randomised controlled trial. Sixteen experienced developers, 246 tasks, mature codebases averaging one million lines of code. Result: developers using AI were 19% slower. Not faster. Slower.&lt;/p&gt;

&lt;p&gt;The developers themselves believed they were 20% faster. They were not. One does admire the confidence.&lt;/p&gt;

&lt;h2&gt;The Hallucination&lt;/h2&gt;

&lt;p&gt;19.6% of AI-recommended packages do not exist. Nearly one in five imports point to packages that were never published. 43% of those hallucinated packages reappear consistently across re-queries. The AI does not guess randomly. It hallucinates with conviction, and it hallucinates the same things repeatedly.&lt;/p&gt;

&lt;p&gt;This is not an edge case. Across 756,000 code samples and 16 models, the pattern is remarkably consistent. Attackers have noticed, naturally. "Slopsquatting" registers packages matching AI-hallucinated names on npm and PyPI, turning the model's confidence into a supply chain attack vector. Rather entrepreneurial of them.&lt;/p&gt;

&lt;p&gt;40% of GitHub Copilot's generated code contains security vulnerabilities (NYU, 89 scenarios, 1,692 programs). Developers with AI access write significantly less secure code than those without, whilst being considerably more confident that their code is secure. Stanford measured this across 47 developers in Python, JavaScript, and C. The less they questioned the AI, the more vulnerabilities they introduced. One does wonder whether "confidence" and "competence" have always been this loosely coupled.&lt;/p&gt;

&lt;h2&gt;The Complexity Tax&lt;/h2&gt;

&lt;p&gt;Here is what the benchmarks reveal but the marketing rather conveniently omits: AI performs measurably worse on complex, abstracted code. Framework-specific conventions, proprietary APIs, deep dependency chains: these are the contexts where hallucination rates climb. The more you ask the model to navigate, the more creatively it invents.&lt;/p&gt;

&lt;p&gt;20.41% of code hallucinations stem from incorrect API usage. The more framework-specific the API, the more the model confuses conventions, invents methods that do not exist, and mixes patterns from different versions. Higher Halstead complexity, larger vocabulary, deeper abstraction: all correlate with higher failure rates in LLM-generated code. One might call it poetic justice: the abstractions designed to simplify development are now the abstractions that confuse the tool designed to simplify development.&lt;/p&gt;

&lt;p&gt;Vanilla code in the language's standard library produces cleaner AI output. Not because the AI is smarter. Because there is less to hallucinate about. Fewer abstractions, fewer proprietary patterns, fewer opportunities for the model to confidently fabricate something that compiles but does not work.&lt;/p&gt;

&lt;p&gt;JavaScript illustrates this rather neatly: 21.3% hallucinated imports versus 15.8% in Python. More packages in the ecosystem means more hallucination surface. The complexity you built for humans to struggle with is now the complexity AI struggles with too. The tax, as one might say, compounds.&lt;/p&gt;

&lt;h2&gt;The Model&lt;/h2&gt;

&lt;p&gt;Not all models are equal, and the tooling around a model matters as much as the model itself.&lt;/p&gt;

&lt;p&gt;Copilot autocompletes lines. It predicts the next token based on your current file. An agentic model in a proper development environment reasons about architecture, reads your project structure, navigates across files, and understands context at a system level. The difference is not incremental. It is categorical.&lt;/p&gt;

&lt;p&gt;Less than 44% of AI-generated code was accepted in the METR study. The developers spent more time evaluating, adjusting, and discarding suggestions than they would have spent writing the code themselves. The tool that was meant to remove friction became the friction. Quite the achievement.&lt;/p&gt;

&lt;p&gt;Choosing the right model for the right task is engineering. Using whatever ships with your IDE is hope. Hope is a marvellous thing. It is not, however, a deployment strategy.&lt;/p&gt;

&lt;h2&gt;The Code Quality Decline&lt;/h2&gt;

&lt;p&gt;GitClear analysed 211 million lines of code and measured the impact of AI adoption on code quality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Refactoring collapsed: from 25% of changed lines (2021) to under 10% (2024)&lt;/li&gt;
&lt;li&gt;Code cloning surged: copy-pasted lines rose from 8.3% to 12.3% (a 48% increase)&lt;/li&gt;
&lt;li&gt;Code churn doubled: the share of lines reverted or updated within two weeks of being written rose to twice the 2021 baseline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI generates code faster. The code gets replaced faster. The net effect on the codebase is not acceleration. It is the accumulation of disposable code that nobody refactors because the AI will cheerfully generate more. One does wonder whether "velocity" was always a euphemism for "volume."&lt;/p&gt;

&lt;h2&gt;The Pattern&lt;/h2&gt;

&lt;p&gt;AI code generation is a brilliant instrument. In the hands of someone who understands both the tool and the codebase, it accelerates work that would otherwise be tedious. In the hands of someone working inside a framework they do not fully understand, it amplifies the confusion at machine speed.&lt;/p&gt;

&lt;p&gt;The answer is not more AI. It is less complexity. Reduce what the model must carry in context. Write code a human can read in one pass: lean, minimal, close to the language's own idioms. The AI will follow, because there is less to get wrong.&lt;/p&gt;

&lt;p&gt;45% of developers say debugging AI code takes longer than writing it themselves. One does suspect this has less to do with the AI and rather more to do with not understanding the framework the AI is writing for. A developer who masters the fundamentals of the language itself reviews AI output in seconds. A developer buried in abstractions cannot review anyone's code, including their own.&lt;/p&gt;

&lt;p&gt;61% say AI produces code that "looks correct but is not reliable." The hallucination is not in the model. It is in the expectation that a tool can navigate complexity you have not mastered yourself.&lt;/p&gt;

&lt;h2&gt;The Trust Erosion&lt;/h2&gt;

&lt;p&gt;The developer community is noticing. Stack Overflow's 2025 survey (65,000 respondents) tells the story:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;84% use or plan to use AI tools (adoption is not the problem)&lt;/li&gt;
&lt;li&gt;Only 29% trust AI accuracy (down from 40% the previous year)&lt;/li&gt;
&lt;li&gt;Favourability dropped from 72% to 60%&lt;/li&gt;
&lt;li&gt;75% still prefer asking a human over trusting AI output (rather telling)&lt;/li&gt;
&lt;li&gt;77% say "vibe coding" is not part of their professional work (one does hope)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The industry adopted the tool before it understood the tool. Now the understanding is catching up, and confidence is dropping. Not because AI got worse. Because expectations met reality. One does find that reality has rather poor timing.&lt;/p&gt;

&lt;h2&gt;The Lever&lt;/h2&gt;

&lt;p&gt;The solution is not abandoning AI. It is reducing what the AI must navigate.&lt;/p&gt;

&lt;p&gt;Write lean code: close to the language's own idioms, minimal abstractions, no framework magic. A function that does one thing, named clearly, understood where it is read. The AI generates better output for this code because there is less to hallucinate about. The human reviews it faster because there is less to misunderstand.&lt;/p&gt;
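&lt;p&gt;A small sketch of the difference, in Python. The class-heavy version is a deliberately contrived strawman of framework-style indirection; the lean version does the same job in one readable function. All names here are invented for illustration:&lt;/p&gt;

```python
# Abstracted: three classes of indirection the reader (and a model)
# must carry in context just to sum some prices.
class PricingStrategy:
    def apply(self, amount):
        return amount

class DiscountStrategy(PricingStrategy):
    def __init__(self, rate):
        self.rate = rate
    def apply(self, amount):
        return amount * (1 - self.rate)

class OrderTotaler:
    def __init__(self, strategy):
        self.strategy = strategy
    def total(self, prices):
        return self.strategy.apply(sum(prices))

# Lean: one function, readable in a single pass, close to the
# standard library, nothing for a model to hallucinate about.
def order_total(prices, discount_rate=0.0):
    """Sum prices, then apply an optional fractional discount."""
    return sum(prices) * (1 - discount_rate)

prices = [10.0, 20.0, 30.0]
assert OrderTotaler(DiscountStrategy(0.1)).total(prices) == order_total(prices, 0.1)
print(order_total(prices, 0.1))  # → 54.0
```

&lt;p&gt;Both versions compute the same number. Only one of them can be reviewed in the time it takes to read it.&lt;/p&gt;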

&lt;p&gt;This is not nostalgia. It is architecture for the AI era. The same principles that made code maintainable for humans now make it reviewable when generated by machines.&lt;/p&gt;

&lt;p&gt;Write lean. The AI will follow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/ai-code-generation" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>codequality</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Update Treadmill</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sat, 04 Apr 2026 07:12:46 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-update-treadmill-2p95</link>
      <guid>https://dev.to/vivian-voss/the-update-treadmill-2p95</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlw9vgbbw32na7xpnjp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlw9vgbbw32na7xpnjp9.png" alt="Split illustration: Left side shows a developer trapped in a hamster wheel made of error messages, build failures, and update notifications (red-tinted chaos). Right side shows the same developer sitting calmly at a minimal desk with a single terminal and coffee (green-tinted calm). Text overlays compare framework update maintenance (75% of dev time) with SQLite's backwards compatibility since 2004, pledged to 2050." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Beta Stories — Episode 06&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Your software worked on Friday. You ran &lt;code&gt;npm update&lt;/code&gt; on Monday. It no longer works on Monday. Nothing in your code changed. Not a single line. The platform moved underneath you.&lt;/p&gt;

&lt;p&gt;One does find it rather charming that the industry calls this "progress."&lt;/p&gt;

&lt;h2&gt;The Promise&lt;/h2&gt;

&lt;p&gt;"Ship early, iterate fast, embrace change." Agile taught an entire generation that stability is something to overcome. Sprints replaced milestones. Continuous deployment replaced releases. The word "done" quietly left the building and has not been seen since.&lt;/p&gt;

&lt;p&gt;The ecosystem took the hint. Angular ships a new major version every six months. Eighteen major versions in ten years. Each with breaking changes, each with an 18-month support window, each politely informing you that the version you spent three months migrating to is now approaching end of life. You are not building software. You are servicing a subscription to your own framework.&lt;/p&gt;

&lt;p&gt;The build tools followed suit. Grunt was the standard in 2013. Gulp replaced it by 2015. Webpack took over by 2017. Now it is Vite, with Webpack satisfaction down to 26%. Four generations of tooling in ten years, each requiring its own configuration language, its own plugin ecosystem, its own mental model. The code you built did not change. The machinery around it was replaced four times.&lt;/p&gt;

&lt;h2&gt;The Decay&lt;/h2&gt;

&lt;p&gt;44% of breaking changes in npm packages arrive in minor and patch releases. The versions that semantic versioning promises are safe. They are not. They merely look safe, which is rather worse.&lt;/p&gt;

&lt;p&gt;Next.js 15 reversed its caching defaults. Code that relied on cached responses now fetches on every request. Next.js 16 dropped Node.js 18 entirely. One does not recall requesting either of these changes, yet here one is, updating CI pipelines on a Tuesday afternoon.&lt;/p&gt;

&lt;p&gt;The migrations that do get announced fare little better. Python 2 to 3 took twelve years. Twelve years for a language migration that was meant to be straightforward. The cost to the industry has been estimated at over $100 billion. AngularJS reached end of life in 2021. Four years later, 1.2 million websites still run it, with 419,000 weekly npm downloads. The migration was not completed. It was abandoned.&lt;/p&gt;

&lt;p&gt;Stripe measured the damage in 2018: developers spend 33% of their time on technical debt. 13.5 hours per week per developer on inefficiencies that produce no value. Across the industry, 75% of development time goes to maintenance. Not features. Not innovation. Keeping the treadmill running so that one may continue running on the treadmill. Splendid use of engineering talent.&lt;/p&gt;

&lt;h2&gt;The Mechanism&lt;/h2&gt;

&lt;p&gt;Framework authors need adoption. Adoption requires novelty. Novelty requires breaking the previous version. Break, migrate, break, migrate. The cycle is not accidental. It is the product.&lt;/p&gt;

&lt;p&gt;Agile provided the philosophy. "Embrace change" became "inflict change." Sprints guarantee a next iteration, never a stable release. The process ensures that software never reaches a finished state, and the vocabulary ensures nobody notices: it is not instability, it is iteration. It is not churn, it is evolution.&lt;/p&gt;

&lt;p&gt;And the tooling enforces it. Dependabot can open 200 pull requests per week for a sizeable monorepo. Not because your code is broken. Because a dependency four levels deep released a patch for a vulnerability in a function you do not call. The automated pull request arrives, the CI runs, the reviewer spends twenty minutes verifying that nothing changed, and the treadmill advances one step. Multiply by fifty projects and you have a full-time job that produces nothing.&lt;/p&gt;

&lt;h2&gt;The Signal&lt;/h2&gt;

&lt;p&gt;SQLite has maintained backwards compatibility since 2004. The developers have committed to preserving the current file format, SQL syntax, and C API until the year 2050. That is not a version policy. That is a promise to every system that depends on it.&lt;/p&gt;

&lt;p&gt;POSIX has been stable since 1988. Thirty-eight years of API consistency across operating systems, vendors, and hardware architectures. A shell script written in 1990 still runs.&lt;/p&gt;

&lt;p&gt;FreeBSD has maintained source compatibility across major releases for over thirty years. Code written for FreeBSD in the 1990s compiles and runs on FreeBSD 14 today. Not because nobody touched the system. Because every change was made with the understanding that other people's software depends on it.&lt;/p&gt;

&lt;p&gt;These are not legacy projects. They are proof that stability is a design decision, not a technical limitation. The treadmill is not inevitable. It is profitable.&lt;/p&gt;

&lt;p&gt;Your framework updates every six months. Your database file format has not changed in twenty-two years. One of them understood what "production" means.&lt;/p&gt;

&lt;p&gt;"Go into frameworks," they said. They did not mention the frame gets replaced every six months whilst you are still hanging in it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/the-update-treadmill" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>agile</category>
      <category>techdebt</category>
    </item>
  </channel>
</rss>
