<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vivian Voss</title>
    <description>The latest articles on DEV Community by Vivian Voss (@vivian-voss).</description>
    <link>https://dev.to/vivian-voss</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3841501%2F2405ae59-aa07-4eb1-80f2-1c3517691538.png</url>
      <title>DEV Community: Vivian Voss</title>
      <link>https://dev.to/vivian-voss</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vivian-voss"/>
    <language>en</language>
    <item>
      <title>One Clock, One Tool, Three Distros</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Mon, 04 May 2026 08:51:37 +0000</pubDate>
      <link>https://dev.to/vivian-voss/one-clock-one-tool-three-distros-nf9</link>
      <guid>https://dev.to/vivian-voss/one-clock-one-tool-three-distros-nf9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fva8koh9tindgj10hym17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fva8koh9tindgj10hym17.png" alt="A warm lamp-lit watchmaker's workshop scene. On the left, a cluttered wooden workbench holds three different mechanical clock movements being worked on simultaneously, each surrounded by its own scattered set of tools and parts; three small handwritten labels propped against them read " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Unix Way — Episode 15&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Ask a Linux admin which time daemon their server runs. Pause for the silence. Then watch them check three different places to find out. On FreeBSD the question does not arise.&lt;/p&gt;

&lt;p&gt;This is not a story about clocks. It is a story about how a basic system service ended up with three implementations, three default behaviours, three configuration files, and zero agreement, and what it costs the people who have to administer the result.&lt;/p&gt;

&lt;h2&gt;
  A Short History of NTP
&lt;/h2&gt;

&lt;p&gt;The Network Time Protocol was designed by David L. Mills at the University of Delaware in 1985. RFC 958 standardised the original protocol; RFC 1305 codified version 3 in 1992; RFC 5905 finalised version 4 in 2010. The reference implementation, called &lt;code&gt;ntpd&lt;/code&gt;, was written by Mills and his students alongside the protocol itself. For most of the next thirty years, &lt;code&gt;ntpd&lt;/code&gt; was the only serious option for synchronising the clock on a Unix machine. The University of Delaware NTP code became the reference, the documentation, and the default.&lt;/p&gt;

&lt;p&gt;The protocol matters because it is harder than it looks. A client cannot simply ask a server "what time is it" and accept the answer; the round-trip introduces latency, the network jitters, the local clock drifts at a rate that depends on temperature and load, and the only way to compute a useful offset is to query several servers, weigh their replies against each other, and apply a clock-discipline algorithm that gradually steers the local oscillator without abrupt jumps. The full NTP protocol embeds this into the daemon. The Simple Network Time Protocol, SNTP, is the same wire format used naively: query one server, accept one answer, set the clock.&lt;/p&gt;
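&lt;p&gt;The core of that exchange is two small formulas from RFC 5905. Given the client's send time, the server's receive and send times, and the client's receive time, one sample of offset and delay falls out. A minimal sketch (the function name is mine):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// One NTP sample from the four timestamps (RFC 5905):
// t0 client send, t1 server receive, t2 server send, t3 client receive.
function ntpSample(t0, t1, t2, t3) {
  return {
    // How far the local clock is from the server's, net of travel time
    offset: ((t1 - t0) + (t2 - t3)) / 2,
    // Round trip minus the server's processing time
    delay: (t3 - t0) - (t2 - t1),
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;A full implementation collects many such samples from several servers, filters the outliers, and disciplines the clock from the survivors; SNTP takes one sample and believes it.&lt;/p&gt;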

&lt;h2&gt;
  FreeBSD: One Tool, In Base
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;ntpd(8)&lt;/code&gt; has been part of the FreeBSD base system since 4.x in 2000. To enable it, an administrator adds a single line to &lt;code&gt;/etc/rc.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;ntpd_enable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YES"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration file &lt;code&gt;/etc/ntp.conf&lt;/code&gt; is shipped with sensible defaults pointing at the FreeBSD pool servers; non-default servers can be added by editing it. The service is started with &lt;code&gt;service ntpd start&lt;/code&gt;. The diagnostic command is &lt;code&gt;ntpq -p&lt;/code&gt;, which prints the list of configured peers, their measured offsets, and the daemon's current opinion about which it is using.&lt;/p&gt;
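&lt;p&gt;A sketch of what the shipped file amounts to (check &lt;code&gt;/etc/ntp.conf&lt;/code&gt; on your release for the exact defaults; the site-local server line is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Query the FreeBSD project's pool; iburst speeds up the initial sync
pool 0.freebsd.pool.ntp.org iburst
# Site-local servers go alongside, in the same file, in the same syntax
# server ntp.example.internal iburst
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;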

&lt;p&gt;For the rare case where the in-base &lt;code&gt;ntpd&lt;/code&gt; does not fit, OpenNTPD is available in ports under &lt;code&gt;net/openntpd&lt;/code&gt;. The most common reason to reach for it is jails: the OpenBSD-derived OpenNTPD is more comfortable binding to specific addresses, which matters when a jail wants its own NTP daemon listening on its own IP. For ninety-nine servers out of a hundred, the in-base &lt;code&gt;ntpd&lt;/code&gt; is the answer, and the answer has not changed in over twenty years.&lt;/p&gt;

&lt;p&gt;The reason this is unremarkable on FreeBSD is the reason most things on FreeBSD are unremarkable: a single team makes the decisions and maintains the result. The base system includes &lt;code&gt;ntpd&lt;/code&gt;. The base system documents &lt;code&gt;ntpd&lt;/code&gt;. The base system upgrades &lt;code&gt;ntpd&lt;/code&gt;. There is no second opinion, because there is no second team.&lt;/p&gt;

&lt;h2&gt;
  Linux: ntpd, the Original
&lt;/h2&gt;

&lt;p&gt;The same &lt;code&gt;ntpd&lt;/code&gt; from David L. Mills was, for most of Linux's existence, the standard time daemon on Linux too. Distributions packaged it under the name &lt;code&gt;ntp&lt;/code&gt; (Debian, Ubuntu, RHEL, and Fedora alike). The configuration file was &lt;code&gt;/etc/ntp.conf&lt;/code&gt;, the service was &lt;code&gt;ntpd&lt;/code&gt;, the diagnostic was &lt;code&gt;ntpq -p&lt;/code&gt;. The world worked.&lt;/p&gt;

&lt;p&gt;By the late 2010s, the consensus had begun to shift. The &lt;code&gt;ntpd&lt;/code&gt; codebase had a complicated history of maintenance, several serious CVEs, and a configuration syntax that newer admins found rather opaque. Maintenance passed to Harlan Stenn's Network Time Foundation, supported at times by the Linux Foundation's Core Infrastructure Initiative; the effort kept the lights on but did not modernise the daemon in ways that distributions wanted. By 2026, every major distribution has moved on. &lt;code&gt;ntpd&lt;/code&gt; is still installable, still works, and is still a perfectly reasonable choice for an administrator who knows it. It is no longer what new servers ship with.&lt;/p&gt;

&lt;h2&gt;
  Linux: chrony, the Modern Replacement
&lt;/h2&gt;

&lt;p&gt;Richard Curnow began work on &lt;code&gt;chrony&lt;/code&gt; in 1997, originally as a personal project for synchronising laptops with intermittent network connectivity. After Curnow stepped back from active development, Miroslav Lichvar at Red Hat picked it up, and &lt;code&gt;chrony&lt;/code&gt; is now developed under Red Hat sponsorship. By 2026 it is the default time daemon on Fedora, RHEL, CentOS Stream, Rocky, Alma, and openSUSE; Arch ships it in the repositories but leaves the choice of time daemon to the user.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;chrony&lt;/code&gt; is a complete NTP implementation. It can act as a client, a server, or both. It supports NTS (Network Time Security, RFC 8915) for authenticated time without the operational burden of NTP autokey. It converges faster than &lt;code&gt;ntpd&lt;/code&gt; on first start, handles long network outages without drifting badly, and behaves better on virtual machines whose hardware clock cannot be trusted. The configuration file is &lt;code&gt;/etc/chrony.conf&lt;/code&gt; on RHEL-family systems and &lt;code&gt;/etc/chrony/chrony.conf&lt;/code&gt; on Debian-family systems, because packagers disagreed on the right path. The daemon is &lt;code&gt;chronyd&lt;/code&gt;. The diagnostic is &lt;code&gt;chronyc tracking&lt;/code&gt; (current sync state) and &lt;code&gt;chronyc sources&lt;/code&gt; (list of upstream servers). The service unit is sometimes &lt;code&gt;chronyd&lt;/code&gt;, sometimes &lt;code&gt;chrony&lt;/code&gt;, depending on which packaging tradition the distribution inherited.&lt;/p&gt;
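&lt;p&gt;A minimal configuration sketch (server names illustrative; the directives are standard &lt;code&gt;chrony&lt;/code&gt; ones):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/chrony.conf or /etc/chrony/chrony.conf, depending on the family
pool 2.pool.ntp.org iburst
# Step the clock if the first measurements are off by more than a second,
# but only for the first three updates; slew gradually after that
makestep 1.0 3
# Persist the measured drift rate across restarts
driftfile /var/lib/chrony/drift
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;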

&lt;p&gt;If a Linux admin is asked to set up time synchronisation on a server in 2026 and given a free hand, the answer is almost always &lt;code&gt;chrony&lt;/code&gt;. The accuracy is better, the codebase is healthier, the maintainer is responsive, and the documentation is current.&lt;/p&gt;

&lt;h2&gt;
  Linux: systemd-timesyncd, the Minimalist
&lt;/h2&gt;

&lt;p&gt;The third answer was added by systemd in version 213 (2014). &lt;code&gt;systemd-timesyncd&lt;/code&gt; is not a full NTP implementation; it is an SNTP client. It queries one server at a time, accepts the answer, and sets the clock. It does not triangulate from multiple sources, it does not weight peers against each other, it does not detect a single misconfigured upstream lying about the time. It is also small, simple, and adequate for the case it was built for: a desktop or a container that just needs the clock to be roughly right.&lt;/p&gt;

&lt;p&gt;Ubuntu adopted &lt;code&gt;systemd-timesyncd&lt;/code&gt; as the default in 16.04 (2016) and has kept it as the desktop default ever since; Ubuntu Server installs &lt;code&gt;chrony&lt;/code&gt; by default in newer releases, but &lt;code&gt;systemd-timesyncd&lt;/code&gt; was the documented standard for several LTS cycles. Many container images use &lt;code&gt;systemd-timesyncd&lt;/code&gt; because it has fewer dependencies than &lt;code&gt;chrony&lt;/code&gt;. The configuration file is &lt;code&gt;/etc/systemd/timesyncd.conf&lt;/code&gt;, the status command is &lt;code&gt;timedatectl status&lt;/code&gt;, and the daemon is part of systemd itself rather than a separate package.&lt;/p&gt;
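&lt;p&gt;A sketch of the file it does read (server names illustrative; &lt;code&gt;NTP=&lt;/code&gt; and &lt;code&gt;FallbackNTP=&lt;/code&gt; are the documented keys):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/systemd/timesyncd.conf
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=ntp.ubuntu.com
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;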

&lt;p&gt;The trap with &lt;code&gt;systemd-timesyncd&lt;/code&gt; is that it ignores &lt;code&gt;/etc/ntp.conf&lt;/code&gt;. An administrator who inherits a server, sees &lt;code&gt;/etc/ntp.conf&lt;/code&gt; with carefully tuned upstream servers, and assumes those servers are being used, may well be wrong: if &lt;code&gt;systemd-timesyncd&lt;/code&gt; is the active daemon, it is reading &lt;code&gt;/etc/systemd/timesyncd.conf&lt;/code&gt;, and &lt;code&gt;/etc/ntp.conf&lt;/code&gt; is decoration. The diagnostic for "which daemon is actually running" on a Linux box is to enumerate the candidates: &lt;code&gt;systemctl status chronyd&lt;/code&gt;, &lt;code&gt;systemctl status ntpd&lt;/code&gt;, &lt;code&gt;systemctl status systemd-timesyncd&lt;/code&gt;, until one of them returns "active". Two of them sometimes do. That conversation is its own kind of fun.&lt;/p&gt;
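&lt;p&gt;The enumeration can be scripted; a minimal sketch (the function name is mine, not a standard tool):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Print the first time-daemon unit that systemd reports as active.
active_time_daemon() {
  for unit in chronyd chrony ntpd systemd-timesyncd; do
    if systemctl is-active --quiet "$unit" 2>/dev/null; then
      echo "$unit"
      return 0
    fi
  done
  return 1
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Dropping the &lt;code&gt;return 0&lt;/code&gt; exposes the pathological case where two of them answer at once.&lt;/p&gt;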

&lt;h2&gt;
  OpenNTPD, the Quiet One
&lt;/h2&gt;

&lt;p&gt;A footnote, but worth one. OpenNTPD was written by Henning Brauer for OpenBSD in 2004, motivated by the same complaints about the reference &lt;code&gt;ntpd&lt;/code&gt; that eventually drove Linux distributions toward &lt;code&gt;chrony&lt;/code&gt;: complicated configuration, a hard-to-audit codebase, and licensing that did not fit OpenBSD's standards. OpenNTPD is included in OpenBSD base, and is available on FreeBSD via the &lt;code&gt;net/openntpd&lt;/code&gt; port. It is a deliberately small implementation of NTP's client and server sides, with a configuration file the size of a postcard. For administrators who want a simple time daemon without the size of &lt;code&gt;ntpd&lt;/code&gt; and without depending on &lt;code&gt;chrony&lt;/code&gt;, OpenNTPD is the quiet third option that has worked steadily for over twenty years.&lt;/p&gt;

&lt;h2&gt;
  The Point
&lt;/h2&gt;

&lt;p&gt;The same problem (the clock drifts, set it from a network source) has three answers on Linux because three different communities, working on overlapping but not identical use cases, arrived at three different daemons, and the distributions that integrate them could not converge on a single recommendation. Fedora chose &lt;code&gt;chrony&lt;/code&gt;. Ubuntu chose &lt;code&gt;systemd-timesyncd&lt;/code&gt; for the desktop and &lt;code&gt;chrony&lt;/code&gt; for the server. Debian sat on &lt;code&gt;ntpd&lt;/code&gt; for years before transitioning. Arch let the user decide.&lt;/p&gt;

&lt;p&gt;FreeBSD did not have that argument. The base team picked &lt;code&gt;ntpd&lt;/code&gt;, kept it for two decades, and the rest of the system was built around the choice. When OpenNTPD became available, it went into ports as a clearly-labelled alternative; it did not displace the in-base default.&lt;/p&gt;

&lt;p&gt;The cost of three answers is paid every time a new admin inherits a Linux server and has to discover which daemon is currently running, which config file it is reading, and which it ought to be reading. The cost of one answer is a config file you have already seen, on a system you have already learned.&lt;/p&gt;

&lt;p&gt;This is not a question of which daemon is best. &lt;code&gt;chrony&lt;/code&gt; is, by most measures, the best of the four. The question is what it costs to need to know that, every time, on every system, in a way that varies by distribution and by year. On FreeBSD the answer to "what time daemon does this server run" is in &lt;code&gt;/etc/rc.conf&lt;/code&gt;, has been since 2000, and is the same on the next server too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/one-clock-one-tool-three-distros" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>freebsd</category>
      <category>linux</category>
      <category>ntp</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why We Reach for the Layer</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sun, 03 May 2026 07:37:36 +0000</pubDate>
      <link>https://dev.to/vivian-voss/why-we-reach-for-the-layer-4lio</link>
      <guid>https://dev.to/vivian-voss/why-we-reach-for-the-layer-4lio</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdftzun0nlxio17rgzyt8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdftzun0nlxio17rgzyt8.png" alt="A pen-and-ink illustration in the tradition of a 1900s newspaper plate. On the left side, a quote by Antoine de Saint-Exupéry reads: " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;On Second Thought — Episode 06&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The ORM hides the SQL. The cache hides the ORM. The service mesh hides the services. The operator hides the YAML, which already hid the kubelet, which already hid the container, which already hid the process. By Tuesday, nobody quite remembers what the original problem was. They are too busy configuring its sixth wrapper.&lt;/p&gt;

&lt;p&gt;This is the post about that wrapper.&lt;/p&gt;

&lt;h2&gt;
  The Axiom
&lt;/h2&gt;

&lt;p&gt;When something does not work as wished, one adds a layer on top. The pattern is invisible because it is universal. We do it in code, in infrastructure, in process, in organisation. We wrap APIs in clients, clients in adapters, adapters in service objects, service objects in factories. We wrap deploys in pipelines, pipelines in operators, operators in platforms, platforms in portals. We wrap teams in tribes, tribes in chapters, chapters in centres of excellence. The plaster is the one tool that fits every wound, and the wound, rather conveniently, is never the one that gets examined.&lt;/p&gt;

&lt;p&gt;The reflex is so deeply trained that the alternative does not occur as an option. The question "could we remove the underlying thing instead of wrapping it?" is rarely asked because the team that built the underlying thing is in the next room, the project that delivered it is in the previous quarter's review, and the engineer who would have to do the removal has thirteen tickets that close more cleanly with a wrapper. So the wrapper goes in, and a year later, the wrapper has its own wrapper.&lt;/p&gt;

&lt;h2&gt;
  The Origin
&lt;/h2&gt;

&lt;p&gt;Layered architecture has a perfectly respectable origin. Edsger Dijkstra published "The Structure of the THE-Multiprogramming System" in CACM in May 1968, introducing disciplined layering as a means of bounding complexity. Each layer presented a strictly defined interface to the layer above it; an engineer could reason about one layer without holding the entire stack in their head. It was a brilliant move and remains one.&lt;/p&gt;

&lt;p&gt;David Parnas, four years later, gave the underlying principle its enduring name. His 1972 paper, "On the Criteria To Be Used in Decomposing Systems into Modules", introduced Information Hiding: a module should hide what is likely to change behind a stable interface, so that change in one place does not propagate to all the others. Layers were one application of the principle. The intent was to contain complexity, not to defer it.&lt;/p&gt;

&lt;p&gt;Somewhere between Parnas's paper and the third generation of cloud abstractions, the verb shifted. Containing became postponing. The layer that once prevented the lower one from leaking now exists primarily to defer the moment in which one would have to look at it. The Kubernetes operator does not hide a stable abstraction; it hides a YAML format that nobody wishes to read. The retry decorator does not bound a clean interface; it papers over an upstream service that has never been made reliable. The ORM does not abstract the database; it postpones the conversation about what the queries should actually be.&lt;/p&gt;

&lt;p&gt;The vocabulary survived. The discipline did not. One does notice the inversion.&lt;/p&gt;

&lt;h2&gt;
  The Cost
&lt;/h2&gt;

&lt;p&gt;Manny Lehman, working at IBM in the early 1970s, formulated what came to be called the laws of software evolution. The second law, in its mature form: the complexity of an evolving system increases unless explicit work is done to maintain or reduce it. Few sentences in computer science have aged this well. Lehman compared it, half-seriously, to the second law of thermodynamics: entropy is the default; order requires energy. The work to maintain or reduce, in practice, is the work that nobody is funded to do, because it produces no new feature, ships no new ticket, and leaves no diagram for the architecture review.&lt;/p&gt;

&lt;p&gt;Defensive code-paths multiply as a consequence. Every API call gets wrapped in retries. Every value gets wrapped in null-checks. Every cache gets wrapped in invalidation logic. Phil Karlton, working at Carnegie Mellon and later at Netscape, is widely credited with the observation that there are two hard things in computer science: cache invalidation and naming things. The line was later popularised by Tim Bray. We have, with rather industrious enthusiasm, made the first one our default architectural pattern, and we still cannot agree on the name of the variable that holds the result.&lt;/p&gt;

&lt;p&gt;The cost is not only the cache. The cost is what happens to the people inside the system. The Senior Engineer's day shifts from building to understanding. She spends the morning tracing why a request that should take six milliseconds is taking eight hundred, walks through three retry decorators, two adapter classes, a service mesh sidecar, and a fallback strategy that has not been triggered since 2023, and finds at the bottom of all of it a database query that wants for an index. The index goes in; the eight hundred milliseconds become six. The retry decorator stays. The adapter stays. The sidecar stays. Removing them would be another quarter of work, and the quarter has other plans.&lt;/p&gt;

&lt;p&gt;The Junior Engineer never gets to building, because the layers between her and the system have grown taller than the system itself. She is taught the operator before the syscall, the framework before the language, the platform before the protocol. When the abstraction breaks (and it always does), there is no layer beneath to fall back to. The foundation was never taught. It was skipped.&lt;/p&gt;

&lt;p&gt;This was the substance of Episode 02. It is also the substance of this one. The two are linked because the layer-reaching reflex and the foundation-skipping curriculum are two halves of the same economy: an industry that compounds abstractions because compounding abstractions can be hired for, certified for, conferenced about, and sold. Reduction cannot be hired for. There is no certification.&lt;/p&gt;

&lt;h2&gt;
  The Question
&lt;/h2&gt;

&lt;p&gt;Reduction is the hardest discipline in software. It looks easy from the outside because the result is, by definition, small. The result is small because someone has spent twenty years making it small.&lt;/p&gt;

&lt;p&gt;SQLite, the most widely deployed database engine on earth, carries roughly 156,000 lines of mature C code (the canonical figure published by the project for version 3.42, May 2023). It has stayed one library because, every time a feature was proposed, the maintainers asked whether the existing surface could be made to do the work instead. The test suite is 92 million lines. The library is 156,000. That ratio is not an accident. It is the operational definition of reduction as a discipline.&lt;/p&gt;

&lt;p&gt;awk has run essentially unchanged since Aho, Weinberger, and Kernighan published it at Bell Labs in 1977. Forty-nine years on, the language is the same language. Engineers who learned it in the 1980s can read code written this morning. The design was small enough to be finished, and the maintainers had the discipline to recognise that "finished" is a category that exists.&lt;/p&gt;

&lt;p&gt;pf, the OpenBSD packet filter, has been one configuration file with one syntax since OpenBSD 3.0 in December 2001. Daniel Hartmeier began writing it in June 2001, after IPFilter was removed for licensing reasons. The syntax has been refined; the model has not been replaced. Twenty-five years later, an administrator who learned pf in 2003 can read a pf.conf written this week. There is no v2. There is no successor. There is no parallel implementation that one is encouraged to migrate to. There is one tool that does the work it was built to do.&lt;/p&gt;

&lt;p&gt;None of these were elegant by accident. They were elegant by patience, which is the one resource the sprint cycle does not allocate. Each of them required a maintainer or a small team to refuse, repeatedly, the temptation to add. Refusing is not a quarterly metric. It is not a ticket category. It is not a Slack reaction. It is the silent work that holds the small body of software that the rest of the industry quietly stands on without thanking.&lt;/p&gt;

&lt;p&gt;The deeper question is not whether we should layer less. The deeper question is what kind of organisation, what kind of contract, what kind of incentive structure could allow reduction to be a fundable activity rather than a private virtue practised by the few. Today it is funded only by accident: by maintainers who are paid for something else, by retired engineers donating evenings, by small institutions that never grew into the structures that would have stopped them from doing it.&lt;/p&gt;

&lt;p&gt;What would happen if a team were given one sprint, just one, not to add a layer but to remove one? Who has the authority to ask the question? Who would carry the cost of the answer being yes?&lt;/p&gt;

&lt;p&gt;The plaster is cheap. The wound is not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/why-we-reach-for-the-layer" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>technicaldebt</category>
      <category>engineering</category>
      <category>reduction</category>
    </item>
    <item>
      <title>The Browser That Brought Its Own AI</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sat, 02 May 2026 07:05:51 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-browser-that-brought-its-own-ai-30fd</link>
      <guid>https://dev.to/vivian-voss/the-browser-that-brought-its-own-ai-30fd</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjikzfp9wwpk9cnjyo8dn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjikzfp9wwpk9cnjyo8dn.png" alt="A cyberpunk editorial illustration showing a desktop web browser as an architectural cross-section against a deep night-blue background with magenta and cyan glow. In the centre, a powerful black computer chip labelled " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Not in the Brief, Episode 01&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Open chrome://on-device-internals in a new tab. If your machine qualifies, you will see a multi-gigabyte language model that Chrome has downloaded onto your disk, listed with a version number and a file size. Any website you visit can call this model through an API in JavaScript. There is no permission prompt. There never was. This is the first episode of &lt;em&gt;Not in the Brief&lt;/em&gt;: a series on the documented mechanics of software that has been added to your machine without you being asked. We start with the largest target available, because the change is hidden in plain sight, and because almost everyone is affected.&lt;/p&gt;

&lt;h2&gt;
  What Is Built In
&lt;/h2&gt;

&lt;p&gt;Chrome ships seven on-device AI APIs, all backed by a foundation model called Gemini Nano. Gemini Nano runs locally; the inference does not leave the machine. The APIs, in order of relevance, are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LanguageModel&lt;/strong&gt;: the Prompt API. Free-form text in, text out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summarizer&lt;/strong&gt;: text summarisation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Translator&lt;/strong&gt;: language translation between supported pairs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writer&lt;/strong&gt;: short-form generative writing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rewriter&lt;/strong&gt;: text rewriting and tone change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proofreader&lt;/strong&gt;: grammar and style correction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LanguageDetector&lt;/strong&gt;: language identification.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The APIs have been generally available to extensions since Chrome 138 (2025). For ordinary web pages, the APIs are in Origin Trial and behind a feature flag for end users; in practice, this means a website needs an Origin Trial token, and the browser must have the model loaded. The trial route does not require user consent at the page level either.&lt;/p&gt;

&lt;h2&gt;
  How It Got On Your Machine
&lt;/h2&gt;

&lt;p&gt;Announced at Google I/O in May 2024. The decision logic Chrome applies today, as documented by Google's own developer pages, is the following.&lt;/p&gt;

&lt;p&gt;When the user starts Chrome on a qualifying device, the browser checks four things in the background: more than 4 GB of VRAM available, at least 16 GB of system RAM, at least 22 GB of free space on the volume holding the Chrome profile, and a supported operating system (Windows 10 or 11, macOS 13 or later, Linux, or ChromeOS on a Plus device). On mobile Chrome the entire feature is unavailable, so phones are out. Desktops, laptops and Chromebook Plus devices are in.&lt;/p&gt;

&lt;p&gt;When all four conditions hold and any local activity (a relevant page, an extension, an Origin Trial token) triggers the API, Chrome downloads Gemini Nano in the background, on an unmetered connection. There is no installation prompt. No "Chrome would like to download a 2 GB model" dialog. The model arrives, lives on disk, and updates itself.&lt;/p&gt;

&lt;p&gt;If the free disk space later drops below 10 GB, Chrome removes the model. If the eligibility criteria are not met for 30 days, the model is purged. Both removals happen without user interaction; both reinstall when the conditions return.&lt;/p&gt;

&lt;h2&gt;
  How A Website Talks To The Model
&lt;/h2&gt;

&lt;p&gt;The API surface is straightforward. A web page in JavaScript writes:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const session = await LanguageModel.create();&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If the model is available, a session is returned and the page can call &lt;code&gt;session.prompt("...")&lt;/code&gt; to get text back. There is no permission dialog at any point in this exchange. Compare the microphone or camera: an equivalent one-line call, &lt;code&gt;navigator.mediaDevices.getUserMedia()&lt;/code&gt;, has triggered a browser-level prompt asking the user ever since it shipped. The Prompt API does not ask.&lt;/p&gt;
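&lt;p&gt;The defensive shape a page needs is small. A sketch, with the global passed in as a parameter so the helper can be exercised outside a browser (the helper name is mine; the API surface follows the description above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Return a model session if one is usable on this machine, else null.
// `lm` is the page's LanguageModel global, or undefined where absent.
async function summonModel(lm) {
  if (!lm) return null;                    // API not exposed at all
  const state = await lm.availability();
  if (state !== 'available') return null;  // anything else: not ready here
  return lm.create();                      // note: no permission prompt fires
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;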

&lt;p&gt;The cross-origin story is partial. A top-level page on &lt;code&gt;example.com&lt;/code&gt; can call the API. A same-origin iframe can call it. A cross-origin iframe (an embedded ad, an embedded widget) needs the parent page to set the &lt;code&gt;allow="language-model"&lt;/code&gt; attribute on the iframe. This is the only permission boundary in the architecture, and it lives between iframes, not between site and user.&lt;/p&gt;
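&lt;p&gt;That boundary is one attribute on the parent page (origins illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&amp;lt;!-- example.com opts the embedded origin in; without this, the embed gets nothing --&amp;gt;
&amp;lt;iframe src="https://widget.example.net/embed" allow="language-model"&amp;gt;&amp;lt;/iframe&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;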

&lt;p&gt;The capability probe &lt;code&gt;LanguageModel.availability()&lt;/code&gt; returns one of four values: &lt;code&gt;'unavailable'&lt;/code&gt;, &lt;code&gt;'downloadable'&lt;/code&gt;, &lt;code&gt;'downloading'&lt;/code&gt;, or &lt;code&gt;'available'&lt;/code&gt;. Any page that calls this method learns whether the visitor is on a model-capable device. This is a hardware-class probe without a permission prompt, in a browser whose Privacy Sandbox was discontinued in April 2025 and whose own advertising policy has explicitly permitted digital fingerprinting since February 2025. It is one further entry in a list of more than thirty fingerprinting vectors active in production Chrome today.&lt;/p&gt;

&lt;p&gt;There is no rate limit on the website's calls. There is no per-origin token quota. The compute cost is paid by the user's CPU, GPU and battery; the website pays nothing.&lt;/p&gt;

&lt;h2&gt;
  Why The Permission Prompt Is The Story
&lt;/h2&gt;

&lt;p&gt;Browsers express their security policy through their dialogs. Geolocation needs one. Microphone, camera, screen capture, notifications, clipboard, USB, MIDI, Bluetooth: all need them. Local language-model inference does not. That is not a small detail. It is an architectural statement: this feature is classified, by Chrome, as belonging to the standard platform, not to the privileged plane. It sits next to the JavaScript engine, not next to the camera.&lt;/p&gt;

&lt;p&gt;A reasonable user reading the brief would expect to be asked about a feature that uses several gigabytes of disk, runs on the user's GPU, and consumes battery whenever a website calls it. The user is not asked. The brief did not specify this; the user accepted it by accepting Chrome.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means For Risk
&lt;/h2&gt;

&lt;p&gt;Three risk lines are real, and they should be named without panic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fingerprinting and tracking.&lt;/strong&gt; &lt;code&gt;LanguageModel.availability()&lt;/code&gt; is a fingerprinting input. Combined with the canvas, font, audio, language, GPU and timing vectors that are already in production use across roughly thirteen percent of the top 20,000 websites (per a 2025 ACM study), the addition of a model-availability probe contributes to a higher-entropy fingerprint. In a browser without Privacy Sandbox and with explicit policy permission for fingerprinting, this is a measurable degradation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Indirect prompt injection.&lt;/strong&gt; Web content goes into the model. Model output goes back into web UIs. A page that includes a hidden instruction in user-readable text can attempt to coerce the model into producing output that influences subsequent actions. OWASP found indirect prompt injection in 73 percent of production AI deployments it audited in 2024. Google has responded with a five-layer defence and a separate "User Alignment Critic" model that watches the agentic Gemini sidebar; that response is itself a recognition that the threat class is severe. The on-device Prompt API does not face the agentic-action surface, but a website that uses it to summarise web content for the user is one injection away from displaying whatever the attacker planted in that content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hijack and privilege boundaries.&lt;/strong&gt; CVE-2026-0628, disclosed in early January 2026 and fixed by Google in mid-January, allowed Chrome extensions with basic permissions to hijack the Gemini Live panel and inherit camera, microphone and file-access privileges through the panel's surface. The panel is not the same component as the Prompt API, but the disclosure shows that the boundary between the AI surfaces in Chrome and the rest of the browser's privilege system has been crossed at least once.&lt;/p&gt;

&lt;p&gt;These are not theoretical risks. They are documented in Chrome's own security advisories, in OWASP's annual report and in disclosures by Palo Alto's Unit 42, Malwarebytes and SecurityWeek.&lt;/p&gt;

&lt;h2&gt;
  
  
  How To See It On Your Own Machine
&lt;/h2&gt;

&lt;p&gt;Three chrome:// pages, one enterprise policy, one DevTools check. Five minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;chrome://on-device-internals.&lt;/strong&gt; The model status, version, file size and update history. If the page shows a Gemini Nano entry with a version number and a size, the model is on your disk and ready to be called.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;chrome://flags/#optimization-guide-on-device-model.&lt;/strong&gt; Set this to Disabled. Relaunch Chrome. The model will not be downloaded; if it has already been downloaded, it will be removed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;chrome://flags/#prompt-api-for-gemini-nano.&lt;/strong&gt; Set this to Disabled. Relaunch. The web-page API surface is gone; &lt;code&gt;LanguageModel.availability()&lt;/code&gt; reports &lt;code&gt;'unavailable'&lt;/code&gt; and &lt;code&gt;LanguageModel.create()&lt;/code&gt; fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise policy.&lt;/strong&gt; On Windows, set the registry value &lt;code&gt;HKLM\SOFTWARE\Policies\Google\Chrome\GenAILocalFoundationalModelSettings = 1&lt;/code&gt; (DWORD). On macOS and Linux there are equivalent profile keys. Chrome will respect this and the model will not return through future updates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevTools.&lt;/strong&gt; Open a tab on any site, open DevTools, switch to the Network panel, and filter for &lt;code&gt;optimizationguide-pa.googleapis.com&lt;/code&gt;. This is the model and configuration update server. You will see traffic when Chrome checks for or pulls model updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern Is Not Chrome-Only
&lt;/h2&gt;

&lt;p&gt;This is a series, and Chrome is the first episode because it is the largest. The same pattern is in other browsers, each in its own form.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft Edge&lt;/strong&gt; ships a Copilot Sidebar that is enabled by default for many users; the toggle lives at &lt;code&gt;edge://settings/copilot&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brave Browser&lt;/strong&gt; ships Leo AI built in, with a cloud-mode toggle in Settings → Leo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Firefox&lt;/strong&gt; added an AI Chat sidebar with a configurable provider in &lt;code&gt;about:preferences#general&lt;/code&gt; → AI Chatbot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Arc&lt;/strong&gt; ships AI features in Settings → AI; the project is now under The Browser Company.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apple Safari&lt;/strong&gt; integrates Apple Intelligence on supported macOS and iOS versions, configured under System Settings → Apple Intelligence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture varies. The pattern is the same: an AI feature added by default, exposed to web pages or to the user, with the awareness path tucked into a Settings page that few users will visit unprompted. We will take each of these in turn over the coming episodes, together with the awareness path for each.&lt;/p&gt;

&lt;h2&gt;
  
  
  What To Take Home
&lt;/h2&gt;

&lt;p&gt;If your browser ships an API without a permission prompt, the browser has stated where that API stands. It is treating the feature like part of the standard platform: not a privileged surface that needs explicit user consent, but a default capability of the runtime.&lt;/p&gt;

&lt;p&gt;That is not an argument against Chrome's built-in AI. It is an argument for knowing it is there. The feature is real. The performance is genuinely good. The local-model architecture is, in some respects, more privacy-respecting than a cloud round-trip would have been. The quarrel is not with the existence of the feature. It is with the silence of its arrival, and with the dialog that the browser used to show and now does not.&lt;/p&gt;

&lt;p&gt;The browser used to ask about the camera. It does not ask about the model. The line moved. The dialog did not.&lt;/p&gt;

&lt;p&gt;If you have not looked, you do not know what is on. The looking is not difficult. It just has to start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/the-browser-brought-its-own-ai" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt;, System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>browser</category>
      <category>ai</category>
      <category>privacy</category>
      <category>awareness</category>
    </item>
    <item>
      <title>The Subscription You Did Not Ask For</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Fri, 01 May 2026 09:05:18 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-subscription-you-did-not-ask-for-15o7</link>
      <guid>https://dev.to/vivian-voss/the-subscription-you-did-not-ask-for-15o7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3j1cuswjx473cl0jkfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3j1cuswjx473cl0jkfm.png" alt="A hand-drawn editorial illustration of a small modern design studio in landscape view. A young developer with long dark wavy hair and a pink cat-ear headset, sits at a wooden desk holding an open brown leather wallet, looking down into it with quiet sadness. Behind her, a Studio Display shows the readable email " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In the Net, Episode 01&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In 2012, Adobe shipped Creative Suite 6. A studio could buy the Master Collection once, for around two thousand five hundred euros per seat (US list price was 2,599 dollars), and run it for years. Thirteen years later, the same studio leases Adobe Creative Cloud All Apps for around 743 euros per seat per year (Adobe Germany list price, May 2026). The tools have not got dramatically better. The architecture under the licence has.&lt;/p&gt;

&lt;p&gt;This is the first episode of &lt;em&gt;In the Net&lt;/em&gt;: a series on the documented mechanics of vendor lock-in. The premise is simple. Every platform tells you how to come in. The architecture tells you whether you can leave. We will look at how each major platform's promise was built, what the lock-in mechanisms are, why the documented exit does not work as advertised, and what the realistic escape routes look like.&lt;/p&gt;

&lt;p&gt;We start with Adobe because the pattern is at its clearest. The promise was good. The promise was kept for thirty years. The architecture under the subscription, however, is a separate story.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Promise, Honestly
&lt;/h2&gt;

&lt;p&gt;Adobe Creative Suite was, for three decades, the most defensibly chosen tool stack in design and publishing. Photoshop appeared in 1988. Illustrator in 1987. InDesign in 1999. Premiere in 1991. The applications were excellent, kept in active development, and produced the file formats that the entire commercial design industry settled on.&lt;/p&gt;

&lt;p&gt;A studio buying CS6 in 2012 made a rational decision. Pay once, run forever. Major upgrade cycle every three years. Predictable cost, predictable workflow, the same key bindings their staff had learned over a decade. Nothing about that was a trap.&lt;/p&gt;

&lt;p&gt;This matters. Lock-in stories are most useful when they begin with the promise that was real. Adobe's promise was real. The mechanism that came after did not undo the original promise. It changed the architecture under it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Day the Architecture Changed
&lt;/h2&gt;

&lt;p&gt;In May 2013, Adobe announced that CS6 was the last perpetual-licence release of Creative Suite. Going forward, the only way to use Photoshop, Illustrator, InDesign and the rest would be Creative Cloud, a monthly subscription. The applications continued to ship. The licence did not.&lt;/p&gt;

&lt;p&gt;The transition was managed elegantly. Existing CS6 owners could keep using their software. New customers and upgrades, however, were on the subscription path. Within two years, the perpetual market was effectively closed.&lt;/p&gt;

&lt;p&gt;Three lock-in mechanisms were wired into the new architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The First Hook: File-Format Binding
&lt;/h2&gt;

&lt;p&gt;Adobe's file formats are proprietary and rich. PSD carries Smart Objects, Layer Effects, Adjustment Layers, and a depth of state that no open format mirrors completely. AI carries Illustrator's specific path semantics. INDD, more than the others, has effectively no portable equivalent: IDML exists as an interchange format, but loses live links, master-page state, and complex layout work.&lt;/p&gt;

&lt;p&gt;The judgement is not that Adobe ought to publish its file formats. The judgement is that the file format is the lock. A studio that has spent a decade building PSDs is not just buying software; it is paying rent on its own back catalogue.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Second Hook: Cloud-Library Binding
&lt;/h2&gt;

&lt;p&gt;Creative Cloud syncs fonts, brushes, asset libraries, colour palettes, and shared resources between the applications and across teams. Cancel the subscription, and the synchronised assets become unavailable. Some assets remain local; others depend on the licence. The working environment is not portable.&lt;/p&gt;

&lt;p&gt;This is a more recent hook than the file format. It came in over the years between 2014 and 2020, with Creative Cloud Libraries, Adobe Fonts (formerly Typekit), and the deeper integration between Photoshop and Lightroom Cloud. Each individual feature was a real productivity gain. The cumulative effect was that the user's working environment lived inside Adobe's account, not on their own machines.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Third Hook: Cancellation Architecture
&lt;/h2&gt;

&lt;p&gt;Adobe's Annual Plan, billed monthly, carries an Early Termination Fee equal to 50 percent of the remaining months on the contract. A user three months into a twelve-month plan who cancels owes nine months at half rate.&lt;/p&gt;
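&lt;p&gt;The arithmetic is worth making concrete. A sketch, using an illustrative monthly rate rather than any current list price:&lt;/p&gt;

```javascript
// Illustrative Early Termination Fee arithmetic: 50 percent of the
// months remaining on the annual term. The monthly rate below is an
// example figure, not an Adobe list price.
function earlyTerminationFee(monthlyRate, monthsElapsed, termMonths) {
  const remaining = termMonths - monthsElapsed;
  return 0.5 * remaining * monthlyRate;
}

// Three months into a twelve-month plan at 62 euros a month:
// 0.5 * 9 * 62 = 279 euros owed just to leave.
```

&lt;p&gt;The fee shrinks as the term runs down, which is precisely the mechanic that makes staying always look cheaper than the month in which you decide to go.&lt;/p&gt;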

&lt;p&gt;In June 2024, the US Department of Justice, on behalf of the Federal Trade Commission, filed suit against Adobe alleging that this fee was hidden behind layered cancellation flows and that the company had violated the Restore Online Shoppers' Confidence Act. The case (FTC v. Adobe) is, as of this writing in 2026 and to my knowledge, still in motion. Whatever the eventual outcome, the architecture itself is documented in the agreements.&lt;/p&gt;

&lt;p&gt;A subscription that costs more to leave than to keep is not, by any honest reading, a subscription one can leave at will. The architecture says what the marketing copy does not.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Standing: Market Power and How the User Is Treated
&lt;/h2&gt;

&lt;p&gt;Two further dimensions matter, and neither is reducible to price. The first is market power. The second is how the user is treated in the contracts the user has signed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Market position.&lt;/strong&gt; Adobe holds, by industry-aggregator estimates, more than eighty percent of the creative-software market. Photoshop alone accounts for around forty-two percent of the graphic-design segment, InDesign for around twenty-six percent, Illustrator for around twelve percent (multiple market-research aggregators citing 2024 figures, e.g. ElectroIQ, Bayelsawatch). The consequence is that Adobe's three core file formats, PSD, AI and INDD, are the de-facto industry standard. A studio that has migrated to Affinity, Krita or any other alternative still receives PSDs from clients, suppliers and partners. The lock-in is not only at the studio level. It is at the level of the entire industry's file-exchange protocol. An agency that holds out alone has not actually escaped; it has only insulated its own machines, while still negotiating in someone else's format.&lt;/p&gt;

&lt;p&gt;This is the difference between a Lock-in and a Quasi-Monopoly. The Lock-in keeps the individual customer. The Quasi-Monopoly keeps the customer's customers, and through them, the customer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How the user is treated.&lt;/strong&gt; In February 2024, Adobe quietly updated its Terms of Service to include language that, in many readings, granted Adobe broad rights to access user content, including to train generative AI models. The change went largely unnoticed for months. In June 2024, Adobe pushed a routine re-acceptance pop-up to Creative Cloud and Document Cloud users, and many users encountered the new language for the first time as a forced confirmation: accept, or lose access to your tools. The artist community responded with a sustained backlash. Adobe issued clarifications on its own blog (10 June 2024), and on 24 June 2024 published revised Terms that explicitly state Adobe does not train generative AI on customer content (with the exception of submissions to the Adobe Stock marketplace), and that the company never has.&lt;/p&gt;

&lt;p&gt;The clarification is, as far as one can verify, accurate. The clarified Terms are clearer than the prior wording. The question that remains is what was being granted before the backlash, and why a company with this market position needed to be told by its own users that this is not the language a creative tool's licence ought to contain. The original wording was not a slip. It was the architecture of the contract a corporation believed it could propose to its captive user base. That belief, and not the words themselves, is the part of this story that does not get corrected.&lt;/p&gt;

&lt;p&gt;These two dimensions, market position and contractual stance, change the reading of the price. The price is not only what the studio pays in money. The price is also what it accepts, by signing the contract, about the rights it retains over its own work, and about the position from which the contract is offered. A subscription is the lower-cost end of the price. The other end is harder to put a number on.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Exit That Isn't
&lt;/h2&gt;

&lt;p&gt;Adobe will tell you the exits exist. They are correct, technically. They are wrong, operationally.&lt;/p&gt;

&lt;p&gt;Export your PSD to a portable format. Smart Objects flatten. Layer Effects sometimes lose their parameters. Adjustment Layers may bake into the underlying pixels. The file opens elsewhere; it is no longer the file you built. Anyone who has tried to recover an old PSD in another tool has met this first-hand.&lt;/p&gt;

&lt;p&gt;Cancel your subscription. The Early Termination Fee triggers. The flow is layered. Customers report that the cancellation requires multiple confirmations, optional retention offers, and live-chat negotiation. The exit is documented. It is also a fence.&lt;/p&gt;

&lt;p&gt;This is a Lock-in by design. Not in the sense that someone in a boardroom said "let us trap our users". In the sense that the architecture, taken as a whole, produces the outcome that users do not leave even when they would prefer to. That is what an architecture is: the outcome the structure makes likely, regardless of intent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Price
&lt;/h2&gt;

&lt;p&gt;For a studio of ten seats on Creative Cloud All Apps, the recurring cost is on the order of seven thousand four hundred euros a year, every year, with no terminating event. Compare with the perpetual model: ten CS6 Master Collection licences in 2012 would have cost around twenty-five thousand euros and remained usable for the years the team chose to run them. The subscription model has been more profitable for Adobe and more expensive for studios across any horizon longer than a few years.&lt;/p&gt;

&lt;p&gt;For a studio that wants to leave, the cost is migration. Months of file conversion, weeks of retraining, and a tail of legacy work that will, rather inevitably, not transfer cleanly. This is not a fee Adobe charges; it is a cost the architecture imposes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Escape Route
&lt;/h2&gt;

&lt;p&gt;The alternatives are real and professional, but the landscape has shifted in 2024 and 2025. Two of the strongest names have changed owners. The migration paths are still good. They simply require attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Affinity, by Serif (now Canva).&lt;/strong&gt; In March 2024, Canva acquired Serif for approximately three hundred million pounds (around three hundred and fifty million euros at the time, as reported by Canva's own newsroom). In October 2025, the three Affinity applications (Photo, Designer, Publisher) were consolidated into a single application called Affinity by Canva, version 3.0. The new model is free with a Canva account. AI features are locked behind a Canva Pro subscription, around one hundred and ten euros a year. The original Pledge given at acquisition included a commitment to perpetual licences; that commitment was, by the publisher's own communication at the version 3.0 launch, replaced when the new model launched. Affinity remains a strong one-to-one replacement for the Adobe core, with direct PSD and AI import. It is also itself an example of the pattern this series tracks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pixelmator Pro, by Pixelmator (now Apple).&lt;/strong&gt; Apple announced its acquisition of the Pixelmator team in November 2024 and completed it in February 2025 (MacRumors, TechCrunch). The standalone perpetual licence is still listed on the Mac App Store at around fifty euros (US list 49.99 dollars). Newer features, such as the Warp tool added with the iPad release in January 2026, are exclusive to Apple Creator Studio: a subscription bundle priced at 12.99 dollars per month or 129 dollars per year (US list, around twelve euros a month or one hundred and twenty euros a year), which ships Pixelmator Pro alongside Final Cut Pro, Logic Pro, and the iWork suite. The standalone path remains, but the rate of new feature development sits inside the bundle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DaVinci Resolve, by Blackmagic Design.&lt;/strong&gt; Free for video editing, with a Studio version available as a one-time purchase. The free version includes colour grading that has, as far as I know, been used on theatrical productions; Blackmagic Design lists notable feature-film credits on its own product pages. The most stable of the alternatives in this list, with a perpetual non-subscription model that has held since 2009.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Krita.&lt;/strong&gt; Free and open source, professional digital painting application maintained by the KDE community. Strong brush engine, deep tablet support, scriptable. Stable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GIMP and Inkscape.&lt;/strong&gt; Free, photo and vector. Robust and feature-complete for most workflows. The user experience is, honestly, dated, and migration cost from Adobe is real work because the muscle memory does not transfer cleanly. They are the right choice for budget-constrained teams or FOSS-aligned organisations who can absorb the UX cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capture One.&lt;/strong&gt; A direct Lightroom alternative. Individual perpetual licences are still offered (US list 299 dollars, around two hundred and seventy-five euros). Multi-user studios, however, were moved to subscription-only in 2024 (PetaPixel reported a 344 percent multi-user pricing change). This is the same pattern as Adobe: perpetual quietly retained for individuals, removed for the larger commercial users.&lt;/p&gt;

&lt;p&gt;The maths still works. Adobe Photoshop and Lightroom Photo plan in Germany is around 17.99 euros per month (annual commitment, billed monthly) or about 142 euros per year prepaid; Photoshop Single App and Creative Cloud All Apps cost more (Adobe Germany list, May 2026, with VAT). Affinity is free with an account. Pixelmator Pro standalone is around fifty euros, perpetual. DaVinci Resolve is free. The break-even against any Adobe subscription happens within months, not years.&lt;/p&gt;
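&lt;p&gt;The break-even claim can be checked on the back of an envelope, using the figures already quoted in this section:&lt;/p&gt;

```javascript
// Break-even sketch for the claim above: the number of subscription
// months that equal a one-time licence. Prices are this article's
// examples, not current list prices.
function breakEvenMonths(oneTimeCost, monthlyFee) {
  return Math.ceil(oneTimeCost / monthlyFee);
}

// Pixelmator Pro at ~50 euros perpetual vs the ~17.99 euros/month
// Photoshop and Lightroom plan: breakEvenMonths(50, 17.99) is 3.
```

&lt;p&gt;Three months, not three years. The longer the horizon, the worse the subscription comparison gets.&lt;/p&gt;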

&lt;p&gt;The harder question is which of the alternatives stays an alternative. Two of the largest names have changed owners in eighteen months, and the change has, in both cases, moved the architecture in directions worth watching. This is not an argument against the alternatives. It is an argument for the only sustainable strategy in this landscape: attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Take Home
&lt;/h2&gt;

&lt;p&gt;If your tools come with a recurring bill, your back catalogue lives in someone else's licence. If your tools come from a vendor that holds eighty percent of the market, the industry's standard formats are theirs as well, and your migration is incomplete even when your studio has finished it. If your tools come with Terms of Service that granted broad rights over your content until your community noticed, the architecture of the contract is a separate question from the price tag.&lt;/p&gt;

&lt;p&gt;The pattern repeats. The market holds the lock-in. The contract holds the user. Even the alternatives need watching, because two of the largest names have changed owners in eighteen months. The subscription you did not ask for is the architecture you signed up for: the rent has gone up, the formats have stayed standard, and the wording in the agreement has, for at least one entire spring, said something the publisher later had to take back.&lt;/p&gt;

&lt;p&gt;The tools have not got better. The rent has, the standing has not, and attention is the only price that scales.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/the-subscription-you-did-not-ask-for" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt;, System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vendorlock</category>
      <category>adobe</category>
      <category>subscription</category>
      <category>design</category>
    </item>
    <item>
      <title>The Backup That Wasn't</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Thu, 30 Apr 2026 10:18:25 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-backup-that-wasnt-20gi</link>
      <guid>https://dev.to/vivian-voss/the-backup-that-wasnt-20gi</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrrm301dxiryewgw0do1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrrm301dxiryewgw0do1.png" alt="A cyberpunk-noir operations room at night. A young developer with long dark wavy hair and a pink cat-ear headset, stands at a console facing a wall of monitors that scroll PostgreSQL logs in green and red. One large screen reads " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tales from the Bare Metal, Episode 01&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;« Thou shalt not trust a backup thou hast not restored! »&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At half past eleven on the night of Tuesday, 31 January 2017, an engineer at GitLab.com typed &lt;code&gt;rm -rf&lt;/code&gt; on what they believed was the secondary PostgreSQL database. The terminals on their screen were visually identical, save for the hostname in the prompt. Two seconds later, when they realised the prompt did not say what they thought it said, they killed the command. By that point, three hundred gigabytes of production data had been removed.&lt;/p&gt;

&lt;p&gt;That was the easy part. The hard part came over the next eighteen hours, as the team discovered, in a sequence that has since become teaching material, that none of their five backup mechanisms had been working.&lt;/p&gt;

&lt;p&gt;This is a thoroughly documented incident. GitLab's response, by industry standards, was extraordinary. They live-streamed the recovery on YouTube. They published their internal chat logs. They wrote a postmortem so detailed and so honest that it remains, nearly a decade later, one of the most widely cited operational documents in software engineering. The point of revisiting it now is not the story; the story is well-known. The point is the pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Happened, in Sequence
&lt;/h2&gt;

&lt;p&gt;The day had not been routine. From around 19:00 UTC, GitLab's primary database had been under unusual load, suspected at the time to be coordinated spam-account creation. The on-call engineers had been working through the load issue for hours. By 23:00 UTC, replication between the primary and the secondary had stalled: the secondary's WAL receiver could not keep up, and the secondary fell sufficiently far behind that recovery would require re-seeding it.&lt;/p&gt;

&lt;p&gt;The engineer in question had earlier that day created an LVM snapshot of the production database, intending to use it to set up a staging instance for testing pgpool-II. That snapshot, taken around 17:20 UTC, was a side effect of unrelated work; it had nothing to do with backups, and was scoped for staging.&lt;/p&gt;

&lt;p&gt;At around 23:30 UTC, while attempting to clean up the secondary's data directory in preparation for re-seeding, the engineer ran the cleanup command on the wrong host. The terminals were visually identical. The prompt difference was a hostname they had been working in for several hours. They killed the command within seconds. Approximately 300 GB had been removed from the primary. Affected: roughly 5,000 projects, 5,000 comments, and 700 new user accounts created between 17:20 and 23:30 UTC.&lt;/p&gt;

&lt;p&gt;Service was taken offline at 23:30. Recovery began immediately. The team turned to their backups in sequence. Each one, in turn, did not work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Mechanisms, in Order
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;pg_dump backups to S3.&lt;/strong&gt; GitLab's primary off-site backup mechanism was a wrapper script that took daily &lt;code&gt;pg_dump&lt;/code&gt; exports and uploaded them to Amazon S3. The script had been working for a long time. It stopped working silently after a PostgreSQL upgrade: the wrapper script invoked an older &lt;code&gt;pg_dump&lt;/code&gt; binary which produced an error against the upgraded server, but the script swallowed the error and produced empty output files. The S3 bucket was full of zero-byte files going back several months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Email alerts for backup failures.&lt;/strong&gt; The pg_dump script sent failure-notification emails when something went wrong. Those emails would have caught the upgrade-mismatch problem on day one. They did not arrive. A change to the email infrastructure (DMARC) elsewhere in the organisation, made for unrelated reasons, caused the alert emails to be silently rejected at the receiving end. They had been rejected for months. No one noticed because the absence of failure emails was the same signal as success.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LVM snapshots.&lt;/strong&gt; GitLab had no scheduled LVM snapshot strategy for production databases. The single snapshot that existed on 31 January was the one the engineer had taken six hours earlier for the unrelated staging-pgpool work. By coincidence, this snapshot was the most recent operational backup of the database that existed anywhere in the organisation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure disk snapshots.&lt;/strong&gt; The cloud platform on which GitLab.com was hosted at the time offered automated disk snapshots. They had not been enabled for the database servers. The decision was deliberate: cost considerations, plus a stated intention to rely on PostgreSQL replication and pg_dump. Restoring from Azure snapshot, when investigated during recovery, was estimated to take days rather than hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WAL archiving.&lt;/strong&gt; PostgreSQL supports continuous archiving of write-ahead log segments, which would have allowed point-in-time recovery to any moment within the archive window. WAL archiving had never been configured.&lt;/p&gt;

&lt;p&gt;Recovery used the LVM snapshot. Copying the data from the staging host back to production took roughly eighteen hours, bottlenecked by the network storage's 60 Mbps throughput. By 17:00 UTC on 1 February, GitLab.com was back online, missing the data that had changed between 17:20 and 23:30 UTC the previous day.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Context, in Fairness to the Build
&lt;/h2&gt;

&lt;p&gt;Each of these five mechanisms had been a reasonable design at the time it was built. pg_dump-to-S3 worked correctly the day it was deployed. LVM snapshots had a clear scope (staging) that was honoured. Azure snapshots were a deliberate cost-trade decision with documented reasoning. WAL archiving was on the long roadmap. The DMARC change, which silently severed the alerting chain, was made by another team, in another part of the organisation, for a reason that had nothing to do with PostgreSQL.&lt;/p&gt;

&lt;p&gt;Three systemic conditions made the outcome likely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First: redundancy of mechanism is not redundancy of recovery.&lt;/strong&gt; Five backup mechanisms feel like resilience. In practice, none of them had ever been exercised end-to-end against the specific scenario "the primary is gone; restore from the most recent good source within an hour". The drills that would have surfaced each mechanism's silent failure were not part of routine operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second: the absence of a failure signal is not the same as success.&lt;/strong&gt; The team's confidence in pg_dump rested on the steady absence of failure emails. That signal had been broken for months. Monitoring that depends on negative signals (silence as good news) is a class of monitoring that fails open. The fix is positive monitoring: a periodic heartbeat that says "this thing ran today, here is its output, here is its size, here is its checksum".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third: the path between backup and restore had no single owner.&lt;/strong&gt; Backup configuration sat with one team; restore procedures sat with another; the email infrastructure sat with a third. No one team owned the integration test that would have walked through the entire path on a regular cadence. The handoffs were the gap.&lt;/p&gt;

&lt;p&gt;These are not excuses. The wrong directory was deleted, by an engineer who knew which directory they intended to delete, on a host whose hostname they could read. The error happened. But the consequences of the error (six hours of unrecoverable data, eighteen hours of downtime, public restoration on YouTube) were architectural, not behavioural. A different architecture would have absorbed the same error in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Principle
&lt;/h2&gt;

&lt;p&gt;Backups are not backups until they have been restored.&lt;/p&gt;

&lt;p&gt;This is older than every database. The unixoid expression of it is small enough to fit in a few lines of shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# /etc/cron.weekly/restore-test&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nv"&gt;backup&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;latest_backup&lt;span class="si"&gt;)&lt;/span&gt;                      &lt;span class="c"&gt;# find the most recent&lt;/span&gt;
restore_to_sandbox &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$backup&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;                  &lt;span class="c"&gt;# restore to a throwaway env&lt;/span&gt;
verify_checksum &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$backup&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; sandbox_env         &lt;span class="c"&gt;# compare row counts or checksums&lt;/span&gt;
report_success &lt;span class="s2"&gt;"restore-test passed: &lt;/span&gt;&lt;span class="nv"&gt;$backup&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="c"&gt;# broadcast success on slack/email/wiki&lt;/span&gt;

&lt;span class="c"&gt;# anything failing aborts via set -e and triggers an oncall page.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The shape is what matters. The signal is positive: the cron job is required to broadcast success, every week. If a week passes with no success message, on-call is paged. Silence is treated as failure, not as the absence of failure.&lt;/p&gt;

&lt;p&gt;The further structural change is to put backup configuration and restore verification under one team's ownership. The path from "backup runs nightly" to "we just restored last week's backup and it worked" has to belong to one group of humans who own the whole sequence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the Pattern Travels
&lt;/h2&gt;

&lt;p&gt;The principle does not depend on PostgreSQL. It does not depend on Unix. It applies anywhere data is being protected against loss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud-managed databases.&lt;/strong&gt; AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL all offer automated snapshots. The snapshot is taken; the question is whether you have ever restored one to a sandbox database and confirmed your application can connect to it and read every table. If you have not, you are guessing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes StatefulSets with PVC snapshots.&lt;/strong&gt; PVC snapshots are convenient. They are also untested by default. The drill is identical: weekly, restore the snapshot to a sandbox cluster, run the application's startup health checks against it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Object storage.&lt;/strong&gt; S3 versioning, Backblaze B2, Cloudflare R2. These are good systems. They protect against accidental deletion if and only if you have, at some point, recovered an object from them by version, with the production-side application code, and verified it was correct. Otherwise it is faith.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git-LFS, large media stores, vendored binary archives.&lt;/strong&gt; The same logic. The cost of weekly verification is small. The cost of discovering, mid-incident, that the chain has been broken for six months is everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filesystem and host backups.&lt;/strong&gt; The classic case. Backup tapes that have been rotated for years, never read. The drill is to read one back, mount it, diff against a recent snapshot, and confirm the bytes are present.&lt;/p&gt;

&lt;p&gt;In every case, the same shape: the backup is provisional until proven by restore. The cost of the proof is hours. The cost of relying on faith is the whole company.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Take Home
&lt;/h2&gt;

&lt;p&gt;If your operational situation reminds you of any of the following, treat it as a thing to investigate this week:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have multiple backup mechanisms and trust their combination.&lt;/li&gt;
&lt;li&gt;Your backup-status monitoring relies on the absence of failure messages.&lt;/li&gt;
&lt;li&gt;Backup configuration and restore responsibility live with different teams.&lt;/li&gt;
&lt;li&gt;You have not actually restored a production-grade backup to a clean environment in the last quarter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these was true at GitLab on 30 January 2017. Each of them is true at many organisations now.&lt;/p&gt;

&lt;p&gt;The fix is not a new tool. The fix is a one-shell-script weekly drill, a positive heartbeat, and a single person whose job description includes "the backup chain works end-to-end". The cost is small. The alternative is broadcasting your recovery on YouTube to a peak audience of five thousand strangers, which GitLab handled with grace, and which most organisations would not.&lt;/p&gt;

&lt;p&gt;Do not push this into a maintenance ticket; the ticket will be deferred each sprint until the next outage promotes it for you. Listen to the critics in your own ranks before you listen to the velocity-celebrants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/the-backup-that-wasnt" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt;, System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>postmortem</category>
      <category>backup</category>
      <category>postgres</category>
      <category>sysadmin</category>
    </item>
    <item>
      <title>The Width You Never Had to Measure</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Wed, 29 Apr 2026 12:09:19 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-width-you-never-had-to-measure-j22</link>
      <guid>https://dev.to/vivian-voss/the-width-you-never-had-to-measure-j22</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj137qawcajlmljxq59am.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj137qawcajlmljxq59am.png" alt="A young developer at a wooden desk inspects three views of the same online shop " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Stack Patterns — Episode 14&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Every web developer has written this hack. A card component lives at 280 pixels in a sidebar on one page and at 1100 pixels on a dashboard on another. You want the badges to disappear in the narrow case and the layout to switch from horizontal to vertical. You reach for JavaScript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;observer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ResizeObserver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;entries&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;classList&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toggle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;narrow&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;entry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentRect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;observer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;observe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;card&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You then write the CSS twice, once for the parent class and once again for the &lt;code&gt;.narrow&lt;/code&gt; case. You hope the observer fires before the first paint; most of the time it does, but sometimes it fires only after a flash of unstyled content. You move on, because the alternative is to wrap the component in a width-aware HOC, install a layout library, or accept the inevitable hydration mismatch in your SSR pipeline.&lt;/p&gt;

&lt;p&gt;Container queries make all of that go away.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Brief, Slightly Embarrassing History
&lt;/h2&gt;

&lt;p&gt;The idea is nearly as old as responsive design itself. The early 2010s saw a steady stream of &lt;em&gt;element queries&lt;/em&gt; discussions in the front-end community, with polyfills (EQCSS and others) and a long mailing-list debate about what the missing tool should look like. The proposals all stumbled on the same architectural problem: querying an element's own size while inside the layout pass that produces that size invites circular layout, which is roughly the equivalent of asking a function for its own return value before it has finished returning.&lt;/p&gt;

&lt;p&gt;The fix took the better part of a decade and required a quiet redefinition of the problem. The element under styling cannot query its own size; that path leads to circularity. But the element's &lt;em&gt;containing context&lt;/em&gt; can. The work that became Container Queries, originally drafted in CSS Containment Module Level 3 and now living in CSS Conditional Rules Module Level 5, was championed by Miriam Suzanne (Invited Expert at the CSS Working Group) with contributions from many others. An element marks itself as a container and provides a stable size context to its descendants. The descendants then query that context. No circularity, no observer, no JavaScript.&lt;/p&gt;

&lt;p&gt;The implementations followed quickly: Chrome 105 and Safari 16 in September 2022, Firefox 110 on 14 February 2023. Global usage is just over 94 per cent as of March 2026. The feature has been cross-browser stable for more than three years.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;The simplest case takes two declarations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nc"&gt;.card&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="py"&gt;container-type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;inline-size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;@container&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400px&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nc"&gt;.card&lt;/span&gt; &lt;span class="nt"&gt;h3&lt;/span&gt;   &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nc"&gt;.card&lt;/span&gt; &lt;span class="nc"&gt;.badge&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;none&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;container-type: inline-size&lt;/code&gt; tells the browser to watch the inline dimension (the writing-mode-aware horizontal axis in Latin scripts). The &lt;code&gt;@container&lt;/code&gt; rule then matches when that dimension drops below 400 pixels. Rules inside the block apply to descendants of &lt;code&gt;.card&lt;/code&gt;, never to &lt;code&gt;.card&lt;/code&gt; itself.&lt;/p&gt;

&lt;p&gt;There is also &lt;code&gt;container-type: size&lt;/code&gt;, which watches both axes. Use it sparingly: &lt;code&gt;size&lt;/code&gt; containment forces the container's block size to be intrinsic, which can collapse otherwise auto-height layouts in surprising ways. The &lt;code&gt;inline-size&lt;/code&gt; form is the safe default for component-level responsiveness, and the form most CSS examples reach for.&lt;/p&gt;

&lt;p&gt;When components nest, the closest matching ancestor wins. A query without a &lt;code&gt;container-name&lt;/code&gt; matches the nearest ancestor with &lt;code&gt;container-type&lt;/code&gt; set. To target a specific level, name your containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nc"&gt;.sidebar&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;container-name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;side&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;container-type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;inline-size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nc"&gt;.card&lt;/span&gt;    &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;container-name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;card&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="py"&gt;container-type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;inline-size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;@container&lt;/span&gt; &lt;span class="n"&gt;card&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400px&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c"&gt;/* only the card matters here, not the sidebar */&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Length Units That Travel With the Container
&lt;/h2&gt;

&lt;p&gt;Container queries also bring a family of new length units that resolve against the container rather than the viewport:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;cqw&lt;/code&gt;: 1% of the container's width&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cqh&lt;/code&gt;: 1% of the container's height&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cqi&lt;/code&gt;: 1% of the container's inline size (writing-mode-aware width)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cqb&lt;/code&gt;: 1% of the container's block size (writing-mode-aware height)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cqmin&lt;/code&gt; and &lt;code&gt;cqmax&lt;/code&gt; are also defined, taking the smaller or larger of &lt;code&gt;cqi&lt;/code&gt; and &lt;code&gt;cqb&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Combined with &lt;code&gt;clamp()&lt;/code&gt;, these allow typography that scales with the component, not the viewport:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nt"&gt;h3&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;font-size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;clamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1rem&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="n"&gt;cqi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1.5rem&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drop the same component into a 280-pixel sidebar and a 1100-pixel main column, and its heading scales appropriately in both, without writing any breakpoints. The vocabulary that responsive design has wanted for fifteen years has finally arrived on the right axis.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why It Works
&lt;/h2&gt;

&lt;p&gt;The cleverness, as with &lt;code&gt;@scope&lt;/code&gt; in the previous episode, is what container queries deliberately do not do. They do not query the element being styled; they query the size of an ancestor. The cycle that broke every previous attempt at element queries simply does not arise.&lt;/p&gt;

&lt;p&gt;A second piece of cleverness: the &lt;code&gt;container-type&lt;/code&gt; declaration triggers CSS containment for the chosen axes. The browser knows that nothing outside the container can affect what is inside it, and vice versa, for the purpose of size calculations. That makes the cost of container queries predictable: each container is a self-contained layout unit, evaluated once.&lt;/p&gt;

&lt;p&gt;The performance question, which used to dominate discussions of element queries, has therefore become a much smaller question. Each container has a cost (a containment scope), but it is a known cost. You do not pay for queries you do not write.&lt;/p&gt;

&lt;h2&gt;
  
  
  Combined With :has()
&lt;/h2&gt;

&lt;p&gt;Container queries become properly powerful in combination with &lt;code&gt;:has()&lt;/code&gt;, the parent-aware selector covered in episode 5 of this series. A component now reacts to two independent axes at once: its own size, via &lt;code&gt;@container&lt;/code&gt;, and its own content, via &lt;code&gt;:has()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A common pattern: cards switch to a vertical layout when narrow, but only if they actually contain an image. A text-only card at the same width keeps its inline form.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nc"&gt;.card&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="py"&gt;container-type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;inline-size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;display&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="py"&gt;grid-template-columns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="n"&gt;fr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;@container&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400px&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nc"&gt;.card&lt;/span&gt;&lt;span class="nd"&gt;:has&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;img&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="py"&gt;grid-template-columns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="n"&gt;fr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same approach scales: badges that disappear only when the container is narrow AND there are more than two of them; a sidebar widget that switches layout only when it contains a form; a section heading that changes weight when it is followed by a long article. None of these require JavaScript, a class toggle, or a render hook.&lt;/p&gt;
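&lt;p&gt;The first of those reads almost as directly as it is described. A sketch, assuming the badges are same-type sibling elements so &lt;code&gt;:nth-of-type()&lt;/code&gt; counts them; the class names are assumptions:&lt;/p&gt;

```css
/* hide badges only when the card is narrow AND holds more than two of them */
@container (max-width: 400px) {
  .card:has(.badge:nth-of-type(3)) .badge { display: none; }
}
```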

&lt;p&gt;The composability is the broader point. Each modern CSS feature (container queries, &lt;code&gt;:has()&lt;/code&gt;, &lt;code&gt;@scope&lt;/code&gt;, &lt;code&gt;@layer&lt;/code&gt;, view transitions) is independently useful, but combinations multiply their value. The platform has spent a decade quietly assembling a vocabulary in which most former JavaScript responsibilities for layout and conditional styling become CSS again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Honest Limitations
&lt;/h2&gt;

&lt;p&gt;Three things to know before you ship container queries to production.&lt;/p&gt;

&lt;p&gt;First, &lt;code&gt;container-type: size&lt;/code&gt; disables auto-height. If you want a container to react to changes in its own height (rare, but possible), the container must have a defined height rather than letting its content determine it. For most components, you only care about width, and &lt;code&gt;inline-size&lt;/code&gt; is the correct choice. Reach for &lt;code&gt;size&lt;/code&gt; only when you genuinely need both axes.&lt;/p&gt;
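&lt;p&gt;A sketch of that rare case: the container must take its height from something other than its content. The selector and the 100% height are illustrative assumptions:&lt;/p&gt;

```css
.panel {
  container-type: size;   /* watch both axes */
  height: 100%;           /* required: size containment ignores content height */
}

@container (max-height: 200px) {
  .panel .caption { display: none; }
}
```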

&lt;p&gt;Second, &lt;em&gt;style queries&lt;/em&gt; (the form &lt;code&gt;@container style(--theme: dark) { ... }&lt;/code&gt;) are a separate, newer feature. As of 2026 they are supported in Chrome 111+ and Edge 111+ only; Firefox and Safari are still developing support. They allow a component to query the value of a custom property on its container, rather than its size, and are powerful for theming and design tokens. Until cross-browser parity arrives, treat them as progressive enhancement rather than a default tool.&lt;/p&gt;

&lt;p&gt;Third, container queries do not propagate across iframe boundaries or shadow root boundaries. A widget embedded in an iframe queries its own document's containers, not the parent page's. This is generally what you want; it is worth knowing if you build embedded components that span those boundaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use
&lt;/h2&gt;

&lt;p&gt;Anywhere a component lives at more than one width. Cards in a sidebar and a main grid. Article previews in a related-articles strip and a featured slot. Dashboard widgets that the user can resize. Embedded widgets where you do not control the host layout. Anywhere your team currently maintains two or three classes (&lt;code&gt;.card&lt;/code&gt;, &lt;code&gt;.card--narrow&lt;/code&gt;, &lt;code&gt;.card--wide&lt;/code&gt;) coordinated by JavaScript.&lt;/p&gt;

&lt;p&gt;For a new design system, build it in containers from the start. Each component sets a &lt;code&gt;container-type&lt;/code&gt; on its outer element and queries it from within. Page-level media queries become a layer above, handling viewport-scoped concerns: navigation collapse, hero sizing, things that genuinely depend on the device. Component-level concerns drop a level and stay there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Layout, Civilised
&lt;/h2&gt;

&lt;p&gt;Episode 11 of this series gave native page transitions; episode 12 gave a built-in deep clone; episode 13 gave native CSS scoping. Each replaced a stack of build-time tooling and runtime libraries with a few lines of standard CSS or JavaScript. Container queries belong to the same lineage: a feature the platform has been quietly building for a decade, while the framework ecosystem invented and reinvented increasingly elaborate workarounds.&lt;/p&gt;

&lt;p&gt;Three years on from cross-browser support, the workaround code is still in production codebases everywhere, and the platform feature still has under-used potential. If your team is still measuring component widths in JavaScript, the cascade has been waiting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/the-width-you-never-had-to-measure" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>css</category>
      <category>webdev</category>
      <category>frontend</category>
      <category>programming</category>
    </item>
    <item>
      <title>pf</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Tue, 28 Apr 2026 08:25:47 +0000</pubDate>
      <link>https://dev.to/vivian-voss/pf-2aae</link>
      <guid>https://dev.to/vivian-voss/pf-2aae</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pmadvbujnnql0aaw1ct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8pmadvbujnnql0aaw1ct.png" alt="A young developer stands on the bridge of an interstellar station before a large semi-transparent hexagonal shield lattice. Three hexagons glow warm amber, inscribed "&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Technical Beauty — Episode 33&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You have written this rule, or you have written something near enough to it: &lt;code&gt;iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT&lt;/code&gt;. You have written it because the alternative is dropping every packet that is the response to a packet you let through, which is, on reflection, nearly all of them. You have also, at some point, lost twenty minutes to the question of which chain a packet enters first when it is being forwarded but also locally addressed, and twenty more to whether &lt;code&gt;INPUT&lt;/code&gt; runs before or after &lt;code&gt;FORWARD&lt;/code&gt; for a particular interface bridge.&lt;/p&gt;

&lt;p&gt;You are not a bad systems administrator. You are a systems administrator reading a tool that does not want to be read.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Mid-Release Crisis
&lt;/h2&gt;

&lt;p&gt;In May 2001, OpenBSD did something that most operating systems would not have the nerve to do mid-release. It pulled its firewall out of the source tree.&lt;/p&gt;

&lt;p&gt;The firewall was IPFilter, written and maintained by Darren Reed since the mid-1990s. It was the de facto BSD-world packet filter, deeply integrated into OpenBSD, and a fixture of the OpenBSD security story. Reed informed the OpenBSD project that IPFilter's licence did not in fact permit the kinds of modifications OpenBSD was making. After some unproductive correspondence, Theo de Raadt removed IPFilter from the OpenBSD CVS tree on 30 May 2001. The next release was six months away.&lt;/p&gt;

&lt;p&gt;This left OpenBSD without a packet filter. Nobody, internally, was lined up to write one.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Wrong Person for the Job
&lt;/h2&gt;

&lt;p&gt;Daniel Hartmeier was a Swiss developer who had been contributing peripherally to OpenBSD. He had never written kernel code. He read about Drawbridge, an Ethernet-layer filter from Texas A&amp;amp;M, and noticed that the filter itself was essentially a single C module with a small, comprehensible kernel interface. That, he thought, looked like something he could learn.&lt;/p&gt;

&lt;p&gt;In a later interview he was characteristically dry about the experience: "If I had known in advance how many nights I would spend, I might have given up. But the progress kept me motivated."&lt;/p&gt;

&lt;p&gt;He committed the first version of pf to the OpenBSD CVS tree on 24 June 2001, twenty-five days after IPFilter was removed. By the end of that month, his code was filtering packets and performing network address translation. Five months later, on 1 December 2001, pf shipped as the default firewall in OpenBSD 3.0.&lt;/p&gt;

&lt;p&gt;A man who had never touched the kernel wrote, in four weeks of summer evenings and a few months of polish, a firewall that would still be running, twenty-five years later, on roughly a billion devices.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Design
&lt;/h2&gt;

&lt;p&gt;The point of pf is that the rules read like English. Here is a working firewall:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;block&lt;/span&gt; &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="n"&gt;all&lt;/span&gt;
&lt;span class="n"&gt;pass&lt;/span&gt; &lt;span class="n"&gt;in&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt; &lt;span class="n"&gt;em0&lt;/span&gt; &lt;span class="n"&gt;proto&lt;/span&gt; &lt;span class="n"&gt;tcp&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt; &lt;span class="m"&gt;22&lt;/span&gt; &lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;
&lt;span class="n"&gt;pass&lt;/span&gt; &lt;span class="n"&gt;out&lt;/span&gt; &lt;span class="n"&gt;keep&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three lines. The first denies all inbound traffic by default. The second allows inbound TCP to port 22 (SSH) on the external interface, and tells pf to remember the connection in its state table so that reply packets are allowed back in. The third allows all outbound traffic, again with state.&lt;/p&gt;

&lt;p&gt;That is the whole firewall. Not the introduction. Not the simplified example. The whole firewall.&lt;/p&gt;

&lt;p&gt;There are no chains. There are no separate tables for filter, NAT, mangle, and raw. There is no &lt;code&gt;conntrack&lt;/code&gt; module to load and configure. There is no question about which hook a packet enters first under what bridging conditions. Rules are read top to bottom. The last matching rule wins, unless a rule is marked &lt;code&gt;quick&lt;/code&gt;, in which case it short-circuits. That is the entire evaluation model.&lt;/p&gt;
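
&lt;p&gt;The two evaluation modes fit in six lines. A sketch (interface and port are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# last-matching rule wins: the pass below overrides the block above it
block in on em0 proto tcp to port 80
pass in on em0 proto tcp to port 80

# quick short-circuits evaluation: the block wins, the pass is never consulted
block in quick on em0 proto tcp to port 80
pass in on em0 proto tcp to port 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;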

&lt;p&gt;The grammar is small enough to fit on the back of a postcard, but expressive enough that the same vocabulary handles filtering, network address translation, traffic queueing, packet scrubbing, anchors (named subgroups for hierarchical configuration), tables (named lists of addresses, updatable at runtime), and policy routing. New features extend the grammar. They do not invent new tools and new flags.&lt;/p&gt;
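
&lt;p&gt;A hedged sketch of that stretch, with invented names and addresses:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# a table: a named address list, updatable at runtime
table &amp;lt;blocked&amp;gt; persist
block in quick from &amp;lt;blocked&amp;gt;

# NAT, in the same vocabulary as filtering
match out on em0 from 192.168.1.0/24 nat-to (em0)

# an anchor: a named sub-ruleset that can be loaded and flushed independently
anchor "ftp-proxy/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The table can then be fed at runtime, &lt;code&gt;pfctl -t blocked -T add 203.0.113.5&lt;/code&gt;, without reloading the ruleset.&lt;/p&gt;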

&lt;h2&gt;
  
  
  The Contrast
&lt;/h2&gt;

&lt;p&gt;iptables was, as it happens, also released in 2001, also as a successor to an older system (ipchains). Rusty Russell wrote it. It is, by any reasonable engineering standard, a competent piece of work. It is also, by any reasonable readability standard, the wrong shape.&lt;/p&gt;

&lt;p&gt;In iptables, a rule is a tuple of flags applied imperatively to a table inside a chain inside a hook point. The user has to know about chains (&lt;code&gt;INPUT&lt;/code&gt;, &lt;code&gt;OUTPUT&lt;/code&gt;, &lt;code&gt;FORWARD&lt;/code&gt;, &lt;code&gt;PREROUTING&lt;/code&gt;, &lt;code&gt;POSTROUTING&lt;/code&gt;), tables (&lt;code&gt;filter&lt;/code&gt;, &lt;code&gt;nat&lt;/code&gt;, &lt;code&gt;mangle&lt;/code&gt;, &lt;code&gt;raw&lt;/code&gt;, &lt;code&gt;security&lt;/code&gt;), targets (&lt;code&gt;ACCEPT&lt;/code&gt;, &lt;code&gt;DROP&lt;/code&gt;, &lt;code&gt;REJECT&lt;/code&gt;, &lt;code&gt;LOG&lt;/code&gt;, &lt;code&gt;MASQUERADE&lt;/code&gt;, &lt;code&gt;SNAT&lt;/code&gt;, &lt;code&gt;DNAT&lt;/code&gt;), and the order in which all of these interact for a packet that is, say, locally generated and forwarded over a bridge to a destination behind NAT. Connection tracking is a separate kernel module with its own configuration. Quality of service is a separate tool entirely (&lt;code&gt;tc&lt;/code&gt;). The whole apparatus rewards memorisation rather than reasoning.&lt;/p&gt;
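
&lt;p&gt;For a sense of the shape, here is one plausible rendering of the three pf lines above in iptables (interface name assumed; an experienced hand might arrange it differently):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# default-deny inbound; allow outbound
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
# replies to established connections (conntrack, a separate module, does the remembering)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# inbound SSH on the external interface
iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;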

&lt;p&gt;This is, as I understand it, why nftables exists. The Linux netfilter team, in roughly the late 2000s, took a long look at iptables, took a long look at pf, and rather quietly concluded that the second one had the right shape. nftables, released in January 2014, borrows pf's design philosophy without quite borrowing pf itself: one consistent grammar, one tool, one configuration model. One does admire the gesture, even as one notes that the conversion is still ongoing more than a decade later.&lt;/p&gt;
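
&lt;p&gt;The same three-line firewall, rendered in nftables grammar (a sketch, not a tuned configuration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        tcp dport 22 accept
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Loaded with &lt;code&gt;nft -f ruleset.nft&lt;/code&gt;. The family resemblance to pf.conf is not a coincidence.&lt;/p&gt;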

&lt;h2&gt;
  
  
  The Proof
&lt;/h2&gt;

&lt;p&gt;Twenty-five years on, pf is the firewall in OpenBSD, FreeBSD, NetBSD, DragonFly BSD, and macOS. It is what your iPhone and iPad use to filter network traffic. It is what runs underneath pfSense, OPNsense, and a substantial fraction of the commercial firewall appliances sold to enterprises that have no idea their security perimeter is configured in a Swiss developer's grammar.&lt;/p&gt;

&lt;p&gt;The current OpenBSD pf is not the same code as Hartmeier's 2001 commit. Henning Brauer and Ryan McBride, in particular, have extended it considerably: redesigned the rule evaluation engine, added sophisticated state-tracking options, integrated with CARP for high availability, and rebuilt the queueing system. The grammar, however, has remained backward-compatible to a remarkable degree. A pf.conf written in 2003 still mostly parses on a current OpenBSD machine. The dialect has grown, but the language has not changed.&lt;/p&gt;

&lt;p&gt;This is one of the markers of a well-designed system: the original creators no longer need to be there for the work to continue, and the work that follows feels of a piece with the work that began.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Point
&lt;/h2&gt;

&lt;p&gt;pf was not engineered to scale to a billion devices. It was engineered to be readable. Hartmeier wrote it because the alternative was unreadable, and because OpenBSD needed a firewall in less time than was reasonable, and because he wanted to learn the kernel. Twenty-five years later it is the firewall language of the BSD world and a substantial portion of the Apple device estate, and Linux has, in nftables, paid it the considerable compliment of imitation.&lt;/p&gt;

&lt;p&gt;Sometimes a system holds up because the original design knew what was missing. pf is one of those systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/pf" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>openbsd</category>
      <category>networking</category>
      <category>freebsd</category>
      <category>sysadmin</category>
    </item>
    <item>
      <title>What the Bootloader Knows</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Mon, 27 Apr 2026 08:24:54 +0000</pubDate>
      <link>https://dev.to/vivian-voss/what-the-bootloader-knows-42l9</link>
      <guid>https://dev.to/vivian-voss/what-the-bootloader-knows-42l9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtiybja5xwnyly4fh123.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtiybja5xwnyly4fh123.png" alt="A warm editorial illustration in painterly style. A wide canyon seen from a slightly elevated angle. On the far left bank, a small stone fortress labelled " span="" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Unix Way — Episode 14&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Between the firmware that knows almost nothing and the kernel that must know everything, there is a small program with a rather strange job. The bootloader is the one piece of software whose design quietly encodes what the operating system above believes itself to be.&lt;/p&gt;

&lt;p&gt;Three answers to the same question. Each tells a different story.&lt;/p&gt;

&lt;h2&gt;
  
  
  FreeBSD's Loader
&lt;/h2&gt;

&lt;p&gt;Three stages. &lt;code&gt;boot0&lt;/code&gt; is a 446-byte master boot record. &lt;code&gt;boot1&lt;/code&gt; reads the partition table and locates &lt;code&gt;loader&lt;/code&gt;. &lt;code&gt;loader(8)&lt;/code&gt; is a Forth interpreter, around 600 KB on amd64.&lt;/p&gt;

&lt;p&gt;Forth, because in the late 1990s someone decided the bootloader should be programmable without becoming an operating system. Forth gives you a small, deterministic, stack-based language with no runtime assumptions about memory layout or operating system services. It is enough to express "look at this dataset, decide which kernel to load, present a menu, hand off."&lt;/p&gt;

&lt;p&gt;The loader understands UFS and ZFS natively. It can mount a ZFS dataset as root. It knows what a Boot Environment is. It presents the list of available Boot Environments as a menu before the kernel starts. None of this is bolted on. It is the design.&lt;/p&gt;
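
&lt;p&gt;An illustrative session at the loader prompt (the kernel path is an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OK lsdev                      \ devices the loader can see
OK ls /boot/kernel            \ the loader reads the filesystem directly
OK unload
OK load /boot/kernel.old/kernel
OK boot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;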

&lt;p&gt;On ZFS-on-root systems the installer writes the loader into a small boot partition outside the ZFS pool, but the loader itself reads the pool metadata, walks the dataset hierarchy, and offers the user the choice of which root dataset to boot. This is the enabling primitive on which the entire FreeBSD Boot Environment workflow stands.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Userland Built
&lt;/h2&gt;

&lt;p&gt;The loader exposed Boot Environments as a first-class concept. The userland question was: how does an admin create, list, activate, destroy, and switch between them?&lt;/p&gt;

&lt;p&gt;The first answer was a shell script called &lt;code&gt;manageBE&lt;/code&gt;, written in the early days of ZFS-on-FreeBSD. Functional, but not pleasant to use.&lt;/p&gt;

&lt;p&gt;In 2012, a Polish FreeBSD admin called &lt;a href="https://vermaden.wordpress.com/" rel="noopener noreferrer"&gt;vermaden&lt;/a&gt; wrote &lt;code&gt;beadm(8)&lt;/code&gt; in POSIX &lt;code&gt;sh(1)&lt;/code&gt; and &lt;code&gt;awk(1)&lt;/code&gt;, deliberately mimicking the Solaris and Illumos &lt;code&gt;beadm&lt;/code&gt; interface so that anyone coming from those systems would feel at home. The original announcement and discussion live on the FreeBSD Forums in the thread "HOWTO ZFS Madness" (number 31662, still readable today). The script was, and remains, around fifteen hundred lines of POSIX shell and awk: small, auditable, dependency-free, runnable on any FreeBSD system since 9.x.&lt;/p&gt;

&lt;p&gt;For six years, vermaden's &lt;code&gt;beadm&lt;/code&gt; was the working standard for managing Boot Environments on FreeBSD. It was packaged in ports, recommended by the documentation, and built into countless admin workflows.&lt;/p&gt;

&lt;p&gt;In 2018, FreeBSD 12.0 shipped &lt;code&gt;bectl(8)&lt;/code&gt;, a reimplementation in C that landed in base. The motivation was straightforward: a tool as universally used as &lt;code&gt;beadm&lt;/code&gt; deserved a place in base, with all the testing and consistency guarantees that base implies. The C rewrite gave it tighter integration with &lt;code&gt;libbe&lt;/code&gt;, the FreeBSD library that already encoded the Boot Environment data model, and direct ZFS access without the cost of subprocess fan-out.&lt;/p&gt;
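
&lt;p&gt;The everyday workflow, sketched with &lt;code&gt;bectl&lt;/code&gt; (the environment name is invented):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# snapshot the running system as a new Boot Environment
bectl create pre-upgrade
# ...perform the upgrade, then inspect what exists
bectl list
# something is wrong: boot the old environment next time
bectl activate pre-upgrade
shutdown -r now
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;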

&lt;p&gt;The transition was not entirely smooth. Early &lt;code&gt;bectl&lt;/code&gt; versions had bugs that &lt;code&gt;beadm&lt;/code&gt; did not, in part because &lt;code&gt;beadm&lt;/code&gt; had been shaken out by six years of production use across hundreds of admins. There are documented cases from the 12.x era in which running &lt;code&gt;beadm&lt;/code&gt; against a &lt;code&gt;bectl&lt;/code&gt;-confused dataset state cleanly resolved the inconsistency. The bugs have largely been fixed, and &lt;code&gt;bectl&lt;/code&gt; is now the recommended tool.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;beadm&lt;/code&gt; continues to be developed and continues to ship features that &lt;code&gt;bectl&lt;/code&gt; has not adopted. The most interesting is the &lt;code&gt;REROOT&lt;/code&gt; option: after creating a Boot Environment, upgrading the running system, and discovering that something is not right, &lt;code&gt;REROOT&lt;/code&gt; swaps userspace into the pre-upgrade Boot Environment without a full reboot. The kernel stays up; the userspace is reloaded from the chosen dataset. The whole operation takes seconds rather than the half-minute of a reboot. It is the kind of feature that exists because someone was solving an actual operational problem and writing the code that fixed it, rather than waiting for the perfect abstraction.&lt;/p&gt;

&lt;p&gt;That trajectory, from &lt;code&gt;manageBE&lt;/code&gt; to &lt;code&gt;beadm&lt;/code&gt; to &lt;code&gt;bectl&lt;/code&gt;, is a model of how Unix tools mature. A clever script meets an unfilled need. A wider community adopts it and shapes it. A reimplementation in base preserves what worked and fixes what did not, while the original keeps innovating in places where base cannot move as quickly. Both tools coexist, and the choice between them is technical rather than political.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux: LILO (1992 to 2015)
&lt;/h2&gt;

&lt;p&gt;Werner Almesberger wrote LILO at ETH Zürich in 1992 and maintained it until 1998. John Coffman took over until 2007, when development passed to Joachim Wiedorn. Active development ended in December 2015 with version 24.2, and was never resumed.&lt;/p&gt;

&lt;p&gt;The Linux Loader's design choice was a blocklist. The position of the kernel image on disk was recorded as a list of physical sectors at install time. Move the kernel, the bootloader points at gibberish. Update the kernel, run &lt;code&gt;/sbin/lilo&lt;/code&gt; before rebooting or the machine refuses to come back.&lt;/p&gt;

&lt;p&gt;This was reasonable in 1992. Disks were small, kernels were rebuilt rarely, and the BIOS understood little beyond INT 13h. The assumption was a slow, static world.&lt;/p&gt;

&lt;p&gt;By 2010 the assumption had become a trap. GPT and UEFI made blocklists structurally awkward. RAID and LVM moved blocks behind the back of the bootloader. The Linux kernel started shipping new versions every couple of months. The ritual that LILO required after each change had become the leading cause of unbootable systems for users who forgot it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux: GRUB (2005 to present)
&lt;/h2&gt;

&lt;p&gt;The other extreme. GRUB began in 1995 as a research bootloader by Erich Boleyn, was adopted by the GNU project in 1999, and was rewritten from scratch as GRUB 2 in 2005. The result is a small operating system that runs before the operating system.&lt;/p&gt;

&lt;p&gt;GRUB 2 ships filesystem drivers for ext2, ext3, ext4, btrfs, XFS, F2FS, JFS, ReiserFS, FAT, NTFS, ISO9660, AFFS, HFS, UDF, ZFS (read-only, partial), and several others. It can decompress kernels in zlib, lzma, lz4, and zstd. It runs scripts in its own POSIX-shell-flavoured language. It supports themes, fonts, gfxmode, network boot, UEFI Secure Boot, multiboot, and chainloading. Its core is around 200 KB; its loadable modules add several megabytes.&lt;/p&gt;

&lt;p&gt;The assumption is a fragmented world. Linux distributions disagree about filesystem choice, kernel layout, root-on-everything, and boot configuration. GRUB has to understand every variant because the world above it does not converge.&lt;/p&gt;

&lt;p&gt;The cost of that assumption is everything complexity costs. The benefit is that GRUB will boot almost anything you put in front of it.&lt;/p&gt;

&lt;p&gt;The omission is interesting. GRUB has read-only ZFS support, sufficient to load a kernel from a ZFS dataset. It does not have Boot Environment awareness, because the conversation about ZFS-as-an-OS-management-layer never happened on Linux. ZFSBootMenu exists precisely to fill that gap, by replacing GRUB entirely with a small kexec-loaded Linux kernel whose only job is to present a Boot Environment menu and hand off to the chosen one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Point
&lt;/h2&gt;

&lt;p&gt;The question is not which bootloader is best. It is what the bootloader assumes about the system above it.&lt;/p&gt;

&lt;p&gt;FreeBSD's loader assumes the system above it is coherent enough to be worth talking to: that the bootloader, the filesystem, the kernel, and the userland are designed by people who can sit in the same room. From that assumption, things like Boot Environment selection at the loader become possible, and tools like &lt;code&gt;beadm&lt;/code&gt; and &lt;code&gt;bectl&lt;/code&gt; become natural.&lt;/p&gt;

&lt;p&gt;LILO assumed a slow, static world in which the admin would re-run the installer after each change. The world was no longer that world by 2005, and the bootloader could not move with it.&lt;/p&gt;

&lt;p&gt;GRUB assumes a fragmented world in which the bootloader must understand every variant of every filesystem because the distributions above it do not converge. The cost is permanent complexity, the benefit is that GRUB boots anything.&lt;/p&gt;

&lt;p&gt;Three loaders. Three theories of the operating system. Each design is correct for the world it assumes. The interesting work is choosing which world to live in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/what-the-bootloader-knows" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>freebsd</category>
      <category>linux</category>
      <category>bootloader</category>
      <category>zfs</category>
    </item>
    <item>
      <title>Why We Measure Tickets, Not Problems Prevented</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sun, 26 Apr 2026 08:50:56 +0000</pubDate>
      <link>https://dev.to/vivian-voss/why-we-measure-tickets-not-problems-prevented-10fd</link>
      <guid>https://dev.to/vivian-voss/why-we-measure-tickets-not-problems-prevented-10fd</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F870utltlcgrgs27nujmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F870utltlcgrgs27nujmc.png" alt="A pen-and-ink illustration in Heinrich Zille style. On the right, a young developer with long dark hair and a pink cat-ear headset, sits at a wooden desk in despair, both hands gripping her head, mouth open in anguish. Her laptop screen shows a sharply plummeting graph labelled " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;On Second Thought — Episode 05&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The dashboard is green. Velocity is up. Burndown is on track. The demo on Friday will be smooth. Production has been quietly fragile for eleven weeks, and nobody notices, because fragility does not have a column.&lt;/p&gt;

&lt;p&gt;This is the post about that column.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Axiom
&lt;/h2&gt;

&lt;p&gt;Productivity, for the purposes of any reporting line above the work itself, is what one can count. Tickets closed, story points completed, lines shipped, deploys per week, sprints reported as "successful", incident-mean-time-to-resolution charted in a quarterly review.&lt;/p&gt;

&lt;p&gt;The work that does not produce a number does not exist. The thinking that prevented the incident in the first place does not appear. The decision not to deploy on Friday afternoon does not log. The conversation in which a senior engineer talked the team out of a doomed approach is not in any system of record.&lt;/p&gt;

&lt;p&gt;This is not because the people running the dashboards are foolish. It is because the dashboard is the only thing they were given to look at, and over time, the thing one looks at becomes the thing one believes is real.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Origin
&lt;/h2&gt;

&lt;p&gt;Frederick Winslow Taylor published The Principles of Scientific Management in 1911 with rather industrious enthusiasm. The stopwatch, the time study, the one-best-way. Taylor's intent was to bring rigour to factory work; his unintended legacy was to make the act of &lt;em&gt;being measured&lt;/em&gt; the new floor of working life.&lt;/p&gt;

&lt;p&gt;Workers struck. The Watertown Arsenal foundry walked out in the summer of 1911 over the introduction of the stopwatch. The US Congress investigated, and banned time studies and pay premiums tied to them on US Government work. Taylor's specific instrument was, briefly, defeated.&lt;/p&gt;

&lt;p&gt;The instinct survived under a procession of new names: scientific management, then efficiency, then management-by-objectives, then Six Sigma, then Lean, then agile, now velocity. The vocabulary moves on every fifteen years; the underlying premise (that what cannot be counted does not count) moves not at all.&lt;/p&gt;

&lt;p&gt;In 1975, the British economist Charles Goodhart wrote, in a footnote of a paper on UK monetary policy: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." Twenty-two years later, the anthropologist Marilyn Strathern, observing British university assessment regimes, condensed it into the version everyone now quotes: when a measure becomes a target, it ceases to be a good measure.&lt;/p&gt;

&lt;p&gt;We were warned by name, twice in one century, by people whose entire professional life had been spent watching the phenomenon. The industry that calls itself data-driven did not, in this case, read the data.&lt;/p&gt;

&lt;p&gt;In 2019, Ron Jeffries, one of the original signatories of the Agile Manifesto and the man widely credited with promoting the story point, published a public reconsideration:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I may have made the name-changing suggestion. If I did, I'm sorry now."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;He went on to recommend abandoning story-point estimation entirely. The industry, having found story points terribly useful for promotion-decisions, performance-reviews, and quarterly board reports, kept them. The dashboard, rather firmly, demands them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cost
&lt;/h2&gt;

&lt;p&gt;Consider two engineers in the same team for the same quarter.&lt;/p&gt;

&lt;p&gt;The first engineer prevents three outages. She does this by refusing to deploy on Friday afternoon when the staging environment is showing intermittent failures; by patiently explaining to a junior why the proposed cache invalidation strategy will produce a thundering herd; by spotting, in a routine code review, the off-by-one in the rate limiter that would have melted production under the next traffic spike. None of this work produces a ticket. None of it closes a backlog item. None of it is visible to her manager's manager.&lt;/p&gt;

&lt;p&gt;The second engineer cheerfully closes forty-seven tickets that quarter. He is praised in the sprint review. He ships the architecture that produces the outages the first engineer prevented. The outages are then opened as new tickets, which the team will close in subsequent sprints, generating velocity and a sense of forward motion.&lt;/p&gt;

&lt;p&gt;The first engineer is invisible to every metric in the building. The second is promoted.&lt;/p&gt;

&lt;p&gt;This is not a hypothetical. It is the architecture of the modern software organisation, applied with the consistency of a religious practice. The dashboard goes up. The system goes down. The dashboard goes up again, because the new outages are recorded as feature requests in next quarter's backlog, and closing them counts as work.&lt;/p&gt;

&lt;p&gt;The cost is not only the outages. The cost is that the first engineer, finding her judgement systematically unrewarded, eventually leaves. The second engineer, finding his ticket-throughput systematically rewarded, eventually becomes director of engineering. The system optimises the people the way one would optimise a queue, and the queue knows nothing about the building it is keeping standing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Made-in-Germany Inversion
&lt;/h2&gt;

&lt;p&gt;A short historical detour, because the contrast is precise and the contrast is the point.&lt;/p&gt;

&lt;p&gt;"Made in Germany" was an insult before it was a compliment. The British Merchandise Marks Act of 1887 was passed by Parliament after British manufacturers (Sheffield cutlery, in particular) complained that German imitations were entering the country with British-style markings. The Act required all foreign goods to be plainly marked with their country of origin, the practical purpose being to allow British consumers to recognise and refuse the inferior German imports.&lt;/p&gt;

&lt;p&gt;The plan worked exactly as designed for about a decade. Then it inverted.&lt;/p&gt;

&lt;p&gt;Within thirty years, "Made in Germany" had become a guarantee of value. German firms had used the label not to &lt;em&gt;perform&lt;/em&gt; productivity but to ship goods that did not need replacing. Solingen blades that lasted decades. Carl Zeiss optics that astronomers in Britain quietly preferred to anything domestic. Steinway pianos. Leica cameras. The label became a guarantee because the work was a guarantee.&lt;/p&gt;

&lt;p&gt;The inversion was not a marketing campaign. It was a consequence of a culture that measured the chair, not the hours.&lt;/p&gt;

&lt;p&gt;We have, rather industriously, built the inverse industry. The label is impeccable. The dashboards are green. The retros are constructive. The substance, increasingly, is not. A modern enterprise software product carries a thousand certifications, three SOC-2 audits, an ISO-27001 stamp, an SBOM, an OpenSSF Scorecard, and falls over when a region drains. The label does the work the work used to do.&lt;/p&gt;

&lt;p&gt;This is not specifically a German point. It is a craftsmanship point that happens to have a useful German example. The craftsmen of Sheffield, who triggered the 1887 Act, would have understood it perfectly well had the inversion gone the other way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Question
&lt;/h2&gt;

&lt;p&gt;If a craftsman built a chair to last fifty years, the metric was the chair, not the hours. If a Bell Labs engineer in 1955 designed a Number 5 Crossbar switch that ran in the field for thirty years, the metric was the switch, not the patches. If the engineers at Volvo who designed the three-point seatbelt in 1959 had been measured on patents-filed-per-quarter, they would not have given it away to every other manufacturer in the world, and the next sixty years of road safety would have run rather differently.&lt;/p&gt;

&lt;p&gt;What would happen to a software organisation that measured problems prevented rather than tickets closed? That measured quality held rather than features shipped? That measured judgement applied rather than activity logged?&lt;/p&gt;

&lt;p&gt;The honest answer is that nobody knows, because nobody has been allowed to try long enough for the chair to last fifty years. Software organisations rarely outlive their last quarterly review by more than two or three of them. The promotion cycle is faster than the chair.&lt;/p&gt;

&lt;p&gt;There is a quieter question underneath, which is the one this episode is really about. It is not "what metric should we use instead?" It is: &lt;em&gt;why have we agreed, as an industry of nominally clever people, to organise our working lives around an instrument we know to be wrong, in a way that has been documented for fifty years and apologised for by its inventors?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One does suspect the answer is uncomfortable. It is much easier to ship a number than to ship a thing that lasts. It is much easier to manage a number than to manage a person who is doing something difficult. It is much easier to write a quarterly review of a number than of a judgement.&lt;/p&gt;

&lt;p&gt;The dashboard is green. The chair, somewhere, is not being built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/why-we-measure-tickets" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>productivity</category>
      <category>agile</category>
      <category>craftsmanship</category>
    </item>
    <item>
      <title>The Dependency Avalanche: 644 Strangers in Your package.json</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Sat, 25 Apr 2026 07:12:08 +0000</pubDate>
      <link>https://dev.to/vivian-voss/the-dependency-avalanche-644-strangers-in-your-packagejson-414j</link>
      <guid>https://dev.to/vivian-voss/the-dependency-avalanche-644-strangers-in-your-packagejson-414j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj167bhhio1onaoqil13b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj167bhhio1onaoqil13b.png" alt="A dark server room lit by rows of blinking LEDs on tall racks. A weathered yellow-and-black 'SECURITY ZONE' warning sign hangs slightly askew on the back wall. On the left, a young developer in a polo shirt reading '$man woman' calmly hands a folder labelled 'AUDIT DUTY' across to a tall, faceless figure on the right. The figure is shrouded in grey-blue fog, with clearly recognisable package logos floating inside its silhouette: npm, React, Node.js, webpack, Babel, TypeScript, ESLint, Express, Jest, Vite, Tailwind. A handwritten chalk note above the figure reads 'upstream'." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Beta Stories — Episode 09&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The promise of the modern package ecosystem was a kind one: you do not have to write everything yourself. Stand on the shoulders of giants. Reuse, do not reinvent. Someone else maintains it.&lt;/p&gt;

&lt;p&gt;The reality, measured this morning on a fresh laptop, is the avalanche this episode is named for.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reality, in Two Numbers
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;npm install express&lt;/code&gt; on a clean directory pulls Express 5.2.1, declares 28 direct dependencies, resolves a total of 65 packages across the dependency tree, and produces a &lt;code&gt;node_modules&lt;/code&gt; of 3.6 MB. That is the smaller end of the modern web.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx create-next-app@latest&lt;/code&gt; with the recommended defaults (TypeScript, Tailwind CSS, ESLint, App Router) creates a project whose &lt;code&gt;package.json&lt;/code&gt; declares 11 packages, resolves a transitive tree of 644 packages, and writes a &lt;code&gt;node_modules&lt;/code&gt; of 463 MB. The application, at this stage, renders a single page that says "Welcome to Next.js".&lt;/p&gt;

&lt;p&gt;Six hundred and forty-four pieces of someone else's work to render eighteen characters of text. Each package authored, in principle, by a stranger; in practice, sometimes by a small group of strangers; in a handful of unpleasant cases, by an account that has been waiting to be useful.&lt;/p&gt;

&lt;p&gt;For comparison: the same minimal HTTP responder in Rust uses &lt;code&gt;hyper&lt;/code&gt; (the lower-level HTTP foundation) or &lt;code&gt;axum&lt;/code&gt; on top of it, and keeps its dependency tree small and explicit in &lt;code&gt;Cargo.toml&lt;/code&gt;, where every direct dependency is a deliberate, written-down decision. The Go equivalent uses the standard library's &lt;code&gt;net/http&lt;/code&gt;, pulls zero external dependencies, and builds, statically linked, to a single 7 MB binary. FreeBSD ships a base userland audited as one source tree, and a ports collection where every port has a named maintainer and a fully declared dependency graph (&lt;code&gt;nginx&lt;/code&gt; and friends arrive that way, complete with provenance). All three have existed for years. None asks the team to import a stranger's recursion.&lt;/p&gt;
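
&lt;p&gt;The Go claim is easy to check. A complete responder, standard library only; the sketch exercises its own handler in-process so it verifies itself (the greeting string is, of course, invented):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

func main() {
	// The entire "framework": one handler on the standard library's mux.
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello from net/http")
	})

	// In production this would be http.ListenAndServe(":8080", mux).
	// Here an httptest server exercises the handler so the sketch checks itself.
	srv := httptest.NewServer(mux)
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;go build&lt;/code&gt; turns this into a single static binary. There is no &lt;code&gt;node_modules&lt;/code&gt; to audit because nothing was downloaded.&lt;/p&gt;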

&lt;h2&gt;
  
  
  The Mechanism
&lt;/h2&gt;

&lt;p&gt;The phrase that does the damage, repeated almost reflexively, is &lt;em&gt;"we do not maintain it; the upstream maintainer does."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Which means, when one reads it carefully: nobody on the team has read the code. Nobody has reviewed the commit history. Nobody has audited the build script. Nobody has checked who has commit access. Nobody has asked what the package was last week, or what it will be next week. The entire audit duty has been outsourced, in writing, to a name on an npm registry page.&lt;/p&gt;

&lt;p&gt;A 644-package install is 644 outsourced audits. The team that ships it has, by default and in good faith, agreed that some other unnamed group will do the reading on their behalf. The other unnamed group, when one goes to look, is mostly volunteers who themselves have not been paid to read the code below them either. The audit duty has been outsourced so many times that nobody is left holding it.&lt;/p&gt;

&lt;p&gt;This is the Beta Stories mechanism: software gets worse not by mistakes, but by accumulation. The decay is in the count. Every package added is a piece of code that the team has decided not to read.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Audit That (Barely) Happened
&lt;/h2&gt;

&lt;p&gt;In January 2021, an account named Jia Tan was created on GitHub. It submitted small, useful patches to xz-utils, the compression library that ships in essentially every major Linux distribution and underlies a great many compressed-archive operations across the Unix world. The patches were good. The maintainer at the time, Lasse Collin, accepted them, as one accepts good patches.&lt;/p&gt;

&lt;p&gt;By the summer of 2022, three accounts (Jia Tan, Dennis Ens, Jigar Kumar) were active in the xz-utils mailing lists and issue tracker, applying coordinated pressure on Lasse Collin to add an additional maintainer with commit rights. Lasse, by his own admission, was burnt out, unpaid, and had been carrying the project alone for years. Jia Tan was eventually granted those rights.&lt;/p&gt;

&lt;p&gt;In February and March 2024, Jia Tan committed the backdoor. The payload was inserted into xz-utils 5.6.0 (released 24 February) and 5.6.1 (released 9 March), hidden inside binary test fixtures and activated by autotools build machinery present only in the release tarballs, so the git tree read clean. The build, when configured the way distribution packagers happened to configure it, linked malicious object code into &lt;code&gt;liblzma&lt;/code&gt;. &lt;code&gt;liblzma&lt;/code&gt; was, in turn, loaded into &lt;code&gt;sshd&lt;/code&gt; on systemd-based distributions, which patch OpenSSH to link against libsystemd for &lt;code&gt;sd_notify&lt;/code&gt; readiness notification. The result was a remote-code-execution backdoor in OpenSSH, triggerable only by a client holding the attacker's private key. CVSS 10.0.&lt;/p&gt;

&lt;p&gt;By 29 March 2024, the backdoored versions had already reached Debian unstable, Fedora 40 rawhide, openSUSE Tumbleweed, Ubuntu testing, and Kali Linux. They were days, in some cases hours, away from rolling into stable releases that would have been on hundreds of millions of servers.&lt;/p&gt;

&lt;p&gt;On 28 March 2024 (the public posting was on the 29th), Andres Freund, a PostgreSQL maintainer at Microsoft, noticed that his SSH login on a Debian unstable system was taking approximately 500 milliseconds longer than usual. He had recently been benchmarking Postgres builds and was attentive to small latency drifts. He ran the login under valgrind, found memory-access errors pointing at &lt;code&gt;liblzma&lt;/code&gt;, traced the chain back through the autotools build, found the obfuscated payload in the test fixtures, and posted the disclosure to the oss-security mailing list that night.&lt;/p&gt;

&lt;p&gt;The internet was saved by one engineer noticing half a second of latency he did not expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Signal
&lt;/h2&gt;

&lt;p&gt;Two and a half years of patient social engineering against an unpaid maintainer. A chain of distribution maintainers signing off on routine version bumps. CI pipelines, test suites, code-review tools, all returning green. SBOM generators producing clean reports. The audit duty had been outsourced so many times that nobody was holding it.&lt;/p&gt;

&lt;p&gt;The signal one watches for, after XZ, is not "is there malicious code in your dependencies." That signal is impossible to read directly. The signal is &lt;em&gt;who is doing the reading&lt;/em&gt;. If the answer is "the upstream maintainer," ask: who is the maintainer, what is their funding, what is their burnout level, who has commit access, what has changed in the past six months. If those questions cannot be answered for a package one ships to production, one is in the avalanche zone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Boring Counter-Move
&lt;/h2&gt;

&lt;p&gt;The counter to dependency-avalanche is not "audit every package", which is impossible at 644 packages. It is &lt;strong&gt;fewer packages&lt;/strong&gt;. Each one read, each one chosen, each one with a known maintainer model.&lt;/p&gt;

&lt;p&gt;The Go standard library is one approach: a curated, audited, batteries-included foundation that removes 80 percent of the reasons one would reach for a third-party package in the first place. Rust takes the opposite end of the same idea: a deliberately small standard library plus a &lt;code&gt;Cargo.toml&lt;/code&gt; that any reviewer can read in a single sitting, with every dependency declared in writing and every transitive crate visible in &lt;code&gt;Cargo.lock&lt;/code&gt;. The FreeBSD base system is a third: a coherent userland built and audited as a single project, where &lt;code&gt;/usr/sbin/sshd&lt;/code&gt; comes from the same source tree as &lt;code&gt;/usr/bin/find&lt;/code&gt; and &lt;code&gt;/usr/bin/awk&lt;/code&gt;, maintained by the same project, released together. The Python standard library, before the wheel-and-PyPI culture took over, was something similar. So was the venerable Unix toolkit.&lt;/p&gt;
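
&lt;p&gt;The stdlib-first posture is small enough to show whole. A sketch in Python, whose standard library the paragraph above credits with the same property: a complete HTTP responder whose audit surface is the interpreter and nothing else. (Illustrative; the names are mine, not one of the article's measurements.)&lt;/p&gt;

```python
# A complete HTTP responder with zero third-party dependencies -- only the
# Python standard library, in the same batteries-included spirit described
# above. Illustrative sketch; class and helper names are mine.
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Welcome"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=0):
    """Serve on the given port (0 picks a free one); returns the server."""
    server = ThreadingHTTPServer(("127.0.0.1", port), Hello)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

&lt;p&gt;The entire dependency review for that service is the CPython release notes. Nothing else ships.&lt;/p&gt;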

&lt;p&gt;None of this scales to "build any product, in any language, in any week". It scales to "build a sustainable system, with a known maintenance posture, over decades". The Beta Stories question is whether one has been paying for the first while pretending it is the second.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Closer
&lt;/h2&gt;

&lt;p&gt;XZ is the post-mortem the industry will keep referencing because it nearly worked. The patient, careful, well-funded version of the same attack will work, and one will not notice in time. The reason is not that the security tools have failed. The reason is that nobody is reading the code, and nobody has been for some time.&lt;/p&gt;

&lt;p&gt;Next time it might not be 500 milliseconds. Next time it might be 50. Next time it might be no perceptible drift at all, until the breach notification arrives in the inbox.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/dependency-avalanche" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>supplychain</category>
      <category>opensource</category>
      <category>freebsd</category>
      <category>security</category>
    </item>
    <item>
      <title>Service Mesh: The Sidecar Tax</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Fri, 24 Apr 2026 07:09:38 +0000</pubDate>
      <link>https://dev.to/vivian-voss/service-mesh-the-sidecar-tax-319f</link>
      <guid>https://dev.to/vivian-voss/service-mesh-the-sidecar-tax-319f</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femucgakqo0hkq7e8v7l2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femucgakqo0hkq7e8v7l2.png" alt="A split balance sheet: on the left (green), THE PITCH lists six service-mesh marketing promises with green ticks — mTLS between every service, observability, traffic splitting, retries and timeouts without code, language independence, the Kubernetes-native way. On the right (red), THE INVOICE lists twelve actual costs with red crosses — a dozen-plus CRDs, 166 percent mTLS latency overhead, 60 MB RAM per sidecar, 60 GB across 1,000 pods, a dedicated platform team, doubled debugging, falling CNCF adoption, Istio's own ambient-mode admission, and several reminders that the alternative has existed since the 1990s. Below, a pair of scales tilts firmly to the red side." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Invoice — Episode 19&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;"mTLS, observability, traffic management, zero-code retries. You need a service mesh."&lt;/p&gt;

&lt;p&gt;Splendid. Let us examine what one is actually paying for.&lt;/p&gt;

&lt;p&gt;A service mesh moves cross-cutting concerns (mTLS, retries, timeouts, traffic shifting, observability) out of application code and into a proxy that sits beside each pod. Istio, the archetype, launched in 2017 as a joint project of Google, IBM, and Lyft. It graduated within the CNCF in July 2023. In the 2024 CNCF Annual Survey, service-mesh adoption across respondents fell to 42 percent, down from 50 percent the year before. That is not a catastrophe. It is, however, the first full-year decline the category has ever posted.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Complexity Invoice
&lt;/h2&gt;

&lt;p&gt;Istio ships over a dozen primary custom resource definitions across three categories (traffic management, security, telemetry) and dozens more through its operator, telemetry plugins, wasm extensions, and gateway APIs. A minimally useful installation comprises:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A control plane&lt;/strong&gt; (&lt;code&gt;istiod&lt;/code&gt;) responsible for configuration distribution, certificate issuance, xDS API serving to every sidecar&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A per-pod sidecar&lt;/strong&gt; (Envoy) injected into every workload, running a second container alongside the application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;An ingress gateway&lt;/strong&gt; at the cluster edge, usually another Envoy in a standalone pod&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mTLS certificates&lt;/strong&gt; rotated by istiod, distributed via SDS to each sidecar&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policy resources&lt;/strong&gt; (PeerAuthentication, RequestAuthentication, AuthorizationPolicy)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Telemetry bindings&lt;/strong&gt; (Telemetry CRDs) to send traces and metrics to external collectors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A platform team&lt;/strong&gt; that knows what each of those does, how they interact, and how to debug any given failure mode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The CNCF's own reports describe Istio as mature, powerful, and "operationally demanding". The last of those is the one to watch. Installing Istio in a fresh cluster takes a senior SRE about two days. Operating it for six months takes roughly 0.5 to 1.0 FTE, scaling upwards with cluster size. Debugging it at 3 a.m. is a skill one acquires by losing two nights of sleep and one customer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Latency Invoice
&lt;/h2&gt;

&lt;p&gt;Every inter-service HTTP or gRPC call now traverses two Envoy proxies: the caller's sidecar, then the callee's sidecar. Adding two proxies to every request path means adding latency. How much is now well-measured.&lt;/p&gt;

&lt;p&gt;A 2025 peer-reviewed performance comparison (published by the DeepNess Lab, &lt;em&gt;Performance Comparison of Service Mesh Frameworks: the mTLS Test Case&lt;/em&gt;) measured the overhead with mTLS enforced on otherwise identical workloads. The results:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mesh&lt;/th&gt;
&lt;th&gt;mTLS overhead vs. baseline&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Istio (sidecar mode)&lt;/td&gt;
&lt;td&gt;+166%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cilium&lt;/td&gt;
&lt;td&gt;+99%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Linkerd&lt;/td&gt;
&lt;td&gt;+33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Istio (ambient mode)&lt;/td&gt;
&lt;td&gt;+8%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The headline number (+166 percent for Istio sidecar with mTLS) is surprising only to people who have never read the benchmark. Envoy is fast; two Envoys in the path plus TLS handshakes and certificate validation are not free. Linkerd's Rust-based &lt;code&gt;linkerd2-proxy&lt;/code&gt; is measurably lighter because it was built for the job, not adapted to it. Ambient mode, which reached general availability in Istio 1.24 (November 2024), replaces per-pod sidecars with a shared node-level ztunnel and produces dramatically less overhead. Ambient is, in effect, Istio's own public admission that the sidecar model had a problem it could not solve by optimisation alone.&lt;/p&gt;

&lt;p&gt;A sidecar also costs memory. The Istio 1.24 documentation reports approximately 60 MB of RAM and 0.20 vCPU per Envoy sidecar at 1,000 HTTP RPS with 1 KB payloads. A cluster with 1,000 pods is therefore paying roughly 60 GB of RAM and 200 vCPU for the mesh before a single byte of application code has executed. Ambient ztunnels are smaller (approximately 12 MB RAM, 0.06 vCPU each) but you now also pay for waypoint proxies where L7 features are enabled. Either way, the total is non-zero.&lt;/p&gt;
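
&lt;p&gt;The fleet arithmetic is worth writing down once. A back-of-envelope sketch (the helper is mine; the per-sidecar figures are the ones quoted above):&lt;/p&gt;

```python
def mesh_overhead(pods, mb_per_sidecar=60, vcpu_per_sidecar=0.20):
    """Fleet-wide sidecar cost, using the per-sidecar figures quoted above
    (Istio 1.24 docs: ~60 MB RAM, 0.20 vCPU at 1,000 HTTP RPS)."""
    ram_gb = pods * mb_per_sidecar / 1000   # decimal GB, as in the text
    vcpu = pods * vcpu_per_sidecar
    return ram_gb, vcpu

ram_gb, vcpu = mesh_overhead(1000)   # 60.0 GB of RAM, 200.0 vCPU
```

&lt;p&gt;Sixty gigabytes and two hundred cores, before the first line of application code runs.&lt;/p&gt;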

&lt;h2&gt;
  
  
  The Debugging Invoice
&lt;/h2&gt;

&lt;p&gt;When the mesh works, it is invisible. When it does not, the request path has doubled and so has the attack surface for bugs. A 500 that arrives at the client might originate in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application code itself&lt;/li&gt;
&lt;li&gt;The caller's Envoy (wrong upstream cluster, circuit breaker tripped)&lt;/li&gt;
&lt;li&gt;The destination's Envoy (connection limits, bad cert rotation)&lt;/li&gt;
&lt;li&gt;A mis-parsed VirtualService or DestinationRule&lt;/li&gt;
&lt;li&gt;The mTLS trust chain (expired intermediate, wrong trust domain)&lt;/li&gt;
&lt;li&gt;istiod failing to push updated configuration within the retry window&lt;/li&gt;
&lt;li&gt;A wasm plugin throwing an exception&lt;/li&gt;
&lt;li&gt;A Kubernetes NetworkPolicy quietly dropping the packet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The distributed tracing one installed to observe the services is now required to understand the mesh itself. Troubleshooting skills become mesh-specific skills, which means they do not transfer and do not scale with engineer headcount in the obvious way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Case For
&lt;/h2&gt;

&lt;p&gt;Service meshes solve a real problem for a real set of operators. If you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run more than roughly 100 microservices with cross-team ownership&lt;/li&gt;
&lt;li&gt;Have strict compliance that mandates mTLS between every internal service&lt;/li&gt;
&lt;li&gt;Operate across multiple clusters or multiple clouds with incompatible primitives&lt;/li&gt;
&lt;li&gt;Need uniform observability across polyglot services that cannot ship an OTel library&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then the tax starts to pay for itself. Everyone else: you are paying for Google's architecture to solve problems you do not, in fact, have.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Alternative
&lt;/h2&gt;

&lt;p&gt;Direct HTTP or gRPC calls between services, over a network one already trusts. This is how the internet worked for three decades before sidecars existed.&lt;/p&gt;

&lt;p&gt;mTLS terminated at a single ingress gateway (HAProxy, NGINX, Envoy itself, or whatever one's load balancer of choice is), because the VPC was a trust boundary before sidecars were a marketing category. Internal traffic over plaintext inside the VPC is fine for the vast majority of workloads, and mTLS between services is a compliance requirement for a minority of them, not an architectural necessity for all of them.&lt;/p&gt;

&lt;p&gt;Tracing and metrics via an OpenTelemetry library linked into each service. OTel is language-agnostic, vendor-neutral, and five lines of initialisation in most runtimes. It sends traces and metrics via OTLP to any collector. No proxy required.&lt;/p&gt;

&lt;p&gt;Retries and timeouts in the client library (Go's &lt;code&gt;http.Client&lt;/code&gt;, Rust's &lt;code&gt;reqwest&lt;/code&gt;, Java's &lt;code&gt;RestTemplate&lt;/code&gt; or OkHttp, Python's &lt;code&gt;httpx&lt;/code&gt;, Node's &lt;code&gt;undici&lt;/code&gt;). All of these ship configurable timeouts and connection pools, and most ship retries; where retries or circuit breaking are not built in, they are a small wrapper or a thin companion library. The retry logic that a service mesh claims to provide "without code changes" is a few lines of configuration in a mature client.&lt;/p&gt;
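
&lt;p&gt;The point about client-side retries is easy to make concrete. A minimal sketch in Python (the helper name and shape are mine, not any particular client's API):&lt;/p&gt;

```python
import time

def with_retries(call, attempts=3, base_delay=0.2, retry_on=(OSError,)):
    """Bounded retries with exponential backoff: the cross-cutting
    behaviour a sidecar sells, as a dozen lines of plain library code.
    `call` is any zero-argument callable -- e.g. a lambda wrapping an
    HTTP request that already carries its own timeout. Name is mine."""
    for attempt in range(attempts):
        try:
            return call()
        except retry_on:
            if attempt == attempts - 1:
                raise                                  # budget exhausted
            time.sleep(base_delay * (2 ** attempt))    # 0.2 s, 0.4 s, ...
```

&lt;p&gt;&lt;code&gt;with_retries(lambda: client.get(url, timeout=2.0))&lt;/code&gt; keeps the timeout in the client and the retry budget in one visible place, with no proxy in the path.&lt;/p&gt;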

&lt;p&gt;Authorisation at the application layer, because only the application knows what "this user may read this document" means. Delegating authorisation to a proxy is delegating it to a component that does not understand the data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;Service mesh is sold as "zero code changes". You get that by paying:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Two proxies of latency&lt;/strong&gt; on every internal call, measurably more under mTLS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A platform team of overhead&lt;/strong&gt; to run istiod, gateways, policies, and upgrades&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A debugger's worth of new moving parts&lt;/strong&gt;: VirtualService, DestinationRule, PeerAuthentication, Envoy configuration, trust chains, wasm plugins&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All to avoid writing retry logic that any mature HTTP client already provides in three lines of configuration.&lt;/p&gt;

&lt;p&gt;The mesh was always, architecturally, a technical solution to a political problem. It existed because microservice teams did not trust each other's code, and a proxy in the middle was a way of enforcing cross-cutting concerns without convincing any one team to adopt them. The proxy became the architecture. The architecture became the operational cost centre. The cost centre produced Ambient Mode, which is the industry's second try at making sidecars not cost what sidecars cost.&lt;/p&gt;

&lt;p&gt;Meanwhile, the original alternative (a library in each service, a trusted network below, and a single ingress gateway at the edge) remained exactly what it has been since approximately 1995.&lt;/p&gt;

&lt;p&gt;The side call was always there. One simply decided it wasn't enterprise enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/blog/service-mesh-sidecar-tax" rel="noopener noreferrer"&gt;Read the full article on vivianvoss.net →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>servicemesh</category>
      <category>kubernetes</category>
      <category>istio</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Integrated by Design: Out Today, With a Few Rather Educational Caveats</title>
      <dc:creator>Vivian Voss</dc:creator>
      <pubDate>Thu, 23 Apr 2026 06:30:44 +0000</pubDate>
      <link>https://dev.to/vivian-voss/integrated-by-design-out-today-with-a-few-rather-educational-caveats-23m8</link>
      <guid>https://dev.to/vivian-voss/integrated-by-design-out-today-with-a-few-rather-educational-caveats-23m8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko451pzbcqgpf0wrfg0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko451pzbcqgpf0wrfg0q.png" alt="A young developer with a pink cat-ear headset and a $whoami T-shirt stands outside a cosy independent bookshop at golden hour, holding a hardcover copy of Integrated by Design up toward the camera. The book's cover shows a pixel-art illustration of the same figure beside a small server rack, against a dark navy background with a neon rainbow stripe. Behind her, warmly-lit shelves of books; at her feet, a small chalkboard sign." width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, &lt;em&gt;Integrated by Design&lt;/em&gt; goes on sale. 371 pages on FreeBSD, from philosophy to practice, with a subtitle one has had rather a lot of time to consider ironically: &lt;em&gt;Why the Best Systems Are the Ones You Don't Notice.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A book about invisible systems, launched by some rather visible problems.&lt;/p&gt;

&lt;p&gt;Five months of writing. Three weeks of final proofs. Then the last 72 hours, dedicated to problems one does not anticipate before one's first book. What follows is, in the interest of honesty and possibly the education of the next first-time author who stumbles on the same rakes, a field report.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Font
&lt;/h2&gt;

&lt;p&gt;The final proof arrived from the printer with a rather small complaint. One of the glyphs in the book's monospace font was quietly broken. Not all glyphs. Just the number 8.&lt;/p&gt;

&lt;p&gt;JetBrains Mono ships in sixteen styles (eight weights, from Thin to ExtraBold, each with a matching italic). The bug: the lower counter of the numeral 8, the small enclosed oval at the bottom of the figure, had an unclosed path in the outline. On every screen one tried (Retina, non-Retina) and on one's own laser printer, it rendered as a clean hollow counter. Printed at full size on matte coated paper by a professional offset press, it was another story: the press dutifully filled the counter in. Every 8 in the book became a smudge.&lt;/p&gt;

&lt;p&gt;The fix required opening the font in a glyph editor, locating the offending Bézier in the lower counter of the 8, closing the path by hand, and re-exporting. The re-export produced a variant I labelled "JetBrains Mono Fixed". It ships with four styles, not sixteen, for the practical reason that the book only uses four: regular, bold, italic, bold italic. The fix was re-applied to each. The source files of every chapter were recompiled, the code listings re-typeset, the PDF regenerated.&lt;/p&gt;

&lt;p&gt;The alternative would have been to re-order a proof print from the printer. A proof print, it turns out, does not arrive the next day. It arrives in seven. The book shipped on its planned launch date only because the surgery happened in-house.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cover
&lt;/h2&gt;

&lt;p&gt;The cover came from a small layered Photoshop composition. Between the first proof and the second, somewhere in the layer stack, an adjustment layer survived that was not meant to.&lt;/p&gt;

&lt;p&gt;On screen the adjustment layer produced nothing visible. On paper, printed in CMYK on the press, it produced large patches where the dark navy of the cover shifted by a few degrees towards grey-black. Subtle individually, plainly wrong once spotted. The patches did not correspond to any artwork one had intended; they were the ghost of a layer one had meant to delete.&lt;/p&gt;

&lt;p&gt;The fix was unglamorous: flatten all adjustment layers before export, audit the PDF with a preflight tool, and repeat until the printed proof and the design intent agreed. A few clean exports later, the kerfuffle was gone. Lesson, pinned to the corkboard: &lt;strong&gt;flatten before export, preflight before upload, or be prepared to explain grey-black patches to your future self.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Price
&lt;/h2&gt;

&lt;p&gt;This one cannot be fixed as quickly. Amazon's KDP form invites one to enter the list price, and quietly means the net price. Local book VAT is then added on top at checkout: 7% in Germany, 9% in the Netherlands, 5.5% in France, 0% in Ireland and the UK (books are zero-rated), and so on across the marketplaces.&lt;/p&gt;

&lt;p&gt;The form, one suspects, was designed from a US perspective where the sticker price is the final price and sales tax appears later at the till. For European marketplaces the same form silently switches meaning, and does not see fit to display the gross total next to the input. For additional fun, the form itself is rearranged for each book type (hardcover, paperback, Kindle), so the lesson learnt from one edition does not quite transfer to the next.&lt;/p&gt;

&lt;p&gt;The round figures I had announced on the book page, I entered as though they were gross. At the moment of writing, a book announced at €90 sells for €96.30 on Amazon.de. Corrections to bring every marketplace back to round gross figures are already submitted. The prices will come down, not up. KDP advertises up to 72 hours for propagation; in practice 12 to 24 is typical. While the change is in flight, the dashboard is rather firmly locked: one cannot pause the launch, one cannot postpone by a day, one cannot edit the offending page.&lt;/p&gt;
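
&lt;p&gt;For the next author filling in the same form, the arithmetic in both directions (the helpers are mine; rates and figures as described above):&lt;/p&gt;

```python
def gross(net_price, vat_rate):
    """What the customer pays when KDP treats the entered figure as net."""
    return round(net_price * (1 + vat_rate), 2)

def net_for_gross(target_gross, vat_rate):
    """What to type into the form so the shelf price comes out round."""
    return round(target_gross / (1 + vat_rate), 2)

# EUR 90 entered as though it were gross, on Amazon.de with 7% book VAT:
assert gross(90.00, 0.07) == 96.30                 # what readers actually saw
# The corrective entry that yields a round EUR 90 at checkout:
assert gross(net_for_gross(90.00, 0.07), 0.07) == 90.00
```

&lt;p&gt;The form wants the second number. It displays neither.&lt;/p&gt;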

&lt;p&gt;A helpful table of the announced round prices (the target) is on the book page. If a euro or two matters, wait until tomorrow. If it does not, the book is on Amazon now, and I am genuinely grateful for every early reader.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Kindle
&lt;/h2&gt;

&lt;p&gt;The Kindle edition has been sitting in Amazon's review queue since yesterday evening. What Amazon reviews for, exactly, is not entirely documented: format compliance, metadata validity, some degree of content review. The queue is famously unhurried and famously in Amazon's hands. It will go live when it goes live; today if one is lucky, tomorrow if not.&lt;/p&gt;

&lt;p&gt;Nothing I can do to accelerate it, which is itself rather a lesson in how much of a modern launch one does not, in fact, control.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Direct-Order Option
&lt;/h2&gt;

&lt;p&gt;A direct-order option from Voss'scher Verlag (the imprint) goes live on the book page later today. It offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Secured EPUB&lt;/strong&gt; (with watermarking tied to the buyer's invoice, non-DRM but traceable)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secured PDF&lt;/strong&gt; (same approach, full-colour, readable on any PDF reader)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct-printed hardcover / paperback&lt;/strong&gt; (shipped from a local print partner, the same content, without Amazon's 40% cut and with the author paying the EU VAT rather than the customer)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For readers who would rather not route through Amazon at all, this is the path. The pricing is competitive with Amazon's round figures, and the author's margin is measurably better. Both parties win: the reader gets a clean file with no DRM lockdown, the author keeps a living wage's worth of royalty per book.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Three days of fixing three problems at launch is not catastrophic. It is normal for a first book; one hopes it will be rather less normal by a fifth. The three rakes one steps on, in order, are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Your tools are not tested at your scale.&lt;/strong&gt; JetBrains Mono is used by millions of developers on screens. Nobody I have been able to find ever printed a long stretch of it on a high-dot-gain matte offset press. The bug was there for years; it only surfaces when one subjects the font to the particular combination of ink, paper, and size that printing a book imposes. The lesson: if you are the first person to do something at a particular scale, expect to be the first person to find the bugs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Exports accumulate history.&lt;/strong&gt; A Photoshop file used across months of revisions carries every mistake one has ever made as an orphan layer. An export tool that does not actively flatten is an export tool that helpfully preserves those mistakes for the printer. The lesson: flatten is not optional, preflight is not paranoid.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Large marketplaces are designed for their dominant audience.&lt;/strong&gt; Amazon's KDP is optimised for a US English-speaking self-publisher selling ebooks to a US audience. For a European hardcover with thirteen marketplaces and thirteen VAT regimes, the interface is a minor archaeology project. The lesson: when you are the edge case of a global platform, budget extra time for the platform not noticing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;None of this is meant as complaint. It is the cost of shipping. The book is, for all of that, out. The systems that carried it here were, as the subtitle argues, mostly invisible: the open-source font with its six-year public development history, the reproducible build pipeline for the PDF, the print-on-demand global logistics chain, the pricing-and-tax engine, the review-queue-and-propagation infrastructure of the world's largest bookshop. Each of these is someone's life work, built on somebody else's earlier life work, and they all very nearly got this book to the shelf on time without my noticing.&lt;/p&gt;

&lt;p&gt;Almost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://vivianvoss.net/print/integrated-by-design" rel="noopener noreferrer"&gt;Read more about the book, and see the transparent pricing table per marketplace →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;By &lt;a href="https://vivianvoss.net" rel="noopener noreferrer"&gt;Vivian Voss&lt;/a&gt; — System Architect &amp;amp; Software Developer. Integrated by Design is out today from Voss'scher Verlag. Follow me on &lt;a href="https://www.linkedin.com/in/vvoss/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; for daily technical writing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>book</category>
      <category>freebsd</category>
      <category>publishing</category>
      <category>writing</category>
    </item>
  </channel>
</rss>
