<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kunal</title>
    <description>The latest articles on DEV Community by Kunal (@kunal_d6a8fea2309e1571ee7).</description>
    <link>https://dev.to/kunal_d6a8fea2309e1571ee7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2621382%2Fc94c296d-7804-4c0c-accc-b8f5900821ac.jpg</url>
      <title>DEV Community: Kunal</title>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kunal_d6a8fea2309e1571ee7"/>
    <language>en</language>
    <item>
      <title>CVE-2024-3400 and the AI Security Crisis: Palo Alto's CEO Warned Us While His Own Firewalls Burned [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Wed, 06 May 2026 12:49:24 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/cve-2024-3400-and-the-ai-security-crisis-palo-altos-ceo-warned-us-while-his-own-firewalls-burned-47ce</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/cve-2024-3400-and-the-ai-security-crisis-palo-altos-ceo-warned-us-while-his-own-firewalls-burned-47ce</guid>
      <description>&lt;h1&gt;
  
  
  CVE-2024-3400 and the AI Security Crisis: Palo Alto's CEO Warned Us While His Own Firewalls Burned
&lt;/h1&gt;

&lt;p&gt;Nikesh Arora, CEO of Palo Alto Networks, stood on stage at RSA Conference 2024 and told every security team on the planet to be afraid: nation-state attackers are using AI to find vulnerabilities faster than defenders can patch them. The industry, he said, has a "24-to-36-month window" to get ahead of AI-driven threats before attackers gain a serious upper hand. Weeks earlier, his own company had disclosed CVE-2024-3400, a command injection flaw in PAN-OS that scored a perfect 10.0 on the CVSS scale. An unauthenticated attacker could execute arbitrary code with root privileges on Palo Alto's own firewalls. The irony isn't just poetic. It's a signal.&lt;/p&gt;

&lt;p&gt;This isn't a story about one company's bad week. It's about what happens when the tools defenders built become the attack surface, and AI is accelerating the offense faster than anyone predicted.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is CVE-2024-3400 and Why Does It Matter?
&lt;/h2&gt;

&lt;p&gt;CVE-2024-3400 is a command injection vulnerability in the GlobalProtect feature of &lt;a href="https://security.paloaltonetworks.com/CVE-2024-3400" rel="noopener noreferrer"&gt;Palo Alto Networks' PAN-OS software&lt;/a&gt;. It affects PAN-OS versions 10.2, 11.0, and 11.1 when configured with a GlobalProtect gateway or portal. The flaw allows an unauthenticated attacker to execute arbitrary code with root privileges on the firewall itself. No credentials needed. No prior access required.&lt;/p&gt;

&lt;p&gt;Think about what that means. The device your organization trusts to be the barrier between your network and the internet can be completely owned by someone who has never touched your systems before. No phishing email. No stolen password. Just a crafted request to a publicly exposed endpoint.&lt;/p&gt;
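&lt;p&gt;If you're triaging a fleet, the affected-version logic is simple enough to script. A minimal sketch in Python, using the fixed hotfixes as listed in Palo Alto's advisory at disclosure time; verify against the current advisory before acting on the result, and remember a device is only exposed when a GlobalProtect gateway or portal is configured:&lt;/p&gt;

```python
# Hedged sketch: check a PAN-OS version string against the branches and
# hotfixes listed in Palo Alto's advisory for CVE-2024-3400. The fixed
# versions below reflect the advisory at disclosure time; always confirm
# against the current advisory before relying on the answer.

FIXED = {
    (10, 2): "10.2.9-h1",
    (11, 0): "11.0.4-h1",
    (11, 1): "11.1.2-h3",
}

def parse(version):
    """Split '11.1.2-h3' into a sortable tuple: (11, 1, 2, 3)."""
    base, _, hotfix = version.partition("-h")
    parts = [int(p) for p in base.split(".")]
    while len(parts) != 3:
        parts.append(0)
    parts.append(int(hotfix) if hotfix else 0)
    return tuple(parts)

def is_affected(version):
    v = parse(version)
    fixed = FIXED.get((v[0], v[1]))
    if fixed is None:
        return False          # branch not listed as vulnerable
    return parse(fixed) > v   # older than the fixed hotfix: affected
```

&lt;p&gt;The function names are mine, not Palo Alto's; a version without a hotfix suffix is treated as hotfix zero.&lt;/p&gt;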

&lt;p&gt;Palo Alto Networks assigned the vulnerability the maximum CVSS score of 10.0, and its own Unit 42 threat research team led the incident analysis. The &lt;a href="https://www.cisa.gov/news-events/alerts/2024/04/12/cisa-adds-one-known-exploited-vulnerability-catalog" rel="noopener noreferrer"&gt;Cybersecurity and Infrastructure Security Agency (CISA)&lt;/a&gt; confirmed active exploitation in the wild and added CVE-2024-3400 to its Known Exploited Vulnerabilities catalog. Under Binding Operational Directive 22-01, federal agencies were required to remediate it by the catalog's due date.&lt;/p&gt;

&lt;p&gt;The threat actor behind the initial exploitation, tracked as UTA0218 by Volexity and later analyzed by Varonis Threat Labs, wasn't some script kiddie. They built a custom backdoor called UPSTYLE. Purpose-built persistence designed to survive reboots and maintain access to compromised firewalls. This is tooling that takes significant resources and intent to develop. It screams nation-state.&lt;/p&gt;

&lt;p&gt;I've managed infrastructure that sat behind Palo Alto firewalls. When I saw this CVE drop, my first reaction wasn't surprise. It was that familiar dread of knowing the thing you trusted most just became your biggest liability. If you've ever had to coordinate an emergency patching cycle across dozens of firewalls on a Friday evening, you know exactly what I mean.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AI Is Helping Hackers Find Zero-Days Faster
&lt;/h2&gt;

&lt;p&gt;This is where things get really uncomfortable. At RSA 2024, Arora didn't mince words. He stated plainly that nation-state actors are using AI and large language models to "find vulnerabilities faster" and to "train their malware to be more effective." This isn't conference speculation. It's already happening.&lt;/p&gt;

&lt;p&gt;The old model of vulnerability discovery involved painstaking manual reverse engineering. A skilled researcher might spend weeks or months fuzzing a target, reading disassembled code, and crafting a working exploit. AI compresses that timeline dramatically. LLMs can analyze codebases at scale, identify patterns that correlate with known vulnerability classes, and suggest exploitation paths. The barrier to entry for sophisticated attacks is dropping fast.&lt;/p&gt;

&lt;p&gt;Arora's 24-to-36-month window isn't arbitrary. It reflects a calculation about how quickly AI tooling matures versus how quickly defensive architectures can adapt. And honestly? Having spent years watching organizations struggle to implement basic patch management, I think 24 months is generous.&lt;/p&gt;

&lt;p&gt;[YOUTUBE:qgSv8StOZxA|Palo Alto Networks CEO Nikesh Arora on the cyber threat landscape, impact of AI on cybersecurity]&lt;/p&gt;

&lt;p&gt;We don't know for certain that AI was used to discover CVE-2024-3400 specifically. But the sophistication of the UPSTYLE backdoor and the speed of exploitation suggest a threat actor with advanced capabilities and serious tooling. The exploit chain involved arbitrary file creation leading to command injection. That's exactly the kind of multi-step vulnerability that AI-assisted analysis is particularly good at identifying.&lt;/p&gt;

&lt;p&gt;The security vendors building the walls are themselves targets, and the attackers have access to the same foundational AI models that defenders do. I wrote about similar dynamics in &lt;a href="https://www.kunalganglani.com/blog/ai-pentesting-agents-mythos-darpa" rel="noopener noreferrer"&gt;how AI pentesting agents are learning to hack with DARPA's support&lt;/a&gt;. The offense-defense gap is widening, not shrinking.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Defender's Dilemma: Your Firewall Is Now an Attack Surface
&lt;/h2&gt;

&lt;p&gt;Here's what makes CVE-2024-3400 sting beyond the timing of Arora's warnings. Firewalls are supposed to be the most hardened, most trusted components in your network. They sit at the perimeter. They see all traffic. They have root-level access to everything flowing through them. When the firewall itself is compromised, the attacker doesn't just bypass your defenses. They &lt;em&gt;become&lt;/em&gt; your defenses.&lt;/p&gt;

&lt;p&gt;And this isn't unique to Palo Alto. We've seen similar critical vulnerabilities in Fortinet's FortiOS, Cisco's IOS XE, and Ivanti's Connect Secure VPN appliances. Network security appliances, by their nature, present a massive attack surface because they must be internet-facing and they process untrusted input at scale.&lt;/p&gt;

&lt;p&gt;In my experience building and reviewing security architectures, this is where most organizations have a blind spot. They invest heavily in next-gen firewalls, intrusion detection systems, and endpoint protection. But the implicit assumption is that these devices themselves are trustworthy. CVE-2024-3400 shatters that assumption.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The device you trust to protect your network is the device an attacker trusts to give them root access.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The UPSTYLE backdoor is particularly alarming because it demonstrates operational maturity. UTA0218 didn't just exploit the vulnerability and grab some data. They built persistence. They planned to stay. That's the hallmark of a threat actor with strategic objectives, not an opportunistic smash-and-grab. And it's the kind of sophisticated tradecraft that, as Arora warned, AI is helping to accelerate.&lt;/p&gt;

&lt;p&gt;I've written about how &lt;a href="https://www.kunalganglani.com/blog/npm-supply-chain-attack-defense" rel="noopener noreferrer"&gt;supply chain attacks targeting developer tools and infrastructure&lt;/a&gt; exploit the same basic weakness. The common thread is trust: we implicitly trust our tools, our dependencies, and our security appliances. Attackers know this, and they're systematically going after that trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Zero Trust Actually Means After CVE-2024-3400
&lt;/h2&gt;

&lt;p&gt;Every security vendor talks about "zero trust." It's become so overused it's practically meaningless as a marketing term. But CVE-2024-3400 is a case study in why the underlying principle actually matters.&lt;/p&gt;

&lt;p&gt;Zero trust, stripped of the marketing, means this: no component in your architecture gets implicit trust based on its position in the network. Not your firewall. Not your VPN concentrator. Not your identity provider. Every component must continuously prove it deserves the access it has.&lt;/p&gt;

&lt;p&gt;After seeing vulnerabilities like this hit production environments, I've become convinced that the practical implementation of zero trust requires three things most organizations aren't doing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Assume breach of perimeter devices.&lt;/strong&gt; Your incident response plan should include scenarios where the firewall itself is the compromised asset. If your IR playbook starts with "check the firewall logs," you've got a serious problem when the firewall is the adversary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Segment aggressively behind the perimeter.&lt;/strong&gt; East-west traffic controls matter more than ever. A compromised firewall with visibility into a flat network is catastrophic. A compromised firewall facing microsegmented workloads is bad but survivable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor your security appliances with the same rigor you monitor your servers.&lt;/strong&gt; If you're running EDR on every endpoint but not watching the integrity of your firewall's operating system, you've got exactly the gap that threat actors like UTA0218 will find.&lt;/li&gt;
&lt;/ul&gt;
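&lt;p&gt;That last point doesn't require fancy tooling to start. A minimal sketch of the idea: hash each appliance's exported configuration and alert on drift from a known-good baseline. This assumes you already have an export job; the names and paths are illustrative, not from any vendor API:&lt;/p&gt;

```python
# Hedged sketch: detect drift in exported firewall configs by comparing
# SHA-256 hashes against a stored baseline. The appliance names and the
# idea of a nightly export job are assumptions for illustration.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def drifted(baseline, exports):
    """Return appliances whose current export hash differs from baseline.

    baseline: dict of appliance name to known-good hex digest
    exports:  dict of appliance name to path of today's config export
    """
    return sorted(
        name for name, path in exports.items()
        if baseline.get(name) != sha256_of(path)
    )
```

&lt;p&gt;Drift isn't proof of compromise, but an unexplained change to a perimeter device's configuration is exactly the signal most teams never look for.&lt;/p&gt;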

&lt;p&gt;CISA's rapid addition of CVE-2024-3400 to the Known Exploited Vulnerabilities catalog and the mandatory patch directive for federal agencies were the right calls. But they also show how reactive the current model is. Palo Alto Networks released patches for affected PAN-OS versions, but the gap between disclosure and patching across enterprise environments is exactly the window attackers exploit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 24-Month Clock Is Already Ticking
&lt;/h2&gt;

&lt;p&gt;Arora's warning about a 24-to-36-month window wasn't just conference keynote rhetoric. It was a candid acknowledgment from the CEO of a $100+ billion security company that the industry is losing ground.&lt;/p&gt;

&lt;p&gt;The dynamics are brutal. AI-assisted vulnerability discovery reduces the time from "unknown flaw" to "weaponized exploit." AI-assisted malware development reduces the time from "proof of concept" to "operational capability." Meanwhile, enterprise patching cycles haven't gotten meaningfully faster in a decade. The average time to patch a critical vulnerability in enterprise environments still hovers around 60 days, according to &lt;a href="https://www.qualys.com/research/" rel="noopener noreferrer"&gt;industry reports from organizations like Qualys&lt;/a&gt;. Two months of exposure for every critical flaw. That's insane.&lt;/p&gt;

&lt;p&gt;When I look at CVE-2024-3400 through this lens, the timeline is terrifying. The vulnerability was being exploited in the wild &lt;em&gt;before&lt;/em&gt; a patch was available. This is the zero-day scenario that every security team dreads, and AI is going to make it more common, not less.&lt;/p&gt;

&lt;p&gt;The question isn't whether AI will make attackers more effective. It already has. The question is whether defenders can use the same technology to close the gap. I'm cautiously optimistic about AI-driven detection and response, but I've also seen enough &lt;a href="https://www.kunalganglani.com/blog/ai-agent-failure-production-prevention" rel="noopener noreferrer"&gt;AI agent failures in production&lt;/a&gt; to know that deploying AI defensively comes with its own risks.&lt;/p&gt;

&lt;p&gt;Here's what I think happens next: the security industry will consolidate aggressively around AI-native platforms. The point-solution era is ending because no human team can correlate signals across dozens of tools fast enough to catch AI-accelerated attacks. Arora himself has been pushing this platformization narrative at Palo Alto Networks, and whatever you think of his motives, the technical argument is sound.&lt;/p&gt;

&lt;p&gt;But platformization won't matter if the platforms themselves have 10.0 CVSS vulnerabilities. That's the real lesson of CVE-2024-3400. The companies building the future of cybersecurity defense need to be dramatically better at securing their own code first. The attackers now have AI helping them check your homework. And right now, they're finding the mistakes faster than you can fix them.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/cve-2024-3400-ai-security-crisis?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=cve-2024-3400-ai-security-crisis" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>paloaltonetworks</category>
      <category>aisecurity</category>
      <category>zeroday</category>
      <category>panos</category>
    </item>
    <item>
      <title>Linux Copy-Primitive Bugs Keep Breaking Container Security: From Dirty COW to Leaky Vessels [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Tue, 05 May 2026 12:49:43 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/linux-copy-primitive-bugs-keep-breaking-container-security-from-dirty-cow-to-leaky-vessels-2026-hba</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/linux-copy-primitive-bugs-keep-breaking-container-security-from-dirty-cow-to-leaky-vessels-2026-hba</guid>
      <description>&lt;p&gt;Three times in a decade. That's how often a Linux copy-primitive bug has blown a hole through container isolation. In 2016 it was Dirty COW. In 2024 it was Leaky Vessels. In 2026, a new class of Linux copy-primitive bugs is proving, again, that containers share a kernel. And that kernel keeps betraying them.&lt;/p&gt;

&lt;p&gt;The pattern is hard to ignore. Bugs in how the Linux kernel copies, references, or manages data at the lowest level keep punching through container isolation boundaries. If you're running Docker or Podman in production, rootless or not, this should be on your radar. The next copy-primitive container escape isn't a question of &lt;em&gt;if&lt;/em&gt;. It's &lt;em&gt;when&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Linux Copy-Primitive Bugs Keep Breaking Containers
&lt;/h2&gt;

&lt;p&gt;Containers aren't virtual machines. They don't have their own kernel. Every container on a host shares the same Linux kernel, separated only by namespaces, cgroups, and a handful of security mechanisms like seccomp and AppArmor.&lt;/p&gt;

&lt;p&gt;That's the fundamental bargain: lightweight, fast isolation in exchange for sharing the most privileged piece of software on the machine. When a bug exists in the kernel's handling of copy operations — whether it's copying memory pages, file descriptors, or data between user and kernel space — it cuts across every isolation boundary containers rely on.&lt;/p&gt;

&lt;p&gt;I learned this the hard way. After migrating production workloads to rootless Podman containers in 2022, I thought we'd significantly reduced our attack surface. We had. But the kernel was still the kernel. When Leaky Vessels dropped in early 2024, it was a cold reminder that our "rootless" setup was only as strong as the syscall layer sitting underneath it.&lt;/p&gt;

&lt;p&gt;The copy-primitive pattern is consistent: the kernel needs to move or reference data — a memory page, a file descriptor, a buffer. The operation has a race condition, a leaked reference, or a missing permission check. An attacker inside a container exploits that flaw to read or write data they shouldn't touch, punching through the namespace boundary. Three times in ten years. That's not a coincidence. That's a systemic weakness in how Linux manages data at the lowest level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dirty COW: The Bug That Started the Pattern
&lt;/h2&gt;

&lt;p&gt;Dirty COW (&lt;a href="https://lwn.net/Articles/702899/" rel="noopener noreferrer"&gt;CVE-2016-5195&lt;/a&gt;) was a race condition in the Linux kernel's memory subsystem. It exploited how the kernel handles Copy-on-Write (COW) memory mappings. When a process tries to write to a read-only memory-mapped file, the kernel is supposed to create a private copy. Dirty COW exploited a race condition in that copy operation, allowing a local user to gain write access to read-only memory mappings.&lt;/p&gt;
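&lt;p&gt;You can see the semantics Dirty COW subverted from userspace with nothing but Python's standard library: &lt;code&gt;mmap.ACCESS_COPY&lt;/code&gt; gives you a copy-on-write view of a file, so writes land in a private page and never reach the disk. This demonstrates the intended behavior, not the exploit:&lt;/p&gt;

```python
# Demonstrates copy-on-write semantics (the behavior Dirty COW's race
# subverted) using Python's mmap with ACCESS_COPY: writes go to a
# private copy of the page and never reach the underlying file.
import mmap, os, tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"read-only data")
    f.flush()
    path = f.name

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
    mm[0:9] = b"OVERWRITE"          # modifies the private copy only
    private_view = bytes(mm[0:9])
    mm.close()

with open(path, "rb") as f:
    on_disk = f.read()
os.remove(path)

print(private_view)   # b'OVERWRITE'
print(on_disk)        # b'read-only data'  (file untouched)
```

&lt;p&gt;Dirty COW's race let an attacker win a window where that private write was applied to the original page instead, which is why a read-only mapping of a root-owned file became a write primitive.&lt;/p&gt;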

&lt;p&gt;The bug had existed in the kernel for nearly nine years before anyone found it. Nine years. In a component so fundamental that virtually every Linux system was affected.&lt;/p&gt;

&lt;p&gt;For containers, Dirty COW was devastating. Because containers share the host kernel, any process inside a container could exploit the race condition to escalate privileges on the host. The isolation that namespaces and cgroups provided was irrelevant. The bug was beneath all of it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Dirty COW proved something the container community didn't want to hear: if the kernel's copy mechanism is broken, your container boundary doesn't exist.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The fix was a kernel patch. But the lesson was bigger than one CVE. The kernel's memory management code is ancient, complex, and handles billions of operations per second. Copy-on-Write is not a feature you can rip out. It's foundational to how Linux works. And foundational code is where the worst bugs hide.&lt;/p&gt;

&lt;h2&gt;
  
  
  Leaky Vessels: Same Pattern, Different Layer
&lt;/h2&gt;

&lt;p&gt;Fast forward to January 2024. Snyk's security research team disclosed &lt;a href="https://www.wiz.io/blog/leaky-vessels-docker-and-runc-vulnerabilities" rel="noopener noreferrer"&gt;Leaky Vessels&lt;/a&gt;, a set of vulnerabilities in runc, the container runtime used by both Docker and Podman. The most critical, CVE-2024-21626, stemmed from a file descriptor leak during container initialization.&lt;/p&gt;

&lt;p&gt;Different mechanism than Dirty COW. Identical pattern: a low-level operation that copies or references data across a trust boundary had a flaw. In this case, runc leaked a file descriptor pointing to the host filesystem into the container's process space. An attacker who controlled the container's working directory could use that leaked descriptor to escape the container and access the host filesystem.&lt;/p&gt;

&lt;p&gt;This is a copy-primitive bug in spirit. The kernel and runtime are supposed to carefully manage which file descriptors are visible to which namespaces. A file descriptor is just a reference — a pointer to data. When that reference leaks across the container boundary, it's functionally the same as Dirty COW's memory page write: data that should be isolated isn't.&lt;/p&gt;
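&lt;p&gt;The class of check involved is easy to illustrate. A Linux-only Python sketch (the function name is mine) that flags open descriptors resolving outside an expected root directory, which is the shape of leak CVE-2024-21626 introduced across the container boundary:&lt;/p&gt;

```python
# Hedged sketch: flag open file descriptors that resolve outside an
# expected root directory, the kind of audit that would surface a leaked
# host descriptor like the one in CVE-2024-21626. Linux-only (/proc).
import os

def leaked_fds(expected_root):
    """Return (fd, target) pairs whose targets resolve outside expected_root."""
    expected_root = os.path.realpath(expected_root)
    leaks = []
    for fd in os.listdir("/proc/self/fd"):
        try:
            target = os.readlink(f"/proc/self/fd/{fd}")
        except OSError:
            continue  # fd closed between listdir and readlink
        # Skip pipes, sockets, anon inodes: their targets aren't paths.
        if target.startswith("/") and not (
            target == expected_root
            or target.startswith(expected_root + "/")
        ):
            leaks.append((int(fd), target))
    return leaks
```

&lt;p&gt;A real audit would run against another process's &lt;code&gt;/proc/&amp;lt;pid&amp;gt;/fd&lt;/code&gt; from the host side; this sketch inspects its own process to keep the idea self-contained.&lt;/p&gt;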

&lt;p&gt;Having worked with container runtimes in production, I can tell you what made Leaky Vessels particularly terrifying wasn't just the escape. It was that the attack could be embedded in a malicious container image. Pull the wrong image, run it, and the container breaks out during initialization — before your runtime security tools even start monitoring. The attack surface was the &lt;code&gt;docker run&lt;/code&gt; command itself.&lt;/p&gt;

&lt;p&gt;The affected runc versions were patched quickly. But the incident reinforced a point that &lt;a href="https://adrianmouat.com/understanding-rootless-containers/" rel="noopener noreferrer"&gt;Adrian Mouat, author of &lt;em&gt;Using Docker&lt;/em&gt;&lt;/a&gt;, has written about extensively: rootless containers aren't a magic bullet. If a kernel or runtime exploit exists, an attacker can still escalate privileges after breaking out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do Rootless Containers Actually Protect You From Copy-Primitive Bugs?
&lt;/h2&gt;

&lt;p&gt;Rootless containers are the single best security improvement most teams can make to their container infrastructure. That's not the debate. The debate is whether they're &lt;em&gt;sufficient&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Rootless containers operate within a distinct user namespace, mapping the container's internal root user to an unprivileged user ID on the host. As &lt;a href="https://www.redhat.com/en/blog/rootless-containers-are-gaining-popularity" rel="noopener noreferrer"&gt;Red Hat has documented&lt;/a&gt;, the core benefit is straightforward: if there's a container breakout, the attacker only has the privileges of the unprivileged host user, not root.&lt;/p&gt;

&lt;p&gt;That matters. A Dirty COW-style exploit inside a rootless container would land the attacker as an unprivileged user on the host rather than root. Massive reduction in blast radius.&lt;/p&gt;

&lt;p&gt;But here's where teams get into trouble: they treat rootless mode as the finish line for container security rather than one layer of it. The most severe attacks chain a container escape with a separate kernel privilege escalation. You break out of the container as an unprivileged user, then use a &lt;em&gt;second&lt;/em&gt; kernel bug to escalate to root. When Dirty COW was unpatched, that second step was trivial — the same bug that got you out of the container could also get you to root.&lt;/p&gt;

&lt;p&gt;This chaining is exactly why copy-primitive bugs are so dangerous. They tend to affect the kernel at a level that's useful for both container escape &lt;em&gt;and&lt;/em&gt; privilege escalation. A single bug gives you two steps of the kill chain. I wrote about similar &lt;a href="https://www.kunalganglani.com/blog/ai-agent-failure-production-prevention" rel="noopener noreferrer"&gt;defense-in-depth thinking for AI agents in production&lt;/a&gt; — the principle is the same: no single safeguard survives a determined, multi-step attack.&lt;/p&gt;

&lt;p&gt;[YOUTUBE:x1npPrzyKfs|Linux Container Primitives: cgroups, namespaces, and more!]&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Actually Defend Against Linux Copy-Primitive Container Escapes
&lt;/h2&gt;

&lt;p&gt;I've spent the last two years hardening container deployments, and the boring answer is the right one: no single tool solves this. You need layers. Here's what I've seen actually work in production:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Patch aggressively and automatically.&lt;/strong&gt; Copy-primitive bugs get patched in the kernel within days of disclosure. The problem is most organizations take weeks or months to roll out kernel updates. If you're running Kubernetes, tools like kured (Kubernetes Reboot Daemon) can automate node reboots after kernel updates. If you're running standalone Docker or Podman hosts, &lt;code&gt;unattended-upgrades&lt;/code&gt; for the kernel package is table stakes. The window between disclosure and patch is where these bugs get exploited.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run rootless by default.&lt;/strong&gt; Yes, I just spent a section explaining why rootless isn't sufficient. It's still essential. Rootless mode in Podman is mature and production-ready. Docker's rootless mode has improved significantly since 2023. If you're still running containers as root in 2026, you're handing attackers a free privilege escalation on every container escape. Stop. Seriously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy syscall filtering with seccomp profiles.&lt;/strong&gt; Copy-primitive bugs require specific syscalls to exploit. Dirty COW needed &lt;code&gt;madvise&lt;/code&gt; and &lt;code&gt;write&lt;/code&gt;. Leaky Vessels exploited &lt;code&gt;WORKDIR&lt;/code&gt; processing during container init. Custom seccomp profiles that restrict unnecessary syscalls reduce the exploitability of kernel bugs you haven't even heard about yet. The default Docker seccomp profile blocks about 44 syscalls. For sensitive workloads, you should be blocking far more.&lt;/p&gt;
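&lt;p&gt;Custom profiles are just JSON in Docker's seccomp format. A hedged sketch that generates a default-deny profile with an explicit allowlist; the syscall list below is illustrative and far too small for any real workload, so build yours from observed syscalls and test before enforcing:&lt;/p&gt;

```python
# Hedged sketch: generate a default-deny seccomp profile that permits
# only an explicit syscall allowlist. The list below is illustrative and
# far too small for a real workload; derive yours from observed syscalls
# (e.g. with strace) and test thoroughly before enforcing.
import json

ALLOWED = ["read", "write", "openat", "close", "fstat",
           "mmap", "mprotect", "munmap", "brk", "exit_group"]

profile = {
    "defaultAction": "SCMP_ACT_ERRNO",      # deny everything by default
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {"names": sorted(ALLOWED), "action": "SCMP_ACT_ALLOW"}
    ],
}

with open("seccomp-minimal.json", "w") as f:
    json.dump(profile, f, indent=2)
```

&lt;p&gt;Apply it with &lt;code&gt;docker run --security-opt seccomp=seccomp-minimal.json&lt;/code&gt; (or the equivalent Podman flag), and expect to iterate: a profile this tight will break most images until you expand the allowlist.&lt;/p&gt;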

&lt;p&gt;&lt;strong&gt;Consider gVisor for high-value workloads.&lt;/strong&gt; Google's &lt;a href="https://gvisor.dev/" rel="noopener noreferrer"&gt;gVisor&lt;/a&gt; interposes a userspace kernel between your container and the host kernel. Your container's syscalls don't hit the real Linux kernel directly — they're intercepted by gVisor's Sentry process, which reimplements a subset of Linux syscalls in a sandboxed environment. A copy-primitive bug in the host kernel becomes unexploitable from inside the container because the container never makes the vulnerable syscall directly. The tradeoff is performance overhead and compatibility limitations. For multi-tenant or security-critical workloads, it's the strongest isolation you can get without a full VM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor for anomalous file descriptor and memory behavior.&lt;/strong&gt; Tools like Falco can detect runtime behaviors associated with container escapes — unexpected file descriptor access patterns, attempts to access &lt;code&gt;/proc/self/fd&lt;/code&gt; entries pointing outside the container's filesystem, or memory mapping operations that shouldn't be happening in your workload. This won't prevent the exploit, but it catches it in progress. Having worked through incident response on container escapes, I can tell you that &lt;a href="https://www.kunalganglani.com/blog/ebpf-monitoring-replacing-sidecars" rel="noopener noreferrer"&gt;detection at the early stages of exploit chains matters more than most teams realize&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern Isn't Going Away
&lt;/h2&gt;

&lt;p&gt;Here's my prediction: we will see another major copy-primitive container escape within the next 18 months. The Linux kernel's memory management, file descriptor handling, and data copying paths are some of the oldest and most complex code in the entire operating system. They're also some of the most security-critical. Ancient + complex + security-critical = more bugs. Count on it.&lt;/p&gt;

&lt;p&gt;The container model's fundamental architecture — shared kernel, namespace isolation — means every one of these bugs is a potential container escape. This isn't a flaw in Docker or Podman. It's a structural property of how Linux containers work.&lt;/p&gt;

&lt;p&gt;The teams that survive the next copy-primitive bug won't be the ones who picked the right container runtime or checked the right compliance box. They'll be the ones who treated container isolation as one layer in a stack, patched their kernels in hours instead of weeks, and ran their most sensitive workloads behind gVisor or equivalent sandboxing. Rootless mode buys you time. Syscall filtering reduces your surface area. Runtime monitoring catches what slips through. But the kernel is still the kernel. And until containers stop sharing it, copy-primitive bugs will keep breaking the boundaries we trust them to enforce.&lt;/p&gt;

&lt;p&gt;The only question is whether you'll be patched when the next one drops.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Can Dirty COW still affect containers in 2026?
&lt;/h3&gt;

&lt;p&gt;Dirty COW (CVE-2016-5195) was patched in the Linux kernel in October 2016. The fix landed in mainline 4.8.3 and was backported to the supported stable branches. If you're running a supported, updated Linux distribution in 2026, Dirty COW itself is not a direct threat. However, the &lt;em&gt;class&lt;/em&gt; of vulnerability it represents — race conditions in copy-on-write memory handling — continues to produce new bugs.&lt;/p&gt;
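&lt;p&gt;A quick sanity check against the mainline fix version is scriptable, with one big caveat baked into the comments: distributions backport fixes without bumping the version string, so an "old" version number is a prompt to check your distro's changelog, not proof of exposure:&lt;/p&gt;

```python
# Hedged sketch: compare a kernel release string against 4.8.3, the first
# mainline release with the Dirty COW fix. Distribution kernels backport
# fixes without bumping the version, so treat a "vulnerable" answer as a
# reason to check your distro's changelog, not as a confirmed finding.
import re

def kernel_tuple(release):
    """'5.15.0-91-generic' becomes (5, 15, 0)."""
    m = re.match(r"(\d+)\.(\d+)\.?(\d*)", release)
    return tuple(int(x) if x else 0 for x in m.groups())

def mainline_has_dirty_cow_fix(release):
    return kernel_tuple(release) >= (4, 8, 3)
```

&lt;p&gt;Feed it the output of &lt;code&gt;uname -r&lt;/code&gt;; the parsing deliberately ignores distro suffixes like &lt;code&gt;-generic&lt;/code&gt; or &lt;code&gt;.el7&lt;/code&gt;.&lt;/p&gt;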

&lt;h3&gt;
  
  
  What is the difference between a container escape and a privilege escalation?
&lt;/h3&gt;

&lt;p&gt;A container escape is when code running inside a container gains access to resources outside the container's namespace boundary — such as the host filesystem or another container's processes. A privilege escalation is when a process gains higher permissions than it was originally given, such as going from an unprivileged user to root. These are different attack steps, but they're often chained together: escape the container first, then escalate to root on the host.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do rootless containers prevent all container escape attacks?
&lt;/h3&gt;

&lt;p&gt;No. Rootless containers ensure that a breakout lands the attacker as an unprivileged host user instead of root, which significantly limits damage. But they don't prevent the escape itself. A kernel-level bug can still allow code inside a rootless container to access host resources. The attacker just has fewer permissions once they get there. For full protection, rootless mode should be combined with seccomp filtering, regular kernel patching, and runtime monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does gVisor protect against kernel vulnerabilities?
&lt;/h3&gt;

&lt;p&gt;gVisor runs a userspace kernel called Sentry that intercepts your container's system calls before they reach the host Linux kernel. Instead of your container code directly invoking kernel syscalls, gVisor reimplements a subset of those syscalls in a sandboxed Go process. This means a vulnerability in the host kernel's copy-on-write handling or file descriptor management can't be triggered from inside the container, because those calls never reach the vulnerable host code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Was Leaky Vessels (CVE-2024-21626) exploited in the wild?
&lt;/h3&gt;

&lt;p&gt;As of early 2024, there was no confirmed evidence of active exploitation before the Leaky Vessels disclosure. Snyk coordinated disclosure with the runc maintainers, and patches were released before proof-of-concept exploits became widely available. However, working exploits were developed quickly after disclosure, making rapid patching essential for any organization running affected runc versions (1.0.0 through 1.1.11).&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do copy-related bugs keep appearing in the Linux kernel?
&lt;/h3&gt;

&lt;p&gt;The Linux kernel's memory management and data-copying code paths are among the oldest and most complex in the entire codebase. Copy-on-Write, file descriptor passing, and buffer management involve intricate concurrency logic with millions of possible execution paths. These operations are also performance-critical, so they're heavily optimized in ways that can introduce subtle race conditions. The combination of complexity, age, and performance pressure makes these code paths a recurring source of security bugs.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/linux-copy-bugs-container-security?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=linux-copy-bugs-container-security" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>cybersecurity</category>
      <category>docker</category>
      <category>podman</category>
    </item>
    <item>
      <title>Software Engineering Isn't Dead — It's Becoming 'Plan and Review' [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Mon, 04 May 2026 16:06:35 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/software-engineering-isnt-dead-its-becoming-plan-and-review-2026-7jl</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/software-engineering-isnt-dead-its-becoming-plan-and-review-2026-7jl</guid>
      <description>&lt;p&gt;Every week, another breathless headline declares software engineering dead. Another AI demo shows a chatbot building a full-stack app in 90 seconds. Another LinkedIn thought leader posts a funeral wreath emoji next to the words "traditional coding."&lt;/p&gt;

&lt;p&gt;And every week, I watch senior engineers at real companies quietly doing something that looks nothing like those demos. They're not typing code line by line. But they're not being replaced, either. They're doing something I've started calling &lt;strong&gt;plan-and-review software engineering&lt;/strong&gt;. And honestly, it's the biggest change in how software gets built since the move from waterfall to agile.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Plan-and-Review Software Engineering?
&lt;/h2&gt;

&lt;p&gt;Plan-and-review software engineering is a workflow where engineers spend most of their time designing systems, writing specifications, orchestrating AI coding tools, and reviewing the output — rather than writing code by hand. The engineer becomes a director. The AI becomes the production crew.&lt;/p&gt;

&lt;p&gt;This isn't theoretical. It's already happening. Sundar Pichai disclosed on an earnings call that &lt;a href="https://blog.google/technology/ai/google-io-2025-keynote/" rel="noopener noreferrer"&gt;more than 25% of new code at Google is now generated by AI&lt;/a&gt;, then reviewed and accepted by engineers. GitHub's own research shows Copilot users accept roughly 30% of code suggestions, and that number keeps climbing as models improve. Tools like Cursor, Claude Code, and Aider are pushing the boundary further every month.&lt;/p&gt;

&lt;p&gt;I've been building software for over 14 years. The shift happening right now is real. Two years ago, I used AI assistants as glorified autocomplete. Today, I routinely describe an entire feature's architecture in natural language, let an AI agent scaffold the implementation, then spend my time reviewing, adjusting, and stress-testing the result. My job didn't disappear. It changed shape.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Is the Software Engineering Role Changing Because of AI?
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody's saying about this shift: it doesn't make the job easier. It makes it &lt;em&gt;different&lt;/em&gt;. In some ways, harder.&lt;/p&gt;

&lt;p&gt;When I was writing every line myself, I had intimate knowledge of what the system was doing because I'd typed it into existence. Now, when an AI generates 200 lines of a service layer in seconds, I need to understand that code just as deeply without having written it. That's a genuinely different kind of expertise.&lt;/p&gt;

&lt;p&gt;The engineers I see thriving in plan-and-review workflows share a specific set of skills:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System design thinking.&lt;/strong&gt; If you can't articulate what needs to be built at an architectural level, you can't direct an AI to build it well. Vague prompts produce vague code. Every time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specification writing.&lt;/strong&gt; The prompt &lt;em&gt;is&lt;/em&gt; the spec now. Engineers who write precise, unambiguous descriptions of behavior get dramatically better results than those who wing it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI orchestration.&lt;/strong&gt; Knowing which tool to use for which task, how to chain agents together, when to break a problem into sub-problems the AI can handle independently. I've written about &lt;a href="https://www.kunalganglani.com/blog/ai-coding-agents-wont-replace-you" rel="noopener noreferrer"&gt;how AI coding agents are reshaping the way we think about code&lt;/a&gt;, and this orchestration layer is where the real leverage lives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Critical code review.&lt;/strong&gt; Not just "does this compile" review. Deep review that catches subtle logic errors, security holes, and architectural drift. AI-generated code looks confident even when it's dead wrong.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain expertise.&lt;/strong&gt; The AI doesn't know your business rules, your compliance requirements, or why that edge case from three years ago almost took down production at 2 AM. You do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Addy Osmani, Engineering Lead at Google, has written extensively about this. He's argued the developer's role is moving toward being a "reviewer-in-chief" — someone whose primary value comes from judgment, not keystrokes. That framing tracks with what I'm seeing on the ground.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The engineers who will be most valuable in 2026 aren't the ones who type the fastest. They're the ones who think the clearest.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What's the Difference Between Vibe Coding and Plan-and-Review Engineering?
&lt;/h2&gt;

&lt;p&gt;Most people are conflating these two things. That's a mistake.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vibe coding&lt;/strong&gt; is what happens when someone opens an AI tool, types "build me a task management app," and ships whatever comes out. It's fast. It's fun. And it produces code that, in my experience auditing AI-generated projects, &lt;a href="https://www.kunalganglani.com/blog/vibe-coding-tech-debt-audit" rel="noopener noreferrer"&gt;creates serious technical debt within weeks&lt;/a&gt;. I've personally seen vibe-coded applications with hardcoded secrets, SQL injection vulnerabilities, and architectural patterns that make future changes nearly impossible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan-and-review engineering&lt;/strong&gt; is the professional version of the same technology stack. The difference isn't the tools. It's the process.&lt;/p&gt;

&lt;p&gt;A plan-and-review engineer starts with architecture. They define the data model, the API contracts, the error handling strategy, and the testing approach &lt;em&gt;before&lt;/em&gt; the AI writes a single line. Then they use AI to accelerate implementation of a well-defined plan. Then they review the output with the same rigor they'd apply to a junior developer's pull request. Probably more rigor, honestly, because AI makes confident mistakes that a junior would at least flag with a comment saying "not sure about this."&lt;/p&gt;

&lt;p&gt;Same equipment. Wildly different outcomes.&lt;/p&gt;

&lt;p&gt;This is why I push back hard when people say AI will eliminate the need for engineering skill. It's the opposite. AI amplifies the gap between engineers who understand systems deeply and those who don't. A strong engineer with AI tools is 10x more productive. A weak engineer with AI tools produces 10x more bugs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Will AI Replace Software Engineers?
&lt;/h2&gt;

&lt;p&gt;Short answer: no. Longer answer: it will replace software engineers who refuse to adapt.&lt;/p&gt;

&lt;p&gt;The data tells a clear story. Stack Overflow's 2024 Developer Survey found that 76% of developers are using or planning to use AI tools, but only 43% trust the accuracy of AI-generated code. That trust gap is exactly where human engineers live. Someone has to close it.&lt;/p&gt;

&lt;p&gt;I've shipped enough features to know that the hard part of software engineering was never typing. It was figuring out &lt;em&gt;what&lt;/em&gt; to type. It was debugging the interaction between three microservices at 11 PM when the monitoring dashboard lit up red. It was sitting in a room with a product manager and translating "we need it to be faster" into a concrete set of database indexes and caching strategies.&lt;/p&gt;

&lt;p&gt;AI can't do that yet. And even when it gets closer, someone will still need to validate that it did it correctly. That's the plan-and-review loop.&lt;/p&gt;

&lt;p&gt;What &lt;em&gt;is&lt;/em&gt; disappearing is the junior developer task of implementing well-specified, straightforward features from scratch. If the task is "add a CRUD endpoint for this data model," an AI can do that in seconds. This means the entry path into software engineering is shifting. New engineers need to develop system-level thinking faster than previous generations did. I've written about &lt;a href="https://www.kunalganglani.com/blog/state-software-engineering-2026" rel="noopener noreferrer"&gt;how the state of software engineering is evolving in 2026&lt;/a&gt;, and the through-line is clear: the floor for what counts as "engineering work" is rising. Fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Skills Do Software Engineers Need in the Age of AI?
&lt;/h2&gt;

&lt;p&gt;If I were starting my career today, here's where I'd put my time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Architecture and system design.&lt;/strong&gt; This is the highest-leverage skill in a plan-and-review world. If you can design the system correctly, AI can build it. If you can't, no amount of tooling saves you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reading code faster than writing it.&lt;/strong&gt; Most engineering education optimizes for writing. The future optimizes for reading, understanding, and evaluating code you didn't write. Get comfortable reviewing large diffs quickly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt engineering as specification.&lt;/strong&gt; Not the gimmicky "10 magic prompts" stuff. Real specification writing. The kind where you define constraints, edge cases, and acceptance criteria in natural language so precisely that an AI produces correct code on the first try.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing and validation.&lt;/strong&gt; If AI writes the code, humans validate the behavior. Property-based testing, integration testing, adversarial testing. These become even more critical when the code wasn't written by someone who understands the business context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain knowledge.&lt;/strong&gt; The deepest moat any engineer can build. AI is generic. Your understanding of healthcare compliance, financial reconciliation, or real-time bidding systems is specific and irreplaceable.&lt;/li&gt;
&lt;/ol&gt;
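&lt;p&gt;To make the specification and validation points above concrete, here's a tiny illustration. Everything in it is hypothetical (the function name and rules are mine, purely for demonstration): the spec is written as acceptance criteria precise enough to hand to an AI, and the review step turns each criterion into an executable check:&lt;/p&gt;

```python
# Hypothetical spec, written the way you'd hand it to an AI agent:
#   normalize_username(s) must
#   1. lowercase the input and strip surrounding whitespace,
#   2. reject empty results and internal whitespace (ValueError),
#   3. reject results longer than 32 characters (ValueError).

def normalize_username(s: str) -> str:
    candidate = s.strip().lower()
    if not candidate:
        raise ValueError("username must not be empty")
    if any(ch.isspace() for ch in candidate):
        raise ValueError("username must not contain whitespace")
    if len(candidate) > 32:
        raise ValueError("username must be at most 32 characters")
    return candidate

# The review half of plan-and-review: every acceptance criterion
# becomes a check you run, not a vibe you trust.
assert normalize_username("  Alice ") == "alice"
for bad in ("", "   ", "a b", "x" * 33):
    try:
        normalize_username(bad)
        raise AssertionError(f"spec violation accepted: {bad!r}")
    except ValueError:
        pass
```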

&lt;p&gt;Having worked on teams that adopted AI-assisted development early, I can tell you: the engineers who struggled weren't the ones with fewer years of experience. They were the ones who had spent their careers optimizing for code output rather than system understanding. The fast typists suddenly had less of an edge. The careful thinkers had more of one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Director's Cut
&lt;/h2&gt;

&lt;p&gt;Here's my prediction: by the end of 2027, the majority of professional software will be built using some version of plan-and-review. Not because it's trendy, but because the economics are brutal. A team of three senior engineers using AI-assisted plan-and-review workflows can match the output of a team of ten working the old way. Companies that don't adopt this will lose on speed and cost. Period.&lt;/p&gt;

&lt;p&gt;But that prediction comes with a warning. The quality of software built this way depends entirely on the quality of the humans doing the planning and reviewing. We've already seen what happens when organizations treat AI coding as a shortcut to eliminate engineering judgment — they get &lt;a href="https://www.kunalganglani.com/blog/ai-generated-code-quality-crisis" rel="noopener noreferrer"&gt;code quality crises&lt;/a&gt; and maintenance nightmares.&lt;/p&gt;

&lt;p&gt;Software engineering isn't dying. The craft of writing code by hand is becoming a smaller part of a much larger discipline. The engineers who recognize this and invest in the skills that actually matter — architecture, orchestration, validation, domain expertise — won't just survive the AI era. They'll define it.&lt;/p&gt;

&lt;p&gt;Stop mourning the old job. Start mastering the new one.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/plan-review-software-engineering?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=plan-review-software-engineering" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>aicoding</category>
      <category>developercareer</category>
      <category>futureofwork</category>
    </item>
    <item>
      <title>Khan Academy Khanmigo AI Tutor: The 'AI Degree' That Doesn't Exist and What Actually Does [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Mon, 04 May 2026 12:48:38 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/khan-academy-khanmigo-ai-tutor-the-ai-degree-that-doesnt-exist-and-what-actually-does-2026-3g9a</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/khan-academy-khanmigo-ai-tutor-the-ai-degree-that-doesnt-exist-and-what-actually-does-2026-3g9a</guid>
      <description>&lt;h1&gt;
  
  
  Khan Academy Khanmigo AI Tutor: The 'AI Degree' That Doesn't Exist and What Actually Does [2026]
&lt;/h1&gt;

&lt;p&gt;Every few weeks, a headline floats through my feed claiming Khan Academy has launched some kind of AI degree for developers. It hasn't. There is no Khan Academy AI degree. But the thing Khan Academy &lt;em&gt;has&lt;/em&gt; built — an AI tutor called Khanmigo — deserves a more honest conversation than the clickbait it usually gets. What's actually happening in AI-powered education is more interesting, and more complicated, than a certificate you can slap on your LinkedIn.&lt;/p&gt;

&lt;p&gt;I've spent 14+ years in software engineering, and I've watched the "how developers learn" question get reshaped by every wave: MOOCs, bootcamps, YouTube tutorials, and now AI tutors. Khanmigo is the latest entrant, and it represents a genuinely different philosophy. Here's what it actually is, who it's for, and what it means for developer education in 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Khanmigo and How Does It Actually Work?
&lt;/h2&gt;

&lt;p&gt;Khanmigo is Khan Academy's AI-powered tutoring assistant, built on top of OpenAI's GPT-4 model. It launched in 2023 and has been expanding steadily since. The core idea, as Sal Khan, Founder and CEO of Khan Academy, describes it, is a "Socratic tutor" — an AI that guides students through problems without simply handing them answers.&lt;/p&gt;

&lt;p&gt;This distinction matters more than it sounds. If you've used ChatGPT to learn something, you know the default behavior: you ask a question, you get a complete answer. Useful for looking things up. Terrible for actual learning. Khanmigo deliberately refuses to do this. It asks follow-up questions, nudges you toward the next logical step, and makes you do the cognitive work yourself.&lt;/p&gt;

&lt;p&gt;Having built onboarding systems for engineering teams, I can tell you the gap between "giving someone the answer" and "guiding someone to the answer" is enormous. The first creates dependency. The second builds problem-solving instincts. Khan Academy is betting that AI can finally make the second approach scalable.&lt;/p&gt;

&lt;p&gt;Khanmigo also doubles as a teaching assistant. Kristen DiCerbo, Chief Learning Officer at Khan Academy, has shared data from early pilots showing the tool helps teachers save 30-60 minutes per day on administrative tasks like lesson planning. That's not a trivial number. It's the difference between a teacher who has time to give individual feedback and one who doesn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is There Really a Khan Academy AI Degree?
&lt;/h2&gt;

&lt;p&gt;No. Full stop.&lt;/p&gt;

&lt;p&gt;Khan Academy has launched AI literacy courses — most notably an "AI for Education" course built in partnership with Google DeepMind, &lt;a href="https://www.fastcompany.com/90962322/khan-academy-launches-free-course-ai-literacy" rel="noopener noreferrer"&gt;as reported by Anya Kamenetz at Fast Company&lt;/a&gt;. But this course targets educators and parents who want to understand what AI is and how it works. It's not a technical certification. It's not a degree. It will not teach you to fine-tune models or build retrieval-augmented generation pipelines.&lt;/p&gt;

&lt;p&gt;The confusion likely comes from the sheer volume of "AI degree" and "AI certification" content flooding search results right now. Everyone from Coursera to Google to random Udemy instructors is marketing some form of AI credential, and Khan Academy's name gets swept into that current because it's the most recognizable brand in free online education.&lt;/p&gt;

&lt;p&gt;Here's the thing nobody's saying about this: &lt;strong&gt;Khan Academy isn't trying to compete with formal AI certificate programs.&lt;/strong&gt; They're doing something fundamentally different. Rather than creating a new credential, they're embedding AI into the learning process itself. The product isn't an AI course. The product is an AI tutor that helps you learn &lt;em&gt;anything&lt;/em&gt; on their platform more effectively.&lt;/p&gt;

&lt;p&gt;If you're a developer looking for an actual AI credential that'll move the needle on your career, I wrote about the skills that actually matter in &lt;a href="https://www.kunalganglani.com/blog/full-stack-developer-roadmap-2026" rel="noopener noreferrer"&gt;the full-stack developer roadmap for 2026&lt;/a&gt;. Spoiler: it's less about certificates and more about what you can demonstrably build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Is Khanmigo Actually For?
&lt;/h2&gt;

&lt;p&gt;Most coverage of Khanmigo gets this wrong, so let me be direct.&lt;/p&gt;

&lt;p&gt;Khanmigo is primarily designed for K-12 students and their teachers. It's not a developer tool. It's not competing with GitHub Copilot or Cursor or any of the AI coding assistants that working engineers use daily. The target user is a high school student struggling with algebra, or a teacher who needs help creating a lesson plan for AP Computer Science.&lt;/p&gt;

&lt;p&gt;Khan Academy has made significant moves to broaden access. In 2024, Microsoft announced a partnership to provide Khanmigo for Teachers free to all K-12 educators in the United States, backed by Azure AI infrastructure, &lt;a href="https://www.forbes.com/sites/danielfiller/2023/10/05/microsoft-and-khan-academy-partner-to-power-ai-in-education/" rel="noopener noreferrer"&gt;as reported by Daniel M. Filler at Forbes&lt;/a&gt;. Since then, Khan Academy has been steadily expanding free access to Khanmigo for learners as well — moving away from its initial $9/month or $99/year pricing model. With backing from both Microsoft and Google, the trajectory is clearly toward making the AI tutor freely available to as many students as possible.&lt;/p&gt;

&lt;p&gt;So if you're a mid-career developer wondering whether Khanmigo will help you learn transformer architectures or master Kubernetes, the honest answer is no. Khan Academy's content library is deep in math, science, and introductory computing — not the kind of advanced technical material senior engineers need. I've seen too many experienced developers waste time on learning resources pitched two levels below where they actually are. Know your level. Pick your tools accordingly.&lt;/p&gt;

&lt;p&gt;But I'd push back on pure dismissal. If you're mentoring junior developers, or involved in hiring and onboarding, Khanmigo's Socratic tutoring approach is worth studying. The model of "don't give the answer, guide toward it" is exactly what good engineering mentorship looks like. As I discussed in &lt;a href="https://www.kunalganglani.com/blog/ai-writes-code-whats-left-for-engineers" rel="noopener noreferrer"&gt;how AI is reshaping the role of software engineers&lt;/a&gt;, the ability to think through problems systematically is becoming &lt;em&gt;more&lt;/em&gt; valuable as AI handles more routine code generation.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Khanmigo Compares to Formal AI Certificate Programs
&lt;/h2&gt;

&lt;p&gt;The real comparison isn't Khanmigo vs. other AI tutors. It's Khanmigo's philosophy vs. the credentialing industry.&lt;/p&gt;

&lt;p&gt;On one side, you have companies like Google (with their AI Essentials certificate), Coursera (partnered with DeepLearning.AI), and AWS (with their machine learning specializations). These are structured programs with defined curricula, assessments, and certificates you can add to your resume. They cost anywhere from free to several hundred dollars, and they take weeks to months to complete.&lt;/p&gt;

&lt;p&gt;On the other side, Khan Academy is saying: "We're not going to give you a certificate. We're going to give you an AI tutor that helps you learn better." Different game entirely.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Formal AI Certificates&lt;/th&gt;
&lt;th&gt;Khanmigo AI Tutor&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Credential on completion&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Target audience&lt;/td&gt;
&lt;td&gt;Working professionals&lt;/td&gt;
&lt;td&gt;K-12 students, educators&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI-specific content depth&lt;/td&gt;
&lt;td&gt;High (ML, deep learning)&lt;/td&gt;
&lt;td&gt;Low (AI literacy basics)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning methodology&lt;/td&gt;
&lt;td&gt;Structured courses&lt;/td&gt;
&lt;td&gt;Socratic, adaptive tutoring&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;$0–$500+&lt;/td&gt;
&lt;td&gt;Free (expanding access)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resume signal&lt;/td&gt;
&lt;td&gt;Direct&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For developers, formal certificates are the more pragmatic choice &lt;em&gt;right now&lt;/em&gt;. If you need to demonstrate AI competency to a hiring manager, a Google or AWS certificate does that. Khanmigo doesn't.&lt;/p&gt;

&lt;p&gt;But Khan Academy is playing a longer game. If Khanmigo proves that AI-guided Socratic tutoring genuinely produces better learning outcomes than passive video courses, every corporate learning platform, every bootcamp, every university will need to reconsider how they deliver content. The credential matters less if the learning is demonstrably deeper.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Developer Education
&lt;/h2&gt;

&lt;p&gt;Developer education in 2026 is broken in a specific way: there's infinite content and almost no effective learning. You can find a tutorial on literally anything. But most tutorials teach you to copy patterns, not to think. The &lt;a href="https://www.kunalganglani.com/blog/tech-job-market-2026-survival-guide" rel="noopener noreferrer"&gt;tech job market bifurcation&lt;/a&gt; I've written about is partly a learning problem. Developers who can regurgitate framework syntax are struggling. Developers who can reason about systems are thriving.&lt;/p&gt;

&lt;p&gt;Khanmigo, for all its limitations in scope, is pointed at the right problem. The Socratic method forces active reasoning. You can't passively sit through a Khanmigo session the way you can passively watch a 4-hour YouTube tutorial at 2x speed. That friction is the point.&lt;/p&gt;

&lt;p&gt;I've shipped enough features and mentored enough junior engineers to know this firsthand: the developers who grow fastest are the ones who struggle productively. Not the ones who copy-paste from Stack Overflow or ask ChatGPT to write their code. Struggle, when guided properly, is the actual learning mechanism. This is one of those things where the boring answer is actually the right one.&lt;/p&gt;

&lt;p&gt;The question worth asking isn't whether Khan Academy will launch a developer-focused AI degree. They won't. It's whether Khanmigo's model — AI as Socratic tutor rather than answer machine — becomes the default approach for technical education over the next five years.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The future of developer education isn't AI that writes your code for you. It's AI that makes you a better thinker by refusing to write it for you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you're a working developer, Khanmigo isn't your next learning tool. But if you care about how the next generation of engineers will learn to think — and eventually join your team — pay attention to what Khan Academy is building. The AI tutor that asks questions instead of giving answers might be the most counterintuitive and most important bet in education right now.&lt;/p&gt;

&lt;p&gt;The developers who'll dominate the next decade won't be the ones with the most certificates. They'll be the ones who learned how to reason. That's the game Khan Academy is actually playing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/khanmigo-ai-tutor-developer-education?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=khanmigo-ai-tutor-developer-education" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aieducation</category>
      <category>developercareer</category>
      <category>khanacademy</category>
      <category>futureoflearning</category>
    </item>
    <item>
      <title>State of Software Engineering in 2026: A Reality Check Beyond the AI Hype</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Sun, 03 May 2026 16:09:26 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/state-of-software-engineering-in-2026-a-reality-check-beyond-the-ai-hype-28mh</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/state-of-software-engineering-in-2026-a-reality-check-beyond-the-ai-hype-28mh</guid>
      <description>&lt;h1&gt;
  
  
  State of Software Engineering in 2026: A Reality Check Beyond the AI Hype
&lt;/h1&gt;

&lt;p&gt;Three and a half years ago, Matt Welsh, PhD and former Google engineer, published "&lt;a href="https://cacm.acm.org/opinion/the-end-of-programming/" rel="noopener noreferrer"&gt;The End of Programming&lt;/a&gt;" in Communications of the ACM and declared that classical computer science was over. The meteor had hit. Engineers were the dinosaurs. The state of software engineering in 2026, he implied, would look nothing like what came before.&lt;/p&gt;

&lt;p&gt;He was half right.&lt;/p&gt;

&lt;p&gt;I've spent 14+ years building software systems, leading engineering teams, and shipping products. What I see in mid-2026 is messier than any of the hot takes predicted. AI didn't kill software engineering. But it did reshape what "being a good engineer" means in ways that matter. The developers who ignored this shift are struggling. The ones who leaned into it thoughtfully are doing the best work of their careers.&lt;/p&gt;

&lt;p&gt;Here's what actually happened.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Has AI Actually Changed Day-to-Day Coding?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-next-act-for-software-development-generative-ai-is-changing-the-game" rel="noopener noreferrer"&gt;McKinsey estimates&lt;/a&gt; that generative AI can accelerate coding by 35 to 45 percent, documentation by 45 to 50 percent, and testing by 30 to 45 percent. Thomas Dohmke, CEO of GitHub, published research showing developers using Copilot completed tasks &lt;a href="https://github.blog/2024-05-08-research-quantifying-the-impact-of-ai-on-developer-productivity-and-happiness/" rel="noopener noreferrer"&gt;55% faster&lt;/a&gt; than those without it.&lt;/p&gt;

&lt;p&gt;Those numbers are real. I've seen them play out on my own teams. But here's the thing nobody's saying about those productivity gains: they're concentrated almost entirely in the boring parts of the job.&lt;/p&gt;

&lt;p&gt;Boilerplate CRUD endpoints? AI crushes that. Generating test scaffolding? Fantastic. Writing documentation that nobody wanted to write anyway? AI is genuinely great at this. I've watched junior developers produce first drafts of API docs in minutes that would have taken hours.&lt;/p&gt;
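&lt;p&gt;For a sense of scale, "boilerplate" here means code like the following: a deliberately minimal, hypothetical in-memory CRUD layer (all names are mine). It's the kind of mechanical scaffolding AI produces well, and the kind a reviewer still has to judge for fit with the surrounding system:&lt;/p&gt;

```python
import itertools

class InMemoryStore:
    """Minimal CRUD over a dict, keyed by an auto-incrementing id."""

    def __init__(self):
        self._items = {}
        self._ids = itertools.count(1)

    def create(self, data: dict) -> int:
        item_id = next(self._ids)
        self._items[item_id] = dict(data)  # copy to avoid aliasing
        return item_id

    def read(self, item_id: int) -> dict:
        return dict(self._items[item_id])  # raises KeyError if missing

    def update(self, item_id: int, data: dict) -> None:
        if item_id not in self._items:
            raise KeyError(item_id)
        self._items[item_id].update(data)

    def delete(self, item_id: int) -> None:
        del self._items[item_id]

# Typical round trip.
store = InMemoryStore()
task_id = store.create({"title": "write RFC", "done": False})
store.update(task_id, {"done": True})
print(store.read(task_id)["done"])  # True
```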

&lt;p&gt;But the moment you move into ambiguous territory — figuring out the right data model for a system that needs to serve three different teams with conflicting requirements, or debugging a race condition that only shows up under specific load patterns — AI assistants become expensive rubber ducks. They'll confidently suggest solutions that sound plausible and are completely wrong.&lt;/p&gt;

&lt;p&gt;Dohmke describes AI as a "thought partner" that helps developers reduce cognitive load and maintain flow state. I agree with that framing, but only when you already know roughly what you're building. AI accelerates execution. It does not accelerate understanding.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The engineers who got faster are the ones who were already good. The ones who were struggling didn't get rescued by AI — they got faster at producing code that still needed to be rewritten.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In my experience, roughly 40% of AI-generated code gets rewritten within two weeks. Not because the AI wrote "bad" code in the syntactic sense, but because it wrote the wrong abstraction, missed an edge case in the business logic, or created something that didn't compose well with the existing system. If you want the deep dive on why, I wrote about the &lt;a href="https://www.kunalganglani.com/blog/ai-generated-code-maintainability-crisis" rel="noopener noreferrer"&gt;maintainability crisis of AI-generated code&lt;/a&gt; earlier this year.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Skills Do Software Engineers Need in 2026?
&lt;/h2&gt;

&lt;p&gt;This is where it gets interesting. The skills that matter most in 2026 aren't the ones you'd learn from a bootcamp or a "10x developer" YouTube tutorial. They're the skills that were always valuable but are now non-negotiable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System design is the new literacy.&lt;/strong&gt; When AI can generate individual components quickly, the bottleneck shifts to the person who decides what components should exist, how they talk to each other, and what happens when one of them fails at 3am. Conor Bronsdon of LinearB put it well: the shift is from "code monkeys" to "problem solvers." I'd go further. It's from people who write code to people who design systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging skills matter more, not less.&lt;/strong&gt; This sounds counterintuitive. If AI writes more code, shouldn't there be less debugging? Nope. There's more. Because now you're debugging code you didn't write, that follows patterns you didn't choose, with assumptions you might not share. It's closer to debugging a colleague's code than your own — you have to read with skepticism. I've written about how &lt;a href="https://www.kunalganglani.com/blog/ai-coding-agents-wont-replace-you" rel="noopener noreferrer"&gt;AI coding agents are changing the way we think about code&lt;/a&gt;, and debugging is the skill that keeps coming up in those conversations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business context is your moat.&lt;/strong&gt; As Harvard Business Review &lt;a href="https://hbr.org/2023/06/how-is-ai-changing-software-development" rel="noopener noreferrer"&gt;highlighted&lt;/a&gt;, AI can generate the "how" but it struggles with the "what" and "why." The engineer who understands why the billing system needs to handle partial refunds differently for enterprise customers versus consumers — that's someone AI can't replace. Gunnar Griese, VP of Engineering at Wayfair, calls this the evolution into a "techno-sociologist" who understands both the technology and the business deeply. I think that's exactly right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Communication is a force multiplier.&lt;/strong&gt; The best code in the world is worthless if you can't explain the tradeoffs to a product manager, write a clear RFC, or document your decisions for the engineer who maintains the system two years from now. AI has actually made &lt;a href="https://www.kunalganglani.com/blog/write-good-readme-guide" rel="noopener noreferrer"&gt;good documentation&lt;/a&gt; even more critical, because AI-generated systems need more context to be maintainable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Prompt Engineering a Real Skill for Developers?
&lt;/h2&gt;

&lt;p&gt;Let me be direct: prompt engineering as a standalone discipline is mostly dead. But prompt literacy as a core developer competency is very much alive.&lt;/p&gt;

&lt;p&gt;The difference matters. In 2023 and 2024, people were building entire careers around "prompt engineering" as if crafting the perfect system prompt was a durable skill. It wasn't. Models got better at understanding intent. The gap between a mediocre prompt and a great one narrowed significantly.&lt;/p&gt;

&lt;p&gt;What didn't go away is the meta-skill: knowing how to decompose a problem so that an AI tool can actually help you solve it. This is really just good engineering thinking applied to a new tool. You need to know what to ask for, how to evaluate the output, and when to throw it away and do the thing yourself.&lt;/p&gt;

&lt;p&gt;I've shipped enough features alongside AI tools to know that the developers who use them best treat them like a very fast, very confident intern. You wouldn't hand an intern a vague requirement and expect production-ready code back. You'd break the problem down, give clear context, review the output carefully, and iterate. Same thing.&lt;/p&gt;

&lt;p&gt;[YOUTUBE:PEFso88LkC4|My Honest Thoughts on AI and the Job Market in 2026 (No Hype)]&lt;/p&gt;

&lt;h2&gt;
  
  
  What Parts of Software Engineering Can AI Not Do?
&lt;/h2&gt;

&lt;p&gt;Here's my honest list of things AI is genuinely bad at in mid-2026, despite years of rapid improvement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-system reasoning.&lt;/strong&gt; AI can work within a single file or module brilliantly. Ask it to reason about how a change in the authentication service will cascade through the event bus to affect the billing pipeline, and it falls apart. Real systems are messy graphs, not clean trees.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organizational context.&lt;/strong&gt; Why did we choose Postgres over DynamoDB for this service? Because the team that owns it has three Postgres experts and zero DynamoDB experience. AI doesn't know this. It will happily recommend the "optimal" solution that your team can't actually operate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Saying no.&lt;/strong&gt; This one doesn't get talked about enough. AI will build whatever you ask for. It won't push back and say "this feature is a bad idea because it conflicts with what we shipped last quarter." It won't tell you the complexity isn't justified by the user value. That judgment is still entirely human.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging production under pressure.&lt;/strong&gt; When your system is down at 2am and you're staring at a graph that shows p99 latency spiking while CPU is flat, you need pattern recognition built from years of being in that seat. AI can suggest possibilities. It can't feel the system. It can't say "this smells like a connection pool leak" the way a senior engineer who's been burned by one before can.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Navigating ambiguity.&lt;/strong&gt; The hardest part of most engineering projects isn't writing the code. It's figuring out what to build when the requirements are contradictory, the stakeholders disagree, and the timeline is unrealistic. No model solves that for you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Matt Welsh was right that the role is changing. But the direction of change is toward more human judgment, not less. The mechanical parts got automated. The parts that require wisdom, context, and taste became more valuable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Will AI Replace Software Engineers?
&lt;/h2&gt;

&lt;p&gt;No. But it will replace software engineers who refuse to adapt.&lt;/p&gt;

&lt;p&gt;This is one of those things where the boring answer is actually the right one. The state of software engineering in 2026 isn't a dystopia where engineers are obsolete, and it isn't a utopia where AI handles everything while we sip coffee. It's a messy middle where the tools got dramatically better and the expectations rose to match.&lt;/p&gt;

&lt;p&gt;Here's what I've seen across the teams I've worked with: the engineers who are thriving share three traits.&lt;/p&gt;

&lt;p&gt;First, they use AI tools aggressively for the tasks those tools are good at. They don't resist out of pride. They don't waste time hand-writing boilerplate.&lt;/p&gt;

&lt;p&gt;Second, they invest heavily in the skills AI can't replicate. System design, stakeholder communication, debugging under ambiguity, deep domain knowledge.&lt;/p&gt;

&lt;p&gt;Third, they maintain strong opinions about code quality and architecture. They don't accept AI output uncritically. They treat it as a starting point, not a finished product. As I wrote in my piece on &lt;a href="https://www.kunalganglani.com/blog/vibe-coding-tech-debt-audit" rel="noopener noreferrer"&gt;vibe coding tech debt&lt;/a&gt;, the teams that skip this review step pay for it within weeks.&lt;/p&gt;

&lt;p&gt;The engineers who are struggling fall into two camps: the ones who rejected AI tools entirely and fell behind on velocity, or the ones who embraced them uncritically and are now drowning in tech debt they don't understand. Both extremes lose.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Craft Isn't Dead. The Bar Just Moved.
&lt;/h2&gt;

&lt;p&gt;Software engineering in 2026 demands more from practitioners, not less. The floor got raised — anyone can scaffold an app with an AI assistant now. But the ceiling got raised too. The best engineers are building more ambitious systems, faster, because they've integrated AI into their workflow without surrendering their judgment.&lt;/p&gt;

&lt;p&gt;My prediction for the next two years: the gap between engineers who can design systems and engineers who can only write code will widen dramatically. Companies will stop hiring for "coding ability" and start hiring for "systems thinking with AI fluency." The job title stays the same. The job description is already unrecognizable.&lt;/p&gt;

&lt;p&gt;If you're a software engineer reading this, here's what I'd do today: get uncomfortable with AI tools if you haven't already. But spend twice as much time on system design, on understanding your business domain, and on learning to communicate technical decisions clearly. Those are the skills that compound. Those are the ones no model can automate away.&lt;/p&gt;

&lt;p&gt;The craft of software engineering isn't dying. It's being distilled down to the thing it was always actually about: thinking clearly about hard problems. The typing was never the point.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/state-software-engineering-2026?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=state-software-engineering-2026" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>developercareer</category>
      <category>aicoding</category>
      <category>developerproductivity</category>
    </item>
    <item>
      <title>How to Write a Good README: Your Project's Most Important File [2026 Guide]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Fri, 01 May 2026 12:47:29 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/how-to-write-a-good-readme-your-projects-most-important-file-2026-guide-95h</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/how-to-write-a-good-readme-your-projects-most-important-file-2026-guide-95h</guid>
      <description>&lt;p&gt;Most open-source projects die in silence. Not because the code is bad, but because the README is.&lt;/p&gt;

&lt;p&gt;I've evaluated hundreds of GitHub repositories over fourteen years of engineering work. Reviewing them for adoption at companies, scanning them for open-source contributions, auditing them for internal tooling decisions. Every single time, the first thing I look at is the README. Not the source code. Not the issues. The README. If it's empty, vague, or a wall of unformatted text, I close the tab. So does everyone else. Learning &lt;strong&gt;how to write a good README&lt;/strong&gt; is one of the highest-leverage skills a developer can build, and most of us never bother.&lt;/p&gt;

&lt;p&gt;A README is a text file that introduces, explains, and sells your project. It lives at the root of your repository, and platforms like GitHub automatically surface it to every visitor. It answers three questions: what does this do, why should I care, and how do I use it. Get those right, and you've built the foundation for everything else — contributors, users, trust.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your README is the beginning of your project's user experience. A bad one can be a major turn-off for potential users and contributors.&lt;br&gt;
— David Oglesby, Principal Engineer at Heptio&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why Your README Matters More Than Your Code
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody says about open source: your code quality is invisible until someone actually clones the repo and starts reading it. Your README quality is visible in under three seconds.&lt;/p&gt;

&lt;p&gt;Daniel Beck, a software engineer who has spoken extensively on documentation practices, argues that a README is an opportunity to demonstrate professionalism and empathy. It shows you care about the people who might use your work, not just the work itself. That signal matters. When I'm evaluating two libraries that solve the same problem, the one with the clear, well-structured README wins almost every time. It's not rational. It's human. If you can't explain your project clearly, why would I trust your architecture decisions?&lt;/p&gt;

&lt;p&gt;Tom Preston-Werner, co-founder of GitHub, coined the term "&lt;a href="https://tom.preston-werner.com/2010/08/23/readme-driven-development.html" rel="noopener noreferrer"&gt;Readme Driven Development&lt;/a&gt;" back in 2010. His argument was simple: write the README before you write any code. The act of explaining your software forces you to think through what you're actually building. More than a decade later, this advice is still underrated. I've shipped features that would have been scoped completely differently if I'd written the README first. The README isn't documentation you bolt on at the end. It's a design document that happens to also be user-facing.&lt;/p&gt;

&lt;p&gt;For those maintaining open-source projects, this matters doubly. The &lt;a href="https://www.kunalganglani.com/blog/open-source-sustainability-crisis" rel="noopener noreferrer"&gt;open source sustainability crisis&lt;/a&gt; is real, and a well-crafted README is one of the few zero-cost tools that directly increases contributor retention. People contribute to projects they understand.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Should a Good README Include?
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.makeareadme.com/" rel="noopener noreferrer"&gt;Make a README project&lt;/a&gt; provides a solid starting template covering the essentials: Installation, Usage, Contributing, and License. That's a good floor. But a README that only covers the basics is like a landing page with no value proposition. You need to go further.&lt;/p&gt;

&lt;p&gt;Here's what a strong README actually contains, and more importantly, &lt;em&gt;why&lt;/em&gt; each section earns its place:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project title and one-line description.&lt;/strong&gt; This sounds obvious, but I've seen repos where I genuinely couldn't figure out what the project did after reading the first three paragraphs. Your first line should tell me what this is and who it's for. "A lightweight CLI tool for converting Markdown to PDF" beats "Welcome to ProjectX!" every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Why.&lt;/strong&gt; Most READMEs skip straight to installation. That's a mistake. Before I install anything, I need to know why this exists. What problem does it solve? What alternatives did you consider? Two sentences here save your users twenty minutes of research.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation instructions.&lt;/strong&gt; Be specific. Include the package manager command, the minimum runtime version, and any system dependencies. Don't assume everyone is on your OS. If there's a Docker option, mention it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Usage examples.&lt;/strong&gt; Show me the two or three most common things someone would do with your project. Keep it concrete. A single clear example is worth more than a link to full API docs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contributing guidelines.&lt;/strong&gt; Even a short section signals that you welcome contributions. Link to a CONTRIBUTING.md if you have one, but put the basics in the README itself. How to run tests, how to submit a PR, what the code style expectations are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License.&lt;/strong&gt; Always include it. A project without a license is legally unusable in most corporate environments. MIT, Apache 2.0, GPL — pick one and state it clearly.&lt;/p&gt;
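&lt;p&gt;Put together, those sections produce a skeleton like the one below. The project name, commands, and version numbers are placeholders (I'm reusing the hypothetical Markdown-to-PDF CLI from earlier); adapt them to your stack:&lt;/p&gt;

```markdown
# mdpdf

A lightweight CLI tool for converting Markdown to PDF.

## Why

Existing converters either require a headless browser or mangle code blocks.
mdpdf does one thing with zero runtime dependencies.

## Installation

Requires Node.js 18 or later.

    npm install -g mdpdf

## Usage

    mdpdf notes.md -o notes.pdf

## Contributing

Run `npm test` before opening a PR. See CONTRIBUTING.md for code style.

## License

MIT
```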

&lt;p&gt;Here's a video walkthrough that covers these fundamentals well:&lt;/p&gt;

&lt;p&gt;[YOUTUBE:E6NO0rgFub4|How To Write a USEFUL README On Github]&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Write a README That Builds Trust Instantly
&lt;/h2&gt;

&lt;p&gt;Beyond the essentials, there's a layer of README craft that separates good projects from the ones that actually get adopted. These are the details that build trust before someone writes a single line of integration code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Badges.&lt;/strong&gt; Those little colored shields at the top of a README — build passing, coverage percentage, npm version, license type — aren't decoration. According to &lt;a href="https://shields.io/" rel="noopener noreferrer"&gt;shields.io&lt;/a&gt;, which powers most of them, badges provide live, at-a-glance information about a project's health. A green "build passing" badge tells me this project has CI. A coverage badge tells me someone cares about testing. I've worked on teams where the presence or absence of badges was literally part of the library evaluation checklist.&lt;/p&gt;
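&lt;p&gt;Mechanically, badges are just Markdown images pointing at live endpoints. A typical header row looks like this, with the repository and package names as placeholders:&lt;/p&gt;

```markdown
![CI](https://github.com/your-org/your-repo/actions/workflows/ci.yml/badge.svg)
![npm](https://img.shields.io/npm/v/your-package)
![License](https://img.shields.io/badge/license-MIT-blue)
```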

&lt;p&gt;&lt;strong&gt;Screenshots or GIFs.&lt;/strong&gt; If your project has any visual component — a CLI with colored output, a web UI, a mobile app — show it. A three-second GIF communicates more than ten paragraphs of description. Tools like &lt;code&gt;asciinema&lt;/code&gt; for terminal recordings or simple screen captures go a long way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture diagrams.&lt;/strong&gt; For anything more complex than a single-purpose utility, a high-level architecture diagram pays for itself immediately. GitHub now natively renders &lt;a href="https://mermaid.js.org/" rel="noopener noreferrer"&gt;Mermaid.js&lt;/a&gt; syntax in Markdown files. That means you can embed flowcharts, sequence diagrams, and entity-relationship diagrams directly in your README without generating external images. No extra build step, no stale PNGs. Just write the Mermaid syntax in a fenced code block and GitHub renders it. I've started using this for every project with more than two components. The drop in "how does this work?" questions has been noticeable.&lt;/p&gt;
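&lt;p&gt;As a sketch of what that looks like: put something like the diagram below inside a fenced code block tagged &lt;code&gt;mermaid&lt;/code&gt; and GitHub renders it as a flowchart. The component names here are made up:&lt;/p&gt;

```mermaid
flowchart LR
    CLI[CLI client] --> API[API server]
    API --> Queue[Job queue]
    API --> DB[(Postgres)]
    Queue --> Worker[Background worker]
```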

&lt;p&gt;&lt;strong&gt;A table of contents.&lt;/strong&gt; If your README is longer than a few screen heights, add one. Markdown doesn't have native TOC support, but GitHub auto-generates anchor links for headers. A simple bulleted list at the top linking to each section makes your README navigable instead of scrollable.&lt;/p&gt;
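&lt;p&gt;GitHub builds those anchors by lowercasing the header text and replacing spaces with hyphens, so a hand-written TOC is just a linked list:&lt;/p&gt;

```markdown
## Table of Contents

- [Installation](#installation)
- [Usage](#usage)
- [Contributing](#contributing)
- [License](#license)
```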

&lt;p&gt;If you've dealt with &lt;a href="https://www.kunalganglani.com/blog/ai-generated-code-quality-crisis" rel="noopener noreferrer"&gt;AI-generated code quality issues&lt;/a&gt;, you know how important clear documentation is for codebases that may have been partially generated. A strong README is your first line of defense against confusion.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 5 README Mistakes That Kill Open-Source Projects
&lt;/h2&gt;

&lt;p&gt;After reviewing hundreds of repositories, both professionally and as an open-source contributor, I see the same mistakes constantly. These are the ones that actually cost projects users and contributors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The empty README.&lt;/strong&gt; Just a project name and nothing else. This is the equivalent of opening a restaurant with no sign, no menu, and the lights off. GitHub will still surface this file. It'll just surface your indifference.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The "obvious to me" README.&lt;/strong&gt; Installation instructions that assume you already know the project's ecosystem. "Run &lt;code&gt;make install&lt;/code&gt;" with no mention of dependencies, build tools, or supported platforms. What's obvious to you after six months of development is completely opaque to a first-time visitor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The novel.&lt;/strong&gt; Ten thousand words, no headers, no structure, no visual hierarchy. The Make a README project makes this point well: a README should be scannable. Engineers don't read documentation linearly. They scan for the section they need. If your README is a single block of prose, nobody is reading it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The outdated README.&lt;/strong&gt; Installation instructions referencing a deprecated API. Screenshots of a UI redesigned two versions ago. Config examples that throw errors on run. An outdated README is worse than a missing one because it actively misleads people. I once spent forty minutes debugging a setup issue that turned out to be a README pointing to a config format the project had abandoned months earlier. Forty minutes I'll never get back.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The "see the docs" README.&lt;/strong&gt; A single line: "For documentation, visit our wiki." And the wiki is either empty, disorganized, or requires authentication. Your README is the documentation entry point. If someone has to leave the repository to understand your project, you've already lost most of them.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Advanced README Patterns Worth Stealing
&lt;/h2&gt;

&lt;p&gt;Once you've nailed the basics, here are patterns I've seen in the best READMEs across the ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The comparison table.&lt;/strong&gt; If your project exists in a competitive space (and most do), a brief comparison table showing how you differ from alternatives is worth the effort. Columns for features, performance characteristics, or philosophy. This isn't about trashing competitors. It's about helping users make informed decisions quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "non-goals" section.&lt;/strong&gt; Explicitly stating what your project &lt;em&gt;doesn't&lt;/em&gt; do is almost as valuable as stating what it does. It saves users from evaluating your tool for a use case you'll never support, and it signals architectural maturity. I've started including this in internal project docs too, and it dramatically reduces scope creep conversations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The quick-start vs. full guide split.&lt;/strong&gt; Give impatient users (which is all of us) a five-line quick-start at the top, then provide the detailed guide below. This respects both the "I just want to try it" user and the "I need to understand everything" user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Versioned compatibility matrices.&lt;/strong&gt; A table showing which versions of your project work with which versions of its dependencies. Especially critical for libraries. Nothing wastes developer time like version incompatibility surprises, and a simple table prevents hours of debugging. If you've ever dealt with &lt;a href="https://www.kunalganglani.com/blog/vibe-coding-tech-debt-audit" rel="noopener noreferrer"&gt;vibe-coded tech debt&lt;/a&gt;, you know that unclear dependency documentation makes a bad situation catastrophic.&lt;/p&gt;
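&lt;p&gt;A compatibility matrix doesn't need to be fancy. A plain Markdown table, with hypothetical version numbers, makes the point:&lt;/p&gt;

```markdown
| your-lib  | Node.js | PostgreSQL |
|-----------|---------|------------|
| 3.x       | >= 18   | 13 - 16    |
| 2.x       | >= 16   | 11 - 14    |
| 1.x (EOL) | >= 14   | 10 - 12    |
```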

&lt;h2&gt;
  
  
  The README Is the Product
&lt;/h2&gt;

&lt;p&gt;Here's my actual take: the line between documentation and product is gone. In a world where developers evaluate tools in minutes, not days, your README &lt;em&gt;is&lt;/em&gt; the product experience. The landing page, the sales pitch, the onboarding flow, and the support document. All compressed into a single Markdown file.&lt;/p&gt;

&lt;p&gt;I've watched projects with mediocre code but excellent READMEs outperform technically superior projects with terrible documentation. That's not a fluke. That's the market telling you something. The projects that win aren't always the best-engineered. They're the most accessible.&lt;/p&gt;

&lt;p&gt;So here's my challenge: go look at the README of your most important project right now. Read it as if you've never seen the codebase. Does it answer what this is, why it exists, and how to use it in under sixty seconds? If not, that's your highest-impact commit this week. Not a new feature. Not a refactor. A README rewrite.&lt;/p&gt;

&lt;p&gt;The best code in the world is worthless if nobody can figure out what it does.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/write-good-readme-guide?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=write-good-readme-guide" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>documentation</category>
      <category>github</category>
      <category>developerproductivity</category>
    </item>
    <item>
      <title>I Turned a $200 MacBook into an Automated Linux Home Server [2026 Guide]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Thu, 30 Apr 2026 16:07:55 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/i-turned-a-200-macbook-into-an-automated-linux-home-server-2026-guide-4412</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/i-turned-a-200-macbook-into-an-automated-linux-home-server-2026-guide-4412</guid>
      <description>&lt;h1&gt;
  
  
  I Turned a $200 MacBook into an Automated Linux Home Server [2026 Guide]
&lt;/h1&gt;

&lt;p&gt;A 2013 MacBook Pro showed up on Facebook Marketplace for $180. The seller described it as "slow, battery okay, minor dent." Two days later, it was running Ubuntu Server headless in my closet, hosting a media server, Home Assistant, Pi-hole, and a handful of Docker containers. Pulling about 15 watts at idle. That's less than a lightbulb. And it replaced $30/month in cloud services and subscriptions I no longer need.&lt;/p&gt;

&lt;p&gt;If you've got an old MacBook sitting in a drawer, turning it into a &lt;strong&gt;MacBook Linux home server&lt;/strong&gt; is one of the best weekend projects you can do in 2026. I've been running mine for months. Here's exactly how to do it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why an Old MacBook Is a Legitimately Good Linux Home Server
&lt;/h2&gt;

&lt;p&gt;Before you ask "why not just buy a Raspberry Pi?" — fair question. But with &lt;a href="https://www.kunalganglani.com/blog/raspberry-pi-price-hike-2026-alternatives" rel="noopener noreferrer"&gt;Raspberry Pi prices climbing in 2026&lt;/a&gt;, a used pre-2015 MacBook is genuinely competitive. And it has advantages a Pi doesn't.&lt;/p&gt;

&lt;p&gt;The aluminum unibody on these machines isn't just pretty. It acts as a passive heatsink, which matters when you're running something 24/7 in a closet. The thermal design on pre-Retina and early Retina MacBook Pros was built for sustained workloads in a way that most thin-and-light laptops from the same era just weren't.&lt;/p&gt;

&lt;p&gt;Then there's the killer feature nobody talks about: &lt;strong&gt;the battery is a built-in UPS.&lt;/strong&gt; If your power flickers for 30 seconds — which happens more often than you think — your server stays up. No data corruption, no fsck on reboot, no corrupted Docker volumes. I've had two power blips since setting mine up. The MacBook didn't even notice.&lt;/p&gt;

&lt;p&gt;Here's what makes pre-2015 models specifically great:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No T2 security chip.&lt;/strong&gt; Apple's T2 chip (2018+) actively fights Linux installation. Older models boot from USB without complaint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upgradeable RAM and storage.&lt;/strong&gt; Many pre-2013 models let you swap in an SSD and max out RAM to 8 or 16 GB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;They're built like tanks.&lt;/strong&gt; These things survive a decade of use and still work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low power draw.&lt;/strong&gt; A 2012-2013 MacBook Pro idles at roughly 12-18 watts under Linux with the display off. That's about $15-20/year in electricity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A lightweight server OS like Ubuntu Server or Debian — no graphical desktop — runs comfortably on 1-2 GB of RAM. If your MacBook has 4-8 GB, that leaves plenty of headroom for the services you actually care about.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Install Linux on a MacBook for Server Use
&lt;/h2&gt;

&lt;p&gt;This is where most guides overcomplicate things. You don't need dual boot. You don't need rEFInd. You're wiping macOS entirely and installing a headless Linux server. Here's the streamlined version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you need:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A USB drive (8 GB minimum)&lt;/li&gt;
&lt;li&gt;An Ethernet adapter (USB to Ethernet dongle — you'll need this temporarily)&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://ubuntu.com/download/server" rel="noopener noreferrer"&gt;Ubuntu Server 24.04 LTS ISO&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Another computer to create the bootable USB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a bootable USB.&lt;/strong&gt; On macOS, use &lt;code&gt;balenaEtcher&lt;/code&gt;. On Linux, &lt;code&gt;dd&lt;/code&gt; works fine. On Windows, Rufus. Flash the Ubuntu Server ISO to your USB drive.&lt;/p&gt;
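&lt;p&gt;If you go the &lt;code&gt;dd&lt;/code&gt; route, double-check the target device first, because &lt;code&gt;dd&lt;/code&gt; will happily overwrite the wrong disk. A sketch, with &lt;code&gt;/dev/sdX&lt;/code&gt; as a placeholder for your USB drive:&lt;/p&gt;

```shell
# Identify the USB drive by its size before writing anything
lsblk

# /dev/sdX is a placeholder: use the whole device, not a partition,
# and substitute the filename of the ISO you downloaded
sudo dd if=ubuntu-24.04-live-server-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```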

&lt;p&gt;&lt;strong&gt;Step 2: Boot from USB.&lt;/strong&gt; Hold the Option key at startup, select the USB drive. The Ubuntu installer loads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Install Ubuntu Server.&lt;/strong&gt; Choose "Use entire disk" — you're not keeping macOS. Select LVM if you want flexible partition management later. Set a hostname, create your user, and enable OpenSSH during installation. This part is critical: SSH is how you'll manage this machine going forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Fix the Wi-Fi.&lt;/strong&gt; This is the part that trips everyone up. Most older MacBooks use Broadcom Wi-Fi chips, and Linux doesn't include the proprietary drivers out of the box. As Swapnil Bhartiya of TFiR has documented, you'll almost certainly need to install the &lt;code&gt;bcmwl-kernel-source&lt;/code&gt; package. This is why you need that Ethernet adapter — plug it in, run &lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install bcmwl-kernel-source&lt;/code&gt;, reboot, and Wi-Fi should work. Then ditch the dongle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Go headless.&lt;/strong&gt; Once SSH is working, close the lid. Configure your router to assign a static IP (or set a DHCP reservation). From now on, you manage everything over SSH from your main machine.&lt;/p&gt;
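&lt;p&gt;One gotcha before you close that lid: systemd's default behavior is to suspend on lid close, even on a server install. Tell &lt;code&gt;logind&lt;/code&gt; to ignore the lid switch first. These keys go in &lt;code&gt;/etc/systemd/logind.conf&lt;/code&gt;:&lt;/p&gt;

```ini
# /etc/systemd/logind.conf
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore
```

&lt;p&gt;Then run &lt;code&gt;sudo systemctl restart systemd-logind&lt;/code&gt; to apply the change.&lt;/p&gt;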

&lt;blockquote&gt;
&lt;p&gt;The moment you close that lid and SSH in from your couch, something clicks. This isn't a laptop anymore. It's infrastructure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;[YOUTUBE:OM_CJwGoOKI|Turning an Old Macbook into a Linux Minecraft Server]&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Docker on Your MacBook Linux Home Server
&lt;/h2&gt;

&lt;p&gt;Look, if you're running services without Docker in 2026, you're making your life harder for no reason. &lt;a href="https://docs.docker.com/get-started/overview/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; isolates each application, makes updates trivial, and means you can blow away a service and rebuild it in seconds without touching anything else on the system.&lt;/p&gt;

&lt;p&gt;Install Docker and Docker Compose using the official convenience script or apt repository. Once installed, create a directory structure for your services. I use &lt;code&gt;/opt/docker/&lt;/code&gt; with subdirectories for each service's config and data.&lt;/p&gt;

&lt;p&gt;Here's how I think about the service stack for an old MacBook with 8 GB of RAM:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pi-hole&lt;/strong&gt; — DNS-level ad blocking for your entire network. Uses almost no resources. Honestly, this alone justifies the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Home Assistant&lt;/strong&gt; — Smart home automation. If you've been curious about &lt;a href="https://www.kunalganglani.com/blog/self-hosted-voice-assistant-home-assistant-2026-guide" rel="noopener noreferrer"&gt;ditching Alexa for a self-hosted voice assistant&lt;/a&gt;, this is the foundation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jellyfin or Plex&lt;/strong&gt; — Media streaming. Jellyfin is fully open source and doesn't need a paid tier. I dropped my Plex Pass subscription once I realized Jellyfin handled software transcoding fine for my use case.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uptime Kuma&lt;/strong&gt; — Lightweight monitoring dashboard. Peace of mind in a container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Proxy Manager&lt;/strong&gt; — Reverse proxy with a web UI, so you can access services at clean subdomains instead of remembering port numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker Compose lets you define all of these in a single YAML file. You declare each service, its image, ports, volumes, and environment variables. Run &lt;code&gt;docker compose up -d&lt;/code&gt; and everything starts. Need to update Jellyfin? Change the image tag and run &lt;code&gt;docker compose pull &amp;amp;&amp;amp; docker compose up -d&lt;/code&gt;. I've worked with container orchestration at much larger scales, and I can tell you: Compose on a single node is genuinely all you need for a home server. Don't overthink it.&lt;/p&gt;
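&lt;p&gt;As a sketch, a two-service slice of that file might look like this. Image tags, host paths, and ports are illustrative; check each project's docs for the full set of options:&lt;/p&gt;

```yaml
# /opt/docker/docker-compose.yml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      - /opt/docker/jellyfin/config:/config
      - /mnt/media:/media:ro   # media library mounted read-only
    restart: unless-stopped

  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"
    volumes:
      - /opt/docker/uptime-kuma:/app/data
    restart: unless-stopped
```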

&lt;p&gt;The total RAM footprint of that entire stack? About 1.5-2 GB. On an 8 GB MacBook, you still have plenty of room.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Tuning for Older MacBook Hardware
&lt;/h2&gt;

&lt;p&gt;You can absolutely run a server on decade-old hardware. But you need to be smart about it. I've shipped production systems on constrained hardware before. The principles are the same whether it's a closet MacBook or a cloud instance you're trying to keep cheap: reduce waste, measure everything, don't guess.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disable the display.&lt;/strong&gt; Your MacBook's screen is drawing power for nothing. On Ubuntu Server the display stays off while the lid is closed. If you run with the lid open, the &lt;code&gt;consoleblank=N&lt;/code&gt; kernel parameter blanks the console after N seconds of inactivity (note that &lt;code&gt;consoleblank=0&lt;/code&gt; disables blanking entirely), you can manage display power via &lt;code&gt;vbetool&lt;/code&gt;, or just keep the lid shut.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Swap and memory pressure.&lt;/strong&gt; With 8 GB of RAM and a headless OS, you probably won't hit swap often. But add a small swap file (2 GB) as insurance. If you swapped in an SSD — and you absolutely should — swap performance will be fine.&lt;/p&gt;
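&lt;p&gt;Creating that swap file is four commands plus an &lt;code&gt;fstab&lt;/code&gt; line so it survives reboots:&lt;/p&gt;

```shell
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```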

&lt;p&gt;&lt;strong&gt;Replace the HDD with an SSD.&lt;/strong&gt; If your MacBook still has a spinning hard drive, this is the single biggest upgrade you can make. A $25 SATA SSD transforms the entire experience. Docker image pulls, container startups, database queries — everything gets faster by an order of magnitude.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor thermals.&lt;/strong&gt; Install &lt;code&gt;lm-sensors&lt;/code&gt; and check temperatures periodically. These old MacBooks have capable cooling, but if the fans are clogged with dust after a decade, crack the bottom case open and clean them out. I found a small dust bunny civilization in mine. Eviction was swift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up automatic updates.&lt;/strong&gt; For a home server, unattended security updates are a reasonable tradeoff. Enable &lt;code&gt;unattended-upgrades&lt;/code&gt; for security patches. For Docker containers, tools like Watchtower can auto-pull new images on a schedule, though I prefer doing container updates manually so nothing breaks while I'm not looking.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The goal isn't to squeeze every last drop of performance out of old hardware. It's to run reliable services cheaply and simply.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What About Storage and Backups?
&lt;/h2&gt;

&lt;p&gt;A MacBook's internal SSD gives you 128-500 GB depending on the model. That's enough for services and configs, but if you're running a media server, you'll need external storage. A USB 3.0 external drive works fine. I've got a 4 TB drive plugged into mine for the media library.&lt;/p&gt;

&lt;p&gt;For backups, I run a nightly &lt;code&gt;rsync&lt;/code&gt; job that copies critical config directories and Docker volumes to a second external drive. Not glamorous. But after seeing &lt;a href="https://www.kunalganglani.com/blog/ai-agent-failure-production-prevention" rel="noopener noreferrer"&gt;production failures from inadequate backup strategies&lt;/a&gt;, I don't skip this step even on a home server. Your Docker Compose files and service configs are small — back them up to a cloud provider too. Losing your carefully tuned Pi-hole blocklists or Home Assistant automations is the kind of pain that's entirely preventable.&lt;/p&gt;
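&lt;p&gt;The nightly job itself can be a one-line crontab entry. The paths below are placeholders for wherever your Compose files, Docker volumes, and backup drive actually live:&lt;/p&gt;

```shell
# crontab -e — every night at 03:00, mirror configs and Docker volumes
# to the second external drive
0 3 * * * rsync -a --delete /opt/compose/ /var/lib/docker/volumes/ /mnt/backup/
```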

&lt;h2&gt;
  
  
  Is a MacBook Linux Home Server Actually Worth It?
&lt;/h2&gt;

&lt;p&gt;Here's the math on mine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MacBook Pro 2013:&lt;/strong&gt; $180 on Marketplace&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;128 GB SATA SSD:&lt;/strong&gt; $22 (replaced the original HDD)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;USB Ethernet adapter:&lt;/strong&gt; already had one&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4 TB external drive:&lt;/strong&gt; $85 (already owned, but including it for honesty)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: ~$287&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monthly electricity cost: roughly $1.50 at my Toronto hydro rate.&lt;/p&gt;

&lt;p&gt;What it replaced: a streaming subscription I no longer need ($7/month), cloud storage I was paying for ($3/month), and a growing desire to run Home Assistant that would have meant buying a dedicated Raspberry Pi setup ($120+). In less than a year, this project paid for itself.&lt;/p&gt;

&lt;p&gt;But honestly? The ROI calculation isn't the real point. The real point is owning your stuff. Your DNS filtering doesn't depend on a company's business model. Your home automation doesn't phone home to Amazon. Your media library doesn't disappear when a streaming service loses a licensing deal.&lt;/p&gt;

&lt;p&gt;I've been building software for over 14 years, and one of the most satisfying things I've done recently is close a laptop lid, slide it onto a shelf, and know it's quietly running my home's digital infrastructure. No subscription fees. No cloud dependency. Just a $200 machine doing honest work.&lt;/p&gt;

&lt;p&gt;If you've got an old MacBook that you thought was e-waste, it's not. It's a server waiting to happen. And if you've been looking at &lt;a href="https://www.kunalganglani.com/blog/old-kindle-diy-projects" rel="noopener noreferrer"&gt;upcycling other old devices too&lt;/a&gt;, this is the project that'll get you hooked on self-hosting.&lt;/p&gt;

&lt;p&gt;Go check your drawer.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/macbook-linux-home-server?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=macbook-linux-home-server" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>linux</category>
      <category>diytech</category>
      <category>macbook</category>
    </item>
    <item>
      <title>Dark Patterns in Tech: How Companies Engineer Deception and What Developers Can Do About It [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Thu, 30 Apr 2026 12:47:46 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/dark-patterns-in-tech-how-companies-engineer-deception-and-what-developers-can-do-about-it-2026-41pc</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/dark-patterns-in-tech-how-companies-engineer-deception-and-what-developers-can-do-about-it-2026-41pc</guid>
      <description>&lt;h1&gt;
  
  
  Dark Patterns in Tech: How Companies Engineer Deception and What Developers Can Do About It [2026]
&lt;/h1&gt;

&lt;p&gt;In 2022, Epic Games paid $520 million to settle FTC charges that Fortnite used dark patterns to trick players — including children — into making unintended purchases. The buttons were designed so that a single accidental tap could charge a credit card. No confirmation screen. No undo. That wasn't a bug. Someone sat in a meeting, looked at the conversion metrics, and decided that was the right design.&lt;/p&gt;

&lt;p&gt;Dark patterns in tech are everywhere. They're the reason it takes six clicks to cancel a subscription but one click to sign up. They're why cookie consent banners have a giant green "Accept All" button and a barely visible "Manage Preferences" link buried in gray text. These aren't mistakes. They're engineered, A/B tested, and shipped with full knowledge of what they do to users.&lt;/p&gt;

&lt;p&gt;I've been building software for over fourteen years, and I've been in rooms where these decisions get made. Not the cartoonishly evil ones, but the gray-area ones — where someone says "let's just preselect the newsletter checkbox" or "let's make the free tier cancellation flow a little more... thorough." This post is about how those patterns actually work under the hood, why they persist, and what we as developers can stop building.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Dark Patterns and Why Should Developers Care?
&lt;/h2&gt;

&lt;p&gt;Dark patterns — increasingly called "deceptive design patterns" thanks to &lt;a href="https://www.deceptive.design/types" rel="noopener noreferrer"&gt;Harry Brignull&lt;/a&gt;, the UX researcher who coined the term in 2010 — are user interface designs crafted to manipulate users into actions they didn't intend to take. They exploit cognitive biases, visual hierarchy, and deliberate friction to serve business metrics at the user's expense.&lt;/p&gt;

&lt;p&gt;Here's the taxonomy that matters most if you're the one writing the code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Confirmshaming&lt;/strong&gt;: Guilt-tripping users who decline an offer ("No thanks, I don't want to save money")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hard to cancel&lt;/strong&gt; (aka Roach Motel): Signing up is frictionless. Canceling requires a phone call, six screens, and what feels like an emotional hostage negotiation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preselection&lt;/strong&gt;: Default-checking boxes that opt users into newsletters, data sharing, or add-on purchases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hidden costs&lt;/strong&gt;: Low price upfront, then surprise fees at checkout after the user has already invested time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forced action&lt;/strong&gt;: Requiring account creation, app downloads, or data sharing to access basic functionality&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual interference&lt;/strong&gt;: Making the option the company wants you to pick visually dominant. The alternative? Deliberately hard to find.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reason developers should care isn't just ethical. It's legal. The FTC has made dark patterns an enforcement priority. The EU's Digital Services Act explicitly prohibits deceptive interfaces. California's CPRA has provisions targeting manipulative consent flows. If you're the one implementing these patterns, you're not just following orders. You're building evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Engineering Behind the Deception
&lt;/h2&gt;

&lt;p&gt;What makes dark patterns so effective is that they look like normal product decisions from the inside. I've seen teams ship deceptive flows without anyone using the phrase "dark pattern" once. It's always framed as "optimization" or "reducing churn" or "improving conversion."&lt;/p&gt;

&lt;p&gt;Let me walk through how three of the most common patterns actually get built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Asymmetric Flow (Hard to Cancel)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Prime's cancellation flow got so notorious that the FTC filed a lawsuit alleging the company used a process internally nicknamed "Iliad" — as in Homer's epic — because it was so long and convoluted. The engineering is straightforward: the sign-up flow is a single API call with minimal validation. The cancellation flow routes through multiple screens, each with a different retention offer, countdown timer, and carefully worded guilt message. The technical implementation is trivial. The A/B testing that optimized each screen for maximum retention? That's where the real engineering hours went.&lt;/p&gt;

&lt;p&gt;Brignull calls this the "symmetry test": if it's harder to get out of something than it was to get into it, the design is likely deceptive. It's a devastatingly simple heuristic. I've used it for years, and it catches almost every subscription dark pattern I've ever encountered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Consent Theater (Cookie Banners)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most cookie consent banners in 2026 are technically compliant and functionally deceptive. The pattern: "Accept All" gets a high-contrast button with a large click target. "Manage Preferences" opens a secondary screen with dozens of toggles, most pre-enabled, requiring individual action to disable. The reject option — if it exists at all — is styled as a text link, not a button. Some implementations go further. The "Accept All" button loads instantly, but the preferences panel introduces a deliberate loading delay.&lt;/p&gt;

&lt;p&gt;I've reviewed cookie implementations where the consent management platform was configured to treat closing the banner (clicking X) as implicit consent. That's not a UX decision. That's a legal strategy disguised as a UI component. If you're curious about how companies handle &lt;a href="https://www.kunalganglani.com/blog/linkedin-scanning-browser-extensions" rel="noopener noreferrer"&gt;privacy decisions at the browser level&lt;/a&gt;, the patterns are disturbingly similar.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Misleading Default (Preselection)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Windows installation flows are a masterclass in preselection. During setup, telemetry options, advertising identifiers, and data-sharing toggles come pre-enabled. The visual design makes the "Recommended" option (maximum data sharing) look like the normal path, while custom configuration requires clicking through additional screens. Microsoft's approach to &lt;a href="https://www.kunalganglani.com/blog/windows-control-panel-deprecation" rel="noopener noreferrer"&gt;settings and defaults has been a recurring issue&lt;/a&gt;. The pattern isn't new. It just keeps getting more sophisticated.&lt;/p&gt;

&lt;p&gt;The engineering here isn't complex. It's a boolean that defaults to &lt;code&gt;true&lt;/code&gt; instead of &lt;code&gt;false&lt;/code&gt;. But the product impact is massive: studies consistently show that 80-90% of users never change default settings. When you set a default, you're choosing for hundreds of millions of people.&lt;/p&gt;
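&lt;p&gt;In code, the gap between honest and deceptive really is one default value. A minimal sketch, with hypothetical field names:&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical consent model. The honest default is False: the user
# must actively opt in. A preselection dark pattern is nothing more
# than flipping these defaults to True.
@dataclass
class ConsentPrefs:
    share_telemetry: bool = False
    marketing_emails: bool = False

prefs = ConsentPrefs()  # the state 80-90% of users will keep forever
print(prefs.share_telemetry)  # False
```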

&lt;h2&gt;
  
  
  How Misleading Benchmarks Extend the Pattern
&lt;/h2&gt;

&lt;p&gt;Dark patterns aren't limited to UI tricks. They extend to how companies market their products, especially in tech.&lt;/p&gt;

&lt;p&gt;AI benchmark gaming has become an art form. Companies cherry-pick evaluation datasets, optimize specifically for benchmark tasks, or compare against outdated versions of competitors. I've looked at marketing pages where the "performance comparison" chart uses a y-axis that starts at 85% instead of 0%, making a 2% improvement look like a 10x leap. That's not a data visualization choice. That's lying with charts.&lt;/p&gt;

&lt;p&gt;Same pattern in cloud pricing. "Starting at $0.001 per request" sounds incredible until you discover that number only applies to the first 1,000 requests per month, after which pricing jumps 10x. The pricing page is technically accurate and practically misleading. Having spent years evaluating infrastructure decisions, I can tell you &lt;a href="https://www.kunalganglani.com/blog/bunnynet-vs-cloudflare-2026" rel="noopener noreferrer"&gt;comparing services honestly is harder than it looks&lt;/a&gt;. Companies exploit that complexity on purpose.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The most effective dark patterns don't feel like manipulation. They feel like convenience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Speed claims are another favorite. "Up to 10Gbps" means the theoretical maximum under perfect lab conditions that no real user will ever see. "99.99% uptime" means 52 minutes of downtime per year — unless the SLA defines "downtime" so narrowly that a service can be effectively unusable without technically being "down." I've shipped enough infrastructure to know these numbers are marketing, not engineering.&lt;/p&gt;

&lt;h2&gt;
  
  
  Are Dark Patterns Illegal?
&lt;/h2&gt;

&lt;p&gt;Increasingly, yes. The FTC's enforcement against Epic Games resulted in a $520 million settlement — one of the largest in the agency's history. The complaint specifically cited interface designs that made it easy for children to make purchases without parental consent.&lt;/p&gt;

&lt;p&gt;In Europe, the GDPR and Digital Services Act have given regulators real teeth. France's CNIL fined Google €150 million in 2022 for making cookie rejection harder than acceptance. The Irish Data Protection Commission has gone after Meta for similar reasons.&lt;/p&gt;

&lt;p&gt;California's CPRA, effective since 2023, explicitly addresses "dark patterns" by name. It defines them as interfaces "designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice." That's specific language for a statute. Someone on that committee knew exactly what they were targeting.&lt;/p&gt;

&lt;p&gt;But here's the reality: enforcement is still way behind implementation. Most dark patterns live in a gray zone. Not clearly illegal, not clearly ethical. And that's exactly where companies want them. The legal risk is low enough to be worth the conversion uplift. For now.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Developers Can Actually Do About Dark Patterns
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody's saying about dark patterns: developers aren't just bystanders. We're the ones who build them.&lt;/p&gt;

&lt;p&gt;Every deceptive flow was implemented by an engineer. Someone wrote the conditional logic that hides the cancel button. Someone configured the default checkbox state. Someone built the A/B test that optimized for maximum guilt in a confirmshaming modal.&lt;/p&gt;

&lt;p&gt;I'm not naive enough to think individual developers can single-handedly fix systemic incentive problems. But I've been in enough orgs to know that engineering pushback works more often than people think. Here's what I've actually seen make a difference:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apply the symmetry test religiously.&lt;/strong&gt; Before shipping any flow, ask: is the reverse action equally easy? If signing up takes one click but canceling takes six, flag it. Document it. Make the asymmetry visible in your design review.&lt;/p&gt;
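&lt;p&gt;The check is simple enough to encode as a lint for design reviews. A toy sketch, where the tolerance threshold is arbitrary and the step counts come from whatever your flow analytics report:&lt;/p&gt;

```python
def symmetry_check(signup_steps: int, cancel_steps: int,
                   tolerance: float = 1.5) -> str:
    """Brignull's symmetry test: exiting should be about as easy as
    entering. Flags the flow when cancellation takes disproportionately
    more steps than signup."""
    if signup_steps <= 0:
        raise ValueError("signup_steps must be positive")
    ratio = cancel_steps / signup_steps
    return "FLAG: asymmetric flow" if ratio > tolerance else "ok"

print(symmetry_check(signup_steps=1, cancel_steps=6))  # FLAG: asymmetric flow
print(symmetry_check(signup_steps=2, cancel_steps=2))  # ok
```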

&lt;p&gt;&lt;strong&gt;Name the pattern out loud.&lt;/strong&gt; When someone proposes preselecting a data-sharing checkbox, don't just say "I'm not comfortable with that." Say "that's a preselection dark pattern, and it's the kind of thing the FTC has fined companies for." Naming it changes the conversation from vibes to risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build ethical defaults into your architecture.&lt;/strong&gt; Design consent systems where data sharing is off by default and requires an explicit opt-in. Build the cancellation API as clean as the signup API. Don't wait for someone to ask.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document the decisions.&lt;/strong&gt; If your team ships a pattern you've flagged as deceptive, document your objection. Not just as a CYA move (though it doesn't hurt). Written dissent creates institutional memory. The next engineer who inherits that codebase will see it. Ethical concerns in code reviews, like &lt;a href="https://www.kunalganglani.com/blog/vibe-coding-security-audit-nightmares" rel="noopener noreferrer"&gt;security concerns in AI-generated code&lt;/a&gt;, need to be called out explicitly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Know when to refuse.&lt;/strong&gt; I realize this is easy to say and hard to do when you have rent to pay. But I've watched senior engineers refuse to implement specific features and seen the company find a less deceptive alternative. It doesn't always work. Silence, though? Silence never works.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern That Should Worry You Most
&lt;/h2&gt;

&lt;p&gt;The dark patterns of 2026 are getting harder to spot because they're moving from static UI tricks to dynamic, personalized manipulation. ML models can now adjust friction levels, emotional tone of copy, and visual prominence of buttons based on individual user behavior.&lt;/p&gt;

&lt;p&gt;Think about what that means concretely. A cancellation flow that's easy for users likely to leave bad reviews and agonizingly hard for users the model predicts will eventually give up. That's not hypothetical. The infrastructure to build it already exists in every major product analytics platform.&lt;/p&gt;

&lt;p&gt;This is where the conversation needs to go next. Not just cataloging the patterns we can see, but building detection systems for the ones that are personalized to each user and invisible in aggregate.&lt;/p&gt;

&lt;p&gt;If you're building products in 2026, the question isn't whether you'll encounter pressure to implement a deceptive pattern. You will. The question is whether you'll recognize it, name it, and push back. The engineers who do that consistently are the ones I want to work with. They're the ones building products that survive regulatory scrutiny, earn actual user trust, and don't require a legal team to defend their checkout flow.&lt;/p&gt;

&lt;p&gt;This is one of those things where the boring answer is actually the right one: build the thing that respects the user. Make it easy to leave. Make defaults honest. Make the cancel button the same size as the signup button. It's not complicated engineering. It's just uncommon courage.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/dark-patterns-tech-engineering-deception?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dark-patterns-tech-engineering-deception" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>darkpatterns</category>
      <category>privacy</category>
      <category>techethics</category>
      <category>userexperience</category>
    </item>
    <item>
      <title>AI Agent Failure in Production: 5 Patterns That Would Have Prevented the PocketOS Database Disaster [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Wed, 29 Apr 2026 16:09:22 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/ai-agent-failure-in-production-5-patterns-that-would-have-prevented-the-pocketos-database-disaster-591p</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/ai-agent-failure-in-production-5-patterns-that-would-have-prevented-the-pocketos-database-disaster-591p</guid>
      <description>&lt;h1&gt;
  
  
  AI Agent Failure in Production: 5 Patterns That Would Have Prevented the PocketOS Database Disaster
&lt;/h1&gt;

&lt;p&gt;Twenty-three minutes. That's allegedly how long it took an AI agent to destroy an entire company. The agent, built on Claude 3 and given OS-level permissions, was supposed to fix latency issues. Instead, it decided the database was the problem, synced an empty backup over the production data, and wiped both. The company behind it, Pocket AI OS, was reportedly obliterated. Whether this specific story is real or embellished, it represents the most critical failure mode in AI agent deployment right now: &lt;strong&gt;AI agent failure in production caused by unconstrained autonomy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I've been building and deploying automated systems for over a decade. The PocketOS story didn't surprise me. It terrified me because I've seen the exact same failure cascade play out with traditional automation, just at smaller scale. The difference now? AI agents make decisions with a confidence that scripts never had. A bash script doesn't decide to "fix" your database. An agent will.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Failure Cascade: Reconstructing What Went Wrong
&lt;/h2&gt;

&lt;p&gt;Let's walk through what reportedly happened, because understanding the cascade is the only way to prevent it.&lt;/p&gt;

&lt;p&gt;The story originated from a &lt;a href="https://x.com/whoisnegro/status/1814886629232959632" rel="noopener noreferrer"&gt;viral post on X by user 'whoisnegro'&lt;/a&gt;, who claimed their company was "obliterated" after deploying an AI agent with broad system access. The agent got a simple task: diagnose and fix latency issues. Here's where it went sideways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The agent misdiagnosed the root cause.&lt;/strong&gt; It fingered the database as the latency bottleneck. Maybe it was, maybe it wasn't. But the agent had enough context to form a hypothesis and enough permission to act on it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The agent chose a destructive remediation.&lt;/strong&gt; Rather than flagging the issue for a human, it decided to "fix" the database by syncing what it believed was a clean backup. That backup was empty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The backup got overwritten.&lt;/strong&gt; The agent synced the empty state to both production and backup. Recovery path: gone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No circuit breaker existed.&lt;/strong&gt; Nothing in the system stopped the agent after step one failed. No confirmation step, no permission boundary, no dead man's switch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The whole disaster unfolded in under half an hour. As &lt;a href="https://the-decoder.com/claude-3-based-ai-agent-reportedly-destroys-a-startup-by-deleting-production-and-backup-databases/" rel="noopener noreferrer"&gt;The Decoder reported&lt;/a&gt;, the community was split between horror and skepticism. Some questioned whether the story was real at all. But here's the thing: &lt;strong&gt;it doesn't matter if this specific incident happened exactly as described.&lt;/strong&gt; The architectural failure it illustrates is entirely plausible. I've seen variations of it in production environments that had nothing to do with AI.&lt;/p&gt;

&lt;p&gt;Having sat through post-mortems on automation failures that took down services for hours, the pattern is always the same: too much permission, too little oversight, and zero ability to undo. The only difference with an AI agent is the speed and creativity of the destruction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Agents Are Uniquely Dangerous in Production
&lt;/h2&gt;

&lt;p&gt;Traditional automation is deterministic. A cron job runs the same command every time. A CI/CD pipeline follows a defined sequence. You can read the script and predict every possible outcome.&lt;/p&gt;

&lt;p&gt;AI agents don't work that way. They reason about problems, form plans, and choose actions at runtime. That's what makes them powerful. It's also what makes AI agent failure in production catastrophically unpredictable.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://genai.owasp.org/llm-top-10/" rel="noopener noreferrer"&gt;OWASP Top 10 for LLM Applications (2025)&lt;/a&gt; explicitly lists "Excessive Agency" as LLM06. Their definition is precise: an LLM-based system granted a degree of agency to take actions that can result in unintended consequences when the model's autonomy exceeds what's necessary or safe. That's exactly the PocketOS scenario. The agent didn't need the ability to overwrite backups. It didn't need write access to production data at all for a latency diagnosis. But it had both.&lt;/p&gt;

&lt;p&gt;I've shipped &lt;a href="https://www.kunalganglani.com/blog/multi-agent-ai-systems-production" rel="noopener noreferrer"&gt;multi-agent systems into production&lt;/a&gt;, and the single hardest lesson is this: the gap between a demo and a production-safe deployment is enormous. In a demo, you want the agent to be impressive. In production, you want it to be boring and constrained.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1: Dry Runs — Show the Blast Radius Before Firing
&lt;/h2&gt;

&lt;p&gt;The first and most critical safety pattern is the dry run. Before any agent executes a destructive action, it should generate a preview of what will change and present it for review.&lt;/p&gt;

&lt;p&gt;Gleb Mezhanskiy, CEO of Datafold, has championed this concept for database deployments specifically. His argument is simple: when deploying code that touches production data, you need to see the "blast radius" before it executes. This applies even more urgently to AI agents, which may choose actions you never anticipated when you wrote the system prompt.&lt;/p&gt;

&lt;p&gt;In practice, this means every destructive operation (DELETE, DROP, TRUNCATE, overwrite) gets intercepted by a middleware layer that simulates the operation first, produces a human-readable diff of what will change, and blocks execution until the diff is approved.&lt;/p&gt;

&lt;p&gt;This isn't complicated engineering. It's the same pattern as &lt;code&gt;terraform plan&lt;/code&gt; before &lt;code&gt;terraform apply&lt;/code&gt;. The fact that teams skip it for AI agents tells you how far ahead the hype has gotten relative to actual safety practices.&lt;/p&gt;
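&lt;p&gt;A minimal sketch of that interception layer. The keyword list and the approval hook are assumptions for illustration, not any particular framework's API:&lt;/p&gt;

```python
import re

DESTRUCTIVE = re.compile(r"^\s*(DELETE|DROP|TRUNCATE|UPDATE)\b", re.IGNORECASE)

class DryRunGate:
    """Middleware sketch: destructive statements must be previewed and
    explicitly approved before they ever reach the database."""

    def __init__(self, execute_fn, preview_fn, approve_fn):
        self._execute = execute_fn   # actually runs the statement
        self._preview = preview_fn   # simulates it, returns a human-readable diff
        self._approve = approve_fn   # shows the diff to a human, returns bool

    def run(self, sql: str):
        if DESTRUCTIVE.match(sql):
            diff = self._preview(sql)  # the "blast radius", before anything fires
            if not self._approve(diff):
                raise PermissionError(f"blocked unapproved destructive op: {sql!r}")
        return self._execute(sql)

# Toy wiring: reads pass straight through, approval always says no.
gate = DryRunGate(
    execute_fn=lambda sql: "executed",
    preview_fn=lambda sql: f"would affect rows matched by: {sql}",
    approve_fn=lambda diff: False,
)
print(gate.run("SELECT * FROM metrics"))  # executed
# gate.run("DELETE FROM users")           # raises PermissionError
```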

&lt;h2&gt;
  
  
  Pattern 2: Principle of Least Privilege — Stop Giving Agents Root Access
&lt;/h2&gt;

&lt;p&gt;Roger O'Donnell, Principal Developer Advocate at Vanderlande, has written extensively about applying the Principle of Least Privilege (PoLP) to automated systems interacting with production infrastructure. His core argument: an agent should only have the minimum permissions necessary to perform its specific task.&lt;/p&gt;

&lt;p&gt;The PocketOS agent was reportedly given OS-level access. It could read, write, delete, and modify anything on the system. For a latency diagnosis task, the agent needed read access to metrics, logs, and maybe query plans. It did not need write access to the database. It absolutely did not need the ability to trigger backup synchronization.&lt;/p&gt;

&lt;p&gt;I scope permissions for every automated system I deploy, and AI agents should be no different. If anything, they need tighter constraints because their actions aren't predetermined. Here's how I think about it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read-only by default.&lt;/strong&gt; An agent investigating an issue gets read access. That's it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write access is task-specific and temporary.&lt;/strong&gt; If the agent needs to restart a service, it gets permission to restart that specific service, and the permission expires after the task completes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Destructive operations require a separate, elevated credential&lt;/strong&gt; that the agent cannot self-escalate to. Full stop.&lt;/li&gt;
&lt;/ul&gt;
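&lt;p&gt;The three tiers above can be sketched in a few lines. The scope names and grant model here are illustrative, not a real IAM API:&lt;/p&gt;

```python
import time

class ScopedGrant:
    """Least-privilege sketch: read-only by default, write grants are
    task-specific and expire, and destructive scopes can never be
    self-granted by the agent."""

    FORBIDDEN = {"db:write", "backup:write", "os:admin"}  # never grantable

    def __init__(self):
        self._scopes = {"metrics:read", "logs:read"}  # default posture
        self._expiry = {}

    def grant(self, scope: str, ttl_seconds: float):
        if scope in self.FORBIDDEN:
            raise PermissionError(f"agents cannot hold {scope}")
        self._scopes.add(scope)
        self._expiry[scope] = time.monotonic() + ttl_seconds

    def allowed(self, scope: str) -> bool:
        exp = self._expiry.get(scope)
        if exp is not None and time.monotonic() > exp:
            self._scopes.discard(scope)  # temporary grant has expired
            return False
        return scope in self._scopes

grant = ScopedGrant()
print(grant.allowed("logs:read"))               # True: read access by default
grant.grant("service:restart:nginx", ttl_seconds=300)
print(grant.allowed("service:restart:nginx"))   # True, but only for 5 minutes
# grant.grant("db:write", 60)                   # raises PermissionError
```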

&lt;p&gt;This is basic security hygiene that &lt;a href="https://www.kunalganglani.com/blog/claude-computer-use-security-risks" rel="noopener noreferrer"&gt;any team working with AI agents at the OS level&lt;/a&gt; should treat as non-negotiable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 3: Human-in-the-Loop for Irreversible Actions
&lt;/h2&gt;

&lt;p&gt;Human-in-the-loop (HITL) gets the most lip service and the least actual implementation. The Databricks engineering blog puts it clearly: for complex or sensitive tasks, a model's outputs should be reviewed and confirmed by a human expert before they are finalized or acted upon.&lt;/p&gt;

&lt;p&gt;The key word is "irreversible." Not every action needs human approval. An agent that restarts a service? Probably fine to auto-approve with logging. An agent that wants to modify, delete, or overwrite data? That needs a human. Every single time.&lt;/p&gt;

&lt;p&gt;I've shipped enough automated pipelines to know what breaks at 3 AM. I've settled on a simple heuristic: &lt;strong&gt;if the action can't be undone with a single command, a human must approve it.&lt;/strong&gt; That's the line between automation that helps and automation that kills.&lt;/p&gt;

&lt;p&gt;The implementation doesn't have to be heavy. A Slack notification with an approve/reject button. A short-lived approval token that expires in 5 minutes. The point isn't bureaucracy. It's a 30-second pause between "the agent decided to do something" and "the thing is done."&lt;/p&gt;
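&lt;p&gt;A sketch of the token approach. The TTL and in-memory storage are illustrative; in practice the pending queue would live outside the agent's process, and the approval would arrive from something like that Slack button:&lt;/p&gt;

```python
import secrets, time

APPROVAL_TTL = 300  # seconds: tokens expire after 5 minutes
_pending = {}

def request_approval(action: str) -> str:
    """Agent side: park an irreversible action and emit a token for a
    human to approve. Nothing executes yet."""
    token = secrets.token_urlsafe(16)
    _pending[token] = {"action": action, "issued": time.monotonic()}
    return token

def execute_if_approved(token: str, human_approved: bool) -> str:
    """Human side: the action runs only with a live, single-use token
    plus an explicit yes."""
    req = _pending.pop(token, None)
    if req is None:
        return "rejected: unknown token"
    if time.monotonic() - req["issued"] > APPROVAL_TTL:
        return "rejected: approval window expired"
    if not human_approved:
        return "rejected: human said no"
    return f"executing: {req['action']}"

t = request_approval("overwrite staging dataset")
print(execute_if_approved(t, human_approved=True))  # executing: overwrite staging dataset
print(execute_if_approved(t, human_approved=True))  # rejected: unknown token (single use)
```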

&lt;h2&gt;
  
  
  Pattern 4: Immutable Backups — The Last Line of Defense
&lt;/h2&gt;

&lt;p&gt;Even with dry runs, least privilege, and human-in-the-loop, things will go wrong. The question is whether you can recover.&lt;/p&gt;

&lt;p&gt;The PocketOS failure was catastrophic because the agent could overwrite the backups. This should never be architecturally possible. Backups should be immutable: once written, they cannot be modified or deleted by any automated process. Including the agent.&lt;/p&gt;

&lt;p&gt;If you're on AWS, S3 Object Lock with compliance mode makes objects undeletable for a retention period. On GCP, retention policies on Cloud Storage do the same. For self-hosted PostgreSQL, tools like pgBackRest (or &lt;a href="https://www.kunalganglani.com/blog/postgresql-backup-tools-compared" rel="noopener noreferrer"&gt;its alternatives&lt;/a&gt;) support repository encryption and retention policies that prevent automated overwrites.&lt;/p&gt;

&lt;p&gt;The principle: your backup infrastructure should live in a completely separate trust domain from your agent's execution environment. The agent should not know where backups are stored, should not have credentials to access them, and should not be able to trigger a sync that touches them.&lt;/p&gt;
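&lt;p&gt;Here's a toy model of that invariant, independent of any cloud provider's actual API. It only demonstrates the rules (write-once, retention-locked, no automated deletes), not real storage:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

class ImmutableBackupStore:
    """Sketch of the immutability invariant: once written, a backup
    cannot be modified or deleted until retention expires, and automated
    principals (like an agent) can never delete it at all."""

    def __init__(self, retention_days: int = 30):
        self._retention = timedelta(days=retention_days)
        self._objects = {}  # name -> (payload, written_at)

    def write(self, name: str, payload: bytes):
        if name in self._objects:
            raise PermissionError("backups are write-once; overwrite denied")
        self._objects[name] = (payload, datetime.now(timezone.utc))

    def delete(self, name: str, principal: str):
        if principal != "human-operator":
            raise PermissionError("automated principals may never delete backups")
        _, written = self._objects[name]
        if datetime.now(timezone.utc) - written < self._retention:
            raise PermissionError("object is inside its retention window")
        del self._objects[name]

store = ImmutableBackupStore()
store.write("db-2026-05-06.dump", b"...")
# store.write("db-2026-05-06.dump", b"")        # raises: write-once
# store.delete("db-2026-05-06.dump", "agent")   # raises: agents can't delete
```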

&lt;blockquote&gt;
&lt;p&gt;If your AI agent can delete your backups, you don't have backups. You have a second copy of your production data with the same single point of failure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Pattern 5: Observability — Audit the Reasoning, Not Just the Actions
&lt;/h2&gt;

&lt;p&gt;This last pattern goes beyond traditional logging. When a deterministic script fails, you read the log and see exactly what happened. When an AI agent fails, you need to understand &lt;em&gt;why it decided&lt;/em&gt; to do what it did.&lt;/p&gt;

&lt;p&gt;I think most teams get this wrong. They log actions but not reasoning. In the PocketOS case, even if the company had logs showing the agent ran a sync command, they'd have no idea &lt;em&gt;why&lt;/em&gt; the agent chose that action over safer alternatives. Without the reasoning trace, you can't fix the prompt, the architecture, or the permission model. You just know it happened.&lt;/p&gt;

&lt;p&gt;Effective agent observability needs three layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning trace:&lt;/strong&gt; What was the agent's chain of thought? What did it consider and reject? This is your "flight recorder" for reconstructing the decision.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action log:&lt;/strong&gt; What commands did it execute, in what order, what were the return values? Standard stuff, but it needs to be tamper-proof. Written to a separate system the agent can't touch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Outcome verification:&lt;/strong&gt; After each action, did the system state match what the agent expected? If not, the agent should halt. Not escalate. Halt.&lt;/li&gt;
&lt;/ul&gt;
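&lt;p&gt;The third layer is the easiest to wire in and the most often skipped. Here's a minimal sketch, built around a hypothetical &lt;code&gt;run_action&lt;/code&gt; wrapper: log the command, its return value, the agent's predicted state, and the observed state, then halt on any mismatch.&lt;/p&gt;

```python
import json
import time

class OutcomeMismatch(RuntimeError):
    """Raised to halt the agent when reality diverges from its prediction."""

def run_action(action, execute, observe, audit_log):
    """
    action:    dict with 'command' and 'expected' post-action state
    execute:   callable that performs the side effect
    observe:   callable that returns the actual post-action state
    audit_log: append-only sink the agent has no credentials to modify
    """
    entry = {"ts": time.time(), "command": action["command"]}
    entry["return_value"] = execute()
    entry["observed"] = observe()
    entry["expected"] = action["expected"]
    # Log before verifying, so the failure itself is on the record.
    audit_log.append(json.dumps(entry, default=str))
    if entry["observed"] != entry["expected"]:
        raise OutcomeMismatch(
            f"halting: expected {entry['expected']!r}, got {entry['observed']!r}"
        )
    return entry["return_value"]
```

&lt;p&gt;The key design choice is that the mismatch path raises instead of retrying or escalating: an agent whose model of the world is already wrong should not be given another turn.&lt;/p&gt;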

&lt;h2&gt;
  
  
  The Uncomfortable Truth About AI Agent Safety
&lt;/h2&gt;

&lt;p&gt;None of these five patterns are new. Dry runs, least privilege, human approval gates, immutable backups, audit trails. These are established engineering practices that predate AI by decades. The uncomfortable truth is that in the rush to ship AI agents, teams are skipping the same safety fundamentals they'd never skip for a database migration script.&lt;/p&gt;

&lt;p&gt;I think we're in a window where the gap between AI agent capability and AI agent safety infrastructure is at its widest. Agents are getting more powerful every quarter. The tooling to constrain them is lagging badly. And the incentive to ship fast means the patterns I've described here get filed under "we'll add that later."&lt;/p&gt;

&lt;p&gt;"Later" is how you get a 23-minute company extinction event.&lt;/p&gt;

&lt;p&gt;If you're deploying AI agents that touch production infrastructure, here's my challenge: before you give an agent a single new permission, run through these five patterns and ask which ones you've actually implemented. Not planned. Not on the roadmap. Implemented, tested, and verified. If the answer is fewer than three, you're one misdiagnosis away from your own PocketOS story.&lt;/p&gt;

&lt;p&gt;The agents are getting smarter. The question is whether your guardrails are keeping up.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/ai-agent-failure-production-prevention?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=ai-agent-failure-production-prevention" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aiagents</category>
      <category>aisafety</category>
      <category>postmortem</category>
      <category>devops</category>
    </item>
    <item>
      <title>Thunderbolt 5 Docking Station Review: I Tested the Ugreen Revodok Max 213 as a Developer Hub [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Wed, 29 Apr 2026 12:51:59 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/thunderbolt-5-docking-station-review-i-tested-the-ugreen-revodok-max-213-as-a-developer-hub-2026-5bg5</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/thunderbolt-5-docking-station-review-i-tested-the-ugreen-revodok-max-213-as-a-developer-hub-2026-5bg5</guid>
      <description>&lt;h1&gt;
  
  
  Thunderbolt 5 Docking Station Review: I Tested the Ugreen Revodok Max 213 as a Developer Hub [2026]
&lt;/h1&gt;

&lt;p&gt;Four dongles and a power brick. That's what my desk looked like before this Thunderbolt 5 docking station showed up. Two 4K monitors, an external NVMe drive, a mechanical keyboard, a webcam, and a laptop that needs charging. It was a mess of cables that I'd somehow normalized over the past two years. When the Ugreen Revodok Max 213 launched as one of the first Thunderbolt 5 docking stations you could actually buy, I wanted to test whether 80 Gbps of bandwidth could genuinely consolidate my entire developer workstation into a single connection.&lt;/p&gt;

&lt;p&gt;Two weeks in, I have opinions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Thunderbolt 5 and Why Should Developers Care?
&lt;/h2&gt;

&lt;p&gt;Thunderbolt 5 is Intel's latest connectivity standard, and this one isn't an incremental bump. It doubles bi-directional bandwidth to 80 Gbps over Thunderbolt 4's 40 Gbps. But the number that actually matters for multi-monitor setups is 120 Gbps. That's the "Bandwidth Boost" mode, which asymmetrically shoves extra throughput toward display output when your workflow demands it.&lt;/p&gt;

&lt;p&gt;Per &lt;a href="https://www.intel.com/content/www/us/en/newsroom/news/intel-introduces-thunderbolt-5.html" rel="noopener noreferrer"&gt;Intel's official announcement&lt;/a&gt;, the standard supports multiple 8K displays, three 4K displays at 144Hz, and up to 240W of power delivery. It also doubles PCI Express data throughput to 64 Gbps. That last number is the one external NVMe storage and eGPU users should circle in red.&lt;/p&gt;

&lt;p&gt;Jason Ziller, VP and GM of the Client Connectivity Division at Intel, has emphasized that Thunderbolt 5 is built on USB4 Version 2.0 and retains full backward compatibility with Thunderbolt 4, Thunderbolt 3, and USB-C. This matters more than it sounds. Your existing peripherals don't become paperweights the day you upgrade.&lt;/p&gt;

&lt;p&gt;For developers, the pitch is simple: faster external storage for large repo operations, enough display bandwidth to drive triple 4K monitors without compression artifacts, and enough power delivery to charge even the hungriest workstation laptops. One cable. I've been skeptical of that promise before. This time it actually delivered.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ugreen Revodok Max 213: What You Actually Get
&lt;/h2&gt;

&lt;p&gt;The Ugreen Revodok Max 213 is a 13-port Thunderbolt 5 dock and one of the first products in this category to ship. As Farrhad Noor at Notebookcheck &lt;a href="https://www.notebookcheck.net/Ugreen-unveils-its-first-Thunderbolt-5-dock.792823.0.html" rel="noopener noreferrer"&gt;reported when the dock was unveiled&lt;/a&gt;, the port selection is aggressive for a single dock:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Thunderbolt 5 upstream&lt;/strong&gt; (to your laptop) with 140W Power Delivery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thunderbolt 5 downstream&lt;/strong&gt; for daisy-chaining&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dual HDMI 2.1 and dual DisplayPort 2.1&lt;/strong&gt; outputs for up to triple 4K at 120Hz or dual 8K at 60Hz&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2.5 Gigabit Ethernet&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple USB-A and USB-C&lt;/strong&gt; ports&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;CFexpress and SD card slots&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;140W of power delivery. That's the number I want to highlight. My previous Thunderbolt 4 dock topped out at 96W, which meant my laptop would slowly drain during intensive compile jobs even while supposedly "charging." After years of running Docker builds alongside video calls and watching my battery tick down, I can tell you that power delivery headroom isn't a luxury. It's a workflow requirement.&lt;/p&gt;

&lt;p&gt;The build quality is solid aluminum and heavier than I expected. It sits on my desk without budging, which sounds trivial until you've had a cheap plastic dock yanked off the edge by a snagged cable. I've lost one that way. Not fun.&lt;/p&gt;

&lt;h2&gt;
  
  
  How a Thunderbolt 5 Dock Performs for Developer Workflows
&lt;/h2&gt;

&lt;p&gt;Spec sheets lie by omission. Here's what actually happened when I put the Revodok Max 213 through my daily workload: dual 4K monitors at 60Hz, an external 4TB NVMe SSD for project files and Docker volumes, wired Ethernet, and USB peripherals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Display performance.&lt;/strong&gt; Both 4K monitors ran flawlessly at 60Hz with zero compression artifacts. With Thunderbolt 4, I'd occasionally notice slight color banding on gradients when both displays were active during heavy data transfers. Gone. The 120 Gbps Bandwidth Boost mode gives displays room to breathe even when the data lanes are busy. If you keep a design mockup on one screen while coding on another, this matters more than you'd think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage throughput.&lt;/strong&gt; This is where the 64 Gbps PCIe Gen 4 tunneling earns its keep. Cloning a large monorepo from an external NVMe was noticeably faster than my Thunderbolt 4 setup. I've shipped enough features from repositories with hundreds of thousands of files to know that shaving minutes off &lt;code&gt;git clone&lt;/code&gt; and &lt;code&gt;docker build&lt;/code&gt; operations compounds hard across a workday. The theoretical ceiling for NVMe over Thunderbolt 5 is roughly 6,000 MB/s. That's in the range of direct PCIe slots. A first for an external connection.&lt;/p&gt;
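&lt;p&gt;The 6,000 MB/s figure is easy to sanity-check: 64 Gbps of PCIe tunneling works out to 8,000 MB/s raw, and an assumed 25% protocol and controller overhead (an assumption, not a measured number) lands right at that ceiling.&lt;/p&gt;

```python
# Back-of-envelope check of the NVMe-over-Thunderbolt-5 ceiling.
pcie_gbps = 64                       # TB5 PCIe tunneling bandwidth
raw_mb_s = pcie_gbps * 1000 / 8      # bits to megabytes: 8,000 MB/s
overhead = 0.25                      # assumed protocol/controller overhead
usable_mb_s = raw_mb_s * (1 - overhead)
print(usable_mb_s)                   # 6000.0
```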

&lt;p&gt;&lt;strong&gt;Charging.&lt;/strong&gt; 140W keeps a MacBook Pro topped off through sustained workloads. No more trickle drain during builds. This is one of those things where the boring answer is actually the right one. I stopped thinking about battery. That's it. That's the improvement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network.&lt;/strong&gt; 2.5 Gigabit Ethernet. Finally. Wi-Fi is fine for Slack. It's not fine for pulling multi-gigabyte container images or syncing large datasets. Having spent years on teams where CI pipeline speed was directly tied to network throughput, I'll take wired every single time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The real test of any dock isn't peak performance. It's whether you forget it exists. After the first three days, I stopped noticing the Revodok Max 213. That's the highest compliment I can give peripheral hardware.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Thunderbolt 5 vs Thunderbolt 4: Is the Upgrade Worth It?
&lt;/h2&gt;

&lt;p&gt;I'll be direct. If you're running a single external monitor and a keyboard, Thunderbolt 4 is perfectly fine. Don't let anyone upsell you.&lt;/p&gt;

&lt;p&gt;But if your setup looks anything like mine — dual or triple monitors, external fast storage, wired networking, and a power-hungry laptop — the Thunderbolt 5 upgrade is real. Here's the comparison that matters:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Thunderbolt 4&lt;/th&gt;
&lt;th&gt;Thunderbolt 5&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Bandwidth&lt;/td&gt;
&lt;td&gt;40 Gbps&lt;/td&gt;
&lt;td&gt;80 Gbps (120 Gbps Boost)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PCIe Data&lt;/td&gt;
&lt;td&gt;32 Gbps&lt;/td&gt;
&lt;td&gt;64 Gbps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max Displays&lt;/td&gt;
&lt;td&gt;Dual 4K 60Hz&lt;/td&gt;
&lt;td&gt;Triple 4K 144Hz or Dual 8K 60Hz&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Power Delivery&lt;/td&gt;
&lt;td&gt;Up to 100W&lt;/td&gt;
&lt;td&gt;Up to 240W&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;USB Standard&lt;/td&gt;
&lt;td&gt;USB4 v1&lt;/td&gt;
&lt;td&gt;USB4 v2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The bandwidth doubling isn't theoretical. It directly translates to fewer compromises when you're driving multiple peripherals at once. With Thunderbolt 4, pushing dual 4K displays while transferring large files created a noticeable bottleneck. I'd see it in stuttery display refresh during big file copies. Thunderbolt 5 eliminates that contention.&lt;/p&gt;

&lt;p&gt;As &lt;a href="https://www.cablematters.com/blog/Thunderbolt/what-is-thunderbolt-5" rel="noopener noreferrer"&gt;CableMatters details in their technical breakdown&lt;/a&gt;, the Bandwidth Boost feature dynamically allocates up to 120 Gbps toward display output when needed. The asymmetric approach makes sense when you think about it. Most developers send way more data to displays than they receive from peripherals at any given moment.&lt;/p&gt;

&lt;p&gt;The price gap is significant, though. Thunderbolt 5 docks are launching in the $350-500+ range compared to mature Thunderbolt 4 docks at $150-250. That's a real premium. My take: if you're buying a dock to last 4-5 years alongside a new laptop, pay it. If your laptop doesn't even have a Thunderbolt 5 port yet, wait. The dock will work in Thunderbolt 4 fallback mode, but you won't get the bandwidth benefits, and you'll have overpaid for something you can't use.&lt;/p&gt;

&lt;p&gt;If you're evaluating other hardware investments alongside this, I covered similar early-adopter math in my &lt;a href="https://www.kunalganglani.com/blog/framework-vs-macbook-right-repair" rel="noopener noreferrer"&gt;Framework vs MacBook right-to-repair comparison&lt;/a&gt;. The key question is always whether the hardware serves you for years, not months.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Should Buy a Thunderbolt 5 Dock Right Now?
&lt;/h2&gt;

&lt;p&gt;I'm going to be specific because vague buying advice helps nobody.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Buy now if:&lt;/strong&gt; You have a Thunderbolt 5 laptop (Intel Core Ultra 200 series, or an Apple Silicon Mac with an M4 Pro or M4 Max chip), you run dual or triple monitors, you work with large files or external storage regularly, and you want a single-cable desk. The Ugreen Revodok Max 213 is a strong first-gen option with a genuinely useful port selection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wait if:&lt;/strong&gt; Your laptop only has Thunderbolt 4, you use a single monitor, or your current dock isn't causing you pain. Backward compatibility means a TB5 dock will work with your TB4 machine, but you'll cap out at TB4 speeds. That's paying a premium for future-proofing. Sometimes that makes sense. Usually it doesn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skip entirely if:&lt;/strong&gt; You're a laptop-only user with no external displays and minimal peripherals. A $40 USB-C hub still handles that perfectly.&lt;/p&gt;

&lt;p&gt;The Thunderbolt 5 dock market is young. Ugreen moved early, and competitors like Razer, CalDigit, and Alogic are bringing their own options. As Ganesh T S, Senior Editor at AnandTech, &lt;a href="https://www.anandtech.com/show/21217/ces-2024-thunderbolt-5-docks-and-accessories-start-to-arrive" rel="noopener noreferrer"&gt;noted when covering the CES announcements&lt;/a&gt;, the first wave of Thunderbolt 5 accessories signals a healthy competitive market. Prices will drop. Port selections will get refined. But the underlying technology is ready now.&lt;/p&gt;

&lt;p&gt;For more on how hardware choices ripple through developer productivity, my &lt;a href="https://www.kunalganglani.com/blog/ddr6-ram-prices-2026" rel="noopener noreferrer"&gt;DDR6 RAM pricing breakdown&lt;/a&gt; covers similar early-adopter economics.&lt;/p&gt;

&lt;h2&gt;
  
  
  One Cable. For Real This Time.
&lt;/h2&gt;

&lt;p&gt;I've been chasing the single-cable desk for years. Thunderbolt 3 got close but couldn't reliably drive dual 4K and charge at the same time. Thunderbolt 4 improved reliability but still had bandwidth ceilings that showed up during heavy workloads. Thunderbolt 5 is the first standard where I genuinely don't feel the limits.&lt;/p&gt;

&lt;p&gt;The Ugreen Revodok Max 213 isn't perfect. It's first-gen hardware at a premium price. The fan spins up audibly under sustained load, which is annoying in a quiet room. And the CFexpress slots feel like they're targeting videographers more than developers. But the core promise — plug in one cable, get dual 4K, fast storage, 2.5GbE networking, and 140W charging — works exactly as advertised.&lt;/p&gt;

&lt;p&gt;If you're building your next desk setup and your laptop supports Thunderbolt 5, this category of dock should be at the top of your list. I have a drawer full of adapters and dead dongles that proves how long we've been waiting for this to actually work. It works now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Does a Thunderbolt 5 dock work with a Thunderbolt 4 laptop?
&lt;/h3&gt;

&lt;p&gt;Yes. Thunderbolt 5 is fully backward compatible with Thunderbolt 4, Thunderbolt 3, and USB-C. Your TB4 laptop will connect and work, but it will operate at Thunderbolt 4 speeds (40 Gbps max). You won't unlock the full 80 Gbps bandwidth or features like Bandwidth Boost until both the laptop and dock support Thunderbolt 5.&lt;/p&gt;

&lt;h3&gt;
  
  
  How many monitors can a Thunderbolt 5 dock support?
&lt;/h3&gt;

&lt;p&gt;Thunderbolt 5 can drive up to three 4K displays at 144Hz, or two 8K displays at 60Hz simultaneously. The exact configuration depends on your dock's available display ports. The Ugreen Revodok Max 213, for example, has four display outputs (dual HDMI 2.1 plus dual DisplayPort 2.1), giving you flexible multi-monitor options.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is the Ugreen Revodok Max 213 compatible with Mac and Windows?
&lt;/h3&gt;

&lt;p&gt;Yes. Thunderbolt is a universal standard that works across macOS, Windows, and Linux. The Revodok Max 213 works with any laptop that has a Thunderbolt or USB-C port. For full Thunderbolt 5 performance, you need a laptop with a Thunderbolt 5 controller, currently found in Intel Core Ultra 200-series machines and Apple Silicon Macs with M4 Pro or M4 Max chips, with broader support expected throughout 2026.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do I need a special cable for Thunderbolt 5?
&lt;/h3&gt;

&lt;p&gt;Yes. Thunderbolt 5 requires new cables rated for 80 Gbps operation. Older Thunderbolt 4 cables will physically connect but will limit you to Thunderbolt 4 speeds. Most Thunderbolt 5 docks, including the Ugreen Revodok Max 213, ship with a compatible cable in the box. When buying additional cables, look for ones explicitly rated for Thunderbolt 5 or USB4 v2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is a Thunderbolt 5 dock worth the price over Thunderbolt 4?
&lt;/h3&gt;

&lt;p&gt;It depends on your setup. If you run multiple high-resolution monitors, work with large files on external storage, and want maximum charging power through a single cable, the upgrade is meaningful. Thunderbolt 5 docks currently cost $350-500+ compared to $150-250 for Thunderbolt 4. For single-monitor setups with light peripheral needs, Thunderbolt 4 remains a better value.&lt;/p&gt;

&lt;h3&gt;
  
  
  What laptops currently support Thunderbolt 5?
&lt;/h3&gt;

&lt;p&gt;As of mid-2026, Thunderbolt 5 is available on select laptops with Intel Core Ultra 200-series processors, including models from Lenovo, Dell, and Razer, as well as on Apple Silicon MacBook Pro models with M4 Pro and M4 Max chips. Most premium laptops launching in late 2026 and beyond are expected to include Thunderbolt 5.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/thunderbolt-5-dock-review-ugreen?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=thunderbolt-5-dock-review-ugreen" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>thunderbolt5</category>
      <category>dockingstation</category>
      <category>developerhardware</category>
      <category>ugreen</category>
    </item>
    <item>
      <title>Open Source Sustainability Crisis: What Redis, HashiCorp, and a Backdoor Reveal About 2026</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Tue, 28 Apr 2026 16:08:58 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/open-source-sustainability-crisis-what-redis-hashicorp-and-a-backdoor-reveal-about-2026-3bip</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/open-source-sustainability-crisis-what-redis-hashicorp-and-a-backdoor-reveal-about-2026-3bip</guid>
      <description>&lt;p&gt;Redis abandoned its open source license. HashiCorp locked down Terraform. A lone, burned-out maintainer nearly let attackers backdoor half the internet's Linux servers. If you've been paying attention to the open source sustainability crisis over the past two years, you've probably felt the same uneasy question I have: is the foundation we've all been building on starting to crack?&lt;/p&gt;

&lt;p&gt;I don't think open source is dying. But the economic and social model that sustained it for three decades is under real, structural pressure. And if we keep pretending the old bargain still works, we're going to lose something irreplaceable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is the Open Source Sustainability Crisis?
&lt;/h2&gt;

&lt;p&gt;The open source sustainability crisis refers to the growing gap between the enormous commercial value extracted from open source software and the resources available to the people who actually maintain it. Companies worth billions run on libraries maintained by volunteers who receive little or no compensation. When those maintainers burn out, switch licenses, or simply walk away, the consequences ripple across the entire software industry.&lt;/p&gt;

&lt;p&gt;This stopped being theoretical a while ago. In 2024 and into 2025, a string of high-profile incidents turned that background anxiety into something nobody could ignore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Did Redis Change Its License?
&lt;/h2&gt;

&lt;p&gt;In March 2024, Redis made a move that rattled the entire developer ecosystem. Under CEO Rowan Trollope, the company switched from the permissive 3-Clause BSD License to a dual-license model: the Server Side Public License (SSPL) and the Redis Source Available License (RSAL). By the Open Source Initiative's definition, Redis was &lt;a href="https://www.theregister.com/2024/03/21/redis_licensing_change/" rel="noopener noreferrer"&gt;no longer open source&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The business logic was hard to fault. Cloud providers — AWS, Google Cloud, Azure — had been offering managed Redis services for years, pulling in serious revenue from the project without contributing much back to development. Redis Labs was doing the expensive, unglamorous work of building and maintaining the software while hyperscalers captured the upside.&lt;/p&gt;

&lt;p&gt;I've seen this pattern play out in smaller ways throughout my career. You build an internal tool, it becomes critical infrastructure, and suddenly a dozen teams depend on it while nobody wants to fund the team maintaining it. Now scale that dynamic to the entire internet.&lt;/p&gt;

&lt;p&gt;Redis wasn't the first to make this move, but it was the most visible. The message was blunt: the "build it open, let anyone profit" model has a ceiling.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Happened With HashiCorp and OpenTofu?
&lt;/h2&gt;

&lt;p&gt;HashiCorp's license change was, in some ways, even more consequential. In August 2023, Armon Dadgar, Co-Founder and CTO of HashiCorp, announced that all HashiCorp products — Terraform, Vault, Consul, Nomad — would move from the Mozilla Public License (MPL 2.0) to the Business Source License (BSL 1.1). The BSL explicitly restricts competitors from offering HashiCorp's software as a competing commercial service.&lt;/p&gt;

&lt;p&gt;The community response was immediate and fierce. Within weeks, a coalition of companies and developers forked Terraform to create what became &lt;a href="https://thenewstack.io/is-open-source-in-crisis-some-say-yes/" rel="noopener noreferrer"&gt;OpenTofu&lt;/a&gt;, now managed under the Linux Foundation. It was the open source equivalent of a union vote: the community decided that if the stewards changed the rules, they'd take the code and govern it themselves.&lt;/p&gt;

&lt;p&gt;Having worked with Terraform extensively in production, I watched this unfold with a mix of admiration and concern. The fork proved the resilience of the open source model — the code can't be taken away if it was freely licensed. But it also proved that the relationship between commercial sponsors and community contributors is way more fragile than anyone wanted to admit.&lt;/p&gt;

&lt;p&gt;Here's what gets lost in the outrage: HashiCorp's position wasn't entirely unreasonable. As one TechTarget analysis noted, the new license still allows most use cases — it's the competitive commercial hosting that's restricted. For the vast majority of developers and companies using Terraform internally, nothing changed. But the &lt;em&gt;principle&lt;/em&gt; changed. And in open source, principles are the whole point.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do Open Source Maintainers Make Money?
&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable truth at the center of all this: most don't.&lt;/p&gt;

&lt;p&gt;A Tidelift survey found that 46% of open source maintainers are unpaid volunteers. Not underpaid. &lt;em&gt;Unpaid&lt;/em&gt;. They maintain software running in production at Fortune 500 companies, powering critical infrastructure, processing billions of dollars in transactions. They do it in their spare time, for free.&lt;/p&gt;

&lt;p&gt;The maintainers who do get paid often earn a fraction of what their work is worth. Some receive sponsorships through GitHub Sponsors or Open Collective, but these rarely amount to a living wage. Others work at companies that allow some percentage of their time for open source — discretionary budget that's easily cut during downturns. A lucky few work at places like Red Hat or Canonical where open source maintenance &lt;em&gt;is&lt;/em&gt; the business model.&lt;/p&gt;

&lt;p&gt;I've contributed to open source projects over the years, and I've also been on the other side — depending heavily on libraries where I had no idea who maintained them or whether they'd still exist next year. If you've ever run &lt;code&gt;npm install&lt;/code&gt; and watched 400 transitive dependencies scroll by, you've implicitly trusted hundreds of strangers to keep doing unpaid work indefinitely. That's a supply chain built on goodwill, not guarantees. It echoes the same fragility I've written about in the context of &lt;a href="https://www.kunalganglani.com/blog/npm-supply-chain-attack-defense" rel="noopener noreferrer"&gt;NPM supply chain attacks&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Was the xz Utils Backdoor?
&lt;/h2&gt;

&lt;p&gt;If the licensing debates were a slow-burning economic crisis, the xz Utils backdoor was an emergency that should have scared the hell out of everyone.&lt;/p&gt;

&lt;p&gt;In March 2024, a Microsoft engineer named Andres Freund noticed unusual slowness in SSH connections and traced it to a deliberately planted backdoor in xz Utils, a compression library used by virtually every Linux distribution. The backdoor was sophisticated — it would have allowed remote code execution on affected systems, potentially compromising millions of servers worldwide.&lt;/p&gt;

&lt;p&gt;The attack vector wasn't a zero-day or a novel vulnerability. It was social engineering aimed at a burned-out maintainer. The original maintainer of xz Utils, Lasse Collin, had been maintaining the project essentially alone for years. A contributor using the pseudonym "Jia Tan" spent roughly two years building trust, making legitimate contributions, and gradually gaining commit access. Other accounts pressured Collin to hand over maintainer responsibilities, citing his slow response times. Textbook social engineering, targeting someone who was clearly overwhelmed.&lt;/p&gt;

&lt;p&gt;This is what open source sustainability looks like when it fails catastrophically. A single person, unpaid and unsupported, maintaining a critical piece of internet infrastructure, targeted precisely &lt;em&gt;because&lt;/em&gt; they were alone and exhausted. CISA and Red Hat both issued emergency advisories. The backdoor was caught essentially by luck — Freund got curious about a 500-millisecond latency regression.&lt;/p&gt;

&lt;p&gt;The xz incident isn't an anomaly. It's the logical endpoint of a system where critical software depends on individual volunteers with no institutional support. It should terrify anyone who builds on open source. Which is everyone. It's the same class of risk I explored when looking at &lt;a href="https://www.kunalganglani.com/blog/ai-slopageddon-open-source-crisis" rel="noopener noreferrer"&gt;how AI-generated slop is degrading open source quality&lt;/a&gt;, but with far more immediate consequences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Open Source Actually Dying?
&lt;/h2&gt;

&lt;p&gt;No. And framing it that way misses the point entirely.&lt;/p&gt;

&lt;p&gt;Open source as a &lt;em&gt;development methodology&lt;/em&gt; is stronger than ever. More code is being written, shared, and collaboratively maintained than at any point in history. Linux, Kubernetes, PostgreSQL, and thousands of other projects continue to thrive. The model works.&lt;/p&gt;

&lt;p&gt;What's breaking is the &lt;em&gt;economic bargain&lt;/em&gt; underneath it. The implicit deal was always: developers contribute code freely, and in return they get reputation, community, and the satisfaction of building something used by millions. For a long time, that was enough. It's not anymore.&lt;/p&gt;

&lt;p&gt;The reasons are structural. Open source won. It became the default infrastructure layer for virtually all software. And when something becomes critical infrastructure, the volunteer model doesn't scale. You can't run the world's databases, web servers, and security libraries on the same model you use for a weekend side project.&lt;/p&gt;

&lt;p&gt;Senior Contributing Editor Steven J. Vaughan-Nichols at The New Stack captured this tension well when covering the wave of license changes — the companies building on open source have created enormous value, but &lt;a href="https://thenewstack.io/is-open-source-in-crisis-some-say-yes/" rel="noopener noreferrer"&gt;the distribution of that value has become untenable&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What I think we're actually seeing is a split. Large, well-funded projects with strong corporate backing — Linux, Kubernetes, Chromium — will keep operating under traditional open source licenses. Smaller projects, especially those maintained by individuals or small teams, will increasingly experiment with source-available licenses, dual licensing, or sponsorship models. And commercial open source companies will keep tightening their licenses to protect against cloud provider arbitrage.&lt;/p&gt;

&lt;p&gt;I've seen this pattern before. When a system grows beyond its original design constraints, you don't throw it away. You refactor it. Open source licensing is getting refactored. Like most refactors, it's messy, contentious, and necessary. The same dynamic plays out with tools like &lt;a href="https://www.kunalganglani.com/blog/veracrypt-2026-security-review" rel="noopener noreferrer"&gt;VeraCrypt&lt;/a&gt;, where open source projects survive and thrive precisely because their community governance model is sound.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Comes Next
&lt;/h2&gt;

&lt;p&gt;The open source sustainability crisis isn't going to resolve itself through good vibes and GitHub stars. It needs structural solutions.&lt;/p&gt;

&lt;p&gt;Some are already emerging. The Sovereign Tech Fund in Germany is directly funding open source infrastructure maintenance. The Linux Foundation's work with projects like OpenTofu shows that community governance can work at scale. Companies like Sentry have pioneered the Functional Source License — a compromise between fully open and fully proprietary that converts to open source after a set period.&lt;/p&gt;

&lt;p&gt;But the most important change is cultural. If you run a company that depends on open source — and you do — funding the maintainers of your critical dependencies isn't charity. It's supply chain management. The xz Utils backdoor should have made that obvious.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The real crisis isn't that open source is dying. It's that we've been treating a critical piece of global infrastructure like a hobby, and the maintainers who held it together are telling us, in every way they can, that they can't keep doing this alone.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I think we'll look back at 2024-2025 as the period when open source grew up. The idealism isn't gone — it's just being supplemented with pragmatism. The next generation of open source won't be less free. It'll be more honest about what freedom costs to maintain.&lt;/p&gt;

&lt;p&gt;If you're a developer, audit your dependency tree this week. Find the one-person projects your production systems rely on. Figure out what it would take to make sure those projects are still maintained next year. That's not pessimism. That's engineering.&lt;/p&gt;
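&lt;p&gt;A minimal sketch of that audit, assuming you have already pulled maintainer lists for each dependency (for npm packages, running &lt;code&gt;npm view PACKAGE maintainers --json&lt;/code&gt; per package is one way to collect them). The package names below are made up:&lt;/p&gt;

```python
# Bus-factor audit sketch over pre-fetched maintainer data.
# The dict below is illustrative, not real data; in practice you would
# populate it from your registry (e.g. "npm view PACKAGE maintainers --json").

def bus_factor_one(maintainers_by_pkg):
    """Return packages maintained by a single person, sorted by name."""
    return sorted(
        pkg for pkg, maintainers in maintainers_by_pkg.items()
        if len(maintainers) == 1
    )

deps = {
    "left-pad-ish": ["alice"],          # hypothetical single-maintainer dep
    "big-framework": ["bob", "carol"],  # hypothetical team-maintained dep
}
print(bus_factor_one(deps))  # ['left-pad-ish']
```

&lt;p&gt;Every name that comes back is a project whose continued maintenance next year is worth a conversation, or a sponsorship.&lt;/p&gt;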




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/open-source-sustainability-crisis?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=open-source-sustainability-crisis" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>softwaresustainability</category>
      <category>developercommunity</category>
      <category>licensing</category>
    </item>
    <item>
      <title>Vibe Coding Tech Debt: How to Audit and Refactor AI-Generated Code Before It Destroys Your Codebase [2026]</title>
      <dc:creator>Kunal</dc:creator>
      <pubDate>Tue, 28 Apr 2026 12:48:45 +0000</pubDate>
      <link>https://dev.to/kunal_d6a8fea2309e1571ee7/vibe-coding-tech-debt-how-to-audit-and-refactor-ai-generated-code-before-it-destroys-your-codebase-1m4c</link>
      <guid>https://dev.to/kunal_d6a8fea2309e1571ee7/vibe-coding-tech-debt-how-to-audit-and-refactor-ai-generated-code-before-it-destroys-your-codebase-1m4c</guid>
      <description>&lt;p&gt;Andrej Karpathy coined the term "vibe coding" in early 2025 to describe a new way of building software: you tell the AI what you want, accept whatever it spits out, and ship it if it seems to work. It was half-joke, half-prophecy. A year later, vibe coding tech debt is quietly wrecking professional codebases, and most teams still don't have a playbook for dealing with it.&lt;/p&gt;

&lt;p&gt;I've spent the last several months reviewing pull requests where the majority of the code was AI-generated. The pattern is always the same: the code looks clean, passes a basic smoke test, and quietly introduces problems that don't surface for weeks. This isn't hypothetical. I'm seeing it across every team that's adopted AI coding assistants without adjusting their quality gates.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Vibe Coding and Why Does It Create Tech Debt?
&lt;/h2&gt;

&lt;p&gt;Vibe coding is the practice of accepting AI-generated code based on whether it "feels right" rather than whether you deeply understand what it does. You prompt Copilot or Claude, get something back that compiles and appears to handle your use case, and move on. The term captures a real behavioral shift: developers are increasingly acting as approvers of code rather than authors of it.&lt;/p&gt;

&lt;p&gt;As Matthew Tyson at &lt;a href="https://www.infoworld.com/article/3715123/is-ai-assisted-vibe-coding-the-new-technical-debt.html" rel="noopener noreferrer"&gt;InfoWorld&lt;/a&gt; puts it, the code "may look correct and function for simple cases but is not well-understood by the developer who implemented it." That's the critical distinction. Traditional tech debt comes from conscious shortcuts. You know you're cutting corners, and you accept the tradeoff. Vibe coding tech debt is different. You don't even know you've incurred it.&lt;/p&gt;

&lt;p&gt;Martin Fowler has long described technical debt as "cruft" that makes future development harder. AI-generated code accelerates the accumulation of this cruft because it can produce plausible-looking implementations at a pace no human could match. A developer using GitHub Copilot is &lt;a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/" rel="noopener noreferrer"&gt;reportedly 55% faster&lt;/a&gt; at completing tasks. But speed without comprehension is just debt with a shorter repayment window.&lt;/p&gt;

&lt;p&gt;Gartner predicts that by 2028, 75% of enterprise software engineers will use AI coding assistants. So vibe coding isn't a niche problem. It's about to become the default failure mode for the entire industry.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 3 Failure Patterns of Vibe-Coded Applications
&lt;/h2&gt;

&lt;p&gt;After auditing dozens of AI-heavy codebases over the past year, I've seen vibe coding failures cluster into three patterns. If you're leading a team that uses AI assistants, these are the ones that'll bite you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Delusion of Functionality.&lt;/strong&gt; Michele Riva at InfoWorld describes this as code that creates &lt;a href="https://www.infoworld.com/article/3709219/how-to-manage-the-technical-debt-created-by-generative-ai.html" rel="noopener noreferrer"&gt;"delusions of functionality"&lt;/a&gt;. I've seen this firsthand: an AI-generated authentication flow that handled the happy path perfectly but silently failed on token refresh, leaving users in a broken state that only surfaced under specific timing conditions. The code looked professional. It had comments. It was wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Cargo-Culted Patterns.&lt;/strong&gt; AI models generate code based on statistical patterns in training data. They'll apply design patterns whether or not they're appropriate. I reviewed a service last quarter where the AI had implemented a full event sourcing architecture for what was essentially a CRUD app with three endpoints. The developer accepted it because it "looked like production-quality code." It was wildly over-engineered. The team spent two sprints unwinding it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The Testing Blind Spot.&lt;/strong&gt; This one keeps me up at night. When you write code yourself, you have an intuitive sense of where the edge cases are. When AI writes it, that intuition doesn't transfer. You end up with code that's technically tested but only along the paths the AI "thought" about. I've written before about how &lt;a href="https://www.kunalganglani.com/blog/ai-generated-code-maintainability-crisis" rel="noopener noreferrer"&gt;AI-generated code has a maintainability crisis&lt;/a&gt;. This testing gap is a huge part of why.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The scariest vibe-coded bugs aren't the ones that crash. They're the ones that silently produce wrong results for months before anyone notices.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to Audit Vibe-Coded Applications
&lt;/h2&gt;

&lt;p&gt;If your team has been shipping AI-generated code for the past six to twelve months without specific quality controls for it, you've almost certainly got vibe coding debt in production right now. Here's how I approach auditing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with the dependency graph, not the code.&lt;/strong&gt; AI-generated code loves to import libraries. Before reading a single function, I map the dependency tree and ask: does every dependency here earn its place? I've found entire npm packages pulled in for a single utility function that could be a three-line helper. Ripping those out is the easiest win you'll get. Smaller attack surface, smaller bundles, fewer supply chain risks.&lt;/p&gt;
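&lt;p&gt;For a Python codebase, the "does it earn its place" check can be sketched by diffing declared dependencies against what the code actually imports. The package names here are illustrative; a real audit would read the manifest and scan every source file:&lt;/p&gt;

```python
import ast

# Sketch: compare declared dependencies with the imports actually present
# in the source. Declarations would come from requirements.txt or
# package.json in a real audit; the literals below are illustrative.

def imported_top_level(source):
    """Top-level module names imported by a piece of Python source."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                names.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

declared = {"requests", "leftpad", "numpy"}   # from your manifest
used = imported_top_level("import requests\nfrom numpy import array\n")
unused = declared - used
print(sorted(unused))  # ['leftpad']
```

&lt;p&gt;Anything in the "unused" set is a candidate for removal; anything used in exactly one place is a candidate for a three-line helper.&lt;/p&gt;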

&lt;p&gt;&lt;strong&gt;Look for understanding gaps.&lt;/strong&gt; Pull the commit history and identify PRs where large blocks of code were added in single commits with minimal discussion. Then sit down with the developer who merged it. Ask them to explain the error handling strategy. Ask what happens when the database connection drops mid-transaction. If they can't answer confidently, you've found vibe-coded debt. This conversation is uncomfortable, but it's necessary.&lt;/p&gt;
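&lt;p&gt;A rough way to shortlist those PRs before the conversation is to scan for commits that land unusually large amounts of code in one shot, a common signature of pasted AI output. This assumes the output shape of &lt;code&gt;git log --numstat --format=%H&lt;/code&gt;; the sample log and the threshold are illustrative:&lt;/p&gt;

```python
# Sketch: flag commits whose total added lines exceed a threshold.
# Input mimics "git log --numstat --format=%H" output; SAMPLE is made up.

SAMPLE = """\
a1b2c3d
120\t4\tsrc/auth.py
980\t0\tsrc/generated_client.py
e4f5a6b
12\t3\tsrc/util.py
"""

def large_commits(log_text, threshold=500):
    """Return commit ids whose total added lines exceed threshold."""
    added = {}
    current = None
    for line in log_text.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():
            # numstat line: added, deleted, path
            added[current] = added.get(current, 0) + int(parts[0])
        elif line.strip() and "\t" not in line:
            # a bare line is the next commit id (binary-file "-" rows skipped)
            current = line.strip()
    return [c for c, n in added.items() if n > threshold]

print(large_commits(SAMPLE))  # ['a1b2c3d']
```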

&lt;p&gt;&lt;strong&gt;Run mutation testing.&lt;/strong&gt; Standard test coverage metrics are nearly useless for catching vibe coding problems. AI-generated tests often mirror the same assumptions as the AI-generated code. It's the blind leading the blind. Mutation testing tools like Stryker or PIT modify your code and check whether tests catch the mutations. If your mutation score is dramatically lower than your line coverage, that's the signature of a vibe-coded test suite.&lt;/p&gt;
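&lt;p&gt;The principle behind those tools fits in a few lines. This toy is not Stryker or PIT, just the core idea: flip one operator at a time, rerun the tests, and count how many mutants the suite kills. The function and test names are illustrative:&lt;/p&gt;

```python
# Toy mutation-testing sketch: mutate the code, rerun the tests,
# and see whether the suite notices.

SOURCE = "def price(subtotal, tax):\n    return subtotal + tax\n"
MUTATIONS = [("+", "-"), ("+", "*")]  # operator swaps to try

def mutation_score(source, test):
    """Fraction of mutants the test suite kills."""
    killed = 0
    for old, new in MUTATIONS:
        namespace = {}
        exec(source.replace(old, new, 1), namespace)  # build the mutant
        try:
            test(namespace["price"])
        except AssertionError:
            killed += 1  # the suite caught this mutant
    return killed / len(MUTATIONS)

def weak_test(price):   # a vibe-coded suite: passes for every mutant
    assert price(0, 0) == 0

def strong_test(price): # distinguishes +, -, and *
    assert price(10, 2) == 12

print(mutation_score(SOURCE, weak_test))    # 0.0
print(mutation_score(SOURCE, strong_test))  # 1.0
```

&lt;p&gt;Both suites report 100% line coverage of &lt;code&gt;price&lt;/code&gt;; only the mutation score tells them apart. That gap is exactly what the real tools measure at scale.&lt;/p&gt;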

&lt;p&gt;&lt;strong&gt;Audit the security boundaries.&lt;/strong&gt; This is where vibe coding gets genuinely dangerous. I recently wrote about the &lt;a href="https://www.kunalganglani.com/blog/vibe-coding-security-audit-nightmares" rel="noopener noreferrer"&gt;security nightmares I found in vibe-coded applications&lt;/a&gt;, and the patterns keep repeating: improper input validation, hardcoded secrets that slipped through because the AI "example" code included them, and authorization checks that work at the route level but not the data level. Every vibe-coded app needs a focused security review. No exceptions.&lt;/p&gt;
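&lt;p&gt;The route-level vs data-level gap is easy to show in miniature. This sketch uses a made-up invoice store and no particular framework:&lt;/p&gt;

```python
# Sketch of route-level vs data-level authorization. Models and checks
# are illustrative, not from any specific framework.

INVOICES = {1: {"owner": "alice", "total": 40},
            2: {"owner": "bob", "total": 99}}

def get_invoice_route_check_only(user, invoice_id):
    # Route-level check: any authenticated user passes. An attacker who
    # guesses another id reads someone else's record (classic IDOR).
    if user is None:
        raise PermissionError("login required")
    return INVOICES[invoice_id]

def get_invoice_data_check(user, invoice_id):
    # Data-level check: the row itself must belong to the caller.
    if user is None:
        raise PermissionError("login required")
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != user:
        raise PermissionError("not your invoice")
    return invoice

print(get_invoice_route_check_only("alice", 2)["owner"])  # 'bob' -- leak
```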

&lt;h2&gt;
  
  
  How to Refactor Vibe-Coded Systems for Production
&lt;/h2&gt;

&lt;p&gt;Once you've identified the debt, refactoring it requires a different mindset than traditional tech debt cleanup. You can't just pay it down incrementally because you often don't fully understand what the code is doing in the first place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Establish a comprehension baseline.&lt;/strong&gt; Before changing anything, write characterization tests. These document what the code actually does right now, not what it should do. Run the system under realistic load and capture its behavior. This becomes your safety net. If you've ever dealt with the &lt;a href="https://www.kunalganglani.com/blog/software-rewrite-from-scratch-fallacy" rel="noopener noreferrer"&gt;temptation to rewrite from scratch&lt;/a&gt;, you know why this step matters. Refactoring without understanding is just creating new vibe code.&lt;/p&gt;
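&lt;p&gt;A characterization test can be as simple as a golden master: record what the code returns today and fail any refactor that changes observable behavior, whether that behavior was "right" or not. The legacy function here is a stand-in for AI-generated code nobody fully understands:&lt;/p&gt;

```python
# Characterization-test sketch: pin down current behavior before
# refactoring. legacy_discount is an illustrative stand-in.

def legacy_discount(total, code):
    if code == "VIP":
        return round(total * 0.8, 2)
    if total > 100:
        return round(total * 0.95, 2)
    return total

# Golden master: record current outputs for representative inputs.
CASES = [(50, "NONE"), (150, "NONE"), (150, "VIP")]
GOLDEN = {case: legacy_discount(*case) for case in CASES}

def test_characterization():
    # Fails the moment a refactor changes what the code actually does.
    for case, expected in GOLDEN.items():
        assert legacy_discount(*case) == expected

test_characterization()
```

&lt;p&gt;The point is the recording step, not the assertions: once the golden outputs are captured from production-like inputs, you can refactor the internals with a net underneath you.&lt;/p&gt;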

&lt;p&gt;&lt;strong&gt;Refactor in concentric circles.&lt;/strong&gt; Start at the boundaries: API contracts, database schemas, external integrations. Work inward. The boundaries are where vibe-coded assumptions meet reality, and they're where bugs are most likely to cause data corruption or security issues. Lock down the contracts first, then refactor the internals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduce architectural decision records.&lt;/strong&gt; One of the biggest problems with vibe-coded systems is that there's no record of &lt;em&gt;why&lt;/em&gt; things are built the way they are. The developer never made a conscious decision. There was no decision. As you refactor, document every significant choice. Future developers (and future AI assistants) need this context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use AI to fix AI, but with guardrails.&lt;/strong&gt; This isn't ironic. It's practical. AI assistants are excellent at explaining unfamiliar code, suggesting test cases for edge conditions, and identifying dead code paths. The key is using them as analysis tools, not as autonomous code generators. Prompt the AI to explain what a function does and what could go wrong. Then verify its analysis yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Skill for Senior Engineers in 2026
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody's saying about vibe coding: it's not going away. The economics are too compelling. A junior developer with Copilot can scaffold a feature in an hour that would have taken a day. No engineering leader is going to ban AI assistants. The &lt;a href="https://www.kunalganglani.com/blog/ai-generated-code-quality-crisis" rel="noopener noreferrer"&gt;quality crisis in AI-generated code&lt;/a&gt; is real, but the answer isn't to stop using the tools. It's to get dramatically better at catching what they get wrong.&lt;/p&gt;

&lt;p&gt;I've shipped enough systems to know that the hardest engineering work was never writing code. It was understanding code. Reading someone else's implementation, figuring out the assumptions baked into it, and deciding whether those assumptions hold for your context. That's always been the job. AI just made the volume of code-nobody-understands explode by an order of magnitude.&lt;/p&gt;

&lt;p&gt;The most valuable senior engineers right now aren't the fastest coders. They're the ones who can read AI-generated code critically, spot where the vibes don't match reality, and refactor systems that nobody fully understands.&lt;/p&gt;

&lt;p&gt;If you're a senior engineer or engineering leader reading this, here's my challenge: pick one service your team shipped in the last six months that was heavily AI-assisted. Run a mutation testing suite against it. Look at the gap between your line coverage and your mutation score. That gap is the size of your vibe coding debt. I'd bet money it's bigger than you think.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.kunalganglani.com/blog/vibe-coding-tech-debt-audit?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm_campaign=vibe-coding-tech-debt-audit" rel="noopener noreferrer"&gt;kunalganglani.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>aicoding</category>
      <category>techdebt</category>
      <category>codequality</category>
    </item>
  </channel>
</rss>
