<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Soroush Hashemi</title>
    <description>The latest articles on DEV Community by Soroush Hashemi (@hashemi_soroush).</description>
    <link>https://dev.to/hashemi_soroush</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3666135%2F5a2575df-5c5e-44bc-b9c6-2c64d84787db.jpg</url>
      <title>DEV Community: Soroush Hashemi</title>
      <link>https://dev.to/hashemi_soroush</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hashemi_soroush"/>
    <language>en</language>
    <item>
      <title>A Funeral for the Coder</title>
      <dc:creator>Soroush Hashemi</dc:creator>
      <pubDate>Sun, 15 Mar 2026 17:05:00 +0000</pubDate>
      <link>https://dev.to/hashemi_soroush/a-funeral-for-the-coder-320k</link>
      <guid>https://dev.to/hashemi_soroush/a-funeral-for-the-coder-320k</guid>
      <description>&lt;p&gt;The service isn't in a church. It's at my desk, at 2 a.m., in the glow of a single monitor. The only sound is the clack of mechanical switches.&lt;/p&gt;

&lt;p&gt;Today, I turn 30. For 18 of those years — two-thirds of my life — I have been a coder.&lt;/p&gt;

&lt;p&gt;The grief didn't hit all at once. It crept in with every line of auto-generated code that was just... correct. Every tab-completion that finished my thought before I had it. At some point, I stopped writing code and started approving it. I didn't notice the shift until it was already over.&lt;/p&gt;

&lt;p&gt;Tonight, the room is quiet enough to say goodbye. I am holding a funeral for the craft I spent a lifetime learning.&lt;/p&gt;




&lt;p&gt;The Kid in me speaks first.&lt;/p&gt;

&lt;p&gt;He's twelve. He's clutching a worn-out QBasic book to his chest. He doesn't really understand what's happening, but he knows something important is gone.&lt;/p&gt;

&lt;p&gt;"We learned a secret language," he says. "Not everyone could read it. Not everyone wanted to. But those of us who did — we found each other. We were the ones who stayed up building things nobody asked us to build, just because we could. We could type words into a blank screen and make something appear out of nothing. Actual nothing."&lt;/p&gt;

&lt;p&gt;He pauses, looks down at the book.&lt;/p&gt;

&lt;p&gt;"Now everyone speaks it. Or rather, no one needs to. You just ask, and the machine writes it for you. The secret is out. The club is dissolved. I feel alone again."&lt;/p&gt;

&lt;p&gt;He sits down and doesn't say anything else.&lt;/p&gt;




&lt;p&gt;The Craftsman goes next.&lt;/p&gt;

&lt;p&gt;He's older. He stares at his hands like they've forgotten what they're for.&lt;/p&gt;

&lt;p&gt;"Code was a language between us," he says. "Not the machine's language. Ours. The way you named a variable told me how you thought. The way you structured a module told me what you cared about. A comment that just said // TODO: fix this properly — I could feel the 1 a.m. frustration behind it. A perfectly reduced function — I could feel the pride."&lt;/p&gt;

&lt;p&gt;He pauses.&lt;/p&gt;

&lt;p&gt;"And I spoke back. The way I wrote code was the way I showed up. My taste was in there. My stubbornness. My elegance on good days, my exhaustion on bad ones. When someone reviewed my pull request, they weren't just reading logic. They were reading me."&lt;/p&gt;

&lt;p&gt;His voice cracks.&lt;/p&gt;

&lt;p&gt;"Now every codebase reads the same. Clean. Consistent. Anonymous. The machine writes it, and it's fine. It works. But nobody is in there anymore. Code used to be how I got to know people. Really know them. And how they got to know me. That's gone. And I don't know where else to find it."&lt;/p&gt;




&lt;p&gt;The Detective leans against the back wall, arms crossed. She's been quiet this whole time.&lt;/p&gt;

&lt;p&gt;"I lived for the hunt," she says. "Three days into a race condition across four services. No error message. No way to reproduce it. Just a system that corrupted data at random and nobody knew why."&lt;/p&gt;

&lt;p&gt;She pushes off the wall.&lt;/p&gt;

&lt;p&gt;"You don't sleep during those hunts. You don't eat right. You just pull on threads. You read logs until the lines blur. You add a print statement, deploy, wait, watch. Nothing. You add another. Nothing. You start talking to yourself. You start suspecting everything. And then — at some unholy hour, in some file you've read forty times — you see it. One wrong assumption. One misplaced call. And the whole thing unravels in your hands like a knot."&lt;/p&gt;

&lt;p&gt;She goes quiet for a moment.&lt;/p&gt;

&lt;p&gt;"That feeling. I don't know how to describe it to someone who hasn't felt it. It's not satisfaction. It's not pride. It's closer to... solving a riddle that the system whispered to you in a language only you were patient enough to learn."&lt;/p&gt;

&lt;p&gt;She looks at the casket.&lt;/p&gt;

&lt;p&gt;"Now I describe the symptoms to AI and it hands me the answer. Correct. Instant. Empty. The answer was never the point. The chase was the point. And nobody's chasing anything anymore."&lt;/p&gt;




&lt;p&gt;The Engineer is the last to stand.&lt;/p&gt;

&lt;p&gt;He doesn't look at the room. He looks at the casket. He's here for a friend.&lt;/p&gt;

&lt;p&gt;"You were always the best part of me," he says quietly. "I can sit in a room and debate architecture for hours, and it all sounds brilliant until someone has to write it. You were the one who kept me honest. You were the proof."&lt;/p&gt;

&lt;p&gt;He steps closer.&lt;/p&gt;

&lt;p&gt;"And you were the one I came to when I was stuck. Remember? When a problem got too big and I couldn't think anymore, I'd pull up a chair next to you and say, 'let's just write something.' And we would. Some boring endpoint. A migration script. Nothing important. You'd be typing, and I'd be watching, and we'd argue about naming things. We always argued about naming things."&lt;/p&gt;

&lt;p&gt;He almost smiles.&lt;/p&gt;

&lt;p&gt;"Sometimes I'd take the keyboard. Sometimes you'd nudge me out of the way because I was too slow. We'd have music on, drinks on the desk, and we'd lose track of time. And somewhere in the middle of it — while we were deep in some function neither of us would remember writing — I'd see something. A pattern in the code. A shape I hadn't noticed from above. And suddenly the architecture problem I'd been stuck on for days would just... click. You'd look at me and go, 'what?' And I wouldn't even know how to explain it. It just happened. It always just happened when I was sitting next to you."&lt;/p&gt;

&lt;p&gt;He looks down at the casket.&lt;/p&gt;

&lt;p&gt;"Now I never visit. There's nothing to visit. AI writes the code, and I stay up in the architecture where the air is thin and there's no rest. I don't just miss you. I'm worse without you. And I don't think anyone else knows that yet."&lt;/p&gt;




&lt;p&gt;I sit here in the quiet. The monitor hums.&lt;/p&gt;

&lt;p&gt;I know the Engineer in me survives. The systems still need to be designed, the trade-offs still need to be weighed, and the architecture still needs to hold up under pressure. That work isn't going anywhere.&lt;/p&gt;

&lt;p&gt;But the Coder — the one who wrote it all by hand, who had opinions about bracket placement, who could mass-produce microservices with muscle memory alone — that person is gone.&lt;/p&gt;

&lt;p&gt;I don't know how to end a funeral. I don't think you're supposed to know. You just sit there for a while, in the quiet, and you remember.&lt;/p&gt;

&lt;p&gt;Eighteen years. What a run.&lt;/p&gt;




&lt;p&gt;I've been programming since I was 12 and working as a software engineer for 10 years. I wrote this at 2 a.m. on my 30th birthday.&lt;/p&gt;

</description>
      <category>career</category>
      <category>ai</category>
      <category>programming</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>The Butterfly Effect in Networked Agents</title>
      <dc:creator>Soroush Hashemi</dc:creator>
      <pubDate>Tue, 24 Feb 2026 10:17:53 +0000</pubDate>
      <link>https://dev.to/hashemi_soroush/the-butterfly-effect-in-networked-agents-7i5</link>
      <guid>https://dev.to/hashemi_soroush/the-butterfly-effect-in-networked-agents-7i5</guid>
      <description>&lt;p&gt;Imagine your bank's customer support is entirely run by AI agents. A triage agent classifies your issue and routes you to specialized agents — billing, fraud, technical support — with an escalation agent handling complex disputes.&lt;/p&gt;

&lt;p&gt;Your bank upgrades the triage agent. The new model is smarter — edge cases that used to get misrouted to billing now correctly go to the fraud team. Every test passes. Accuracy improves. The upgrade is a success.&lt;/p&gt;

&lt;p&gt;Within a week, the fraud agent is drowning. Its volume doubles, and half the new queries are ambiguous cases that it was never calibrated for. False positives spike. Legitimate customers get their accounts frozen. The downstream escalation agent, designed for rare, complex disputes, is flooded with angry customers who can't access their money. Resolution times triple.&lt;/p&gt;

&lt;p&gt;Every agent is doing exactly what it was designed to do. No agent is broken. The system is broken.&lt;/p&gt;

&lt;p&gt;Welcome to the butterfly effect in networked agents — a small improvement in one agent silently reshaping the operating conditions of every agent downstream. Not by sending wrong data, but by sending a &lt;em&gt;different distribution&lt;/em&gt; of correct data.&lt;/p&gt;
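&lt;p&gt;The shift is easy to reproduce in a toy simulation. Everything below is invented for illustration — the risk scores, the input mixes, and the 0.5 threshold are assumptions, not numbers from any real system — but it shows how a downstream threshold tuned on the old mix degrades when only the mix changes:&lt;/p&gt;

```python
import random

random.seed(42)

def score(kind):
    # Toy risk scores: clear fraud scores high, clear-legit scores low,
    # and ambiguous cases cluster around the decision boundary.
    if kind == "fraud":
        return random.uniform(0.7, 1.0)
    if kind == "legit":
        return random.uniform(0.0, 0.3)
    return random.uniform(0.4, 0.6)  # ambiguous

def false_positive_rate(mix, n=10_000, threshold=0.5):
    """Share of non-fraud cases the agent flags, for a given input mix."""
    kinds = random.choices(list(mix), weights=list(mix.values()), k=n)
    flagged = nonfraud = 0
    for kind in kinds:
        if kind == "fraud":
            continue  # true fraud can never be a false positive
        nonfraud += 1
        if score(kind) > threshold:
            flagged += 1
    return flagged / max(nonfraud, 1)

# Before the upgrade: triage sends mostly clear-cut fraud downstream.
old_mix = {"fraud": 0.85, "legit": 0.10, "ambiguous": 0.05}
# After the upgrade: identical schema, far more ambiguous cases.
new_mix = {"fraud": 0.45, "legit": 0.15, "ambiguous": 0.40}

print(f"false positives, old mix: {false_positive_rate(old_mix):.1%}")
print(f"false positives, new mix: {false_positive_rate(new_mix):.1%}")
```

&lt;p&gt;With these assumed numbers the false-positive rate roughly doubles, even though every individual item the fraud agent receives is still schema-valid and correctly typed.&lt;/p&gt;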

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ey3023dujjhg0vc4xt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ey3023dujjhg0vc4xt5.png" alt="The Agent Hierarchy" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Why This Is Different From Traditional Software&lt;/h2&gt;

&lt;p&gt;In traditional software, when you change the behavior of an upstream service, you think about the downstream impact. You version your API. You communicate breaking changes. You run integration tests.&lt;/p&gt;

&lt;p&gt;But this isn't a breaking change. The triage agent's API didn't change. Its output schema is identical. The downstream agents receive the same data types in the same format. What changed is the statistical profile of the requests flowing through the system — and we have no tooling to version, communicate, or test for that.&lt;/p&gt;

&lt;p&gt;Traditional monitoring doesn't help either. The fraud agent's error rate went up, so you investigate. Its model is fine. Its prompts are fine. You A/B test new prompts, retrain on fresh data, and tweak thresholds. But it still won't match its previous performance, because it was dealing with simpler, pre-filtered cases before. The problem isn't the fraud agent. The inputs changed, not the agent.&lt;/p&gt;

&lt;p&gt;This isn't a new problem, strictly speaking. Data drift has been a known challenge in machine learning for years — models degrade when their input distribution shifts away from their training data, whether because of upstream changes or seasonal shifts in user behavior. But historically, it stayed low-priority. ML-based systems were mostly self-contained. A recommendation engine, a fraud detector, a search ranker — each lived inside a single company, fed by data pipelines the owning team controlled. Building and deploying ML systems required specialists who could train models and maintain infrastructure, so ML systems were rarely chained together, and distributional shifts were rarely caused by an upstream model update.&lt;/p&gt;

&lt;p&gt;Agents change the equation entirely. The barrier to building and deploying an agent is a fraction of what it was for traditional ML software. You don't need a team of ML engineers to train and deploy a model — you need a prompt and an API key. Every company can now embed agents in every component of its software. And once agents are everywhere, networking them becomes inevitable. When agents from different companies are chained together, the input distribution of every downstream agent is shaped by the behavior of upstream agents it doesn't control, can't monitor, and may not even know about. Data drift goes from an internal ops concern to an inter-organizational reliability problem — and the tooling hasn't caught up.&lt;/p&gt;

&lt;p&gt;This is the fundamental asymmetry: their improvement is your regression. The triage agent got better at its job, and that made every downstream agent worse at theirs. Not because anything broke, but because the downstream agents were implicitly calibrated to the old distribution, and nobody knew that was a dependency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8qlfkxckld9bk0ow9hp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8qlfkxckld9bk0ow9hp.png" alt="Traditional Software vs Networked Agents" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Now, Imagine This Across Companies&lt;/h2&gt;

&lt;p&gt;The customer support example happens inside one organization. That means one team can, in theory, detect the problem, trace the cause, and recalibrate the chain — even if it takes weeks.&lt;/p&gt;

&lt;p&gt;Now imagine the agents are owned by different companies.&lt;/p&gt;

&lt;p&gt;The triage agent is a third-party service your bank subscribes to. The fraud detection agent is from another vendor. The escalation routing goes through a customer experience platform built by a fourth company. Each company maintains its own agent, model, and deployment schedule.&lt;/p&gt;

&lt;p&gt;The triage vendor upgrades their model. Your fraud vendor's agent starts underperforming. Your fraud vendor investigates, finds nothing wrong, and blames your data. You escalate to the CX platform vendor, which sees degraded metrics but has no access to the fraud or triage models. The triage vendor, meanwhile, is celebrating improved accuracy numbers and has no idea their upgrade destabilized two downstream systems they've never heard of.&lt;/p&gt;

&lt;p&gt;Nobody can see the full chain. Nobody has the access, the context, or the incentive to diagnose a cross-boundary distributional shift. The problem festers until someone manually connects the dots — or until it shows up in quarterly customer churn numbers, triggering a post-mortem that takes months.&lt;/p&gt;

&lt;p&gt;This is where the butterfly effect becomes truly dangerous. Inside one company, it's an operational headache. Across companies, it's an invisible failure with no owner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gi8wnsarskns65nvnja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gi8wnsarskns65nvnja.png" alt="Cross Company Example" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What the Industry Needs&lt;/h2&gt;

&lt;p&gt;We spent decades building reliability infrastructure for traditional software composition — type systems, semantic versioning, contract testing, and dependency management. For networked agents, we have nothing equivalent. Here's what needs to exist.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Distribution-aware contracts&lt;/strong&gt;. Agent-to-agent interfaces need to specify not just data types but expected input distributions. "This agent expects 80-90% of inputs to be likely fraud cases" is a distributional contract. If the upstream agent's behavior shifts that distribution outside the agreed range, it should be treated as a breaking change — even if the schema is identical.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dependency-aware deployment&lt;/strong&gt;. When you upgrade one agent, the deployment pipeline should automatically identify which downstream agents are affected and trigger re-evaluation. This is the agent equivalent of "which services depend on this one?" — except the dependency isn't in the API contract, it's in the statistical properties of the data flowing between them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous distributional monitoring&lt;/strong&gt;. Every agent should track the statistical profile of its inputs over time — volume, composition, feature distributions. When the profile shifts beyond a threshold, alert. This is data drift monitoring applied to agent communication, and it should be as standard as uptime monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated recalibration pipelines&lt;/strong&gt;. When a distributional shift is detected, downstream agents should have automated pipelines to re-run evaluations, adjust thresholds, update few-shot examples, or trigger fine-tuning. Manual recalibration doesn't scale — especially across organizational boundaries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
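&lt;p&gt;As a sketch of what the first item could look like in practice — the band, the field name, and the label value here are all hypothetical — a distributional contract can be as simple as an agreed range on the composition of a batch, checked at the boundary even though every item passes schema validation:&lt;/p&gt;

```python
from collections import Counter

# Hypothetical contract for the fraud agent's inbox: the share of inputs
# pre-screened as likely fraud must stay inside an agreed band.
CONTRACT = {"label": "likely_fraud", "min_share": 0.80, "max_share": 0.90}

def check_contract(batch, contract=CONTRACT):
    """Return (ok, observed_share) for a batch of routed items.
    Every item is schema-valid; only the mix is being checked."""
    counts = Counter(item["label"] for item in batch)
    share = counts[contract["label"]] / max(len(batch), 1)
    ok = (share >= contract["min_share"]) and (contract["max_share"] >= share)
    return ok, share

before = [{"label": "likely_fraud"}] * 85 + [{"label": "ambiguous"}] * 15
after = [{"label": "likely_fraud"}] * 45 + [{"label": "ambiguous"}] * 55

print(check_contract(before))  # (True, 0.85)  -- contract holds
print(check_contract(after))   # (False, 0.45) -- same schema, broken contract
```

&lt;p&gt;A failed check here would be the distributional equivalent of a failed contract test: the upstream change should be treated as breaking, even though no schema changed.&lt;/p&gt;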

&lt;h2&gt;What You Can Do Today&lt;/h2&gt;

&lt;p&gt;The industry-level solutions don't exist yet. But if you're building agent systems, there are practical steps you can take now.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor input distributions&lt;/strong&gt;, not just output quality. If the volume or composition of queries hitting an agent shifts significantly, that's a signal — even if no individual query looks wrong. Track category distributions, query length distributions, and topic clustering over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Treat upstream agent upgrades as deployment events for your entire chain&lt;/strong&gt;. When you upgrade one agent, re-run evals on every downstream agent. Don't just test the agent you changed. Test the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build recalibration into your ops process&lt;/strong&gt;. Have a runbook for "upstream distribution changed." Know in advance which few-shot examples, thresholds, and fine-tuning data need updating, and how to update them quickly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Design agents to be robust to distributional shift&lt;/strong&gt;. If your fraud agent completely falls apart when its input distribution shifts from 90% fraud to 50% fraud, it's too tightly calibrated. Build in headroom. Test against varied distributions, not just the current one.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
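&lt;p&gt;One lightweight way to implement the first bullet — assuming your agent already logs a category label per request — is to compare the current window's category distribution against a baseline window using total variation distance. The 0.15 threshold below is an arbitrary starting point for illustration, not a recommendation:&lt;/p&gt;

```python
from collections import Counter

def category_distribution(labels):
    """Normalize a window of category labels into a probability dict."""
    counts = Counter(labels)
    total = max(len(labels), 1)
    return {k: v / total for k, v in counts.items()}

def total_variation(p, q):
    """Total variation distance between two category distributions:
    half the L1 distance, from 0.0 (identical) to 1.0 (disjoint)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def drift_alert(baseline_labels, current_labels, threshold=0.15):
    """Return (should_alert, distance) comparing current inputs to baseline."""
    d = total_variation(category_distribution(baseline_labels),
                        category_distribution(current_labels))
    return d > threshold, d

baseline = ["fraud"] * 85 + ["billing"] * 10 + ["ambiguous"] * 5
current = ["fraud"] * 45 + ["billing"] * 15 + ["ambiguous"] * 40

alert, distance = drift_alert(baseline, current)
print(f"alert={alert}, distance={distance:.2f}")  # alert=True, distance=0.40
```

&lt;p&gt;Total variation is only one choice of distance — population stability index or KL divergence work too — but the point is the same: the alert fires on a shift in the mix, long before any individual request looks wrong.&lt;/p&gt;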

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qrqx630kn0wzhieswi0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qrqx630kn0wzhieswi0.png" alt="The Cascade" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Road Ahead&lt;/h2&gt;

&lt;p&gt;We're at an inflection point. The industry is building hierarchies of agents — within companies and across them — and deploying them into high-stakes workflows without the reliability infrastructure to support composition.&lt;/p&gt;

&lt;p&gt;The butterfly effect in networked agents isn't a theoretical concern. It's the inevitable consequence of chaining systems that are each calibrated to their current operating conditions, without accounting for the fact that those conditions are shaped by every other agent in the chain. One upgrade anywhere changes operating conditions everywhere downstream.&lt;/p&gt;

&lt;p&gt;We solved software composability once. We'll solve it again. But the first step is recognizing that the failure mode is different this time. It's not crashes and errors. It's a silent distributional shift that degrades the entire system while every individual component reports healthy.&lt;/p&gt;

&lt;p&gt;The teams that recognize this early and build for it will ship reliable agent systems. The teams that don't will spend months debugging invisible failures that no single team caused, and no single team can fix.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're building multi-agent systems or thinking about agent reliability, I'd like to hear how you're approaching this. The problem is real, the tooling isn't, and it's a gap worth closing together.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>mlops</category>
      <category>ai</category>
      <category>sre</category>
    </item>
  </channel>
</rss>
