<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aditya Agarwal</title>
    <description>The latest articles on DEV Community by Aditya Agarwal (@adioof).</description>
    <link>https://dev.to/adioof</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2760047%2F17358ceb-daca-46e9-9a88-1904b8402d3f.jpg</url>
      <title>DEV Community: Aditya Agarwal</title>
      <link>https://dev.to/adioof</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adioof"/>
    <language>en</language>
    <item>
      <title>React won't die because AI won't let it</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sun, 03 May 2026 13:08:50 +0000</pubDate>
      <link>https://dev.to/adioof/react-wont-die-because-ai-wont-let-it-4ne1</link>
      <guid>https://dev.to/adioof/react-wont-die-because-ai-wont-let-it-4ne1</guid>
      <description>&lt;p&gt;All frameworks are eventually replaced. React is probably the first that won’t be.&lt;/p&gt;

&lt;p&gt;It's not the best framework out there, and it's not the one developers love the most. It's the one the robots just won't quit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Default Output Problem
&lt;/h2&gt;

&lt;p&gt;Ask ChatGPT to build you a todo app. You'll get React. Ask Copilot to scaffold a component. React. Ask Claude to prototype a dashboard. React.&lt;/p&gt;

&lt;p&gt;It's not a conspiracy, it's statistics. React has led frontend development for ten years, which means ten years of Stack Overflow answers, blog posts, tutorials, and open-source repos are baked into every major LLM.&lt;/p&gt;

&lt;p&gt;Newer frameworks like Solid and Svelte are genuinely excellent. They solve real problems React still struggles with. But they have a fraction of the written material online, which means they have a fraction of the representation in AI training corpora.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Assistants Are the New Defaults
&lt;/h2&gt;

&lt;p&gt;Think about how many developers now start projects by prompting an AI. Not all of them, sure. But enough to matter.&lt;/p&gt;

&lt;p&gt;If a junior developer asks an AI for advice and gets React code back, that's not a neutral suggestion. It's a recommendation from the most influential suggestion machine in the world, and that machine is biased toward one framework. Then that junior goes and ships React, blogs about React, and trains the next model on React.&lt;/p&gt;

&lt;p&gt;→ More React code in the wild means more React in future training data.&lt;br&gt;
→ More React in training data means AI generates even more React.&lt;br&gt;
→ The loop compounds every single day.&lt;/p&gt;
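
&lt;p&gt;The loop above can be sketched as a toy simulation. This is purely illustrative: the function, the bias factor, and the starting share are made-up assumptions, not measurements.&lt;/p&gt;

```python
# Toy model of the training-data feedback loop. All numbers are
# hypothetical; only the compounding shape is the point.

def next_share(share, bias=1.2):
    """One generation: AI output over-represents the dominant framework,
    and that output becomes part of the next round's training data."""
    boosted = share * bias
    return min(boosted / (boosted + (1 - share)), 0.999)

share = 0.55  # assumed starting share of React in frontend training data
for year in range(5):
    share = next_share(share)
    print(f"year {year + 1}: React share approx {share:.2f}")
```

&lt;p&gt;Even a modest bias compounds: the share only ever moves up, which is the whole argument in a few lines of arithmetic.&lt;/p&gt;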

&lt;p&gt;This is a force that no number of "Svelte has a better developer experience" blog posts can overcome. Not because the point is invalid. It's just outnumbered.&lt;/p&gt;

&lt;h2&gt;
  
  
  npm Downloads Don't Lie
&lt;/h2&gt;

&lt;p&gt;The npm download numbers of React continue to rise. Every quarter, a tweet claims that React is dead. Every quarter, the graph goes up.&lt;/p&gt;

&lt;p&gt;The critics are right that React has real shortcomings. We dealt with class components for years. The hooks mental model confuses plenty of people. Server components can be genuinely hard to reason about.&lt;/p&gt;

&lt;p&gt;But none of that matters if the entire AI-assisted development pipeline defaults to React output. Framework adoption used to be driven by developer preference; now it's driven by AI defaults. 🤖&lt;/p&gt;

&lt;h2&gt;
  
  
  This Isn't About Quality Anymore
&lt;/h2&gt;

&lt;p&gt;To be clear, I'm not arguing React deserves this position. Solid's reactivity model is better. Svelte's compiler approach is beautiful. Vue's learning curve is gentler.&lt;/p&gt;

&lt;p&gt;But "deserves" doesn't drive adoption at scale. Distribution does. And React now has the most powerful distribution channel in the history of software development: it's the default answer every AI gives to every frontend question.&lt;/p&gt;

&lt;p&gt;The winners going forward won’t be the frameworks with the best APIs. They’ll be the ones generating enough training data to shift the AI defaults. That’s a brutal, unglamorous grind. And React is a decade ahead.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Would It Take to Break the Cycle?
&lt;/h2&gt;

&lt;p&gt;To be honest, I can only see two things fixing this: a major AI company intentionally biasing its models toward another framework, or a framework so distinctive that prompts would have to name it explicitly for the model to produce it.&lt;/p&gt;

&lt;p&gt;As long as the default prompt is "build me a web app" and the default response is React, the pattern continues. The React loop is no longer about developers making a free choice. It's an emergent property of how LLMs operate. 😅&lt;/p&gt;

&lt;p&gt;The tech industry loves to talk about using the best tool for the job. But if your AI assistant only ever hands you one tool, you end up reshaping the job to fit it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So here's what I want to know: if AI training data locks in today's winners permanently, does framework innovation even matter anymore?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>ai</category>
      <category>webdev</category>
      <category>frameworks</category>
    </item>
    <item>
      <title>14 years at one company broke a senior dev's career. Loyalty is a scam.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Fri, 01 May 2026 13:25:48 +0000</pubDate>
      <link>https://dev.to/adioof/14-years-at-one-company-broke-a-senior-devs-career-loyalty-is-a-scam-4l2d</link>
      <guid>https://dev.to/adioof/14-years-at-one-company-broke-a-senior-devs-career-loyalty-is-a-scam-4l2d</guid>
      <description>&lt;p&gt;A senior frontend dev spent 14 years at one company, got laid off, and discovered they couldn't pass a modern interview. That's not a cautionary tale. That's the system working as designed.&lt;/p&gt;

&lt;p&gt;I keep thinking about this story. It surfaced in a developer community known for strict anti-venting rules, and it still resonated hard. That tells you something about how many people saw themselves in it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Comfort Trap
&lt;/h2&gt;

&lt;p&gt;Fourteen years is a long time. Long enough to become the person everyone asks about that one legacy system. Long enough to stop learning things that scare you.&lt;/p&gt;

&lt;p&gt;The dev reported being completely unprepared for today's job market. Not because they were bad at their job. Because their job had quietly stopped demanding growth years ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  Loyalty Is a One-Way Contract
&lt;/h2&gt;

&lt;p&gt;People never talk about what being comfortable costs you: the company isn’t being loyal, it’s being efficient. You’re a known quantity. You have institutional knowledge. And you’re probably underpaid compared to the market.&lt;/p&gt;

&lt;p&gt;The day the spreadsheet says your role is a cost and not a benefit, fourteen years of “loyalty” goes out the window in a single calendar invite with HR. There’s no pep talk. No time to upskill. Just the severance package they already calculated and a LinkedIn that says “Single-company career.”&lt;/p&gt;

&lt;p&gt;I’ve seen it done to people I work with. It sucks every time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Job-Hopping Is Self-Defense
&lt;/h2&gt;

&lt;p&gt;I used to think multiple moves looked flaky. Now I think staying too long without growing is the riskier bet. Here’s why:&lt;/p&gt;

&lt;p&gt;→ Every job switch forces you to learn new codebases, new tools, new team dynamics&lt;br&gt;
→ Interviews keep your skills market-tested, not just employer-tested&lt;br&gt;
→ Compensation resets happen at transitions, not during annual reviews&lt;br&gt;
→ You build a network across companies instead of a reputation inside one&lt;/p&gt;

&lt;p&gt;It’s not about running from a good thing on a bad day. But if you haven’t interviewed in three years, you have no idea what you’re worth or which skills are slipping. That’s not solid ground. That’s a blindfold. 🎯&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem Is Invisible Decay
&lt;/h2&gt;

&lt;p&gt;Skills don't rot overnight. They rot over years while you feel productive. You ship features, close tickets, mentor juniors. Everything feels fine.&lt;/p&gt;

&lt;p&gt;Then one day you're in a take-home assignment using a framework you've only read about. Or you're whiteboarding system design patterns your company never needed. The gap isn't small. It's a canyon you didn't notice forming because you never looked down.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;I don't think the answer is "hop every 18 months" like some career advice suggests. The answer is intentional pressure-testing:&lt;/p&gt;

&lt;p&gt;→ Interview once a year even if you're happy — treat it like a health check&lt;br&gt;
→ Build side projects with tools your company doesn't use&lt;br&gt;
→ Track what the market demands and honestly assess your gaps&lt;br&gt;
→ Have a "what if I got laid off tomorrow" plan that isn't panic&lt;/p&gt;

&lt;p&gt;Staying somewhere for a decade can be great if you're genuinely growing. But growth means discomfort. If you haven't been uncomfortable at work in a while, you might just be stagnating with a senior title. 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;The industry doesn't reward loyalty. It rewards optionality. The developers who survive layoffs unscathed aren't the ones who never get cut — they're the ones who can land somewhere new within weeks because they never stopped being marketable.&lt;/p&gt;

&lt;p&gt;Comfort is nice. Comfort without awareness is a trap with a delayed trigger.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have you ever stayed somewhere too long and only realized it after leaving? What was the wake-up call?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>layoffs</category>
      <category>seniority</category>
      <category>hiring</category>
    </item>
    <item>
      <title>"Taste" is the new 10x. Senior devs who can't curate AI output are cooked.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Fri, 01 May 2026 10:12:07 +0000</pubDate>
      <link>https://dev.to/adioof/taste-is-the-new-10x-senior-devs-who-cant-curate-ai-output-are-cooked-48ld</link>
      <guid>https://dev.to/adioof/taste-is-the-new-10x-senior-devs-who-cant-curate-ai-output-are-cooked-48ld</guid>
      <description>&lt;p&gt;Writing code is easily done, but identifying the code that ought to be available is what's difficult nowadays.&lt;/p&gt;

&lt;p&gt;Many experienced engineers are converging on the idea that "taste" will be the most important skill of the AI era. "Taste" here doesn't mean aesthetics. It means judgment: the ability to look at AI output and know precisely what's wrong with it, the kind of thing that compounds into technical debt, before it gets merged.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters Right Now
&lt;/h2&gt;

&lt;p&gt;AI now generates junior-level code virtually for free. You can spin up in minutes a feature that used to take a full day to build. But "functional" and "ready to ship" are not the same thing.&lt;/p&gt;

&lt;p&gt;The gap between those two is where skilled engineers prove their real value. Or where they reveal they've been coasting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taste Is Not Vibes
&lt;/h2&gt;

&lt;p&gt;By taste I mean the ability to tell which code deserves to be written. After two decades of experience, here's what I can say about it: it's not some cryptic, elusive instinct. It's pattern recognition, built over years of watching systems succeed and fail.&lt;/p&gt;

&lt;p&gt;→ Knowing that a 200-line abstraction will save you now but cost you in six months.&lt;br&gt;
→ Recognizing when AI output follows the "happy path" but ignores every edge case your users will hit.&lt;br&gt;
→ Feeling the friction in an API surface before anyone files a bug.&lt;/p&gt;

&lt;p&gt;There's an ongoing debate: is good taste enough on its own, or must it be paired with solid architecture knowledge? In my view, taste without architecture knowledge is mere personal preference, and architecture knowledge without taste breeds unnecessary complexity. You need both. Taste is the scarcer of the two right now, because nobody teaches it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Role Shift Nobody Prepared For
&lt;/h2&gt;

&lt;p&gt;Senior developers used to be valued for building things quickly and well. Now AI handles the "quickly." The "well" part is entirely on you.&lt;/p&gt;

&lt;p&gt;The role now looks more like film director than construction worker. You inspect, select, and revise. You have to reject 80% of the generated output and understand exactly why. That is a completely different skill from writing the code yourself.&lt;/p&gt;

&lt;p&gt;If your only advantage is typing speed and syntax, you're already in trouble. Frankly, those were never senior skills; the industry just let some people believe they were. 🫠&lt;/p&gt;

&lt;h2&gt;
  
  
  How You Build Taste
&lt;/h2&gt;

&lt;p&gt;You can't read a book about it. You build taste by shipping things, watching them break, and developing strong opinions about why.&lt;/p&gt;

&lt;p&gt;→ Review more code than you write. Especially code that's been in production for a year.&lt;br&gt;
→ Study systems that aged well. Ask what decisions made them resilient.&lt;br&gt;
→ When AI gives you output, don't accept it — interrogate it. What would you change? Why?&lt;/p&gt;

&lt;p&gt;The engineers I respect most can look at a PR and say "this works but it's wrong" and then articulate the reason in one sentence. That's taste. It's specific, defensible, and earned. 🎯&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;Some senior engineers are going to struggle here. Not because they lack knowledge, but because they never developed the editorial instinct. They were builders, not curators. And the industry rewarded building for two decades.&lt;/p&gt;

&lt;p&gt;That era is ending. The 10x engineer of 2025 isn't writing 10x more code. They're preventing 10x more bad code from shipping.&lt;/p&gt;




&lt;p&gt;So here's what I want to know: &lt;strong&gt;Do you think "taste" is a real, trainable skill — or is it just gatekeeping dressed up in a new outfit?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>React won't die because LLMs won't let it</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Thu, 30 Apr 2026 19:16:00 +0000</pubDate>
      <link>https://dev.to/adioof/react-wont-die-because-llms-wont-let-it-8o</link>
      <guid>https://dev.to/adioof/react-wont-die-because-llms-wont-let-it-8o</guid>
      <description>&lt;p&gt;Almost all AI programming tools use React as the standard. This single piece of information will have a greater influence on frontend development in the next ten years than any technological indicator ever will.&lt;/p&gt;

&lt;p&gt;Consider this: When a million engineers ask ChatGPT to “create a todo app for me”, they receive React. When somebody uses v0 or Bolt to bootstrap a SaaS dashboard, they get Next.js. React won by being the default.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Training Data Flywheel
&lt;/h2&gt;

&lt;p&gt;LLMs are only as good as the data they’re trained on. The web contains more React examples than any other code. Thus, LLMs create React examples more proficiently than anything else.&lt;/p&gt;

&lt;p&gt;This makes the circle virtually unbreakable. More React results in more React in the training data to follow. More React in the data trains a better React model. A better React model leads to more people producing React. 🔄&lt;/p&gt;

&lt;p&gt;→ React dominates training corpora&lt;br&gt;
→ LLMs generate React with higher quality and confidence&lt;br&gt;
→ AI-powered tools choose React as their default&lt;br&gt;
→ New projects ship React, producing more training data&lt;br&gt;
→ The cycle repeats&lt;/p&gt;

&lt;p&gt;Svelte and Solid are genuinely excellent. But they have a fraction of the representation in the data that shapes these models. It doesn't matter if your framework is technically superior when the robots haven't read enough of it to be fluent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vibe Coding Is a React Monoculture
&lt;/h2&gt;

&lt;p&gt;"Vibe coding" — where you describe what you want and an AI builds it — is exploding. And it's overwhelmingly React.&lt;/p&gt;

&lt;p&gt;v0 generates React components. Bolt scaffolds Next.js apps. When non-technical founders prototype their ideas with AI, they're unknowingly casting a vote for React's continued dominance. They don't care about the framework. They care that it works. And React works because the AI knows it best.&lt;/p&gt;

&lt;p&gt;This is the worst possible news for framework diversity. Innovation used to spread through blog posts, conference talks, and developer curiosity. Now adoption spreads through whatever the LLM spits out first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Should Bother You (Even If You Love React)
&lt;/h2&gt;

&lt;p&gt;I don't hate React. It's fine. It does the job. But "fine" shouldn't win by default forever.&lt;/p&gt;

&lt;p&gt;Monocultures are fragile. They stifle the experimentation that gave us hooks, signals, compiled reactivity, and server components in the first place. If every new project starts as React because an AI decided, we lose the pressure that forces frameworks to evolve.&lt;/p&gt;

&lt;p&gt;The irony is brutal: React's best features were inspired by ideas from smaller frameworks. Signals came from Solid. Compilation came from Svelte. If those frameworks can't gain adoption because LLMs ignore them, React loses its own innovation pipeline. 😬&lt;/p&gt;

&lt;h2&gt;
  
  
  What Would It Take to Break the Loop?
&lt;/h2&gt;

&lt;p&gt;Honestly? I'm not sure it can be broken organically. A few things that &lt;em&gt;might&lt;/em&gt; help:&lt;/p&gt;

&lt;p&gt;→ AI tool builders intentionally offering framework choice upfront&lt;br&gt;
→ Smaller frameworks investing heavily in structured training data and documentation&lt;br&gt;
→ A major LLM provider partnering with a non-React framework as a first-class target&lt;/p&gt;

&lt;p&gt;But the incentives don't align. Tool builders want reliable output. Reliable output means React. Nobody's going to ship a worse product to promote ecosystem diversity out of principle.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;React won't survive because it's the best tool for the job. It won't survive because it has the largest, most engaged community. It will survive because &lt;strong&gt;the machines keep handing it to us&lt;/strong&gt;, and we increasingly accept what the machines decide.&lt;/p&gt;

&lt;p&gt;There’s no conspiracy here, just statistics. The sheer mass of React in the training data creates gravity, and every line of AI-generated code makes it stronger. 🕳️&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So the real question is:&lt;/strong&gt; if you're starting a new project today and the AI hands you React, will you take it, or swim against the current? And if you resist, why?&lt;/p&gt;

</description>
      <category>react</category>
      <category>ai</category>
      <category>webdev</category>
      <category>frameworks</category>
    </item>
    <item>
      <title>AI doesn't replace junior devs. It makes senior devs do junior work.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Wed, 29 Apr 2026 19:09:18 +0000</pubDate>
      <link>https://dev.to/adioof/ai-doesnt-replace-junior-devs-it-makes-senior-devs-do-junior-work-edj</link>
      <guid>https://dev.to/adioof/ai-doesnt-replace-junior-devs-it-makes-senior-devs-do-junior-work-edj</guid>
      <description>&lt;p&gt;Here's the deal: companies are firing junior developers and handing their work to AI. But that work doesn't disappear. It just rolls uphill to the most expensive people in the building.&lt;/p&gt;

&lt;p&gt;The "replace juniors with AI" strategy sounds brilliant in a board meeting. In practice, it's a false economy that burns out your best engineers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hospital Secretary Problem
&lt;/h2&gt;

&lt;p&gt;There's a well-known pattern in healthcare. Hospitals cut administrative staff to save money. Suddenly, doctors — the highest-paid people in the building — are filling out forms, scheduling appointments, and doing data entry.&lt;/p&gt;

&lt;p&gt;The work didn't vanish. It just got redistributed to people who cost 5x more per hour.&lt;/p&gt;

&lt;p&gt;That's exactly what's happening in software right now. Multiple companies have publicly reduced junior hiring, citing AI capabilities. The pitch is simple: "Copilot can do what a junior does."&lt;/p&gt;

&lt;p&gt;Except it can't. Not without supervision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Seniors Are Now Babysitting LLMs
&lt;/h2&gt;

&lt;p&gt;Here's what actually happens when you replace a junior with an AI tool. A senior engineer prompts the LLM. They review the output. They catch the subtle bugs. They refactor the weird architectural choices. They fix the hallucinated API calls.&lt;/p&gt;

&lt;p&gt;Senior engineers are increasingly reporting that they spend significant time reviewing and fixing AI-generated code. That's not leverage. That's a new job responsibility nobody signed up for. 🫠&lt;/p&gt;

&lt;p&gt;→ Before: Senior architects systems, junior implements under guidance, senior reviews.&lt;br&gt;
→ After: Senior architects systems, prompts AI, reviews AI output, fixes AI output, then reviews their own fixes.&lt;/p&gt;

&lt;p&gt;You didn't remove the junior's workload. You removed the junior and gave a senior a worse version of the same task — one that &lt;em&gt;looks&lt;/em&gt; done but requires forensic inspection.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost Nobody's Tracking
&lt;/h2&gt;

&lt;p&gt;The real damage isn't just salary arbitrage gone wrong. It's attention.&lt;/p&gt;

&lt;p&gt;A senior fixing AI slop isn't designing systems. They're not mentoring. They're not making the architectural calls that actually move the product forward. You're paying principal-engineer rates for code-review grunt work.&lt;/p&gt;

&lt;p&gt;And here's the part that actually keeps me up at night: junior roles are the training ground. Cut them off and you're eating your seed corn. In five years you'll have a generation gap, with nobody ready to step into senior roles.&lt;/p&gt;

&lt;p&gt;The pipeline doesn't refill itself. 🔥 &lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Math
&lt;/h2&gt;

&lt;p&gt;If you're a manager considering this trade, do the real math: &lt;/p&gt;

&lt;p&gt;→ Junior salary: X&lt;br&gt;
→ Senior time spent babysitting AI: Y hours × senior hourly rate&lt;br&gt;
→ Opportunity cost of senior not doing senior work: enormous and invisible&lt;/p&gt;

&lt;p&gt;That second line item never shows up in the "AI savings" spreadsheet. But it's real. Every senior who spends an afternoon debugging AI-generated garbage is an afternoon of system design that didn't happen.&lt;/p&gt;
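
&lt;p&gt;Here's that math as a sketch. Every figure below is a hypothetical placeholder; substitute your own numbers.&lt;/p&gt;

```python
# Back-of-the-envelope version of the math above.
# All figures are hypothetical placeholders.

junior_salary = 90_000          # X: annual cost of the junior who was cut
senior_rate = 120               # assumed fully loaded senior hourly rate
babysit_hours_per_week = 10     # assumed time spent fixing AI output

# Y, annualized over roughly 48 working weeks
ai_review_cost = babysit_hours_per_week * senior_rate * 48

print(f"visible saving:               {junior_salary}")
print(f"hidden review cost:           {ai_review_cost}")
print(f"net, before opportunity cost: {junior_salary - ai_review_cost}")
```

&lt;p&gt;Even with generous assumptions the visible saving shrinks fast, and that last line still excludes the opportunity cost, which is the part that actually hurts.&lt;/p&gt;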

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;AI is a great &lt;em&gt;amplifier&lt;/em&gt; for juniors, not a replacement for them. Give a junior developer AI tools and senior mentorship? Now you've got acceleration. The junior learns faster. The senior stays focused on high-leverage work. The AI handles boilerplate under human supervision at the appropriate pay grade.&lt;/p&gt;

&lt;p&gt;That's the model that makes economic sense. Not "fire the cheap people and make the expensive people do their jobs plus a new meta-job."&lt;/p&gt;




&lt;p&gt;The companies cutting junior roles aren't saving money. They're hiding costs in senior burnout and pretending the spreadsheet tells the whole story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's your experience?&lt;/strong&gt; Are you a senior spending more time wrangling AI output than you expected? Or a junior watching doors close? I'd love to hear how this is playing out on your team.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>programming</category>
      <category>webdev</category>
      <category>career</category>
    </item>
    <item>
      <title>That $500k AI rewrite story is actually a story about test suites</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Wed, 29 Apr 2026 10:16:59 +0000</pubDate>
      <link>https://dev.to/adioof/that-500k-ai-rewrite-story-is-actually-a-story-about-test-suites-4mpf</link>
      <guid>https://dev.to/adioof/that-500k-ai-rewrite-story-is-actually-a-story-about-test-suites-4mpf</guid>
      <description>&lt;p&gt;A company recently made waves claiming AI saved them an estimated $300k per year in compute costs by rewriting a JavaScript library into Go. Sounds like an AI miracle story, right? I think the headline buried the lede.&lt;/p&gt;

&lt;p&gt;The real hero wasn't the AI. It was the test suite that already existed before anyone typed a single prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Story Caught My Eye
&lt;/h2&gt;

&lt;p&gt;Everyone loves a good "AI saves the day" narrative. It's clean. It's shareable. It makes executives reach for their wallets.&lt;/p&gt;

&lt;p&gt;But when you read the details, a different picture emerges. The team had a thorough, battle-tested suite of tests covering the original JavaScript library. They used those tests to validate every single piece of Go code the AI generated.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Was a Fast Typist
&lt;/h2&gt;

&lt;p&gt;Here's what actually happened. The AI translated syntax from one language to another. That's genuinely useful work — nobody wants to manually port a library line by line.&lt;/p&gt;

&lt;p&gt;But "translating syntax" is not "engineering a reliable system." The test suite is what told them the AI's output was correct. Without it, they'd have shipped a pile of Go-shaped guesses into production. 🎯&lt;/p&gt;

&lt;p&gt;Think about it this way:&lt;/p&gt;

&lt;p&gt;→ AI without tests = fast, confident, potentially wrong&lt;br&gt;
→ AI with tests = fast, confident, &lt;em&gt;verified&lt;/em&gt;&lt;br&gt;
→ Tests without AI = slow but still reliable&lt;/p&gt;

&lt;p&gt;The $500k savings came from the infrastructure improvements of running Go instead of JavaScript. The AI just accelerated the migration timeline. The tests are what made that acceleration &lt;em&gt;safe&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  We've Seen This Movie Before
&lt;/h2&gt;

&lt;p&gt;This isn't a new pattern. Every productivity multiplier in software history has the same dependency: you need a way to know the output is correct.&lt;/p&gt;

&lt;p&gt;Compilers need type systems. Refactoring tools need test coverage. Code generators need validation. AI code assistants need the exact same thing.&lt;/p&gt;

&lt;p&gt;Strip away the test suite from this story and you get a team that shipped an unverified rewrite to production. That's not a $500k savings story. That's a "we got lucky" story — or worse, a postmortem waiting to happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth About AI Rewrites
&lt;/h2&gt;

&lt;p&gt;I think a lot of teams are going to try to replicate this win. Most of them will fail. Not because the AI isn't capable of translating code. But because they don't have the test coverage to catch the places where the AI gets it subtly wrong.&lt;/p&gt;

&lt;p&gt;And subtle bugs in a rewrite are the worst kind. Everything &lt;em&gt;looks&lt;/em&gt; right. The code compiles. The happy paths work. Then six weeks later you're debugging a race condition the AI introduced because it didn't understand an implicit contract in the original code. 🔥&lt;/p&gt;

&lt;p&gt;The unsexy prerequisite to every AI-assisted rewrite is the same thing that was unsexy before AI existed: disciplined test-driven development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Actually Celebrate
&lt;/h2&gt;

&lt;p&gt;If I were on that team, I'd be proud of the engineers who built and maintained that test suite long before AI was in the picture. They're the ones who created the safety net that made a fast migration possible.&lt;/p&gt;

&lt;p&gt;The AI deserves credit for speed. Absolutely. But speed without correctness is just generating bugs faster.&lt;/p&gt;

&lt;p&gt;→ The investment in tests paid off years later in a way nobody predicted&lt;br&gt;
→ Good engineering practices compound over time&lt;br&gt;
→ AI amplifies your existing discipline — or your existing chaos&lt;/p&gt;

&lt;h2&gt;
  
  
  So What's the Takeaway?
&lt;/h2&gt;

&lt;p&gt;Next time you see a viral story about AI saving a company a fortune, look for the boring infrastructure underneath. There's almost always a test suite, a CI pipeline, or a team of careful engineers doing the unglamorous work that made the AI's output trustworthy. AI is a powerful tool. But tools don't deserve credit for the craftsmanship. 🛠️&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's a question for you:&lt;/strong&gt; If your team attempted an AI-assisted rewrite of a core system tomorrow, would your test coverage be good enough to catch the mistakes?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>AI didn't save $500k. A test suite did.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Tue, 28 Apr 2026 10:11:37 +0000</pubDate>
      <link>https://dev.to/adioof/ai-didnt-save-500k-a-test-suite-did-26e</link>
      <guid>https://dev.to/adioof/ai-didnt-save-500k-a-test-suite-did-26e</guid>
      <description>&lt;p&gt;Behind every successful AI rewrite story, there is an unsung hero. And it's never the AI.&lt;/p&gt;

&lt;p&gt;Chances are you've already come across the Reco story. Their AI tool rewrote JSONata from JavaScript to Go in a day or so. The end result? Half a million dollars per year less on infra costs. Insane. Blog gold.&lt;/p&gt;

&lt;p&gt;Yet, I can't stop ruminating on the part no one really shared.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real MVP wasn't the model
&lt;/h2&gt;

&lt;p&gt;JSONata had an existing test suite. A good one. The type of test suite someone spent years working on, testing edge cases that'd likely give you nightmares.&lt;/p&gt;

&lt;p&gt;That test suite is what verified every line of Go the AI churned out. Remove it, and this isn't a $500k success. It's a hunch-based reimplementation that probably works. Probably. 🤷&lt;/p&gt;

&lt;p&gt;The AI didn't have a clue if the 'ported' code was correct. The tests did.&lt;/p&gt;

&lt;h2&gt;
  
  
  Speed without verification is just fast failure
&lt;/h2&gt;

&lt;p&gt;The way it's presented is what bothers me. Saying "AI ported a language tool in days" makes it sound like the AI actually did all the engineering. But the porting itself isn't the complicated part. The complicated part is understanding whether the port was done correctly or not.&lt;/p&gt;

&lt;p&gt;If you look at the process that way, you get:&lt;/p&gt;

&lt;p&gt;→ AI generates thousands of lines of Go in hours&lt;br&gt;
→ Engineers run the existing test suite against the output&lt;br&gt;
→ Tests catch regressions, edge cases, type mismatches&lt;br&gt;
→ Engineers fix what the AI got wrong&lt;br&gt;
→ Tests pass, code ships&lt;/p&gt;

&lt;p&gt;Remove steps two, three, and four, and you have nothing left. You just have a very fast autocomplete generating a codebase no one should have faith in.&lt;/p&gt;

&lt;h2&gt;
  
  
  We keep crediting the flashy tool
&lt;/h2&gt;

&lt;p&gt;This pattern repeats everywhere. A team uses AI to achieve something impressive, and the blog post opens with "We used AI". The actual testing infrastructure, the CI pipeline, the regression tests built up over years get a passing mention in paragraph nine.&lt;/p&gt;

&lt;p&gt;It's like crediting the crane for the building on a construction site. Yeah, the crane is fast, but the reason the building isn't falling down is that somebody drew blueprints and somebody inspected the work.&lt;/p&gt;

&lt;p&gt;I'm not bashing AI here. I enjoy and benefit from AI tools every day. They're great. But the model didn't magically save $500k because it's so smart. It saved $500k because someone, somewhere, likely years ago, wrote the tests that would catch an AI's mistakes. That person will never have a viral blog post written about them. 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  The uncomfortable takeaway
&lt;/h2&gt;

&lt;p&gt;If your codebase doesn't have solid test coverage, an AI rewrite is just rolling the dice. That's a gamble, not a strategy.&lt;/p&gt;

&lt;p&gt;The teams getting real value out of AI-assisted porting and migration are the teams that already did the hard work of testing. The AI just speeds up what good engineering was already making possible. It doesn't replace it.&lt;/p&gt;

&lt;p&gt;→ No test suite = no way to validate AI output at scale&lt;br&gt;
→ Strong test suite = AI becomes a genuine force multiplier&lt;br&gt;
→ The investment in tests pays off in ways nobody predicted when they wrote them&lt;/p&gt;

&lt;p&gt;The boring, unsexy work is always the load-bearing work. Always. 🏗️&lt;/p&gt;

&lt;h2&gt;
  
  
  So what now
&lt;/h2&gt;

&lt;p&gt;So, next time you read an "AI saved us $X" headline, ask to see the test suite. Ask about their CI pipeline. Ask to speak to the engineer who spent a month of evenings in 2019 writing the edge case tests. That's where the real magic happens.&lt;/p&gt;

&lt;p&gt;That's where the real story is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the most underappreciated piece of engineering infrastructure on your team?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>go</category>
      <category>testing</category>
      <category>opinion</category>
    </item>
    <item>
      <title>AI coding assistants are building the same app 10 million times</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Mon, 27 Apr 2026 16:23:50 +0000</pubDate>
      <link>https://dev.to/adioof/ai-coding-assistants-are-building-the-same-app-10-million-times-f5c</link>
      <guid>https://dev.to/adioof/ai-coding-assistants-are-building-the-same-app-10-million-times-f5c</guid>
      <description>&lt;p&gt;Every AI coding assistant on the planet is quietly converging on the same architecture. And nobody's talking about what happens when that architecture has a bad day.&lt;/p&gt;

&lt;p&gt;Open Claude Code, Cursor, or any AI-powered tool. Ask it to build you a SaaS app. You'll get Next.js, Vercel, Supabase, Tailwind, maybe Prisma. Every single time.&lt;/p&gt;

&lt;p&gt;That's not a coincidence. It's a monoculture. And monocultures have a way of collapsing all at once.&lt;/p&gt;

&lt;h2&gt;
  
  
  The vibecoding assembly line
&lt;/h2&gt;

&lt;p&gt;The "vibecoding" trend made this worse overnight. People prompt their way to a working app without making a single architectural decision themselves. The AI makes every decision for them. And it always makes the &lt;em&gt;same&lt;/em&gt; decision.&lt;/p&gt;

&lt;p&gt;This isn't about whether Next.js or Supabase are good tools. They're fine. The problem is that millions of apps now share identical dependency trees, identical auth patterns, identical deployment pipelines.&lt;/p&gt;

&lt;p&gt;→ Same framework versions&lt;br&gt;
→ Same ORM configurations&lt;br&gt;
→ Same auth libraries wrapping the same providers&lt;br&gt;
→ Same serverless functions on the same infrastructure&lt;/p&gt;

&lt;p&gt;Security researchers have already flagged this. When AI-generated codebases all look the same, one CVE doesn't just affect "some apps." It affects &lt;em&gt;the default app&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  One CVE to rule them all
&lt;/h2&gt;

&lt;p&gt;Think about the blast radius for a moment. A critical vulnerability in a popular Next.js middleware pattern wouldn't just hit teams who chose that pattern deliberately. It would hit every vibecoded app that inherited it by default.&lt;/p&gt;

&lt;p&gt;We've seen supply chain attacks before. Log4j was brutal. But Log4j lived in codebases that were at least &lt;em&gt;architecturally diverse&lt;/em&gt;. The apps around it were different shapes and sizes.&lt;/p&gt;

&lt;p&gt;Now imagine Log4j, but every app has the same shape. Same entry points. Same data flow. Same deployment target. An attacker doesn't need to figure out how &lt;em&gt;your&lt;/em&gt; app works. They already know. 🎯&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI doesn't know it's doing this
&lt;/h2&gt;

&lt;p&gt;Let me explain — Claude Code and Cursor are not intentionally aiming to foster a monoculture. They're simply going for what's effective. Next.js documentation is awesome. Supabase APIs are neat. Vercel simplifies deployment.&lt;/p&gt;

&lt;p&gt;The AI suggests widely-used options because the more popular they are, the more training data there is. The more training data, the better the suggestions. Better suggestions lead to wider adoption, fueling the loop even more.&lt;/p&gt;

&lt;p&gt;Truth be told, if you're on the $20 Claude Pro plan, you burn through your quota so quickly that you never stop to ponder which architecture actually fits your use case. You just go with whatever the AI suggested and push ahead. Sustained coding work already demands a Max plan; spending tokens to second-guess the defaults is a luxury.&lt;/p&gt;

&lt;p&gt;That's where the problem lies. &lt;strong&gt;The economics of AI-assisted coding essentially encourage you to embrace the defaults.&lt;/strong&gt; And the defaults are all identical.&lt;/p&gt;

&lt;h2&gt;
  
  
  What would actually help
&lt;/h2&gt;

&lt;p&gt;I’m not saying everyone should go build apps in Haskell out of spite. But a few things would make this less dangerous.&lt;/p&gt;

&lt;p&gt;→ AI tools should randomize or rotate their default suggestions based on project context&lt;br&gt;
→ Security teams should start modeling "AI-default blast radius" as a real threat vector&lt;br&gt;
→ If you're vibecoding, at least &lt;em&gt;read&lt;/em&gt; the dependency list before you ship&lt;br&gt;
→ Framework maintainers need to understand they're now critical infrastructure whether they signed up for it or not&lt;/p&gt;

&lt;p&gt;The boring truth is that architectural diversity is a security feature. It always has been. We just never had a machine capable of eliminating it at scale before. 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  This isn't a Next.js problem
&lt;/h2&gt;

&lt;p&gt;I want to be clear — swap Next.js for any framework. The issue isn’t the specific stack. It’s the &lt;strong&gt;convergence pattern itself&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If every AI tool defaulted to Rails and Heroku, I’d be writing the same article with different nouns. The risk is homogeneity at a scale we’ve never had to think about.&lt;/p&gt;

&lt;p&gt;We went from “everyone copies the same tutorial” to “a machine generates the same app millions of times” in about eighteen months. That’s a fundamentally different threat model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The next big supply chain attack won't need to be clever. It'll just need to target the default.&lt;/strong&gt; 🔓&lt;/p&gt;

&lt;p&gt;So here's what I'm wondering — if you're using AI coding tools daily, when was the last time you actually &lt;em&gt;chose&lt;/em&gt; your stack instead of accepting whatever the AI suggested?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>webdev</category>
      <category>opinion</category>
    </item>
    <item>
      <title>The Next.js, Vercel, Supabase monoculture is one CVE away from disaster</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sun, 26 Apr 2026 16:25:56 +0000</pubDate>
      <link>https://dev.to/adioof/the-nextjs-vercel-supabase-monoculture-is-one-cve-away-from-disaster-3ije</link>
      <guid>https://dev.to/adioof/the-nextjs-vercel-supabase-monoculture-is-one-cve-away-from-disaster-3ije</guid>
      <description>&lt;p&gt;Every AI coding assistant on the planet is funneling you toward the same three vendors. That's not a convenience. That's a single point of failure for the entire indie web.&lt;/p&gt;

&lt;p&gt;A recent Vercel security incident lit up the developer discourse, and it forced a question nobody wants to sit with: what happens when half the indie web shares the same blast radius?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Default Stack Nobody Chose
&lt;/h2&gt;

&lt;p&gt;Ask any AI coding agent to spin up a project. Nine times out of ten you'll get Next.js, deployed on Vercel, with Supabase for the backend. It's not a conspiracy. These tools have great docs, great DX, and great SEO in training data.&lt;/p&gt;

&lt;p&gt;But "great defaults" at scale become a monoculture. And monocultures don't fail gracefully.&lt;/p&gt;

&lt;p&gt;The vibecoding wave — low-effort, AI-generated projects shipped fast — has accelerated this. People aren't choosing this stack after careful evaluation. They're accepting the first suggestion from their copilot and moving on.&lt;/p&gt;

&lt;h2&gt;
  
  
  One CVE, Thousands of Apps
&lt;/h2&gt;

&lt;p&gt;Here's what keeps me up at night. A critical vulnerability in Vercel's edge middleware doesn't just hit one company. It hits every vibecoded SaaS, every weekend project, every "I shipped in 48 hours" launch that accepted the defaults.&lt;/p&gt;

&lt;p&gt;→ Same runtime means same vulnerability surface&lt;br&gt;
→ Same auth provider means one breach pattern to learn&lt;br&gt;
→ Same deployment pipeline means one supply chain to compromise&lt;/p&gt;

&lt;p&gt;This isn't theoretical risk. We literally just watched a security incident ripple through the ecosystem. The next one might not be a near-miss.&lt;/p&gt;

&lt;p&gt;Traditional monocultures at least evolved organically — LAMP stack dominance happened over years, giving the ecosystem time to build defenses. This one is being accelerated by AI recommendations at a pace we've never seen.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vendor Lock-in Nobody Notices
&lt;/h2&gt;

&lt;p&gt;There's a subtler problem underneath the security angle. When your AI agent writes Next.js-specific code with Vercel-specific conventions and Supabase-specific client calls, you're locked in before you've made a single conscious architectural decision.&lt;/p&gt;

&lt;p&gt;Try migrating a vibecoded Next.js app to Cloudflare Workers or Fly.io. The AI didn't write portable code. It wrote Vercel code. Your "framework" choice was actually a platform choice wearing a framework's clothes.&lt;/p&gt;

&lt;p&gt;→ Next.js features increasingly assume Vercel deployment&lt;br&gt;
→ Supabase client code doesn't translate cleanly to other Postgres hosts&lt;br&gt;
→ The switching cost is invisible until you try to switch&lt;/p&gt;

&lt;h2&gt;
  
  
  This Is a Systemic Risk, Not a Vendor Problem
&lt;/h2&gt;

&lt;p&gt;I want to be clear — I don't think Vercel, Next.js, or Supabase are bad tools. I've used all three. They're genuinely good.&lt;/p&gt;

&lt;p&gt;The problem is concentration, not quality. When one security advisory can cascade across a massive percentage of new web apps, we've built a fragile system. And nobody is pricing in that fragility. 🧨&lt;/p&gt;

&lt;p&gt;The AI agent ecosystem makes this worse every day. Every new developer who prompts "build me a SaaS" gets the same answer. The funnel narrows. The blast radius grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Would Diversification Even Look Like?
&lt;/h2&gt;

&lt;p&gt;I don't have a clean fix. But I have instincts about where to push.&lt;/p&gt;

&lt;p&gt;→ AI coding tools should randomize stack recommendations, or at least present alternatives&lt;br&gt;
→ Developers should treat "what the AI suggests" as a starting point, not an architecture decision&lt;br&gt;
→ We need deployment-agnostic frameworks to stay viable — SvelteKit, Remix, Astro, whatever keeps the ecosystem honest&lt;br&gt;
→ Platform-specific features should come with explicit lock-in warnings 🔒&lt;/p&gt;

&lt;p&gt;The boring answer is "evaluate your tools." But the honest answer is that vibecoding has made evaluation a lost art. People are shipping before they've even read the docs of the stack they're running on.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Question
&lt;/h2&gt;

&lt;p&gt;We got lucky this time. The recent incident was a wake-up call, not a catastrophe. But the conditions that created the risk haven't changed. If anything, they're accelerating.&lt;/p&gt;

&lt;p&gt;The indie web used to be beautifully chaotic — a dozen stacks, a hundred hosting providers, no single throat to choke. We're voluntarily giving that up for convenience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So here's what I want to know: are you actively choosing your stack, or are you just accepting whatever your AI agent suggests?&lt;/strong&gt; And if it's the latter — what would it take to make you stop and think before you ship?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Claude Code doesn't make your product better. It makes it bigger.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sun, 26 Apr 2026 10:21:02 +0000</pubDate>
      <link>https://dev.to/adioof/claude-code-doesnt-make-your-product-better-it-makes-it-bigger-1pid</link>
      <guid>https://dev.to/adioof/claude-code-doesnt-make-your-product-better-it-makes-it-bigger-1pid</guid>
      <description>&lt;p&gt;Shipping faster has never been easier. Shipping &lt;em&gt;better&lt;/em&gt; has never been harder.&lt;/p&gt;

&lt;p&gt;Claude Code is the most impressive code-generation tool I've ever used. It's also the fastest way to turn a clean codebase into an unmaintainable mess if nobody's paying attention.&lt;/p&gt;

&lt;p&gt;And right now, not enough people are paying attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  The output is not the outcome
&lt;/h2&gt;

&lt;p&gt;Ethan Ding recently published a piece arguing that Claude Code isn't actually making products better. It's making them &lt;em&gt;bigger&lt;/em&gt;. More files, more abstractions, more surface area — but not more value for users.&lt;/p&gt;

&lt;p&gt;This tracks with what I've seen. The tool is astonishingly good at &lt;em&gt;producing&lt;/em&gt; code. It's terrible at knowing when code shouldn't exist.&lt;/p&gt;

&lt;p&gt;According to Sentry's David Cramer, AI-generated output is "increasingly complex and hard-to-maintain code" that drags on long-term development. That's not some random blogger. That's the co-founder of one of the most widely-used developer tools on the planet. 🔥&lt;/p&gt;

&lt;h2&gt;
  
  
  The K-shaped curve is real
&lt;/h2&gt;

&lt;p&gt;The controversial theory emerging is that AI coding tools create a K-shaped curve. Senior engineers get orders of magnitude more productive. Everyone else gets orders of magnitude more &lt;em&gt;prolific&lt;/em&gt; — which is a very different thing.&lt;/p&gt;

&lt;p&gt;If you have the taste to reject bad suggestions, Claude Code is a superpower. You know which abstractions are unnecessary. You know when a 200-line file should stay a 200-line file.&lt;/p&gt;

&lt;p&gt;If you don't have that taste yet, you accept everything. The codebase bloats. The PR looks impressive. The product doesn't actually improve.&lt;/p&gt;

&lt;p&gt;→ Senior engineers use Claude Code to &lt;strong&gt;skip&lt;/strong&gt; boilerplate they already understand.&lt;br&gt;
→ Junior engineers use it to &lt;strong&gt;generate&lt;/strong&gt; boilerplate they've never questioned.&lt;br&gt;
→ Same tool, opposite outcomes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Faster is not the metric
&lt;/h2&gt;

&lt;p&gt;Here's what I find concerning. The discussion about AI coding tools revolves solely around one aspect: speed. "Deliver 10 times faster." "Prototype within a weekend what previously required a month."&lt;/p&gt;

&lt;p&gt;No one stops to ask: is this feature necessary? No one stops to question: is this abstraction worth it?&lt;/p&gt;

&lt;p&gt;Claude Code is more than happy to generate a whole module you'll never use. It'll add a service layer, a repository pattern, and three helper files when ten lines in a handler would have done the job. And it'll do it quickly.&lt;/p&gt;

&lt;p&gt;Speed without filter is just chaos with a keyboard. That's not an improvement in efficiency. That's technical debt in the making.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pricing tells you something too
&lt;/h2&gt;

&lt;p&gt;You exhaust the $20 Claude Pro plan almost immediately and drop back to standard usage. It's essentially a trial tier for code generation. To use Claude Code daily, you need the Max plan.&lt;/p&gt;

&lt;p&gt;This means the developers who benefit most from the tool are the ones whose employers pay for it. The K-shaped curve strikes again.&lt;/p&gt;

&lt;p&gt;The junior developer on a side project isn't in the same situation. They get rate-limited halfway through a feature and are left holding half-written code they can't fully comprehend. 😬&lt;/p&gt;

&lt;h2&gt;
  
  
  So what do we actually do?
&lt;/h2&gt;

&lt;p&gt;I don't mean that you shouldn't use Claude Code. I use it too. It works wonderfully for certain things, such as tests, migrations, or repetitive refactoring. &lt;/p&gt;

&lt;p&gt;However, I've started treating its output the way I'd treat a first draft from a new hire. I read every line, delete a lot, and ask "do we really need this?" more often than "does this work?"&lt;/p&gt;

&lt;p&gt;→ The best code review skill in 2025 is knowing what to &lt;strong&gt;delete&lt;/strong&gt; from AI output.&lt;br&gt;
→ The most valuable engineer isn't the one who ships the most code. It's the one who ships the &lt;em&gt;least&lt;/em&gt; code that solves the problem.&lt;/p&gt;

&lt;p&gt;The tool itself is not the issue. It's the uncritical use of it that's problematic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's your experience been — has Claude Code actually made your product better, or just your codebase bigger?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>productivity</category>
      <category>opinion</category>
    </item>
    <item>
      <title>AI saved a company $500k. The test suite did the actual work.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sat, 25 Apr 2026 19:08:11 +0000</pubDate>
      <link>https://dev.to/adioof/ai-saved-a-company-500k-the-test-suite-did-the-actual-work-308l</link>
      <guid>https://dev.to/adioof/ai-saved-a-company-500k-the-test-suite-did-the-actual-work-308l</guid>
      <description>&lt;p&gt;Many people are sharing the story of Reco that saved $500k a year using AI to rewrite a JavaScript JSONata implementation to Go. But most of those people are sharing the wrong lesson.&lt;/p&gt;

&lt;p&gt;The headline makes it sound like magic. Take some AI, point it at some old code, get new code, save half a mil. But if you actually read what went down, the magic part wasn't the AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Part Nobody Wants to Talk About
&lt;/h2&gt;

&lt;p&gt;Reco didn't just throw their codebase at an LLM and hope for the best. They took the official jsonata-js test suite (1,778 tests) and ported it to Go as a specification that guided every step of the rewrite.&lt;/p&gt;

&lt;p&gt;That's the boring part. That's also the part that makes it work.&lt;/p&gt;

&lt;p&gt;→ The AI didn't decide &lt;em&gt;what&lt;/em&gt; the Go code should do. The tests did.&lt;br&gt;
→ The AI didn't validate correctness. The spec did.&lt;br&gt;
→ The AI was the typist. The engineering team was the architect.&lt;/p&gt;

&lt;p&gt;Remove those guardrails and you don't get a half-mil saving yarn. You get a half-mil clean-up horror story.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI-as-Magic vs. AI-as-Tool
&lt;/h2&gt;

&lt;p&gt;The online discourse split into two predictable camps. One side says this proves AI can replace developers. The other side says it proves nothing because AI just did grunt work.&lt;/p&gt;

&lt;p&gt;Both miss the point. The real story is about &lt;em&gt;what was already in place before AI entered the picture&lt;/em&gt;. A team that writes thorough tests and maintains a real spec can hand boring translation work to an LLM. A team without those things can't hand &lt;em&gt;anything&lt;/em&gt; to an LLM safely.&lt;/p&gt;

&lt;p&gt;AI is a force multiplier. But a multiplier on zero is still zero. 🤷&lt;/p&gt;

&lt;h2&gt;
  
  
  The Unsexy Investment That Pays Off
&lt;/h2&gt;

&lt;p&gt;I think about this every time someone on my team pushes back on writing tests. "We're moving fast." "We'll add them later." "The code is simple enough."&lt;/p&gt;

&lt;p&gt;Maybe. But you're also closing the door on every future shortcut. Good test coverage isn't just about catching bugs today. It's a machine-readable description of what your software &lt;em&gt;is&lt;/em&gt;. That description becomes incredibly valuable the moment you want to migrate, rewrite, or — yes — hand work to an AI.&lt;/p&gt;

&lt;p&gt;Reco's test suite wasn't written for AI. It was written because that's solid engineering. The AI benefit was a side effect of doing the fundamentals right.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Actually Means for You
&lt;/h2&gt;

&lt;p&gt;If you're excited about AI-assisted rewrites, great. Start by asking yourself one question: could you hand your codebase to a &lt;em&gt;new human developer&lt;/em&gt; with just your tests and docs, and would they know exactly what the code should do?&lt;/p&gt;

&lt;p&gt;→ If yes, congrats — you're AI-ready.&lt;br&gt;
→ If no, the LLM won't magically figure it out either.&lt;/p&gt;

&lt;p&gt;AI doesn't make engineering fundamentals less important. It makes them pay off harder. Teams that invested in the "boring" but necessary work of tests, specifications, and well-defined interfaces were making a solid bet. Teams that didn't are now producing garbage at unprecedented speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;Yes, the $500k figure is accurate. But the credit goes to the engineers who had already built a valuable, reliable test suite long before a single prompt was entered. The AI was only the last step; the twenty-five steps before it were solid engineering.&lt;/p&gt;

&lt;p&gt;So: could an AI rewrite your current project using only the tests and the documentation you built alongside it?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The mid-level engineer is the real casualty of AI. Not the junior.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sat, 25 Apr 2026 16:19:05 +0000</pubDate>
      <link>https://dev.to/adioof/the-mid-level-engineer-is-the-real-casualty-of-ai-not-the-junior-3oe1</link>
      <guid>https://dev.to/adioof/the-mid-level-engineer-is-the-real-casualty-of-ai-not-the-junior-3oe1</guid>
      <description>&lt;p&gt;Everyone's wringing their hands about juniors getting replaced by AI. They're worried about the wrong people.&lt;/p&gt;

&lt;p&gt;The developer most at risk isn't the one still learning. It's the one whose entire job is turning tickets into pull requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ticket-to-PR Pipeline Has a New Operator
&lt;/h2&gt;

&lt;p&gt;Ivan Turkovic wrote an essay recently that stopped me mid-scroll. His argument is simple and uncomfortable: mid-level engineers are the real casualties of AI, not juniors.&lt;/p&gt;

&lt;p&gt;Think about what a typical mid-level engineer does day to day. They take a well-scoped ticket, understand the codebase patterns, and produce working code that follows those patterns. They're reliable. They're consistent. They're exactly the profile AI is getting scary good at replicating.&lt;/p&gt;

&lt;p&gt;Juniors, on the other hand, are still in the struggle phase. They're building mental models. They're learning &lt;em&gt;why&lt;/em&gt; things work, not just &lt;em&gt;how&lt;/em&gt; to ship them. You can't automate the process of becoming a better thinker. You can absolutely automate "convert this Jira ticket to a PR that passes CI."&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Isn't a Death Sentence for the Industry
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. Turkovic cites the Jevons paradox — the idea that when something becomes cheaper to produce, demand for it doesn't shrink. It explodes.&lt;/p&gt;

&lt;p&gt;When steam engines got more efficient, the world didn't use less coal. It used way more. If AI makes software cheaper to build, companies won't hire fewer developers. They'll want to build more software. Total developer employment might actually &lt;em&gt;rise&lt;/em&gt; 🤯&lt;/p&gt;

&lt;p&gt;But the composition of that employment shifts. The roles that survive look different from the roles that existed before.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Builder Archetype
&lt;/h2&gt;

&lt;p&gt;The survivor in this story isn't the fastest coder. It's the person Turkovic calls the "builder" — someone with taste and judgment.&lt;/p&gt;

&lt;p&gt;→ Builders decide &lt;em&gt;what&lt;/em&gt; to build, not just &lt;em&gt;how&lt;/em&gt; to build it.&lt;br&gt;
→ Builders smell a bad abstraction before it ships.&lt;br&gt;
→ Builders know when the AI-generated code is subtly wrong in ways that won't show up until production.&lt;br&gt;
→ Builders make decisions that compound over months, not just close tickets that expire in sprints.&lt;/p&gt;

&lt;p&gt;This is the part that's hard to hear if you're mid-career and comfortable. The skills that got you promoted from junior to mid — consistency, pattern-following, reliable output — are exactly the skills that AI replicates best.&lt;/p&gt;

&lt;p&gt;The skills that protect you are fuzzier. Taste. Judgment. Knowing when to say no. Knowing when a feature request is actually three different problems wearing a trench coat 😄&lt;/p&gt;

&lt;h2&gt;
  
  
  The Junior Advantage Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;Juniors have something mid-levels often lose: they're still in learning mode. They haven't optimized for throughput yet. They're still asking "why" instead of just shipping.&lt;/p&gt;

&lt;p&gt;That questioning mindset is actually the foundation of the builder archetype. A junior who struggles through understanding &lt;em&gt;why&lt;/em&gt; a system is designed a certain way is building exactly the judgment muscles that AI can't replace.&lt;/p&gt;

&lt;p&gt;The one at risk is the mid-level who hasn't asked "why" in three years and just transcribes specs into code. That's a process, not a skill. And processes get automated.&lt;/p&gt;

&lt;h2&gt;
  
  
  So What Do You Actually Do?
&lt;/h2&gt;

&lt;p&gt;If you're that mid-level right now and you feel a pit in your stomach, that's a good sign. Awareness is the first step.&lt;/p&gt;

&lt;p&gt;→ Start caring about product outcomes, not just code output.&lt;br&gt;
→ Develop opinions about architecture that go beyond "this is how we've always done it."&lt;br&gt;
→ Practice saying "we shouldn't build this" with a good reason attached.&lt;br&gt;
→ Use AI tools aggressively — not to coast, but to free up time for the judgment work that matters.&lt;/p&gt;

&lt;p&gt;The goal isn't to outcode the AI. It's to outsmart it. The bar for 'I wrote functioning clean code' continues to get lower and lower. The bar for 'I made the right judgment about what to build and how to build it' hasn't moved an inch 🎯&lt;/p&gt;

&lt;p&gt;The mid-level role is not going away. It's changing. The question is: Will you change with it, or will you find yourself standing in the same place the factory line used to be?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the skill you've developed in your career that you're most confident AI &lt;em&gt;can't&lt;/em&gt; replicate?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>ai</category>
      <category>career</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
