<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thiago Pacheco</title>
    <description>The latest articles on DEV Community by Thiago Pacheco (@pacheco).</description>
    <link>https://dev.to/pacheco</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F36392%2Fc0128130-62c1-45b5-8a3e-6950e9c6a3ac.jpeg</url>
      <title>DEV Community: Thiago Pacheco</title>
      <link>https://dev.to/pacheco</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pacheco"/>
    <language>en</language>
    <item>
      <title>The Most Important Skill in Tech Is Too Expensive to Learn</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 19 Apr 2026 16:11:39 +0000</pubDate>
      <link>https://dev.to/pacheco/the-most-important-skill-in-tech-is-too-expensive-to-learn-1j2e</link>
      <guid>https://dev.to/pacheco/the-most-important-skill-in-tech-is-too-expensive-to-learn-1j2e</guid>
      <description>&lt;p&gt;I spent last weekend trying to build a feature with an open-source model running locally. Qwen, 32 billion parameters. I gave it the same task I’d done with Claude the week before — a well-scoped feature, clear spec, defined constraints. The kind of work where I know exactly what good output looks like.&lt;/p&gt;

&lt;p&gt;It took me four attempts to get something that compiled. Not something that worked well — something that compiled. The model kept losing context halfway through, hallucinating imports that didn’t exist, and confidently generating patterns that contradicted what I’d specified three prompts earlier. I spent more time correcting its output than it would’ve taken me to write the thing from scratch.&lt;/p&gt;

&lt;p&gt;The same task with Claude Opus 4.6 in Claude Code? One pass. Clean implementation. Twenty minutes.&lt;/p&gt;

&lt;p&gt;And before you say “just use a better open-source model” — I know. The strongest open-source models today are genuinely capable. But running them at full quality locally requires serious hardware. We’re talking high-end GPUs, machines that cost thousands of dollars. If you don’t have that, your alternative is a provider like OpenRouter — more accessible, but sustained agentic sessions still add up fast. You can quantize the models to fit smaller hardware, but you’re trading quality for affordability, which is the whole problem.&lt;/p&gt;

&lt;p&gt;Either way you’re paying. Local hardware or API costs. And the people who most need access to this technology are the ones least able to afford either option.&lt;/p&gt;




&lt;h2&gt;The Skill That Runs Everything Now&lt;/h2&gt;

&lt;p&gt;Using AI effectively is becoming the most important skill in the industry. And I don’t mean prompting — that’s the surface-level version of the conversation that keeps people stuck.&lt;/p&gt;

&lt;p&gt;There are actually two layers to this skill, and both of them have an access problem.&lt;/p&gt;

&lt;p&gt;The first layer is the judgment. It’s knowing how to scope a problem so the model can handle it. It’s developing the instinct for when the model is right and when it’s subtly wrong in ways that won’t show up until production. It’s understanding how to work &lt;em&gt;with&lt;/em&gt; the model’s strengths and around its weaknesses. This is the soft skill side — the part that requires reps with models that are good enough to teach you something. If the model you’re working with fails in ways that have nothing to do with your approach — losing context, ignoring constraints, hallucinating — you’re not developing the skill. You’re just debugging a bad tool.&lt;/p&gt;

&lt;p&gt;The second layer is the one that doesn’t get talked about enough: the practical configuration.&lt;/p&gt;

&lt;p&gt;Look at what’s happening with Claude Code right now. There’s an entire ecosystem forming around it — CLAUDE.md files that teach the agent your project’s conventions, subagent configurations that break complex work into orchestrated pieces, hooks that enforce guardrails automatically, skills and plugins that extend what the agent can do. People are building and sharing these configurations the way they used to share dotfiles or ESLint configs. It’s becoming its own discipline.&lt;/p&gt;

&lt;p&gt;And it matters. A well-configured Claude Code setup with proper project memory, clear guidelines that evolve with the codebase, and hooks that catch mistakes before they compound — that’s not a nice-to-have anymore. That’s the difference between the agent producing useful work and producing junk. Learning how to structure that configuration, how to set up subagents for different tasks, how to write project guidelines that actually steer the model’s behavior — these are real, practical, in-demand skills.&lt;/p&gt;
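
&lt;p&gt;To make the configuration layer concrete: a CLAUDE.md file is plain Markdown that the agent loads as project memory. The contents below are a hypothetical sketch for an imaginary TypeScript API project, not a recommended template; the section names, paths, and commands are assumptions you would replace with your own.&lt;/p&gt;

```markdown
# CLAUDE.md (hypothetical example for an imaginary TypeScript API project)

## Conventions
- Use the shared error types in src/errors; never throw raw strings.
- Every new endpoint gets an integration test next to its handler.

## Commands
- Build: npm run build
- Test: npm test

## Boundaries
- Do not edit generated files under src/generated/.
- Ask before adding a new dependency.
```

&lt;p&gt;Hooks and subagent definitions live in separate configuration files, but the idea is the same: guidance the agent loads on every session instead of guidance you re-type in every prompt.&lt;/p&gt;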

&lt;p&gt;But here’s the problem: all of that knowledge is being built on top of Claude Code specifically. The skills, the hooks, the configuration patterns, the community sharing best practices — it’s all deeply tied to a $100-200/month tool running the most expensive models available. The more sophisticated the ecosystem gets, the deeper the lock-in. And the deeper the lock-in, the more expensive it becomes to develop the skills that actually matter.&lt;/p&gt;

&lt;p&gt;It’s not just “learn to use AI.” It’s “learn to configure and orchestrate AI agents at a level of sophistication that requires sustained access to premium tools.” And that’s a much harder problem to solve with free tiers and quantized local models.&lt;/p&gt;

&lt;p&gt;Both layers of this skill are becoming as foundational as knowing Git or being able to navigate a codebase. Except they’re evolving faster than any of those did, and the cost of staying current is real money.&lt;/p&gt;




&lt;h2&gt;The Model Is the Bottleneck&lt;/h2&gt;

&lt;p&gt;People talk about harnesses — Claude Code vs Cursor vs Codex vs whatever dropped this week. And yeah, the tooling matters. But you can run Claude Code with a local model if you know the tricks. You can plug open-source models into most of these harnesses.&lt;/p&gt;

&lt;p&gt;It doesn’t fix the problem.&lt;/p&gt;

&lt;p&gt;Because the output is limited by the model itself. A great harness with a mediocre model produces mediocre results with better formatting. The agent can manage files, run commands, iterate on errors — but if the model behind it can’t hold context or reason through the nuance of what you’re building, the loop just generates more mistakes faster.&lt;/p&gt;
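
&lt;p&gt;To make that concrete, here is a toy sketch of the agentic loop described above. Everything in it (the &lt;code&gt;run_agent&lt;/code&gt; harness, the &lt;code&gt;weak_model&lt;/code&gt; stub) is hypothetical illustration, not any real tool&amp;rsquo;s implementation. The point is that the loop only executes output and feeds errors back, so its quality is capped by the model:&lt;/p&gt;

```python
# Minimal sketch of an agentic harness loop (illustrative only).
# The harness can run code and report failures, but it cannot make a
# weak model converge any faster.

def run_agent(model, task, max_attempts=4):
    """Ask `model` for code, execute it, and feed failures back."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        code = model(task, feedback)       # model proposes code
        try:
            scope = {}
            exec(code, scope)              # harness executes it
            return attempt, scope.get("result")
        except Exception as err:           # harness reports the error...
            feedback = f"Previous attempt failed: {err!r}"
    return max_attempts, None              # ...but it cannot fix the model


# A stub "weak model" that only produces working code on its third try,
# standing in for the context-losing local models described above.
def weak_model(task, feedback):
    if "failed" not in feedback:
        return "result = undefined_name"   # hallucinated identifier
    if "undefined_name" in feedback:
        return "result = 1 +"              # syntax error
    return "result = 40 + 2"               # finally compiles

attempts, value = run_agent(weak_model, "compute the answer")
print(attempts, value)                     # prints: 3 42
```

&lt;p&gt;Swap in a stub that succeeds on the first try and the same loop returns in one pass. The harness code never changes; only the model does.&lt;/p&gt;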

&lt;p&gt;The top-tier models — Claude Opus 4.6, GPT-5.4 — are meaningfully better at the things that matter for real development work. They hold context longer. They understand relationships between components. They catch their own mistakes more often. They produce code that requires less intervention. These aren’t marginal benchmark differences. These are differences you feel in every single session.&lt;/p&gt;

&lt;p&gt;And every one of those models costs money. Claude Pro is $20/month and you’ll hit rate limits in a day doing serious work. The Max plan that actually lets you use Claude Code without interruption is $100-200/month — and that only covers Claude Code, not other harnesses. Cursor Pro, Copilot Pro — more subscriptions stacking up. If you want the workflow that actually builds the skill the market is demanding, you’re spending real money every month.&lt;/p&gt;




&lt;h2&gt;Who Gets Left Behind&lt;/h2&gt;

&lt;p&gt;Think about three people.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The junior developer, fresh out of school.&lt;/strong&gt; They’ve heard AI is important. Maybe they’ve used ChatGPT for homework. But the landscape of AI coding tools is an overwhelming mess — Copilot, Cursor, Claude Code, Codex, Windsurf, and a dozen others, each with different pricing, different paradigms, different ecosystems. They don’t know which one matters. They don’t know which one to invest in. And the ones that would actually teach them the most important patterns cost money they don’t have. Sure, there are free tiers — Copilot gives you 2,000 completions and 50 chat requests a month, and OpenRouter has free models with rate limits. But those tiers are built for tasting, not for training. You can’t develop real fluency in 50 requests a month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The experienced developer who got laid off.&lt;/strong&gt; Yesterday they had Claude Pro through their company, API access for experiments, maybe a Cursor license on the company card. Today they have none of it. The skill they were building — the one that was making them genuinely more effective — just got cut off overnight. And the market they’re re-entering expects AI proficiency as a baseline. They know what they’re missing because they’ve felt the difference. That might be worse than never having had it at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The career switcher.&lt;/strong&gt; Someone coming from another field, trying to break into tech. They’re already learning to code, which is hard enough. Now they need to learn to work with AI too, but the models that would give them meaningful reps are priced for people with engineering salaries. They’re trying to build a skill they can’t afford to practice.&lt;/p&gt;

&lt;p&gt;Each of these people has a slightly different version of the same problem: &lt;strong&gt;the skill the market values most is developing behind a price tag most people can’t justify.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some companies are subsidizing tools for their employees now. That’s real, and it’s good. But it only helps people who already have jobs. It does nothing for the people trying to get in. And even for employed developers, there’s a difference between using a company-provided tool in a company-specific workflow and building genuine AI fluency that transfers. Getting good at your team’s setup isn’t the same as understanding the patterns deeply enough to adapt when everything changes in six months. Which it will.&lt;/p&gt;




&lt;h2&gt;What I’m Actually Doing About It&lt;/h2&gt;

&lt;p&gt;I’d feel dishonest writing this without being transparent about where I am personally.&lt;/p&gt;

&lt;p&gt;I’ve been deliberately pushing open-source and cheaper models harder in my own workflow. Running local models on my machine, using cheaper options through OpenRouter, trying to find the ceiling of what’s accessible today. Not because I think they’re better — I just spent several paragraphs telling you they’re not. But because I think there’s real value in mapping out what’s possible without the premium price tag.&lt;/p&gt;

&lt;p&gt;Here’s what I’ve found so far, honestly.&lt;/p&gt;

&lt;p&gt;Local models on the modest hardware I have today are far behind. It’s not close. The gap between what I get locally and what Claude Opus 4.6 or GPT-5.4 produce isn’t a minor quality difference — it’s a fundamentally different experience. The local models lose context, miss nuance, and require constant hand-holding that defeats the purpose of the workflow.&lt;/p&gt;

&lt;p&gt;The cheaper models through OpenRouter are better — you can get genuinely good responses. But there’s a catch: you have to constrain the work to small, well-defined tasks to get consistent output. You can’t be vague. You can’t be high-level. You can’t describe what you want broadly and trust the model to figure out the details the way you can with the top-tier models. Every task needs to be broken down, specified precisely, and scoped tightly.&lt;/p&gt;

&lt;p&gt;And that creates its own problem. Because sometimes, by the time you’ve broken the work down small enough and specified it precisely enough for the cheaper model to handle it reliably, you’ve already done most of the thinking. At that point, it’s genuinely faster to just write the code yourself than to spend the time guiding the model through it.&lt;/p&gt;

&lt;p&gt;That’s the real gap. It’s not just quality of output — it’s how much of your own effort is required to get there. The top-tier models let you think at a higher level of abstraction. The accessible ones force you back down into the details, which is exactly where AI was supposed to save you time.&lt;/p&gt;

&lt;p&gt;I haven’t tested Gemma 4 deeply yet — I have high hopes for it given what the benchmarks are showing. But I’m not going to claim results I haven’t experienced. For now, I keep pushing because I believe the trajectory is real. But the honest answer is that nothing I’ve tried on the accessible side comes close to what the premium models deliver.&lt;/p&gt;




&lt;h2&gt;The Industry Problem We’re Building&lt;/h2&gt;

&lt;p&gt;It takes years to develop a senior engineer. I’ve written about the pipeline problem — how cutting junior hiring today creates a senior shortage in 7-10 years. But this is a different angle on the same structural failure.&lt;/p&gt;

&lt;p&gt;Even the developers who do break in — if they can’t afford to develop AI fluency early, they’re starting with a deficit that compounds over time. The developers who had access to top-tier models from day one are building intuitions, workflows, and judgment that the others can’t match. Not because of talent. Because of access.&lt;/p&gt;

&lt;p&gt;We’re building a two-tier system. People who learned to work with AI at the highest level because they could afford to, and people who picked up what they could from free tiers and rate-limited demos. The gap between those two isn’t trivial — it’s the difference between building the instinct for what works and just knowing it exists in theory.&lt;/p&gt;

&lt;p&gt;For decades, the software industry had a genuine claim to accessibility in at least one respect: the tools were free. You could learn to code with free software, contribute to open-source projects, build a portfolio, and land a job without spending a dollar on tooling. The playing field wasn’t level — it never is — but the tools didn’t gatekeep you.&lt;/p&gt;

&lt;p&gt;AI is changing that equation. Not because the models are secret — many are open-weight. But because the models that are good enough to build the skills that matter require compute that costs real money. And nobody seems to be treating this as the urgent problem it is.&lt;/p&gt;




&lt;h2&gt;The Bet&lt;/h2&gt;

&lt;p&gt;I don’t have a clean answer for this. If I did, I’d be building a company, not writing a blog post.&lt;/p&gt;

&lt;p&gt;But I’m not betting blind either. The trajectory is real.&lt;/p&gt;

&lt;p&gt;Google just released Gemma 4 under Apache 2.0 — a family of models designed to run on consumer hardware, with coding benchmarks that show massive jumps over the previous generation. DeepSeek keeps pushing the boundaries of what’s possible at low cost, with their next model aiming for frontier performance under an open license. Qwen continues to improve. The open-source community is moving fast, and the gap between these models and the proprietary ones is genuinely narrowing.&lt;/p&gt;

&lt;p&gt;But narrowing isn’t closed. And the people who need access most can’t wait for the trajectory to finish.&lt;/p&gt;

&lt;p&gt;I’m betting that how accessible these tools become in the next two years will shape the entire next generation of professionals. The people entering the industry right now, the people trying to transition, the ones who got pushed out and are fighting their way back — they’re being shaped by what they can and can’t access today. If the most important skill of their era is only learnable at premium prices, we’re not just failing them individually. We’re hollowing out the pipeline the entire industry depends on.&lt;/p&gt;

&lt;p&gt;What we call “software developer” is becoming something else. AI engineer, maybe. Whatever it gets called, the core competency is shifting — less about writing code, more about orchestrating intelligence. Making judgment calls about what to build and how to specify it. That competency needs to be learnable at every price point. Not just the premium tier.&lt;/p&gt;

&lt;p&gt;For now, I’m going to keep pushing open-source models in my workflow. Keep documenting what works and where the walls are. Keep being honest about the gap while working to close it, even in my own small corner.&lt;/p&gt;

&lt;p&gt;Because if the most important skill in tech is too expensive to learn, we have a bigger problem than any model can solve.&lt;/p&gt;




&lt;h2&gt;References&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/" rel="noopener noreferrer"&gt;Google Gemma 4 announcement&lt;/a&gt; — Apache 2.0 open models designed for consumer hardware, with native agentic and coding capabilities&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.nxcode.io/resources/news/deepseek-v4-release-specs-benchmarks-2026" rel="noopener noreferrer"&gt;DeepSeek V4 specs and benchmarks&lt;/a&gt; — ~1T parameter MoE model, 37B active per token, targeting frontier performance under open license&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://unsloth.ai/docs/models/qwen3.5" rel="noopener noreferrer"&gt;Qwen 3.5 local hardware guide&lt;/a&gt; — Running large open-source models on consumer devices with quantization&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.gemma4.wiki/guide/gemma-4-benchmarks" rel="noopener noreferrer"&gt;Gemma 4 coding benchmarks&lt;/a&gt; — LiveCodeBench scores jumping from 29.1 to 80.0 between Gemma 3 and Gemma 4&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/features/copilot/plans" rel="noopener noreferrer"&gt;GitHub Copilot plans&lt;/a&gt; — Free tier: 2,000 completions, 50 chat requests/month&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.nxcode.io/resources/news/claude-code-pricing-2026-free-api-costs-max-plan" rel="noopener noreferrer"&gt;Claude Code pricing breakdown&lt;/a&gt; — Max plan at $100-200/month for sustained use&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://openrouter.ai/openrouter/free" rel="noopener noreferrer"&gt;OpenRouter free models&lt;/a&gt; — 29 free models with rate limits (20 req/min, 200 req/day)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.indiatoday.in/jobs/story/ai-divide-ai-usage-rising-fast-india-workers-face-pressure-adopt-manage-systems-educ-2892584-2026-04-07" rel="noopener noreferrer"&gt;AI divide at work — ETS report&lt;/a&gt; — Regular AI users feel more secure; others are falling behind&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://epoch.ai/blog/open-models-report" rel="noopener noreferrer"&gt;Open vs closed AI models — Epoch AI&lt;/a&gt; — Tracking the gap between open-weight and proprietary models over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/the-most-important-skill-in-tech-is-too-expensive-to-learn/" rel="noopener noreferrer"&gt;The Most Important Skill in Tech Is Too Expensive to Learn&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>careersoftskills</category>
      <category>strategicai</category>
      <category>aiaccessibility</category>
      <category>aicosts</category>
    </item>
    <item>
      <title>No Developer Feels AI Literate Right Now — Not Even the Ones Building It</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 19 Apr 2026 16:11:18 +0000</pubDate>
      <link>https://dev.to/pacheco/no-developer-feels-ai-literate-right-now-not-even-the-ones-building-it-4egp</link>
      <guid>https://dev.to/pacheco/no-developer-feels-ai-literate-right-now-not-even-the-ones-building-it-4egp</guid>
      <description>&lt;p&gt;There’s a specific kind of anxiety that hits at 11 PM when you’re scrolling through someone’s thread about the AI workflow that supposedly changed everything. You were productive today. You shipped code. But now you’re wondering if the way you shipped it is already obsolete.&lt;/p&gt;

&lt;p&gt;That feeling? It’s not going away. And I say that as someone who builds AI features every day at my job — user-facing products, developer tools, infrastructure — and uses AI to build them too. I’ve been deep in this flow for a while, and I still don’t feel AI literate. Nobody does.&lt;/p&gt;

&lt;p&gt;That’s the whole thesis.&lt;/p&gt;

&lt;h2&gt;The Illusion of AI Fluency&lt;/h2&gt;

&lt;p&gt;There’s a dangerous gap forming right now — the distance between “I can get AI to produce code” and “I understand what’s happening well enough to make consistent, reliable decisions.”&lt;/p&gt;

&lt;p&gt;It’s like ordering coffee in another language and thinking you’re fluent. The demo works. But what happens when the context window fills up and the model starts hallucinating? When the tool you’ve been relying on ships a breaking change to how it handles context, and your entire workflow stops working?&lt;/p&gt;

&lt;p&gt;That’s where actual literacy lives. Not in the output — in understanding the mechanics well enough to troubleshoot, adapt, and make real decisions.&lt;/p&gt;

&lt;h2&gt;The Arc: Convergence to Divergence&lt;/h2&gt;

&lt;p&gt;Here’s a pattern worth naming, because it explains why everything feels so chaotic.&lt;/p&gt;

&lt;p&gt;For the past couple of years, AI coding tools followed roughly the same trajectory. First the conversational phase — chat with a model, get code back. Then the agentic phase — let the model execute actions, read files, run commands. Then context management became the bottleneck — RAG, context windows, retrieval strategies. Then skills and MCPs emerged as ways to extend what agents could do.&lt;/p&gt;

&lt;p&gt;Every major tool went through these same stages. Claude Code, Cursor, Copilot, Codex — the patterns were recognizable across all of them. If you learned one, the mental models transferred.&lt;/p&gt;

&lt;p&gt;That’s no longer true.&lt;/p&gt;

&lt;p&gt;The tools are diverging. Fast. Claude Code now has hooks, subagents, trust modes, and a growing ecosystem of skills. Cursor has its own rules system with a fundamentally different interaction model. Codex has AGENTS.md. Amp, OpenCode, and a dozen others are carving their own paths.&lt;/p&gt;

&lt;p&gt;Each tool is developing its own opinion about how development should work. And those opinions are starting to meaningfully diverge.&lt;/p&gt;

&lt;p&gt;This is React vs Angular vs Vue all over again — except the stakes are higher. That was about which UI library renders faster. This is about how you think, plan, and build software at a fundamental level.&lt;/p&gt;

&lt;h2&gt;The Best Practice Treadmill&lt;/h2&gt;

&lt;p&gt;A few weeks ago, the developer community was buzzing about skills replacing MCPs. Skills were simpler, lighter, didn’t require running separate processes. The consensus was forming: skills are the future, MCPs are the past.&lt;/p&gt;

&lt;p&gt;Then optimizations changed the calculus on MCP context consumption. Suddenly MCPs were more viable again. The narrative flipped.&lt;/p&gt;

&lt;p&gt;Now? People use a mix of both. Some skills actually wrap MCPs internally. Nobody’s sure if that’s a good pattern or an anti-pattern.&lt;/p&gt;

&lt;p&gt;This all happened in the span of a few weeks. And it’s the new normal — the “right way” to structure your AI workflow has a half-life measured in weeks, not months.&lt;/p&gt;

&lt;h2&gt;So What Does This Mean for Your Career?&lt;/h2&gt;

&lt;p&gt;If senior engineers with years of pattern recognition and deep technical foundations feel lost — what does this mean for someone finishing college right now? Or someone transitioning into tech?&lt;/p&gt;

&lt;p&gt;I’ll be direct: it’s harder than ever to get a job as a software engineer. You don’t just need to know how to code anymore. You need to know how to code, how to work with AI, which AI tools to invest in, and how to recognize when current practices expire. Most bootcamps and university programs haven’t even begun to address this. And companies don’t know what to test for either — interview loops are still measuring skills from two years ago while the actual job involves orchestrating agents and making architectural decisions AI can’t make for you.&lt;/p&gt;

&lt;p&gt;So the question people are asking — “Is it even worth learning software engineering right now?” — is genuine.&lt;/p&gt;

&lt;p&gt;Here’s what I think: the answer is yes. But the approach has to change.&lt;/p&gt;

&lt;p&gt;The demand for engineers isn’t dying — job openings have surged this year, and companies that replaced senior engineers with juniors-plus-AI are already course-correcting. Human judgment, architectural thinking, and the ability to make sense of complex systems still matter. But you can’t just learn to code and expect that to be enough anymore.&lt;/p&gt;

&lt;h2&gt;Pick One Tool. Get Dangerously Good at It.&lt;/h2&gt;

&lt;p&gt;Stop trying to learn every coding harness. Don’t split your attention between Claude Code, Codex, Amp, OpenCode, and whatever drops next week. Pick one. Commit to it. Go deep.&lt;/p&gt;

&lt;p&gt;There’s research backing this up. BCG found that productivity increases with one or two AI tools, peaks around three, and actively drops when you add a fourth. They’re calling it “AI brain fry” — more tools means more context switching, more cognitive load, worse outcomes. Mastering one tool isn’t just a preference. It’s the strategy that actually works.&lt;/p&gt;

&lt;p&gt;I’ll tell you what I use: Claude Code. It has the richest set of capabilities right now — hooks, skills, subagents, MCP integrations, trust modes — and Anthropic’s models consistently deliver. The community around it is the most active I’ve seen. That could change in six months. But the point isn’t really the specific tool.&lt;/p&gt;

&lt;p&gt;When you deeply learn one tool — when you understand how it manages context, how its agentic loop works, how to structure your projects for it — you develop transferable mental models. You learn what “good context management” means, not just how one tool implements it. You learn why hooks exist, why skills exist, why MCPs exist. Those patterns survive the churn.&lt;/p&gt;

&lt;p&gt;If the tools diverge to the point where switching becomes necessary, you’ll transition from a place of strength — deep literacy in one ecosystem — rather than shallow familiarity with five.&lt;/p&gt;

&lt;h2&gt;Build Something That Felt Impossible&lt;/h2&gt;

&lt;p&gt;Theory doesn’t stick without practice.&lt;/p&gt;

&lt;p&gt;Think of a project you’ve always wanted to build but never had the time. Something slightly out of reach — too many moving parts, too much boilerplate, too many unknowns. Now try to build it with your AI coding tool of choice.&lt;/p&gt;

&lt;p&gt;Here’s my example. I’d been wanting to rebuild my blog with a custom WordPress theme for months. I knew exactly what I wanted — the design, the deployment pipeline, the git integration. What I didn’t have was the time to write it all out.&lt;/p&gt;

&lt;p&gt;So I started prompting my agent harness while at the gym. Between sets, between exercises — describing what I needed, reviewing what came back, steering it. Within about three days of gym sessions and prompting, the new blog was live.&lt;/p&gt;

&lt;p&gt;That was a genuine wow moment. Not the viral demo kind — the personal kind. It didn’t come for free. I knew what to ask for, which tools to use, how to set up the deployment. But I never had to remember WordPress internals or look at a single line of code.&lt;/p&gt;

&lt;p&gt;Don’t aim for perfection. Just try to get it done and pay attention to how it feels. You’re going to land in one of two places.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You build the thing faster than you ever could have alone.&lt;/strong&gt; Maybe the code isn’t pristine. Maybe you restarted a couple of times. But it works, and you built it in hours instead of weeks. That wow moment is fuel — it motivates you to refine your prompts, learn the next layer, keep pushing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Or you struggle.&lt;/strong&gt; The agent loops. It misunderstands your intent. You feel like you’d be faster doing it yourself.&lt;/p&gt;

&lt;p&gt;If you land here, don’t get discouraged. And don’t become someone who dismisses AI as useless based on a bad first experience.&lt;/p&gt;

&lt;p&gt;What almost certainly happened is that your scope was too broad. You gave the agent a vague, ambitious prompt and expected it to figure out the details. That’s the most common mistake starting out.&lt;/p&gt;

&lt;p&gt;The fix: shrink the scope. Way down. Pick the smallest piece — a single endpoint, one component, a basic data model — and try again. Get one small thing working. Feel what it’s like when the tool actually helps. That’s your baseline. From there, you gradually expand — bigger scope, better context, more trust in the loop.&lt;/p&gt;

&lt;h2&gt;The Uncomfortable Truth&lt;/h2&gt;

&lt;p&gt;Nobody has this figured out. Not the senior engineers. Not the tool makers. Not the influencers who post their workflows like they’ve cracked the code.&lt;/p&gt;

&lt;p&gt;The developers who are going to thrive aren’t the ones who memorize every feature of every tool — they’re the ones who build a learning rhythm they can sustain. Context management, agentic workflows, prompt design, scope control. These patterns are more stable than the specific implementations, and they’re what make you dangerous regardless of which tool you’re holding.&lt;/p&gt;

&lt;p&gt;Pick one tool. Build one thing. Learn one lesson at a time.&lt;/p&gt;




&lt;h3&gt;References&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://newsletter.pragmaticengineer.com/p/the-impact-of-ai-on-software-engineers-2026" rel="noopener noreferrer"&gt;The impact of AI on software engineers in 2026&lt;/a&gt; — Pragmatic Engineer survey (900+ engineers on AI tool usage, costs, and uneven effects across experience levels)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.businessinsider.com/ai-brain-fry-bcg-consulting-exhaustion-agents-work-2026-3" rel="noopener noreferrer"&gt;BCG: AI Brain Fry study&lt;/a&gt; — Productivity peaks at 2-3 AI tools and drops at 4+&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.metaintro.com/blog/software-engineer-job-listings-spike-2026-ai-demand" rel="noopener noreferrer"&gt;Software engineer job listings up 30% in 2026&lt;/a&gt; — 67,000+ openings per TrueUp&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.franksworld.com/2026/04/15/why-companies-are-quietly-rehiring-software-engineers-in-the-age-of-ai/" rel="noopener noreferrer"&gt;Why companies are quietly rehiring software engineers&lt;/a&gt; — The “boomerang effect”: ~35% of new hires are former employees&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.technologyreview.com/2026/04/14/1134397/redefining-the-future-of-software-engineering/" rel="noopener noreferrer"&gt;Redefining the future of software engineering&lt;/a&gt; — MIT Tech Review on agentic AI as the “third shift”&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://blog.jetbrains.com/research/2026/04/which-ai-coding-tools-do-developers-actually-use-at-work/" rel="noopener noreferrer"&gt;JetBrains: Which AI coding tools do developers use at work?&lt;/a&gt; — 74% of developers adopted AI tools; Claude Code and Cursor tied at 18%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/feeling-of-being-behind-is-permanent/" rel="noopener noreferrer"&gt;No Developer Feels AI Literate Right Now — Not Even the Ones Building It&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>aicoding</category>
    </item>
    <item>
      <title>Spec-Driven Development Isn’t Waterfall — But It Keeps Ending Up There</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Fri, 17 Apr 2026 22:05:14 +0000</pubDate>
      <link>https://dev.to/pacheco/spec-driven-development-isnt-waterfall-but-it-keeps-ending-up-there-4eei</link>
      <guid>https://dev.to/pacheco/spec-driven-development-isnt-waterfall-but-it-keeps-ending-up-there-4eei</guid>
      <description>&lt;p&gt;&lt;em&gt;Spec-driven development isn’t supposed to be waterfall. But without clear workflows and better tooling, it’s easy to end up there.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I recently went deep on spec-driven development. The idea was straightforward: before writing any code, define everything. The full vision, the context, the trade-offs, where the feature fits in the existing architecture, the references. Hand it all to the AI agent with crystal-clear guidance so it can build with minimal supervision.&lt;/p&gt;

&lt;p&gt;It looked promising. For about two days.&lt;/p&gt;

&lt;p&gt;What I ended up with was thousands of lines of specification documents. Documents that were incredibly hard to review — not because they were poorly structured, but because most of them were generated by the AI itself, correlating the existing architecture, docs, and my guidance into something that &lt;em&gt;looked&lt;/em&gt; authoritative. Clear explanations. Perfect formatting. Confident reasoning about every decision.&lt;/p&gt;

&lt;p&gt;And that’s exactly where it got scary.&lt;/p&gt;




&lt;h2&gt;The Confidence Problem Nobody Mentions&lt;/h2&gt;

&lt;p&gt;Here’s the thing about AI-generated specs that the SDD evangelists aren’t talking about: &lt;strong&gt;the AI makes everything look correct.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a spec is well-formatted, internally consistent, and confidently explained, your brain wants to trust it. It reads like something a senior architect wrote after careful deliberation. But it’s not. It’s a language model pattern-matching against your inputs, producing the most plausible-sounding output it can.&lt;/p&gt;

&lt;p&gt;And here’s the trap within the trap: if you push back on something that feels off, the AI doesn’t defend its reasoning. It folds. It assumes you’re right, tries to course-correct, and often makes the output &lt;em&gt;worse&lt;/em&gt;. Hallucination risk actually goes up when you question it, because now it’s reconciling your objection with a position it never truly held. It’s not reasoning. It’s pleasing.&lt;/p&gt;

&lt;p&gt;So you’re stuck. Trust the spec and risk building on wrong assumptions. Question the spec and risk destabilizing it further. Either way, you’ve generated thousands of lines of documentation that are incredibly hard to confidently validate.&lt;/p&gt;

&lt;p&gt;After spending far too long trying to get those specs to a place I trusted, it hit me — this felt familiar.&lt;/p&gt;




&lt;h2&gt;The Intent vs. the Reality&lt;/h2&gt;

&lt;p&gt;Here’s where I want to be fair: spec-driven development isn’t &lt;em&gt;supposed&lt;/em&gt; to be big design upfront.&lt;/p&gt;

&lt;p&gt;Marc Brooker, the person building Kiro at AWS, explicitly says “you don’t need to, and probably shouldn’t, develop the entire specification upfront.” Kiro’s own workflow is feature-scoped — requirements, design, and tasks for a single story, not a whole system. GitHub’s Spec-Kit runs in a loop: specify, plan, tasks, repeat per change request. OpenSpec literally states “specs are not frozen contracts; update them when reality changes.”&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;vision&lt;/em&gt; of SDD is iterative. Living documents. Feature-level scope. Incremental refinement.&lt;/p&gt;

&lt;p&gt;But the vision and the practice aren’t the same thing yet.&lt;/p&gt;

&lt;p&gt;Birgitta Böckeler, writing on Martin Fowler’s site, tried to untangle what SDD actually means right now and found the definition “still in flux.” She identified three levels — spec-first, spec-anchored, and spec-as-source — and noted that most tools are only spec-first. They help you write a spec before coding, but don’t have clear strategies for maintaining or evolving that spec over time. Even GitHub Spec-Kit’s own community is confused about whether a spec is supposed to live beyond a single change request.&lt;/p&gt;

&lt;p&gt;The methodology is ahead of the tooling. SDD can absolutely work — but right now, the tools and workflows don’t do enough to keep you on the iterative path. Without clear guardrails for when to stop specifying and start building, teams default to the thing that feels most natural: writing everything down upfront, as thoroughly as possible, before anyone touches code.&lt;/p&gt;

&lt;p&gt;That’s what happened to me. Not because I didn’t know better. Because nothing in the workflow guided me toward “that’s enough, go build and come back.”&lt;/p&gt;




&lt;h2&gt;The Waterfall Gravity&lt;/h2&gt;

&lt;p&gt;There’s a reason teams keep falling into this pattern. It has gravitational pull.&lt;/p&gt;

&lt;p&gt;When you tell an AI agent to help you write a spec, it &lt;em&gt;wants&lt;/em&gt; to be comprehensive. It will map out every component, every edge case, every integration point — because that’s what “thorough” looks like in its training data. And as a developer, you &lt;em&gt;want&lt;/em&gt; to feel like you’ve thought of everything before handing off to an autonomous agent. The combination of an AI that defaults to exhaustive and a human who defaults to cautious creates thousands of lines of documentation almost by accident.&lt;/p&gt;

&lt;p&gt;This is the same dynamic that made waterfall feel so appealing in the first place. The Agile Manifesto exists because planning everything upfront didn’t work:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Responding to change over following a plan.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;SDD proponents will rightly say “that’s not what we’re advocating.” And they’re right. But the tooling needs to actively enforce iterative scope — otherwise that gravity keeps pulling teams toward specifying everything before building anything. The intent is agile. The default behavior, without better rails, is waterfall.&lt;/p&gt;

&lt;p&gt;Of course, agile itself got convoluted over the years. Certifications, rituals, dogma that drifted far from the original insight. But the core idea never stopped being true: you learn more from building than from planning.&lt;/p&gt;

&lt;p&gt;Thoughtworks — the company whose chief scientist co-authored the Agile Manifesto — just released Technology Radar v34 this week, warning that as AI accelerates code generation, “established practices that ensure discipline become more vital.” They’re not pushing SDD. They’re pushing fundamentals. Iteration. Feedback loops.&lt;/p&gt;




&lt;h2&gt;What SDD Gets Right&lt;/h2&gt;

&lt;p&gt;I don’t want to dismiss the methodology. There are real benefits when it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents need constraints.&lt;/strong&gt; Without scope boundaries, they expand. Tell an agent to build auth and it’ll add OAuth, SSO, and MFA because that’s what “auth” means in its training data. A spec that says “OAuth is out of scope” genuinely saves time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context improves quality.&lt;/strong&gt; An agent with the full picture makes fewer locally-right-but-globally-wrong decisions. The spec gives it a map, not just turn-by-turn directions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team alignment.&lt;/strong&gt; Multiple people or agents working on the same system need a shared reference point. Specs provide that.&lt;/p&gt;

&lt;p&gt;The problem was never “should we spec?” It’s “how do we make sure we spec iteratively instead of falling into the trap of specifying everything at once?”&lt;/p&gt;




&lt;h2&gt;Thinking in Graphs&lt;/h2&gt;

&lt;p&gt;Here’s where I’ve landed — and maybe this is just how my brain works, but I think it applies broadly.&lt;/p&gt;

&lt;p&gt;I think about projects as graphs, not documents.&lt;/p&gt;

&lt;p&gt;At the top, there’s a direction. A north star. What we’re building and why. High-level, intentional, human-defined. It doesn’t need to specify every data model or API contract. It needs to be clear about the destination.&lt;/p&gt;

&lt;p&gt;From that direction, you break down into milestones. Each milestone is a meaningful checkpoint — something you can ship, test, or validate. Not a document section. A real deliverable.&lt;/p&gt;

&lt;p&gt;Each milestone has its own tasks. And here’s the critical part: &lt;strong&gt;the depth of planning for each task happens when you start working on it, not months before.&lt;/strong&gt; You plan the first milestone in detail. You sketch milestone three at a high level. When you finish milestone one, you know things you didn’t know before — and that knowledge shapes how you plan milestone two.&lt;/p&gt;

&lt;p&gt;The direction flows down from the north star. Discovery happens at every node.&lt;/p&gt;

&lt;p&gt;Say you’re building a new integration. The north star says: “Users can sync data between System A and System B in real time.” Milestone one might be a basic one-way sync — and when you build it, you discover the API rate limits aren’t what the docs claimed. That changes everything about milestone two. If you’d fully specced bidirectional sync upfront, you’d be rewriting specs instead of shipping software.&lt;/p&gt;

&lt;p&gt;This works for humans because it provides structure without drowning you in premature detail. It works for AI agents for the same reason — they need guidance and constraints, but they also need room to discover things during implementation that no spec could have predicted.&lt;/p&gt;

&lt;p&gt;Full upfront specification tries to flatten the graph into a document. Every node defined, every edge mapped, before you’ve traversed any of them. That’s not engineering. That’s prophecy.&lt;/p&gt;




&lt;h2&gt;What This Looks Like in Practice&lt;/h2&gt;

&lt;p&gt;I’m still figuring this out. There is no perfect workflow yet — that’s kind of the point. But here’s the pattern that’s working:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thin specs, not thick ones.&lt;/strong&gt; One page for the current milestone, not twenty pages for the whole system. Define the outcome, the constraints, what’s out of scope. Leave room for what you don’t know yet.&lt;/p&gt;
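<p>To make that concrete — the section names here are just the ones I reach for, not a prescribed format — a one-page milestone spec for the sync example might look like:</p>

&lt;pre&gt;&lt;code&gt;# Milestone 1: One-way sync (A → B)

## Outcome
Changes in System A appear in System B within 60 seconds.

## Constraints
- Reuse the existing webhook receiver; no new services.
- Respect System B's documented rate limits.

## Out of scope
- Bidirectional sync (milestone 2, pending rate-limit findings)
- Conflict resolution

## Open questions
- Are System B's rate limits per-token or per-account?
&lt;/code&gt;&lt;/pre&gt;

<p>The "open questions" section is the part that keeps the spec honest: it names what you don’t know yet instead of letting the AI confidently fill it in.</p>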

&lt;p&gt;&lt;strong&gt;Iterate the spec, not just the code.&lt;/strong&gt; The spec changes every cycle. Decisions made, assumptions validated or invalidated, things learned by building. A living document, not a contract. This is what SDD’s proponents advocate — we just need clearer workflows and tooling to make this the default path instead of something you have to consciously enforce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use agents for exploration, not just execution.&lt;/strong&gt; Code is cheap now. Build a quick prototype to test an architectural assumption. Throw it away if it’s wrong. A throwaway prototype costs you nothing. Specifying the wrong architecture upfront costs you everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep the loop tight.&lt;/strong&gt; In traditional agile, the sprint was two weeks. With agents, the feedback loop can be hours. Specify, build, test, learn, adjust. But only if you keep the scope small enough to actually iterate.&lt;/p&gt;




&lt;h2&gt;The Gap That Needs Filling&lt;/h2&gt;

&lt;p&gt;The industry is living through a real-time methodology shift:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Vibe coding&lt;/strong&gt; — prompt and pray. Fast, chaotic, doesn’t scale.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;Spec-driven development&lt;/strong&gt; — specify, then execute. Sound in theory, but easy to fall into big design upfront without clear process guardrails.&lt;/li&gt;



&lt;li&gt;
&lt;strong&gt;What comes next&lt;/strong&gt; — the iterative spec workflow that SDD envisions, supported by tooling and processes that actively keep teams on that path. Thin specs. Fast execution. Continuous refinement. Direction without prophecy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the same arc software development has followed before. Waterfall promised control through upfront planning. Agile recognized that the plan always changes. We’re at that inflection point again — the insight of iterative development needs to be baked into the tools and workflows, not just the blog posts and documentation.&lt;/p&gt;

&lt;p&gt;A project with a clear direction, meaningful milestones, and task-level depth that’s earned at execution time — not guessed at months before — works for both humans and AI agents. Structure without rigidity. Guidance without false certainty.&lt;/p&gt;

&lt;p&gt;The agents are fast. The models are capable. But the bottleneck was never the code.&lt;/p&gt;

&lt;p&gt;It was knowing what to build. And you only learn that by building.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Spec-driven development’s vision is right: specs should be living, iterative, and at the center of how we build software with AI. What we need now is better tooling and clearer processes to make that vision the default — so teams stay on the iterative path instead of drifting into the upfront-planning trap that agile was invented to escape. The Agile Manifesto didn’t expire. It just got a new executor — and that executor needs better guardrails.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/spec-driven-development-waterfall-trap/" rel="noopener noreferrer"&gt;Spec-Driven Development Isn’t Waterfall — But It Keeps Ending Up There&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developmentbestpract</category>
      <category>agile</category>
    </item>
    <item>
      <title>Clean Code Is Dead (And I Hate That I Agree)</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:53:52 +0000</pubDate>
      <link>https://dev.to/pacheco/clean-code-is-dead-and-i-hate-that-i-agree-4kme</link>
      <guid>https://dev.to/pacheco/clean-code-is-dead-and-i-hate-that-i-agree-4kme</guid>
      <description>&lt;p&gt;I’ve spent my career fighting for clean code. In code reviews, in architecture meetings, in those long debates about naming conventions that everyone pretends to hate but secretly cares about. Readable code. Well-structured code. Code that respects the next person who has to touch it.&lt;/p&gt;

&lt;p&gt;I’m starting to realize that none of that might matter anymore.&lt;/p&gt;




&lt;h2&gt;Clean Code Was Always a Human Interface&lt;/h2&gt;

&lt;p&gt;Every clean code practice we follow was invented to solve a human problem.&lt;/p&gt;

&lt;p&gt;Descriptive variable names? So a human can read it. Separation of concerns? So a human can navigate it. Consistent formatting, small functions, clear abstractions? All of it — designed to make code convenient for people to write and to read.&lt;/p&gt;

&lt;p&gt;The entire philosophy assumes that humans are the primary audience of source code.&lt;/p&gt;

&lt;p&gt;But what happens when they’re not?&lt;/p&gt;

&lt;h2&gt;AI Doesn’t Need Your Clean Code&lt;/h2&gt;

&lt;p&gt;The more we rely on AI to write, review, and maintain code, the less we actually know the implementation details. And I don’t mean that in a lazy way — I mean structurally. The workflow is changing. You describe what you want, AI generates it, you review the output at a high level, and you move on.&lt;/p&gt;

&lt;p&gt;AI doesn’t care about your variable names. It doesn’t need elegant abstractions to understand what’s happening. It processes the entire codebase — messy or clean — with the same indifference. It doesn’t get confused by a 500-line function. It doesn’t lose context the way a human does after scrolling through too many files.&lt;/p&gt;

&lt;p&gt;I had a moment recently that made this click. I was reviewing AI-generated code and caught myself leaving comments about naming and structure — the same feedback I’d give a junior dev. Then I paused. Who was I writing these comments for? The AI would regenerate the whole thing from scratch on the next prompt anyway. I was applying human code review instincts to a process that doesn’t have a human on the receiving end (sort of). Old habits addressing a problem that no longer exists.&lt;/p&gt;

&lt;p&gt;The practices we built specifically for human readability and human convenience are becoming overhead. In some cases, they’re becoming a bottleneck — extra layers of abstraction that add complexity without benefiting the thing that’s actually doing the reading.&lt;/p&gt;

&lt;p&gt;This isn’t a thought experiment. This is already happening in how teams ship software.&lt;/p&gt;

&lt;h2&gt;The Highest Level Language Is English Now&lt;/h2&gt;

&lt;p&gt;If readability stops being the priority, what takes its place? Performance.&lt;/p&gt;

&lt;p&gt;If AI can handle the complexity regardless, why optimize for human readability when you can optimize for raw execution speed? The ideal language for AI-driven development might not be Python or TypeScript. It might be C. It might be Rust. It might be something even lower level where AI has fine-grained control over memory, threading, and every implementation detail — things that are painful for humans but trivial for a model that doesn’t get frustrated.&lt;/p&gt;

&lt;p&gt;We’ve always talked about “high level” and “low level” languages. High level meant closer to human thinking, low level meant closer to the machine. But now there’s a level above all of them.&lt;/p&gt;

&lt;p&gt;English. Portuguese. Mandarin. Whatever you speak.&lt;/p&gt;

&lt;p&gt;Natural language is the highest level language now. LLMs are remarkable polyglots — they work fluently in all of them. And code? Code is just the compilation target.&lt;/p&gt;

&lt;p&gt;We went from writing machine instructions, to writing human-readable code, to just… describing what we want in plain words. Each step abstracted away more control. Each step moved us further from the metal.&lt;/p&gt;

&lt;h2&gt;We’re Losing Control at Every Layer&lt;/h2&gt;

&lt;p&gt;It’s not just that AI writes the code. People use AI to plan the work, brainstorm the architecture, make decisions about what to build and how to build it. The entire pipeline — from idea to implementation — is being routed through language models.&lt;/p&gt;

&lt;p&gt;And LLMs are dangerously convincing. Their reasoning is well-structured even when the underlying data is fabricated or slightly off. I’ve caught myself reading an AI-generated explanation, thinking “yeah, that makes sense,” only to realize later that a key detail was subtly wrong. Or worse — never realizing it at all. The convincing tone becomes a trap.&lt;/p&gt;

&lt;p&gt;You could argue that humans were never perfectly accurate either. Fair. We’ve always built software on incomplete knowledge and best guesses. But there was something grounding about having a person in the loop who had intuition, experience, and skin in the game. Someone who could smell when something was off, even if they couldn’t articulate why.&lt;/p&gt;

&lt;p&gt;The more we delegate — not just the coding, but the thinking — the more that instinct fades. And I’m not sure we’re paying enough attention to what we’re losing.&lt;/p&gt;

&lt;h2&gt;Maybe I’m Too Attached to the Craft&lt;/h2&gt;

&lt;p&gt;Maybe I’m romanticizing this. Maybe code was always just a means to an end and I turned it into something more than it needed to be. I built part of my identity around writing good code, caring about architecture, treating the codebase as a product in itself. It’s hard to watch that become irrelevant and not take it personally.&lt;/p&gt;

&lt;p&gt;Maybe I’m onto something. Maybe the people who cared about the craft will be the ones who notice when the quality starts slipping in ways that AI can’t detect. Or maybe that’s just what I tell myself to feel relevant.&lt;/p&gt;

&lt;p&gt;I genuinely don’t know.&lt;/p&gt;

&lt;p&gt;And I can’t be a hypocrite about it. This very piece — I’m using AI to help me review it, refine the structure, make sure it reads well. I’m literally writing about the death of human craft while using the thing that’s killing it to help me write better.&lt;/p&gt;

&lt;p&gt;But the ideas are mine. The opinions are mine. The discomfort is mine. AI didn’t tell me to feel this way — I felt it, and then I used a tool to articulate it more clearly. There’s a difference between using AI as a tool and being used by it. At least I think there is.&lt;/p&gt;

&lt;h2&gt;The Mental Model Shift&lt;/h2&gt;

&lt;p&gt;I don’t have a solution. But I’ve been rethinking how I relate to the work, and that’s helped more than any specific tool or workflow.&lt;/p&gt;

&lt;p&gt;The shift is this: if code is becoming the compilation target, then what you’re really building isn’t the code — it’s the system of decisions that produces it. Your taste. Your standards. Your judgment about what good looks like. That’s the actual product now.&lt;/p&gt;

&lt;p&gt;And that’s something you can teach to AI.&lt;/p&gt;

&lt;p&gt;I’ve been experimenting with this — taking the patterns I’ve developed over years of writing software and encoding them into the tools I work with. Not just “generate a function that does X” but “here’s how I think about error handling, here’s my preference on abstraction depth, here’s what I consider acceptable tradeoffs.” The more specific you get about your own engineering philosophy, the more the output starts to feel like yours instead of generic AI slop.&lt;/p&gt;

&lt;p&gt;This isn’t complicated or expensive. The tooling to build your own AI workflows — agents that understand how &lt;em&gt;you&lt;/em&gt; work — is accessible today in a way that would’ve been unthinkable two years ago. You don’t need a team or a platform. You need clarity about your own standards and the willingness to invest time in teaching them.&lt;/p&gt;

&lt;p&gt;If you’ve spent years developing engineering taste, that taste is now &lt;em&gt;leverage&lt;/em&gt;. You can apply it at a scale that was never possible when you had to write every line yourself. More ambitious projects. More complex systems. Things that would’ve required a team, handled by one person with clear vision and the right tools.&lt;/p&gt;

&lt;p&gt;It only works if you stay in the driver’s seat though. If you’re the one making the calls about what ships and what gets thrown away. Not a consumer of whatever AI generates, but the lead. The final authority.&lt;/p&gt;

&lt;p&gt;And right now, I’m watching a lot of people quietly stop being that.&lt;/p&gt;

&lt;h2&gt;I Don’t Have a Clean Answer&lt;/h2&gt;

&lt;p&gt;If language models keep evolving at even half the pace we’ve seen over the last couple of years, the industry in five years looks nothing like it does today. The way we think about programming, about code quality, about what it means to be a software engineer — all of it is up for renegotiation.&lt;/p&gt;

&lt;p&gt;I don’t have a neat conclusion. I have a tension I’m sitting with, and I think a lot of developers feel it too even if they haven’t put words to it yet.&lt;/p&gt;

&lt;p&gt;Clean code might be dead. The practices, the principles, the carefully named variables and thoughtfully extracted functions — they might genuinely become artifacts of an era when humans needed to read what humans wrote.&lt;/p&gt;

&lt;p&gt;But the intention behind clean code? Caring about what you build. Taking pride in the craft. Giving a damn about quality even when no one is looking?&lt;/p&gt;

&lt;p&gt;That can’t die. Unless we let it.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/clean-code-is-dead/" rel="noopener noreferrer"&gt;Clean Code Is Dead (And I Hate That I Agree)&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developmentbestpract</category>
      <category>aicoding</category>
    </item>
    <item>
      <title>You Think, AI Executes: The Skills That Actually Matter</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Mon, 06 Apr 2026 02:58:16 +0000</pubDate>
      <link>https://dev.to/pacheco/you-think-ai-executes-the-skills-that-actually-matter-1e7p</link>
      <guid>https://dev.to/pacheco/you-think-ai-executes-the-skills-that-actually-matter-1e7p</guid>
      <description>&lt;p&gt;The most valuable developer skill right now isn't writing more code faster. It's learning unfamiliar codebases, building context that guides decisions, planning strategic approaches to problems, and shipping production code with confidence.&lt;/p&gt;

&lt;p&gt;I recently added &lt;code&gt;.env&lt;/code&gt; file support to &lt;a href="https://github.com/joerdav/xc" rel="noopener noreferrer"&gt;xc&lt;/a&gt;, a Markdown-based task runner written in Go. The codebase was completely unfamiliar. I'm not a Go expert. But in 2.5 hours, I went from zero knowledge to a production-ready pull request with 84% test coverage and zero bugs in manual testing.&lt;/p&gt;

&lt;p&gt;Here's what's different: &lt;strong&gt;I didn't write a single line of code.&lt;/strong&gt; Not one. AI wrote everything—tests, implementation, integration, documentation. My role was entirely different: I questioned, I planned, I directed, I reviewed. I read the code, but I didn't write it.&lt;/p&gt;

&lt;p&gt;This isn't another "I asked ChatGPT to build an app" story. This is about the skills that separate developers who use AI as a force multiplier from those who just ask it to generate code. It's about onboarding fast, documenting strategically, planning thoroughly, directing execution, and reviewing confidently. The code writing? That's handled.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpc3lwr2dgyj25mysreu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpc3lwr2dgyj25mysreu.png" alt="📁" width="72" height="72"&gt;&lt;/a&gt; Complete &lt;code&gt;.ai/&lt;/code&gt; folder in the working fork:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;github.com/sudoish/xc/tree/ai-context/.ai&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f19bu2w1s9oqo72994z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f19bu2w1s9oqo72994z.png" alt="🔀" width="72" height="72"&gt;&lt;/a&gt; Production-ready PR:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/joerdav/xc/pull/167" rel="noopener noreferrer"&gt;github.com/joerdav/xc/pull/167&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvs7fddjqg8063ot203d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvs7fddjqg8063ot203d.png" alt="💡" width="72" height="72"&gt;&lt;/a&gt; The &lt;code&gt;.ai/&lt;/code&gt; folder lives in a separate &lt;code&gt;ai-context&lt;/code&gt; branch so it doesn't clutter the main codebase but remains available for reference and iteration.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;Most AI coding demos show you the magic: "I asked ChatGPT to build X and it worked!" They skip the parts that actually matter for professional development: How do you onboard to a codebase you've never seen? How do you make architectural decisions when you don't understand the patterns yet? How do you ensure your code is production-ready when AI helped write it?&lt;/p&gt;

&lt;p&gt;These are the skills that matter now. Code generation is table stakes. What matters is context building, strategic planning, and confident execution.&lt;/p&gt;

&lt;p&gt;Here's the project: &lt;a href="https://github.com/joerdav/xc" rel="noopener noreferrer"&gt;xc&lt;/a&gt;, a task runner that reads tasks from Markdown files. About 5,000 lines of Go. Completely unfamiliar to me. The feature request was straightforward: add &lt;code&gt;.env&lt;/code&gt; file support (&lt;a href="https://github.com/joerdav/xc/issues/162" rel="noopener noreferrer"&gt;Issue #162&lt;/a&gt;). In 2.5 hours, using free AI models and a structured approach, I went from knowing nothing about the codebase to a merged pull request.&lt;/p&gt;
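<p>For anyone who hasn't used one: a <code>.env</code> file is just <code>KEY=VALUE</code> lines that get loaded into the environment before a task runs. The values below are purely illustrative:</p>

&lt;pre&gt;&lt;code&gt;# .env — loaded into the environment before tasks execute
DATABASE_URL=postgres://localhost:5432/dev
LOG_LEVEL=debug
&lt;/code&gt;&lt;/pre&gt;

<p>Small surface area, but it touches config loading, task execution, and precedence rules — a good test of whether the process holds up in unfamiliar code.</p>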

&lt;p&gt;The difference wasn't better prompts. It was better process.&lt;/p&gt;

&lt;h2&gt;The Actual Workflow: What I Did vs What AI Did&lt;/h2&gt;

&lt;p&gt;Here's the honest breakdown of who did what. I didn't write a single line of code myself. That's not the valuable work anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I did:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Explored the codebase with AI&lt;/strong&gt; — Asked questions, challenged its understanding, verified explanations against the actual code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built the &lt;code&gt;.ai/&lt;/code&gt; structure&lt;/strong&gt; — Wrote context docs, ADRs, rules, and implementation specs based on my growing understanding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Questioned the strategy&lt;/strong&gt; — Evaluated alternatives, captured trade-offs, made architectural decisions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Directed the implementation&lt;/strong&gt; — "Follow the spec. Implement test 1. Now test 2." Each step validated before moving forward&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reviewed iteratively&lt;/strong&gt; — Asked AI to review the code, digested its findings, confirmed issues, asked it to fix them. Repeated multiple times&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Final deep review&lt;/strong&gt; — Read through the entire PR on GitHub, verified everything made sense, marked ready for review&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What AI did:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Answered my questions&lt;/strong&gt; — Explained architecture, pointed me to relevant files, clarified patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wrote all the code&lt;/strong&gt; — Tests, implementation, integration, everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Found its own bugs&lt;/strong&gt; — Self-review caught 5 issues before I even looked at the code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixed the issues&lt;/strong&gt; — Applied fixes based on its own review findings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Followed the plan&lt;/strong&gt; — Implemented exactly what the spec described, in the order specified&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What we did together:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built understanding through conversation&lt;/li&gt;
&lt;li&gt;Validated each step before proceeding&lt;/li&gt;
&lt;li&gt;Caught subtle bugs through TDD&lt;/li&gt;
&lt;li&gt;Created production-ready code with high confidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight: &lt;strong&gt;I never typed code.&lt;/strong&gt; I read it, reviewed it, directed changes to it. But I didn't write it. My value was in understanding, planning, and judgment. AI's value was in execution and self-checking. This is the new division of labor.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Four Skills
&lt;/h2&gt;

&lt;p&gt;This walkthrough demonstrates four skills that matter more than code generation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 1: Rapid Onboarding.&lt;/strong&gt; Learning an unfamiliar codebase fast by building structured context instead of reading every file. The &lt;code&gt;.ai/&lt;/code&gt; folder captures architecture, patterns, and limitations in a way both humans and AI can reference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 2: Strategic Documentation.&lt;/strong&gt; Building documentation that guides development, not just records it. Architecture Decision Records (ADRs) capture the "why" behind choices, evaluate alternatives, and create a shared understanding before code is written.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 3: Systematic Planning.&lt;/strong&gt; Breaking down problems into testable steps. Each test defines expected behavior. Each implementation proves the behavior works. Each commit tells part of the story. No guessing, no hoping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 4: Confident Execution.&lt;/strong&gt; Shipping code you trust because you've tested it thoroughly, reviewed it critically, and validated it works in real scenarios. AI can help write code, but you own the quality.&lt;/p&gt;

&lt;p&gt;These skills work regardless of the AI tool you use. They work with free models. They work on unfamiliar codebases.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Feature Request
&lt;/h2&gt;

&lt;p&gt;First, a quick primer on how xc works: it's a task runner that reads tasks directly from your &lt;code&gt;README.md&lt;/code&gt; (or any markdown file). Tasks are defined as markdown headings with code blocks. When you run &lt;code&gt;xc test&lt;/code&gt;, it finds the &lt;code&gt;## test&lt;/code&gt; heading in your README and executes the code block beneath it. The genius is that your documentation &lt;em&gt;is&lt;/em&gt; your task runner, so they never get out of sync.&lt;/p&gt;

&lt;p&gt;A user opened &lt;a href="https://github.com/joerdav/xc/issues/162" rel="noopener noreferrer"&gt;Issue #162&lt;/a&gt; asking for &lt;code&gt;.env&lt;/code&gt; file support. They wanted to use the same set of tasks for different environments without cluttering the Markdown with environment variables.&lt;/p&gt;

&lt;p&gt;Before the feature, you'd have to write this in your README.md:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## deploy&lt;/span&gt;

Deploy to production.

Env: &lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres://prod/db, &lt;span class="nv"&gt;API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret123, &lt;span class="nv"&gt;ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kubectl apply -f deployment.yaml&lt;/p&gt;

&lt;p&gt;Then run with &lt;code&gt;xc deploy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After the feature, your README stays clean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## deploy&lt;/span&gt;

Deploy to production.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kubectl apply -f deployment.yaml&lt;/p&gt;

&lt;p&gt;The environment variables live in a separate &lt;code&gt;.env&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;postgres://prod/db&lt;/span&gt;
&lt;span class="py"&gt;API_KEY&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;secret123&lt;/span&gt;
&lt;span class="py"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;production&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You still run the same command, but now the credentials are managed in &lt;code&gt;.env&lt;/code&gt; instead of cluttering your documentation.&lt;/p&gt;

&lt;p&gt;Simple ask, but the implementation requires real decisions. When do you load the files? What about overrides? How do you handle security? What about backward compatibility?&lt;/p&gt;

&lt;h2&gt;
  
  
  The &lt;code&gt;.ai/&lt;/code&gt; Structure: Context as Code
&lt;/h2&gt;

&lt;p&gt;Before writing any code, I created a structured context folder. This turned out to be the key to working with AI effectively. It's not about better prompts, it's about better &lt;strong&gt;structure&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Full &lt;code&gt;.ai/&lt;/code&gt; folder:&lt;/strong&gt; &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;github.com/sudoish/xc/tree/ai-context/.ai&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The folder looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.ai/
├── agents.md # Who's working on what
├── context.md # Project overview, architecture
├── architecture/
│ ├── decisions.md # Current design patterns
│ └── adrs/
│ └── 001-dotenv-support.md # Design decisions for this feature
├── rules/
│ ├── code-style.md # Go conventions
│ ├── testing.md # TDD workflow
│ └── commits.md # Commit message format
└── tasks/
    └── 001-dotenv-implementation.md # Step-by-step plan

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; This structure is an investment, not overhead you repeat for every feature. You build it once during your first feature, then leverage it for every feature after. The &lt;code&gt;context.md&lt;/code&gt;, &lt;code&gt;architecture/decisions.md&lt;/code&gt;, and &lt;code&gt;rules/&lt;/code&gt; files rarely change. Each new feature just adds a new ADR (like &lt;code&gt;002-api-caching.md&lt;/code&gt;) and a new task spec (like &lt;code&gt;002-api-caching-implementation.md&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Think of it like setting up your development environment. The initial setup takes time, but every feature after that is faster because the foundation exists.&lt;/p&gt;

&lt;p&gt;Each file serves a specific purpose. The &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/context.md" rel="noopener noreferrer"&gt;&lt;code&gt;context.md&lt;/code&gt;&lt;/a&gt; file becomes AI's memory. It explains what xc does, how it's architected with its &lt;code&gt;cmd/&lt;/code&gt;, &lt;code&gt;models/&lt;/code&gt;, &lt;code&gt;run/&lt;/code&gt;, and &lt;code&gt;parser/&lt;/code&gt; packages, what key behaviors exist like dependencies and environment handling, and what current limitations we're working around. Every time I ask AI a question, this context gets included automatically.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/rules/testing.md" rel="noopener noreferrer"&gt;&lt;code&gt;rules/testing.md&lt;/code&gt;&lt;/a&gt; file defines the TDD workflow we follow: write a failing test first (red), write minimal code to make it pass (green), clean up without changing behavior (refactor), then commit. This keeps both me and AI honest. No skipping tests. No shortcuts.&lt;/p&gt;

&lt;p&gt;The real gem is &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/architecture/adrs/001-dotenv-support.md" rel="noopener noreferrer"&gt;&lt;code&gt;adrs/001-dotenv-support.md&lt;/code&gt;&lt;/a&gt;, the Architecture Decision Record. This is where design happens. It's not "build me a feature," it's "here's why we chose this approach." We decided to load .env files at application startup rather than per-task, to support &lt;code&gt;.env.local&lt;/code&gt; overrides, to skip world-readable files for security, and to add CLI flags like &lt;code&gt;--env-file&lt;/code&gt; and &lt;code&gt;--no-env&lt;/code&gt;. We considered alternatives like per-task loading (rejected as too complex) and requiring an explicit flag (rejected as too much friction). This ADR becomes the source of truth. When AI suggests something different, I can just say "check the ADR."&lt;/p&gt;
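&lt;p&gt;For readers who haven't written one, an ADR is short and structured. The sketch below is a hypothetical skeleton following the usual ADR shape, not the actual text of &lt;code&gt;001-dotenv-support.md&lt;/code&gt;:&lt;/p&gt;

```markdown
# ADR 001: .env File Support

## Status
Accepted

## Context
Users want per-environment variables without cluttering README tasks
(issue #162).

## Decision
- Load `.env` (then `.env.local` as an override) at application startup.
- Skip world-readable files and warn the user.
- Add `--env-file` and `--no-env` CLI flags.

## Alternatives Considered
- Per-task loading: rejected, too complex.
- Explicit opt-in flag: rejected, too much friction.

## Consequences
Existing task-level `Env:` statements keep working unchanged.
```

The exact headings matter less than having a fixed format: AI can be pointed at "Decision" and "Alternatives Considered" without re-reading a conversation.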

&lt;p&gt;&lt;strong&gt;The living documentation principle:&lt;/strong&gt; As the codebase evolves, so does the &lt;code&gt;.ai/&lt;/code&gt; folder. When you add a new feature, you write a new ADR (002, 003, etc.). When architecture changes, you update &lt;code&gt;architecture/decisions.md&lt;/code&gt; or add a new ADR explaining the change. When patterns emerge, you document them. The folder grows with the project, but the structure stays the same. Each feature builds on the understanding captured before it.&lt;/p&gt;

&lt;p&gt;This means the second feature is faster than the first. The third is faster than the second. The documentation compounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Task Spec: Planning Before Coding
&lt;/h2&gt;

&lt;p&gt;Before writing any code, I created &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/tasks/001-dotenv-implementation.md" rel="noopener noreferrer"&gt;&lt;code&gt;tasks/001-dotenv-implementation.md&lt;/code&gt;&lt;/a&gt;, a step-by-step plan for implementing the feature. This isn't a project management document. It's a development spec that breaks the feature into TDD cycles.&lt;/p&gt;

&lt;p&gt;The spec listed each test I needed to write, what behavior it should verify, and the expected implementation. Test for file not found. Test for loading valid env. Test for .env.local overrides. Test for security checks. Each one became a TDD cycle.&lt;/p&gt;

&lt;p&gt;This is what makes AI effective. Without the spec, I'd be asking AI "what should I do next?" every five minutes. With the spec, I'm asking "implement the next test according to the plan." The spec keeps development focused and systematic. It's the difference between wandering and following a map.&lt;/p&gt;

&lt;p&gt;For your second feature, you write a new spec. For your third, another one. The format is consistent, but each spec is tailored to its feature. This is the work that makes development fast and confident.&lt;/p&gt;

&lt;h2&gt;
  
  
  The TDD Flow: Red → Green → Refactor → Commit
&lt;/h2&gt;

&lt;p&gt;Here's where the real work happens. Each test defines acceptance criteria for exactly what needs to be built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cycle 1: Valid .env should load variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First behavior: if a &lt;code&gt;.env&lt;/code&gt; file exists and contains &lt;code&gt;KEY=value&lt;/code&gt; pairs, those should be loaded into the environment. Test written, test failed (red)—no loader existed yet. Implementation added using the godotenv library (green). Test passed. Committed with "load env vars from dotenv file".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cycle 2: .env.local should override .env&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Expected behavior: if both &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.local&lt;/code&gt; exist, and both define the same variable, the &lt;code&gt;.env.local&lt;/code&gt; value wins. This is crucial for local development where you want to override defaults without modifying the base file. Test written, test failed initially because I was using the wrong function, fixed the implementation, test passed. Committed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cycle 3: World-readable files should be skipped&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Security requirement: if a &lt;code&gt;.env&lt;/code&gt; file has permissions that allow other users to read it (like &lt;code&gt;chmod 644&lt;/code&gt;), skip loading it and warn the user. This prevents accidentally exposing secrets. Test created, test failed (secrets were being loaded), added permission check, test passed. Committed.&lt;/p&gt;

&lt;p&gt;This rhythm of define → test → implement → verify → commit creates a clean history. When I looked at the final commit log, I could see exactly how the feature evolved: add godotenv dependency, load env vars from dotenv file, support dotenv local overrides, add security check for world readable files, integrate dotenv loading into main, add env file cli flags. Thirteen commits total, each one atomic and meaningful. Each commit is a story about one specific behavior being added.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Review Process
&lt;/h2&gt;

&lt;p&gt;After the implementation was done, I ran the review process described earlier: AI's self-review surfaced the issues, and I confirmed each one with my own deep read of the code. Five issues needed fixing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 1: Test Isolation (Critical)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tests were modifying the global environment without properly restoring it. If a test set &lt;code&gt;TEST_KEY=value&lt;/code&gt;, the cleanup would delete it, but what if that key already existed before the test ran? The cleanup wasn't restoring the original value, just removing the key. This breaks parallel test execution because tests can interfere with each other.&lt;/p&gt;

&lt;p&gt;The fix: create a helper function that saves the current state of environment variables before the test runs, then restores that exact state (including whether the variable existed at all) when the test completes. Now tests are safe to run in parallel. Committed with "add test environment isolation helper".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 2: Windows Test Bug (Critical)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One test needed to skip execution on Windows because file permission models are different. I had written the check incorrectly, reading from an environment variable instead of the language's built-in constant. This would break Windows CI. Small mistake, but important. Fixed and committed with "fix windows test skip to use runtime goos".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 3: Early Exit Timing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The .env loading was happening even for commands like &lt;code&gt;--help&lt;/code&gt; and &lt;code&gt;--version&lt;/code&gt;, which meant users could see security warnings when just checking the version. Moved the loading to happen after those early exits. Performance optimization and better user experience. Committed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 4: Error Context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When file operations failed, errors didn't indicate which file caused the problem. Added context wrapping so errors show the specific file path. Makes debugging much easier. Committed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 5: Test Coverage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One helper function didn't have its own test. Added coverage to bring the total to 84%. Committed.&lt;/p&gt;

&lt;p&gt;Each issue got its own fix, its own verification, its own commit. The same disciplined process for fixes that I used for features.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Manual Testing
&lt;/h2&gt;

&lt;p&gt;Code works in tests, but does it work for real users? I installed my version and created a test project to verify everything worked end-to-end.&lt;/p&gt;

&lt;p&gt;I created a &lt;code&gt;.env&lt;/code&gt; file with some variables, created a &lt;code&gt;.env.local&lt;/code&gt; file that overrode some of them, and made sure the permissions were correct with &lt;code&gt;chmod 600&lt;/code&gt;. Then I added a task to my &lt;code&gt;README.md&lt;/code&gt; to verify the variables were loaded:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In README.md:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## check-env&lt;/span&gt;

Check loaded environment variables.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;echo "Environment: $ENV"&lt;br&gt;
echo "Database: $DATABASE_URL"&lt;br&gt;
echo "API Key: ${API_KEY:0:8}..."&lt;/p&gt;

&lt;p&gt;When I ran &lt;code&gt;xc check-env&lt;/code&gt;, I saw exactly what I expected. The &lt;code&gt;xc&lt;/code&gt; command read the task from the README and executed it with the environment variables from &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.local&lt;/code&gt;. The environment was set to "development" from the base .env, but the database URL and API key were overridden by .env.local. Perfect.&lt;/p&gt;

&lt;p&gt;I ran eight manual test scenarios: default &lt;code&gt;.env&lt;/code&gt; loading, &lt;code&gt;.env.local&lt;/code&gt; overrides, the &lt;code&gt;--no-env&lt;/code&gt; flag skipping loading, &lt;code&gt;--env-file&lt;/code&gt; loading a custom path, security warnings for world-readable files, task-level &lt;code&gt;Env:&lt;/code&gt; statements still working, &lt;code&gt;--help&lt;/code&gt; not loading &lt;code&gt;.env&lt;/code&gt; (avoiding unnecessary warnings), and a real-world multi-variable scenario. All eight passed.&lt;/p&gt;
&lt;h2&gt;
  
  
  The PR
&lt;/h2&gt;

&lt;p&gt;I submitted everything as &lt;a href="https://github.com/joerdav/xc/pull/167" rel="noopener noreferrer"&gt;PR #167&lt;/a&gt;. The changes included thirteen commits (eight for the feature, five for fixes), about 200 lines of code including tests, four unit tests plus six integration tests, 84% code coverage, and zero bugs found in manual testing.&lt;/p&gt;

&lt;p&gt;The documentation was complete with a README section showing examples, a &lt;code&gt;.env.example&lt;/code&gt; template file, the load order documented clearly, and security best practices explained. Most importantly, everything was backward compatible. Existing task-level &lt;code&gt;Env:&lt;/code&gt; statements still work exactly as before.&lt;/p&gt;
&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;.ai/&lt;/code&gt; folder was the game-changer. Instead of writing long prompts like "Build me a .env loader with security checks and…", I could just say "Implement the loader per ADR-001". The ADR contains all the decisions. AI just implements them.&lt;/p&gt;

&lt;p&gt;I used free models throughout. No expensive API calls. The key wasn't the model, it was the context. Clear architecture docs, explicit ADRs, and well-defined tests gave AI everything it needed to generate good code.&lt;/p&gt;

&lt;p&gt;TDD kept everything honest. Every cycle followed the same pattern: write a test that defines the behavior, let AI suggest an implementation, let the test validate it works, then commit. No guessing. No "it probably works." The test proves it.&lt;/p&gt;

&lt;p&gt;Thirteen commits might seem like a lot for 200 lines of code, but each commit serves a purpose. Each one is reviewable on its own. Each one tells part of the story. Each one is revertible if needed. Git bisect works perfectly with this kind of history.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;.env.local&lt;/code&gt; override issue shows the workflow clearly. AI suggested the wrong approach first, using &lt;code&gt;Load()&lt;/code&gt; instead of &lt;code&gt;Overload()&lt;/code&gt;. But the test caught it. That's how it should work: AI suggests, test validates, human decides.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Real Value
&lt;/h2&gt;

&lt;p&gt;This isn't about "AI wrote code for me." It's about process, collaboration, and documentation.&lt;/p&gt;

&lt;p&gt;The process matters. Structured context in the &lt;code&gt;.ai/&lt;/code&gt; folder. Design decisions captured in ADRs. TDD discipline with tests written first. Small commits with one change at a time. This is how you ship production code.&lt;/p&gt;

&lt;p&gt;The collaboration matters. AI acts as a pair programmer, not a magic wand. Tests validate AI suggestions. Human makes the design decisions. Both contribute to better code.&lt;/p&gt;

&lt;p&gt;The documentation matters. Future contributors now have context about the project, the architecture, and why decisions were made the way they were. The implementation plan is explicit. The tests document the expected behavior. Six months from now, none of this is lost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The compounding matters most.&lt;/strong&gt; You build the foundation once. Every feature after that leverages it. The second feature doesn't need new &lt;code&gt;context.md&lt;/code&gt; or &lt;code&gt;rules/&lt;/code&gt; files, just a new ADR and task spec. The third feature is even faster. The documentation evolves as the codebase evolves. New ADRs when architecture changes. Updates to &lt;code&gt;context.md&lt;/code&gt; when understanding deepens. Updates to &lt;code&gt;rules/&lt;/code&gt; when patterns emerge. The investment pays dividends forever.&lt;/p&gt;
&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;Want to replicate this process? Pick a project and create the &lt;code&gt;.ai/&lt;/code&gt; structure right in your working directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; .ai/&lt;span class="o"&gt;{&lt;/span&gt;architecture/adrs,rules,tasks&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;template structure&lt;/a&gt; as a guide. Build the foundation files once (&lt;code&gt;context.md&lt;/code&gt;, &lt;code&gt;architecture/decisions.md&lt;/code&gt;, &lt;code&gt;rules/&lt;/code&gt;), then for each feature add a new ADR and task spec. The &lt;code&gt;.ai/&lt;/code&gt; folder lives alongside your code and evolves with it—commit it with your changes so it stays in sync.&lt;/p&gt;

&lt;p&gt;Direct AI through TDD: "Implement test 1 from the spec." AI writes the test and implementation. "Run it." Test passes. "Commit." Repeat. When done, have AI review its own work, confirm findings, direct fixes. Then do your final review for strategic correctness.&lt;/p&gt;

&lt;p&gt;Each feature adds a new ADR and task spec to the &lt;code&gt;.ai/&lt;/code&gt; folder. The foundation files rarely change. The documentation compounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Timeline
&lt;/h2&gt;

&lt;p&gt;I spent about 45 minutes on documentation upfront: exploring the codebase with AI, questioning its understanding, writing the ADRs, rules, and context. This sounds like a lot, but it's a one-time investment. The &lt;code&gt;context.md&lt;/code&gt;, &lt;code&gt;architecture/decisions.md&lt;/code&gt;, and &lt;code&gt;rules/&lt;/code&gt; files I wrote for this first feature will be reused for every future feature. I'll only spend 10-15 minutes on feature-specific docs (ADR + task spec) for the next feature.&lt;/p&gt;

&lt;p&gt;The implementation took 40 minutes: directing AI through TDD cycles, one test at a time, validating each step. Integration of CLI flags and wiring into main.go took 15 minutes of the same directed approach. Documentation like README updates and examples took another 15 minutes. Manual testing took 15 minutes: I installed the binary and ran real scenarios. The review process took 30 minutes: first AI reviewed its own code (found 5 issues), then I reviewed the fixes, then I did a final deep review on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; AI wrote 100% of the code. I wrote 100% of the strategy, asked 100% of the questions, and made 100% of the decisions. I reviewed every line, but I didn't type any of them. Total time from fork to production-ready PR was about 2.5 hours.&lt;/p&gt;

&lt;p&gt;If I added a second feature tomorrow, it would take less time. By the third feature even faster. The documentation compounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;The complete &lt;code&gt;.ai/&lt;/code&gt; structure and documentation is at &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;github.com/sudoish/xc/tree/ai-context/.ai&lt;/a&gt;. The pull request with all code and tests is at &lt;a href="https://github.com/joerdav/xc/pull/167" rel="noopener noreferrer"&gt;github.com/joerdav/xc/pull/167&lt;/a&gt;. The working fork is at &lt;a href="https://github.com/sudoish/xc" rel="noopener noreferrer"&gt;github.com/sudoish/xc&lt;/a&gt;. The original issue is &lt;a href="https://github.com/joerdav/xc/issues/162" rel="noopener noreferrer"&gt;joerdav/xc#162&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;.ai/&lt;/code&gt; folder lives in a separate branch in this example only because I wanted to reference it for this article without including it in the PR to the upstream project. In your own work, keep the &lt;code&gt;.ai/&lt;/code&gt; folder in your main working branch and commit it with your changes—it should evolve alongside your code, not separately.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Skills That Actually Matter
&lt;/h2&gt;

&lt;p&gt;AI wrote every line of code. I read every line, but I didn't write any of them. The feature is production-ready because I focused on what actually matters.&lt;/p&gt;

&lt;p&gt;The four skills transformed from framework to practice: rapid onboarding through questioning AI and building structured context, strategic documentation through ADRs written before code, systematic planning through testable specs, and iterative review through AI self-checks followed by strategic verification.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;.ai/&lt;/code&gt; folder, the ADRs, the task specs, the review cycles—they all worked exactly as planned. The result: 84% coverage, zero bugs, 2.5 hours from fork to production-ready PR.&lt;/p&gt;

&lt;p&gt;These skills work with free models. They work on unfamiliar codebases. They separate developers who use AI effectively from those who just generate code and hope it works.&lt;/p&gt;

&lt;p&gt;The magic isn't in the AI. It's in the process. And the process is this: &lt;strong&gt;you think, you plan, you direct, you review. AI executes.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Your Career
&lt;/h2&gt;

&lt;p&gt;The developer who can onboard to unfamiliar codebases fast, document decisions strategically, plan systematically, and execute with confidence is far more valuable than the developer who can write code quickly. Because here's the reality: &lt;strong&gt;code writing is no longer the bottleneck. It never was; AI just made that a lot more evident.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I shipped a production-ready feature to an unfamiliar codebase in 2.5 hours without writing a single line of code. The bottleneck wasn't typing. It was understanding, planning, and judging. Those are the skills that matter.&lt;/p&gt;

&lt;p&gt;AI tools are getting better at code generation every month. They're not getting better at understanding your codebase's architecture, making strategic trade-offs, or ensuring production quality. Those skills are still yours. Those skills are what companies pay for.&lt;/p&gt;

&lt;p&gt;The question isn't "Will AI replace developers?" It's "Which developers will thrive when everyone has access to AI?" The answer is the ones who master onboarding, documentation, planning, and review. The ones who understand that their job is no longer to write code—it's to think clearly, plan thoroughly, and judge correctly.&lt;/p&gt;

&lt;p&gt;This is the junior dev role being redefined. It's not about writing boilerplate anymore. That work is done. It's about learning systems fast, making good decisions, directing execution, and ensuring quality. If you can do that, you're not competing with AI. You're orchestrating it.&lt;/p&gt;

&lt;p&gt;Writing code is optional. Reading it, understanding it, and judging it—those aren't.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post documents a real open source contribution made using AI as a pair programmer. All code, tests, documentation, and the complete &lt;code&gt;.ai/&lt;/code&gt; folder structure are publicly available in the &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;sudoish/xc fork&lt;/a&gt; for anyone who wants to replicate this approach.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/ai-driven-development-xc-dotenv/" rel="noopener noreferrer"&gt;You Think, AI Executes: The Skills That Actually Matter&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>uncategorized</category>
      <category>ai</category>
      <category>developerskills</category>
      <category>developmentprocess</category>
    </item>
    <item>
      <title>How We Made It Nearly Impossible to Become a Developer</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 29 Mar 2026 16:47:08 +0000</pubDate>
      <link>https://dev.to/pacheco/how-we-made-it-nearly-impossible-to-become-a-developer-4oog</link>
      <guid>https://dev.to/pacheco/how-we-made-it-nearly-impossible-to-become-a-developer-4oog</guid>
      <description>&lt;p&gt;I once interviewed a senior software engineer. Almost 10 years of experience. Proven track record of delivery. Solid industry knowledge. The kind of person you’d want on your team without a second thought.&lt;/p&gt;

&lt;p&gt;They completed the technical challenge. Not flawlessly — some of the trade-offs they made weren't correct. But when we reviewed their decisions together, the reasoning was sound. They showed commitment to their choices and could articulate why they went the direction they did. Some answers were vague in spots, some mistakes were real, but nothing that wouldn't get corrected in the first week on the job with actual codebase context. The kind of gaps that disappear when you're working on real problems instead of performing in a vacuum.&lt;/p&gt;

&lt;p&gt;We didn’t hire them.&lt;/p&gt;

&lt;p&gt;Not because I didn’t want to. I did. But the compounded small mistakes added up under the scoring rubric, and the final grade wasn’t strong enough to sell to the hiring managers. The rules of the process made a good engineer look like a bad candidate.&lt;/p&gt;

&lt;p&gt;And I get it — those rules exist to keep the bar high, to ensure we only hire top talent. At least, that’s what every company believes. But what I’ve seen throughout my career, on both sides of the table, is that the process doesn’t filter for the best engineers. It filters for the best interviewers. And we lose great colleagues — dedicated, talented people — because they didn’t fit the rule book.&lt;/p&gt;

&lt;p&gt;That was a senior engineer with a decade of experience. Now imagine you’re a junior with none.&lt;/p&gt;

&lt;p&gt;The software industry has a hiring problem. Not the kind where we can’t find people — the kind where we’ve made it nearly impossible for new people to get in.&lt;/p&gt;

&lt;p&gt;Entry-level developer hiring has collapsed — some reports show drops of 60% or more in the past year, with actual hires into junior roles falling as much as 73%. CS graduates are sitting at 6.1% unemployment according to the Federal Reserve Bank of New York — more than double the overall national rate. And the majority of tech leaders say they plan to reduce entry-level hiring even further while increasing AI investment.&lt;/p&gt;

&lt;p&gt;But the pipeline didn’t break overnight. It’s been cracking for years. AI just kicked the door in.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Interview Problem Nobody Wants to Fix
&lt;/h2&gt;

&lt;p&gt;The interview process was broken long before AI showed up. And I’ll say what a lot of people in the industry think but won’t say out loud: &lt;strong&gt;the standard software engineering interview process is unrealistic, unnecessarily demanding, and a terrible predictor of on-the-job performance.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think about what we ask candidates to do. Solve algorithmic puzzles on a whiteboard or shared screen. Explain their thought process in real time while simultaneously figuring out the solution. Design systems on the spot for problems they’ve never encountered in that specific framing. All while someone watches and judges every hesitation.&lt;/p&gt;

&lt;p&gt;Here’s the thing — that’s not how software development works. Not even close.&lt;/p&gt;

&lt;p&gt;Real engineering is focused, deep work. It’s sitting alone with a problem for hours, researching approaches, trying things, breaking things, iterating. It’s the exact opposite of performing under observation with a timer running. Most developers do their best work when they’re left alone to think. Asking them to showcase and explain how they’d deliver features while they’re still processing the problem doesn’t test their engineering ability. It tests their ability to perform under artificial pressure.&lt;/p&gt;

&lt;p&gt;And yet, this is how we gatekeep an entire profession.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Even a Principal Engineer Can’t Pass
&lt;/h2&gt;

&lt;p&gt;I have a close friend who’s a principal engineer. He’s delivered massive projects — systems that demanded deep technical complexity, hard reliability guarantees, and serious scale. The kind of work that keeps companies running. I’ve watched him turn down offers from companies that couldn’t skip the whiteboard stage. His track record speaks for itself, but he knows the interview process doesn’t care.&lt;/p&gt;

&lt;p&gt;He straight up refuses to do technical interviews. Hates the process. Never performed well in them.&lt;/p&gt;

&lt;p&gt;But that wasn’t always the case. Early in his career, he had no choice. He went through the motions, sat through the whiteboard sessions, stumbled through the live coding exercises. And that’s exactly how he learned he was terrible at it. Not terrible at engineering — terrible at the performance.&lt;/p&gt;

&lt;p&gt;If a principal engineer with years of proven delivery struggles with this process, what does that tell us? It tells us we’re measuring the wrong thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Culture fit, communication, problem-solving mindset, willingness to learn — these should carry far more weight than whether someone can implement a binary tree traversal from memory while a stranger watches.&lt;/strong&gt; But the industry has standardized around LeetCode-style assessments like they’re some universal truth, and we’ve collectively decided that this is just how it works.&lt;/p&gt;

&lt;p&gt;It’s not. It’s a choice. And it’s a bad one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Wrong Skills at the Worst Time
&lt;/h2&gt;

&lt;p&gt;Here’s where it gets really damaging for juniors specifically.&lt;/p&gt;

&lt;p&gt;When you’re just starting your career, you have limited time, limited money, and unlimited pressure. The message the industry sends you is clear: grind LeetCode. Master data structures and algorithms. Practice system design for systems you’ve never built. Get good at performing.&lt;/p&gt;

&lt;p&gt;So that’s what people do. They spend months — sometimes six months or more — focused entirely on interview preparation instead of actually building things, learning real-world patterns, or developing the engineering intuition that makes someone genuinely valuable.&lt;/p&gt;

&lt;p&gt;We’re literally telling the next generation of developers to optimize for the wrong skills from day one. And then we wonder why new hires can’t navigate a real codebase.&lt;/p&gt;

&lt;p&gt;The industry has created a perverse incentive: &lt;strong&gt;becoming good at getting hired and becoming good at the job are two completely different skill paths.&lt;/strong&gt; And for juniors who are just figuring out what software engineering even is, being forced down the interview prep path first is actively harmful to their development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now Add AI to the Mix
&lt;/h2&gt;

&lt;p&gt;As if the interview gauntlet wasn’t enough, the industry just added a new requirement: you need to be proficient with AI tools.&lt;/p&gt;

&lt;p&gt;On the surface, this makes sense. AI-assisted development is becoming standard practice. Companies want developers who can leverage these tools effectively. Fair enough.&lt;/p&gt;

&lt;p&gt;But think about what we’re actually asking.&lt;/p&gt;

&lt;p&gt;Yes, some AI coding tools have free tiers now. GitHub Copilot has one. Cursor has a free plan. But if you’re just starting out — fresh from school, finishing a bootcamp, or self-teaching — do you even know that? The AI tooling landscape is an overwhelming mess of options, hype, and conflicting advice. Experienced developers struggle to keep up with what’s worth using. How is someone who’s still learning what a REST API is supposed to navigate that?&lt;/p&gt;

&lt;p&gt;And the free tiers only get you so far. The tools that companies actually expect proficiency in — Copilot Pro, Cursor Pro, Claude Pro — cost $10 to $20 per month each. If you want a serious AI-assisted workflow, you’re looking at $30-50/month minimum. That might not sound like much to someone employed, but when you’re unemployed, every dollar matters. Asking someone without income to pay for premium AI tools so they can develop the skills needed to get a job is a catch-22.&lt;/p&gt;

&lt;p&gt;There are too many unknowns when you’re starting out. Every conversation about AI in development assumes a baseline of knowledge and context that juniors simply don’t have yet. And instead of helping them build that foundation, we’re adding it to the list of things they need to figure out on their own before we’ll even consider hiring them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Senior Shortage Nobody Sees Coming
&lt;/h2&gt;

&lt;p&gt;Here’s the part that should terrify every tech leader who’s currently celebrating their AI-powered lean engineering team: &lt;strong&gt;you’re eating your seed corn.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It takes roughly 7 to 10 years to develop a senior engineer. Not just someone with “senior” in their title — someone who can architect systems, mentor teams, make judgment calls under uncertainty, and understand the business context of technical decisions. That kind of expertise doesn’t come from tutorials or AI tools. It comes from years of making mistakes, shipping real products, debugging production incidents at 2 AM, and slowly building the intuition that separates an engineer from someone who writes code.&lt;/p&gt;

&lt;p&gt;If we’re not hiring juniors now, we won’t have mid-level engineers in 3-5 years. And we won’t have seniors in 7-10 years.&lt;/p&gt;

&lt;p&gt;The Stanford Digital Economy Lab data already shows it: employment for software developers aged 22-25 has dropped roughly 20% since late 2022, while developers over 26 remain stable. The two groups tracked perfectly until ChatGPT launched, then diverged sharply. We’re watching the pipeline dry up in real time.&lt;/p&gt;

&lt;p&gt;And here’s the irony that makes it worse: companies are cutting juniors because they believe AI replaces what juniors did. But the data tells a different story. Google’s DORA 2024 report found that a 25% increase in AI adoption translated to just a 2% productivity gain — while executives at those same companies were telling their boards that AI had boosted output by 25%. The gap between measured reality and executive perception is staggering, and companies are making structural hiring decisions based on that perception, not the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Juniors were never just “cheap labor” who wrote boilerplate.&lt;/strong&gt; They stress-tested documentation. They exposed hidden assumptions in systems. They forced seniors to articulate knowledge that would otherwise stay implicit. They built institutional memory.&lt;/p&gt;

&lt;p&gt;A senior with Copilot can write code faster, sure — but faster code was never the bottleneck.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Squeeze
&lt;/h2&gt;

&lt;p&gt;So let’s put it all together. If you’re a junior developer in 2026, here’s your reality:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;There are barely any jobs for you.&lt;/strong&gt; Entry-level hiring has collapsed. Companies want seniors only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The jobs that exist demand more than ever.&lt;/strong&gt; The few junior roles left aren’t really junior anymore — they want 2-3 years of experience, AI proficiency, and system design knowledge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The interview process is designed against you.&lt;/strong&gt; Months of LeetCode prep that teaches you nothing about real engineering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need tools you might not know exist or can’t afford.&lt;/strong&gt; AI proficiency is expected, but the landscape is overwhelming and the good stuff costs money you don’t have.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The anxiety is crushing.&lt;/strong&gt; The pressure to be the best, to stand out in a market with fewer openings and more candidates, is driving people out before they even start.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And the result? People are giving up. Not because they can’t code. Not because they’re not smart enough. Because the path from “I want to be a software developer” to actually being one has become so hostile, so expensive, and so demoralizing that it’s not worth it anymore.&lt;/p&gt;

&lt;p&gt;Can you blame them?&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Needs to Change
&lt;/h2&gt;

&lt;p&gt;I don’t have a clean five-point solution. Anyone who does is selling something. But I know what direction we should be moving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rethink interviews from the ground up.&lt;/strong&gt; Pair programming sessions, take-home projects with reasonable time limits, portfolio reviews, trial periods — there are better ways to assess ability than making people perform algorithms under pressure. If your interview process can’t distinguish between a great engineer who interviews poorly and a mediocre one who interviews well, the process is broken. Not the candidate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in juniors as a strategic decision, not charity.&lt;/strong&gt; The companies that hire and develop juniors now will have the experienced engineers everyone else is desperate for in 2030. A handful of companies are already doubling down on junior hiring. They’re not being generous — they’re playing the long game while everyone else optimizes for this quarter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop pretending AI replaced the junior role.&lt;/strong&gt; It replaced the boilerplate. The questions juniors ask, the assumptions they challenge, the documentation they stress-test — that’s not automatable. If your team stopped growing because you thought Copilot could replace a curious 23-year-old, you’re going to feel that decision in five years.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Question
&lt;/h2&gt;

&lt;p&gt;The software industry has spent the last two decades complaining about a talent shortage. And now, faced with the largest pool of motivated CS graduates and career switchers in history, we’ve decided the best strategy is to lock the door and let AI handle it.&lt;/p&gt;

&lt;p&gt;If senior engineers can’t pass technical interviews, if junior roles demand senior skills, if the tools you need are buried in a landscape designed for people who already know what they’re doing, and if the entire process optimizes for performance over competence — then the pipeline isn’t just broken. We broke it. Deliberately, through a thousand small decisions that each seemed reasonable in isolation but collectively created a system that’s eating its own future.&lt;/p&gt;

&lt;p&gt;The question isn’t whether this will catch up with us. It’s whether we’ll have anyone left in the pipeline to fix it when it does.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/junior-developer-pipeline-broken/" rel="noopener noreferrer"&gt;How We Made It Nearly Impossible to Become a Developer&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>careergrowth</category>
    </item>
    <item>
      <title>The AI Productivity Lie Nobody Wants to Admit</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sat, 28 Mar 2026 12:37:32 +0000</pubDate>
      <link>https://dev.to/pacheco/the-ai-productivity-lie-nobody-wants-to-admit-4a6o</link>
      <guid>https://dev.to/pacheco/the-ai-productivity-lie-nobody-wants-to-admit-4a6o</guid>
      <description>&lt;p&gt;I’ve been producing bad code. And it’s not because I forgot how to code.&lt;/p&gt;

&lt;p&gt;I’ve tried every workflow. Terminal agents, IDE copilots, full vibe coding, augmented coding. I keep exploring because that’s what I do — evaluate, keep what works, move on from what doesn’t.&lt;/p&gt;

&lt;p&gt;But here’s where I am right now: most of the time, it’s still more reliable for me to write the code myself than to let the agent do it.&lt;/p&gt;

&lt;p&gt;When the AI writes it, yes — it’s faster sometimes. But then I review it. I find things. I correct things. And suddenly I’m in a loop where I’m either spending more time than if I just did it myself, or the same time doing a more tedious version of the work. I’m not writing code anymore. I’m auditing code I didn’t write and don’t fully trust.&lt;/p&gt;

&lt;p&gt;And I’m starting to wonder what that’s doing to my engineering skills. Not because AI replaced them. Because the constant pressure to delegate everything is pulling me away from the deep thinking that built those skills in the first place.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pressure Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;The expectation to be dramatically more productive with AI is real. It’s coming from management, from Twitter, from inside your own head. Every demo makes it look like everyone else figured it out and you’re falling behind.&lt;/p&gt;

&lt;p&gt;And yeah — AI can be a huge boost. But only if you have perfect project structure, perfect product context, perfect documentation.&lt;/p&gt;

&lt;p&gt;That doesn’t exist. Not in any real codebase I’ve ever worked on.&lt;/p&gt;

&lt;p&gt;You know what exists? Legacy code. Tech debt. Patterns that were “temporary” three years ago. Business logic that lives in someone’s head and nowhere else.&lt;/p&gt;

&lt;p&gt;When you point an AI agent at that, it doesn’t fix the problems. It copies them. It amplifies them. It confidently reproduces your worst patterns at scale.&lt;/p&gt;

&lt;p&gt;So now you’re not just dealing with tech debt. You’re dealing with AI-generated tech debt that looks clean because the agent formatted it nicely.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Data That Should Make You Uncomfortable
&lt;/h2&gt;

&lt;p&gt;Here’s what made me feel less crazy about all of this. And each study hits harder than the last.&lt;/p&gt;

&lt;h3&gt;
  
  
  You’re Not Even Faster
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR&lt;/a&gt;, a nonprofit research organization, ran what might be the most rigorous study on this topic to date. They took 16 experienced open-source developers — people who maintain large repositories, averaging 22,000+ stars and over a million lines of code — and had them work on real issues in their own codebases. Real bugs, real features, real refactors. Not toy problems.&lt;/p&gt;

&lt;p&gt;Half the time they could use AI. Half the time they couldn’t.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The result? When developers used AI tools, they took 19% longer to complete their tasks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not faster. Slower.&lt;/p&gt;

&lt;p&gt;But here’s the part that should genuinely unsettle you: before the study, these developers predicted AI would make them 24% faster. After using it — after actually experiencing the slowdown — they still believed AI had made them 20% faster.&lt;/p&gt;

&lt;p&gt;19% slower. Felt 20% faster. &lt;strong&gt;A nearly 40-percentage-point gap between perception and reality.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is what I call the speed mirage. You feel like you’re flying. The data says you’re walking backwards. And you can’t even tell.&lt;/p&gt;

&lt;h3&gt;
  
  
  You Understand Less of What You Ship
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;, the company that makes Claude, ran a randomized controlled trial on their own tool. Developers using AI assistance scored &lt;strong&gt;17% lower on comprehension tests&lt;/strong&gt; compared to developers who coded manually. The AI group finished slightly faster, but that speed difference wasn’t even statistically significant.&lt;/p&gt;

&lt;p&gt;Marginal speed gain. Real understanding loss.&lt;/p&gt;

&lt;p&gt;It gets worse. The developers who delegated code generation to AI scored below 40% on comprehension. The ones who used AI for conceptual questions — asking “why” and “how does this work” — scored above 65%.&lt;/p&gt;

&lt;p&gt;Same tool. Completely different outcomes depending on how you used it. And the biggest gap was in debugging — the skill you need most when things break in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Industry Already Knows
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://survey.stackoverflow.co/2025/ai" rel="noopener noreferrer"&gt;Stack Overflow’s 2025 Developer Survey&lt;/a&gt;: 84% of developers use or plan to use AI tools, but only 33% trust the output. Down from 43% the year before.&lt;/p&gt;

&lt;p&gt;Adoption up. Trust down.&lt;/p&gt;

&lt;p&gt;So we’re not faster. We understand less. And we don’t even trust what we ship. But we keep using it because everyone else seems to have figured it out.&lt;/p&gt;




&lt;h2&gt;
  
  
  Throughput vs. Confidence
&lt;/h2&gt;

&lt;p&gt;Let me name the thing clearly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The industry is optimizing for throughput when it should be optimizing for confidence.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Throughput is lines of code generated. PRs opened. Features “shipped.” It’s the metric that looks good on a dashboard and falls apart in production.&lt;/p&gt;

&lt;p&gt;Confidence is different. Do I understand what this code does? Do I trust it handles the edge cases? Can I debug it at 2 AM when something breaks?&lt;/p&gt;

&lt;p&gt;Vibe coding optimizes for throughput. You get a dopamine spike. You feel productive. And then you spend the rest of the day cleaning up after the machine.&lt;/p&gt;

&lt;p&gt;I’m not anti-AI. I use it every day. It’s incredible for researching tradeoffs, validating ideas, catching things I missed in review. When I use AI as a thinking partner, it genuinely makes me better.&lt;/p&gt;

&lt;p&gt;But when I use it as a coding replacement, it makes my output worse. And that’s the gap the industry isn’t willing to talk about.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Confidence Boundary
&lt;/h2&gt;

&lt;p&gt;So where does this leave us?&lt;/p&gt;

&lt;p&gt;I don’t have the perfect workflow. I’m not going to pretend I do. But I’ve been paying attention to what actually works, and the pattern is consistent.&lt;/p&gt;

&lt;p&gt;The developers getting the best results from AI aren’t the ones who figured out the perfect prompt. They’re the ones who figured out &lt;strong&gt;what to delegate and what to keep.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ve been calling this the confidence boundary.&lt;/p&gt;

&lt;p&gt;Here’s a real example. I had a feature to build recently. Instead of just opening the terminal and prompting the agent to do it, I stopped. I wrote the spec first. A clear, detailed explanation of what needed to be accomplished. The edge cases that the implementation had to survive. The constraints. The things I explicitly didn’t want.&lt;/p&gt;

&lt;p&gt;Then I handed that to the agent and let it implement against my spec.&lt;/p&gt;

&lt;p&gt;Because I did the thinking upfront, reviewing the output took minutes instead of hours. I knew exactly what should be there and what shouldn’t.&lt;/p&gt;

&lt;p&gt;But here’s the thing nobody tells you — and this is where it gets uncomfortable.&lt;/p&gt;

&lt;p&gt;To get a good result from the agent, you have to be &lt;em&gt;very&lt;/em&gt; specific. You’re writing a detailed spec, thinking through edge cases, defining constraints. And at some point you realize: &lt;strong&gt;for certain features, you’ve already done most of the hard work.&lt;/strong&gt; The thinking &lt;em&gt;is&lt;/em&gt; the work.&lt;/p&gt;

&lt;p&gt;At that point, it’s genuinely faster to just write the code yourself and use the agent as a pair reviewer to make sure you’re on the right track.&lt;/p&gt;

&lt;p&gt;Other times — boilerplate, scaffolding, repetitive patterns, implementations where the spec is clear and the risk is low — full delegation is absolutely the move. Hand the agent the guidance and let it run.&lt;/p&gt;

&lt;p&gt;The real skill isn’t prompting. It’s learning what to delegate and what to keep. And that judgment comes from understanding your codebase, the complexity of the task, and honestly — how much you trust the output for that specific context.&lt;/p&gt;

&lt;p&gt;The messier your codebase — legacy code, real tech debt, patterns with history the agent will never know — the more that judgment matters. The tooling is irrelevant. Neovim, Cursor, whatever. &lt;strong&gt;The bottleneck is you.&lt;/strong&gt; Whether you know where your confidence boundary is and whether you’re honest about it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bet I’m Making
&lt;/h2&gt;

&lt;p&gt;If you feel like AI is making you more productive but less confident in what you ship — you’re not falling behind. You’re paying attention.&lt;/p&gt;

&lt;p&gt;If your engineering skills feel soft because you keep delegating instead of thinking — that’s not paranoia. The research says it’s real.&lt;/p&gt;

&lt;p&gt;The speed mirage is powerful. It feels like progress. The dashboards say it’s progress. But if you can’t explain what you shipped, debug it when it breaks, or trust it handles the edge cases — that’s not progress. That’s debt with a nice commit message.&lt;/p&gt;

&lt;p&gt;I’m not quitting AI. I’m quitting the lie that it makes everything faster.&lt;/p&gt;

&lt;p&gt;The developers who are going to thrive aren’t the ones who ship more code. They’re the ones who learned what to keep and what to let go. Who built the judgment for when to delegate and when to do the work themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confidence over throughput.&lt;/strong&gt; That’s the bet I’m making.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR Study — AI slows experienced developers by 19%&lt;/a&gt; (&lt;a href="https://arxiv.org/abs/2507.09089" rel="noopener noreferrer"&gt;arXiv paper&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;Anthropic Study — 17% comprehension loss with AI assistance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://survey.stackoverflow.co/2025/ai" rel="noopener noreferrer"&gt;Stack Overflow 2025 Developer Survey — Trust declining&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://addyo.substack.com/p/avoiding-skill-atrophy-in-the-age" rel="noopener noreferrer"&gt;Addy Osmani on Skill Atrophy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.turingcollege.com/blog/agentic-engineering-vs-vibe-coding" rel="noopener noreferrer"&gt;Karpathy on Agentic Engineering&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.thoughtworks.com/en-us/insights/blog/agile-engineering-practices/spec-driven-development-unpacking-2025-new-engineering-practices" rel="noopener noreferrer"&gt;Thoughtworks on Spec-Driven Development&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/the-ai-productivity-lie-nobody-wants-to-admit-2/" rel="noopener noreferrer"&gt;The AI Productivity Lie Nobody Wants to Admit&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developmentbestpract</category>
      <category>aicoding</category>
    </item>
    <item>
      <title>A Tale of Accidental Architecture: How 50 Lines Became A Black Friday Disaster</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Fri, 27 Feb 2026 04:26:03 +0000</pubDate>
      <link>https://dev.to/pacheco/a-tale-of-accidental-architecture-how-50-lines-became-a-black-friday-disaster-25cc</link>
      <guid>https://dev.to/pacheco/a-tale-of-accidental-architecture-how-50-lines-became-a-black-friday-disaster-25cc</guid>
      <description>&lt;p&gt;Let me tell you about Sarah.&lt;/p&gt;

&lt;p&gt;This is a fictional story. But I bet you’ll recognize it.&lt;/p&gt;

&lt;p&gt;I’ve seen this pattern play out across different companies, different teams, different tech stacks.  &lt;strong&gt;The details change. The progression doesn’t.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 1: The Perfect Start
&lt;/h2&gt;

&lt;p&gt;Sarah’s building a notification system for an e-commerce platform.&lt;/p&gt;

&lt;p&gt;First requirement: send an email when someone places an order.&lt;/p&gt;

&lt;p&gt;Simple. She writes one function. Webhook comes in, format the email, hit SMTP, done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The whole thing is maybe 50 lines.&lt;/strong&gt;  It works perfectly. Code review approves it. It ships.&lt;/p&gt;
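&lt;p&gt;If that 50-line handler were Python (pure guesswork on my part: the story never names a language, and every field and host name here is invented), it might look something like this:&lt;/p&gt;

```python
# A hypothetical sketch of Sarah's Week 1 handler: webhook in,
# format the email, hit SMTP, done. All payload fields are invented.
import smtplib
from email.message import EmailMessage

def format_order_email(webhook: dict) -> EmailMessage:
    """Turn the order webhook payload into a confirmation email."""
    msg = EmailMessage()
    msg["From"] = "orders@example.com"
    msg["To"] = webhook["customer_email"]
    msg["Subject"] = f"Order #{webhook['order_id']} confirmed"
    msg.set_content(
        f"Hi {webhook['customer_name']},\n\n"
        f"Your order #{webhook['order_id']} for ${webhook['total']:.2f} "
        "has been placed."
    )
    return msg

def handle_order_placed(webhook: dict, smtp_host: str = "localhost") -> None:
    """The whole Week 1 feature: one function, one channel, no abstraction."""
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(format_order_email(webhook))
```

&lt;p&gt;Nothing here needs an abstraction yet. That’s exactly the point.&lt;/p&gt;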

&lt;p&gt;Sarah’s thinking: &lt;em&gt;“It’s just one notification type. I’ll add proper abstraction when we actually need it.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’ve thought this too. So have I.&lt;/p&gt;

&lt;p&gt;Nothing wrong with it. Week 1, this is the right call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 3: The First Copy-Paste
&lt;/h2&gt;

&lt;p&gt;Product team loves the email notifications. Now they want SMS for order shipments.&lt;/p&gt;

&lt;p&gt;Mike picks up the ticket.&lt;/p&gt;

&lt;p&gt;He opens Sarah’s code. Sees the pattern.  &lt;strong&gt;Makes sense.&lt;/strong&gt;  He follows it.&lt;/p&gt;

&lt;p&gt;New handler. Receives the shipment webhook. Formats the SMS message. Connects to Twilio. Sends it.&lt;/p&gt;

&lt;p&gt;He copies some of Sarah’s email formatting logic because customers should see consistent information. Has to adjust it for the 160-character SMS limit, but the core logic is the same.&lt;/p&gt;

&lt;p&gt;Mike’s thinking: &lt;em&gt;“There’s some duplication with the email code, but SMS is different enough that abstracting it would be premature. It’s only two notification types.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deadline is tomorrow.&lt;/strong&gt;  This ships.&lt;/p&gt;

&lt;p&gt;Still nothing catastrophically wrong here. Two types, small duplication, it’s manageable.&lt;/p&gt;

&lt;p&gt;Right?&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 5: User Preferences
&lt;/h2&gt;

&lt;p&gt;Customers start complaining.&lt;/p&gt;

&lt;p&gt;“I don’t want SMS notifications.”&lt;/p&gt;

&lt;p&gt;“Why am I getting emails for every status change?”&lt;/p&gt;

&lt;p&gt;Sarah adds user preferences. Creates a database table. Updates her email handler to check if the user wants that particular notification before sending.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The handler triples in size.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Query the database. Check multiple preference flags. Handle the case where preferences don’t exist yet. Default values. Edge cases.&lt;/p&gt;
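&lt;p&gt;Here’s roughly what that growth looks like in code (a sketch only: the table, columns, and default values are all invented for illustration):&lt;/p&gt;

```python
# Hypothetical sketch of the Week 5 preference check that tripled
# Sarah's handler. Table and column names are invented.
import sqlite3

# Defaults for users who have never saved preferences
DEFAULT_PREFS = {"order_email": True, "status_email": False}

def get_email_prefs(conn: sqlite3.Connection, user_id: int) -> dict:
    """Query the database, handle the missing-row case, apply defaults."""
    row = conn.execute(
        "SELECT order_email, status_email FROM notification_prefs "
        "WHERE user_id = ?",
        (user_id,),
    ).fetchone()
    if row is None:
        # Edge case: the user's preferences don't exist yet
        return dict(DEFAULT_PREFS)
    return {"order_email": bool(row[0]), "status_email": bool(row[1])}

def should_send(conn: sqlite3.Connection, user_id: int, kind: str) -> bool:
    """Unknown notification kinds default to False, not to spam."""
    return get_email_prefs(conn, user_id).get(kind, False)
```

&lt;p&gt;Every one of those branches is reasonable on its own. Together, they triple the handler.&lt;/p&gt;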

&lt;p&gt;Sarah’s thinking: &lt;em&gt;“This is getting messy, but the deadline is tomorrow and this works. I’ll refactor it next sprint.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I cannot tell you how many times I’ve heard “next sprint.”&lt;/p&gt;

&lt;p&gt;(Spoiler: next sprint never comes.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzgu3hiae5nlit5zn7l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzgu3hiae5nlit5zn7l.gif" alt="This is fine dog meme - developer ignoring growing problems" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 7: Two Ways to Do Everything
&lt;/h2&gt;

&lt;p&gt;Mike needs to add notifications for order cancellations and delivery confirmations.&lt;/p&gt;

&lt;p&gt;He realizes hardcoding email bodies isn’t going to scale.&lt;/p&gt;

&lt;p&gt;So he builds a template system. Creates a templates directory. Writes a simple renderer. Updates his handlers to load templates, populate data, send.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s actually pretty clean.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Meanwhile, Sarah’s handlers still use string formatting. She doesn’t know Mike built a template system. Mike didn’t announce it in Slack. It just… exists now.&lt;/p&gt;

&lt;p&gt;The codebase now has  &lt;strong&gt;two different ways of generating notification content.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sarah finds out later. Thinks: &lt;em&gt;“I should probably switch to Mike’s templates… but my code is working and I’m slammed with other features.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And she is. Three new features this sprint. No time to refactor working code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 9: The Third Approach
&lt;/h2&gt;

&lt;p&gt;Emma joins the team.&lt;/p&gt;

&lt;p&gt;First task: add Slack notifications for the support team when high-value orders come in.&lt;/p&gt;

&lt;p&gt;She opens the notification code. Finds Sarah’s inline approach. Finds Mike’s templates.  &lt;strong&gt;Neither makes sense for Slack.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Slack needs structured JSON payloads, not formatted text.&lt;/p&gt;

&lt;p&gt;So Emma does what any good engineer would do: she creates a  &lt;strong&gt;“proper solution”.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Notification service class. Methods for each notification type. Handles destination-specific formatting internally. Clean. Testable. Well-designed.&lt;/p&gt;
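&lt;p&gt;A sketch of what Emma’s service might look like (the class shape and the Slack payload fields are my guess, not the story’s):&lt;/p&gt;

```python
# Hypothetical sketch of Emma's notification service: one method per
# notification type, destination-specific formatting kept internal.
import json
from typing import Callable

class NotificationService:
    def __init__(self, slack_poster: Callable[[str], None]):
        # The poster is injected (an HTTP call in production),
        # which is what makes the class testable.
        self._post_to_slack = slack_poster

    def high_value_order(self, order: dict) -> None:
        """Notify the support team; Slack wants structured JSON, not text."""
        text = (
            f"High-value order #{order['order_id']} "
            f"(${order['total']:.2f}) just came in."
        )
        self._post_to_slack(self._slack_payload(text))

    def _slack_payload(self, text: str) -> str:
        # Formatting details live inside the service, not in callers
        return json.dumps(
            {"blocks": [{"type": "section",
                         "text": {"type": "mrkdwn", "text": text}}]}
        )
```

&lt;p&gt;Clean. Testable. And used for exactly one channel, because nobody has time to migrate the other two.&lt;/p&gt;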

&lt;p&gt;She shows it to the team in standup.&lt;/p&gt;

&lt;p&gt;Mike: &lt;em&gt;“That’s nice, but I don’t have time to refactor my SMS code right now. Maybe later.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Sarah: &lt;em&gt;“I like it, but my code has been running in production for months. If it ain’t broke…”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Emma’s service class gets used for Slack notifications.  &lt;strong&gt;Nothing else changes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now there are three ways to send notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj49li7kl1vg74rp29yei.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj49li7kl1vg74rp29yei.gif" alt="Spider-Man pointing meme - three developers with different approaches" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 12: The Chaos Compounds
&lt;/h2&gt;

&lt;p&gt;Product wants:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Push notifications for the mobile app&lt;/li&gt;
&lt;li&gt;Digest emails (daily order summaries)&lt;/li&gt;
&lt;li&gt;Ability to snooze notifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three developers. Three features. Same week.&lt;/p&gt;

&lt;p&gt;Each one discovers the existing fragmentation. Each one makes their own call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer A&lt;/strong&gt;  tries to extend Sarah’s inline approach. Adds push notification logic directly in the handler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer B&lt;/strong&gt;  uses Mike’s templates but creates a  &lt;strong&gt;new template format&lt;/strong&gt;  because the existing one doesn’t support digest layouts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer C&lt;/strong&gt;  tries to use Emma’s service class but realizes it doesn’t handle scheduling or snoozing. So they add that logic directly in their handler instead.&lt;/p&gt;

&lt;p&gt;The notification preferences table is now being updated by  &lt;strong&gt;five different code paths.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each developer added their own columns because they didn’t realize others had added similar fields. One stores preferences as JSON. Another uses boolean columns. Another created a  &lt;strong&gt;separate preferences table&lt;/strong&gt;  with foreign keys.&lt;/p&gt;

&lt;p&gt;I’ve seen this code review happen. Every PR gets approved. Every piece of code works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nobody did anything wrong.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And yet.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Every PR got approved. Every piece of code worked. Nobody did anything wrong. And yet.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Week 15: Customer Complaints
&lt;/h2&gt;

&lt;p&gt;Support tickets start flooding in.&lt;/p&gt;

&lt;p&gt;“I’m getting duplicate notifications.”&lt;/p&gt;

&lt;p&gt;“I disabled email but I’m still getting them.”&lt;/p&gt;

&lt;p&gt;“I’m not getting notifications at all for important orders.”&lt;/p&gt;

&lt;p&gt;Sarah investigates. Opens the codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Six different code paths handle notifications.&lt;/strong&gt;  Some check preferences before sending. Some check during sending. Some don’t check at all because the developer assumed another layer was handling it.&lt;/p&gt;

&lt;p&gt;She finds the bug. It’s in her original email handler. The preference check is wrong.&lt;/p&gt;

&lt;p&gt;She fixes it. Deploys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three other notification types break.&lt;/strong&gt;  They were relying on her buggy behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapyzb9u5k536v4qidesn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapyzb9u5k536v4qidesn.gif" alt="Domino effect - one bug fix breaks three other features" width="480" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The team’s verdict on fixing it properly: &lt;em&gt;“We need to stop and refactor everything first, or we’ll just make it worse.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Management: “We don’t have time for a refactor. Just fix the bugs.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 17: The Template Nightmare
&lt;/h2&gt;

&lt;p&gt;Marketing wants to update email designs. New brand guidelines.&lt;/p&gt;

&lt;p&gt;The developer assigned to this opens the codebase.&lt;/p&gt;

&lt;p&gt;Templates are  &lt;strong&gt;everywhere.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some in a &lt;code&gt;/templates&lt;/code&gt; directory. Some hardcoded as strings. Some in the database. Some fetched from an external CMS that one developer integrated without telling anyone.&lt;/p&gt;

&lt;p&gt;There’s no single source of truth.&lt;/p&gt;

&lt;p&gt;Worse: the data passed to templates is completely inconsistent.&lt;/p&gt;

&lt;p&gt;Email templates expect order objects with certain fields. SMS templates expect a flattened structure. Push notifications expect a completely different format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One design change requires touching dozens of files.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The developer estimates: “Two weeks, maybe three.”&lt;/p&gt;

&lt;p&gt;Marketing: “It’s just a design update. How is that two weeks?”&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 20: Performance Crisis
&lt;/h2&gt;

&lt;p&gt;Black Friday.&lt;/p&gt;

&lt;p&gt;The system crashes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlrdt9uxnezf4h3w7irz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlrdt9uxnezf4h3w7irz.gif" alt="Everything is on fire - Black Friday system crash" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Investigation reveals: notification handlers are opening new database connections for  &lt;strong&gt;every single notification sent.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some handlers properly close connections. Some don’t.&lt;/p&gt;

&lt;p&gt;Connection pools exhausted. Some handlers retry failed sends immediately and indefinitely,  &lt;strong&gt;amplifying the problem during the outage.&lt;/strong&gt;  One handler spawns a goroutine for each notification but never limits concurrency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The server runs out of memory processing a batch of 10,000 order confirmations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Different developers made different assumptions about error handling.&lt;/p&gt;

&lt;p&gt;Some swallow errors, logging them and carrying on. Some retry with exponential backoff. Some fail fast. Some store failed notifications in one database table for retry. Others use a different table. One developer integrated a third-party queue system  &lt;strong&gt;that nobody else knew existed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Notifications are getting lost between these systems.&lt;/p&gt;

&lt;p&gt;I’ve been on calls where the CTO asks: “How many notification systems do we have?”&lt;/p&gt;

&lt;p&gt;Nobody can answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 24: The Audit
&lt;/h2&gt;

&lt;p&gt;Compliance team asks a simple question:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Can you show us a record of all notifications sent to customer X in the past 90 days?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The team cannot answer this.&lt;/p&gt;

&lt;p&gt;Notification logs are  &lt;strong&gt;scattered everywhere.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some handlers log to stdout. Some to files. Some to a database table. Some don’t log at all.&lt;/p&gt;

&lt;p&gt;The log formats are completely different. Some include the full message content. Some just log “notification sent” without details. There’s no correlation between the notification and the triggering event.&lt;/p&gt;

&lt;p&gt;The auditor asks: &lt;em&gt;“How do you ensure notifications contain required legal disclosures?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Each template was created independently. Some include required legal text. Some don’t.  &lt;strong&gt;There’s no centralized enforcement.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ve seen this audit happen. Teams spend weeks reconstructing logs manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Breaking Point
&lt;/h2&gt;

&lt;p&gt;VP of Engineering asks for a simple feature:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Add an unsubscribe link to all emails.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The team estimates:  &lt;strong&gt;Three weeks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The VP is shocked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3dybxptm7pxoqd6d67t.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3dybxptm7pxoqd6d67t.gif" alt="Shocked reaction - three weeks to add an unsubscribe link?!" width="195" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“It’s just adding a link. How is that three weeks of work?”&lt;/p&gt;

&lt;p&gt;The tech lead explains:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“We have seven different code paths that send emails. Each uses a different templating system. Some render templates on the server. Some fetch them from external systems. Some are hardcoded strings. We need to update each one individually, ensure the unsubscribe logic is consistent across all of them, add tracking for unsubscribe events, update the preferences system to handle unsubscribes properly, and test everything thoroughly because there’s no centralized testing strategy.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Three weeks. For a link.&lt;/p&gt;

&lt;p&gt;The VP asks the obvious question:  &lt;strong&gt;“How did it get this bad?”&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Went Wrong?
&lt;/h2&gt;

&lt;p&gt;Here’s the thing that kills me about this story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nobody made a catastrophically bad decision.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sarah’s Week 1 implementation was appropriate. Mike’s template system was a reasonable improvement. Emma’s service class was a genuine attempt to bring order.&lt;/p&gt;

&lt;p&gt;Every single developer was trying to do good work under deadline pressure.&lt;/p&gt;

&lt;p&gt;The problem wasn’t the individual decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It was the absence of a shared architectural vision.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without clear boundaries and layers, each developer made reasonable local optimizations that created global chaos.&lt;/p&gt;

&lt;p&gt;The “I’ll refactor it later” moments never came because there was never a good time to stop feature development.&lt;/p&gt;

&lt;p&gt;The “let’s standardize this” conversations happened but never resulted in action because no one had time to migrate existing code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The codebase evolved organically.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And organic growth without structure doesn’t produce a garden. It produces a weed-infested lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  “But This Is Just a Communication Problem”
&lt;/h2&gt;

&lt;p&gt;You might be thinking: the real issue was that developers didn’t communicate.&lt;/p&gt;

&lt;p&gt;If Sarah and Mike had talked, they wouldn’t have built two different templating systems. If Emma had socialized her service class better, others would have adopted it.&lt;/p&gt;

&lt;p&gt;Better standups. Better code reviews. Better documentation.  &lt;strong&gt;That’s&lt;/strong&gt;  what was missing, not architecture.&lt;/p&gt;

&lt;p&gt;This is seductive because it’s partially true.&lt;/p&gt;

&lt;p&gt;But here’s why it misses the point:  &lt;strong&gt;architecture IS communication.&lt;/strong&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Architecture IS communication. It’s the most important form of communication for technical decisions.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;It’s the most important form of communication for technical decisions.&lt;/p&gt;

&lt;p&gt;Think about what actually happened in the story.&lt;/p&gt;

&lt;p&gt;The team  &lt;strong&gt;DID communicate.&lt;/strong&gt;  Mike showed his template system in code review. Emma presented her service class and got positive feedback. They had a meeting in Week 11 trying to align on standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The communication happened.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What didn’t happen was turning those conversations into durable, enforceable decisions.&lt;/p&gt;

&lt;p&gt;This is the key difference:&lt;/p&gt;

&lt;p&gt;Conversation says &lt;em&gt;“we should probably do X.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Architecture says &lt;em&gt;“X is how we do things here, and here’s where it lives.”&lt;/em&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Conversation is ephemeral. Architecture is the artifact that persists after the meeting ends.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;When a new developer joins and asks “where should notification logic go?”, the answer shouldn’t require scheduling a meeting or hunting through Slack history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It should be obvious from looking at the codebase.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Communication without architecture leads to the problem Emma faced. She built something good. People agreed it was good. And then…  &lt;strong&gt;nothing changed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without architectural decisions being explicitly made (&lt;em&gt;“from now on, all notifications go through NotificationService”&lt;/em&gt;), the good idea just becomes another option in an increasingly fragmented codebase.&lt;/p&gt;
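&lt;p&gt;What “all notifications go through NotificationService” buys you is a single place where preference checks and delivery logging live. A minimal sketch of the idea — the constructor arguments and log format here are hypothetical, chosen to keep the example self-contained:&lt;/p&gt;

```python
class NotificationService:
    # Single entry point: every notification type calls send(), so the
    # preference check and the delivery log exist in exactly one place.
    def __init__(self, preferences, deliver, log):
        self._preferences = preferences  # e.g. {("user-1", "email"): False}
        self._deliver = deliver          # channel-specific send function
        self._log = log                  # append-only delivery log

    def send(self, user_id, channel, message):
        # Check preferences BEFORE sending, never during or after.
        if not self._preferences.get((user_id, channel), True):
            self._log(("skipped", user_id, channel))
            return False
        self._deliver(user_id, channel, message)
        self._log(("sent", user_id, channel))
        return True
```

&lt;p&gt;With this in place, “did we notify customer X?” is one query against one log, instead of an archaeology project across six code paths.&lt;/p&gt;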

&lt;p&gt;Good communication can prevent chaos. But it can’t survive bad processes.&lt;/p&gt;

&lt;p&gt;When developers are under deadline pressure, working on different features, joining the team at different times,  &lt;strong&gt;communication will have gaps.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Architecture is the safety net for when communication fails.&lt;/p&gt;

&lt;p&gt;It’s the shared context that makes it possible to work somewhat independently without creating complete divergence.&lt;/p&gt;

&lt;p&gt;So yes, the team in our story could have communicated better.&lt;/p&gt;

&lt;p&gt;But the solution isn’t &lt;em&gt;“communicate more.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s “communicate the architecture and make it stick.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Document where things belong. Make architectural decisions explicit. Enforce them in code review. Build structure that persists beyond any individual conversation.&lt;/p&gt;

&lt;p&gt;Because at the end of the day, you can have all the Slack channels and standups and retros you want.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without a shared architectural foundation, you’re just having the same conversations over and over while the codebase continues to fragment.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Should Have Happened in Week 1
&lt;/h2&gt;

&lt;p&gt;Sarah should have spent 30 minutes writing this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Notification System Architecture

## Where Things Live
- All notification logic → services/notification_service.py
- Templates → templates/ directory (Jinja2 format)
- Preference checks → services/preference_service.py
- Delivery logging → notification_log table

## How to Add a New Notification Type
1. Add template to templates/
2. Add method to NotificationService
3. Log delivery attempt (success or failure)
4. Add tests to test_notification_service.py

## Error Handling
- Retries: 3 attempts with exponential backoff (1s, 2s, 4s)
- Failed sends → dead_letter_queue table
- All errors logged with correlation ID

## Preferences
- Check preferences BEFORE sending (not during)
- Default: all notifications enabled
- Unsubscribe → set all preferences to false

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;That’s it.&lt;/strong&gt;  30 minutes of work. Would have saved months of chaos.&lt;/p&gt;
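&lt;p&gt;And the retry policy in that document is small enough to centralize in one helper, so no handler ever invents its own. A sketch, assuming a &lt;code&gt;send_fn&lt;/code&gt; callable; the dead-letter write is left as a hypothetical hook:&lt;/p&gt;

```python
import time

def send_with_retry(send_fn, notification, attempts=3, base_delay=1.0, sleep=time.sleep):
    # Up to 3 attempts, backing off 1s, 2s between failures (per the doc above).
    for attempt in range(attempts):
        try:
            send_fn(notification)
            return True
        except Exception:
            if attempt + 1 == attempts:
                break
            sleep(base_delay * (2 ** attempt))
    # All attempts failed: this is where the dead_letter_queue write would go.
    return False
```

&lt;p&gt;The &lt;code&gt;sleep&lt;/code&gt; parameter is injected so the backoff is testable without actually waiting — the kind of detail that only stays consistent when there’s one helper instead of seven.&lt;/p&gt;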




&lt;p&gt;When Mike added SMS in Week 3, he would have known where to put it. When Emma added Slack in Week 9, she would have followed the existing pattern. When three developers worked simultaneously in Week 12, they would have made consistent decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not because they communicated better. Because the architecture communicated for them.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pattern You’ve Seen Before
&lt;/h2&gt;

&lt;p&gt;I’ve seen this exact pattern play out at least a dozen times.&lt;/p&gt;

&lt;p&gt;Different companies. Different tech stacks. Different teams. Different features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The pattern is always the same.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Week 1: Clean, working code.&lt;/p&gt;

&lt;p&gt;Week 3: Small duplication appears.&lt;/p&gt;

&lt;p&gt;Week 7: Multiple approaches emerge.&lt;/p&gt;

&lt;p&gt;Week 12: Chaos compounds.&lt;/p&gt;

&lt;p&gt;Month 6: Simple changes take weeks.&lt;/p&gt;

&lt;p&gt;The timeline varies. Sometimes it happens faster (AI accelerates it). Sometimes slower (a disciplined team delays it). But without architecture,  &lt;strong&gt;the destination is always the same.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Then AI Showed Up and Made Everything 10x Worse
&lt;/h2&gt;

&lt;p&gt;Everything I just described? It’s been happening for decades.&lt;/p&gt;

&lt;p&gt;Slow burn. Predictable. Manageable if you catch it early.&lt;/p&gt;

&lt;p&gt;Then 2024 happened.&lt;/p&gt;

&lt;p&gt;AI coding assistants arrived. And they turned architectural decay from a slow burn into a  &lt;strong&gt;wildfire.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AI Replicates. It Doesn’t Invent.
&lt;/h3&gt;

&lt;p&gt;Here’s what changed.&lt;/p&gt;

&lt;p&gt;When Mike needed to add SMS in Week 3, he opened Sarah’s code.  &lt;strong&gt;Looked at it.&lt;/strong&gt;  Made a decision. Maybe he copied the pattern. Maybe he tried something different.&lt;/p&gt;

&lt;p&gt;But he  &lt;strong&gt;thought&lt;/strong&gt;  about it.&lt;/p&gt;

&lt;p&gt;Now imagine Mike has Cursor. Or Copilot. Or Claude Code.&lt;/p&gt;

&lt;p&gt;He types: &lt;code&gt;// Add SMS notification for shipments&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The AI looks at the codebase. Sees Sarah’s pattern.  &lt;strong&gt;Instantly replicates it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Code appears. Mike reviews it. Looks good. Ships.&lt;/p&gt;

&lt;p&gt;He never even saw the architectural decision being made.&lt;/p&gt;

&lt;p&gt;The AI made it for him. Based on what already existed.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“AI doesn’t just copy your code. It copies your architecture. Even the accidental parts.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  The Speed and Scale Just Exploded
&lt;/h3&gt;

&lt;p&gt;Remember Week 12? Three developers, three features, three different approaches emerging over a week?&lt;/p&gt;

&lt;p&gt;With AI,  &lt;strong&gt;that’s Tuesday.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developer A asks AI for push notifications. AI sees Sarah’s inline handler. Copies it.&lt;/p&gt;

&lt;p&gt;Developer B asks AI for digest emails. AI sees Mike’s templates. Copies those.&lt;/p&gt;

&lt;p&gt;Developer C asks AI for snoozing. AI sees Emma’s service class. Copies that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All three features ship the same day.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But it’s not just faster. It’s  &lt;strong&gt;bigger.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pre-AI: 50-200 lines of code per day.&lt;/p&gt;

&lt;p&gt;With AI: 500-2000 lines in the same time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s 5-10x more code&lt;/strong&gt;  implementing patterns, creating variations, spreading duplication.&lt;/p&gt;

&lt;p&gt;You have two ways of checking preferences? AI propagates both. Three error handling approaches? AI replicates all three.  &lt;strong&gt;Every inconsistency becomes a seed that AI plants everywhere.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The notification system that took Sarah’s team 20 weeks to become unmaintainable?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With AI, you can get there in 4.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AI Can’t See What You Didn’t Write Down
&lt;/h3&gt;

&lt;p&gt;Here’s the fundamental problem.&lt;/p&gt;

&lt;p&gt;AI is  &lt;strong&gt;incredible&lt;/strong&gt;  at implementation. It can write clean, working code. It follows patterns. It handles edge cases.&lt;/p&gt;

&lt;p&gt;But it cannot  &lt;strong&gt;architect.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It can’t look at your codebase and think: &lt;em&gt;“Wait, this is getting fragmented. We should consolidate these patterns.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It can’t say: &lt;em&gt;“I see three different approaches here. Which one should I follow?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It just… picks one. Based on similarity to what you’re asking for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If your architecture is accidental, AI accelerates the accident.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Old Advice Is Now Dangerous
&lt;/h3&gt;

&lt;p&gt;The advice used to be: &lt;em&gt;“Don’t over-architect small projects. Start simple. Refactor when you need to.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That advice just became  &lt;strong&gt;dangerous.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With AI, “small projects” don’t stay small. They  &lt;strong&gt;explode.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By the time you realize you need to refactor, you have 10x more code to untangle.&lt;/p&gt;

&lt;p&gt;The window between “clean start” and “architectural debt crisis” collapsed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 1 decisions matter more than ever.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can’t afford to defer architecture anymore.&lt;/p&gt;

&lt;h3&gt;
  
  
  But Here’s the Good News
&lt;/h3&gt;

&lt;p&gt;The same force that amplifies chaos can amplify  &lt;strong&gt;order.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI replicates good patterns just as enthusiastically as bad ones.&lt;/p&gt;

&lt;p&gt;If you write that architecture document in Week 1. If you establish clear boundaries. If you make the “right way” obvious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI will follow it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consistently. Every single time. Across every feature.&lt;/p&gt;

&lt;p&gt;It will use your NotificationService. It will follow your template structure. It will implement your error handling exactly as specified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At scale. At speed. Without deviation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The chaos multiplier becomes a  &lt;strong&gt;consistency multiplier.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But only if you give it something consistent to multiply.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“AI doesn’t make architecture optional. It makes it mandatory.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;This is why the next post matters even more now.&lt;/p&gt;

&lt;p&gt;I’ll show you how to set up that architectural foundation  &lt;strong&gt;before&lt;/strong&gt;  you start generating code with AI.&lt;/p&gt;

&lt;p&gt;How to make the right patterns so obvious that AI can’t help but follow them.&lt;/p&gt;

&lt;p&gt;How to turn AI from an architectural time bomb into an architectural enforcement mechanism.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;In the next post, I’ll show you how to build that architectural foundation.&lt;/p&gt;

&lt;p&gt;Not some enterprise framework. Not over-engineered complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The simple, practical structure that prevents this chaos.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ll rebuild this exact notification system with clear boundaries, testable code, and patterns that guide developers toward consistency instead of fragmentation.&lt;/p&gt;

&lt;p&gt;You’ll see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where things live (and why)&lt;/li&gt;
&lt;li&gt;How to test without infrastructure&lt;/li&gt;
&lt;li&gt;How to make architectural decisions stick&lt;/li&gt;
&lt;li&gt;How AI helps instead of amplifying chaos&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Until then, look at your codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What week are you on?&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you lived through this story? I’d love to hear about it. Find me on &lt;a href="https://twitter.com/pachecot" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://linkedin.com/in/pachecothiago" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/a-tale-of-accidental-architecture-how-50-lines-became-a-black-friday-disaster/" rel="noopener noreferrer"&gt;A Tale of Accidental Architecture: How 50 Lines Became A Black Friday Disaster&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>uncategorized</category>
      <category>accidentalarchitectu</category>
      <category>codequality</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Nobody Knows How to Estimate Software Anymore (And It’s Not Your Fault)</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 15 Feb 2026 16:12:00 +0000</pubDate>
      <link>https://dev.to/pacheco/nobody-knows-how-to-estimate-software-anymore-and-its-not-your-fault-3d1a</link>
      <guid>https://dev.to/pacheco/nobody-knows-how-to-estimate-software-anymore-and-its-not-your-fault-3d1a</guid>
      <description>&lt;p&gt;Here’s a pattern I keep seeing (and living):&lt;/p&gt;

&lt;p&gt;A feature that would have taken 2-3 weeks gets estimated at “2 days with AI.”&lt;/p&gt;

&lt;p&gt;It ships in 4. Sometimes 5.&lt;/p&gt;

&lt;p&gt;Not because the AI was slow. Because there are three other “2-day AI projects” running simultaneously. Each one spiraling into bugs, edge cases, and integration issues nobody saw coming. Context-switching between half-finished features, fighting fires, somehow falling behind on all of them.&lt;/p&gt;

&lt;p&gt;The AI writes code faster than any human could.&lt;/p&gt;

&lt;p&gt;But we’re not shipping faster. We’re drowning.&lt;/p&gt;

&lt;p&gt;And here’s the uncomfortable part: we’re doing this to ourselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Promise vs The Reality
&lt;/h2&gt;

&lt;p&gt;You’ve seen the headlines. AI coding tools boost productivity by 39%. Developers are shipping faster than ever. The future is here.&lt;/p&gt;

&lt;p&gt;What they don’t tell you is what happens next.&lt;/p&gt;

&lt;p&gt;Your manager sees those numbers too. And if AI makes you 39% more productive, why can’t you handle 60% more work? Why are estimates still slipping? Why are bugs still happening?&lt;/p&gt;

&lt;p&gt;The math doesn’t add up. And you’re stuck in the middle trying to explain why “AI writes the code” doesn’t mean “features appear instantly.”&lt;/p&gt;

&lt;p&gt;Here’s what the data actually shows:&lt;/p&gt;

&lt;p&gt;A UC Berkeley study found that &lt;strong&gt;AI doesn’t reduce work, it intensifies it.&lt;/strong&gt; One developer they interviewed said it perfectly: “You thought maybe you’d work less with AI. But you don’t work less. You just work the same amount or even more.”&lt;/p&gt;

&lt;p&gt;TechCrunch reported last week: teams adopting AI workflows saw &lt;strong&gt;expectations triple, stress triple, but actual productivity only go up by maybe 10%.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And here’s the kicker: a METR study found developers &lt;em&gt;expected&lt;/em&gt; AI to speed them up by 24%. In reality? &lt;strong&gt;It slowed them down.&lt;/strong&gt; But they still &lt;em&gt;believed&lt;/em&gt; it made them 20% faster.&lt;/p&gt;

&lt;p&gt;The gap between perception and reality is dangerous. And most of us are living in it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The High Expectations Problem
&lt;/h2&gt;

&lt;p&gt;This isn’t just coming from management.&lt;/p&gt;

&lt;p&gt;Yes, leadership hears “AI can write massive amounts of code” and expects you to prompt your way through multiple features in no time. First try, maybe second. They don’t understand how AI actually works.&lt;/p&gt;

&lt;p&gt;But here’s the honest truth: &lt;strong&gt;we don’t either.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I thought I did. I thought “AI writes the code, I review it, we ship it” was the workflow. But that’s not what happens.&lt;/p&gt;

&lt;p&gt;What happens is: AI writes the code. I start reviewing it. I find issues. I ask the AI to fix them. It creates new issues. I start another feature while waiting. That one has issues too. Now I’m juggling three half-finished features, each with its own set of AI-generated bugs I’m trying to understand and fix.&lt;/p&gt;

&lt;p&gt;The promise was velocity. The reality is fragmentation.&lt;/p&gt;

&lt;p&gt;And I keep saying yes to more because “it’s just AI, how hard can it be?” But the cognitive load of reviewing, validating, debugging, and integrating AI code across multiple parallel tracks is crushing.&lt;/p&gt;

&lt;p&gt;Every feature &lt;em&gt;looks&lt;/em&gt; 80% done. None of them actually ship.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Estimation Trap
&lt;/h2&gt;

&lt;p&gt;Here’s the confession I don’t want to make: &lt;strong&gt;I have no idea how to estimate tasks anymore.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers have always struggled with estimation. We underestimate until we get burned enough times to realize “add a simple feature” is never simple when there’s an existing codebase involved.&lt;/p&gt;

&lt;p&gt;But AI broke our calibration completely.&lt;/p&gt;

&lt;p&gt;A task that used to take 3 days now takes… 2 hours? 4 days? Both? Neither? It depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How well I can describe what I want&lt;/li&gt;
&lt;li&gt;How many edge cases the AI misses&lt;/li&gt;
&lt;li&gt;How much integration complexity exists&lt;/li&gt;
&lt;li&gt;Whether the AI understands the existing patterns&lt;/li&gt;
&lt;li&gt;How many iterations it takes to get it right&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So now I swing between two extremes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Under-estimating&lt;/strong&gt; because “AI will handle it” — then spending 3 days debugging what the AI generated in 20 minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Over-estimating&lt;/strong&gt; because “who knows what AI will break” — then looking slow when it actually works the first time.&lt;/p&gt;

&lt;p&gt;The old rules don’t apply. New ones haven’t emerged. And this isn’t just an academic problem:&lt;/p&gt;

&lt;p&gt;Sprint planning becomes guesswork. Roadmaps turn into fiction. Technical debt compounds faster than we can track it. Trust erodes when commitments slip repeatedly.&lt;/p&gt;

&lt;p&gt;When someone asks “how long will this take?” the honest answer is often “I don’t know anymore.”&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Costs Nobody’s Talking About
&lt;/h2&gt;

&lt;p&gt;The expectation problem is obvious once you see it. But there are other traps hiding underneath:&lt;/p&gt;

&lt;h3&gt;
  
  
  You’re Not Writing Code Anymore. You’re Validating It.
&lt;/h3&gt;

&lt;p&gt;One researcher described it perfectly: “A senior developer with Copilot doesn’t become a code-writing machine. They become a &lt;strong&gt;code-validation machine&lt;/strong&gt;.”&lt;/p&gt;

&lt;p&gt;When AI generates 40% more code, you have 40% more code to review. But reviewing AI code is different from reviewing human code. Humans make predictable mistakes. AI makes plausible-sounding nonsense that looks right until you run it.&lt;/p&gt;

&lt;p&gt;Context switching between your work and reviewing AI output costs 20-30% of your focus &lt;em&gt;per switch&lt;/em&gt;. When you’re juggling multiple AI-started features, you’re switching constantly.&lt;/p&gt;

&lt;p&gt;You’re not more productive. You’re just more exhausted.&lt;/p&gt;

&lt;h3&gt;
  
  
  You Say Yes to Everything
&lt;/h3&gt;

&lt;p&gt;AI makes tasks that used to be “too expensive” feel trivial. So you say yes to things you would have declined or delegated.&lt;/p&gt;

&lt;p&gt;“Can you add that dashboard feature?”&lt;br&gt;&lt;br&gt;
“Sure, AI can knock that out.”&lt;/p&gt;

&lt;p&gt;“Can you refactor that module?”&lt;br&gt;&lt;br&gt;
“Yeah, should be quick with AI.”&lt;/p&gt;

&lt;p&gt;“Can you investigate that performance issue?”&lt;br&gt;&lt;br&gt;
“I’ll have AI profile it.”&lt;/p&gt;

&lt;p&gt;Harvard Business Review calls this &lt;strong&gt;work intensification&lt;/strong&gt; : AI doesn’t reduce your workload, it makes you take on more.&lt;/p&gt;

&lt;p&gt;You’re not automating your way to free time. You’re automating your way to more commitments.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Quality-Speed Death Spiral
&lt;/h3&gt;

&lt;p&gt;Here’s how it compounds:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI gives you an initial productivity surge&lt;/li&gt;
&lt;li&gt;That surge creates expectations for speed&lt;/li&gt;
&lt;li&gt;Speed pressure leads to cutting corners on review&lt;/li&gt;
&lt;li&gt;Lower quality creates more bugs&lt;/li&gt;
&lt;li&gt;More bugs mean more debugging and rework&lt;/li&gt;
&lt;li&gt;Debugging takes longer because you didn’t write the code&lt;/li&gt;
&lt;li&gt;You fall behind, pressure increases, quality drops further&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Berkeley researchers warned: “The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.”&lt;/p&gt;

&lt;p&gt;This is happening right now across teams. Features ship that “work,” but the developers don’t fully understand how. When they break, debugging becomes an archaeology project through code nobody wrote and barely reviewed. That’s not faster. That’s deferred pain.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Actually Feels Like
&lt;/h2&gt;

&lt;p&gt;A software engineer named Siddhant Khare wrote about “AI fatigue” last week. It resonated with me immediately because &lt;strong&gt;it’s real and nobody talks about it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI-era burnout doesn’t look like working 80-hour weeks. It looks like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision fatigue&lt;/strong&gt; from validating endless AI outputs. Every line might be wrong. Every function might have a subtle bug. You can’t just skim.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cognitive load&lt;/strong&gt; from juggling multiple AI-started initiatives. Each one 80% done. None actually shipping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Imposter syndrome&lt;/strong&gt; when you can’t tell if you’re productive or just busy. You wrote 3,000 lines this week. Zero features shipped. Are you slow? Or is the process broken?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anxiety&lt;/strong&gt; from commitments you can’t estimate. You said 2 days. It’s been 4. The AI generated the code in 20 minutes and you’ve been debugging it ever since.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guilt&lt;/strong&gt; for not keeping up. Everyone else seems to be shipping faster with AI. Why aren’t you?&lt;/p&gt;

&lt;p&gt;The research shows that some developers see burnout risk drop 17% with AI — but only if their workload doesn’t increase to fill the gap.&lt;/p&gt;

&lt;p&gt;In practice? Workload always increases.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Team Lead’s Nightmare
&lt;/h2&gt;

&lt;p&gt;If estimating one AI-assisted task is this chaotic, imagine coordinating an entire team.&lt;/p&gt;

&lt;p&gt;You’re trying to plan a sprint. Every developer gives you an estimate. Half of them are wildly optimistic because “AI will handle it.” The other half are padding heavily because they’ve been burned.&lt;/p&gt;

&lt;p&gt;You don’t know which estimates to trust. You don’t know how to aggregate them into a roadmap. You don’t know how to explain to stakeholders why the team that just adopted “productivity-boosting AI tools” is still missing deadlines.&lt;/p&gt;

&lt;p&gt;And when the sprint ends? Half the stories are “80% done.” A quarter shipped but with bugs. The rest are stuck in AI-generated complexity no one fully understands.&lt;/p&gt;

&lt;p&gt;Tech leads are stuck in the middle. Can’t estimate their own AI-assisted work. Somehow supposed to help the team estimate theirs.&lt;/p&gt;

&lt;p&gt;Sprint planning feels like collective guessing. Retrospectives turn into “we don’t know what went wrong, the AI just… took longer than expected.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The only solution I’ve found is the boring one: go back to basics.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Estimate anyway. Even if it’s totally wrong.&lt;br&gt;&lt;br&gt;
Run retrospectives. Understand what actually happened.&lt;br&gt;&lt;br&gt;
Repeat every cycle. Gather data.&lt;br&gt;&lt;br&gt;
Adjust future estimates based on reality, not hope.&lt;/p&gt;

&lt;p&gt;It’s unglamorous. It’s slow. But it’s the only path I see to understanding our true capacity with AI.&lt;/p&gt;

&lt;p&gt;You can’t optimize what you don’t measure. And right now, most teams aren’t measuring anything except “we’re using AI, we should be faster.”&lt;/p&gt;

&lt;p&gt;After enough cycles, patterns emerge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI features that touch legacy code take 3x longer than expected&lt;/li&gt;
&lt;li&gt;Net-new features hit estimates more reliably&lt;/li&gt;
&lt;li&gt;Code review adds 40% to any AI-heavy story&lt;/li&gt;
&lt;li&gt;Integration work still takes the same time regardless of AI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this is in the “AI boosts productivity 39%” headline. But it’s the reality of coordinating a team in the AI era.&lt;/p&gt;

&lt;p&gt;The boring strategies we’ve always used — estimate, measure, learn, adjust — they still work. They’re just slower to calibrate now because the variables changed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I’m Trying (Not Prescribing)
&lt;/h2&gt;

&lt;p&gt;I don’t have this figured out. Nobody does yet. But here’s what’s been helping, in my experience:&lt;/p&gt;

&lt;h3&gt;
  
  
  The One-Thing Rule
&lt;/h3&gt;

&lt;p&gt;Stop saying yes to multiple simultaneous AI features. One thing from start to shipped before starting the next.&lt;/p&gt;

&lt;p&gt;Does it feel slower? Yes.&lt;br&gt;&lt;br&gt;
Do you actually ship more? Also yes.&lt;/p&gt;

&lt;p&gt;Multiple AI-started initiatives feel like progress until nothing’s actually done. Finishing one thing beats starting five.&lt;/p&gt;

&lt;h3&gt;
  
  
  Honest Estimates
&lt;/h3&gt;

&lt;p&gt;When someone asks “how long will this take?” stop giving the optimistic AI-boosted number.&lt;/p&gt;

&lt;p&gt;Instead: “AI might generate it in an hour. Integration and debugging might take 3 days. Estimate 4 days to be safe.”&lt;/p&gt;

&lt;p&gt;It feels slow. But estimates stop slipping constantly. And trust improves.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Validation Budget
&lt;/h3&gt;

&lt;p&gt;Timebox AI code review. If you can’t fully understand and validate what the AI built in the time it would have taken to write it yourself, don’t use the AI code.&lt;/p&gt;

&lt;p&gt;Sounds counterintuitive. But spending 6 hours reviewing 800 lines of AI code you don’t understand defeats the purpose. Sometimes writing 200 lines yourself in 4 hours is actually faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring What Matters
&lt;/h3&gt;

&lt;p&gt;Stop tracking lines of code or features started. Track instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Features actually in production&lt;/li&gt;
&lt;li&gt;Bugs introduced per feature&lt;/li&gt;
&lt;li&gt;Time from “start” to “shipped” (not “AI generated code”)&lt;/li&gt;
&lt;li&gt;Team stress level (are people sleeping okay?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The numbers are uncomfortable. But they’re honest.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Speed for Quality, Not Quantity
&lt;/h3&gt;

&lt;p&gt;When AI genuinely saves time, spend that time on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better tests&lt;/li&gt;
&lt;li&gt;Clearer documentation&lt;/li&gt;
&lt;li&gt;Paying down technical debt&lt;/li&gt;
&lt;li&gt;Deeper thinking on architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not just “more features.”&lt;/p&gt;

&lt;p&gt;The productivity gain is real. The question is: who captures it? If it all goes to “more output for the same salary,” you’re on a treadmill. If some goes to making work better and more sustainable, everyone might actually benefit.&lt;/p&gt;

&lt;h3&gt;
  
  
  At the Team Level: The Data Discipline
&lt;/h3&gt;

&lt;p&gt;For tech leads and managers, the same boring-but-effective cycle applies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Estimate&lt;/strong&gt; — Even when it feels like guessing. Get the team’s best guess on record.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Measure&lt;/strong&gt; — Track actual time, not AI generation time. Start to shipped, not start to “AI wrote code.”&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Retrospect&lt;/strong&gt; — What took longer than expected? What patterns are emerging?&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Adjust&lt;/strong&gt; — Use the data. If AI stories touching legacy code are consistently 3x estimates, factor that in next sprint.&lt;/p&gt;

&lt;p&gt;After 4-5 cycles, you start seeing your team’s actual capacity with AI. Not the theoretical 39% boost. The real number.&lt;/p&gt;
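
&lt;p&gt;To make the “adjust” step concrete, here’s a minimal sketch (in Python, with entirely made-up story data — the categories and numbers are illustrative, not measurements) of how a team might turn past estimate-vs-actual records into per-category calibration multipliers:&lt;/p&gt;

```python
# Hypothetical sprint records: (category, estimated_days, actual_days).
# These values are invented for illustration only.
records = [
    ("legacy", 2, 6),
    ("legacy", 1, 3),
    ("net_new", 3, 3.5),
    ("net_new", 2, 2),
    ("integration", 2, 2.5),
]

def calibration_factors(records):
    """Average actual/estimate ratio for each work category."""
    ratios_by_category = {}
    for category, estimated, actual in records:
        ratios_by_category.setdefault(category, []).append(actual / estimated)
    # Mean ratio per category: 1.0 means estimates are on target,
    # 3.0 means that category consistently takes 3x the estimate.
    return {cat: sum(r) / len(r) for cat, r in ratios_by_category.items()}

factors = calibration_factors(records)
# Next sprint, a 2-day "legacy" estimate becomes 2 * factors["legacy"] days.
```

&lt;p&gt;The multiplier that falls out for legacy-touching stories (3x in this fake data) is exactly the kind of number that never shows up in the headline benchmarks — and exactly what you need for honest planning.&lt;/p&gt;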

&lt;p&gt;It’s slower than anyone wants. But it’s the only path to honest planning.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Question
&lt;/h2&gt;

&lt;p&gt;Here’s what I keep coming back to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 39% productivity gain might be real. But who’s capturing it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your company captures it as more output for the same salary.&lt;br&gt;&lt;br&gt;
Your manager captures it as more ambitious roadmaps.&lt;br&gt;&lt;br&gt;
The market captures it as higher expectations.&lt;/p&gt;

&lt;p&gt;You capture… what? More stress? More context-switching? More debugging code you didn’t write?&lt;/p&gt;

&lt;p&gt;Unless you actively defend your boundaries, AI productivity tools become productivity &lt;em&gt;traps&lt;/em&gt; — a treadmill that speeds up but never lets you off.&lt;/p&gt;

&lt;p&gt;I don’t want to sound cynical. AI is genuinely powerful. I use it every day. But the default path is work intensification, not work reduction. And if you don’t choose differently, the default will choose for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Different Path
&lt;/h2&gt;

&lt;p&gt;What if AI augmentation wasn’t about doing &lt;em&gt;more&lt;/em&gt;? What if it was about doing &lt;em&gt;better&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;What if productivity gains went toward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deeper thinking on hard problems&lt;/li&gt;
&lt;li&gt;Mentoring junior developers&lt;/li&gt;
&lt;li&gt;Paying down technical debt&lt;/li&gt;
&lt;li&gt;Building more resilient systems&lt;/li&gt;
&lt;li&gt;Actually shipping polished features instead of half-finished experiments&lt;/li&gt;
&lt;li&gt;Sustainable pace instead of constant sprinting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The technology is powerful. The question is: &lt;strong&gt;who decides what that power is for?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right now, the default answer is “more output.” But you can choose differently.&lt;/p&gt;

&lt;p&gt;You can say no to the fifth simultaneous initiative.&lt;br&gt;&lt;br&gt;
You can give honest estimates instead of optimistic ones.&lt;br&gt;&lt;br&gt;
You can spend AI-gained time on quality instead of quantity.&lt;br&gt;&lt;br&gt;
You can protect your boundaries instead of filling every efficiency gain with new commitments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 39% productivity trap is only a trap if you don’t see it coming.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now you do.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;This is still being figured out across the industry. Some weeks teams ship fast and feel great. Other weeks they’re drowning in half-finished AI features and wondering what went wrong.&lt;/p&gt;

&lt;p&gt;But patterns are emerging. Data is being gathered. Teams are learning to say no more often. And slowly, the industry is learning to use AI as a tool for better work, not just more work.&lt;/p&gt;

&lt;p&gt;If you’re feeling this too — the expectations, the estimation chaos, the validation treadmill — you’re not slow. You’re not behind. The system is broken, and you’re just the first to notice.&lt;/p&gt;

&lt;p&gt;The question is: what are you going to do about it?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If this resonated with you, I’d love to hear your experience. Are you in the productivity trap too? What are you trying? Find me on &lt;a href="https://linkedin.com/in/pachecothiago" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Further Reading:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TechCrunch: &lt;a href="https://techcrunch.com/2026/02/09/the-first-signs-of-burnout-are-coming-from-the-people-who-embrace-ai-the-most/" rel="noopener noreferrer"&gt;“The first signs of burnout are coming from the people who embrace AI the most”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;li&gt;Harvard Business Review: &lt;a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it" rel="noopener noreferrer"&gt;“AI Doesn’t Reduce Work—It Intensifies It”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;li&gt;Fortune: &lt;a href="https://fortune.com/2026/02/10/ai-future-of-work-white-collar-employees-technology-productivity-burnout-research-uc-berkeley/" rel="noopener noreferrer"&gt;“AI is having the opposite effect it was supposed to”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;li&gt;METR: &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;“Measuring the Impact of Early-2025 AI on Developer Productivity”&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Business Insider: &lt;a href="https://www.businessinsider.com/ai-fatigue-burnout-software-engineer-essay-siddhant-khare-2026-2" rel="noopener noreferrer"&gt;“AI fatigue is real and nobody talks about it”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/nobody-knows-how-to-estimate-software-anymore/" rel="noopener noreferrer"&gt;Nobody Knows How to Estimate Software Anymore (And It’s Not Your Fault)&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>aicoding</category>
    </item>
    <item>
      <title>The Review Bottleneck: Why AI Explanations Are Making Us Trust Less, Not More</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Fri, 13 Feb 2026 14:07:52 +0000</pubDate>
      <link>https://dev.to/pacheco/the-review-bottleneck-why-ai-explanations-are-making-us-trust-less-not-more-35al</link>
      <guid>https://dev.to/pacheco/the-review-bottleneck-why-ai-explanations-are-making-us-trust-less-not-more-35al</guid>
      <description>&lt;p&gt;Last week I spent 3 hours reviewing code that took 20 minutes to write.&lt;/p&gt;

&lt;p&gt;The AI was faster. The review wasn’t.&lt;/p&gt;

&lt;p&gt;And I’m starting to realize: that’s the problem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Less coding, more engineering.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I keep hearing this phrase everywhere. The idea is simple: AI handles the coding, so developers focus on the higher-level work. The engineering. The architecture. The review.&lt;/p&gt;

&lt;p&gt;But here’s what nobody’s talking about: AI isn’t just writing the code anymore. It’s reviewing it too.&lt;/p&gt;

&lt;p&gt;And the paradox is obvious once you see it: AI generates code faster, but reviewing it takes longer than ever.&lt;/p&gt;

&lt;p&gt;Here’s what those 3 hours looked like:&lt;/p&gt;

&lt;p&gt;I read through 300 lines of code carefully. Checked the tests. Verified the logic flow. Examined edge cases.&lt;/p&gt;

&lt;p&gt;But that was only the first hour.&lt;/p&gt;

&lt;p&gt;The next two hours? Reading AI-generated explanations. Reviewing the AI code reviewer’s feedback. Cross-referencing the AI’s architectural justifications with the actual implementation. Trying to reconcile conflicting suggestions from different AI systems.&lt;/p&gt;

&lt;p&gt;By the end, I understood the code. But I’d spent more time processing AI commentary than reviewing actual logic.&lt;/p&gt;

&lt;p&gt;And here’s what bothered me: I see people across the industry approving similar PRs in 20 minutes.&lt;/p&gt;

&lt;p&gt;Are they reading all of this? Or are they skimming the AI explanations and trusting by default?&lt;/p&gt;

&lt;p&gt;I’m pretty sure it’s the second one.&lt;/p&gt;

&lt;p&gt;This isn’t about being thorough versus lazy. It’s about recognizing that something has shifted.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;300 lines of actual code&lt;/li&gt;
&lt;li&gt;1,200 words of AI-generated explanation&lt;/li&gt;
&lt;li&gt;800 words of AI code review feedback&lt;/li&gt;
&lt;li&gt;15 inline comments from the AI about trade-offs and alternatives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;I had more documentation to review than code.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;300 lines&lt;/strong&gt; of actual implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2,000+ words&lt;/strong&gt; of AI-generated commentary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code was the easy part. The cognitive load came from synthesizing multiple AI perspectives, each confident, each reasonable-sounding, some subtly contradicting each other.&lt;/p&gt;

&lt;p&gt;The tests passed. The linting passed. The AI explanations sounded reasonable. The AI reviewer’s concerns seemed addressed.&lt;/p&gt;

&lt;p&gt;So I trusted the process and moved on.&lt;/p&gt;

&lt;p&gt;And that’s becoming the norm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy76dfj6ck6kof5advbj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy76dfj6ck6kof5advbj.gif" width="244" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m not alone.&lt;/p&gt;

&lt;p&gt;At Anthropic—the company building Claude—engineers are generating 2,000 to 3,000 line pull requests regularly. Mike Krieger, their Chief Product Officer, openly admits: “pretty much 100%” of their code is now AI-generated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And they’re using Claude to review it too.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Boris Cherny, head of Anthropic’s Claude Code team, hasn’t written a single line of code in over two months. He shipped 22 pull requests in one day, 27 the next.&lt;/p&gt;

&lt;p&gt;“Each one 100% written by Claude.”&lt;/p&gt;

&lt;p&gt;This isn’t the future. It’s happening right now, at the companies building the AI tools we’re all using.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsek1p2uqplbnix3zfkg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsek1p2uqplbnix3zfkg.gif" width="244" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Confidence Trap
&lt;/h2&gt;

&lt;p&gt;Code reviews were already hard. They required skill, domain knowledge, and patience.&lt;/p&gt;

&lt;p&gt;Now multiply that by the sheer volume AI generates.&lt;/p&gt;

&lt;p&gt;But volume isn’t even the real issue.&lt;/p&gt;

&lt;p&gt;The real issue is that AI writes &lt;strong&gt;confident&lt;/strong&gt; code. It comes with detailed explanations. Trade-off analysis. References. Architecture justifications.&lt;/p&gt;

&lt;p&gt;Enough well-articulated reasoning to make everything sound sensible.&lt;/p&gt;

&lt;p&gt;When you look at a 500-line PR with a 2,000-word explanation of why every decision was made, the cognitive load is enormous.&lt;/p&gt;

&lt;p&gt;You can dig in and verify every claim.&lt;/p&gt;

&lt;p&gt;Or you can trust that the explanation sounds reasonable and move on.&lt;/p&gt;

&lt;p&gt;Most developers are choosing “move on.”&lt;/p&gt;

&lt;p&gt;Here’s where we are:&lt;/p&gt;

&lt;p&gt;Claude Code and Codex are generating code at unprecedented scale. &lt;strong&gt;46% of developers’ code&lt;/strong&gt; is now AI-written across major tools like Claude Code, Codex, and GitHub Copilot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;84% of developers&lt;/strong&gt; use AI coding tools regularly.&lt;/p&gt;

&lt;p&gt;And here’s the kicker: while AI generates nearly half our code, &lt;strong&gt;only 30% of AI-suggested code actually gets accepted&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The rest gets rejected during review—or should get rejected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojyumx8m43c5noyf4n5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojyumx8m43c5noyf4n5.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Then We Added AI Code Review
&lt;/h2&gt;

&lt;p&gt;So teams did the obvious thing: bring in AI code review tools.&lt;/p&gt;

&lt;p&gt;Now every PR has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AI-generated code (500 lines)&lt;/li&gt;
&lt;li&gt;The AI’s explanation of what it built and why (2,000 words)&lt;/li&gt;
&lt;li&gt;The AI reviewer’s analysis (another 1,500 words)&lt;/li&gt;
&lt;li&gt;Sometimes multiple AI reviewers, each with their own opinions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’re staring at 4,000+ words of confident, reasonable-sounding explanations from multiple AI systems.&lt;/p&gt;

&lt;p&gt;All of it well-structured. All of it articulate. Much of it contradicting itself in subtle ways.&lt;/p&gt;

&lt;p&gt;And you’re supposed to synthesize all of this, make a judgment call, and approve or reject.&lt;/p&gt;

&lt;p&gt;What actually happens?&lt;/p&gt;

&lt;p&gt;You skim the AI’s explanation. You skim the AI reviewer’s comments. If they roughly agree and the tests pass, you approve it.&lt;/p&gt;

&lt;p&gt;The AI’s confidence became your confidence by default.&lt;/p&gt;

&lt;p&gt;Research confirms what we all feel: AI-generated code creates &lt;strong&gt;1.7x more issues&lt;/strong&gt; than human-written code.&lt;/p&gt;

&lt;p&gt;Unclear naming. Mismatched terminology. Generic identifiers everywhere.&lt;/p&gt;

&lt;p&gt;All of it increasing cognitive load for reviewers.&lt;/p&gt;

&lt;p&gt;And here’s the kicker: all of it explained so confidently you don’t question it.&lt;/p&gt;

&lt;p&gt;This is what researchers call “automation bias”—our tendency to accept answers from automated systems, even when we encounter contradictory information.&lt;/p&gt;

&lt;p&gt;We’re not carefully evaluating the code. We’re trusting that the volume of explanation equals correctness.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Explanation ≠ More Understanding
&lt;/h2&gt;

&lt;p&gt;The paradox is obvious once you see it:&lt;/p&gt;

&lt;p&gt;Adding AI code reviewers didn’t make reviews better. It made them worse.&lt;/p&gt;

&lt;p&gt;Not because the AI reviewers are bad. But because the sheer volume of explanation—from the writer AI, from the reviewer AI, sometimes from multiple reviewer AIs—has become impossible to actually process.&lt;/p&gt;

&lt;p&gt;We traded one problem (not enough context) for another (too much confident noise).&lt;/p&gt;

&lt;p&gt;And the human reviewer, the supposed quality gate, is now just the person who clicks “Approve” after skimming thousands of words they don’t have time to verify.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8f1zitdtj0ynedskswd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8f1zitdtj0ynedskswd.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The bottleneck isn’t writing code anymore.&lt;/p&gt;

&lt;p&gt;It’s not even reviewing code.&lt;/p&gt;

&lt;p&gt;It’s trusting code we don’t fully understand because we’re drowning in explanations that sound reasonable but are too expensive to verify.&lt;/p&gt;

&lt;p&gt;Even OpenAI acknowledges this in their Codex documentation: “It still remains essential for users to manually review and validate all agent-generated code.”&lt;/p&gt;

&lt;p&gt;But are we actually doing that?&lt;/p&gt;

&lt;p&gt;The evidence suggests no.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wait. See What Just Happened?
&lt;/h2&gt;

&lt;p&gt;I need to be honest with you.&lt;/p&gt;

&lt;p&gt;I almost did the exact same thing to you.&lt;/p&gt;

&lt;p&gt;I almost buried this post in citations.&lt;/p&gt;

&lt;p&gt;16 footnotes. Statistics every other paragraph. Research from Anthropic, OpenAI, arXiv, CodeRabbit, Qodo. All credible. All well-sourced. All making the same point.&lt;/p&gt;

&lt;p&gt;And if you’re like most readers, you would have skimmed them. Trusted that they said what I claimed. Moved on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s exactly what we’re doing with code reviews.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The volume of explanation—even when accurate—becomes its own problem. Too many words. Too much confidence. Not enough time to verify.&lt;/p&gt;

&lt;p&gt;So we trust by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk22bvtdph5h9wgiywyg4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk22bvtdph5h9wgiywyg4.gif" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I’m Trying
&lt;/h2&gt;

&lt;p&gt;I don’t have this solved. But here’s what’s working for me:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 30-minute rule&lt;/strong&gt; – If I can’t understand the PR in 30 minutes of focused review, it’s too big. Send it back or break it down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No AI reviewer without human review&lt;/strong&gt; – AI review is a supplement, not a replacement. I still need to read the actual code, not just the summary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The explain-it test&lt;/strong&gt; – If I can’t explain the core logic to someone else, I don’t approve it. Knowing “the tests passed” isn’t good enough.&lt;/p&gt;

&lt;p&gt;Does this slow me down? Yes.&lt;/p&gt;

&lt;p&gt;Does it help? I think so.&lt;/p&gt;

&lt;p&gt;But I’m also watching my team ship faster by trusting more. And I don’t know if I’m being careful or just stubborn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Leaves Us
&lt;/h2&gt;

&lt;p&gt;I’m caught in the same trap.&lt;/p&gt;

&lt;p&gt;I want to ship faster. But I also want to understand what I’m shipping.&lt;/p&gt;

&lt;p&gt;And the current tools make both feel impossible at the same time.&lt;/p&gt;

&lt;p&gt;Some days I slow down and review everything carefully. Other days I skim and trust.&lt;/p&gt;

&lt;p&gt;And I’m not sure which approach is right anymore.&lt;/p&gt;

&lt;p&gt;Anthropic’s Dario Amodei predicts the industry may be “just six to twelve months away from AI handling most or all of software engineering work from start to finish.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;25% of Google’s code&lt;/strong&gt; is already AI-assisted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;30% of Microsoft’s code&lt;/strong&gt; is AI-generated.&lt;/p&gt;

&lt;p&gt;These aren’t small experiments. This is how we’re building software now.&lt;/p&gt;

&lt;p&gt;But here’s what we’re not saying out loud:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We’ve replaced code we wrote and didn’t fully understand with code AI wrote and we definitely don’t understand.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’re not talking about this problem honestly enough.&lt;/p&gt;

&lt;p&gt;The “less coding, more engineering” narrative assumes we’re still doing the review work.&lt;/p&gt;

&lt;p&gt;We’re not.&lt;/p&gt;

&lt;p&gt;We’re skimming AI-generated justifications and hoping for the best.&lt;/p&gt;

&lt;p&gt;Maybe that’s fine. Maybe the tests are good enough. Maybe AI review plus AI generation actually works.&lt;/p&gt;

&lt;p&gt;But we should stop pretending we’re still doing the review work.&lt;/p&gt;

&lt;p&gt;Because “less coding, more engineering” sounds great until you realize:&lt;/p&gt;

&lt;p&gt;We’re not doing more engineering.&lt;/p&gt;

&lt;p&gt;We’re doing more &lt;strong&gt;trusting&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0pt47fnfowidxqlw48l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0pt47fnfowidxqlw48l.gif" width="498" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So here’s my question:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Are you actually reviewing AI code? Or are you just hoping the explanation is right?&lt;/p&gt;

&lt;p&gt;Because if it’s the second one—and the data suggests it is—we need to start talking about what comes next.&lt;/p&gt;

&lt;p&gt;The quality gate we automated away isn’t coming back. We need to figure out what replaces it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next up:&lt;/strong&gt; I’m going to share how I’m breaking down AI-generated features into bite-sized review sessions that force comprehension instead of trust. It’s slower. It’s deliberate. And it might be the only way to stay honest about what we’re shipping.&lt;/p&gt;

&lt;h2&gt;
  
  
  References &amp;amp; Further Reading
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key Sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;Fortune: “Top engineers at Anthropic, OpenAI say AI now writes 100% of their code”&lt;/a&gt; – Mike Krieger and Boris Cherny interviews, January 2026&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.quantumrun.com/consulting/github-copilot-statistics/" rel="noopener noreferrer"&gt;GitHub Copilot Statistics 2026&lt;/a&gt; – 46% of code AI-generated&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;CodeRabbit: “AI code creates 1.7x more issues”&lt;/a&gt; – Cognitive load study, 2025&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools" rel="noopener noreferrer"&gt;Index.dev: Developer Productivity Statistics 2026&lt;/a&gt; – 84% adoption, 30% acceptance rate&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://help.openai.com/en/articles/11369540-using-codex-with-your-chatgpt-plan" rel="noopener noreferrer"&gt;OpenAI: Using Codex with ChatGPT&lt;/a&gt; – Manual review guidance&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://link.springer.com/article/10.1007/s00146-025-02422-7" rel="noopener noreferrer"&gt;Springer: Automation bias in human–AI collaboration&lt;/a&gt; – AI &amp;amp; Society, July 2025&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://indianexpress.com/article/technology/artificial-intelligence/anthropic-100-percent-code-ai-generated-claude-10522033/" rel="noopener noreferrer"&gt;Indian Express: Anthropic’s 100% AI-generated code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://techcommunity.microsoft.com/blog/appsonazureblog/an-ai-led-sdlc-building-an-end-to-end-agentic-software-development-lifecycle-wit/4491896" rel="noopener noreferrer"&gt;Qodo 2025 AI Code Quality Report&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/html/2508.18771v1" rel="noopener noreferrer"&gt;arXiv: “Does AI Code Review Lead to Code Changes?”&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thedecisionlab.com/biases/automation-bias" rel="noopener noreferrer"&gt;The Decision Lab: Automation Bias&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://addyo.substack.com/p/the-reality-of-ai-assisted-software" rel="noopener noreferrer"&gt;Addy Osmani: AI-Assisted Engineering Reality&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/the-review-bottleneck-why-ai-explanations-are-making-us-trust-less-not-more/" rel="noopener noreferrer"&gt;The Review Bottleneck: Why AI Explanations Are Making Us Trust Less, Not More&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aicoding</category>
      <category>claudecode</category>
    </item>
    <item>
      <title>Working Twice as Hard to Be Seen as Average: Life as a Latino Developer</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Wed, 11 Feb 2026 02:04:07 +0000</pubDate>
      <link>https://dev.to/pacheco/working-twice-as-hard-to-be-seen-as-average-life-as-a-latino-developer-2l95</link>
      <guid>https://dev.to/pacheco/working-twice-as-hard-to-be-seen-as-average-life-as-a-latino-developer-2l95</guid>
      <description>&lt;p&gt;I walked into the conference room with my laptop to set up the infrastructure demo. Before I could connect to the projector, someone asked me to refill the coffee first.&lt;/p&gt;

&lt;p&gt;I had a computer science degree. I was working in infra and support. But they saw a Latino face and assumed “service staff,” not “software engineer.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This was in Brazil.&lt;/strong&gt; My own country.&lt;/p&gt;

&lt;p&gt;If the bias is this strong at home, imagine what it’s like abroad.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs41i7b87pv9h3yuyw7a.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs41i7b87pv9h3yuyw7a.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  When I Moved Abroad
&lt;/h2&gt;

&lt;p&gt;That coffee moment in Brazil taught me the bias runs deep, so deep it exists even at home.&lt;/p&gt;

&lt;p&gt;But when I moved abroad? I learned what &lt;strong&gt;intensity&lt;/strong&gt; means.&lt;/p&gt;

&lt;p&gt;No one asked me to refill coffee anymore. The bias evolved. Got sophisticated.&lt;/p&gt;

&lt;p&gt;I exceeded all expectations. Top performer. Multiple successful projects. When my first promotion came up, leadership hesitated.&lt;/p&gt;

&lt;p&gt;Not because of my work; that was undeniable. But because something about me didn’t fit their mental model of what “senior” looks like.&lt;/p&gt;

&lt;p&gt;It wasn’t just that moment. It was &lt;strong&gt;every single day&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Every code review scrutinized harder. Every meeting where I had to prove my point twice. Every technical decision questioned just a bit more. Every accomplishment met with surprise instead of recognition.&lt;/p&gt;

&lt;p&gt;The weight isn’t in one coffee incident or one delayed promotion.&lt;/p&gt;

&lt;p&gt;The weight is in living it &lt;strong&gt;every day&lt;/strong&gt;, in ways so subtle that calling them out feels like paranoia—until the pattern becomes undeniable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp3qwdn9cawsueqx66ak.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp3qwdn9cawsueqx66ak.gif" width="486" height="354"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  You’re Not Imagining It
&lt;/h2&gt;

&lt;p&gt;If you’re a Latino developer, you’ve felt it. That sense that you need to work twice as hard to prove half as much. That your accent makes people second-guess your skills. That your degree from a Latin American university is worth less.&lt;/p&gt;

&lt;p&gt;Here’s the data: &lt;strong&gt;You’re not imagining it.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Latinos are &lt;strong&gt;19% of the US population&lt;/strong&gt; but only &lt;strong&gt;5.9–8% of the tech workforce&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;At Google in 2020, despite a $150M diversity commitment, Latinos made up just &lt;strong&gt;5.9%&lt;/strong&gt; of employees&lt;/li&gt;
&lt;li&gt;In core computer/math roles, only &lt;strong&gt;8.3%&lt;/strong&gt; are Latino&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Research on imposter syndrome shows it’s “especially prevalent in underrepresented racial and ethnic minorities” (NIH). Forbes notes that “minorities face bias that makes it harder for them to be promoted or selected for certain roles.”&lt;/p&gt;

&lt;p&gt;This isn’t personal failure. This is &lt;strong&gt;structural exclusion&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0e07ujf6qvl5jgo9aw2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0e07ujf6qvl5jgo9aw2.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Google Interview
&lt;/h2&gt;

&lt;p&gt;Ten years of experience. Proven delivery. Strong references.&lt;/p&gt;

&lt;p&gt;I made it through all the Google interview rounds. The feedback was good.&lt;/p&gt;

&lt;p&gt;Then: &lt;strong&gt;rejected&lt;/strong&gt;. No clear explanation.&lt;/p&gt;

&lt;p&gt;I kept replaying the system design interview. I knew my architecture was sound. But did I explain it the way they expected? Did my phrasing sound uncertain when I was being thoughtful? Did my accent make them doubt my competence?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I’ll never know.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But I know this: technical competence wasn’t the only thing being evaluated.&lt;/p&gt;

&lt;p&gt;Research on technical interviews confirms it. interviewing.io found that “implicit biases sneak in and people aren’t even aware of them.” Non-native speakers face an inherent disadvantage in interviews where “effective communication is key to success.”&lt;/p&gt;

&lt;p&gt;Here’s the nuance: You might be fluent in English, but &lt;strong&gt;cultural differences in how you convey ideas&lt;/strong&gt; still come across as weakness.&lt;/p&gt;

&lt;p&gt;Brazilian communication style is more relationship-focused, context-aware. North American style is more direct, transactional. &lt;strong&gt;Neither is wrong&lt;/strong&gt;, but one gets judged as “unprofessional.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0m7lsshzcz9d7libnyo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0m7lsshzcz9d7libnyo.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hidden Barriers
&lt;/h2&gt;

&lt;p&gt;These aren’t isolated incidents. They’re patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Name Effect&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Resumes with Latino/Hispanic names get fewer callbacks. Even before the interview, the name creates bias. Some developers anglicize their names to get past this filter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Cultural Fit Trap&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
“Cultural fit” is often code for “thinks and communicates like us.” When you express ideas differently—even if technically sound—it gets labeled as not fitting in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timezone as Invisible Labor&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Working from Brazil/LATAM for North American companies? Early morning or late night calls are YOUR problem to solve. You adjust. They don’t. This invisible labor never counts in performance reviews.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Credential Discount&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Your degree from a top Brazilian university isn’t seen as equal to a North American degree, regardless of actual education quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Promotion Ceiling&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
High-performing Latino mid-level developers get held back from senior roles. The bar for “leadership presence” or “communication skills” becomes a convenient filter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqahapu0yw25kemsgg3m.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqahapu0yw25kemsgg3m.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Paradox
&lt;/h2&gt;

&lt;p&gt;Here’s the weird part: this structural imposter syndrome (the one society applies to us and we apply to ourselves) makes us work &lt;strong&gt;10x harder&lt;/strong&gt; to achieve what others achieve easily.&lt;/p&gt;

&lt;p&gt;Which makes us &lt;strong&gt;excellent engineers&lt;/strong&gt;. But also &lt;strong&gt;exhausted humans&lt;/strong&gt; who never feel like we’ve done enough.&lt;/p&gt;

&lt;p&gt;This isn’t just personal. It’s social, structural, cultural. We do it to ourselves, AND the world does it to us.&lt;/p&gt;

&lt;p&gt;The same trait that drives us to over-deliver also prevents us from recognizing our own value. We minimize our contributions. We overestimate North American tech. We stay silent in meetings. We accept lower salaries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eff9cwouq3q1a2whcg0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eff9cwouq3q1a2whcg0.gif" width="497" height="280"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Bring
&lt;/h2&gt;

&lt;p&gt;Flip the narrative. What do Latino developers bring that North American tech culture often lacks?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work ethic&lt;/strong&gt; – We’re willing to go further, learn more, prove ourselves repeatedly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resourcefulness&lt;/strong&gt; – Building with constraints makes better engineers. We know how to do more with less.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cultural intelligence&lt;/strong&gt; – Navigating multiple cultures IS a technical skill. We understand global markets beyond the Silicon Valley bubble.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relationship-building&lt;/strong&gt; – Brazilian emphasis on personal connections creates stronger, more cohesive teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multilingual abilities&lt;/strong&gt; – Our “accent” is proof we’re multilingual. That’s a skill, not a weakness.&lt;/p&gt;

&lt;p&gt;These are competitive advantages. But only if companies recognize them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p6n9n50b07sy428zg37.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p6n9n50b07sy428zg37.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What To Do About It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For Latino Developers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document everything&lt;/strong&gt; – Bias thrives in ambiguity. Keep records of your work and wins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build in public&lt;/strong&gt; – Blog, contribute to open source, give talks. Create undeniable proof of competence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find the right companies&lt;/strong&gt; – Look for Latino leadership or strong D&amp;amp;I track records. Culture starts at the top.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practice technical communication explicitly&lt;/strong&gt; – Mock interviews with native speakers help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leverage networks&lt;/strong&gt; – Connect with groups like SHPE (Society of Hispanic Professional Engineers).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Own your accent&lt;/strong&gt; – Reframe it as proof of multilingual ability, not a deficit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Allies and Managers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question your assumptions in code review&lt;/strong&gt; – Am I judging the code or the communicator?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Separate technical competence from communication style&lt;/strong&gt; – Different doesn’t mean worse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit promotion decisions for bias&lt;/strong&gt; – Are Latinos hitting a ceiling in your org?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Value multilingual abilities as a skill&lt;/strong&gt; – Not just a neutral trait.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Champion Latino developers actively&lt;/strong&gt; – Advocate for them in senior roles when they’re ready.&lt;/p&gt;




&lt;h2&gt;
  
  
  Not Just a Brazil Story
&lt;/h2&gt;

&lt;p&gt;I wrote this from the perspective of a Brazilian software engineer. But these patterns aren’t unique to Brazil or Latin America. They’re not unique to tech.&lt;/p&gt;

&lt;p&gt;This is what happens when you’re perceived as “other” in spaces built by and for one dominant culture.&lt;/p&gt;

&lt;p&gt;The coffee. The furniture. The Google rejection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;These moments happen to Latinos across industries.&lt;/strong&gt; To Africans in European companies. To Asians in Western firms. The details change. The pattern doesn’t.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Truth
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzilkmf45r4a0382yxcda.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzilkmf45r4a0382yxcda.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some days I feel confident. I know I’m good at what I do. I see the systems I’ve built, the people I’ve mentored, the problems I’ve solved.&lt;/p&gt;

&lt;p&gt;Other days, the imposter syndrome wins. I wonder if I’ll ever be “enough.” I replay conversations, second-guessing how I phrased things. I see another rejection and wonder if it was my accent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s okay.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Naming the reality doesn’t make it go away. But it does make it visible. And once it’s visible, we can change it.&lt;/p&gt;

&lt;p&gt;If you’re a Latino developer: You’re not imagining it. You’re not alone. Your work is valuable. Your perspective matters. Keep pushing forward.&lt;/p&gt;

&lt;p&gt;If you’re an ally: Look around your team. Who’s missing? Whose ideas get dismissed? Who has to work twice as hard to get half the credit?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now you know. What will you do about it?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Per Scholas: &lt;em&gt;Latino Representation in Tech&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;SQ Magazine: &lt;em&gt;Diversity in Tech Statistics 2026&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;NIH: &lt;em&gt;Imposter Phenomenon in Racially/Ethnically Minoritized Groups&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;BairesDev: &lt;em&gt;Breaking Barriers – Tackling Imposter Syndrome Among Minorities in Tech&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Forbes: &lt;em&gt;How To Navigate Imposter Syndrome – A Hispanic Perspective&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;ACM: &lt;em&gt;Fairness and Bias in Algorithmic Hiring&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;interviewing.io: &lt;em&gt;Unconscious Bias in Technical Interviews&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/working-twice-as-hard-to-be-seen-as-average-life-as-a-latino-developer/" rel="noopener noreferrer"&gt;Working Twice as Hard to Be Seen as Average: Life as a Latino Developer&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>career</category>
      <category>diversityinclusion</category>
      <category>careergrowth</category>
      <category>careeradvice</category>
    </item>
    <item>
      <title>Are We Still Developers? The Hidden Cost of Vibe Coding</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Fri, 06 Feb 2026 03:34:05 +0000</pubDate>
      <link>https://dev.to/pacheco/are-we-still-developers-the-hidden-cost-of-vibe-coding-3209</link>
      <guid>https://dev.to/pacheco/are-we-still-developers-the-hidden-cost-of-vibe-coding-3209</guid>
      <description>

&lt;p&gt;I generated 847 lines of production code in 12 minutes.&lt;/p&gt;

&lt;p&gt;Not pseudocode. Not a prototype. Real, working Python with tests, error handling, and API integration. I described what I wanted to an AI agent, went to grab coffee, came back, and it was done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It felt incredible.&lt;/strong&gt; Like unlocking god mode. Why would I ever go back to writing code line by line?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7obpkojne879jhidv1y.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7obpkojne879jhidv1y.gif" width="498" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Promise &amp;amp; The Reality
&lt;/h2&gt;

&lt;p&gt;This is the promise of AI-centric development tools like Zencoder, vibe-kanban, and even parts of Cursor. Steve Yegge calls it “vibe coding”: stop micromanaging, trust the AI, let it scaffold entire features while you focus on the big picture. And when you see it work, it’s intoxicating.&lt;/p&gt;

&lt;p&gt;But here’s what happened next.&lt;/p&gt;

&lt;p&gt;I had to review those 847 lines. Every function. Every edge case. Every assumption the AI made about my requirements. Did it handle validation correctly? Is this maintainable? Did it miss something subtle about the business logic?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The review took longer than writing it would have.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30gx76vfpy5q4k4z94na.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30gx76vfpy5q4k4z94na.gif" width="498" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I still wasn’t confident. So I asked a different AI to review the first AI’s code. It found some issues. I fixed them. But now I’m reviewing AI-generated fixes to AI-generated code, and I’m three layers deep in a review process that feels more like managing a team of junior developers than writing software.&lt;/p&gt;

&lt;p&gt;Then comes the PR. Do I ask my teammates to review 847 lines of AI code? They’ll either spend hours on it (and resent me) or run it through AI themselves (and we’re all just trusting machines at that point).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Identity Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;So I have to ask: what does this make us as developers?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If AI writes the code and AI reviews the code, and we’re just approving diffs we don’t fully understand… are we developers anymore? Or are we product managers for code we didn’t write?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxr2sb7iq6dh6e0vz8vr.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxr2sb7iq6dh6e0vz8vr.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Maybe that’s fine. Maybe AI is good enough now that we should embrace it.&lt;/p&gt;

&lt;p&gt;But I don’t think it is. Not yet. I find issues constantly: bugs the AI missed, patterns it didn’t understand, edge cases it overlooked. And the “big bang” feature implementations don’t feel right to me. &lt;strong&gt;I still value ensuring the AI is building the right thing according to my understanding of the problem.&lt;/strong&gt; And the only way to do that is to review in detail, stay close to the code, and actually understand what’s being built.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey Through the Landscape
&lt;/h2&gt;

&lt;p&gt;I’ve been a Vim user for years. Not the “I use Vim btw” meme kind. The “my fingers know hjkl better than WASD” kind. The muscle memory runs deep. So when AI coding assistants exploded onto the scene, I had a choice: migrate to VS Code like everyone else, or figure out how to make AI work in my world.&lt;/p&gt;

&lt;p&gt;Let’s be clear: &lt;strong&gt;I didn’t stick with Vim out of stubbornness.&lt;/strong&gt; I explored the alternatives. I &lt;em&gt;keep&lt;/em&gt; exploring them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tkgfo3uwcpp2m7501iv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tkgfo3uwcpp2m7501iv.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cursor: Too Distant
&lt;/h3&gt;

&lt;p&gt;I’ve tried Cursor. It’s impressive: genuinely AI-first, with inline suggestions and chat that feels magical. But here’s the problem: it makes you &lt;em&gt;too distant from the code&lt;/em&gt;. You’re directing an AI that’s directing the editor. There’s a layer of abstraction I don’t trust yet. I want my hands closer to the metal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsson0zyugdajpknzrd21.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsson0zyugdajpknzrd21.gif" width="498" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Zed: Doesn’t Fit My Workflow
&lt;/h3&gt;

&lt;p&gt;I tried Zed. Better balance. It’s code-centric with AI tools bolted on the side: you choose when to invoke them. I liked that. But Zed expects you to work on &lt;em&gt;one project at a time&lt;/em&gt;. That breaks my workflow immediately. I live in tmux with 3-4 projects open, constantly switching contexts, piping output from one terminal to another. Zed doesn’t fit that reality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Zencoder &amp;amp; vibe-kanban: Even More Distant
&lt;/h3&gt;

&lt;p&gt;Then I went to the other extreme: full AI-centric tools like vibe-kanban and Zencoder. &lt;strong&gt;I was motivated to try this route after reading Steve Yegge’s writings on vibe coding.&lt;/strong&gt; The idea is compelling: stop micromanaging the AI, trust the vibes, let it scaffold entire features while you focus on the big picture. So I gave it an honest shot.&lt;/p&gt;

&lt;p&gt;These tools are &lt;em&gt;wild&lt;/em&gt;. You describe what you want, and they scaffold entire features, write tests, integrate APIs. It feels like having a senior dev in a box. Zencoder especially caught my attention. You feel powerful. You ship fast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7132suvtdrlaea2c8f2r.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7132suvtdrlaea2c8f2r.gif" width="498" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But here’s the catch: &lt;strong&gt;you’re absurdly distant from the code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI writes it. AI organizes it. You review diffs like a manager reviewing PRs. And I don’t trust AI &lt;strong&gt;that much&lt;/strong&gt; yet. Every line it writes, I have to re-review. Does it meet standards? Is it maintainable? Did it miss an edge case? The review overhead is real.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I realized:&lt;/strong&gt; I still need to review code in detail and ensure features are going in the right direction. The “trust the vibes” approach sounds liberating, but in practice, I’m doing &lt;em&gt;more&lt;/em&gt; cognitive work reviewing after the fact than I would have supervising during development.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Bite-Sized Pair Programming
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75e8po3yhod2c8uv2an2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75e8po3yhod2c8uv2an2.gif" width="498" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So I found a middle ground: bite-sized pair programming with AI.&lt;/strong&gt; The AI does most of the coding. I supervise and keep it on track. I course-correct in real time instead of reviewing a massive diff later. And the best way I’ve found to do that is still &lt;strong&gt;Neovim + tmux&lt;/strong&gt;: AI in one pane, code in another, constant back-and-forth.&lt;/p&gt;

&lt;p&gt;I don’t write code the same way I used to. I used to open a file, think through the problem, and type. Now? I spin up a worktree, open an AI in a terminal pane, and direct the solution instead of typing it character by character. But I stay close. I supervise. I course-correct in real time.&lt;/p&gt;

&lt;p&gt;The AI does the heavy lifting. I do the thinking. It’s not full vibe coding. It’s not solo coding either. It’s &lt;strong&gt;collaborative&lt;/strong&gt;, with me staying close enough to catch problems early. The tools are different. The medium is the same: the terminal.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI in the Terminal: Multiple Tools, One Workflow
&lt;/h3&gt;

&lt;p&gt;The real shift wasn’t about finding one perfect plugin. It was about building a workflow that lets me use &lt;strong&gt;whatever AI tool fits the task&lt;/strong&gt; without leaving the terminal.&lt;/p&gt;

&lt;p&gt;For a while, I experimented with different approaches: Claude Code in a split buffer, Codex in a tmux pane, jumping between terminal windows to manage different tools manually. I wasn’t married to any particular setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More recently, I’ve found something of a sweet spot:&lt;/strong&gt; I use &lt;strong&gt;sidekick.nvim&lt;/strong&gt; as my interface layer. It gives me the flexibility to switch between different AI agents when I want to. But in practice? &lt;strong&gt;I mostly default to Claude Code&lt;/strong&gt;. Its rules and configuration are pretty robust right now, plus my company pays for it, so why not use it?&lt;/p&gt;

&lt;p&gt;That’s the real advantage of the terminal workflow: &lt;strong&gt;the flexibility is there when you need it.&lt;/strong&gt; Want to test a new model? Swap it in. Want a second opinion on code? Switch agents mid-task. But you’re not forced to constantly context-switch. You can settle into what works and only switch when it makes sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And here’s a workflow trick I’ve been using:&lt;/strong&gt; implement a feature with one AI tool, then review or test it with a different one. You get a less biased second opinion. If Model A wrote the code and Model B flags the same issues you’re concerned about, you know it’s real. If Model B says “looks good,” you have more confidence. It’s like pair programming, but the second programmer is a completely different intelligence.&lt;/p&gt;

&lt;p&gt;The “best” AI model changes constantly. Claude Code dominates today, but that changes by the hour. Being locked into one tool means you’re always playing catch-up. In the terminal, switching is trivial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Terminal Wins
&lt;/h2&gt;

&lt;p&gt;Neovim scored 83% in Stack Overflow’s 2024 Developer Survey as the most admired IDE, even though VS Code is the most used at 59%. That gap tells you something. People who use Vim don’t just tolerate it. They love it. And it’s not Stockholm syndrome.&lt;/p&gt;

&lt;p&gt;Here’s what keeps me here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hands on the Keyboard
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft71raru008quqov3le32.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft71raru008quqov3le32.gif" width="498" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When my hands never leave the keyboard, I stay in flow. Every time I reach for the mouse, there’s a micro-decision: where’s the cursor? What am I clicking? Did I miss? Those interruptions compound. Not just in seconds, but in mental overhead.&lt;/p&gt;

&lt;p&gt;Want to find a file? &lt;code&gt;sf&lt;/code&gt; brings up Telescope. Search across everything? &lt;code&gt;sg&lt;/code&gt; for live grep. Navigate between Vim splits, tmux panes, or even tmux windows? &lt;code&gt;Ctrl+h/j/k/l&lt;/code&gt; handles it all seamlessly. Start a new task with a worktree and AI ready? &lt;code&gt;at&lt;/code&gt;. Toggle a floating terminal? &lt;code&gt;;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;No mouse. No sidebars. No menu hunting. Just muscle memory.&lt;/p&gt;

&lt;p&gt;The AI tools I use (Claude Code, Codex, OpenCode) fit into this. I don’t context-switch to a browser or separate app. I invoke them in a split pane with a keybind. Everything stays in one place.&lt;/p&gt;

&lt;p&gt;Speed isn’t about typing fast. It’s about never stopping.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Terminal is a Toolkit
&lt;/h3&gt;

&lt;p&gt;The terminal isn’t one tool. It’s composable. Want to process JSON from an API? Pipe it to &lt;code&gt;jq&lt;/code&gt;. Transform file paths? &lt;code&gt;sed&lt;/code&gt; or &lt;code&gt;awk&lt;/code&gt;. Run a command on 50 files? &lt;code&gt;find | xargs&lt;/code&gt;. Monitor logs while coding? A tmux split with &lt;code&gt;tail -f&lt;/code&gt;.&lt;/p&gt;
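&lt;p&gt;A tiny, self-contained sketch of that composability. The file names are throwaway; the pipeline shape is the point:&lt;/p&gt;

```shell
# Count TODO markers across source files by composing find, xargs, grep, wc.
demo=$(mktemp -d)
printf 'fn main() {}\n// TODO: handle errors\n' > "$demo/a.rs"
printf '// TODO: add tests\n// TODO: docs\n'    > "$demo/b.rs"
find "$demo" -name '*.rs' | xargs grep -h 'TODO' | wc -l   # prints 3
rm -rf "$demo"
```

&lt;p&gt;Four tools, none of which know about each other, wired together in one line.&lt;/p&gt;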

&lt;p&gt;When you add AI to this composability, things get wild.&lt;/p&gt;

&lt;p&gt;I can prompt Claude Code in one pane, watch it write code in another, pipe the output through a test runner, grep the results, and feed errors back into the AI. All without leaving the terminal. All scriptable. All reproducible.&lt;/p&gt;
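&lt;p&gt;That loop can be sketched in plain shell. The test runner and file names here are made up, and the AI invocation is left as a comment because the exact call depends on which tool you run:&lt;/p&gt;

```shell
# Sketch of the feedback loop: run tests, grep out failures, save them
# to feed back into the AI pane as context for the next prompt.
work=$(mktemp -d)
cd "$work"

# The AI writes code in another pane; here we fake a test run's output.
# In a real session this line would be something like:
#   npm test 2>&1 | tee test-output.txt
printf 'ok 1\nnot ok 2 - parser fails on empty input\nok 3\n' > test-output.txt

# Pull the failures out of the noise:
grep '^not ok' test-output.txt > failures.txt

# failures.txt is now ready to paste (or pipe) into the AI session.
cat failures.txt
```

&lt;p&gt;Because every step is a file and a command, the whole cycle is scriptable and repeatable.&lt;/p&gt;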

&lt;p&gt;Cursor can’t do that. You can’t pipe Cursor’s output through grep. You can’t script it to run in a loop. It’s a black box. The terminal is Lego blocks, and AI is just another piece.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance You Actually Feel
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbja3b2hnptzztc5p2r0o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbja3b2hnptzztc5p2r0o.gif" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VS Code and Cursor are Electron apps. They’re running a full Chromium browser under the hood. Neovim is written in C. It opens instantly, uses around 50MB of RAM, and never lags.&lt;/p&gt;

&lt;p&gt;When I spin up 10 tmux windows with Neovim and AI tools, my system barely blinks. Try opening 10 VS Code windows and listen to your fan scream.&lt;/p&gt;

&lt;p&gt;I juggle 5 to 10 worktrees at any given time. Each one is a separate environment. If each took 500MB of RAM and 3 seconds to load, my workflow would fall apart. Neovim and tmux? Instant, lightweight, snappy.&lt;/p&gt;
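&lt;p&gt;Each worktree is just a cheap extra checkout. A sketch of spinning one up, with the repo path and branch name invented for the example:&lt;/p&gt;

```shell
# Create a throwaway repo, then add a second worktree on a new branch.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'init'

# One command, and a parallel checkout exists on its own branch:
git -C "$repo" worktree add -q "$repo-feature" -b feature-x
git -C "$repo" worktree list    # the main checkout plus the feature-x tree
```

&lt;p&gt;No clone, no duplicate object store; tearing it down is &lt;code&gt;git worktree remove&lt;/code&gt;.&lt;/p&gt;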

&lt;h3&gt;
  
  
  My Setup is Code
&lt;/h3&gt;

&lt;p&gt;My entire development environment is 600 lines of dotfiles in a git repo. New machine? Clone it, run a script, and I’m back up in two minutes. Same keybinds, same plugins, same aliases, same AI integrations.&lt;/p&gt;

&lt;p&gt;GUI tools let you sync settings, but you’re at the mercy of their config systems. With terminal tools, the setup &lt;em&gt;is code&lt;/em&gt;. You can version it, diff it, review it, share it. I can recreate my entire workflow on a fresh VM faster than VS Code can install.&lt;/p&gt;
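&lt;p&gt;The bootstrap itself can be a few lines of shell. This is a hypothetical sketch (the repo layout and target home directory are simulated here), not my actual install script:&lt;/p&gt;

```shell
# Symlink every dotfile from a cloned repo into the home directory.
# In real life $dots would be the cloned dotfiles repo and $home would be $HOME.
home=$(mktemp -d); dots=$(mktemp -d)
touch "$dots/.tmux.conf" "$dots/.vimrc"      # stand-ins for tracked configs

for f in "$dots"/.[!.]*; do
  ln -sf "$f" "$home/$(basename "$f")"
done

ls -A "$home"    # the symlinked dotfiles
```

&lt;p&gt;Symlinks mean editing the live config edits the repo, so every tweak is one &lt;code&gt;git diff&lt;/code&gt; away from review.&lt;/p&gt;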

&lt;h3&gt;
  
  
  These Tools Have Staying Power
&lt;/h3&gt;

&lt;p&gt;Vim came out in 1991. Neovim in 2014. tmux in 2007. They’ve outlived programming languages, frameworks, companies. They’ll outlive Cursor. They’ll outlive Zed. They might outlive JavaScript.&lt;/p&gt;

&lt;p&gt;The muscle memory I’m building, the keybinds, the workflow patterns: they’ll be relevant in 10 years. Will Cursor? Maybe. Probably not. Learning Vim is an investment that compounds. GUI tools are a bet on a company staying solvent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Control Gradient
&lt;/h2&gt;

&lt;p&gt;Here’s how I think about the spectrum:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full Control (Traditional Vim)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You write every character. You think through every problem. Slow, but you know exactly what’s happening.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-Assisted (My Current Setup)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You direct the solution. AI accelerates execution. You stay close to the code. You review as it’s built, not after. And you can still make trivial edits by hand where that’s faster than asking the agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-First (Cursor, Zed)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
AI suggests, you accept/reject. Fast, but you’re reacting more than creating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-Centric (Zencoder, vibe-kanban)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
AI builds, you review diffs. Fastest, but you’re a product manager for code you didn’t write.&lt;/p&gt;

&lt;p&gt;Right now, &lt;strong&gt;AI-assisted is the sweet spot.&lt;/strong&gt; I get speed without losing control. I stay in my flow. I trust the output because I was &lt;em&gt;there&lt;/em&gt; while it was written.&lt;/p&gt;

&lt;p&gt;But I’ll be honest: &lt;strong&gt;this might not last.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI is getting scary good at both writing &lt;em&gt;and&lt;/em&gt; reviewing code. It’s catching bugs I miss. It’s flagging patterns I overlook. If AI review becomes as reliable as human review, the argument for staying hands-on gets weaker.&lt;/p&gt;

&lt;p&gt;We might be transitioning to AI-centric whether we like it or not. The question is: how long do I have before “staying close to the code” becomes nostalgia instead of pragmatism?&lt;/p&gt;

&lt;p&gt;For now, I’m staying in the terminal. But I’m watching.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Uncertainty
&lt;/h2&gt;

&lt;p&gt;This workflow makes me feel in control. I know where my code is, how it got there, and what’s happening at every step. I’m not watching an AI build something in the background and hoping it got it right. I’m directing it, reviewing as it goes, staying close to the work.&lt;/p&gt;

&lt;p&gt;That control matters. For now.&lt;/p&gt;

&lt;p&gt;But I’d be lying if I said I’m confident this is the future. AI is getting better at an absurd pace. It’s already catching bugs I miss. It’s writing code that would take me hours in minutes. It’s reviewing my work and finding patterns I overlooked.&lt;/p&gt;

&lt;p&gt;At some point, the argument for staying hands-on stops being pragmatic and starts being nostalgic. Maybe we’re already there and I just don’t want to admit it.&lt;/p&gt;

&lt;p&gt;I keep testing the AI-first tools. Zencoder, vibe-kanban, Cursor. Not because I think they’re worse, but because I want to know when the terminal workflow stops being the smart choice and starts being stubbornness.&lt;/p&gt;

&lt;p&gt;Maybe that happens next week. Maybe it already happened and I’m just slow to see it. AI-centric development might not be a distant future. It might be now, and I’m still clinging to a workflow that makes me comfortable.&lt;/p&gt;

&lt;p&gt;For today, I’m staying in the terminal. It’s faster for me. It fits how I think. It keeps me close to the code in a way that feels right.&lt;/p&gt;

&lt;p&gt;But tomorrow? Who knows.&lt;/p&gt;

&lt;p&gt;The terminal-centric workflow wins for me right now. But I’m watching the gap close. And when it does, I’ll probably switch. Not because I want to, but because it’ll be the obvious move.&lt;/p&gt;

&lt;p&gt;Until then, I’ll keep hitting &lt;code&gt;at&lt;/code&gt; and letting the AI do the heavy lifting while I stay at the wheel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp8havgwpdwv2sc8qzrj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp8havgwpdwv2sc8qzrj.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/are-we-still-developers-the-hidden-cost-of-vibe-coding/" rel="noopener noreferrer"&gt;Are We Still Developers? The Hidden Cost of Vibe Coding&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aicoding</category>
      <category>claudecode</category>
    </item>
  </channel>
</rss>
