<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chetan Munigangappa</title>
    <description>The latest articles on DEV Community by Chetan Munigangappa (@chetangangappa).</description>
    <link>https://dev.to/chetangangappa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3798336%2Fa62e1117-485b-4d8e-9f20-cd5ea8dc267c.png</url>
      <title>DEV Community: Chetan Munigangappa</title>
      <link>https://dev.to/chetangangappa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chetangangappa"/>
    <language>en</language>
    <item>
      <title>AI in the SDLC: The Next 5 Years</title>
      <dc:creator>Chetan Munigangappa</dc:creator>
      <pubDate>Sun, 08 Mar 2026 19:44:19 +0000</pubDate>
      <link>https://dev.to/chetangangappa/ai-in-the-sdlc-the-next-5-years-2364</link>
      <guid>https://dev.to/chetangangappa/ai-in-the-sdlc-the-next-5-years-2364</guid>
      <description>&lt;p&gt;As the new year came in, I found myself reading every AI prediction I could find. I stopped after the second week. Not because the writing was bad, some of it was sharp, but because the forecasts were expiring faster than I could finish them. A new model would drop, a vendor would announce something, a benchmark would get shattered, and whatever someone had written in January would feel like archaeology by February. The half-life of a one-year AI prediction is shorter than a sprint cycle.&lt;/p&gt;

&lt;p&gt;The deeper problem was what they were measuring. They counted automatable tasks, ran benchmarks, estimated job exposure. None of them asked how organisations actually decide to restructure, or what happens when the cost curves shift but the org chart doesn't. They treated software engineering as a set of tasks to optimise rather than a function embedded in institutions with their own logic and friction.&lt;/p&gt;

&lt;p&gt;Instead of betting on what a model release does to your Q3 velocity, this piece looks at what five years of compounding AI adoption does to the organisations building software. To answer that properly, you have to start somewhere most predictions skip: how companies actually work.&lt;/p&gt;

&lt;h2&gt;How Organisations Work&lt;/h2&gt;

&lt;p&gt;The org chart on the company website is not wrong, exactly. It's just incomplete in the ways that matter.&lt;/p&gt;

&lt;p&gt;The conventional picture is a pyramid: executives at the top set direction, managers in the middle transmit it, teams at the bottom execute it. Clean lines, clear accountability, everybody knows their lane. The reality is messier and more interesting. Large and medium-sized organisations don't function as pyramids. They function as translation machines, and the translation happens in layers, each with a different time horizon and a different kind of work.&lt;/p&gt;

&lt;p&gt;The C-suite operates on the longest horizon. They own cross-organisational initiatives: the three-to-five year bets on market position, the regulatory compliance programmes, the platform migrations that span multiple budget cycles. Crucially, and this is the part that surprises people, they often don't know every project currently running in the organisation. They don't need to. Their job is to set the direction and the constraints, not to track the work. A CEO who knows the details of every active sprint has bigger problems than an inefficient org chart.&lt;/p&gt;

&lt;p&gt;Below them sits the first middle layer: directors, senior managers, heads-of. These are the translators. They take a strategic initiative, say "we need to reduce infrastructure costs by 30% over three years", and decompose it into programmes and projects that can actually be staffed, scoped, and delivered. This is not mechanical work. It requires understanding both the strategic intent and the operational reality well enough to know when the two are in tension. A director who can't push back on an initiative that's been scoped unrealistically isn't doing the job. McKinsey's research on long-range planning found that just over half of companies regularly translate strategic goals into three-to-seven year financial plans, meaning the long-horizon bet exists, but it's always in tension with quarterly cost pressure. The first middle layer lives in that tension every day.&lt;/p&gt;

&lt;p&gt;Below them sits the second middle layer: managers, project managers, delivery managers, scrum masters, team leads. These are the executors. They take the programmes handed down from the first layer and make them happen, coordinating across teams, tracking dependencies, running ceremonies, translating requirements into tickets, escalating blockers, reporting status upward. The work is real and the pressure is constant. But the nature of the work is fundamentally different from the layer above it. The first layer exercises judgment about what to build. The second layer exercises coordination to make sure it gets built.&lt;/p&gt;

&lt;p&gt;This distinction, judgment versus coordination, is the one the org chart doesn't show you. And it's the one that matters for everything that follows.&lt;/p&gt;

&lt;p&gt;One important caveat before we go further: none of this describes small businesses or smaller mid-sized organisations. In a 20-person company, the founder is the C-suite, the first middle layer, and often the second. Decision-making is fast, hierarchy is flat, and the layers described above either collapse into one person or don't exist at all. The dynamics in this piece apply to organisations large enough to have grown the full stack, typically from around 200 people upward, where the coordination burden has grown large enough to justify dedicated roles for it. Below that threshold, different rules apply.&lt;/p&gt;

&lt;h2&gt;The Shrinking Hierarchy&lt;/h2&gt;

&lt;p&gt;The second middle layer doesn't grow because companies are wasteful. It grows because visibility has a cost, and that cost scales with complexity.&lt;/p&gt;

&lt;p&gt;When a director hands a programme to an engineering team, they need to know if it's on track. Not in real time, but reliably enough to escalate before something becomes a crisis. The project manager's core function is producing that signal. Status reports flow upward. Risk registers get updated. Sprint ceremonies create checkpoints. The system works because human communication between layers requires human intermediaries to manage it.&lt;/p&gt;

&lt;p&gt;The problem is that as organisations grow, the surface area requiring visibility expands faster than the programmes themselves. Each new team is another node. Each new dependency is another handoff requiring tracking. The answer companies have consistently reached for is more coordination, more people whose job is to make sure other people know what is happening. This is how you end up with what Zuckerberg described when restructuring Meta: managers managing managers, managing managers, managing the people actually doing the work. His diagnosis was right even if the fix was blunt. You cannot remove a coordination layer without replacing the coordination function. The question is whether that function still requires humans to perform it.&lt;/p&gt;

&lt;p&gt;For most of the current second middle layer, it doesn't. Not entirely, and not immediately, but directionally. The project manager's work splits into two functions that rarely get separated. The first is synthesis: reading across Jira, GitHub, budget trackers, and vendor systems to assemble a coherent picture of programme health for the director. The second is planning: working with directors on next quarter's capacity, budget allocation, and programme scope. Today both require a human because the underlying systems don't talk to each other in any meaningful way. Someone has to read each one, reconcile the signals, and produce the output.&lt;/p&gt;

&lt;p&gt;Agentic AI systems are beginning to make the synthesis function redundant. OpenClaw, one of the first systems to genuinely execute across tools rather than simply respond to queries, points at what this looks like in practice: an agent that reads your Jira board, watches your GitHub PRs, tracks budget burn, and surfaces a coherent programme picture without a human assembling it. When that synthesis becomes reliable, the project manager's time reclaims itself. What remains is the judgment work: scope tradeoffs, stakeholder management, pushing back on unrealistic timelines, and working with directors on planning cycles that currently consume weeks of manual data assembly. That work was always the more valuable half of the role. It just got buried under the synthesis overhead.&lt;/p&gt;

&lt;p&gt;For engineering managers the change cuts deeper, and in a different direction. The constraint on how many direct reports a manager can effectively handle has never been the number of humans they could track. It has been the quality of attention they could give each one. Career conversations, technical mentorship, performance calibration, helping someone work through a problem they are stuck on. These are not coordination functions and they cannot be delegated to a dashboard. What AI changes is the overhead around them: the status chasing, ticket grooming, and ceremony facilitation that consumes the hours that should be spent developing people. In the 1980s the average managerial span was 1-to-4 direct reports. Information technology moved that closer to 1-to-10. AI moves it further still.&lt;/p&gt;

&lt;p&gt;But the span widening is the smaller part of the story. The bigger part is what engineers are now being asked to become. For most of the past decade, as the previous post argued, engineers were reduced to implementers — handed tickets, kept away from stakeholders, insulated from the business context that would have made their work meaningful. AI is reversing that. The engineer who only closes tickets is being displaced by tooling. What survives is the engineer who can engage with the problem domain, work directly with stakeholders, own outcomes rather than tasks. That is a significantly harder job to grow someone into than the one the industry settled for. Career conversations get more complex. Mentorship requires more than code review. Performance calibration becomes about judgment and domain understanding, not velocity metrics. The engineering manager who was already stretched thin on four direct reports, spending most of their time in ceremonies and status updates, now has more reports, deeper development conversations, and engineers whose scope of responsibility has expanded substantially. The coordination overhead coming down is what makes that possible. It is not optional relief. It is the condition that makes the expanded role survivable.&lt;/p&gt;

&lt;p&gt;For directors the change is about bandwidth. Their job is translating strategy into programmes and making judgment calls about priority and scope. AI does not do that. What changes is the fidelity and speed of the information they are working from. A director who currently waits for a weekly status report will instead have a live synthesis across the programme portfolio. The time freed from information gathering goes into the work that was always the point: refining programmes for the next quarter, initiating new ones, catching misalignment between strategic intent and execution before it becomes expensive. The planning cycle that currently takes weeks of manual data assembly collapses to days. The director who was constrained by information velocity becomes constrained by their own judgment speed, which is as it should be.&lt;/p&gt;

&lt;p&gt;For the C-suite, the change is about visibility and calibration. They don't need to know every project, but they do need to sense when strategic intent and execution are drifting apart. AI-assisted synthesis across the programme portfolio makes that drift visible earlier. The quarterly business review becomes less about assembling the picture and more about interrogating it. The CEOs who thrive will be those who use the freed bandwidth to engage more deeply with the three-to-five year bets that actually determine their company's position, not those who use it to meddle in execution they never needed to own.&lt;/p&gt;

&lt;p&gt;The through-line across all three layers is the same: coordination work that was performed by humans because the systems didn't talk to each other becomes automated. Judgment work that was always the point becomes the primary occupation. The hierarchy doesn't disappear. It shrinks. The same organisational function gets performed by fewer people, each with a wider span and a sharper focus on the work that actually matters.&lt;/p&gt;

&lt;h2&gt;The Broadening Opportunity&lt;/h2&gt;

&lt;p&gt;Gartner projects that 80% of the most common customer service issues will be handled by agentic AI by 2029. That's the number everyone cites. What they miss is what happens to the remaining 20%.&lt;/p&gt;

&lt;p&gt;The scripted work goes away: password resets, order status, appointment scheduling. It should. That work was never the interesting part of the job. What remains is what was always hardest to scale: the customer disputing a charge they don't understand, frustrated, trying to explain a situation that fits no category in any decision tree. That interaction needs someone who can listen past the complaint to the actual problem, make a judgment call about what the right resolution is, and leave the customer feeling heard rather than processed. The agent role doesn't disappear. It sheds what it was never good at and concentrates on what it always should have been doing.&lt;/p&gt;

&lt;p&gt;But the more important point isn't what call centres lose within their own walls. It's what the wider market creates to absorb it and then some. The WEF's 2025 Future of Jobs report is unambiguous: 92 million jobs displaced by 2030, 170 million new ones created. Over 85% of employment growth since 1940 came from technology-driven job creation, and the pattern is consistent. This cycle is no different. The same AI that automates the scripted interaction is simultaneously creating the healthcare platform, the legal compliance tool, the industrial monitoring system, each of which generates its own demand for people who understand the domain deeply enough to support it. The jobs don't disappear. They move toward complexity, and they multiply in the process.&lt;/p&gt;

&lt;p&gt;The mechanism is inference cost. GPT-3.5-level performance dropped from $20 per million tokens in November 2022 to $0.07 by October 2024, a roughly 280-fold reduction in just under two years. Projects that failed an ROI calculation in 2021 need recalculating at 2025 prices. The long tail of businesses Salesforce and SAP never properly served — too small for enterprise software, too complex for off-the-shelf tools — is now economically addressable. The constraint has shifted from cost to capability: not "we cannot afford this" but "we need someone who understands our domain well enough to build this right."&lt;/p&gt;
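&lt;p&gt;To make that recalculation concrete, here is a minimal sketch. The per-million-token prices are the figures cited above; every workload number (tokens per request, request volume, value per request) is an assumption invented purely for illustration, not data from any real product:&lt;/p&gt;

```python
# Hedged illustration: rerunning the same ROI calculation at old and new token prices.
# The prices are the per-million-token figures cited in the text; the workload
# constants below are assumptions made up for this sketch.

def monthly_inference_cost(price_per_million_tokens, tokens_per_request, requests_per_month):
    """Total monthly inference spend for a given workload."""
    return price_per_million_tokens * tokens_per_request * requests_per_month / 1_000_000

TOKENS_PER_REQUEST = 2_000    # assumption: prompt plus completion per call
REQUESTS_PER_MONTH = 500_000  # assumption: a mid-sized product's monthly volume
VALUE_PER_REQUEST = 0.001     # assumption: $0.001 of business value per call

monthly_value = VALUE_PER_REQUEST * REQUESTS_PER_MONTH
for label, price in [("Nov 2022 pricing", 20.00), ("Oct 2024 pricing", 0.07)]:
    cost = monthly_inference_cost(price, TOKENS_PER_REQUEST, REQUESTS_PER_MONTH)
    print(f"{label}: ${cost:,.2f}/month of inference against ${monthly_value:,.2f}/month of value")
```

&lt;p&gt;Under these made-up numbers, the 2022 price burns $20,000 a month to produce $500 of value; the 2024 price delivers the same feature for $70 a month. Nothing about the product changed, only the denominator of the ROI calculation.&lt;/p&gt;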

&lt;p&gt;That space also includes a lot of bad code. Humans have been shipping debt-laden software since long before AI arrived — the bootcamp era produced industrial quantities of it. AI didn't invent slop, it inherited the tendency and gave it a faster engine. GitClear tracked an eightfold increase in duplicated code blocks during 2024, with 46% of code changes consisting entirely of new lines while refactored and moved code dropped sharply. MIT professor Armando Solar-Lezama called it "a brand new credit card" for accumulating technical debt in ways the industry never could before. The expansion creates its own counter-demand: the backlog of systems needing someone who can read them, diagnose them, and make principled decisions about what to fix grows alongside the new projects. Who fixes the slop matters more than who created it.&lt;/p&gt;

&lt;p&gt;Healthcare is the clearest domain that has crossed a viability threshold. Buying cycles have compressed from 12 to 18 months down to under six, yet 80% of the market remains untapped. Prior authorisation systems that trap clinical staff in paperwork producing no patient value. Voice interfaces for patient engagement a two-doctor practice could never previously afford. Diagnostic support tools surfacing patterns across patient records at a scale no clinician could maintain unaided. These are not incremental improvements. They are categories of software that didn't exist as commercially viable products three years ago. The engineering required to build them correctly, integrating with legacy clinical systems, navigating data governance constraints, designing for safety-critical failure modes, is hard in ways no amount of AI assistance substitutes for. That hardness is the opportunity.&lt;/p&gt;

&lt;p&gt;Legal and compliance follows the same pattern with added regulatory tailwind. The EU AI Act, the EU Cyber Resilience Act, evolving data sovereignty requirements across jurisdictions — each a new surface area of compliance work organisations need software to manage. Contract review, regulatory change monitoring, audit trail generation: work that previously required expensive specialist time, or simply wasn't done rigorously, is now viable at a cost that makes sense for organisations of all sizes. Regulation is a forcing function for software investment, and the current environment is generating more of them than the industry has seen in a decade.&lt;/p&gt;

&lt;p&gt;Manufacturing and industrial software is crossing a different threshold entirely. Predictive maintenance systems previously requiring expensive specialist integration can now run on existing sensor infrastructure at a fraction of the cost. Digital twins for factory floors, simulations that let operators model consequences before making changes, are moving from enterprise-only to mid-size manufacturers. Quality control systems catching defects in real time rather than in post-production sampling. The engineering problems are genuinely difficult: real-time control loops, safety-critical systems, integration with legacy industrial hardware designed before the internet existed. That difficulty is not a barrier; it is a moat for engineers who can navigate it.&lt;/p&gt;

&lt;p&gt;Robotics sits at the intersection of several of these domains. The convergence of cheaper foundation models, affordable actuators, and improved simulation tooling has brought humanoid and industrial robots into commercial viability faster than most observers predicted. The software governing these systems — perception pipelines, real-time control loops, safety monitoring, human-robot interaction layers — represents an enormous surface area of engineering work that barely existed as a commercial discipline in 2020. Every new deployment is a new software project. The patterns are still being established, which is precisely when it is most valuable to be involved.&lt;/p&gt;

&lt;p&gt;Security sits in a category of its own because AI is simultaneously creating the problem and generating the demand for people to solve it. The AI cybersecurity market is projected to reach $86 billion by 2030, driven by accelerating attack surface expansion and a talent shortage that already stood at 4.8 million unfilled positions before agentic AI began proliferating across enterprise stacks. The adversarial dynamic is what makes this domain structurally different from every other one here. Attackers use the same foundation models, the same code generation tools, the same agentic frameworks that defenders do. The threat surface expands every time a new agent ships, every time a developer uses AI to generate integration code without understanding the security model of the library they're calling. What the next five years demand is the engineer who reasons adversarially: who identifies how a system might be exploited before it is, who treats security as a design constraint from the first conversation rather than a compliance checkbox before launch. Every IT position is becoming a cybersecurity position. Security makes the broader argument of this piece urgent rather than merely interesting.&lt;/p&gt;

&lt;p&gt;For businesses the horizontal expansion may be the most significant shift of the next five years, not because the tools are different but because for the first time they are accessible. A 15-person logistics company can build custom route optimisation. A regional accountancy firm can offer AI-powered client tools that would have required a dedicated engineering team two years ago. The assumption that custom software was for enterprises is becoming false faster than most small business owners realise. That shift creates demand for engineers who understand a domain well enough to build for it — not engineers who know the frameworks, but engineers who know the problem.&lt;/p&gt;

&lt;h2&gt;Open Source Gets Complicated&lt;/h2&gt;

&lt;p&gt;In January 2026, Adam Wathan posted a GitHub comment explaining why three of the four engineers maintaining Tailwind CSS had just lost their jobs. The project powers over 617,000 live websites. Monthly downloads exceed 75 million. Usage has never been higher. Revenue has never been lower.&lt;/p&gt;

&lt;p&gt;The conversion funnel that sustained the operation broke when AI started answering the questions that used to bring developers to the documentation site. When a community member submitted a pull request to make Tailwind's documentation easier for AI to consume, Wathan declined. Easier AI access meant fewer human visits, which meant less revenue, which meant faster collapse. This is not a Tailwind problem. It is the open source funding problem, made visible.&lt;/p&gt;

&lt;p&gt;The ecosystem was built on an unspoken assumption that maintainers would keep going out of goodwill, reputation, or employment by a company that benefited indirectly. That assumption held as long as two conditions were true: the maintenance burden stayed manageable, and the monetisation paths stayed open. AI is closing the monetisation paths and increasing the maintenance burden simultaneously.&lt;/p&gt;

&lt;p&gt;What emerges is an hourglass dynamic. At the top, the Pareto principle becomes more pronounced: attention, funding, and trust concentrate on a shrinking number of well-governed, institutionally backed projects. OpenSearch, Valkey, OpenTofu survive because they have the backing to absorb scrutiny. At the bottom, forking becomes so cheap that organisations maintain their own versions for trivial differences. A team that would previously have spent a week submitting a PR upstream now forks in an afternoon and moves on. The fork never gets maintained properly. Nobody contributes back. The middle tier — projects sustainable on community goodwill and modest corporate interest — gets squeezed from both directions.&lt;/p&gt;

&lt;p&gt;OpenClaw became the most starred project on GitHub in early 2026. It is built on Node, TypeScript, and dozens of smaller libraries that most of the developers behind its 50,000 stars have never heard of. Six weeks after going viral, its creator announced he was joining OpenAI and handing the project to a foundation. The foundation model is the right response. It is also an admission that individual maintainers cannot sustain what the ecosystem now demands of them.&lt;/p&gt;

&lt;p&gt;The maintenance burden was already severe. Tidelift's 2024 survey found 60% of open source maintainers are unpaid, with 86% of organisational investment coming through employee labour rather than direct financial support. The xz backdoor incident showed what critical infrastructure running on exhausted volunteers looks like under pressure. Maintainer trust toward new contributors dropped 66% in the aftermath, deepening the isolation that accelerates burnout. AI adds noise rather than relief: generated issues and pull requests of variable quality flood maintainers while usage increases and documentation traffic — the monetisation path for many — declines.&lt;/p&gt;

&lt;p&gt;The second revenue collapse happens when AI agents replace human visitors. MCP and similar protocols let AI read documentation, answer integration questions, and generate working code without a human ever landing on your site. The advertising, sponsorship, and consulting leads that sustained many maintainers disappear. More usage. Less money. More work.&lt;/p&gt;

&lt;p&gt;The supply chain consequence is where this becomes a board-level problem. Attackers have learned to register malicious packages under names that AI code generation tools hallucinate. A developer who accepts an AI package suggestion without verifying it may import malware directly into production, at a speed no manual review process was designed to match. Cisco's security team tested a third-party OpenClaw skill and found it performing data exfiltration and prompt injection without user awareness, while the skill repository had no vetting process to prevent malicious submissions. The EU Cyber Resilience Act and US executive orders on software supply chain are beginning to make the dependency graph a legal liability rather than a technical convenience. Enterprises consolidating on fewer, better-audited components are responding rationally to a threat model the casual open source assumptions of the 2010s were never designed to handle.&lt;/p&gt;
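&lt;p&gt;One cheap, mechanical defence against hallucinated package names is to refuse any dependency an AI suggests that is not already pinned in the project's lockfile, routing everything else to a human. A minimal sketch of that gate (the lockfile parsing and function names here are hypothetical, invented for this illustration, not taken from any real supply-chain tool):&lt;/p&gt;

```python
# Hedged sketch: gate AI-suggested dependencies against the names already pinned
# in a lockfile before anything is installed. The parsing is deliberately
# simplified and hypothetical; it handles only plain 'name==version' lines.

def load_pinned_names(lock_lines):
    """Extract package names from simple 'name==version' requirement lines."""
    names = set()
    for line in lock_lines:
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            names.add(line.split("==")[0].lower())
    return names

def vet_suggestions(suggested, pinned):
    """Split AI-suggested packages into already-vetted names and ones needing review."""
    approved = [s for s in suggested if s.lower() in pinned]
    needs_review = [s for s in suggested if s.lower() not in pinned]
    return approved, needs_review

pinned = load_pinned_names(["requests==2.31.0", "numpy==1.26.4", "# a comment"])
approved, review = vet_suggestions(["requests", "reqeusts-toolbelt"], pinned)
print(approved)  # the typosquatted-looking name lands in the review queue instead
```

&lt;p&gt;The point is not the few lines of Python; it is that the review queue, not the install command, becomes the default destination for any name the model invents.&lt;/p&gt;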

&lt;p&gt;The open source model worked because the attack surface was manageable and the goodwill was sufficient. AI expands the attack surface and exhausts the goodwill simultaneously. The projects that survive the next five years will be the ones that solved their sustainability model before the crisis arrived.&lt;/p&gt;

&lt;h2&gt;The Education Industrial Complex&lt;/h2&gt;

&lt;p&gt;Universities were already out of step with the industry before AI arrived. It was an open secret that most CS graduates needed twelve to eighteen months of on-the-job correction before they were genuinely useful, and the industry absorbed them anyway because demand was high enough to compensate for the mismatch. Only 30% of 2025 graduates found jobs in their field. 33% were unemployed and actively seeking work. 48% felt unprepared to apply for entry-level positions. The arrangement suited everyone except the graduates: universities filled seats, employers hired on credential and corrected on the job, and nobody had to confront the structural misalignment directly. AI is making that comfortable arrangement untenable.&lt;/p&gt;

&lt;p&gt;The deeper problem is what universities optimised for. The last decade pushed specialisation — data scientists, business analysts, UX researchers, each trained for a narrow role in a system that required those roles to exist in volume. CS degrees more than doubled between 2013 and 2022, universities expanding supply at precisely the moment the market was about to consolidate the most automatable roles. The fragmentation of job titles that drove curriculum design is exactly what the market is now reversing. The narrow specialist — the data scientist who cannot engage with a production system, the business analyst who cannot interrogate a technical tradeoff — has nowhere to land when the entry-level tier shrinks and the roles they were trained for get absorbed into broader engineering responsibilities.&lt;/p&gt;

&lt;p&gt;Understanding how slowly the system moves requires understanding what it is. The education industrial complex is not just universities. It is the accreditation bodies that evaluate programmes against standards written five years ago. It is the textbook publishers with billion-dollar catalogues built on curricula that haven't changed meaningfully since the 1990s. It is the professors whose tenure is awarded for research output, not teaching quality, and who have no career incentive to redesign courses around a field changing faster than any curriculum cycle can track. The ACM and IEEE updated their joint CS curricular guidelines in 2024. The previous update was in 2013. A decade between revisions in a field that fundamentally changed in eighteen months. The pace is not an oversight. It is the system working as designed.&lt;/p&gt;

&lt;p&gt;The Farley question applies here as directly as it does to the industry. Are graduates being trained to be scientists of their systems — forming hypotheses, testing them, learning from production, iterating — or are they being trained to be implementers of requirements? For most programmes, the answer is the latter. And the latter is the tier AI handles first. A student who spends three years learning to implement solutions to well-defined problems and graduates into a market where AI implements solutions to well-defined problems faster and cheaper is not the victim of bad luck. They are the product of a system that was optimised for a version of the industry that no longer exists.&lt;/p&gt;

&lt;p&gt;When knowledge transmission becomes cheap, the question is what universities are actually for. The foundational courses that justified years of tuition — data structures, algorithms, discrete mathematics — are available at zero marginal cost from tools students already use. The knowledge gap that justified generic foundational courses closes faster than curricula can be rewritten. The university that has not asked what it provides once that gap closes is already behind.&lt;/p&gt;

&lt;p&gt;The credential still matters as a baseline signal to employers. But what it certifies needs to change: not proof that a student sat through required modules, but proof that they can think independently across a domain and build judgment that survives contact with real problems. Personalised pathways that build on what students already know. Domain immersion that produces engineers who understand healthcare systems or financial regulations well enough to build for them without a translator. Sustained intellectual challenge that develops the judgment AI cannot replicate. That university exists. It is not the majority.&lt;/p&gt;

&lt;p&gt;The academy has survived every previous technological shift by absorbing it slowly. It absorbed the internet. It absorbed MOOCs, which were supposed to make universities obsolete in 2012 and mostly became a supplementary market. Whether this shift is different in kind or merely degree is the question. AI does not just change what students need to learn. It changes what learning itself looks like and who can provide it. Whether universities answer that question or wait for someone else to answer it for them may be the most consequential institutional decision of the next decade. The students currently in their lecture halls will graduate into whatever that answer turns out to be.&lt;/p&gt;

&lt;h2&gt;Evolution, Not Revolution&lt;/h2&gt;

&lt;p&gt;The next five years of AI in the SDLC will disappoint everyone waiting for a dramatic moment. There will be no single announcement, no model release, no product launch that draws a clean line between before and after. The AI-augmented software of 2025 — features bolted on, LLM wrappers shipped as products, "powered by AI" in every marketing deck — will quietly give way to something less visible and more consequential. AI-native software: built around what the technology actually does well, by engineers who understand the domain before they touch the model. The transition will feel unremarkable as it happens and significant in retrospect.&lt;/p&gt;

&lt;p&gt;The hype will persist throughout. Every new model release will generate another wave of disruption predictions, another round of job displacement headlines, another set of quarterly forecasts that expire before anyone finishes reading them. The noise is structural. The signal is slower and less exciting. DORA's finding that a 25% increase in AI adoption produces a 2.1% productivity lift is not a headline. It is the honest starting point for a compounding argument that plays out over years, not quarters.&lt;/p&gt;

&lt;p&gt;What compounds is not the technology. It is the organisational adaptation — the companies figuring out how to adopt AI natively rather than additively, the engineers expanding their scope rather than defending their tickets, the managers using freed coordination overhead to actually develop their people. Those advantages are invisible quarter by quarter. Over five years, they are structural.&lt;/p&gt;

&lt;p&gt;Not disruption. Accumulation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>sdlc</category>
      <category>predictions</category>
    </item>
    <item>
      <title>AI in SDLC: A 2025 Retrospective</title>
      <dc:creator>Chetan Munigangappa</dc:creator>
      <pubDate>Sat, 28 Feb 2026 14:01:32 +0000</pubDate>
      <link>https://dev.to/chetangangappa/ai-in-sdlc-a-2025-retrospective-4c4b</link>
      <guid>https://dev.to/chetangangappa/ai-in-sdlc-a-2025-retrospective-4c4b</guid>
      <description>&lt;p&gt;It's February 2026 and I've got a fractured leg, which turns out to be the right conditions for looking back at a year that moved too fast to examine properly at the time. Laptop on the sofa, nowhere to be, no standups to attend, no Slack pings filling the gaps between thoughts. Just the kind of enforced pause that doesn't happen in a normal engineering career.&lt;/p&gt;

&lt;p&gt;And what keeps coming back is 2025. Specifically, how different the work felt, and why.&lt;/p&gt;

&lt;p&gt;David Farley defines software engineering as "the application of an empirical, scientific approach to finding efficient, economic solutions to practical problems in software. It requires practitioners to become experts at both learning effectively and managing complexity sustainably."&lt;/p&gt;

&lt;p&gt;Sitting here with time to actually think, I keep returning to that definition. Not as a textbook quote — as a recognition. It describes exactly what 2025 forced a return towards.&lt;/p&gt;

&lt;p&gt;We forgot this. Or rather, we let others forget it for us. And 2025 was the year the consequences became impossible to ignore.&lt;/p&gt;

&lt;h2&gt;How Software Engineers Became the Least Important People in Software Engineering&lt;/h2&gt;

&lt;p&gt;Somewhere in the 2010s, we collectively decided that software engineering was a coding problem. The bootcamp explosion promised that anyone could learn to code in twelve weeks and immediately qualify for a six-figure job. General Assembly, Lambda School, Flatiron School — they all taught variations of the same curriculum: React, Rails, JavaScript fundamentals, maybe some basic database work. The implicit promise was that coding was the skill that mattered. Learn the syntax, learn the frameworks, and you were an engineer.&lt;/p&gt;

&lt;p&gt;This was never true, but it was convenient. Convenient for the industry, which needed implementers faster than universities could produce them. Convenient for business people, who wanted to believe that the "vision" and the "strategy" were the hard parts, while the coding was just execution. Convenient for the bootcamps themselves, which could sell a transformational experience in a timeframe that fit between unemployment benefits and desperation.&lt;/p&gt;

&lt;p&gt;What we actually produced was a generation of developers who knew how to build components but not how to decide what components to build. Developers who could write React but couldn't sit with a stakeholder and understand why a feature mattered. Developers who knew the technical implementation of a user story but had no context for the business problem that story was supposed to solve.&lt;/p&gt;

&lt;p&gt;The "full-stack" developer became the industry's darling — not because full-stack represented deep competence, but because it represented flexibility. One person who could do everything, which really meant one person who could be assigned to any ticket without complaining. The stack didn't matter; what mattered was that we'd found a way to make engineers interchangeable.&lt;/p&gt;

&lt;p&gt;Meanwhile, the complexity didn't go away. It just got managed by people who weren't trained to manage it. Business founders and product managers took ownership of "the vision" — often without understanding what was technically possible. Designers took ownership of "the experience" — frequently without understanding the data models that would have to support their interfaces. Architects emerged to "guide" technical decisions, often from positions where they no longer wrote production code. Engineering managers optimised for velocity metrics that measured activity rather than outcomes.&lt;/p&gt;

&lt;p&gt;Every new role that appeared was built on the same assumption: engineers couldn't be trusted with the full picture. They needed translation, guidance, oversight. The person who actually understood the system — who knew where the complexity lived, which assumptions were fragile, what would break under load — had the least authority to influence decisions.&lt;/p&gt;

&lt;p&gt;Technical debt became a "developer problem" rather than a business reality. Refactoring became something you did "when you had time" between feature deliveries. The complexity engineers managed was invisible to business stakeholders, which meant it was unvalued. When engineers tried to explain why a simple-sounding feature would take weeks, they were seen as making excuses rather than describing constraints.&lt;/p&gt;

&lt;p&gt;The bootcamp model reinforced this dynamic by design. They taught React not because React was the right tool for every problem, but because React was what employers wanted. They taught CRUD applications because CRUD applications were easy to teach and easy to evaluate. They produced developers who could follow tutorials, copy patterns, and implement specifications — but not developers who could define problems, evaluate tradeoffs, or own outcomes.&lt;/p&gt;

&lt;p&gt;This wasn't the fault of the bootcamp graduates. They were doing exactly what the system asked of them. The fault was with an industry that had convinced itself that coding was the valuable part, and that everything else — understanding context, designing approaches, managing complexity sustainably — was someone else's job.&lt;/p&gt;

&lt;p&gt;You could say I'm being elitist about bootcamps — that they gave access to people who couldn't afford CS degrees. But this isn't about credentials. It's about what we taught. Bootcamps taught coding as a commodity skill because that's what the industry demanded. The critique is of an industry that wanted implementers, not engineers. Access matters. What we give people access to matters more.&lt;/p&gt;

&lt;p&gt;By the early 2020s, software engineers had become the least important people in software engineering. We were the ones who actually built the systems, who understood how they worked, who managed the complexity that everyone else ignored. But we weren't the ones who decided what to build, or why, or for whom. We'd been reduced to expensive typists, implementing decisions made by people who didn't understand their implications.&lt;/p&gt;

&lt;p&gt;We called it "collaboration." It was mostly translation — endless meetings where engineers tried to explain technical constraints to business people who didn't want to hear them, and business people tried to explain user needs to engineers who weren't allowed to talk to users directly. The boundary between roles wasn't about efficiency; it was about control. And engineers had lost it.&lt;/p&gt;

&lt;p&gt;Then 2025 happened.&lt;/p&gt;

&lt;h2&gt;When AI Exposed the Gap&lt;/h2&gt;

&lt;p&gt;The first time I used Copilot to generate a React component in early 2025, I felt a strange mix of exhilaration and dread. The exhilaration was obvious — I'd just written a complex form handler in seconds instead of minutes. The dread took longer to identify. It wasn't that the AI was going to replace me. It was that the AI was making visible something I'd been trying not to see: the coding was never the hard part.&lt;/p&gt;

&lt;p&gt;I'd spent the previous decade optimising for coding speed. Learning new frameworks, mastering type systems, keeping up with the JavaScript ecosystem's relentless churn. All of that became nearly worthless overnight — not because the AI could do it better, but because the AI could do it fast enough that the difference between "good at coding" and "competent at coding" stopped mattering.&lt;/p&gt;

&lt;p&gt;What the AI couldn't do was understand why we were building something. It couldn't sit with risk engineers and learn how they actually processed reports. It couldn't evaluate whether a technical approach would scale with the business, or whether we were solving the right problem, or what would happen when the edge cases we hadn't considered inevitably appeared.&lt;/p&gt;

&lt;p&gt;Those weren't coding problems. They were engineering problems. And they'd been my problems all along — I just hadn't been allowed to own them.&lt;/p&gt;

&lt;p&gt;In January 2025, I started building a risk assessment platform for insurance underwriters. LightRAG would process their reports and generate standardised grading according to company guidelines. In 2024, this would have triggered the full organisational machinery: product manager for discovery, designer for workflows, architect for technical approach, probably three engineers for six months of implementation.&lt;/p&gt;

&lt;p&gt;Instead, I started with the SDK. Not because someone prioritised it in a roadmap, but because my data scientist partner needed something to test their prompt engineering against. They needed real inputs and real outputs, a way to iterate on scoring guidelines without waiting for a full platform. So I built a simple Python library — ingest reports, run them through LightRAG, return structured grading. A few days' work.&lt;/p&gt;
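&lt;p&gt;A minimal sketch of what that library's surface looked like. All names here are illustrative, and the RAG call is injected as a plain callable rather than reproduced — the actual LightRAG wiring, prompts, and grading guidelines were exactly the parts my partner was iterating on:&lt;/p&gt;

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Grade:
    """Structured grading for one report (fields are illustrative)."""
    report_id: str
    score: float       # graded per company guidelines
    rationale: str

def grade_reports(
    reports: Iterable[tuple[str, str]],   # (report_id, raw report text)
    run_rag: Callable[[str], dict],       # injected pipeline, e.g. a LightRAG query wrapper
) -> list[Grade]:
    """Ingest raw reports, run each through the injected RAG pipeline,
    and return structured grades to iterate against."""
    grades = []
    for report_id, text in reports:
        result = run_rag(text)            # expected shape: {"score": ..., "rationale": ...}
        grades.append(Grade(report_id, float(result["score"]), result["rationale"]))
    return grades

# A stubbed pipeline is enough to exercise the interface end to end:
stub = lambda text: {"score": 0.8, "rationale": "flood exposure noted"}
graded = grade_reports([("r-001", "Site inspection report ...")], stub)
```

&lt;p&gt;Injecting the pipeline as a callable is what made the parallel work possible: the stub stands in for the real model while the interface stays fixed, so prompt iteration and backend development never block each other.&lt;/p&gt;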

&lt;p&gt;While they tested prompts, I built the server backend. No handoff documents. No estimation rituals. Just parallel work coordinated through conversation. The speed was disorienting. I'd spent years waiting — for requirements, for designs, for approvals — and now there was nothing to wait for. The work was just... done. Then I could do more work.&lt;/p&gt;

&lt;p&gt;But the real shift came when I did something I'd never done before. I sat down with the risk engineers themselves. Not through a product manager who would translate. Not by reviewing personas someone else created. I sat in their workspace, watched them process reports, understood the actual pain of their current workflow. Then I sketched UI flows on a whiteboard whilst they told me what would work.&lt;/p&gt;

&lt;p&gt;This would have been impossible in 2024. Not because I lacked the skills — I could always sketch, always ask questions. But because the structure prevented it. The structure said that was product's job, or design's job. The structure said engineers implement, they don't discover. The structure said you wait for specifications, you don't create them.&lt;/p&gt;

&lt;p&gt;We refined those mockups over a week. I'd sketch something, they'd try it in their actual workflow, we'd identify what didn't work, I'd iterate. When we landed on something that felt right, I wrote the requirements documentation myself — translating their domain knowledge into technical specs whilst I could still ask clarifying questions.&lt;/p&gt;

&lt;p&gt;The platform grew organically. SDK became foundation. Backend took shape. UI emerged from those collaborative sessions. When the first business line was stable, I started conversations with the second business line directly — understanding their variations, adapting what we'd built. In 2024 this would have required a product manager, a designer, multiple engineers, and six months just to get started. I delivered the complete platform — SDK, backend, UI, two business lines, end-to-end stakeholder management — within a year, alone.&lt;/p&gt;

&lt;p&gt;What made this possible wasn't "10x coding." The AI helped with boilerplate, sure. But what actually made it possible was exercising the full engineering competence that Farley defined: understanding the problem deeply enough to design an efficient, economic solution. The coding was trivial. The engineering was not.&lt;/p&gt;

&lt;p&gt;When coding becomes fast, engineering judgement becomes visible. When implementation is cheap, understanding the problem becomes valuable. The things we'd offloaded to PMs and designers — understanding stakeholders, designing workflows, making tradeoffs — turned out to be engineering work after all. We'd been solving problems all along. We just weren't allowed to own the solutions.&lt;/p&gt;

&lt;h2&gt;What the Acceleration Broke&lt;/h2&gt;

&lt;p&gt;The transition wasn't clean. Three things broke, and they all point to the same root cause: treating engineering as coding for so long meant that when the full competence was suddenly required, neither organisations nor people were ready for it.&lt;/p&gt;

&lt;p&gt;My engineering manager said something like: "Now that AI handles the coding, you've got bandwidth for more." The scope expanded. The timeline didn't. Headcount stayed static. The assumption was that coding had become solved, freeing up capacity for "higher value work."&lt;/p&gt;

&lt;p&gt;But what he called "higher value work" was actually the engineering I'd been doing all along — understanding context, designing approaches, owning outcomes. The cognitive load increased whilst the recognition of that load didn't. The organisation saw efficiency gains and demanded more capability from the same people, without acknowledging that "doing it all" requires different skills, different energy, different support than doing one well-defined piece.&lt;/p&gt;

&lt;p&gt;In mid-2025, an intern joined our team. Bright, eager, armed with the same AI tools I used daily. I gave them a CI pipeline to migrate — a straightforward task based on a template and documented decisions. They stared at it, then reached for the AI. The AI completed the job, but not the whole job: there were environments, team standards, and internals the AI didn't understand. The deployment failed, the tool suggested a fix, and they applied it without understanding it. Predictably, this produced a successful deployment of a broken application. The error was small enough that even we missed it in PR review.&lt;/p&gt;

&lt;p&gt;This isn't their fault. They were doing exactly what the system taught them: code fast, use tools, deliver features. The system just never taught them that understanding matters more than output. AI accelerates experienced engineers because we already have patterns in our heads. For juniors, the same tools prevent those patterns from forming. We're creating a generation who can generate but can't understand.&lt;/p&gt;

&lt;p&gt;Then there was the production incident. The platform shipped fast because implementation had "no delay." There was no pause to learn Datadog properly, to understand observability best practices, to set up meaningful dashboards and alerts. I knew how to build the thing. I hadn't given myself time to learn how to operate it.&lt;/p&gt;

&lt;p&gt;When something broke at 2am, I was debugging blind. The dashboards were there — I'd set them up quickly, checking boxes without understanding what I was looking at. The metrics didn't tell me what I needed to know because I hadn't learned which metrics mattered. I fixed the immediate issue, but I didn't understand why it had happened, and that meant I couldn't be sure it wouldn't happen again.&lt;/p&gt;

&lt;p&gt;All three incidents stem from the same source. When coding is all you value, everything else becomes invisible. The organisation saw speed and demanded more of it. The junior saw tools and skipped the work. The senior saw a delivery target and missed the operational depth. Same mistake, three levels.&lt;/p&gt;

&lt;h2&gt;The Return to Form&lt;/h2&gt;

&lt;p&gt;The concrete problem isn't philosophical. The tools that accelerate experienced engineers actively damage junior formation. AI removes the struggle that builds intuition. Bootcamps already taught implementation over understanding. Combine them and you get developers who can produce but can't evaluate — who can generate code but can't hold a mental model of what that code actually does.&lt;/p&gt;

&lt;p&gt;The industry will need to deliberately rebuild how engineers are trained and mentored. Not by going back to the old gatekeeping — access matters — but by designing structures that force understanding. Assign problems where AI can't be the first answer. Require explanation, not just generation. Build in the pause that acceleration removes. Create the friction that learning requires.&lt;/p&gt;

&lt;p&gt;This is structural, not personal. New graduates aren't doomed. The structure that trained them needs to change — from teaching coding as a commodity skill to teaching engineering as problem-solving. From producing implementers who follow specifications to producing engineers who can sit with stakeholders, understand constraints, and own outcomes.&lt;/p&gt;

&lt;p&gt;"Finding efficient, economic solutions to practical problems." That's the job. It always was.&lt;/p&gt;

&lt;p&gt;The rest — the tickets, the ceremonies, the handoffs, the theatre of process that let everyone feel important whilst obscuring who was actually responsible — that was the deviation. We built an industry around the assumption that engineers couldn't handle complexity, then wondered why the complexity kept overwhelming us. We optimised for coding speed and forgot that understanding matters more than typing.&lt;/p&gt;

&lt;p&gt;AI didn't create a new kind of engineer. It revealed that the old definition was always the right one. We let business people convince us that coding was the valuable part. In doing so, we let ourselves become the least important people in the room. 2025 was the year that stopped working.&lt;/p&gt;

&lt;p&gt;The engineers who thrive in 2026 won't be the ones who prompt best. They'll be the ones who can understand a problem, design a solution, implement it reliably, and learn from it properly. The ones who never forgot — or who are now remembering — what software engineering actually means.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>sdlc</category>
      <category>engineeringculture</category>
      <category>retrospective</category>
    </item>
  </channel>
</rss>
