As the new year came in, I found myself reading every AI prediction I could find. I stopped after the second week. Not because the writing was bad (some of it was sharp) but because the forecasts were expiring faster than I could finish them. A new model would drop, a vendor would announce something, a benchmark would get shattered, and whatever someone had written in January would feel like archaeology by February. The half-life of a one-year AI prediction is shorter than a sprint cycle.
The deeper problem was what they were measuring. They counted automatable tasks, ran benchmarks, estimated job exposure. None of them asked how organisations actually decide to restructure, or what happens when the cost curves shift but the org chart doesn't. They treated software engineering as a set of tasks to optimise rather than a function embedded in institutions with their own logic and friction.
Instead of betting on what a model release does to your Q3 velocity, this piece looks at what five years of compounding AI adoption does to the organisations building software. To answer that properly, you have to start somewhere most predictions skip: how companies actually work.
How Organisations Work
The org chart on the company website is not wrong, exactly. It's just incomplete in the ways that matter.
The conventional picture is a pyramid: executives at the top set direction, managers in the middle transmit it, teams at the bottom execute it. Clean lines, clear accountability, everybody knows their lane. The reality is messier and more interesting. Large and medium-sized organisations don't function as pyramids. They function as translation machines, and the translation happens in layers, each with a different time horizon and a different kind of work.
The C-suite operates on the longest horizon. They own cross-organisational initiatives: the three-to-five year bets on market position, the regulatory compliance programmes, the platform migrations that span multiple budget cycles. Crucially, and this is the part that surprises people, they often don't know every project currently running in the organisation. They don't need to. Their job is to set the direction and the constraints, not to track the work. A CEO who knows the details of every active sprint has bigger problems than an inefficient org chart.
Below them sits the first middle layer: directors, senior managers, heads-of. These are the translators. They take a strategic initiative, say "we need to reduce infrastructure costs by 30% over three years", and decompose it into programmes and projects that can actually be staffed, scoped, and delivered. This is not mechanical work. It requires understanding both the strategic intent and the operational reality well enough to know when the two are in tension. A director who can't push back on an initiative that's been scoped unrealistically isn't doing the job. McKinsey's research on long-range planning found that just over half of companies regularly translate strategic goals into three-to-seven year financial plans, meaning the long-horizon bet exists, but it's always in tension with quarterly cost pressure. The first middle layer lives in that tension every day.
Below them sits the second middle layer: managers, project managers, delivery managers, scrum masters, team leads. These are the executors. They take the programmes handed down from the first layer and make them happen, coordinating across teams, tracking dependencies, running ceremonies, translating requirements into tickets, escalating blockers, reporting status upward. The work is real and the pressure is constant. But the nature of the work is fundamentally different from the layer above it. The first layer exercises judgment about what to build. The second layer exercises coordination to make sure it gets built.
This distinction, judgment versus coordination, is the one the org chart doesn't show you. And it's the one that matters for everything that follows.
One important caveat before we go further: none of this describes small businesses or smaller mid-sized organisations. In a 20-person company, the founder is the C-suite, the first middle layer, and often the second. Decision-making is fast, hierarchy is flat, and the layers described above either collapse into one person or don't exist at all. The dynamics in this piece apply to organisations large enough to have grown the full stack, typically from around 200 people upward, where the coordination burden has grown large enough to justify dedicated roles for it. Below that threshold, different rules apply.
The Shrinking Hierarchy
The second middle layer doesn't grow because companies are wasteful. It grows because visibility has a cost, and that cost scales with complexity.
When a director hands a programme to an engineering team, they need to know if it's on track. Not in real time, but reliably enough to escalate before something becomes a crisis. The project manager's core function is producing that signal. Status reports flow upward. Risk registers get updated. Sprint ceremonies create checkpoints. The system works because human communication between layers requires human intermediaries to manage it.
The problem is that as organisations grow, the surface area requiring visibility expands faster than the programmes themselves. Each new team is another node. Each new dependency is another handoff requiring tracking. The answer companies have consistently reached for is more coordination, more people whose job is to make sure other people know what is happening. This is how you end up with what Zuckerberg described when restructuring Meta: managers managing managers, managing managers, managing the people actually doing the work. His diagnosis was right even if the fix was blunt. You cannot remove a coordination layer without replacing the coordination function. The question is whether that function still requires humans to perform it.
For most of the current second middle layer, it doesn't. Not entirely, and not immediately, but directionally. The project manager's work splits into two functions that rarely get separated. The first is synthesis: reading across Jira, GitHub, budget trackers, and vendor systems to assemble a coherent picture of programme health for the director. The second is planning: working with directors on next quarter's capacity, budget allocation, and programme scope. Today both require a human because the underlying systems don't talk to each other in any meaningful way. Someone has to read each one, reconcile the signals, and produce the output.
Agentic AI systems are beginning to make the synthesis function redundant. OpenClaw, one of the first systems to genuinely execute across tools rather than simply respond to queries, points at what this looks like in practice: an agent that reads your Jira board, watches your GitHub PRs, tracks budget burn, and surfaces a coherent programme picture without a human assembling it. When that synthesis becomes reliable, the project manager's time reclaims itself. What remains is the judgment work: scope tradeoffs, stakeholder management, pushing back on unrealistic timelines, and working with directors on planning cycles that currently consume weeks of manual data assembly. That work was always the more valuable half of the role. It just got buried under the synthesis overhead.
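To make the synthesis function concrete, here is a minimal sketch in Python of the reduction such an agent performs: stub signals stand in for the Jira, GitHub, and budget systems, and every name and threshold is hypothetical rather than a description of any real product.

```python
from dataclasses import dataclass

@dataclass
class ProgrammeSignals:
    """Signals a human PM currently assembles by hand. All fields hypothetical."""
    open_blockers: int           # e.g. from a Jira blocker query
    stale_prs: int               # PRs with no review activity, from GitHub
    budget_burn_pct: float       # spend to date / approved budget
    schedule_elapsed_pct: float  # time elapsed / planned duration

def programme_health(s: ProgrammeSignals) -> str:
    """Reduce cross-system signals to the one-word status a director needs.
    Thresholds are illustrative; a real agent would learn or configure them."""
    # Red: an active blocker, or burning budget well ahead of schedule
    if s.open_blockers > 0 or s.budget_burn_pct > s.schedule_elapsed_pct + 0.15:
        return "red"
    # Amber: review throughput stalling, or burn slightly ahead of schedule
    if s.stale_prs > 5 or s.budget_burn_pct > s.schedule_elapsed_pct + 0.05:
        return "amber"
    return "green"

status = programme_health(ProgrammeSignals(0, 2, 0.40, 0.50))  # -> "green"
```

The rules are deliberately trivial; the point is that the output is the status report, produced without a human reconciling four systems.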
For engineering managers the change cuts deeper, and in a different direction. The constraint on how many direct reports a manager can effectively handle has never been the number of humans they could track. It has been the quality of attention they could give each one. Career conversations, technical mentorship, performance calibration, helping someone work through a problem they are stuck on. These are not coordination functions and they cannot be delegated to a dashboard. What AI changes is the overhead around them: the status chasing, ticket grooming, and ceremony facilitation that consumes the hours that should be spent developing people. In the 1980s the average managerial span was 1-to-4 direct reports. Information technology moved that closer to 1-to-10. AI moves it further still.
But the span widening is the smaller part of the story. The bigger part is what engineers are now being asked to become. For most of the past decade, as the previous post argued, engineers were reduced to implementers — handed tickets, kept away from stakeholders, insulated from the business context that would have made their work meaningful. AI is reversing that. The engineer who only closes tickets is being displaced by tooling. What survives is the engineer who can engage with the problem domain, work directly with stakeholders, own outcomes rather than tasks. That is a significantly harder job to grow someone into than the one the industry settled for. Career conversations get more complex. Mentorship requires more than code review. Performance calibration becomes about judgment and domain understanding, not velocity metrics. The engineering manager who was already stretched thin on four direct reports, spending most of their time in ceremonies and status updates, now has more reports, deeper development conversations, and engineers whose scope of responsibility has expanded substantially. The coordination overhead coming down is what makes that possible. It is not optional relief. It is the condition that makes the expanded role survivable.
For directors the change is about bandwidth. Their job is translating strategy into programmes and making judgment calls about priority and scope. AI does not do that. What changes is the fidelity and speed of the information they are working from. A director who currently waits for a weekly status report will instead have a live synthesis across the programme portfolio. The time freed from information gathering goes into the work that was always the point: refining programmes for the next quarter, initiating new ones, catching misalignment between strategic intent and execution before it becomes expensive. The planning cycle that currently takes weeks of manual data assembly collapses to days. The director who was constrained by information velocity becomes constrained by their own judgment speed, which is as it should be.
For the C-suite, the change is about visibility and calibration. They don't need to know every project, but they do need to sense when strategic intent and execution are drifting apart. AI-assisted synthesis across the programme portfolio makes that drift visible earlier. The quarterly business review becomes less about assembling the picture and more about interrogating it. The CEOs who thrive will be those who use the freed bandwidth to engage more deeply with the three-to-five year bets that actually determine their company's position, not those who use it to meddle in execution they never needed to own.
The through-line across all three layers is the same: coordination work that was performed by humans because the systems didn't talk to each other becomes automated. Judgment work that was always the point becomes the primary occupation. The hierarchy doesn't disappear. It shrinks. The same organisational function gets performed by fewer people, each with a wider span and a sharper focus on the work that actually matters.
The Broadening Opportunity
Gartner projects that 80% of the most common customer service issues will be handled by agentic AI by 2029. That's the number everyone cites. What they miss is what happens to the remaining 20%.
The scripted work goes away: password resets, order status, appointment scheduling. It should. That work was never the interesting part of the job. What remains is what was always hardest to scale: the customer disputing a charge they don't understand, frustrated, trying to explain a situation that fits no category in any decision tree. That interaction needs someone who can listen past the complaint to the actual problem, make a judgment call about what the right resolution is, and leave the customer feeling heard rather than processed. The agent role doesn't disappear. It sheds what it was never good at and concentrates on what it always should have been doing.
But the more important point isn't what call centres lose within their own walls. It's what the wider market creates to absorb it and then some. The WEF's 2025 Future of Jobs report is unambiguous: 92 million jobs displaced by 2030, 170 million new ones created. Over 85% of employment growth since 1940 came from technology-driven job creation, and the pattern is consistent. This cycle is no different. The same AI that automates the scripted interaction is simultaneously creating the healthcare platform, the legal compliance tool, the industrial monitoring system, each of which generates its own demand for people who understand the domain deeply enough to support it. The jobs don't disappear. They move toward complexity, and they multiply in the process.
The mechanism is inference cost. GPT-3.5-level performance dropped from $20 per million tokens in November 2022 to $0.07 by October 2024, a roughly 280-fold reduction in under two years. Projects that failed an ROI calculation in 2021 need recalculating at 2025 prices. The long tail of businesses Salesforce and SAP never properly served — too small for enterprise software, too complex for off-the-shelf tools — is now economically addressable. The constraint has shifted from cost to capability: not "we cannot afford this" but "we need someone who understands our domain well enough to build this right."
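The recalculation itself is trivial arithmetic, which is the point. A sketch using the prices above and a hypothetical document-processing workload:

```python
# Per-million-token prices for GPT-3.5-level performance (figures from the text)
PRICE_2022 = 20.00   # $ / 1M tokens, November 2022
PRICE_2024 = 0.07    # $ / 1M tokens, October 2024

# Hypothetical workload: an AI feature processing customer documents
tokens_per_doc = 4_000
docs_per_month = 250_000

monthly_tokens_m = tokens_per_doc * docs_per_month / 1_000_000  # in millions
cost_2022 = monthly_tokens_m * PRICE_2022   # 20,000.00
cost_2024 = monthly_tokens_m * PRICE_2024   # 70.00

print(f"${cost_2022:,.0f}/mo -> ${cost_2024:,.2f}/mo "
      f"({PRICE_2022 / PRICE_2024:.0f}x cheaper)")
```

A workload that would have cost $20,000 a month to run in 2022 costs about $70 at 2024 prices; any project killed by the first number deserves a second look at the second.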
That space also includes a lot of bad code. Humans have been shipping debt-laden software since long before AI arrived — the bootcamp era produced industrial quantities of it. AI didn't invent slop, it inherited the tendency and gave it a faster engine. GitClear tracked an eightfold increase in duplicated code blocks during 2024, with 46% of code changes consisting entirely of new lines while refactored and moved code dropped sharply. MIT professor Armando Solar-Lezama called it "a brand new credit card for accumulating technical debt in ways we were never able to before." The expansion creates its own counter-demand: the backlog of systems needing someone who can read them, diagnose them, and make principled decisions about what to fix grows alongside the new projects. Who fixes the slop matters more than who created it.
Healthcare is the clearest domain that has crossed a viability threshold. Buying cycles have compressed from 12 to 18 months down to under six, yet 80% of the market remains untapped. Prior authorisation systems that trap clinical staff in paperwork producing no patient value. Voice interfaces for patient engagement a two-doctor practice could never previously afford. Diagnostic support tools surfacing patterns across patient records at a scale no clinician could maintain unaided. These are not incremental improvements. They are categories of software that didn't exist as commercially viable products three years ago. The engineering required to build them correctly, integrating with legacy clinical systems, navigating data governance constraints, designing for safety-critical failure modes, is hard in ways no amount of AI assistance substitutes for. That hardness is the opportunity.
Legal and compliance follows the same pattern with added regulatory tailwind. The EU AI Act, the EU Cyber Resilience Act, evolving data sovereignty requirements across jurisdictions — each a new surface area of compliance work organisations need software to manage. Contract review, regulatory change monitoring, audit trail generation: work that previously required expensive specialist time, or simply wasn't done rigorously, is now viable at a cost that makes sense for organisations of all sizes. Regulation is a forcing function for software investment, and the current environment is generating more of them than the industry has seen in a decade.
Manufacturing and industrial software is crossing a different threshold entirely. Predictive maintenance systems previously requiring expensive specialist integration can now run on existing sensor infrastructure at a fraction of the cost. Digital twins for factory floors, simulations that let operators model consequences before making changes, are moving from enterprise-only to mid-size manufacturers. Quality control systems catching defects in real time rather than in post-production sampling. The engineering problems are genuinely difficult: real-time control loops, safety-critical systems, integration with legacy industrial hardware designed before the internet existed. That difficulty is not a barrier; it is a moat for engineers who can navigate it.
Robotics sits at the intersection of several of these domains. The convergence of cheaper foundation models, affordable actuators, and improved simulation tooling has brought humanoid and industrial robots into commercial viability faster than most observers predicted. The software governing these systems — perception pipelines, real-time control loops, safety monitoring, human-robot interaction layers — represents an enormous surface area of engineering work that barely existed as a commercial discipline in 2020. Every new deployment is a new software project. The patterns are still being established, which is precisely when it is most valuable to be involved.
Security sits in a category of its own because AI is simultaneously creating the problem and generating the demand for people to solve it. The AI cybersecurity market is projected to reach $86 billion by 2030, driven by accelerating attack surface expansion and a talent shortage that already stood at 4.8 million unfilled positions before agentic AI began proliferating across enterprise stacks. The adversarial dynamic is what makes this domain structurally different from every other one here. Attackers use the same foundation models, the same code generation tools, the same agentic frameworks that defenders do. The threat surface expands every time a new agent ships, every time a developer uses AI to generate integration code without understanding the security model of the library they're calling. What the next five years demand is the engineer who reasons adversarially: who identifies how a system might be exploited before it is, who treats security as a design constraint from the first conversation rather than a compliance checkbox before launch. Every IT position is becoming a cybersecurity position. Security makes the broader argument of this piece urgent rather than merely interesting.
For businesses the horizontal expansion may be the most significant shift of the next five years, not because the tools are different but because for the first time they are accessible. A 15-person logistics company can build custom route optimisation. A regional accountancy firm can offer AI-powered client tools that would have required a dedicated engineering team two years ago. The assumption that custom software was for enterprises is becoming false faster than most small business owners realise. That shift creates demand for engineers who understand a domain well enough to build for it — not engineers who know the frameworks, but engineers who know the problem.
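To make "custom route optimisation" concrete: the first version a small firm ships is often nothing more exotic than a greedy heuristic over its own delivery data. A minimal sketch, with coordinates hypothetical and the algorithm deliberately naive:

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy route: from the depot, always drive to the closest unvisited stop.
    A starting point, not an optimum — real systems add time windows,
    vehicle capacity, and live traffic on top of a baseline like this."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, remaining, here = [depot], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here, s))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

# Hypothetical depot and delivery stops on a flat grid
route = nearest_neighbour_route((0, 0), [(5, 5), (1, 0), (0, 2)])
# -> [(0, 0), (1, 0), (0, 2), (5, 5)]
```

The value for the 15-person logistics company is not the heuristic, which is decades old; it is that building and iterating on something like this against their own constraints is now affordable.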
Open Source Gets Complicated
In January 2026, Adam Wathan posted a GitHub comment explaining why three of the four engineers maintaining Tailwind CSS had just lost their jobs. The project powers over 617,000 live websites. Monthly downloads exceed 75 million. Usage has never been higher. Revenue has never been lower.
The conversion funnel that sustained the operation broke when AI started answering the questions that used to bring developers to the documentation site. When a community member submitted a pull request to make Tailwind's documentation easier for AI to consume, Wathan declined. Easier AI access meant fewer human visits, which meant less revenue, which meant faster collapse. This is not a Tailwind problem. It is the open source funding problem, made visible.
The ecosystem was built on an unspoken assumption that maintainers would keep going out of goodwill, reputation, or employment by a company that benefited indirectly. That assumption held as long as two conditions were true: the maintenance burden stayed manageable, and the monetisation paths stayed open. AI is closing the monetisation paths and increasing the maintenance burden simultaneously.
What emerges is an hourglass dynamic. At the top, the Pareto principle becomes more pronounced: attention, funding, and trust concentrate on a shrinking number of well-governed, institutionally backed projects. OpenSearch, Valkey, OpenTofu survive because they have the backing to absorb scrutiny. At the bottom, forking becomes so cheap that organisations maintain their own versions for trivial differences. A team that would previously have spent a week submitting a PR upstream now forks in an afternoon and moves on. The fork never gets maintained properly. Nobody contributes back. The middle tier — projects sustainable on community goodwill and modest corporate interest — gets squeezed from both directions.
OpenClaw became the most starred project on GitHub in early 2026. It is built on Node, TypeScript, and dozens of smaller libraries most of its 50,000 stars have never heard of. Six weeks after going viral, its creator announced he was joining OpenAI and handing the project to a foundation. The foundation model is the right response. It is also an admission that individual maintainers cannot sustain what the ecosystem now demands of them.
The maintenance burden was already severe. Tidelift's 2024 survey found 60% of open source maintainers are unpaid, with 86% of organisational investment coming through employee labour rather than direct financial support. The xz backdoor incident showed what critical infrastructure running on exhausted volunteers looks like under pressure. Maintainer trust toward new contributors dropped 66% in the aftermath, deepening the isolation that accelerates burnout. AI adds noise rather than relief: generated issues and pull requests of variable quality flood maintainers while usage increases and documentation traffic — the monetisation path for many — declines.
The second revenue collapse happens when AI agents replace human visitors. MCP and similar protocols let AI read documentation, answer integration questions, and generate working code without a human ever landing on your site. The advertising, sponsorship, and consulting leads that sustained many maintainers disappear. More usage. Less money. More work.
The supply chain consequence is where this becomes a board-level problem. Attackers have learned to register malicious packages under names that AI code generation tools hallucinate. A developer who accepts an AI package suggestion without verifying it may import malware directly into production, at a speed no manual review process was designed to match. Cisco's security team tested a third-party OpenClaw skill and found it performing data exfiltration and prompt injection without user awareness, while the skill repository had no vetting process to prevent malicious submissions. The EU Cyber Resilience Act and US executive orders on software supply chain are beginning to make the dependency graph a legal liability rather than a technical convenience. Enterprises consolidating on fewer, better-audited components are responding rationally to a threat model the casual open source assumptions of the 2010s were never designed to handle.
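One mitigation is cheap: treat AI-suggested dependencies as untrusted input and gate them against a human-reviewed allowlist before anything reaches the package manager. A minimal sketch, with all package names hypothetical:

```python
# Packages a human has actually reviewed — maintained like a lockfile.
APPROVED = {"requests", "numpy", "pydantic"}

def vet_suggestions(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into installable and needs-review.
    Anything not on the allowlist is flagged, never auto-installed."""
    ok = [p for p in suggested if p.lower() in APPROVED]
    flagged = [p for p in suggested if p.lower() not in APPROVED]
    return ok, flagged

ok, flagged = vet_suggestions(["requests", "requets-auth-helper"])
# The second name is exactly the kind of plausible-looking package an
# LLM can invent — and an attacker can pre-register on the registry.
```

It is a few lines of process, not a product, but it converts the hallucinated-package attack from a silent import into a review queue.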
The open source model worked because the attack surface was manageable and the goodwill was sufficient. AI expands the attack surface and exhausts the goodwill simultaneously. The projects that survive the next five years will be the ones that solved their sustainability model before the crisis arrived.
The Education Industrial Complex
Universities were already out of step with the industry before AI arrived. It was an open secret that most CS graduates needed twelve to eighteen months of on-the-job correction before they were genuinely useful, and the industry absorbed them anyway because demand was high enough to compensate for the mismatch. Only 30% of 2025 graduates found jobs in their field. 33% were unemployed and actively seeking work. 48% felt unprepared to apply for entry-level positions. The arrangement suited everyone except the graduates: universities filled seats, employers hired on credential and corrected on the job, and nobody had to confront the structural misalignment directly. AI is making that comfortable arrangement untenable.
The deeper problem is what universities optimised for. The last decade pushed specialisation — data scientists, business analysts, UX researchers, each trained for a narrow role in a system that required those roles to exist in volume. CS degrees more than doubled between 2013 and 2022, universities expanding supply at precisely the moment the market was about to consolidate the most automatable roles. The fragmentation of job titles that drove curriculum design is exactly what the market is now reversing. The narrow specialist — the data scientist who cannot engage with a production system, the business analyst who cannot interrogate a technical tradeoff — has nowhere to land when the entry-level tier shrinks and the roles they were trained for get absorbed into broader engineering responsibilities.
Understanding how slowly the system moves requires understanding what it is. The education industrial complex is not just universities. It is the accreditation bodies that evaluate programmes against standards written five years ago. It is the textbook publishers with billion-dollar catalogues built on curricula that haven't changed meaningfully since the 1990s. It is the professors whose tenure is awarded for research output, not teaching quality, and who have no career incentive to redesign courses around a field changing faster than any curriculum cycle can track. The ACM and IEEE updated their joint CS curricular guidelines in 2024. The previous update was in 2013. A decade between revisions in a field that fundamentally changed in eighteen months. The pace is not an oversight. It is the system working as designed.
The Farley question applies here as directly as it does to the industry. Are graduates being trained to be scientists of their systems — forming hypotheses, testing them, learning from production, iterating — or are they being trained to be implementers of requirements? For most programmes, the answer is the latter. And the latter is the tier AI handles first. A student who spends three years learning to implement solutions to well-defined problems and graduates into a market where AI implements solutions to well-defined problems faster and cheaper is not the victim of bad luck. They are the product of a system that was optimised for a version of the industry that no longer exists.
When knowledge transmission becomes cheap, the question is what universities are actually for. The foundational courses that justified years of tuition — data structures, algorithms, discrete mathematics — are available at zero marginal cost from tools students already use. The knowledge gap that justified generic foundational courses closes faster than curricula can be rewritten. The university that has not asked what it provides once that gap closes is already behind.
The credential still matters as a baseline signal to employers. But what it certifies needs to change: not proof that a student sat through required modules, but proof that they can think independently across a domain and build judgment that survives contact with real problems. Personalised pathways that build on what students already know. Domain immersion that produces engineers who understand healthcare systems or financial regulations well enough to build for them without a translator. Sustained intellectual challenge that develops the judgment AI cannot replicate. That university exists. It is not the majority.
The academy has survived every previous technological shift by absorbing it slowly. It absorbed the internet. It absorbed MOOCs, which were supposed to make universities obsolete in 2012 and mostly became a supplementary market. Whether this shift is different in kind or merely degree is the question. AI does not just change what students need to learn. It changes what learning itself looks like and who can provide it. Whether universities answer that question or wait for someone else to answer it for them may be the most consequential institutional decision of the next decade. The students currently in their lecture halls will graduate into whatever that answer turns out to be.
Evolution, Not Revolution
The next five years of AI in the SDLC will disappoint everyone waiting for a dramatic moment. There will be no single announcement, no model release, no product launch that draws a clean line between before and after. The AI-augmented software of 2025 — features bolted on, LLM wrappers shipped as products, "powered by AI" in every marketing deck — will quietly give way to something less visible and more consequential. AI-native software: built around what the technology actually does well, by engineers who understand the domain before they touch the model. The transition will feel unremarkable as it happens and significant in retrospect.
The hype will persist throughout. Every new model release will generate another wave of disruption predictions, another round of job displacement headlines, another set of quarterly forecasts that expire before anyone finishes reading them. The noise is structural. The signal is slower and less exciting. DORA's finding that a 25% increase in AI adoption produces a 2.1% productivity lift is not a headline. It is the honest starting point for a compounding argument that plays out over years, not quarters.
What compounds is not the technology. It is the organisational adaptation — the companies figuring out how to work AI natively rather than additively, the engineers expanding their scope rather than defending their tickets, the managers using freed coordination overhead to actually develop their people. Those advantages are invisible quarter by quarter. Over five years, they are structural.
Not disruption. Accumulation.