First published at memex.ai
Extreme AI Programming #1 — Agile, After Agile
Kent Beck in 1999. A stuck team that hadn't shipped in months. The developer who is no longer a person. A manifesto without a successor. Vibe coding versus the instrument. About two years until the door closes.
Kent Beck published Extreme Programming Explained in 1999. I read it as a young engineer running a small team that kept getting lost in its own commitments and had not shipped anything in months. It shaped how I thought about the job for the next twenty-five years, and meeting Kent in person later only deepened that. Pair programming, TDD, short iterations. It helped seed the Agile movement that followed, which became the way most serious software was built and most serious engineering teams were run.
The world it was written for no longer exists.
Agile evolved through Scrum, Lean, Kanban and continuous delivery. Every adjustment was an attempt at the same underlying problem: how a group of humans working on the same software stays coordinated. None of us anticipated what would happen when the team members were not all human.
In any serious AI-native team today, the developer is increasingly not a person. Nor, often, is the tester. The developer is an agent (Claude Code, Codex, Copilot, Cursor, Windsurf, with Aider, Cline and OpenCode on the open-source side), and often it is several of them at once, each running its own conversation with a different member of the team, each producing work on its own cadence. The coordination problem has changed shape entirely. Agile assumed developers on one side of the table and customers on the other. That is no longer what the table looks like.
The Agile Manifesto, signed in 2001, needs a successor.
This series, Extreme AI Programming, is an unsubtle homage to Beck. Without the work he, Martin Fowler and the others did in the late nineties, there is nothing to build on here. It is an attempt to describe what a serious, professional discipline for building software with AI agents actually looks like, rooted in their work but reshaped for a world they could not yet see.
A short note on where this is coming from. I have spent more than thirty-five years running software development teams across the companies I have founded and exited. Today I am co-founder and CEO of Mindset AI, where for over a year we have been an AI-native company by design, and the practices I plan to write about here are the ones we have been evolving in real time. This is not observation from a distance.
Here is the thesis of the series, compressed.
Agile is, at heart, a coordination protocol between humans. When most of the implementation is being done by agents that never forget, never tire and never need to be motivated on a Monday morning, the shape of the coordination problem changes. The hard part is no longer keeping a team of engineers aligned. It is keeping humans, agents and the codebase itself aligned with the decisions the team has actually made, and making sure each agent acts on the current set of decisions rather than an old one, a half-remembered one, someone else's private one, or worse, no one's at all.
Instrument or slot machine?
There are two ways of working with AI. One treats it as a genuine instrument to be mastered. The other treats it as a slot machine.
The slot-machine version has a name. It is called vibe coding. Prompt, accept, ship. If the code runs, move on. If it doesn't, prompt again. It feels productive in the moment, and for a weekend project it is perfectly fine. As a professional practice it is quietly catastrophic.
You end up with codebases the team cannot evolve. Decisions the team does not remember making, or did not actually make. A continuous accumulation of incoherence nobody can see until it breaks something expensive.
The instrument version is harder. It requires that you be articulate.
Being technically articulate as a human has become, almost overnight, the most economically valuable skill in software. Not because humans have any mystical edge over machines, but because no agent does its best work without a human telling it precisely what to build.
Agents are excellent at writing code when they know what they are meant to be writing. They are also quite capable of producing fluent, confident code that does precisely the wrong thing, and doing so very quickly. The same goes for the markdown documents they produce to describe the code. The difference between those outcomes is almost entirely the precision of the brief they were given. An agent is only as good as its specification, and only as good as the data it can see. The better the agent gets, the more unforgiving the gap between what you said and what you meant becomes.
The alternative, and the subject of most of what I plan to write, is a more disciplined way of working:
- Intent set clearly up front.
- Decisions captured as they are made.
- Rules for how software gets built, written down somewhere every agent and every engineer on the team will actually read.
- Plans reviewed before code is written, rather than code reviewed after the fact.
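To make the list slightly more concrete, the artefact those practices imply tends to be a single file at the root of the repository that both engineers and agents read before doing anything. What follows is an illustrative sketch only; the file name, section headings, dates and handles are invented for the example, not a standard:

```markdown
# TEAM-RULES.md — read by every engineer and every agent before any work

## How we build
- TypeScript strict mode everywhere; no `any` without a recorded decision.
- Plans are reviewed and approved before code is written; link the approved
  plan in every pull request.

## Decisions (append-only, newest first)
- 2025-11-02: Payment retries use exponential backoff, not a retry queue.
  Supersedes the 2025-09-14 decision. Owner: @barrie.

## Current intent
- This iteration: migrate billing to the new ledger.
- Explicitly out of scope: reporting changes.
```

The point of the sketch is not the specific headings but the property they share: intent, decisions and rules live in one place that is versioned with the code, so an agent briefed today acts on the current set of decisions rather than a stale one.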
None of this is new in spirit. A reasonable reader could point out that it is roughly what good engineering has always looked like. The cast has changed, though, and the practices have to change with it.
Next week I want to look at the first response the industry has already reached for: the CLAUDE.md file, the cursor-rules file, and most recently Anthropic's Skills, which are the moment's hot topic. Every AI-native team has independently invented some version of these. They are useful, but they are band-aids, and the first implementations break in predictable ways.
Spoiler: Skills are not the answer. In their current form, they exacerbate the problem.
The harder question
There is a harder question underneath all of this, and the industry has been slow to talk about it openly.
The junior job market has, over the past year or so, all but disappeared. A senior engineer with a competent agent now ships what used to take a team of five. The obvious economic move is to hire the senior and skip the juniors. That move is being made everywhere, quietly.
The mechanism is depressing in its specifics: graduates submit hundreds, sometimes thousands, of applications to AI-driven ATS systems that reject them before any human reads their CVs.
What makes the calculation work is articulation. The agent's output is only as good as the brief it is given, and writing that brief takes the kind of judgment a senior has spent years earning. A junior cannot yet bring that, not because they are incapable, but because the instinct takes time.
The part that genuinely worries me is not, in itself, that our profession will shrink, even though it means good people will be out of work just as society needs more opportunities, not fewer. Professions reshape themselves all the time, and ours has done it before.
What worries me more is that the people currently being trained to enter the profession, in universities, on boot camps, in the first year of a graduate scheme, will arrive to find that the first rung of the ladder is no longer there. They will have done the work, paid the fees, learned the craft, and the jobs they were being prepared for will not exist in the form they were promised.
Unless those of us already inside the profession reshape it around what the next generation still has to bring, we will run on the experience we have now and stop restocking. The closing chapters of this series come back to what that reshape needs to look like in practice.
The timing is unforgiving: about two years, and after that the door does not reopen.
Why I'm writing this
That is the pessimistic version of the argument. The optimistic version is the reason I am writing this series at all.
If we find genuinely good ways of collaborating with machines, and if we build the new ceremonies and disciplines a team actually needs when several of its members do not sleep, then what we have been handed is not a diminishing of the profession but an enlargement of it.
The mechanical middle of the job, the typing, can be delegated. What is left is human creativity and human judgment, amplified by instruments that can turn a well-framed intention into working software in an afternoon. The point of this series is to work out, in public, how to build that discipline, so that we exploit what humans are actually good at while the machines do the work.
There is a great deal of talk at the moment about whether our profession is coming to an end. I don't believe a word of it.
For anyone who has been in software for a long time, this is the most interesting moment in the discipline since the arrival of the web. The barrier of programming language, the particular question of which one you happen to know, has quietly become less important than it has ever been. What matters now is clarity of intent and the ability to articulate what you want in enough detail that a competent instrument can produce it. That specific skill has always been scarce. For the first time, it is directly economically productive.
Does this need a new manifesto?
At the start I said the Agile Manifesto, signed in 2001, needs a successor. That leaves a question I have been turning over for a while: who should write it?
Over the last eighteen months, several people have already tried. None has yet had its Snowbird moment, but the conversation is well underway, and the candidates worth knowing are these.
Casey West, The Agentic Manifesto (November 2025). Modelled directly on the original ("while there is value in the items on the right, we value the items on the left more"), it pairs four new values with a five-phase Agentic Delivery Lifecycle. The central problem he names is the "determinism gap": the move from "did it do what I said" to "did it do what I wanted". The most-cited of the candidates so far, and the closest in shape to the original.
Shay Cohen at Wix Engineering, The AI Coding Agent Manifesto (April 2026). Five values written from inside an engineering team that has been living with agents for a year: contracts over conventions, verification over generation, vanilla over clever, types over tests, explicit over implicit. The most practitioner-shaped of the bunch.
Ry Walker and Jonathan Vanderford, The AIFSD Manifesto. Eleven principles, two of which carry most of the weight: "AI is your intern, not your boss", and "the human always has the last word". A hard line on responsibility, written for a moment when the temptation to delegate accountability is real.
There are others circulating, including Mircea Trofimciuc's earlier agenticmanifesto.org from May 2025, and a steady stream of essays calling for a successor without yet committing one to a fixed text. The list is open.
I take some comfort in not being alone on the deferral. Asked at Thoughtworks' twenty-fifth-anniversary retreat in February 2026 whether there should be a new manifesto for AI development, Martin Fowler said:
It's way too early. I don't have a lot of time for manifestos.
Of the original Snowbird signatories, he is the only one to have spoken publicly on the meta question.
My intention with this series is not to add another. The world does not need yet another manifesto written by someone who has not yet earned the seventeen co-signatories the original had. What I would rather do, in the spirit of open source, is back one of the existing candidates: read them carefully, write about them seriously, and put my weight behind the strongest of them. By the end of this series I will have made that choice explicit and said why.
Over the next few months, the series will cover what I think the new discipline actually contains. The roles and how they have recomposed, including which Agile ceremonies still make sense and which do not. The new first-class artefacts, which in my view are decisions, blueprints and execution plans. How to review work produced by an agent without becoming its babysitter. The economics of running a team where one engineer and three agents produce what five engineers used to, and the things that get harder rather than easier. And occasionally, because there is no point pretending otherwise, how the commercial work I am doing at Mindset AI sits inside the argument.
The rhythm of the series will alternate between two registers. This first piece has been philosophical, sitting with the manifestos and the broader argument about the discipline. Next week is closer to the keyboard, practical: the artefacts every AI-native team has already reinvented half a dozen times in the last year, and why the first versions break in the same places. Both registers matter, and neither does the work alone.
If the argument resonates, I would be glad of the company. If it doesn't, I would be glad to hear why. I would rather be argued with now than wrong in print later.
— Barrie
I am co-founder and CEO of Mindset AI, where we are building Memex AI, a decision and knowledge layer for AI-native engineering teams. This series is the thinking that shapes our product. I will flag it explicitly when an article touches something we build. Most of it is simply where the industry is going, with or without us.