You know the Scrum lifecycle by heart.
A PM hands off a ticket. The team refines it, gets some acceptance criteria, pulls it into the sprint. Coding starts. Halfway through, you find a gap in the logic. You ping a stakeholder. You drag two engineers into a Zoom huddle to figure out something nobody thought about during refinement. You finish, hand it to QA, and the tester finds an edge case in 12 minutes flat.
Back to refinement. Back to dev. Back to QA.
We call this "iteration." But after watching AI rip through this workflow over the last year, I'm starting to think we just gave a fancy name to "we didn't plan very well."
The bottleneck moved
In an AI-first workflow, implementation isn't the slow part anymore. Typing the code is becoming the shortest phase. Which means the two-week sprint, that sacred container we've all built our careers around, is starting to feel less like agility and more like artificial friction.
If a developer using Cursor and a stack of agents can ship in two days what used to take two weeks, what exactly is the sprint protecting?
Front-loading, but for real this time
Here's how the flow looks now.
Before a human writes a line of code, the ticket runs through an AI gap analyzer with full context: design system, service architecture, related tickets, the works. The agent expands it to include technical requirements, edge cases, testing strategies, and tight acceptance criteria. By the time refinement happens, the conversation isn't "What is this ticket about?" It's "The agent flagged these three questions. Who owns the answers?"
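The mechanics are less exotic than "gap analyzer" sounds. Here's a minimal sketch of the context-assembly step, the part that matters most. Every field name and section label here is invented for illustration; the real tool, model, and prompt wording are assumptions:

```python
# Sketch of the context-assembly step of a hypothetical gap analyzer.
# Field names and section labels are illustrative, not a real tool's API.

def build_gap_analysis_prompt(ticket: dict, context: dict) -> str:
    """Bundle a ticket with its surrounding context so an LLM can
    expand it into requirements, edge cases, and a test strategy."""
    sections = [
        f"TICKET: {ticket['title']}\n{ticket['description']}",
        f"SERVICE ARCHITECTURE:\n{context.get('architecture', 'n/a')}",
        "RELATED TICKETS:\n" + "\n".join(context.get("related_tickets", [])),
        ("TASK: List missing technical requirements, edge cases, "
         "a testing strategy, and tightened acceptance criteria. "
         "Flag open questions that need a human owner."),
    ]
    return "\n\n".join(sections)

prompt = build_gap_analysis_prompt(
    {"title": "Add CSV export button",
     "description": "Add an export button to the reporting page."},
    {"architecture": "Reporting service: synchronous API, 30s timeout.",
     "related_tickets": ["RPT-112: saved views", "RPT-98: column reorder"]},
)
```

The interesting part isn't the prompt. It's that the architecture notes ride along with the ticket, which is exactly what lets the agent catch things like a 30-second timeout that no one mentioned in refinement.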
Refinement stops being a negotiation and becomes a verification.
A real example: the "simple export button"
Our product team handed us what looked like an easy enhancement. Add a CSV export button to a reporting page. Tweak some column ordering on a table.
In the old world, we'd have written the stories, booked a mid-sprint spike to figure out large-export handling, and called it a day. (A spike, by the way, is what we call it when we admit we don't know what we're doing but still want to charge story points for it.)
We ran the tickets through our gap analyzer instead. Within a couple of minutes, it surfaced something nobody caught: the reporting service was pulling data through a synchronous API that capped responses at 30 seconds. Any customer with more than a few thousand rows would silently fail.
The fix wasn't a frontend change at all. We needed an async job queue, a notification mechanism for completed exports, and pagination across several endpoints. The agent also noticed that our "small" column reorder was about to nuke the saved-views feature.
Two tickets became a nine-ticket epic.
In the old world, we'd have caught this on day three of implementation, when a developer hit the timeout in staging and started swearing. We'd be scrambling mid-sprint, missing our quarterly commitment, and explaining to leadership why a button derailed the roadmap. Instead, we surfaced it before planning closed and routed the work to a team that actually had the capacity.
What happens to stand-ups
If you've pre-resolved your technical hurdles before the sprint starts, what's the daily stand-up even for?
Stand-ups have always been defensive. Status reports. Blocker hunts. The polite ritual of saying "no blockers" while quietly drowning. When AI clears the fog of war, those technical blockers mostly evaporate.
So stand-ups become offensive. They turn into a space for cultural calibration and trust. Less "I'm stuck on the API contract" and more "I prompted the agent to use a really interesting pattern yesterday, you guys should look at the PR." When everything else moves at agentic speed, the synchronous time you spend together becomes premium real estate. Use it for sharing innovation, not for micromanaging tickets.
The new definition of done
Here's the trap. If a ticket takes two hours to implement but sits in "Pending QA" for three days, nothing has actually sped up. You've just shifted the bottleneck.
Assuming you're using modern patterns like feature flags to decouple deploys from releases, the definition of done can't be "merged to main" anymore. It has to be deployed. Otherwise you're an F1 car stuck in traffic.
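Feature flags are what make "deployed" a safe definition of done: the code ships dark, and the release is a config change rather than a deploy. A minimal sketch, with invented flag names and a simple deterministic percentage rollout:

```python
import hashlib

# Hypothetical flag config: deployed to production, released to nobody.
FLAGS = {
    "csv_export_v2": {"enabled": True, "rollout_pct": 0},
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministic percentage rollout: the same user always gets
    the same answer for a given flag, so experiences are stable."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < cfg["rollout_pct"]

# Releasing is now a config flip, not a deploy:
FLAGS["csv_export_v2"]["rollout_pct"] = 100
```

With this in place, "merged to main" and even "deployed" stop being scary, because exposure is a dial you turn after the fact.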
Stop bolting AI onto Scrum
Agile was a framework built to manage human ignorance and unpredictability. It assumed we'd discover what was wrong by trying to build it. AI-first development flips that assumption. We now have computational foresight, and we should use it.
The shift is from iterating through implementation (coding to find out what's wrong) to iterating through intent (planning so thoroughly that the build is almost mechanical).
If you bolt AI tools onto a traditional Scrum framework, congratulations, you're doing Scrum slightly faster. The process itself has to change. The sprint stops being a frantic race to ship code and becomes a delivery vehicle for engineering work that mostly happened before the first line of code got generated.
That's not Scrum. That's something new. We just haven't named it yet.