Most people think AI is here to replace developers. It isn’t, at least not for a long time. What’s happening is more interesting: developers and AI are learning to work together. And that partnership doesn’t just tweak how we code; it rewires the people and process parts of engineering.
A morning at Northpier Works
“Are we hiring fewer devs next quarter because Delta can ‘code now’?” Tom (Head of Sales) asked at the Monday stand-up, wielding a coffee like a microphone.
Elena (engineering manager) didn’t flinch. “We’re hiring differently,” she said. “Delta writes code the way a calculator does math: fast, literal, occasionally wrong. We still need people who know what to ask and when to say no.”
Raj (tech lead) nodded. “Also, Delta keeps naming variables after Greek philosophers.”
Sofia (senior dev) scrolled through a PR. “I asked Delta to refactor the billing adapter. It created three versions, all pass tests, but only one actually respects our error budget. Humans still own intent.”
Liam (junior dev) raised a hand. “I paired with Delta on the migration script. It was like driving a race car on a wet track: fun, but you still have to steer.”
Maya (PM) chimed in from the corner. “And I want a release note people can read, not a model dump.”
On the screen, Delta-the team’s AI agent-posted a summary:
“Refactor complete. Tests pass. Caveat: approach #2 reduces operational toil; approach #1 faster locally. Choose with care. Also, renamed SAD_PATH to unexpectedPath for morale.”
The team laughed. Then they got to work—not less work, but different work.
What actually changes when AI joins the team
1) Work shifts from “write” to “orchestrate.”
Coding becomes a blend of framing the problem, sampling options, and deciding trade-offs. The best developers look like editors: they specify constraints, compare alternatives, and prune aggressively. Output increases; judgment becomes the scarce skill.
2) Processes must assume non-human contributors.
When non-humans produce code, tests, or docs, your old rituals (stand-ups, sprint planning, PR review) break in subtle ways. You need explicit places to capture assumptions—prompts used, model risks, data boundaries—and to separate mechanical checks from intent checks.
3) Roles evolve, not disappear.
- Developers become problem framers and reviewers-of-intent.
- Tech leads curate architecture boundaries and guardrail libraries.
- EMs coach human+AI workflows and remove process friction.
- PMs become editors-in-chief of AI-drafted user stories and release notes.
- QA/SRE focus on scenario design, observability, and fast rollback, not just pass/fail.
4) Cadence moves from status to signals.
Daily status meetings are replaced or reduced by async “signal packs” auto-compiled by AI: blockers, anomalies, risk flags. Conversations shift from “what did you do?” to “what changed in the system and what decision do we need to make?”
5) Metrics change.
Velocity stops being the hero metric. You watch rework rate, escaped defects, recovery time, and “decision cycle time” (how fast a team detects a risk and decides what to do).
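That last metric is easy to compute once you log two timestamps per risk: when it was detected and when the team committed to a response. A minimal sketch, assuming that kind of event log exists (the RiskEvent shape is illustrative, not from any particular tracker):

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class RiskEvent:
    """One risk flag, from detection to the team's decision."""
    detected_at: datetime
    decided_at: datetime  # when the team committed to a response

def decision_cycle_time_hours(events: list[RiskEvent]) -> float:
    """Median hours between detecting a risk and deciding what to do."""
    hours = [(e.decided_at - e.detected_at).total_seconds() / 3600
             for e in events]
    return median(hours)

# Two risks: one decided in 2 hours, one in 26 -> median is 14.0.
events = [
    RiskEvent(datetime(2025, 3, 3, 9, 0), datetime(2025, 3, 3, 11, 0)),
    RiskEvent(datetime(2025, 3, 4, 9, 0), datetime(2025, 3, 5, 11, 0)),
]
print(decision_cycle_time_hours(events))  # 14.0
```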
The new rituals (from our Northpier week)
Backlog shaping, not grooming.
Before refinement, Delta drafts first-pass stories from product notes. The team spends refinement aligning on intent and constraints, not wordsmithing tickets. Maya tosses anything too fuzzy back into discovery with a one-line “unknowns” list.
Risk zoning.
Raj tags work as Green/Yellow/Red; a code sketch of the zones follows the list.
- Green: AI can take the first swing; human review focuses on style and small correctness.
- Yellow: AI drafts, but tests and observability updates are mandatory.
- Red: human-led design first; AI only assists in scaffolding and test generation.
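One way to make the zones enforceable rather than aspirational is to encode them as merge requirements. A minimal sketch; the zone rules mirror the list above, but the requirement names are illustrative:

```python
from enum import Enum

class Zone(Enum):
    GREEN = "green"    # AI takes the first swing; human review is light
    YELLOW = "yellow"  # AI drafts; tests + observability are mandatory
    RED = "red"        # human-led design first; AI only scaffolds

# Hypothetical merge requirements per zone; adapt to your own guardrails.
REQUIREMENTS = {
    Zone.GREEN:  set(),
    Zone.YELLOW: {"tests", "observability"},
    Zone.RED:    {"tests", "observability", "human_design_doc"},
}

def merge_blockers(zone: Zone, provided: set[str]) -> set[str]:
    """Return whatever a change in this zone still owes before merge."""
    return REQUIREMENTS[zone] - provided

print(merge_blockers(Zone.YELLOW, {"tests"}))  # {'observability'}
```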
Two-layer code review.
Layer 1 (automated): lint, style, dependency scans, obvious smells—AI handles this instantly.
Layer 2 (human): intent, domain assumptions, failure modes, rollout plan. PRs must include a 2-line “How I verified this” note and a named owner for the first 48 hours after release.
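Layer 1 is cheap to automate. Here is a sketch of a pre-merge gate, assuming a Python stack; the tools named (ruff, black, pip-audit) are stand-ins for whatever your pipeline already runs:

```python
import subprocess
import sys

# Layer 1: mechanical checks a bot can run on every PR.
# Tool choices are illustrative; swap in your own linters and scanners.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("style", ["black", "--check", "."]),
    ("dependency scan", ["pip-audit"]),
]

def run_layer_one() -> bool:
    ok = True
    for name, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(f"[layer 1] {name}: {'pass' if result.returncode == 0 else 'FAIL'}")
        ok = ok and result.returncode == 0
    return ok

if __name__ == "__main__":
    # Layer 2 (intent, domain assumptions, failure modes) stays human.
    sys.exit(0 if run_layer_one() else 1)
```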
Orchestration log.
Every PR has a tiny “AI usage” note: prompts used, models assumed, data sensitivity, generated configs. Not to police—just to leave breadcrumbs for future you.
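What might that note look like as structured data instead of free text? A sketch; every field name here is an assumption, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIUsageNote:
    """Breadcrumbs for future readers, attached to a PR."""
    prompts: list[str]                 # prompt files or one-liners used
    model: str                         # which model drafted the change
    data_sensitivity: str              # what data was in the context window
    generated_artifacts: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Format the note for a PR description."""
        return "\n".join([
            "AI usage",
            f"- Model: {self.model}",
            f"- Data sensitivity: {self.data_sensitivity}",
            f"- Prompts: {', '.join(self.prompts)}",
            f"- Generated: {', '.join(self.generated_artifacts) or 'none'}",
        ])

note = AIUsageNote(
    prompts=["prompts/billing-refactor.md"],
    model="delta-2025-03",
    data_sensitivity="synthetic fixtures only",
    generated_artifacts=["configs/retry.yaml"],
)
print(note.render())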
Async stand-up, synchronous decisions.
By 9:00, Delta posts a digest: risks, flakes, anomalies. The team meets only if there’s something to decide. Otherwise, people start building.
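A sketch of the digest logic, assuming each signal carries a flag for whether a human choice is actually required; the Signal shape is illustrative, not Delta's real format:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    kind: str             # e.g. "blocker", "anomaly", "risk", "flake"
    summary: str
    needs_decision: bool  # true only if a human has to choose something

def post_digest(signals: list[Signal]) -> None:
    """Print the 9:00 digest and say whether a meeting is warranted."""
    print("Delta digest, 9:00")
    for s in signals:
        print(f"- [{s.kind}] {s.summary}")
    pending = sum(s.needs_decision for s in signals)
    if pending:
        print(f"Meeting needed: {pending} decision(s) pending.")
    else:
        print("No decisions pending. No meeting. Go build.")

post_digest([
    Signal("flake", "checkout e2e flaked twice overnight", False),
    Signal("risk", "p95 latency creeping up on billing adapter", True),
])
```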
Documentation by subtraction.
Delta drafts README updates and release notes. Sofia cuts them down to the minimum someone actually needs. If it doesn’t help a future engineer, it doesn’t survive.
How people grow in this world
Liam’s path as a junior looks different. He still learns algorithms and systems, but he learns them by comparing AI-proposed solutions, designing tests, and running small game-day drills. He gets good at saying: “This passes tests but violates our SLO,” or “This is elegant but brittle under load.”
Sofia spends less time typing boilerplate and more time expressing constraints, writing narrative tests, and naming the trade-offs in English before code. Her reviews read like short design notes.
Raj evolves from “chief code reviewer” to “market maker for quality”—he sets patterns, risk thresholds, and chooses where humans must slow down.
Elena’s job becomes about flow: making sure the team has clear decision points, quick feedback loops, and psychological safety to say, “AI got this wrong” without blame.
Maya becomes the voice of the user amid the speed. She uses AI to explore copy and acceptance tests, but she decides what “good” means, not the model.
The uncomfortable truths
- AI increases throughput and the blast radius of mistakes. Guardrails aren’t bureaucracy; they’re seatbelts.
- Invisible work becomes the most important work. Framing, verifying, and documenting assumptions look slower, until they save a midnight rollback.
- If you don’t change process, AI just accelerates your mess. Teams that bolt AI onto yesterday’s rituals drown in PRs and Slack threads.
A simple action guide to start (two weeks)
Week 1: Make it safe and visible
- Add a 1-page AI Definition of Done to your PR template (AI use note, tests, observability, rollback plan, named owner); a checker sketch follows this list.
- Introduce Risk Zoning (Green/Yellow/Red) for all tickets this sprint.
- Move daily stand-up to async with an AI-generated signal pack; meet only for decisions.
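For that first item, the Definition of Done is easiest to keep honest with a trivial check. A sketch, assuming the DoD appears as named sections in the PR body (section names are illustrative):

```python
import re

# Hypothetical section headers the PR template requires.
REQUIRED_SECTIONS = [
    "AI usage", "How I verified this", "Observability",
    "Rollback plan", "Owner (first 48h)",
]

def dod_gaps(pr_body: str) -> list[str]:
    """Return the required sections missing from a PR description."""
    return [s for s in REQUIRED_SECTIONS
            if not re.search(re.escape(s), pr_body, re.IGNORECASE)]

body = "AI usage\n...\nHow I verified this\n...\nRollback plan\n..."
print(dod_gaps(body))  # ['Observability', 'Owner (first 48h)']
```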
Week 2: Tighten the loop
- Split code review into two layers (automated + human intent).
- Run one 45-minute game day for a Yellow/Red change: practice rollback and ownership.
- Swap out “velocity” in your sprint recap for rework rate and decision cycle time.
If you do only this, you’ll feel the shift: fewer status debates, clearer ownership, and faster, calmer releases. Not because AI replaced your developers—but because your developers learned how to lead the AI.
And that’s the point. AI won’t erase the people in software engineering. It will reward the teams that master new habits: framing over frenzy, signals over status, and judgment over jargon. The code may come together faster, but the real upgrade is how the team works.
If AI is flooding your PRs and meetings, this book shows how to fix the people and process side, without hype.
Here’s The AI-Driven Software Team on Amazon (Look Inside preview): https://www.amazon.com/dp/B0FNFHXWTQ
If you read it, tell me what you’d cut or keep—I’ll iterate.