A guide to faster delivery and stronger developers
Artificial intelligence has become a constant part of developers’ workflows. For many teams, AI is already woven into daily work—generating snippets, fixing bugs, writing tests, and explaining unfamiliar APIs.
But as AI becomes more capable, new questions emerge:
Is pair programming still effective with AI tooling?
How do we use AI effectively without quietly eroding the skills that make us good developers?
This article offers a practical framework: when to pair with AI, when to pair with humans, how to keep learning, and how to prevent fast output from turning into shallow understanding.
It’s written for teams (especially leads and seniors) establishing norms, but the same principles work for individual developers optimizing their workflow.
TL;DR - Key takeaways
- AI accelerates execution. Humans elevate understanding.
- The risk isn’t using AI—it’s becoming passive and stopping the mental work that builds expertise.
- Don’t ask “Should we use AI?” Ask: What’s the goal (learning vs speed) and what’s the risk level (low vs high)?
- Treat AI output as untrusted until verified: tests, invariants, edge cases, and security checks.
- Protect the skills that must not fade: debugging models, system design, testing strategy, security reasoning, code reading, performance intuition, and soft skills.
AI as a new kind of pair programmer
If you use AI tools today, you’re already pairing—just not with a human.
AI is excellent at:
- generating boilerplate and glue code
- drafting tests and documentation
- explaining unfamiliar APIs and error messages
- proposing refactors and alternative implementations
- helping you get unstuck quickly
But AI isn’t a full replacement for human pairing. Humans bring what models still struggle with:
- shared context and domain nuance
- mentorship and teaching
- design intuition and trade-off judgement
- the willingness to challenge assumptions
- accountability for decisions
AI accelerates execution. Humans elevate understanding.
The balance problem: speed vs passivity
AI can make us faster—but it can also make us passive.
If we let AI handle every routine task, we risk losing the muscle memory of coding and the deeper capabilities that sit behind it:
- spotting subtle bugs
- reasoning about state, concurrency, and failure modes
- designing maintainable systems
- understanding why something works (not just that it works)
To be fair, skill atrophy isn’t always bad. We don’t memorise phone numbers any more. Most of us don’t write assembly. We routinely offload work to tools—and that’s progress.
So the real question isn’t “Will skills fade?”
It’s: which skills can fade safely, and which ones are non-negotiable?
A relevant signal from recent research and commentary
Some recent writing on AI coding assistance (including discussion of Anthropic-related research) points to a recurring pattern: AI can help people complete tasks faster, while reducing conceptual understanding and debugging performance afterwards—especially for less experienced developers.
Source: https://www.rafay99.com/blog/ai-coding-assistance-skill-atrophy-anthropic-research/
Whether or not the exact numbers generalise to your context, the pattern is worth taking seriously: delegation style matters. If AI replaces thinking, skills decay; if AI supports thinking, skills can grow.
Skills that should not fade
You can offload a lot to tools, but there are core skills that remain essential—especially when systems break, requirements shift, or stakes rise.
Protect these:
- Debugging mental models: reading logs, tracing state, isolating variables, reasoning about causality.
- System design and trade-offs: boundaries, reliability, scalability, data integrity, operability.
- Security reasoning: authn/authz, injection risks, secrets handling, least privilege, threat modelling basics.
- Testing strategy (not just writing tests): what to test, why it matters, where bugs hide, and what failures look like.
- Code reading and comprehension: navigating large codebases, understanding intent, detecting subtle regressions.
- Performance intuition: profiling habits, complexity awareness, and knowing what to measure.
- Soft skills: explaining intent, asking good questions, giving and receiving feedback, disagreeing constructively, and maintaining psychological safety during pairing and review.
These skills don’t develop automatically from shipping output. They develop from practice in reasoning, explanation, and correction—especially debugging.
Goal × Risk
Instead of debating AI in the abstract, decide based on two questions:
1. What’s the goal right now?
   - Learning and mentorship
   - Delivery speed and throughput
2. What’s the risk level?
   - Low risk: internal tool, simple UI, non-critical automation
   - High risk: security, payments, permissions, data migrations, concurrency, regulated systems
The 2×2 matrix
A) Learning + Low Risk (ideal practice zone)
- Default pairing mode: Solo or human pairing (optional).
- AI role: Coach and critic, not primary author.
- Required safeguards: Form a hypothesis and plan first; use AI to critique and suggest alternatives; verify with small tests.
- Success criterion: You can explain the design and trade-offs in your own words.
B) Learning + High Risk (mentored rigour)
- Default pairing mode: Human pairing (recommended).
- AI role: Option generator and test/edge-case assistant.
- Required safeguards: Human explanation, design review, and strong tests; explicitly discuss failure modes and what could go wrong.
- Success criterion: The developer can justify correctness and risk handling, not just produce working code.
C) Speed + Low Risk (automation sweet spot)
- Default pairing mode: Mostly solo.
- AI role: Primary accelerator for drafts, boilerplate, refactors, and documentation.
- Required safeguards: Run tests and linting; keep diffs small enough to review; do quick edge-case reasoning.
- Success criterion: Fast delivery with low review overhead and no recurring regressions.
D) Speed + High Risk (trust boundaries)
- Default pairing mode: Human pairing (recommended).
- AI role: Generate options, draft tests, and surface edge cases.
- Required safeguards: Humans own design decisions, threat modelling, and correctness arguments (invariants); review standards are stricter than normal.
- Success criterion: You can defend the approach under scrutiny (security, data integrity, reliability), not just demonstrate that it “seems to work.”
A useful rule of thumb: high delegation plus low verification creates fragility. Moderate delegation plus strong verification creates leverage.
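The 2×2 matrix above can be sketched as a small lookup table. This is purely illustrative—the keys and recommendation strings below are hypothetical labels, not an official API:

```python
# Hypothetical sketch of the Goal × Risk matrix as a lookup table.
# Keys and recommendation strings are illustrative labels only.

MATRIX = {
    ("learning", "low"): "Solo or optional human pairing; AI as coach and critic",
    ("learning", "high"): "Human pairing; AI generates options and edge cases",
    ("speed", "low"): "Mostly solo; AI as primary accelerator",
    ("speed", "high"): "Human pairing; humans own design and threat modelling",
}

def pairing_recommendation(goal: str, risk: str) -> str:
    """Return the default pairing mode for a (goal, risk) combination."""
    key = (goal.lower(), risk.lower())
    if key not in MATRIX:
        raise ValueError(f"Unknown goal/risk combination: {goal!r}, {risk!r}")
    return MATRIX[key]
```

The point of writing it down, even informally, is that the decision becomes explicit and discussable in review, instead of being relitigated per task.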
AI during human pair programming: when it helps vs when it hurts
Introducing AI into a human pairing session changes the dynamic—sometimes productively, sometimes destructively.
When AI tends to hurt pairing
- It disrupts shared mental modelling. Pairing works because two humans build a common understanding; AI can inject answers before the pair has aligned on the problem.
- It removes productive struggle. Some struggle is where learning happens, especially for juniors.
- It creates a third-wheel dynamic. One person drives, one watches, and AI becomes the main source of solutions.
- It can amplify senior dominance. If the senior uses AI as a confidence amplifier, the junior can disengage.
- It can reduce psychological safety. People may feel pressured to accept suggestions or embarrassed to ask questions.
When AI tends to help pairing
- Mechanical acceleration. Boilerplate, API lookups, test scaffolding, docs—let AI do it so humans can focus on reasoning.
- Rapid option generation. AI can offer multiple approaches worth discussing.
- Neutral tie-breaker for factual disputes. Syntax and library facts can be checked quickly (but should still be verified).
- Unblocking when stuck. AI can provide a new angle when both partners stall.
A practical middle ground: time-boxed consults
Instead of AI on or AI off, try norms like:
- Humans first: spend 5 to 10 minutes forming the model, constraints, and plan.
- 2-minute AI consult: ask a targeted question (not “solve this whole thing”).
- Humans decide: the pair chooses and can explain the approach.
- AI for mechanics: use AI to draft code and tests after the reasoning is settled.
If you want to explicitly protect junior growth, add one more rule:
- Junior explains first: before asking AI, the junior states their hypothesis and plan (even if it’s incomplete). Then AI can be used to refine it.
When to pair with humans (human to human)
Human pairing becomes a strategic tool where humans consistently outperform AI:
- Architecture and system design. Defining boundaries, trade-offs, operational concerns, long-term maintainability.
- Complex problem-solving. Debugging multi-layer issues, concurrency, distributed systems, unclear requirements.
- Mentorship and onboarding. Teaching design thinking and helping juniors build durable mental models.
- High-stakes review. Permissions, payments, security-sensitive logic, migrations, business-critical workflows.
When to pair with AI (human to AI)
AI shines as a multiplier for tasks where speed and breadth matter:
- Routine implementation. CRUD, glue code, scaffolding, documentation, repetitive refactors.
- Early drafting. First drafts of functions, outlines, interface sketches, example implementations.
- Debugging and exploration support. Explaining errors, suggesting hypotheses, generating test cases.
- Learning support. Summaries, examples, and API explanations—especially when you already have a goal.
A subtle but important detail: if AI assistance disproportionately harms debugging skill, then it’s not enough to use AI to fix the error. Use it to generate hypotheses and tests while you still practise the debugging loop.
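One way to keep the debugging loop alive is hypothesis-first debugging: before asking AI for a fix, encode your hypothesis as a test you run yourself. A minimal sketch, assuming a hypothetical buggy function `parse_price`:

```python
# Hypothetical sketch of the hypothesis-first debugging loop: turn your
# hypothesis into a falsifiable check before asking AI for the fix.

def parse_price(text: str) -> float:
    """Buggy function under investigation: crashes on some inputs."""
    return float(text)

def hypothesis_thousands_separator_breaks_parsing() -> bool:
    """Hypothesis: the bug is triggered by thousands separators,
    not by decimal numbers in general."""
    try:
        parse_price("1,299.00")
    except ValueError:
        return True   # hypothesis confirmed: separator input raises
    return False      # hypothesis rejected: look elsewhere

# Run the check ourselves; only then ask AI for candidate fixes.
print(hypothesis_thousands_separator_breaks_parsing())
```

You still do the causal reasoning; AI can then help generate candidate fixes and additional edge-case tests against the hypothesis you confirmed.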
Verification: how to use AI without getting burned
AI can be helpful and wrong at the same time. Treat outputs as untrusted until verified.
A lightweight verification checklist
- Run tests (and add tests that would fail if the AI is wrong)
- Ask for edge cases: “What inputs break this?” “What about null, empty, timeout?”
- Extract invariants: “What must always be true after this function runs?”
- Security sanity check (especially for auth, permissions, data handling)
- Prefer small diffs over pasting large chunks blindly
- Explain it in your own words before merging
If the tool makes you feel done before you can explain it, you are not done.
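The checklist items "add tests that would fail if the AI is wrong" and "extract invariants" can look like this in practice. The function `dedupe_preserve_order` below is a hypothetical stand-in for an AI-drafted helper:

```python
# Hypothetical example: AI drafted `dedupe_preserve_order`. Before merging,
# we state invariants and run edge cases that would fail if the draft is wrong.

def dedupe_preserve_order(items):
    """AI-drafted helper: remove duplicates, keeping first-seen order."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def check_invariants(original, result):
    # Invariant 1: no duplicates remain.
    assert len(result) == len(set(result))
    # Invariant 2: every surviving element appeared in the input.
    assert set(result) <= set(original)
    # Invariant 3: order of first occurrences is preserved.
    firsts = [x for i, x in enumerate(original) if x not in original[:i]]
    assert result == firsts

# Edge cases the checklist asks about: empty input, all duplicates, mixed data.
for case in ([], [1, 1, 1], [3, 1, 2, 1, 3], ["a", "b", "a"]):
    check_invariants(case, dedupe_preserve_order(case))
```

The invariants double as documentation: if a later refactor (human or AI) breaks one, the failure names the property that was violated rather than just a wrong output.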
Where you should be extra strict
Be cautious using AI as the author in areas like:
- authentication and authorisation
- money, billing, accounting
- concurrency and threading
- database migrations and data integrity
- cryptography and secrets management
- anything regulated or legally sensitive
In these areas, AI can still help, but humans must own the reasoning and the risks.
A practical philosophy for today
Use AI to accelerate, not replace. Stay mentally engaged. Keep learning. Pair with humans strategically. If you’re not pairing with humans, use AI as a first-pass reviewer and a source of feedback on your PRs—but still do the reasoning work yourself.
Final thought
AI is changing software development quickly. We can embrace the benefits without losing the craft, but only if we’re intentional.
The goal isn’t to resist AI or surrender to it.
It’s to build a partnership where speed doesn’t replace understanding, and where the next generation of developers becomes stronger, not just faster.