DEV Community

Lucas Rainett

How Pair Programming and Mob Programming can make you a better AI Developer

TL;DR: Pair programming and mob programming skills are 100% transferable to AI coding.

Most developers I know are frustrated with AI coding tools. They write a prompt, get something weird back, try again, and eventually give up entirely and re-write everything manually.

I am not having that experience. And I've spent time thinking about why.

I think the answer is my experience working in collaborative environments with pair programming and mob programming, and the accountability I take for other people's understanding of what I say. Whether I'm pairing, teaching, or prompting an AI, I'm always evaluating whether the other side is missing something, and if so, how to provide it.

Let me explain what I mean.


The insight I got from teaching

I started working as a teacher at 21, as a side activity on top of my developer job, mostly to force myself out of my comfort zone as an extremely introverted person.

Working with students from a variety of backgrounds and learning speeds taught me something interesting: knowledge is a step-by-step process, where every new step relies on the step before.

There is always a dependency. There is always a step underneath the step you think you're teaching.

So if we try to explain a complex topic to someone who is missing the foundation to understand it, they will not absorb the information. And that is the speaker's fault. The speaker is accountable for identifying which step the audience is on and building up from there.

Every time someone doesn't understand something, it's because they're missing the previous step.

As a speaker, the job isn't to repeat the explanation louder. It's to identify the missing step, then fill it using an example anchored in something they already understood.


The insight I got from pair and mob programming

I'll be honest: when I first heard about pair programming and mob programming, I thought it was a waste of time.

Multiple developers, one keyboard, one screen. I couldn't see how that was more efficient than everyone working in parallel.

In New Zealand I worked alongside the person who literally wrote the book on mob programming, and it still took time to come around. What changed my mind wasn't a single moment. It was repeated exposure to what happens when a room of smart people all look at the same problem together, and seeing a project through from start to finish with mob programming. The value doesn't show in one or two sessions. It shows after a while, when the team's flow is high and the bug count is low.

The navigator is not just watching. When the driver goes off track, you step in, the same way I stepped in with students: you find what they are missing, and you give it to them.

I became a genuine advocate. I ran coding sessions at every company I worked at after that.


The insight I got from AI development

Since my first interaction with AI development, I have treated the agent as a new joiner: someone with solid technical experience, but no clue about what we are building here.

I share links to documentation, point to code examples in the project to follow as references, and proactively ask if it has enough context before we start.

Most frustrated developers treat an AI coding agent like a search engine. They type what they want, get something back, see that it's wrong, and conclude the tool doesn't work.

But if you act as the navigator from pair programming, providing the context the driver is missing, catching when they've gone off track, and redirecting, the experience is genuinely different.

Most importantly, share not only what we are building, but why we are building it. The rationale behind every decision is context the AI doesn't have unless you give it.


Here is the uncomfortable truth: if the AI doesn't understand you, it is your fault. Be better at communicating what you want.


What this looks like in practice

I built a project where 99% of the code was AI-generated, with every AI output committed raw so that what the AI produced is visible in the git history. I committed each output as-is, before touching anything, then reviewed and corrected it in a separate commit.

A three-step cycle for each feature: prompt, raw AI result, manual review. This mirrors the mob programming rhythm exactly: the driver drives, the navigator reviews, the mob catches what drifted.
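As a sketch, the cycle can be reproduced with plain git in a throwaway repository. The file name and commit messages below are hypothetical, not from the project; the point is that the raw AI output and the human corrections land in separate commits, so the diff between them shows exactly what the review changed.

```shell
# Hypothetical illustration of the three-step cycle in a throwaway repo.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "you@example.com" && git config user.name "You"

# Step 1 happens outside git: prompt the AI.
# Step 2: commit the AI's raw output, untouched.
echo "raw AI output" > feature.py
git add -A && git commit -qm "feat: raw AI output, uncorrected"

# Step 3: manual review and corrections go in their own commit,
# so `git diff HEAD~1` shows exactly what the human changed.
echo "reviewed and corrected" > feature.py
git add -A && git commit -qm "review: manual corrections to AI output"

git log --oneline    # both commits are visible in history
```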

Brief the AI before you write a single line

Have you heard of a "project kickoff"?

Before writing any code in this project, I asked the AI to draft Architecture Decision Records (short documents that capture what you decided and why) covering all sorts of concerns: library choices, deployment strategy, layer boundaries, testing plan, code style, and database schema. I reviewed and corrected each one MANUALLY.
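For illustration, one such ADR might look like the sketch below. The decision, numbering, and details are hypothetical, not taken from the project; the structure follows the common Status/Context/Decision/Consequences template.

```markdown
# ADR 007: Use PostgreSQL for persistence

## Status
Accepted

## Context
The project needs relational queries across users and orders, and the
team already operates PostgreSQL in production.

## Decision
Use PostgreSQL via a managed instance; all schema changes go through
versioned migrations.

## Consequences
- The AI (and any new joiner) can assume relational constraints are
  enforced by the database, not by application code.
- Switching databases later means revisiting this record first.
```

A record like this is short enough to paste into a fresh session, which is exactly what makes it useful as AI context.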

This is what I used to do with students before a new concept. You don't just start with the new thing. You make sure all the prerequisite pieces are fresh, named, and linked. Then you move forward.

When it was time to implement, the AI had precise, documented context for every decision it would need to make, and why each decision was made. This is particularly useful in the long term, as every new session can rebuild its context about the project.

And later during coding, if the AI did something unexpected, that was the signal for a new ADR.

The ADRs served two purposes at once:

  • For me, they forced me to think through decisions before touching the keyboard
  • For the AI, they provided constraints that guided every subsequent generation

Anchor prompts to concepts the AI already knows

One prompt from this project read:

"Create the minimum set of configuration files to allow a complete AWS deployment. This technique is called hello world in production, and should allow each future step to be deployed to the cloud, instead of only doing deploy after the code is complete."

I didn't just say "set up AWS deployment." I gave it a named technique and explained why the technique matters. I anchored the new task to a concept already in its knowledge base.

Same as the classroom. You don't ask for the new thing cold. You connect it to something they already know.


The habits that transfer directly

Identify what step the AI is missing, not just what it got wrong. When output is bad, don't just rewrite the prompt. Ask yourself: what context did it not have? What assumption did it make that I haven't corrected? Provide that, with an example it already knows. The problem is never the output; it is always the missing context.

Warm up the session before asking for anything. Don't start with implementation. Start with documentation, decisions, constraints. Load the context deliberately, the way you'd brief a new pair partner on a system before sitting down together.

Reference external resources, not just internal ones. Point the AI at library docs, example repositories, other files in the same codebase. Same as mob programming: you don't expect the driver to know everything. You hand them the reference they need.

Treat every raw output as a first draft. 100% of the code in this project was reviewed. The AI was always the driver. I was always the navigator.

Work in small, verifiable steps. Each prompt built on the last. I didn't ask for the whole system at once. Each step was deployable and verifiable before moving forward.
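Pulling these habits together, a session-opening prompt might read something like the sketch below. The file names and module are hypothetical; only the Python docs URL is real.

```markdown
Before we write any code, read docs/adr/ for the project's decisions
and docs/architecture.md for the layer boundaries.

We are adding CSV export to the reporting module. Follow the existing
pattern in src/reports/pdf_export.py. The csv module reference is at
https://docs.python.org/3/library/csv.html.

Do you have enough context to start, or is anything missing?
```

Notice the shape: load context first, anchor to an existing example, hand over the external reference, and ask the agent to surface gaps before any code is generated.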


What most developers are skipping

The most common failure pattern I see: one big prompt, one big output, it's wrong, conclusion: AI coding doesn't work.

The missing step isn't prompt engineering. It's knowing how to collaborate incrementally: provide context progressively, catch drift early, and course-correct without restarting from scratch.

Those are pairing skills. Most developers never developed them, because most teams don't pair seriously, and most developers, as I was, are skeptical about the value until they see it work.

If you want to get better at AI-assisted development, the fastest path isn't learning more prompting techniques. It's learning to pair program well, and then applying the same instincts to a new kind of partner.

The developers getting the most out of AI tools right now are not the ones who know the most tricks. They are the ones who can communicate clearly. Who can take what is in their head and make it legible to another mind without losing the intent along the way.

That is not a new skill. Teachers build it. Pair programmers build it. Anyone who has ever had to explain a complex system to a new joiner, watched them get it or not get it, and adjusted, has been building it.

The tool changed. The skill didn't.


Credit to Mark Pearl, author of Code with the Wisdom of the Crowd, for the mob programming experience that, eventually, proved me wrong.
