AI-assisted development is often compared to pair programming, but an equally apt metaphor, I think, is a singles tennis match between a developer and an AI model. Instead of playing doubles on the same side, imagine facing the AI across the net in a competitive yet constructive sparring match: the developer serves prompts, the AI returns code or answers, and together they rally towards a solution. The goal isn’t to defeat the AI, but to outmanoeuvre its unpredictability and guide it towards the right outcome. In this match, a perfectly placed prompt can feel like a swift ace, whereas more complex problems require a longer rally of back-and-forth exchanges.
Modern AI coding tools have made these rallies more dynamic than ever. Anthropic’s Claude Code (a powerful CLI-driven coding assistant) stands out as a formidable “opponent” – it can scan and interpret your entire project, run tests or commands, and even spin up specialised sub-agents to tackle subtasks. Such capabilities let the AI anticipate shots and take initiative on its own. Other assistants like the Cursor AI editor can complement Claude for quick exchanges and small code changes, but it’s Claude Code’s advanced features – large context windows, autonomous code execution, sub-agents, and hooks – that give developers new ways to stay in control. The result is a game where strategy, timing, and adaptation matter as much as raw coding skill. In this article, we’ll explore AI-assisted coding through the lens of a tennis match, highlighting strategies for serving strong prompts, managing rallies, and ultimately winning the point (i.e. getting correct, efficient output) with the help of these advanced tools.
Serve Strategy: Setting the Tone with the First Prompt
Every rally in tennis begins with a serve, and in AI-assisted development the “serve” is your initial prompt or query. A strong serve sets the tone. If you articulate your request clearly and precisely, you can often gain an immediate advantage. For instance, a developer might launch Claude Code and serve a detailed instruction like: “Implement a function to parse this specific log format into JSON, using Python’s standard library only.” This is akin to aiming your serve to the opponent’s weak side – it guides the AI’s response in a favourable direction from the start. Sometimes, such a well-placed prompt yields a winning return straight away: a single prompt refactors a complex module flawlessly on the first try, a bit like a serve so well placed that the only return the AI can make is the perfect solution.
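For a sense of what that “ace” return looks like in practice, here is a minimal sketch of the kind of function such a serve might come back with – assuming a hypothetical timestamp/level/message log format, since the prompt above leaves the exact format to the developer – and using only the standard library, exactly as the serve demanded:

```python
import json
import re

# Hypothetical log format: "2024-01-15 12:00:01 INFO Payment service started"
LOG_LINE = re.compile(r"^(?P<timestamp>\S+ \S+) (?P<level>[A-Z]+) (?P<message>.*)$")

def parse_log(text: str) -> str:
    """Parse matching log lines into a JSON array; skip lines that don't match."""
    records = []
    for line in text.splitlines():
        match = LOG_LINE.match(line)
        if match:
            records.append(match.groupdict())
    return json.dumps(records, indent=2)

if __name__ == "__main__":
    print(parse_log("2024-01-15 12:00:01 INFO Payment service started"))
```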
However, not every first serve is perfect. If your prompt is vague or overly broad, the AI may come back with something unexpected or off-target – a strong return that puts you on the defensive. Imagine you just ask, “Optimise my application,” and the AI floods you with a barrage of changes or suggestions that don’t align with your vision. You’ve essentially given the AI an easy shot to attack. To avoid that, refine your serve. In tennis, servers adjust placement and spin; in prompting, you adjust scope and context. You might break the request into smaller pieces or specify constraints (“optimise the database queries for X feature”). A careful serve strategy builds immediate pressure on the AI to comply usefully, making the ensuing rally constructive rather than chaotic.
Claude Code lets you set or generate context and rules before the first prompt in a CLAUDE.md file – similar to choosing the right racket and stance before a serve. You establish ground rules like “use React for any UI suggestions” or “keep functions pure,” which the AI will automatically take into account. In effect, the AI’s very first return is already within bounds because you’ve pre-loaded the playbook. A strong serve doesn’t guarantee you’ll win the point, but it certainly improves your odds by starting the exchange on your terms.
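To make that concrete, a CLAUDE.md can be nothing more than a handful of ground rules; the specific rules below are illustrative, not a recommended set:

```markdown
# Ground rules for Claude Code in this repo

- Use React for any UI suggestions.
- Keep functions pure; avoid hidden side effects.
- Stick to the Python standard library unless the dependency already exists in requirements.txt.
- Run the test suite before considering a task finished.
```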
Hawk-Eye for Code: Keeping Claude Inside the Lines with Hooks
In professional tennis, players don’t argue about line calls anymore — they trust the tech. Systems like Hawk-Eye track the ball with precision, calling shots in or out instantly. It keeps the game fair, focused, and free of distractions. In Claude Code, Hooks serve the same purpose: they watch the AI’s every move and enforce the rules you’ve set — automatically.
Hooks are event-based actions you configure to run at key moments in Claude’s workflow. For example, a PostToolUse hook might run your unit test suite immediately after Claude edits code — like a line judge calling a ball out the moment it lands wide. A PreToolUse hook matched to the file-writing tools can act like a net sensor, blocking an invalid file write before it even happens. Other PostToolUse hooks might format the code (prettier, gofmt) or scan for security issues, making sure nothing sketchy sneaks past the baseline.
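As a rough sketch of how this looks in practice, hooks are declared in Claude Code’s settings file (for example .claude/settings.json), keyed by event name and matched against tool names; the commands below are placeholders for your own test, format, and guard scripts (the check-write-allowed.sh script in particular is hypothetical):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "pytest -q" },
          { "type": "command", "command": "black ." }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          { "type": "command", "command": "./scripts/check-write-allowed.sh" }
        ]
      }
    ]
  }
}
```

A PreToolUse command that exits with an error can block the pending action and feed its output back to Claude – check the hooks documentation for the exact exit-code semantics before relying on it.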
The power here isn’t in micromanaging Claude — it’s in automating trust. When you know the lines are being checked, the net is being watched, and out-of-bounds plays are called instantly, you can let the rally flow. Claude still swings freely, but you’ve defined the court. And when something goes wrong — a test fails, a file is blocked, a formatting rule is broken — the hook fires, the call is made, and Claude can adjust its next move accordingly.
Coaching from the Sidelines: Enhancement with MCP Servers
In professional tennis, elite players don’t just rely on instinct — they’re backed by data: performance trackers, match footage, coaching insights, and real-time analytics. In Claude Code, connecting to MCP servers plays a similar role. It’s how you equip your AI opponent with deeper awareness of your project: access to your filesystem, design assets from Figma, or documentation from internal tools like Notion or Confluence.
These servers act like Claude’s coaching team — feeding it context mid-match. Want it to match a feature to the latest Figma mockup? Connect your design system via MCP. Need it to understand your folder structure, logs, or config files? Hook in a local filesystem tool. Want it to write code that actually reflects your latest specs? Pull in a knowledge base through a connector. Rather than manually pasting snippets into your prompt, Claude can retrieve and use the source directly — playing with its head up, not guessing blindly.
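As an illustration, a project-level .mcp.json can register a filesystem server so Claude reads logs and configs at the source; the package shown is the commonly cited reference server from the MCP ecosystem, and the path is a placeholder:

```json
{
  "mcpServers": {
    "project-files": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./"]
    }
  }
}
```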
The advantage is strategic depth. Claude becomes a better player not by guessing, but by seeing the field clearly. And importantly, you control which tools it can use, just like a coach decides what analytics reach the player. It’s still your match — but now the AI shows up with scouting reports, play diagrams, and real-time stats, giving you the edge in every rally.
Pattern Construction: Building a Point with Multi-Turn Plays
Winning a tennis point often involves setting up a pattern – a sequence of shots that gradually puts the opponent out of position until you can deliver a winner. Similarly, complex development tasks with an AI require a multi-turn strategy rather than expecting a perfect one-shot answer. Pattern construction in this context means planning a series of prompts and responses that guide the AI step by step to the end goal. Instead of swinging for a one-hit winner from a tough position (a low-percentage play), you break the problem down and rally with the AI.
For example, imagine you’re using Claude Code to build a new feature. You might start with a high-level prompt (“Generate a skeleton for a module that does X”), then follow up on the AI’s return with specific subtasks (“Fill in the data validation part,” “Now add error handling for these cases,” “Improve the efficiency of this function,” and so on). Each exchange is like a shot in a rally: first you push the AI in one direction, then another, each time evaluating its return and choosing the next prompt accordingly. By constructing this pattern of play, you gradually corner the AI into producing the comprehensive solution you need. The final “winner” might be a fully working feature, but it was set up by the preceding sequence of well-placed prompts.
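In CLI terms, such a rally might be sketched as a chain of prompts in one ongoing session – shown here with Claude Code’s print mode and --continue flag, though in practice you would more likely type these turns interactively; the feature itself is a stand-in:

```bash
claude -p "Generate a skeleton for a module that imports invoice uploads"
claude --continue -p "Fill in the data validation part"
claude --continue -p "Now add error handling for malformed and oversized files"
claude --continue -p "Improve the efficiency of the parsing function"
```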
Changing Tactics Mid-Rally: Using Claude’s Sub-Agents
Not every point is won with the same kind of shot. Sometimes, you need to switch tactics mid-rally — and that’s where Claude Code’s sub-agents come in. They let Claude temporarily hand off part of the task to a specialist — a dedicated agent configured for a specific role, like test generation, code review, or refactoring.
Each sub-agent runs in its own isolated context with its own instructions, so it stays focused and doesn’t derail the main conversation. Claude can invoke one automatically when the prompt calls for it, or you can switch them in directly. Either way, the core session stays clean — no loss of flow, no loss of control.
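A sub-agent is defined in a small Markdown file (for example under .claude/agents/) with YAML frontmatter that names it and tells Claude when to bring it in; this code-review specialist is a rough sketch, and the wording is illustrative:

```markdown
---
name: code-reviewer
description: Reviews recently changed code for bugs, unclear naming and missing tests. Use after significant edits.
tools: Read, Grep, Glob
---

You are a meticulous code reviewer. Examine the most recent changes,
flag likely bugs and unclear naming, and list any code paths that lack
test coverage. Do not edit files; report findings only.
```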
It’s like Claude can swap in a specialist mid-point — a drop-shot expert when the problem calls for finesse, a volleying pro when speed and iteration matter, a baseline grinder for structured, methodical work. You’re still playing the match, but the AI can adapt its playstyle to meet the moment — always fielding the right player for the shot in front of it.
Maintaining Momentum: Capitalising on a Winning Streak
Momentum is a powerful force in both tennis and AI-assisted coding. In a match, when a player finds their rhythm and starts winning points or games in succession, they press the advantage – playing aggressively but smartly to keep their opponent on the back foot. In coding, maintaining momentum means that when your interaction with the AI model is yielding good results, you continue to leverage that state without pause or distraction. Large language models thrive on context; when the recent conversation is relevant and focused, they tend to stay on target. Thus, if you’ve managed to get the AI to understand your problem well and it’s producing helpful output, it’s time to keep rallying and drive the point home.
Momentum can be lost if you abruptly change topics. It’s a bit like hitting a careless shot that lets your opponent reset their footing. For instance, if mid-rally you suddenly ask the AI about a completely unrelated part of the project, you risk confusing it or diluting the context it had built up. Instead, ride the wave of success on the current problem: tie up all loose ends, get all the value you can from the AI while it’s “in the zone,” and only then move on to the next challenge.
Resetting After a Bad Point: Recovering and Adapting
Even the best players drop points, and even the best prompt strategies can yield disappointing results. What matters in a match is how you respond to losing a point – do you dwell on the mistake, or do you reset mentally before the next serve? In AI development, “losing a point” might mean the model produced a completely wrong answer, misunderstood your request, or took a wild tangent that wastes time. It’s important for a developer to recognise when an exchange has gone awry and then clear the slate for a fresh attempt, rather than getting stuck in a futile back-and-forth. This can be as simple as rephrasing the question from scratch or as involved as starting a new session or reverting the code to a known good state. The key is not carrying the baggage of the bad output into the next try – much like a tennis pro shakes off a disappointing shot or game and tries to focus on the next rally with a clear head.
Another aspect of resetting is providing more guidance after a failure. Suppose Claude Code attempted to implement a feature but got part of the logic wrong. Instead of angrily asking “Why are you wrong?” (akin to smacking the next ball in frustration – rarely effective), a composed developer will calmly analyse the miss, then serve a new prompt incorporating that insight: “We got off track. Here is where the logic failed… let’s approach it this other way.” This mirrors a player adjusting strategy after a lost point – maybe switching up the serve or targeting a different weakness on the next rally. The tone remains constructive and focused on the next step. By resetting proactively, you avoid compounding errors. Each prompt exchange is a fresh point; even if the last one was a double fault, the next can still be an ace. Maintaining this resilience ensures that temporary setbacks don’t snowball. In the long run of a coding session (or a match), the ability to reset and adapt quickly is often what separates a productive outcome from a frustrating dead-end.
Forced and Unforced Errors: Reading the AI’s Footwork
Not every mistake in a match is created equal. In tennis, an unforced error is when a player hits the ball into the net on an easy return — no pressure, just a miss. A forced error, by contrast, comes from pressure — a tough shot they’re stretched to reach, pushed into the mistake by smart play. You’ll see both when working with Claude Code.
Sometimes, Claude Code simply misfires. It misunderstands a clear prompt. It hallucinates an API. It generates code that compiles but clearly doesn’t solve the problem. These are your unforced errors — mistakes made without you doing anything particularly tricky. They’re a sign that the model missed something obvious or didn’t fully grasp the context. These moments can be frustrating, but they’re also valuable feedback. They tell you when Claude needs more structure, a clearer spec, or better context to stay inside the lines.
Then there are the forced errors — and this is where things get interesting. A good developer, like a good tennis player, learns how to apply pressure. You might give Claude a deliberately ambiguous spec, or prompt it to handle a tricky edge case, or run a test that you know it’s unlikely to pass on the first try. When the AI fails under this pressure, it reveals where its understanding breaks — and that’s your opening. You now have something to work with: a missed case to fix, a false assumption to correct, a weakness to target. You’re not just prompting anymore — you’re playing the point.
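Applying that pressure can be as simple as writing the awkward test first and letting Claude play into it. A minimal sketch, assuming a hypothetical parse_duration helper the AI has been asked to implement:

```python
import pytest

# Hypothetical module under development; the cases below are the "tough shots"
# a first attempt is likely to miss.
from durations import parse_duration

@pytest.mark.parametrize("raw, expected_seconds", [
    ("90s", 90),
    ("1h30m", 5400),   # mixed units
    ("0m", 0),         # zero is valid
    ("  2h ", 7200),   # surrounding whitespace
])
def test_parse_duration_edge_cases(raw, expected_seconds):
    assert parse_duration(raw) == expected_seconds

def test_parse_duration_rejects_negative_values():
    with pytest.raises(ValueError):
        parse_duration("-5m")
```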
The key is learning to tell the difference. When Claude makes a mess of something simple, that’s a cue to slow down, reset, and give it what it needs. But when you’re deliberately pushing its limits and it stumbles? That’s strategy. That’s how you tighten the rally and take control of the game.
Conclusion: Game, Set, Match
AI-assisted development, when viewed as a singles tennis match, highlights the active role of the developer in steering the outcome. Rather than a passive reliance on an “all-knowing” assistant, it’s a dynamic interplay where the human stays strategically in charge. By serving well-crafted prompts, constructing multi-step solutions, maintaining the momentum of success, resetting when things go wrong, and continuously reading the AI’s patterns, a developer can effectively outplay the AI’s unpredictability.
The competition is a friendly one – after all, when you “win” a point, it usually means the AI has generated the correct result. The real victory is a productive collaboration: the model pushes you to be clearer and more thoughtful in how you describe problems, and you push the model to stretch its capabilities while keeping it within the lines of correctness.