Luca Bartoccini for Superdots

Posted on • Originally published at superdots.sh

AI Pair Programming: Beyond GitHub Copilot

When developers hear "AI pair programming," most picture autocomplete on steroids — a tool that finishes your if-statement or writes a for-loop faster than you can type it. That is real, but it is the least interesting thing AI can do for developers.

The more valuable capabilities are the ones that change how you work, not just how fast you type. Exploring an unfamiliar codebase in minutes instead of days. Getting instant code review feedback on a function you just wrote. Rubber-ducking an architecture decision with an entity that knows every programming pattern ever documented. Writing the tests and documentation that you always skip because they are tedious.

AI pair programming has moved from novelty to essential tool for many development teams. Here is what it actually looks like in practice and how to get the most from it.

The Spectrum of AI Pair Programming

AI coding assistance exists on a spectrum. Understanding where different capabilities fall helps you extract the right value at each level.

Level 1: Autocomplete (basic)

The AI predicts what you are about to type and suggests completions — line by line or block by block. This is GitHub Copilot's original value proposition and it works well. It saves keystrokes on boilerplate, fills in common patterns, and reduces the cognitive load of remembering exact syntax.

Time savings: moderate. Value for experienced developers: low to moderate. The code you type fast is rarely the bottleneck.

Level 2: Code generation (intermediate)

You describe what you want in a comment or a prompt and the AI generates a complete function, class, or module. "Write a function that validates email addresses against RFC 5322" produces working code without you looking up the spec. "Create a REST endpoint for user registration with password hashing" generates a full implementation.
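As a flavor of what this level produces, here is a minimal sketch of an email validator of the kind such a prompt typically yields. Note the hedge the AI itself should make: the full RFC 5322 grammar is far more permissive than any short regex, so this pattern is a practical simplification, not the spec.

```python
import re

# Simplified pattern; the full RFC 5322 grammar admits many more forms.
# AI-generated versions of this function vary widely in strictness.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches a common email shape."""
    return bool(EMAIL_RE.match(address))
```

Reviewing output like this is exactly where the human stays in the loop: you decide whether a pragmatic pattern is acceptable or the spec-complete behavior is required.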

Time savings: significant for boilerplate-heavy work. Value increases when working in unfamiliar languages or frameworks.

Level 3: Conversational coding (advanced)

You have a dialogue with the AI about your code. "Why is this function O(n²)?" "What would this look like as a state machine instead of nested if-statements?" "Here is my database schema — what indexes should I add for these query patterns?" The AI understands context, explains trade-offs, and suggests approaches.
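The "Why is this function O(n²)?" dialogue usually ends with a concrete rewrite. A hypothetical example of the before-and-after such a conversation produces — duplicate detection via nested loops versus a set:

```python
def has_duplicates_quadratic(items):
    # O(n^2): every element is compared against every later element.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): a set gives average-constant-time membership checks.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

The value is not the rewrite itself but the explanation of *why* the set version is faster, which transfers to the next problem you hit.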

Time savings: hard to measure but substantial. Value: highest for learning, architectural decisions, and complex problem-solving.

Level 4: Autonomous agents (emerging)

AI agents that can navigate a codebase, make changes across multiple files, run tests, and iterate based on test results. You describe the task ("add pagination to the user list API") and the AI handles the implementation end-to-end, asking clarifying questions when needed.

This is the current frontier. Tools like Claude Code, Cursor Agent, and Devin operate at this level with varying degrees of reliability. IDEs like VS Code and JetBrains editors are adding native support for these agent workflows. The technology is advancing fast, but human oversight remains essential.

Use Cases That Deliver Value

Exploring unfamiliar codebases

You inherited a codebase. Or you are a new hire. Or you need to fix a bug in a service you have never touched. Traditionally, this means spending hours reading code, tracing call chains, and building a mental model.

AI pair programming compresses this dramatically. "What does this service do?" "How does authentication flow through this app?" "What calls this function and why?" You get answers in seconds instead of hours, with code references you can verify.

This is arguably the highest-ROI use case for AI in development. The time from "I know nothing about this code" to "I can work in this code safely" drops from days to hours.

Writing tests for existing code

Everyone agrees tests are important. Most teams are behind on test coverage. The reason is simple: writing tests is tedious. You know the code works. Writing the test that proves it works feels like paperwork.

AI is very good at this. Feed it a function and ask for unit tests. It generates tests that cover happy paths, edge cases, and error conditions. The tests are not always perfect — you need to review them and adjust for your specific testing conventions — but starting from 80% complete tests is far better than starting from zero.
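To make that concrete, here is an illustrative function with tests in the shape an assistant typically drafts — happy path, boundary value, and error conditions. The function and names are invented for this example, not from any particular codebase:

```python
def parse_positive_int(value: str) -> int:
    """Parse a string into a positive integer, rejecting everything else."""
    n = int(value)  # raises ValueError for non-numeric input
    if n <= 0:
        raise ValueError(f"expected a positive integer, got {n}")
    return n

# Tests in the shape an AI assistant typically generates:
def test_happy_path():
    assert parse_positive_int("42") == 42

def test_boundary_value():
    assert parse_positive_int("1") == 1

def test_rejects_zero():
    try:
        parse_positive_int("0")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for zero")

def test_rejects_garbage():
    try:
        parse_positive_int("abc")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for non-numeric input")
```

Your review job is to check that the generated cases match your actual requirements (should "0" really fail? is whitespace allowed?) and to rename things to fit your conventions.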

For more on AI-assisted testing, see our guide on AI test generation.

Refactoring guidance

"This function is 200 lines long. How should I break it up?" AI analyzes the function, identifies logical groupings, suggests extraction points, and even generates the refactored code. It spots patterns you might miss: repeated logic that should be a helper, conditional branches that should be polymorphism, sequential operations that could be pipelined.

Boilerplate generation

Setting up a new project, adding a new API endpoint, creating database migrations, writing configuration files — these tasks follow predictable patterns that AI handles well. The value is not creativity; it is speed and accuracy on work that is necessary but not intellectually demanding.
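A representative slice of this kind of boilerplate — a typed config object with defaults and a JSON loader. The field names are illustrative, not from any real project:

```python
import json
from dataclasses import dataclass, fields

@dataclass
class AppConfig:
    # Hypothetical settings; defaults apply when a key is absent.
    debug: bool = False
    port: int = 8000
    log_level: str = "INFO"

def load_config(raw_json: str) -> AppConfig:
    """Merge user-supplied JSON over the defaults, ignoring unknown keys."""
    data = json.loads(raw_json)
    known = {f.name for f in fields(AppConfig)}
    return AppConfig(**{k: v for k, v in data.items() if k in known})
```

Nothing here is hard; it is simply the kind of predictable scaffolding where an assistant saves ten minutes of typing and a typo or two.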

Documentation generation

AI reads your code and generates documentation — function docstrings, API documentation, README files, architecture decision records. The output needs human editing for accuracy and voice, but the first draft saves significant time.

For more on documentation workflows, see our guide on writing better docs with AI.

Where Developers Get the Most Out of It

Junior developers: the learning accelerator

For junior developers, AI pair programming is like having a patient senior engineer always available. They can ask "why does this pattern exist?" and get an explanation, not just a code snippet. They can try an approach, get feedback, understand the trade-offs, and learn faster than reading documentation alone.

The key is asking "why" questions, not just "how" questions. "How do I connect to a database?" gives you code. "Why would I use a connection pool instead of individual connections?" gives you understanding.


Senior developers: the tedium eliminator

Senior developers do not need AI to tell them how to write a function. They need AI to handle the boring parts so they can focus on the interesting parts. Writing tests, generating boilerplate, updating documentation, creating migration scripts — the work that is necessary but does not require senior-level thinking.

The time savings compound. If AI handles 30% of the work that does not require senior judgment, that is 30% more time for architecture, mentoring, and complex problem-solving.

New team members: the codebase guide

The first month on a new team is spent learning the codebase. AI pair programming accelerates this dramatically by answering questions about code structure, conventions, and history that would otherwise require interrupting senior team members.

It also reduces the "I don't want to ask a dumb question" barrier. There is no social cost to asking an AI to explain something obvious.

The Pitfalls

Over-reliance

The biggest risk is accepting AI suggestions without understanding them. Code that works is not the same as code that is correct, secure, and maintainable. If you cannot explain why the AI's suggestion is right, you should not use it.

This is especially dangerous for junior developers. Learning to code means building intuition about what good code looks like. If AI bypasses that learning process, you get developers who can ship features but cannot debug, optimize, or architect.

Hallucinated APIs

AI sometimes suggests APIs, functions, or libraries that do not exist. It generates confident-looking code that calls someLibrary.nonexistentMethod() because it pattern-matched from training data rather than checking documentation. Always verify that suggested libraries and APIs actually exist in the versions you are using.
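A cheap sanity check before trusting a suggested call is to verify the module and attribute actually exist in your installed version. A minimal helper using only the standard library:

```python
import importlib

def api_exists(module_name: str, attr: str) -> bool:
    """Return True if module_name imports and exposes attr.

    A quick guard against hallucinated APIs; it does not check
    signatures or behavior, only that the name is real.
    """
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)
```

This catches the grossest hallucinations; checking the documentation for the signature and semantics is still on you.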

Security concerns

AI-generated code can contain security vulnerabilities. It learns from public code, including insecure patterns. Common issues: SQL injection from string concatenation, XSS from unescaped output, hardcoded credentials from example code, and insecure default configurations.

Treat AI-generated code with the same scrutiny you would apply to any external code contribution. Run static analysis. Review for OWASP top 10 issues. Never deploy security-critical AI-generated code without expert review.
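The SQL-injection case is worth seeing side by side. Below, the unsafe version splices user input into the query string — the pattern AI sometimes reproduces from public code — while the safe version uses the driver's parameter substitution (shown with the standard library's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # Vulnerable: the input is spliced directly into the SQL string.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the value as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Passing the classic payload `' OR '1'='1` to the unsafe version returns every row; the safe version correctly returns nothing. This is exactly the kind of difference static analysis and review should catch before merge.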

For more on AI-assisted code review, see our guide on AI code review tools.

Confidentiality

When you send code to an AI service, consider what you are sharing. Proprietary algorithms, API keys, internal architecture details, and customer data patterns could be exposed. Understand your AI tool's data handling policies. Some tools offer enterprise tiers with data retention guarantees. Others process code on your local machine without sending it externally.

How to Pair Effectively with AI

Give context

AI produces better results with context. Instead of "write a function that processes orders," provide: "Write a function that processes orders in our Django app. Orders have a status field (pending, paid, shipped, delivered). The function should validate the payment, update inventory, and send a confirmation email. We use Celery for async tasks."

The more context you provide, the less you need to correct the output.

Ask why, not just what

"Write a caching layer" gives you code. "What are the trade-offs between Redis and an in-process LRU cache for this use case?" gives you understanding that informs better decisions.

Validate outputs

Run the tests. Check the logic. Read the code. AI pair programming saves time on writing, not on thinking. The thinking is still your job.

Use it for the boring parts

Writing tests, documentation, boilerplate, and migration scripts — let AI handle these. Spend your time on architecture, design, and the complex logic that defines your system's quality.

Getting Started

Start with test writing

This is the lowest-risk, highest-value entry point. Ask AI to write tests for your existing code. Review the tests, adjust for your conventions, and run them. You get better test coverage with less effort, and you learn how the AI interprets your code.

Graduate to documentation

Generate documentation for under-documented code. Edit for accuracy and voice. This improves your codebase while building familiarity with AI interaction patterns.

Expand to refactoring

Once you are comfortable with AI suggestions, use it for refactoring guidance. Start with small, well-tested functions. Verify refactored code against existing tests before expanding scope.

Explore architecture discussions

Use AI for rubber-ducking architectural decisions. Describe your problem, constraints, and current thinking. Ask for alternative approaches and trade-offs. Verify suggestions against your experience and team context.

For debugging-specific workflows, see our guide on AI debugging. For a comprehensive overview of AI tools across engineering and other departments, visit our AI tools for business guide.

