DEV Community

Majdi Zlitni

Augmented Coding Patterns: The Developer's Playbook for Working With AI

AI is your new pair partner. But are you using it right?

Every developer I know has tried AI-assisted coding, but most are doing it wrong.

They paste code, get output, accept or reject. Rinse, repeat. That's not augmented coding; that's autocomplete with extra steps.

The Augmented Coding Patterns project, created by Lada Kesseler, changes that. It's a living catalog of patterns, anti-patterns, and obstacles that defines how developers can truly collaborate with LLMs in their daily workflow.

Augmented Coding: Mapping the Uncharted Territory


What Are Augmented Coding Patterns?

Think of them like the Gang of Four design patterns but for human-AI collaboration. They're recurring solutions to recurring problems that arise when you use LLMs (Claude, GPT, Copilot, etc.) as coding partners.

The catalog is organized into three pillars:

  • Patterns: proven techniques that work (feedback loops, semantic zooming, visible context)
  • Anti-Patterns: traps developers fall into (blind trust, context starvation, over-delegation)
  • Obstacles: inherent limitations of LLMs that you must design around

Pattern 1: Feedback Loops

The Problem

You give the AI a big task. It gives you 200 lines. You read them, find 3 bugs, rewrite half of it. You just wasted 15 minutes.

The Pattern

Build small, iterative cycles. Ask for one function. Validate. Ask for the next. Correct course early.

Real Example

Anti-pattern (one-shot prompting):

"Build me a complete ASP.NET Core REST API with authentication,
JWT, validation, rate limiting, and SQL Server integration for a Todo app."

Pattern (feedback loop):

Step 1: "Create the Express server setup with CORS and JSON middleware."
→ Review → Approve

Step 2: "Add a POST /api/todos route with Zod validation for title (string, 
required) and completed (boolean, default false)."
→ Review → Fix the Zod schema import → Approve

Step 3: "Add JWT middleware. Here's my auth flow: [describe]."
→ Review → Adjust token expiry → Approve

Each iteration is small enough to validate in under 60 seconds. Errors are caught before they compound.

Implementation in Practice

  • Step 1: Ask the AI to scaffold the validation layer.
  • Step 2: Ask the AI to create the route handler using your schema.
  • Step 3: Review: the AI used safeParse correctly and returns flattened errors.
  • Step 4: Say it didn't add the userId from the JWT. You correct: "Add userId from req.user.id to the created todo."

The key: never let more than 30 lines go unreviewed.


Pattern 2: Semantic Zooming

The Problem

You're stuck at one level of abstraction. Either you're in the weeds asking about semicolons, or you're at 30,000 feet asking "how should I architect this?"

The Pattern

Shift between levels deliberately:

  • Start high (architecture)
  • Zoom to mid-level (module design)
  • Dive to low-level (implementation)
  • Zoom out to verify the big picture still holds

Real Example

ZOOM LEVEL 1 Architecture:
"I'm building a notification service. I need to support email, SMS, and push notifications. What's the best pattern? I want it extensible."

AI Response: Strategy pattern + factory + message queue.

ZOOM LEVEL 2 Module Design:
"Show me the interface for the NotificationStrategy and the factory that selects the right one."

ZOOM LEVEL 3 Implementation:
"Implement the EmailStrategy using AWS SES. Here are my SES configuration constraints: [details]."

ZOOM BACK TO LEVEL 1:
"Given what we've built, does this architecture handle the case where SMS fails and we need to retry with exponential backoff? What needs to change?"
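The retry concern raised in the zoom-back step can be sketched in a few lines. `retryWithBackoff` and its parameters are illustrative names, not code from the article:

```typescript
// Retry a failing async operation (e.g. an SMS send) with exponential
// backoff: double the wait after each failure, give up after maxAttempts.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Waits baseDelayMs, then 2x, then 4x, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```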

Each zoom level gives the AI the right amount of context to produce useful output, because each builds on the last:

  • Level 1: the architecture the AI suggested
  • Level 2: the factory and strategy interface that fit that architecture
  • Level 3: the concrete implementation behind those interfaces
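The Level 2 output might look like the sketch below: a strategy interface plus a factory. The channel names, method signature, and string return value are assumptions made so the example stays self-contained (a real `EmailStrategy` would call AWS SES):

```typescript
// Strategy pattern: one interface, one class per notification channel.
interface NotificationStrategy {
  send(recipient: string, message: string): string;
}

class EmailStrategy implements NotificationStrategy {
  send(recipient: string, message: string): string {
    // Real implementation would call AWS SES here.
    return `email to ${recipient}: ${message}`;
  }
}

class SmsStrategy implements NotificationStrategy {
  send(recipient: string, message: string): string {
    return `sms to ${recipient}: ${message}`;
  }
}

type Channel = "email" | "sms";

// Factory: selects the right strategy for a channel. Adding push
// notifications later means one new class and one new case, nothing else.
function createStrategy(channel: Channel): NotificationStrategy {
  switch (channel) {
    case "email":
      return new EmailStrategy();
    case "sms":
      return new SmsStrategy();
  }
}
```

This is the extensibility the Level 1 prompt asked for: callers depend only on `NotificationStrategy`, never on a concrete channel.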

Pattern 3: Visible Context (Show Your Work)

The Problem

You ask the AI to fix a bug but don't show it the error message, the test that failed, or the surrounding code. It hallucinates a solution for a problem it can't see.

The Pattern

Externalize everything the AI needs: error logs, test output, type definitions, database schemas, business rules. Make the invisible visible.

Real Example

Context-starved prompt:

"My user registration is broken. Fix it."

Context-rich prompt:

"My user registration throws this error:

  PrismaClientKnownRequestError: Unique constraint failed 
  on the fields: (`email`)

Here's my register function: [paste code]
Here's my Prisma schema for User: [paste schema]
Here's the test that fails: [paste test]

The test creates a user, then tries to create another with 
the same email. I expect a 409 Conflict response but I'm 
getting a 500."

The AI now has a crystal-clear picture and can respond with an accurate fix.
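With that context, the fix is usually a small error-mapping change rather than a guess. A sketch, assuming Prisma (whose unique-constraint violations carry the error code "P2002"); the error type here is a minimal stand-in for `PrismaClientKnownRequestError`, and `registrationStatus` is a hypothetical helper:

```typescript
// Minimal stand-in for the fields of PrismaClientKnownRequestError that
// the handler needs: the error code and the constrained fields.
type KnownRequestError = { code: string; meta?: { target?: string[] } };

// Map a duplicate-email constraint violation to 409 Conflict instead of
// letting it bubble up as a 500 Internal Server Error.
function registrationStatus(err: KnownRequestError): number {
  if (err.code === "P2002" && err.meta?.target?.includes("email")) {
    return 409;
  }
  return 500;
}
```

Without the pasted error, schema, and failing test, the AI could not have known which error code to catch or which status the test expected.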


The Top 3 Anti-Patterns to Avoid

1. Blind Trust

Never accept AI output without reading it. Period. The AI doesn't know your business rules, your deployment constraints, or your team conventions.

2. Context Starvation

The less context you give, the more the AI hallucinates. Every prompt should include:

  • Relevant code
  • Errors
  • Types
  • Constraints

3. Over-Delegation

"Build me a production-ready microservices architecture" is not a prompt it's a prayer.

  • Break it down
  • Own the decisions
  • Let AI handle the implementation

The developers who will thrive in the AI era aren't the ones who prompt the best. They're the ones who collaborate the best.


