New Riders Labs

Stop Asking Claude to "Write Code" — Do This Instead

I've been coding with Claude AI for 500+ hours over the past year. I've seen what works and what doesn't.

The biggest mistake I see developers make? Treating Claude like a search engine.

"Write me a function that does X."

That's not how you get good results. Let me show you what actually works.


The Problem with Generic Prompts

When you say "write me a function that validates email addresses," Claude doesn't know:

  • What language you're using
  • What your error handling patterns look like
  • Whether you need RFC 5322 compliance or just basic validation
  • If this is for user registration or internal tooling
  • What your testing requirements are

So it gives you a generic answer. And you wonder why AI "doesn't work."


The Fix: Context Is Everything

Here's the same request, done right:

I need email validation for a Flask API endpoint that handles user registration.

Tech stack: Python 3.9, Flask, SQLAlchemy
Current pattern: We use a validate_* naming convention and raise ValidationError for bad input.

Requirements:
- Basic format validation is fine (no need for RFC 5322)
- Should reject disposable email domains
- Needs to be testable

Existing code style:
[paste an example function from your codebase]

Write the email validation function following these patterns.

Same request. 10x better output.
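
For reference, here's the shape of function that prompt steers Claude toward. This is a minimal sketch of mine, not actual model output: the ValidationError class and the disposable-domain set are illustrative stand-ins for whatever your codebase defines.

import re

class ValidationError(ValueError):
    """Stand-in for the custom exception your codebase would import from errors.py."""

# Illustrative blocklist; a real implementation might load a maintained list.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

# Basic format check only, per the requirements: no RFC 5322 ambitions.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_email(email: str) -> str:
    """Validate a registration email; raises ValidationError on bad input."""
    if not EMAIL_RE.match(email):
        raise ValidationError(f"Invalid email format: {email!r}")
    domain = email.rsplit("@", 1)[1].lower()
    if domain in DISPOSABLE_DOMAINS:
        raise ValidationError(f"Disposable email domain not allowed: {domain}")
    return email.lower()

Because the blocklist is plain module-level data and the function is pure, the "needs to be testable" requirement comes for free: a pytest test can simply assert that validate_email("user@mailinator.com") raises ValidationError.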


10 Prompt Patterns That Actually Work

Here are my go-to patterns after a year of daily Claude use:

1. The Focused Review

Don't ask for a generic code review. Focus it:

Review this code specifically for security issues.
Ignore style, performance, everything else.
I just want to know: can this be exploited?

[paste code]

Generic reviews give generic feedback. Focused reviews give actionable feedback.

2. The Debugging Frame

Structure your debugging requests:

I'm getting this error:
[paste error]

From this code:
[paste code]

Help me understand:
1. What's causing this error?
2. Where exactly is it happening?
3. How do I fix it?

The numbered list forces Claude to give you complete answers instead of vague explanations.

3. The "Works Locally" Template

Every developer's nightmare:

This code works in my local environment but fails in production.

Local: Mac, Python 3.9, SQLite, DEBUG=True
Production: Linux, Python 3.9, PostgreSQL, DEBUG=False

The error:
[paste error]

What environment-specific issues should I check?

Claude is surprisingly good at spotting environment mismatches.
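
One concrete illustration of the kind of mismatch this template surfaces (my own example, not model output): SQLite's loose typing silently accepts data that PostgreSQL rejects, so the bug only shows up in production.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, age INTEGER)")

# SQLite: succeeds silently, storing the string as-is in an INTEGER column.
# PostgreSQL: fails with "invalid input syntax for type integer".
conn.execute("INSERT INTO users VALUES (1, 'not-a-number')")
print(conn.execute("SELECT age FROM users").fetchone())  # ('not-a-number',)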

4. The Gut Check

When something feels wrong but you can't articulate it:

This code works, but something feels off. I can't articulate what's bothering me.

Walk through this slowly and tell me what might be bugging me about it.

[paste code]

Claude can often name the code smell you're sensing.

5. The Rubber Duck

Sometimes you don't need code:

I'm going to explain what I'm building. Just listen and ask clarifying questions. Don't write any code until I say so.

This is underrated. Claude's questions often surface edge cases you hadn't considered.

6. The Project Context Setup

This is the single most powerful pattern. At the start of any project:

# Project Context

## What This Is
[Brief description]

## Tech Stack
- Language: Python 3.9
- Framework: Flask
- Key deps: SQLAlchemy, requests

## Conventions
- Naming: snake_case
- Errors: Custom exceptions in errors.py
- Testing: pytest, fixtures in conftest.py

## Current State
Working on: [current feature]
Known issues: [bugs/incomplete stuff]

Paste this at the start of conversations. Watch your results improve dramatically.
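
And if you're hitting Claude through the API rather than the chat UI, the same block slots in as the system prompt, so the context rides along with every request. A minimal sketch using the anthropic Python SDK; the model name and context contents here are illustrative:

import anthropic

PROJECT_CONTEXT = """\
# Project Context
## Tech Stack
- Language: Python 3.9
- Framework: Flask
## Conventions
- Naming: snake_case
- Errors: custom exceptions in errors.py
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; use whichever model you're on
    max_tokens=1024,
    system=PROJECT_CONTEXT,  # project context travels with every request
    messages=[{"role": "user", "content": "Write the email validation function."}],
)
print(message.content[0].text)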

7. The Trade-Off Analysis

Claude's reasoning is often better than its code:

I can solve this with approach A or approach B.

Compare them for:
- Performance at scale
- Code maintainability
- Edge case handling
- How easy it is to test

What would you choose and why?

8. The Refactor Request

When code works but feels messy:

This code works, but it feels like it could be cleaner.
Suggest refactoring options and explain the trade-offs of each.
Keep the same behavior.

[paste code]

The key phrase is "explain the trade-offs." It gets you teaching, not just code output.

9. The Explain It Back

Before any complex task:

Before you write any code, tell me your understanding of what I'm asking for and your planned approach.

Catches misunderstandings before you waste tokens on the wrong solution.

10. The Session Continuity

When starting a new chat:

Continuing from previous work:

Last session: Working on user authentication
Got to: JWT implementation done, need refresh tokens
Stuck on: Token rotation strategy

Let's pick up from there.

The Meta-Lesson

Good prompts aren't about tricking the AI. They're about giving it what it needs to help you.

  • Context about your project
  • Focus on what matters
  • Structure that enables complete answers
  • Examples of your existing patterns

Do this consistently and Claude becomes a genuinely useful collaborator instead of a frustrating autocomplete.


Want More?

I've packaged 75+ prompts like these into a complete collection, organized by use case:

  • Code Review & Refactoring
  • Debugging & Troubleshooting
  • Architecture & Design
  • Documentation
  • Project Scaffolding
  • Testing
  • Learning & Upskilling
  • Workflow & Productivity

Each prompt includes when to use it and real examples.

Grab it here: Claude Prompt Pack for Developers


What prompt patterns work best in your workflow? Drop them in the comments — I'm always looking for new approaches to add to my toolkit.
