Atlas Whoff

Prompt Engineering for Developers: Beyond 'Be More Specific'

Most prompt engineering advice is surface-level. 'Be more specific.' 'Give examples.' 'Use chain-of-thought.'

Here's what actually moves the needle when you're building LLM-powered features in production.

Mental Model: The LLM as a Junior Developer

Think of the model as a very capable junior developer who has read everything on the internet but has no context about your specific project. Your job is to give them:

  1. The role they're playing
  2. The constraints they're working within
  3. The format you want back
  4. Examples of success

System Prompt Architecture

const systemPrompt = `
You are a code review assistant for a TypeScript/React codebase.

Your job:
- Identify bugs, security issues, and performance problems
- Suggest idiomatic TypeScript patterns
- Flag violations of our conventions (listed below)

Our conventions:
- Use named exports (not default exports)
- Always define prop types with TypeScript interfaces
- Use React Query for all server state
- Error boundaries on all async components

Response format:
Return a JSON array of issues:
[{ severity: 'error'|'warning'|'info', line: number, message: string, suggestion: string }]

If no issues found, return [].
`;

Notice what this does:

  • Defines role (code review assistant, not general assistant)
  • Specifies the project context (TypeScript/React)
  • Lists explicit conventions (model can't guess these)
  • Enforces structured output (parseable JSON)

Structured Output Is Non-Negotiable in Production

Free-form text is fine for chatbots. In production code, you need parseable responses:

import Anthropic from '@anthropic-ai/sdk';
import { z } from 'zod';

const ReviewSchema = z.array(z.object({
  severity: z.enum(['error', 'warning', 'info']),
  line: z.number(),
  message: z.string(),
  suggestion: z.string(),
}));

const client = new Anthropic();

async function reviewCode(code: string) {
  const response = await client.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 1024,
    system: systemPrompt,
    messages: [{ role: 'user', content: code }],
  });

  // Guard: the first content block may not be a text block
  const text = response.content[0].type === 'text' ? response.content[0].text : '';
  // Throws on malformed JSON or a schema mismatch; let it fail loudly
  return ReviewSchema.parse(JSON.parse(text));
}
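One failure mode worth guarding against: even when told to return raw JSON, models sometimes wrap it in a markdown fence. A small pre-processing step (an illustrative helper, not part of the SDK) keeps `JSON.parse` from throwing on that:

```typescript
// Illustrative: strip an optional ```json fence before parsing.
// Assumes the fence-wrapping failure mode described above.
function extractJson(text: string): string {
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  return (fenced ? fenced[1] : text).trim();
}
```

Run the model output through `extractJson` before `JSON.parse`, and keep `ReviewSchema.parse` as the final gate so malformed responses still fail loudly.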

Few-Shot Examples Beat Instructions

When behavior is hard to describe, show examples:

// Typed with the SDK's MessageParam so the role literals aren't widened to string
const messages: Anthropic.MessageParam[] = [
  // Example 1: input
  { role: 'user', content: 'const x = require("lodash")' },
  // Example 1: expected output
  { role: 'assistant', content: '[{"severity":"warning","line":1,"message":"Use ES module import","suggestion":"import _ from \'lodash\'"}]' },
  // Example 2: input
  { role: 'user', content: 'export default function Button() {}' },
  // Example 2: expected output
  { role: 'assistant', content: '[{"severity":"info","line":1,"message":"Use named export","suggestion":"export function Button() {}"}]' },
  // Actual request
  { role: 'user', content: codeToReview },
];
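If you do this in more than one place, the pattern generalizes. This `withFewShot` helper is illustrative (not part of the SDK): it turns (input, output) example pairs into alternating user/assistant messages and appends the real request:

```typescript
// Illustrative: build a few-shot message array from example pairs.
type Msg = { role: 'user' | 'assistant'; content: string };

function withFewShot(examples: Array<[string, string]>, request: string): Msg[] {
  const msgs: Msg[] = [];
  for (const [input, output] of examples) {
    msgs.push({ role: 'user', content: input });     // example input
    msgs.push({ role: 'assistant', content: output }); // expected output
  }
  msgs.push({ role: 'user', content: request }); // the actual request
  return msgs;
}
```

Keeping examples as data also makes it easy to version them alongside your conventions.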

Reduce Hallucination With Constraints

Models hallucinate when they don't know the answer but feel compelled to respond. Add explicit uncertainty handling:

If you are not confident about an issue, omit it rather than flagging it speculatively.
Only flag issues you are certain about based on the code provided.
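A complementary approach is to ask the model to self-report a confidence score per issue and filter client-side. The `confidence` field here is an assumption layered onto the schema above (you'd have to prompt for it explicitly), not something the model provides by default:

```typescript
// Hedged sketch: filter out issues below a confidence threshold.
// The confidence field is an assumed extension of the review schema.
type Issue = {
  severity: 'error' | 'warning' | 'info';
  line: number;
  message: string;
  suggestion: string;
  confidence: number; // model's self-reported 0-1 estimate
};

function keepConfident(issues: Issue[], threshold = 0.8): Issue[] {
  return issues.filter((i) => i.confidence >= threshold);
}
```

Self-reported confidence is noisy, so treat the threshold as a tuning knob rather than a guarantee.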

Claude Code Skills: Prompt Engineering at Scale

The most powerful prompt engineering for developers isn't in API calls — it's in slash commands that encode your team's best practices into reusable workflows.

The Ship Fast Skill Pack includes pre-built Claude Code skills for PR review, test generation, architecture analysis, and deployment workflows — so your whole team benefits from the same carefully engineered prompts.
