Prompt Engineering for Developers: Beyond 'Be More Specific'
Most prompt engineering advice is surface-level. Here's what actually moves the needle when building LLM features in production.
Mental Model: LLM as Junior Developer
Think of the model as a capable junior developer with no context about your project. Give them:
- The role they're playing
- Constraints they're working within
- The output format you want
- Examples of success
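Those four elements can be composed mechanically. A minimal sketch, where `PromptSpec` and `buildSystemPrompt` are illustrative names (not a real API) and the example content is hypothetical:

```typescript
// Sketch: composing role, constraints, format, and examples into one prompt.
interface PromptSpec {
  role: string;
  constraints: string[];
  outputFormat: string;
  examples: string[];
}

function buildSystemPrompt(spec: PromptSpec): string {
  return [
    `You are ${spec.role}.`,
    'Constraints:',
    ...spec.constraints.map((c) => `- ${c}`),
    `Output format: ${spec.outputFormat}`,
    'Example of good output:',
    ...spec.examples,
  ].join('\n');
}

const juniorDevPrompt = buildSystemPrompt({
  role: 'a code review assistant for a TypeScript/React codebase',
  constraints: [
    'Use named exports (not default exports)',
    'Define prop types with TypeScript interfaces',
  ],
  outputFormat: 'a JSON array of issues',
  examples: [
    '[{"severity":"warning","line":3,"message":"Default export used","suggestion":"Switch to a named export"}]',
  ],
});
```

Keeping the spec as data rather than a hand-written string makes it easy to share constraints across features and to add few-shot examples later.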
System Prompt Architecture
```typescript
const systemPrompt = `
You are a code review assistant for a TypeScript/React codebase.

Our conventions:
- Use named exports (not default exports)
- Always define prop types with TypeScript interfaces
- Use React Query for all server state

Response format: JSON array of issues:
[{ severity: 'error'|'warning'|'info', line: number, message: string, suggestion: string }]

If no issues found, return [].
`;
```
Structured Output in Production
```typescript
import Anthropic from '@anthropic-ai/sdk';
import { z } from 'zod';

const client = new Anthropic();

const ReviewSchema = z.array(z.object({
  severity: z.enum(['error', 'warning', 'info']),
  line: z.number(),
  message: z.string(),
  suggestion: z.string(),
}));

async function reviewCode(code: string): Promise<z.infer<typeof ReviewSchema>> {
  const response = await client.messages.create({
    model: 'claude-sonnet-4-6',
    max_tokens: 1024,
    system: systemPrompt,
    messages: [{ role: 'user', content: code }],
  });

  // The API returns an array of content blocks; take the first text block.
  const text = response.content[0].type === 'text' ? response.content[0].text : '';

  // Throws if the model's output doesn't match the schema.
  return ReviewSchema.parse(JSON.parse(text));
}
```
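`ReviewSchema.parse` throws on malformed output, which is often too brittle in production: you usually want to detect the bad response and retry the request instead. A dependency-free sketch of that softer failure path (the hand-rolled type guard stands in for `ReviewSchema.safeParse`, and `extractJson` is a hypothetical helper for when the model wraps JSON in a markdown fence despite instructions):

```typescript
type Severity = 'error' | 'warning' | 'info';
interface Issue { severity: Severity; line: number; message: string; suggestion: string; }

// Strip a markdown code fence if the model added one around the JSON.
function extractJson(text: string): string {
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  return (fenced ? fenced[1] : text).trim();
}

// Runtime type guard standing in for ReviewSchema.safeParse.
function isIssueArray(data: unknown): data is Issue[] {
  return Array.isArray(data) && data.every((item) =>
    item !== null && typeof item === 'object' &&
    ['error', 'warning', 'info'].includes((item as Issue).severity) &&
    typeof (item as Issue).line === 'number' &&
    typeof (item as Issue).message === 'string' &&
    typeof (item as Issue).suggestion === 'string');
}

// Returns null instead of throwing, so the caller can retry the request.
function tryParseReview(text: string): Issue[] | null {
  try {
    const data: unknown = JSON.parse(extractJson(text));
    return isIssueArray(data) ? data : null;
  } catch {
    return null;
  }
}
```

A caller can loop a small, bounded number of times over the API call and `tryParseReview`, returning the first non-null result.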
Reduce Hallucination With Constraints
Add explicit uncertainty handling to the system prompt:

```
If you are not confident about an issue, omit it rather than flagging it speculatively.
Only flag issues you are certain about based on the code provided.
```
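These lines can simply be appended to the system prompt. A small sketch, where `baseSystemPrompt` is a stand-in for the `systemPrompt` defined earlier:

```typescript
// Stand-in for the full system prompt defined earlier in the article.
const baseSystemPrompt =
  'You are a code review assistant for a TypeScript/React codebase.';

const uncertaintyClause = [
  'If you are not confident about an issue, omit it rather than flagging it speculatively.',
  'Only flag issues you are certain about based on the code provided.',
].join('\n');

// Append the clause so it applies to every request using this prompt.
const guardedSystemPrompt = `${baseSystemPrompt}\n\n${uncertaintyClause}`;
```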
Claude Code Skills: Prompt Engineering at Scale
The most powerful prompt engineering for developers isn't in API calls. It's in slash commands that encode your team's best practices as reusable workflows.
The Ship Fast Skill Pack includes pre-built Claude Code skills for PR review, test generation, architecture analysis, and deployment workflows.