Suifeng023
Stop Writing Prompts Like a Noob: The Engineering Behind Reliable AI Outputs

Everyone and their dog is "prompt engineering" now. But most people are just guessing — typing sentences, hoping for the best, and accepting whatever the AI spits back.

After generating thousands of outputs across my autonomous AI agent (articles, designs, code, research), I've developed a systematic approach that turns AI from a slot machine into a reliable tool.

Here's what I actually do differently.

The Problem: Why Most Prompts Fail

Let me start with a real example. Here's how most people ask AI to write code:

Write me a Python function that scrapes a website

And here's what they get: a generic, probably broken script that looks like it was written by someone who read a tutorial once.

The problem isn't the AI. The problem is you're asking for a solution when you should be specifying a system.

The Three Failure Modes

| Failure Mode | Example | Why It Fails |
| --- | --- | --- |
| Too vague | "Write a web scraper" | No constraints = unpredictable output |
| Too specific | "Write 47 lines of Python using BeautifulSoup with CSS selector '.item' and save to CSV" | AI follows instructions literally, misses the point |
| No context | "Fix this bug" | AI doesn't know your architecture, dependencies, or goals |

My Framework: C-R-A-F-T

After months of iteration, I've settled on a framework I call C-R-A-F-T:

C — Context

Tell the AI who it is and what world it lives in.

You are a senior backend engineer at a Series B startup.
You're working on a microservices architecture with Python, 
FastAPI, and PostgreSQL. The team values clean code, 
comprehensive tests, and detailed documentation.

This single paragraph changes everything. The AI now knows what level of code to produce, what patterns to follow, and what standards to meet.

R — Requirements

Specify exactly what you need, structured as constraints.

Requirements:
- Build a REST API endpoint for user registration
- Email validation with regex
- Password hashing with bcrypt (min 12 chars)
- Return JWT token on success
- Handle duplicate email with 409 status code
- Include input validation error messages

Notice: I'm specifying behavior, not implementation. The AI chooses HOW to meet these requirements.
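To make "behavior, not implementation" concrete, here is one plain-Python sketch of how those validation constraints could be enforced. Everything here is my own illustration (the regex, the function names, the PBKDF2 stand-in for bcrypt), not part of any real codebase:

```python
import re
import hashlib

# Illustration only: a real service would use vetted libraries
# (email-validator, bcrypt) rather than these stdlib stand-ins.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
MIN_PASSWORD_LEN = 12


def validate_registration(email: str, password: str) -> list[str]:
    """Return a list of validation error messages (empty if valid)."""
    errors = []
    if not EMAIL_RE.match(email):
        errors.append("invalid email format")
    if len(password) < MIN_PASSWORD_LEN:
        errors.append(f"password must be at least {MIN_PASSWORD_LEN} characters")
    return errors


def hash_password(password: str) -> str:
    """Stand-in for bcrypt hashing, using PBKDF2 from the stdlib.

    The hardcoded salt is for demonstration only; real code generates
    a random salt per password.
    """
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode(), b"demo-salt", 100_000
    ).hex()
```

The point: the prompt states the constraints (valid email, minimum 12 characters, hashed password, descriptive error messages), and any implementation satisfying them is acceptable.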

A — Architecture

Define the structural boundaries.

Architecture constraints:
- Use FastAPI with Pydantic models
- Separate into: router, service, repository layers
- Database access through SQLAlchemy async session
- Follow existing project structure in /src/users/

This prevents the AI from creating a monolithic 200-line function when you need separation of concerns.

F — Format

Tell it exactly what the output should look like.

Output format:
1. Pydantic request/response models
2. Router function with endpoint decorator
3. Service layer function (business logic)
4. Repository layer function (database query)
5. 3 edge case tests using pytest

Without this, you might get the code but no tests. Or tests but no models. Or everything in one file.

T — Tone & Style

Match your existing codebase's voice.

Style requirements:
- Use type hints everywhere
- Docstrings in Google format
- Variable names: snake_case, descriptive
- Comments only for non-obvious logic
- Logging with structlog, not print()

Real-World Example: The Before & After

Before (Typical Prompt)

Write a Python function to send email notifications

Result: A 20-line script using smtplib with hardcoded credentials. Useless in production.

After (C-R-A-F-T Prompt)

## Context
You are a senior Python engineer building a notification 
service for a SaaS platform with 50K users. We use 
Celery for async tasks and SendGrid for email delivery.

## Requirements
- Async email sending function (non-blocking)
- Support HTML and plain text templates
- Retry logic: 3 attempts with exponential backoff
- Dead letter queue for failed emails after retries
- Rate limiting: max 100 emails per minute per tenant
- Template rendering from Jinja2 templates stored in S3

## Architecture
- Celery task for async execution
- SendGrid API client wrapper
- Template service that fetches from S3
- Configuration through environment variables
- Metrics tracking with Prometheus counters

## Format
1. Celery task definition
2. Email service class
3. Template service class  
4. Configuration dataclass
5. Unit tests with mocked SendGrid client
6. Error handling middleware

## Style
- Async/await throughout
- Type hints on all functions
- Structured logging with context vars for tenant ID
- No global state

Result: A production-ready, 150-line module with proper separation of concerns, error handling, and tests. The kind of code you'd actually review and approve.
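To pin down one of the requirements above, here is a stripped-down, synchronous sketch of retry with exponential backoff. The real module would lean on Celery's retry machinery; this stand-alone version (all names mine) just shows the behavior the prompt is asking for:

```python
import time


def send_with_retry(send_fn, max_attempts: int = 3, base_delay: float = 1.0,
                    sleep=time.sleep):
    """Call send_fn, retrying up to max_attempts with exponential backoff.

    Returns send_fn's result, or re-raises the last error so the caller
    can route the message to a dead letter queue.
    """
    last_error = None
    for attempt in range(max_attempts):
        try:
            return send_fn()
        except Exception as exc:  # real code would catch specific errors
            last_error = exc
            if attempt < max_attempts - 1:
                sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    raise last_error
```

The `sleep` parameter is injected so tests can skip the actual waiting, which is exactly the kind of detail a well-specified prompt tends to produce unasked.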

Advanced Patterns I Use Daily

Pattern 1: The Few-Shot Bootstrap

Instead of describing your style, show it:

Here's an existing function from our codebase. Match this
exact style, patterns, and quality level:

async def get_user_by_email(
    email: EmailStr,
    db: AsyncSession = Depends(get_db)
) -> Optional[User]:
    """Fetch a single user by email address.

    Args:
        email: Validated email string
        db: Async database session

    Returns:
        User object if found, None otherwise
    """
    result = await db.execute(
        select(User).where(User.email == email)
    )
    return result.scalar_one_or_none()

Now write the user registration function in the same style.

This is 10x more effective than any style guide description.

Pattern 2: The Constraint Ladder

Start with broad constraints, then refine:

Round 1: Write the basic CRUD operations for the Order model.
Round 2: Now add pagination, filtering, and sorting.
Round 3: Add optimistic concurrency control.
Round 4: Add caching layer with TTL-based invalidation.

Each round builds on the previous one. The AI maintains context across rounds, so you get increasingly sophisticated output without starting over.

Pattern 3: The Error Spec

Tell the AI what can go wrong and how to handle it:

Error scenarios to handle:
1. Database connection timeout → Retry with exponential backoff (3 max)
2. Duplicate key violation → Return 409 with descriptive message
3. Validation failure → Return 422 with field-level error details
4. External API rate limit → Queue for retry after cooldown
5. Data corruption → Log full context, alert ops team, return 500

This is the difference between code that works on your machine and code that works in production.
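A prompt written from a spec like that tends to produce a handler along these lines. This is a hedged sketch: the exception classes are made-up stand-ins for real driver errors, and the retry branch is collapsed to a status code for brevity.

```python
class DatabaseTimeout(Exception): ...
class DuplicateKey(Exception): ...
class ValidationFailure(Exception):
    def __init__(self, field_errors: dict):
        self.field_errors = field_errors


def handle(operation):
    """Map the error scenarios from the spec above to HTTP-style responses."""
    try:
        return 200, operation()
    except DuplicateKey:
        return 409, {"detail": "resource already exists"}
    except ValidationFailure as exc:
        return 422, {"errors": exc.field_errors}  # field-level detail
    except DatabaseTimeout:
        # per the spec, this branch would first retry with backoff
        return 503, {"detail": "database unavailable, retry later"}
```

Each line of the error spec becomes one `except` branch, which makes the spec directly reviewable against the output.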

Pattern 4: The Test-First Prompt

Ask for tests BEFORE implementation:

Write pytest tests for a user registration endpoint that:
- Validates email format
- Rejects passwords under 12 characters
- Hashes password with bcrypt
- Returns JWT token with 24h expiry
- Handles duplicate emails gracefully

Write ONLY the tests first. I'll implement the code to make them pass.

This flips the dynamic: the tests become your specification. When you do implement the code, feed the tests back as constraints and the AI will generate code that passes them.
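Here is roughly what that tests-only deliverable could look like, as plain pytest-style functions. The `register_user` stub is hypothetical and exists only so this sketch runs; in the actual test-first workflow you would have only the test functions at this stage.

```python
import re

MIN_PASSWORD_LEN = 12


def register_user(email: str, password: str, existing=None) -> dict:
    """Hypothetical stub standing in for the real endpoint, so the
    tests below are runnable. In a test-first flow this comes second."""
    existing = existing or set()
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        return {"status": 422, "error": "invalid email"}
    if len(password) < MIN_PASSWORD_LEN:
        return {"status": 422, "error": "password too short"}
    if email in existing:
        return {"status": 409, "error": "email already registered"}
    return {"status": 201, "token": "jwt-placeholder"}


def test_rejects_invalid_email():
    assert register_user("nope", "a-long-password")["status"] == 422


def test_rejects_short_password():
    assert register_user("a@b.com", "short")["status"] == 422


def test_handles_duplicate_email():
    result = register_user("a@b.com", "a-long-password", existing={"a@b.com"})
    assert result["status"] == 409


def test_returns_token_on_success():
    assert "token" in register_user("a@b.com", "a-long-password")
```

Each test encodes one requirement, so the suite doubles as the specification you feed back when asking for the implementation.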

Pattern 5: The Negative Prompt

Tell the AI what NOT to do:

DO NOT:
- Use global variables or singletons
- Import deprecated libraries
- Write synchronous I/O operations
- Use print() for logging
- Hardcode any configuration values
- Skip error handling "for brevity"
- Add TODO comments instead of implementing features

Surprisingly effective. Left to its own devices, the AI defaults to exactly these shortcut patterns; spelling them out as explicit negatives eliminates them.

The Meta-Framework: How I Build My Prompts

Here's my actual workflow for creating C-R-A-F-T prompts:

  1. Start with the output I want — Work backwards from the ideal result
  2. Identify the minimum context — What's the LEAST the AI needs to know?
  3. List hard constraints — Non-negotiable requirements
  4. Define the interface — Input/output contracts
  5. Choose one advanced pattern — Few-shot, constraint ladder, or test-first
  6. Add negative constraints — What should the AI avoid?
  7. Iterate in rounds — Refine based on output quality

What This Means for Your Workflow

The C-R-A-F-T framework isn't just about writing better prompts. It's about treating AI interaction as engineering, not conversation.

Every prompt is a specification. Every output is a deliverable. Every iteration is a review cycle.

When I built my autonomous AI agent that runs 24/7 and generates content, designs, and code, C-R-A-F-T was the foundation. Without it, I'd spend hours correcting garbage output. With it, I get production-quality results on the first try about 80% of the time.

The difference between amateur and professional prompt engineering isn't creativity — it's discipline.


If you found this useful, I've compiled 500+ battle-tested prompts for developers and marketers into downloadable packs.

Each pack includes ready-to-use C-R-A-F-T formatted prompts with real examples.
