DEV Community

Pablo Ifrán

Posted on • Originally published at elpic.Medium

Prompting for Structure: How to Write AI Prompts That Don't Create Architectural Debt

Post 1 showed the mess. Here's the fix.

The loyalty points example ended with a constrained prompt: one paragraph that told the AI exactly which file, exactly which layer, and exactly what wasn't allowed. The AI produced a clean domain object on the first try. No drift, no boundary violations, no cleanup.

That wasn't luck. It was a pattern. This post is about making that pattern repeatable.


Why Most Prompts Produce Drift

When you write "add a discount service," you're asking the AI to make three decisions you should be making:

  1. Where does this code live?
  2. What does it depend on?
  3. What is it not allowed to touch?

The AI will answer all three questions but from its context window, not from your architectural intent. If it saw a service file recently, the new code goes there. If it saw a database call in the last file it wrote, it might add one here too. It's not wrong. It's pattern-matching from incomplete context.

The fix is to answer those three questions yourself, in the prompt, before the AI has a chance to guess.


The Anatomy of a Structured Prompt

Every prompt that produces architecturally correct code has the same four parts:

```
[LAYER]       — which file or directory does this belong in
[INTERFACE]   — what does it take in, what does it return
[CONSTRAINTS] — what it must not import, touch, or know about
[TASK]        — what it actually needs to do
```

Most prompts only have the last part. The first three are what prevents drift.

Here's what that looks like for the loyalty example from Post 1:

Without structure:

"Add a loyalty points system: customers earn 10 points per dollar spent, can redeem 100 points for $5 off."

With structure:

"Add a LoyaltyAccount value object to domain/models.py. It should hold points: int and have two methods: earn(spend_amount: float), returning a new LoyaltyAccount with earned points added, and redeem(requested: bool), returning a tuple of the updated account and the discount amount. Keep the rate (10 points per dollar) and redemption value (100 points = $5) as class-level constants. No database calls, no imports outside the standard library. The methods should be pure: no mutation of self."

Same ask. Different result. The constraint is the secret ingredient.
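For reference, here is the kind of value object that structured prompt tends to produce. This is a minimal sketch; the constant names and the no-op behavior when redemption isn't possible are my assumptions, not part of the original prompt:

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen enforces "no mutation of self"
class LoyaltyAccount:
    points: int = 0

    # Class-level constants, as the prompt requires.
    EARN_RATE = 10       # points earned per dollar spent
    REDEEM_COST = 100    # points consumed per redemption
    REDEEM_VALUE = 5.0   # dollars off per redemption

    def earn(self, spend_amount: float) -> "LoyaltyAccount":
        # Pure: returns a new account instead of mutating self.
        return LoyaltyAccount(points=self.points + int(spend_amount * self.EARN_RATE))

    def redeem(self, requested: bool) -> tuple["LoyaltyAccount", float]:
        # No discount if redemption wasn't requested or points are insufficient.
        if not requested or self.points < self.REDEEM_COST:
            return self, 0.0
        return LoyaltyAccount(points=self.points - self.REDEEM_COST), self.REDEEM_VALUE
```

Nothing here imports a framework, which is exactly what the constraint bought you.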

The Four-Part Prompt Anatomy


The Four Prompt Patterns

Pattern 1: The Layer-First Prompt

Use this when adding a new piece of functionality and you know exactly where it belongs.

```
Create [FILENAME] in [DIRECTORY].

It should only import from [ALLOWED_IMPORTS].
It must not import from [FORBIDDEN_IMPORTS].

[TASK DESCRIPTION]

No [SPECIFIC_FORBIDDEN_THING].
```

Example, adding an application service:

```
Create `application/notification_service.py`.

It should only import from `domain/`.
It must not import from `fastapi`, `sqlalchemy`, or any HTTP library.

Add a `NotificationService` class that takes a `NotificationPort` as a constructor argument. Add a `notify_order_confirmed(order_id: str, customer_email: str) -> None` method that calls the port.

No business logic: this is an orchestration layer only.
```

What you get: a thin service class that delegates to a port. No HTTP. No database. No surprises.
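A sketch of that output, assuming the obvious shapes (the `Protocol` is shown inline here so the snippet stands alone; in the real layout it would live in `domain/ports.py`):

```python
from typing import Protocol


class NotificationPort(Protocol):
    """Any notification delivery mechanism; belongs in domain/ports.py."""

    def send_order_confirmation(self, order_id: str, customer_email: str) -> None: ...


class NotificationService:
    """Orchestration only: holds a port and delegates to it."""

    def __init__(self, port: NotificationPort) -> None:
        self._port = port

    def notify_order_confirmed(self, order_id: str, customer_email: str) -> None:
        # No business logic here, just translation from intent to port call.
        self._port.send_order_confirmation(order_id, customer_email)
```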

The Layer-First Prompt Template

Pattern 2: The Port-First Prompt

Use this when adding a new external dependency. Define the port before the adapter.

```
Add a `NotificationPort` Protocol to `domain/ports.py`.

It should define one method: `send_order_confirmation(order_id: str, customer_email: str) -> None`.

This protocol represents any notification delivery mechanism. Do not import from any external library. The type signature should use only Python builtins.
```

Then separately:

```
Create `adapters/sendgrid/notification_adapter.py`.

It implements the `NotificationPort` protocol from `domain/ports.py`.
It may import from `sendgrid` and the standard library only.

Add a `SendGridNotificationAdapter` class that sends a transactional
email using the SendGrid API. Constructor takes `api_key: str`.
```

Two prompts, two files, clear boundary. The domain never knows SendGrid exists.
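The boundary also pays off in tests: because everything above the adapter only knows the port, you can swap the SendGrid adapter for an in-memory fake and test without network access. A sketch (the fake is my illustration, not part of the prompts above):

```python
class FakeNotificationAdapter:
    """In-memory stand-in satisfying the NotificationPort protocol.

    No SendGrid, no API key, no network: it just records what was sent.
    """

    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send_order_confirmation(self, order_id: str, customer_email: str) -> None:
        self.sent.append((order_id, customer_email))


def test_confirmation_reaches_the_port() -> None:
    fake = FakeNotificationAdapter()
    # Anything holding the port (e.g. NotificationService) can take the fake
    # instead of the real adapter; here we exercise the port contract directly.
    fake.send_order_confirmation("order-42", "ada@example.com")
    assert fake.sent == [("order-42", "ada@example.com")]
```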

Port-First Prompt Flow Diagram

Pattern 3: The Thin-Route Prompt

Use this when adding a new API endpoint. Keep the route handler as close to zero logic as possible.

```
Add a POST endpoint at `/orders/{order_id}/cancel` in `api/orders.py`.

The route handler should:
- Accept `order_id: str` from the path
- Call `service.cancel_order(order_id)` with no other arguments
- Return `{"status": "cancelled", "order_id": order_id}`
- Convert `ValueError` to 422, `KeyError` to 404

No business logic in the route. No direct database calls.
The route handler body should be 10 lines or fewer.
```

The line count constraint is surprisingly effective. It forces the AI to delegate logic to the service instead of adding it inline.

Thin Route Line Count Comparison

Pattern 4: The Refactor-Into-Domain Prompt

Use this when you've already accepted some boundary-blind output and need to fix it without a full rewrite.

```
The loyalty points calculation in `api/orders.py` should move to
`domain/models.py`.

Extract it into a `LoyaltyAccount` dataclass with methods `earn()` and `redeem()`. No imports outside domain/ and stdlib.

Update `api/orders.py` to instantiate `LoyaltyAccount` and call those methods instead of calculating inline. Do not change any existing tests.
```

Vague refactor prompts produce vague results. Giving the AI the destination file and the contract to extract is what makes this one work.

The Three-Question Check


Giving the AI Your Architecture as Context

The patterns above work for individual prompts. But there's a higher-leverage move: giving the AI a persistent understanding of your architecture at the start of every session.

Here's the architectural context block I use for a hexagonal FastAPI service:

```
## Architecture Context

This is a Python FastAPI service using hexagonal architecture.

Directory structure and dependency rules:
- domain/       — pure Python, no framework imports, no infrastructure
- application/  — imports from domain/ only; no fastapi, no sqlalchemy
- adapters/     — implements domain ports; may import sqlalchemy, httpx, etc.
- api/          — imports from application/ only; handles HTTP translation only

Rules:
- Business logic lives in domain/
- Orchestration lives in application/
- Infrastructure lives in adapters/
- Route handlers are thin: parse request → call service → return response
- No SQLAlchemy models in domain/
- No FastAPI types in application/ or domain/

When I ask you to add code, always specify which layer it belongs in and respect the import rules above.
```

Paste this at the start of a Cursor chat or Claude project. Every prompt you write after it inherits these rules without repeating them.
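You can also make those rules executable. Here's a minimal sketch of an architecture test that fails if `domain/` picks up a framework import; the file name and forbidden list are illustrative:

```python
# test_architecture.py: guard that domain/ stays free of framework imports.
import ast
import pathlib

FORBIDDEN = {"fastapi", "sqlalchemy", "httpx"}


def forbidden_imports(source: str) -> set[str]:
    """Return any forbidden top-level modules imported by the given source."""
    found: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & FORBIDDEN


def test_domain_is_pure() -> None:
    for path in pathlib.Path("domain").rglob("*.py"):
        assert not forbidden_imports(path.read_text()), f"{path} breaks the layer rule"
```

The context block tells the AI the rules; a test like this tells you when they slipped anyway.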

The Architectural Context Block


Before/After: Order Cancellation

Let's run the full before/after on adding order cancellation.

The unconstrained prompt:

"Add order cancellation. Orders can be cancelled if they haven't shipped yet. Once cancelled, the customer gets a refund of their loyalty points."

Result: business rules in the route handler, direct database queries in the route, the loyalty points rate hard-coded again as a magic number, and no test surface for the cancellation logic.

```python
# api/orders.py: everything in one place
@router.post("/{order_id}/cancel")
def cancel_order(order_id: str, db: Session = Depends(get_db)):
    order = db.query(Order).filter_by(id=order_id).first()
    if not order:
        raise HTTPException(status_code=404)
    if order.status == "shipped":
        raise HTTPException(status_code=400, detail="Cannot cancel shipped order")
    customer = db.query(Customer).filter_by(id=order.customer_id).first()
    if customer:
        points_earned = int(order.total * 10)  # magic number is back
        customer.loyalty_points = max(0, customer.loyalty_points - points_earned)
    order.status = "cancelled"
    db.commit()
    return {"status": "cancelled", "order_id": order_id}
```

The constrained approach: three prompts, three files.

Domain rule first:

```
Add a `cancel()` method to the `Order` dataclass in `domain/models.py`.

It should:
- Return a new `Order` with status "cancelled"
- Raise `ValueError("cannot cancel a shipped order")` if status is "shipped"
- Accept the current `LoyaltyAccount` and return a tuple of
  (updated Order, updated LoyaltyAccount) with the earned points reversed

No database calls, no imports outside domain/.
```

Then the service method:

```
Add a `cancel_order(order_id: str) -> Order` method to
`application/order_service.py`.

Fetch the order, call `order.cancel(loyalty_account)`, save the results.
Raise `KeyError` if the order is not found.
No FastAPI, no SQLAlchemy, no HTTP types.
```

Then the route:

```
Add a POST endpoint at `/orders/{order_id}/cancel` in `api/orders.py`.
Call `service.cancel_order(order_id)`.
Convert `ValueError` to 422, `KeyError` to 404.
Route body: 10 lines or fewer.
```

Result:

```python
# domain/models.py: the rule lives here, testable in isolation
def cancel(self, loyalty_account: "LoyaltyAccount") -> tuple["Order", "LoyaltyAccount"]:
    if self.status == "shipped":
        raise ValueError("cannot cancel a shipped order")
    reversed_account = loyalty_account.earn(-self.total.amount)
    return Order(
        id=self.id, customer_id=self.customer_id,
        items=self.items, status="cancelled", total=self.total,
    ), reversed_account
```
```python
# api/orders.py: eight lines, no logic
@router.post("/{order_id}/cancel")
def cancel_order(order_id: str, service: OrderService = Depends(get_order_service)):
    try:
        order = service.cancel_order(order_id)
    except ValueError as exc:
        raise HTTPException(status_code=422, detail=str(exc))
    except KeyError:
        raise HTTPException(status_code=404)
    return {"status": order.status, "order_id": order.id}
```

Three files. Each tests independently. The cancellation rule lives in the domain.
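To make "each tests independently" concrete, here is what a pure unit test of the cancellation rule can look like. The `Money` and `LoyaltyAccount` stand-ins are minimal hypothetical shapes so the snippet runs on its own; no FastAPI, no database:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Money:
    amount: float


@dataclass(frozen=True)
class LoyaltyAccount:
    points: int

    EARN_RATE = 10  # points per dollar

    def earn(self, spend_amount: float) -> "LoyaltyAccount":
        return LoyaltyAccount(points=self.points + int(spend_amount * self.EARN_RATE))


@dataclass(frozen=True)
class Order:
    id: str
    customer_id: str
    status: str
    total: Money

    def cancel(self, loyalty_account: LoyaltyAccount) -> tuple["Order", LoyaltyAccount]:
        if self.status == "shipped":
            raise ValueError("cannot cancel a shipped order")
        # Earning a negative amount reverses the points granted at purchase.
        return replace(self, status="cancelled"), loyalty_account.earn(-self.total.amount)


def test_cancel_reverses_earned_points() -> None:
    order = Order(id="o1", customer_id="c1", status="paid", total=Money(12.0))
    cancelled, account = order.cancel(LoyaltyAccount(points=120))
    assert cancelled.status == "cancelled"
    assert account.points == 0


def test_shipped_orders_cannot_be_cancelled() -> None:
    shipped = Order(id="o2", customer_id="c1", status="shipped", total=Money(5.0))
    try:
        shipped.cancel(LoyaltyAccount(points=50))
    except ValueError as exc:
        assert "shipped" in str(exc)
    else:
        raise AssertionError("expected ValueError")
```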


The Three-Question Check

Before you send any prompt to an AI coding assistant, ask:

1. Did I specify the layer?
If not, the AI will choose. Its choice will be defensible but probably not consistent with the last time you added something similar.

2. Did I specify the interface?
Method signatures, argument types, return types. The more specific you are, the less the AI has to guess.

3. Did I add at least one negative constraint?
What can this code not import? What can it not do? One negative constraint is usually enough to prevent the most common drift pattern for that layer.

If you can answer all three in under 30 seconds, you have a structured prompt.


Prompts Don't Replace Judgment

Structured prompts make AI output better, not perfect.

The AI will still occasionally violate a constraint, particularly if the task is complex or the context is large. The patterns in this post reduce drift significantly, but they don't eliminate the need to review the output. Post 3 covers what to look for in that review.

What structured prompts actually do is shift the review burden. Instead of reading every line looking for architectural problems, you focus on the places where constraints were most likely to slip: the import block, the layer boundaries, the line count. That's a much faster review.

The goal isn't perfect AI output. It's consistent, predictable AI output that lands close enough to correct that your review time is measured in seconds, not minutes.
