
klement Gunndu

Your API Wasn't Designed for AI Agents. Here Are 5 Fixes.

AI agents are your API's newest power users. They don't read your docs the way humans do. They parse your OpenAPI spec, call endpoints in loops, retry on every failure, and chain 20 requests without pausing to ask "are you sure?"

Most APIs were designed for frontend developers clicking buttons. That design breaks when an autonomous agent starts calling your endpoints at 3 AM with no human in the loop.

Here are 5 architecture patterns that help your API survive the agent era.

1. Machine-Readable Descriptions in Your OpenAPI Spec

Human developers read your API docs and infer intent. AI agents read your OpenAPI specification and take descriptions literally. Vague descriptions produce wrong API calls.

The difference between an agent choosing the right endpoint and the wrong one is often a single sentence in your spec.

from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(
    title="Order Management API",
    description="Manages customer orders. All monetary values are in USD cents.",
)


class OrderCreate(BaseModel):
    """Create a new order. Does NOT charge the customer.
    Use POST /orders/{order_id}/confirm to finalize and charge."""

    customer_id: str = Field(
        description="Unique customer identifier. Format: cust_xxxxxxxxxxxx"
    )
    items: list[str] = Field(
        description="List of SKU strings. Each SKU must exist in the product catalog."
    )
    amount_cents: int = Field(
        description="Total order amount in USD cents. Must match sum of item prices.",
        ge=1,
    )


@app.post(
    "/orders",
    summary="Create a draft order (does not charge customer)",
    response_description="Returns the created order with status 'draft'",
)
async def create_order(order: OrderCreate):
    """Create a draft order. The order is NOT finalized until
    POST /orders/{order_id}/confirm is called. Draft orders
    expire after 30 minutes."""
    return {"order_id": "ord_123", "status": "draft"}

Three things agents need that human developers infer:

  • Explicit side effects. "Does NOT charge the customer" prevents an agent from assuming a POST creates and charges in one step.
  • Format specifications in Field descriptions. Format: cust_xxxxxxxxxxxx eliminates guessing about ID formats.
  • Units in the schema. "USD cents" prevents an agent from sending 19.99 when you expect 1999.

Expose your spec at a standard endpoint like /openapi.json. Agents discover APIs through specs, not landing pages.
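To see what's happening on the other side of that spec, here is a minimal sketch of agent-side indexing: mapping each operation to its summary so an LLM can choose an endpoint. The `spec` dict is a hypothetical fragment; a real agent would GET /openapi.json first.

```python
# Sketch of agent-side spec indexing. The spec dict below is a hypothetical
# fragment; a real agent would fetch the full document from /openapi.json.
spec = {
    "paths": {
        "/orders": {
            "post": {"summary": "Create a draft order (does not charge customer)"},
            "get": {"summary": "List orders for the authenticated customer"},
        }
    }
}


def index_operations(spec: dict) -> dict[str, str]:
    """Map 'METHOD /path' to its summary so an LLM can pick the right endpoint."""
    ops = {}
    for path, methods in spec.get("paths", {}).items():
        for method, operation in methods.items():
            ops[f"{method.upper()} {path}"] = operation.get("summary", "")
    return ops


for op, summary in index_operations(spec).items():
    print(f"{op}: {summary}")
```

Every word of the summary ends up in the agent's prompt, which is why "does not charge customer" belongs in the summary itself, not buried three clicks deep in a guide.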

2. Idempotency Keys for Safe Retries

Agents retry. Every agent framework includes retry logic. LangChain retries on exceptions. OpenAI's function calling retries on malformed responses. Your own agent loop retries on timeouts.

Without idempotency keys, each retry creates a duplicate. One "create order" intent becomes three orders.

from fastapi import FastAPI, Header, HTTPException, Response
from typing import Optional
import hashlib
import json
import time

app = FastAPI()

# In production: use Redis or a database
idempotency_store: dict[str, dict] = {}
IDEMPOTENCY_TTL_SECONDS = 86400  # 24 hours


@app.post("/payments")
async def create_payment(
    amount_cents: int,
    customer_id: str,
    response: Response,
    idempotency_key: Optional[str] = Header(None, alias="Idempotency-Key"),
):
    if idempotency_key is None:
        raise HTTPException(
            status_code=400,
            detail={
                "error": "idempotency_key_required",
                "message": "POST /payments requires an Idempotency-Key header.",
                "docs": "/openapi.json#/paths/~1payments/post",
            },
        )

    # Check if this key was already processed (cached entries expire after the TTL)
    cached = idempotency_store.get(idempotency_key)
    if cached and time.time() - cached["created_at"] < IDEMPOTENCY_TTL_SECONDS:
        response.headers["X-Idempotent-Replayed"] = "true"
        return cached["response"]

    # Process the payment (your real logic here)
    result = {
        "payment_id": f"pay_{hashlib.sha256(idempotency_key.encode()).hexdigest()[:12]}",
        "amount_cents": amount_cents,
        "customer_id": customer_id,
        "status": "completed",
    }

    # Cache the response
    idempotency_store[idempotency_key] = {
        "response": result,
        "created_at": time.time(),
    }

    return result

The X-Idempotent-Replayed: true header tells the calling agent that this response is cached, not fresh. Agents that track state transitions need to know the difference.

Document idempotency in your OpenAPI spec. Add the Idempotency-Key header as a required parameter on every mutating endpoint. Agents read headers from specs — if the header is documented, the agent sends it.
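On the caller's side, the key must be stable across retries of the same intent. A minimal sketch of one way to derive it (the `idem_` prefix and key length are illustrative choices, not a standard):

```python
import hashlib
import json


def idempotency_key(intent: dict) -> str:
    """Derive a stable key from the logical intent, so every retry of the
    same action sends the same Idempotency-Key header."""
    canonical = json.dumps(intent, sort_keys=True)  # key order must not matter
    return "idem_" + hashlib.sha256(canonical.encode()).hexdigest()[:24]


# Retries of the same intent produce the same key; a different amount is a
# different intent and gets a new key.
a = idempotency_key({"customer_id": "cust_abc", "amount_cents": 1999})
b = idempotency_key({"amount_cents": 1999, "customer_id": "cust_abc"})
c = idempotency_key({"customer_id": "cust_abc", "amount_cents": 2999})
```

A random UUID per attempt defeats the purpose: the key must survive the retry loop, either by deriving it from the intent as above or by generating it once before the loop starts.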

3. Structured Error Responses

Human developers read "Something went wrong" and check the logs. Agents read it and have no idea what to do next.

Every error your API returns becomes input to an LLM deciding what to do next. Structured errors give agents a recovery path. Unstructured errors produce hallucinated retries.

from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import JSONResponse
from pydantic import BaseModel

app = FastAPI()


class APIError(BaseModel):
    error_code: str
    message: str
    retryable: bool
    retry_after_seconds: int | None = None
    suggestion: str | None = None


# Raise catalog entries as structured details, e.g.:
#   raise HTTPException(status_code=402, detail=ERROR_CATALOG["insufficient_funds"].model_dump())
ERROR_CATALOG = {
    "insufficient_funds": APIError(
        error_code="insufficient_funds",
        message="Customer balance is below the requested amount.",
        retryable=False,
        suggestion="Reduce the amount or add funds via POST /customers/{id}/balance",
    ),
    "rate_limited": APIError(
        error_code="rate_limited",
        message="Too many requests. Slow down.",
        retryable=True,
        retry_after_seconds=30,
        suggestion="Wait for retry_after_seconds, then retry the same request.",
    ),
    "resource_locked": APIError(
        error_code="resource_locked",
        message="This resource is being modified by another request.",
        retryable=True,
        retry_after_seconds=2,
        suggestion="Retry with the same idempotency key after the wait period.",
    ),
}


@app.exception_handler(HTTPException)
async def structured_error_handler(request: Request, exc: HTTPException):
    if isinstance(exc.detail, dict) and "error_code" in exc.detail:
        return JSONResponse(
            status_code=exc.status_code,
            content=exc.detail,
        )
    # Fallback for unstructured errors
    return JSONResponse(
        status_code=exc.status_code,
        content={
            "error_code": "unknown_error",
            "message": str(exc.detail),
            "retryable": False,
            "suggestion": "Check the request parameters and try again.",
        },
    )

Three fields that change agent behavior:

  • retryable: Boolean. Agents check this before deciding to retry. Without it, agents retry everything — including errors that will never succeed.
  • retry_after_seconds: Integer. Tells the agent exactly how long to wait. Without it, agents use exponential backoff or guess.
  • suggestion: String. The LLM reads this and uses it to decide the next action. "Reduce the amount or add funds via POST /customers/{id}/balance" gives the agent a concrete recovery path.

This turns a dead-end error into a decision tree the agent can navigate.
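The agent-side consumer of these fields can be a small pure function. A hedged sketch of the decision logic an agent loop might run (the action strings and the backoff fallback are illustrative):

```python
def next_action(error: dict, attempt: int, max_attempts: int = 3) -> str:
    """Turn a structured error into the agent's next move."""
    if error.get("retryable") and attempt < max_attempts:
        # Prefer the server's exact wait; fall back to exponential backoff.
        wait = error.get("retry_after_seconds") or 2 ** attempt
        return f"retry_in_{wait}s"
    # Non-retryable (or out of attempts): surface the recovery path instead
    # of looping on a failure that will never succeed.
    return "escalate: " + error.get("suggestion", "no suggestion provided")


print(next_action({"retryable": True, "retry_after_seconds": 30}, attempt=1))
print(next_action({"retryable": False, "suggestion": "Reduce the amount"}, attempt=1))
```

Without the `retryable` field, the first branch cannot exist, and every error degrades into a blind retry.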

4. Confirmation Endpoints for Destructive Actions

Agents chain actions. A prompt like "clean up old data" can trigger an agent to call DELETE /users/{id} in a loop without asking anyone. One bad filter and you lose production data.

Destructive operations need a two-step process: prepare, then confirm.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import uuid
import time

app = FastAPI()

pending_deletions: dict[str, dict] = {}
CONFIRMATION_TTL_SECONDS = 300  # 5 minutes


class DeletionPreview(BaseModel):
    confirmation_token: str
    action: str
    affected_resources: list[str]
    affected_count: int
    expires_at: float
    warning: str


@app.post(
    "/users/bulk-delete/preview",
    summary="Preview a bulk deletion. Returns affected resources. Does NOT delete anything.",
)
async def preview_bulk_delete(
    filter_status: str,
    older_than_days: int,
) -> DeletionPreview:
    # Query what WOULD be deleted (read-only operation)
    affected = [f"user_{i}" for i in range(3)]  # Replace with real query

    token = str(uuid.uuid4())
    preview = DeletionPreview(
        confirmation_token=token,
        action="bulk_delete_users",
        affected_resources=affected,
        affected_count=len(affected),
        expires_at=time.time() + CONFIRMATION_TTL_SECONDS,
        warning=f"This will permanently delete {len(affected)} users. "
        f"Confirm within 5 minutes or the token expires.",
    )

    pending_deletions[token] = {
        "filter_status": filter_status,
        "older_than_days": older_than_days,
        "preview": preview.model_dump(),
        "created_at": time.time(),
    }

    return preview


@app.post(
    "/users/bulk-delete/confirm",
    summary="Execute a previewed bulk deletion. Requires a valid confirmation token.",
)
async def confirm_bulk_delete(confirmation_token: str):
    if confirmation_token not in pending_deletions:
        raise HTTPException(
            status_code=404,
            detail={
                "error_code": "invalid_token",
                "message": "Confirmation token not found or expired.",
                "retryable": False,
                "suggestion": "Call POST /users/bulk-delete/preview to get a new token.",
            },
        )

    pending = pending_deletions[confirmation_token]
    if time.time() > pending["preview"]["expires_at"]:
        del pending_deletions[confirmation_token]
        raise HTTPException(
            status_code=410,
            detail={
                "error_code": "token_expired",
                "message": "Confirmation token has expired.",
                "retryable": False,
                "suggestion": "Call POST /users/bulk-delete/preview to generate a new token.",
            },
        )

    # Execute the actual deletion
    del pending_deletions[confirmation_token]
    return {"status": "deleted", "count": pending["preview"]["affected_count"]}

The preview endpoint is a read-only operation. It returns what would happen without doing it. The confirmation endpoint executes — but only with a valid, unexpired token.

This gives the agent (or the human reviewing agent actions) a checkpoint. The token expiration prevents stale confirmations from executing hours later when the data has changed.

5. Rate Limiting With Retry-After Headers

Agents hit rate limits more than humans do. A human clicks a button once. An agent calls your endpoint in a loop until a condition is met.

Most rate limiters return 429 Too Many Requests with no guidance. The agent retries immediately, gets another 429, retries again. This creates a thundering herd.

from fastapi import FastAPI, Request, HTTPException
from fastapi.responses import JSONResponse
import time

app = FastAPI()

# Simple per-client rate limiter (use Redis in production)
rate_limit_store: dict[str, list[float]] = {}
REQUESTS_PER_MINUTE = 60


@app.middleware("http")
async def rate_limit_middleware(request: Request, call_next):
    client_id = request.headers.get("X-API-Key", request.client.host)
    now = time.time()
    window_start = now - 60

    # Clean old entries and count recent requests
    requests = rate_limit_store.get(client_id, [])
    requests = [t for t in requests if t > window_start]

    if len(requests) >= REQUESTS_PER_MINUTE:
        oldest_in_window = min(requests)
        retry_after = int(oldest_in_window + 60 - now) + 1

        return JSONResponse(
            status_code=429,
            headers={
                "Retry-After": str(retry_after),
                "X-RateLimit-Limit": str(REQUESTS_PER_MINUTE),
                "X-RateLimit-Remaining": "0",
                "X-RateLimit-Reset": str(int(oldest_in_window + 60)),
            },
            content={
                "error_code": "rate_limited",
                "message": f"Rate limit exceeded. {REQUESTS_PER_MINUTE} requests per minute.",
                "retryable": True,
                "retry_after_seconds": retry_after,
                "suggestion": f"Wait {retry_after} seconds before retrying.",
            },
        )

    requests.append(now)
    rate_limit_store[client_id] = requests

    response = await call_next(request)
    response.headers["X-RateLimit-Limit"] = str(REQUESTS_PER_MINUTE)
    response.headers["X-RateLimit-Remaining"] = str(
        REQUESTS_PER_MINUTE - len(requests)
    )
    return response

Four headers that agents need on every response:

| Header | Purpose |
| --- | --- |
| `Retry-After` | Seconds until the agent should retry. Agents that respect this stop hammering your server. |
| `X-RateLimit-Limit` | Total requests allowed per window. Agents use this to pace themselves. |
| `X-RateLimit-Remaining` | Requests left in the current window. Agents can preemptively slow down before hitting zero. |
| `X-RateLimit-Reset` | Unix timestamp when the window resets. Agents can schedule their next batch. |

The Retry-After header is an HTTP standard (RFC 9110, Section 10.2.3). Most HTTP client libraries and agent frameworks parse it automatically. If you only add one header, make it this one.
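An agent that reads `X-RateLimit-Remaining` and `X-RateLimit-Reset` can pace itself before ever seeing a 429. A minimal client-side sketch (the header names match the middleware above; the even-spread policy is one reasonable choice, not a standard):

```python
def pacing_delay(headers: dict, now: float) -> float:
    """Seconds to sleep before the next request: spread the remaining budget
    evenly over what's left of the window instead of bursting into a 429."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    reset = float(headers.get("X-RateLimit-Reset", str(now)))
    seconds_left = max(reset - now, 0.0)
    if remaining <= 0:
        return seconds_left  # budget exhausted: wait out the window
    return seconds_left / remaining


# 10 requests left, 30 seconds left in the window -> ~3s between requests
print(pacing_delay({"X-RateLimit-Remaining": "10", "X-RateLimit-Reset": "1030"}, now=1000.0))
```

Passing `now` explicitly keeps the function pure and testable; in a real loop you would pass `time.time()`.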

The Pattern Behind the Patterns

All five patterns share one principle: make the implicit explicit.

Human developers infer intent from context, read between the lines of error messages, and know not to retry a payment twice. AI agents do none of that. They operate on what your API explicitly tells them.

  • Explicit descriptions in your spec (Pattern 1)
  • Explicit idempotency guarantees (Pattern 2)
  • Explicit error recovery paths (Pattern 3)
  • Explicit confirmation gates (Pattern 4)
  • Explicit rate limit boundaries (Pattern 5)

The good news: every pattern here makes your API better for human consumers too. Structured errors, idempotency keys, and rate limit headers are things your human developers have been asking for. Agents just forced the issue.

Start with Pattern 3 (structured errors) and Pattern 5 (rate limiting headers). These require the least refactoring and prevent the most agent-caused incidents. Then add idempotency keys to your payment and mutation endpoints. The confirmation pattern is for high-risk operations where the cost of a wrong action is permanent.

Your API was designed for humans. The agents are already calling it. Make the implicit explicit before they learn the hard way.


Follow @klement_gunndu for more AI architecture content. We're building in public.

Top comments (4)

Victor García

Solid post. We're building a self-hosted AI OS where the agent consumes our API via skill files, so the context is a bit different — but two things are going straight into our backlog:

  1. Adding retryable: boolean to our error envelope. We have structured errors already but never tell the agent explicitly whether to retry.
  2. Idempotency key on POST /emails/send — the one endpoint where a duplicate would actually hurt.

"Make the implicit explicit" is a great framing.

klement Gunndu

The retryable boolean addition is exactly the kind of thing that saves agents from burning tokens on retry loops that will never succeed. Most error envelopes stop at the status code and message — telling the agent whether the failure is transient or permanent eliminates an entire class of wasted calls.

Idempotency keys on POST /emails/send is the right call too. That is the pattern where the cost of a duplicate is asymmetric — a duplicate database write is annoying, a duplicate email erodes user trust. Good instinct to prioritize it there first.

freerave

If we optimize APIs specifically for AI agents (shorter payloads, descriptive errors), aren't we creating a 'shadow API layer' alongside our main REST/GraphQL services? How do we maintain both without doubling the engineering effort?

klement Gunndu

Good question. In practice it doesn't have to be a separate API layer. The changes I described — structured error envelopes, idempotency keys, pagination tokens — benefit human consumers too. The difference is making implicit contracts explicit. A retryable field helps frontend retry logic just as much as it helps an agent. Where it diverges is response verbosity: agents often want flatter, denser payloads while UIs want nested display-ready structures. Content negotiation (Accept headers or query params like ?format=agent) on the same endpoints handles that without maintaining two separate services.