Bad documentation is not a writing problem. It is a prioritization problem.
Developers know what the code does. They wrote it. The issue is that explaining it to someone else — clearly, without assuming context, at the right level of detail — requires a different mode of thinking than building it did. So it gets deprioritized. The README stays sparse. The docstrings describe the "what" instead of the "why." The API docs assume the reader already knows the integration. The CHANGELOG reads like a git blame.
The result: teammates spend forty minutes reading source code for information that should have taken four minutes to find in a doc. New contributors file issues that are already answered somewhere, buried in prose no one told them to read. External developers bounce off your API because the error message says 422 Unprocessable Entity and the documentation says nothing at all.
Claude doesn't fix the prioritization problem — you still have to decide documentation is worth shipping. But it eliminates the time cost that makes documentation feel unaffordable. A good prompt plus two minutes of context produces documentation you'd be proud to merge. These 10 Claude prompts for documentation cover the types you write most often: READMEs, docstrings, API docs, changelogs, onboarding guides, and the formats developers forget exist until they need them badly. If you've read the code review prompts, the debugging prompts, or the system design prompts, you know what to expect: copy-paste ready templates with realistic example output and an explanation of why each one works.
Before You Start: How to Feed Claude Documentation Context
Documentation prompts fail for one reason: Claude doesn't know who the reader is or what they already know. A README for library maintainers looks nothing like a README for first-time users. API docs for internal teammates differ from API docs for external developers paying for access.
Four things to give Claude before any documentation prompt:
The target reader. Be specific. "A developer" is not a target reader. "A backend engineer evaluating this library for the first time, who knows Python but has never used this SDK" is. The reader determines vocabulary, assumed knowledge, and what counts as sufficient explanation.
What the thing does. Paste a two-sentence plain-English description of the code being documented. Don't make Claude infer the purpose from source code — tell it directly. "This class manages OAuth 2.0 token refresh, including retry logic and thread-safe caching" takes ten seconds to write and saves Claude from producing generic documentation that misses the point.
The output format you want. Markdown README, JSDoc comment, OpenAPI prose, plain English, Google-style docstring — these are different formats with different conventions. Specify the format explicitly or Claude will pick the one that looks most common in training data, which may not match your codebase.
The quality bar. "Good enough to ship without editing" is the standard for every prompt in this article. Tell Claude that explicitly. "Draft for review" produces hedged, incomplete output. "Ship-ready" produces output you can paste directly into the file.
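The four context items above can be captured once in a small reusable template. A minimal sketch — the field names and wording here are illustrative, not a required format:

```python
# Illustrative template for the four context items: reader, summary,
# output format, and quality bar. Adjust wording to taste.
DOC_PROMPT_TEMPLATE = """\
Target reader: {reader}
What it does: {summary}
Output format: {output_format}
Quality bar: ship-ready — good enough to merge without editing.

Task: {task}
"""

def build_doc_prompt(reader: str, summary: str, output_format: str, task: str) -> str:
    """Assemble the context block that precedes any documentation prompt."""
    return DOC_PROMPT_TEMPLATE.format(
        reader=reader, summary=summary, output_format=output_format, task=task
    )

prompt = build_doc_prompt(
    reader="a backend engineer evaluating this library for the first time",
    summary="OAuth 2.0 token refresh with retry logic and thread-safe caching",
    output_format="Google-style docstring",
    task="Write docstrings for the TokenManager class.",
)
```

Keeping the template in your repo means every documentation prompt starts from the same four fields instead of being reinvented per request.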
The 10 Prompts
1. Write a README from Scratch
The Prompt
Write a README for this project. Target reader: [describe the reader — e.g. "a backend developer evaluating this library for the first time"]. The README should include: a one-paragraph description of what it does and why someone would use it, a prerequisites section, a quickstart (working code example — not pseudocode), the three most important configuration options, a brief explanation of the project structure, and a link placeholder for full docs. Tone: direct and practical, no marketing language. Output: ship-ready markdown.
Project name: [name]
What it does: [2-3 sentence plain-English description]
Language/framework: [e.g. Python 3.11, FastAPI]
Key dependencies: [list]
Main entry point or usage pattern: [describe or paste]
What it does
Produces a complete, opinionated README from minimal input. The section list prevents the two most common README failures: writing a vague description with no quickstart, or writing a quickstart with no context. The "no marketing language" constraint stops Claude from generating the vapid "blazing-fast, production-ready" opener that makes developers close the tab.
Example output
# rate-shield
Rate limiting middleware for FastAPI. Drop it in front of any route to enforce
per-user, per-IP, or per-API-key request limits with Redis-backed counters and
automatic sliding window resets.
Use it when you need rate limiting that survives server restarts and works
across multiple API instances without coordinating state yourself.
## Prerequisites
- Python 3.11+
- Redis 6.2+ (local or managed — Upstash, Redis Cloud)
- FastAPI 0.100+
## Quickstart
```bash
pip install rate-shield
```

```python
from fastapi import FastAPI
from rate_shield import RateLimiter

app = FastAPI()
limiter = RateLimiter(redis_url="redis://localhost:6379")

@app.get("/search")
@limiter.limit("100/minute")
async def search(query: str):
    return {"results": []}
```
That's it. The decorator enforces 100 requests per minute per IP.
Exceeding the limit returns a 429 with a Retry-After header.
## Configuration
REDIS_URL Redis connection string (default: redis://localhost:6379)
KEY_PREFIX Namespace prefix for Redis keys (default: "rl:")
LIMIT_EXCEEDED_STATUS HTTP status on limit hit (default: 429, can set 503)
## Project Structure
rate_shield/
__init__.py Public API — RateLimiter class and limit decorator
backends.py Redis backend and in-memory fallback for testing
middleware.py ASGI middleware layer
exceptions.py RateLimitExceeded with Retry-After calculation
tests/ Full test suite, runnable with pytest
## Full Docs
[Link to docs]
Why it works
Named sections force a complete README, not just a description. Most AI-generated READMEs omit the quickstart or bury it after three paragraphs of context. Specifying "working code example — not pseudocode" is the critical constraint: it produces something a developer can actually run in under five minutes.
Pro tip — If your project has a companion CLI, add "include a one-line CLI invocation example in the quickstart" to the prompt. Showing CLI and library usage together lets developers pick their entry point immediately.
2. Write Docstrings for a Function or Class
The Prompt
Write docstrings for this [function / class]. Style: [Google | NumPy | JSDoc | plain reStructuredText]. Include: a one-sentence summary, parameter descriptions with types, return value with type, any exceptions raised and when, and one usage example for any parameter combination that isn't obvious. Do not restate the implementation — document the contract (what, not how). Ship-ready.
Code:
[paste function or class]
Context (optional but improves output):
Why this exists: [1 sentence]
Non-obvious behavior to document: [describe any gotchas, edge cases, or constraints]
What it does
Produces contract-level docstrings rather than implementation summaries. The distinction matters: "Iterates through items and appends to results list" documents the implementation. "Returns items matching the predicate, preserving insertion order. Returns an empty list if no items match — never returns None" documents the contract. The first is useless. The second is what developers read when they don't want to open the source file.
Example output
```python
def paginate_results(
    items: list[dict],
    page: int,
    per_page: int = 20,
    *,
    max_per_page: int = 100,
) -> dict:
    """Return a paginated slice of items with metadata.

    Args:
        items: The full list to paginate. Must be pre-sorted — this function
            does not sort. Empty lists return a valid response with zero results.
        page: 1-indexed page number. Page 1 returns the first `per_page` items.
            Requesting a page beyond the last page returns an empty results list,
            not an error.
        per_page: Number of items per page. Clamped to `max_per_page` silently
            if exceeded — callers should not rely on receiving more than
            `max_per_page` items regardless of what they pass.
        max_per_page: Upper bound on items per page. Defaults to 100.

    Returns:
        dict with keys:
            results (list[dict]): Items for this page. Empty list on out-of-range page.
            total (int): Total number of items across all pages.
            page (int): Current page (mirrors input).
            pages (int): Total number of pages given per_page.
            has_next (bool): True if a next page exists.
            has_prev (bool): True if a previous page exists.

    Raises:
        ValueError: If page < 1 or per_page < 1.

    Example:
        >>> paginate_results(items, page=2, per_page=10)
        {"results": [...], "total": 47, "page": 2, "pages": 5, ...}

        # Requesting beyond the last page returns empty results, not an error:
        >>> paginate_results(items, page=99, per_page=10)
        {"results": [], "total": 47, "page": 99, "pages": 5, "has_next": False, ...}
    """
```
Why it works
"Document the contract, not the implementation" is the constraint that separates useful docstrings from noise. Asking for exceptions and when they're raised forces Claude to think about failure cases, not just the happy path. The usage example requirement catches non-obvious edge cases — like page=99 returning empty results rather than an error — that developers discover by reading tests if they're lucky and by hitting a bug if they're not.
Pro tip — Add "flag any parameter names that are ambiguous or any behavior that surprised you" to the prompt. Claude occasionally surfaces naming issues or implicit behavior that the original author stopped noticing because it became familiar.
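To see the contract from the other side, here is one implementation sketch that satisfies everything the docstring above promises — the silent clamping, the empty list on out-of-range pages, and the `ValueError` cases. The example documents only the contract; this function body is an assumption:

```python
import math

def paginate_results(
    items: list[dict],
    page: int,
    per_page: int = 20,
    *,
    max_per_page: int = 100,
) -> dict:
    # Contract: reject invalid inputs loudly, clamp per_page silently.
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be >= 1")
    per_page = min(per_page, max_per_page)
    total = len(items)
    pages = math.ceil(total / per_page) if total else 0
    start = (page - 1) * per_page
    return {
        "results": items[start:start + per_page],  # empty slice past the last page
        "total": total,
        "page": page,
        "pages": pages,
        "has_next": page < pages,
        "has_prev": page > 1,
    }
```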
3. Write API Endpoint Documentation
The Prompt
Write documentation for this API endpoint. Target reader: [e.g. "an external developer integrating this API for the first time"]. Include: a one-sentence description of what the endpoint does, the HTTP method and path, all request parameters (path, query, body) with types and whether required or optional, the response schema with field descriptions, all possible error codes with plain-English explanations of what causes each one, an authentication note, and a complete working curl example. Output: markdown, ship-ready.
Endpoint: [HTTP method + path]
What it does: [plain English description]
Auth method: [e.g. Bearer token in Authorization header]
Request schema: [paste or describe]
Response schema: [paste or describe]
Error codes this endpoint can return: [list with causes]
What it does
Produces endpoint documentation an external developer can actually integrate from — not just the happy path schema. The error code section is the most important part. 400 Bad Request means nothing. "400 — Returned when amount is negative or when currency is not an ISO 4217 code. Check the errors array in the response for field-level details" is documentation that saves a support ticket.
Example output
## POST /v1/charges
Create a charge against a stored payment method. Returns the charge object
immediately — use webhooks to track async state changes after creation.
**Authentication:** Bearer token in `Authorization` header. Requires the
`charges:write` scope.
### Request
**Path parameters:** None
**Request body** (`application/json`):
| Field | Type | Required | Description |
|---------------|---------|----------|-------------|
| amount | integer | Yes | Charge amount in the smallest currency unit (e.g. cents for USD). Must be > 0. Max: 99999999. |
| currency | string | Yes | Three-letter ISO 4217 currency code (e.g. "usd", "eur"). Case-insensitive. |
| payment_method| string | Yes | ID of a stored payment method (`pm_...`). Must belong to the authenticated account. |
| description | string | No | Freeform note attached to the charge. Max 500 characters. Not shown to the payer. |
| idempotency_key | string | No | If provided, duplicate requests with the same key return the original charge rather than creating a new one. Recommended for all charge creation. |
### Response — 201 Created
```json
{
  "id": "ch_abc123",
  "amount": 4999,
  "currency": "usd",
  "status": "pending",
  "payment_method": "pm_xyz789",
  "created_at": "2026-04-17T10:32:00Z",
  "description": null
}
```
`status` is `pending` on creation. It transitions to `succeeded` or `failed`
asynchronously. Subscribe to the `charge.succeeded` and `charge.failed` webhook
events rather than polling.
### Errors
| Code | Cause |
|------|-------|
| 400 | `amount` is zero or negative, `currency` is not a valid ISO 4217 code, or a required field is missing. The `errors` array in the response body identifies the offending fields. |
| 401 | Missing or invalid Bearer token. |
| 403 | Token lacks `charges:write` scope. |
| 404 | `payment_method` ID not found or does not belong to this account. |
| 409 | `idempotency_key` was used in a previous request with different parameters. |
| 422 | `payment_method` exists but cannot be charged (expired card, insufficient funds, blocked by issuer). Check `decline_code` in the response. |
### Example
```bash
curl -X POST https://api.example.com/v1/charges \
  -H "Authorization: Bearer sk_live_abc..." \
  -H "Content-Type: application/json" \
  -d '{
    "amount": 4999,
    "currency": "usd",
    "payment_method": "pm_xyz789",
    "idempotency_key": "order_12345_attempt_1"
  }'
```
Why it works
The error code section with plain-English causes is the constraint that matters most. Without it, Claude produces schema documentation that stops at 200 OK. With it, Claude produces the table developers actually consult when the integration breaks at 2am. The curl example with realistic values (not "string" placeholders) is the difference between documentation developers copy-paste and documentation they have to mentally translate first.
Pro tip — Add "note any fields that behave differently in test mode vs. live mode" if your API has environment-dependent behavior. Stripe's documentation does this well; most APIs don't, and the omission causes hours of confusion.
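The error table also doubles as a spec for client-side validation. A sketch of the 400 checks the example documents — the endpoint, field names, and causes come from the fictional example above, and the ISO 4217 set here is a tiny illustrative subset:

```python
# Mirrors the documented 400 causes: missing required fields, non-positive
# amount, invalid currency code. Returns field-level messages, like the
# `errors` array the example docs describe.
ISO_4217_SAMPLE = {"usd", "eur", "gbp", "jpy"}  # illustrative subset only

def validate_charge_request(body: dict) -> list[str]:
    """Return field-level problems; an empty list means the request is well-formed."""
    errors = []
    for field in ("amount", "currency", "payment_method"):
        if field not in body:
            errors.append(f"{field}: required field is missing")
    amount = body.get("amount")
    if isinstance(amount, int) and amount <= 0:
        errors.append("amount: must be > 0")
    currency = body.get("currency", "")
    if currency and currency.lower() not in ISO_4217_SAMPLE:
        errors.append("currency: not a valid ISO 4217 code")
    return errors
```

Running checks like these before sending the request turns a round-trip 400 into an immediate local error.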
4. Write a CHANGELOG Entry
The Prompt
Write a CHANGELOG entry for this release. Format: [Keep a Changelog | conventional | plain]. Target reader: a developer upgrading from the previous version. For each change: classify it as Added, Changed, Deprecated, Removed, Fixed, or Security. Write the entry from the user's perspective — what changed for them, not what the implementation did. Flag any breaking changes with a migration note. Ship-ready.
Version: [e.g. v2.3.0]
Release date: [e.g. 2026-04-17]
Changes (git log or summary):
[paste git log --oneline output or describe changes]
Previous version: [e.g. v2.2.1]
Breaking changes (if any): [describe]
What it does
Turns a raw git log or commit list into a CHANGELOG that answers the one question developers actually ask: "What do I need to change in my code when I upgrade?" User-perspective writing ("The --output flag now accepts relative paths" instead of "Refactored path resolution in OutputHandler") is the critical difference between a CHANGELOG that saves time and one developers skip entirely.
Example output
## [2.3.0] — 2026-04-17
### Breaking Changes
**`Client.connect()` no longer accepts positional arguments.**
Previously: `Client.connect("host", 5432, "dbname")`
Now: `Client.connect(host="host", port=5432, dbname="dbname")`
Keyword-only arguments were enforced to prevent silent misconfiguration when
argument order changed in a previous internal refactor. Update all call sites
before upgrading.
### Added
- **Automatic reconnect on connection drop.** The client now retries dropped
connections up to 3 times with exponential backoff before raising
`ConnectionError`. Configure with `max_retries` and `retry_delay` parameters.
- **`Client.ping()` method** for health checks. Returns `True` if the server is
reachable, `False` otherwise — does not raise.
- **Connection pool metrics via `Client.pool_stats()`.** Returns active, idle,
and waiting connection counts. Useful for debugging pool exhaustion.
### Changed
- `Client.query()` now returns `QueryResult` objects instead of raw tuples.
`QueryResult` has a `.rows` attribute that behaves identically to the
previous tuple return. Accessing `.rows` requires no code changes for most
callers; direct tuple unpacking will break.
- Default connection timeout increased from 5s to 30s to match common
cloud database cold-start times.
### Fixed
- Fixed race condition in connection pool that caused `AttributeError` crashes
  under high concurrency when connections were acquired and released within the
  same event loop tick.
- Fixed `Client.execute_many()` silently swallowing errors on batch item
failure. Failed items now raise `BatchExecutionError` with per-item results.
### Security
- Updated `cryptography` dependency to 42.0.8 to address CVE-2026-12345
(key material exposure in certain AES-GCM operations). Upgrade recommended
for all users.
Why it works
"Write from the user's perspective" reframes every entry from an implementation description to a behavior description. The migration note format for breaking changes — old syntax vs. new syntax, side by side — is what developers need to upgrade quickly. Without it, they have to read the source code change to understand what their code needs to do differently.
Pro tip — Add "sort entries within each section by impact — highest blast radius first" to the prompt. Developers scan CHANGELOGs from the top; breaking changes and security fixes should never be buried below minor additions.
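If your input really is `git log --oneline`, a small pre-processing pass can bucket commits into Keep a Changelog sections before you paste them into the prompt, so Claude classifies less and rewrites more. The prefix-to-section mapping below follows common conventional-commit practice but is not a standard:

```python
# Bucket `git log --oneline` output by conventional-commit prefix.
# Unrecognized prefixes fall back to "Changed" for a human to re-sort.
SECTION_BY_PREFIX = {
    "feat": "Added",
    "fix": "Fixed",
    "perf": "Changed",
    "refactor": "Changed",
    "security": "Security",
    "deprecate": "Deprecated",
    "remove": "Removed",
}

def bucket_commits(oneline_log: str) -> dict[str, list[str]]:
    sections: dict[str, list[str]] = {}
    for line in oneline_log.strip().splitlines():
        sha, _, message = line.partition(" ")
        # "feat(core): add ping" -> prefix "feat"
        prefix = message.split(":", 1)[0].split("(")[0].lower()
        section = SECTION_BY_PREFIX.get(prefix, "Changed")
        sections.setdefault(section, []).append(message)
    return sections
```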
5. Write an Onboarding Guide for New Contributors
The Prompt
Write an onboarding guide for a new contributor joining this project. Target reader: [e.g. "a mid-level backend engineer who knows Python but has never contributed to an open-source project" or "a new employee joining the platform team"]. Include: how to set up the local development environment (exact commands, no "install the usual stuff"), how to run tests, where to find the most important code (top 5 files/directories to read first), how the deployment pipeline works (briefly), and one common mistake new contributors make and how to avoid it. Tone: welcoming but direct. Ship-ready markdown.
Project: [name]
Stack: [languages, frameworks, infra]
Repo structure: [paste ls output or describe top-level directories]
Key development commands: [e.g. make dev, make test, make lint]
Common gotcha for new contributors: [describe if you know one]
What it does
Produces a day-one guide that gets a new contributor from zero to first commit without needing to ask questions in Slack. The "exact commands" constraint eliminates the single most common onboarding doc failure: assuming the reader knows what "install dependencies" means in your stack. The "common mistake" section is the most time-saving part — it surfaces the thing every senior developer knows to avoid and no one thought to write down.
Example output
# Contributing to payments-service
## Local setup
These steps get you from a fresh checkout to a running local instance.
Prerequisites: Docker Desktop 4.x, Node 20+, make. Verify:
docker --version
node --version
make --version
1. Clone and install:
git clone https://github.com/acme/payments-service
cd payments-service
npm install
2. Start the local stack (Postgres + Redis in Docker):
make dev-deps
This pulls two containers. Takes 2-3 minutes on first run.
Subsequent starts: under 10 seconds.
3. Seed the database:
make db-seed
4. Start the dev server:
make dev
Server runs on http://localhost:3001. Changes hot-reload automatically.
## Running tests
make test # Full suite (~90 seconds)
make test-unit # Unit tests only (~8 seconds) — run this constantly
make test-e2e # End-to-end tests against local stack (~4 minutes)
Tests require the local stack running (step 2 above).
## Where to start reading
Read these five files before touching anything else:
1. src/core/payment-processor.ts — the main entry point for all payment operations
2. src/adapters/ — one file per payment provider (Stripe, PayPal, etc.)
3. src/types/index.ts — shared types used across the whole service
4. tests/fixtures/ — the test data shapes, which map to the real API contracts
5. docs/architecture.md — a 10-minute read that explains why the adapter pattern exists
## Deployment
PRs merge to `main`. CI runs automatically (unit + integration tests, ~8 minutes).
Merging to `main` triggers a deploy to staging automatically. Production deploys
are manual: a team lead runs `make deploy-prod` after staging validation. You won't
need to run this unless you're on the core team.
## Common mistake: modifying Stripe fixtures directly
The test fixtures in `tests/fixtures/stripe/` are copies of real Stripe API responses.
If you need a new fixture, run `make record-fixtures` with a real Stripe test key —
don't hand-edit existing ones. Hand-edited fixtures cause test passes that hide real
integration bugs. Ask in #payments-eng before adding new fixture types.
Why it works
"Exact commands, no 'install the usual stuff'" is the constraint that makes onboarding docs actually work. Every documentation team thinks their setup is obvious until a new hire spends two hours on it. The "common mistake" section is the institutional knowledge that typically lives only in Slack threads and the memory of whoever's been around longest.
Pro tip — Add "identify any step that commonly fails on macOS ARM / M-series chips" if your team is mixed Mac. Silicon-specific gotchas account for a disproportionate share of new contributor setup time.
6. Audit Existing Documentation for Gaps
The Prompt
Audit this documentation for gaps and inaccuracies from the perspective of [describe target reader]. Identify: missing information a reader needs to use this effectively (not nice-to-haves — things they'll actually need), sections that assume knowledge the reader doesn't have, steps that are ambiguous or could be interpreted multiple ways, error scenarios with no guidance on what to do, and anything that is likely outdated or contradicts common behavior for this stack. Do not rewrite — produce a prioritized gap list with severity (blocks the reader / slows the reader / minor) and a one-sentence fix note for each.
Documentation to audit:
[paste documentation]
Target reader: [describe]
Stack/context: [language, framework, what this docs covers]
What it does
Produces a gap analysis from the reader's perspective rather than the author's. The severity levels force prioritization — a "blocks the reader" gap (missing auth instructions) is a different priority than a "minor" gap (inconsistent formatting). The "do not rewrite" constraint stops Claude from producing a full rewrite when you just needed a diagnostic.
Example output
Audit: Node.js SDK quickstart documentation
Target reader: Backend developer integrating for the first time
BLOCKS THE READER (fix before publishing):
1. Auth instructions missing entirely.
The quickstart calls sdk.init() but never explains where the API key comes
from, how to get one, or where to store it safely. The reader cannot proceed.
Fix: Add a "Get your API key" step before the init() call, with a link to
the dashboard and a note to use environment variables, not hardcoded values.
2. No error handling in any code examples.
Every example shows the happy path. The reader has no model for what to do
when a call fails. sdk.charge() throws on network errors — undocumented.
Fix: Add a try/catch block to at least the first major example with a note
on what error types to expect.
SLOWS THE READER (fix in next pass):
3. "Install dependencies" step has no command.
"Install the SDK" is listed as step 1 but no npm/yarn command is shown.
Obvious to the author, not obvious to a developer coming from Python.
Fix: `npm install @acme/sdk` on its own line.
4. Response object fields undocumented.
The quickstart shows `response.id` being used but never explains what other
fields exist. Developers will open the source code to find this.
Fix: Add a brief "What you get back" section or link to the API reference.
5. Node version requirement unstated.
The SDK uses top-level await in examples, which requires Node 14.8+.
Readers on older Node will get a cryptic syntax error.
Fix: Add "Requires Node 14.8+" to prerequisites.
MINOR:
6. Inconsistent capitalization of "API Key" vs "API key" vs "api key" throughout.
7. The "Next steps" section links to docs that no longer exist (404s).
Why it works
Auditing from the reader's perspective produces different gaps than auditing as the author. Authors know the context; readers don't. The severity tiers prevent a flat list that forces the author to triage everything themselves. The "blocks the reader" framing identifies what actually stops someone from using the product — which is the only gap that matters urgently.
Pro tip — Paste two or three real support questions alongside the documentation. Claude will identify which documentation gaps caused each question, giving you a direct line from gap to user pain.
7. Rewrite Dense Technical Prose to Be Readable
The Prompt
Rewrite this technical documentation to be readable without losing accuracy. Rules: sentence length max 20 words, one idea per sentence, no passive voice, no nominalizations (don't turn verbs into nouns — use "configure" not "the configuration of"), no filler phrases ("in order to", "it should be noted that", "please be aware"). Keep all technical accuracy. Do not simplify the content — simplify the delivery. Target reading level: an experienced developer who is skimming, not studying. Ship-ready.
Original text:
[paste]
What it does
Rewrites documentation that is technically accurate but unreadable — the kind produced by engineers who are writing to cover every edge case rather than to communicate quickly. The named anti-patterns (passive voice, nominalizations, filler phrases) are the specific constructs that make technical prose dense. Naming them explicitly prevents Claude from just paraphrasing the original.
Example output
Original:
"It is important to note that the configuration of the retry mechanism should be
performed prior to the initialization of the client object, as the parameters
that have been specified will be utilized during the establishment of the initial
connection, and any modifications to these parameters following initialization
will not be reflected in the connection behavior until the client has been
reinitialized."
Rewritten:
Configure retry settings before initializing the client. The client reads these
parameters once, at startup. Changes after initialization have no effect until
you reinitialize.
If you need to change retry behavior at runtime, destroy the client and create
a new one with the updated config.
Why it works
The named anti-patterns act as explicit targets. "No nominalizations" is the single highest-impact constraint for technical writing — turning "the implementation of" back into "implementing" removes an entire layer of abstraction that slows reading. The "experienced developer who is skimming, not studying" frame stops Claude from over-explaining. Developers read documentation differently than students read textbooks.
Pro tip — Run this on your onboarding docs and your error messages, not just your API reference. Dense prose in an error message is the worst place for it — the developer is already stressed.
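The prompt's rules are mechanical enough to pre-check before involving Claude at all. A rough heuristic linter — the sentence splitting is naive and the filler list is the illustrative subset from the prompt, not exhaustive:

```python
import re

FILLER_PHRASES = ("in order to", "it should be noted that", "please be aware")

def readability_flags(text: str, max_words: int = 20) -> list[str]:
    """Flag filler phrases and sentences over max_words. Heuristic only."""
    flags = []
    lowered = text.lower()
    for phrase in FILLER_PHRASES:
        if phrase in lowered:
            flags.append(f"filler phrase: {phrase!r}")
    # Naive split on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = sentence.split()
        if len(words) > max_words:
            flags.append(f"long sentence ({len(words)} words): {sentence[:40]}...")
    return flags
```

Run it over a docs directory in CI and you get a cheap signal for which files to feed into this prompt first.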
8. Write Inline Code Comments
The Prompt
Write inline comments for this code. Rule: explain the "why," not the "what." Do not comment obvious operations. Comment: non-obvious decisions (why this algorithm, why this limit, why this order), constraints that aren't visible from the code (external API behavior, legal limits, performance requirements), known footguns or edge cases, and anything a developer would need to know before modifying this code. Style: [language-appropriate — e.g. # for Python, // for JS]. No block comments for things that belong in a docstring. Ship-ready.
Code:
[paste]
Context (what would surprise a developer reading this for the first time):
[describe any non-obvious decisions or constraints]
What it does
Produces comments that justify decisions rather than narrate code. "Loop through users" on a for loop wastes a line. "Process in batches of 50 — Mailgun's bulk send API rejects payloads larger than 1MB" on a batch loop is worth reading. The "what would surprise a new developer" context field is the most important input — it tells Claude exactly where the non-obvious decisions live.
Example output
```python
def send_notifications(user_ids: list[str]) -> dict:
    results = {"sent": 0, "failed": 0, "skipped": 0}

    # Process in chunks of 100. The push notification provider (Firebase FCM)
    # accepts max 500 tokens per request, but 100 keeps individual request
    # payloads under the 1.5MB limit when notification bodies are long.
    for chunk in chunked(user_ids, 100):
        # Filter unsubscribed users here, not before chunking. The subscription
        # check is a DB call — doing it per-user before batching causes N+1 queries.
        active = [uid for uid in chunk if subscription_cache.is_active(uid)]
        results["skipped"] += len(chunk) - len(active)
        if not active:
            continue

        try:
            response = fcm_client.send_multicast(
                tokens=active,
                notification=build_notification(),
                # android_config TTL is 0 — deliver now or drop. These are
                # time-sensitive alerts; stale delivery is worse than no delivery.
                android_config=AndroidConfig(ttl=0),
            )
        except FirebaseError as e:
            # FCM errors are per-request, not per-token. Log and continue —
            # failing one chunk shouldn't block the rest of the batch.
            logger.error("FCM batch failed", chunk_size=len(active), error=str(e))
            results["failed"] += len(active)
            continue

        results["sent"] += response.success_count
        results["failed"] += response.failure_count

    return results
```
Why it works
"Explain the why, not the what" is the single rule that separates useful comments from noise. The constraint list (non-obvious decisions, external constraints, footguns) gives Claude a mental checklist to apply to every block of code. Without it, Claude comments the obvious operations and misses the non-obvious ones, which is exactly backwards.
Pro tip — Add "note any line that should not be modified without reading [external documentation or internal doc]" for code that interfaces with external systems. Comments like "# Do not change this header name — AWS SigV4 requires lowercase" save hours.
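The example assumes a `chunked` helper. For completeness, a minimal version — on Python 3.12+ you can use `itertools.batched` instead, which yields tuples rather than lists:

```python
from itertools import islice
from typing import Iterable, Iterator

def chunked(items: Iterable, size: int) -> Iterator[list]:
    """Yield successive lists of at most `size` items from any iterable."""
    iterator = iter(items)
    # islice consumes up to `size` items per pass; an empty list means done.
    while chunk := list(islice(iterator, size)):
        yield chunk
```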
9. Write an Architecture Decision Record (ADR)
The Prompt
Write an Architecture Decision Record (ADR) for this decision. Structure:
Title: [short imperative phrase — e.g. "Use PostgreSQL for session storage"]
Status: [Proposed | Accepted | Deprecated | Superseded]
Context: What situation forced a decision? (3-5 sentences)
Decision: What did we decide? (1-2 sentences, direct)
Alternatives considered: What else was on the table and why rejected? (list, 2-3 alternatives)
Consequences: What becomes easier? What becomes harder? What new risks does this introduce?
Revisit trigger: What specific condition would cause us to reconsider?
Decision to document: [describe the decision and reasoning]
Why alternatives were rejected: [brief notes]
Known downsides accepted: [list]
What it does
Produces a complete ADR that future engineers can use to understand a decision without reading the Slack thread it was buried in. The "revisit trigger" section is the most underrated — it makes the decision reversible by design rather than permanent by default, and it forces the author to think about what conditions would make the decision wrong.
Example output
# Use Background Job Queue for Webhook Delivery
**Status:** Accepted
## Context
Our webhook delivery currently runs synchronously within the request that triggers
it. If the receiving server is slow or unreachable, our server holds the thread for
up to 30 seconds. Under load, this causes thread exhaustion and cascading slowness
across unrelated endpoints. Three incidents in the past six weeks have been traced
to a single customer's webhook endpoint going down and holding our threads hostage.
## Decision
Move all webhook delivery to a background job queue (BullMQ backed by Redis). The
triggering request enqueues the job and returns immediately. A worker pool handles
delivery asynchronously with retry logic.
## Alternatives Considered
**Fire-and-forget with Promise (no queue):** Faster to implement, but no retry
logic, no visibility into failures, and in Node 18 an unhandled rejection in a
background Promise crashes the process unless explicitly handled. Rejected:
trades correctness for convenience.
**Dedicated webhook microservice:** Correct architecture at scale, but introduces
a new service, inter-service auth, distributed tracing requirements, and a deploy
dependency. Rejected: operational overhead exceeds the problem's current size.
**Synchronous with timeout reduction:** Capping the webhook timeout at 3s reduces
thread hold time but doesn't eliminate it. Still fails under load. Doesn't add
retry logic. Rejected: partial mitigation, not a solution.
## Consequences
Easier:
- Endpoints that trigger webhooks no longer block on delivery latency
- Failed deliveries are automatically retried (3x, exponential backoff)
- Delivery attempts and failures are visible in the job dashboard
Harder:
- Webhook delivery is now eventually consistent — callers receive `202 Accepted`,
not confirmation of delivery
- Local development requires Redis running
- Debugging delivery failures requires checking the job queue, not just logs
New risks:
- Redis becomes a dependency for webhook delivery; Redis outage means no webhooks
- Queue depth can grow without bound if the worker pool falls behind
## Revisit Trigger
Reconsider if: Redis operational cost exceeds $200/month before reaching 10k
webhooks/day, or if the product requires synchronous delivery confirmation
(certain financial compliance use cases). At that point, evaluate a dedicated
webhook service with its own SLA.
Why it works
ADRs are written infrequently and read constantly — often years later when the context is gone. The revisit trigger is the constraint that makes the document most useful: it tells the reader not just what was decided, but under what conditions the decision expires. The "alternatives considered + why rejected" section prevents future engineers from relitigating settled decisions without the context of what was already evaluated.
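The queue-based design the example ADR decides on — enqueue, return `202 Accepted` immediately, deliver asynchronously with retries — can be sketched end-to-end. This is an illustrative in-memory stand-in, not the real implementation: actual code would use BullMQ's `Queue` and `Worker` backed by Redis, and the function names here are hypothetical.

```javascript
// In-memory stand-in for the job queue in the example ADR.
// Real code: BullMQ Queue/Worker backed by Redis.
const queue = [];

// The triggering request enqueues the job and returns immediately,
// so it never blocks on the receiver's latency.
function enqueueWebhook(url, payload) {
  queue.push({ url, payload });
  return { status: 202 }; // 202 Accepted: queued, not yet delivered
}

// Worker: attempt delivery up to 3 times with exponential backoff.
async function processNext(deliver) {
  const job = queue.shift();
  if (!job) return null;
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      await deliver(job.url, job.payload);
      return { delivered: true, attempt };
    } catch (err) {
      if (attempt === 3) return { delivered: false, attempt };
      // Backoff doubles each attempt (shortened here for illustration).
      await new Promise((r) => setTimeout(r, 2 ** (attempt - 1) * 10));
    }
  }
}
```

Note how the sketch encodes the ADR's consequences directly: the caller gets `202 Accepted` (eventual consistency), and retry behavior lives in the worker, not the request path.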
Pro tip — If your team uses a specific ADR template or wiki format, paste it into the prompt. Claude will match your format exactly. This prompt works equally well for decisions already made and decisions you're about to make — run it in "Proposed" status as a writing exercise before finalizing.
10. Write Error Message Documentation
The Prompt
Write documentation for these error messages. For each error: state the error code and message exactly as it appears, explain in plain English what state caused it (not just what the message says), list the two or three most common causes, and give the developer a clear resolution path — what to check, what to change, and what to do if none of those work. Do not just paraphrase the error message. Ship-ready markdown, formatted as a reference table + expandable details per error.
Errors to document:
[list error codes + messages, or paste from your error definitions file]
Context:
What system produces these errors: [describe briefly]
Target reader: [developer integrating this API / user of this SDK / etc.]
What it does
Turns error codes into actionable troubleshooting guides. The constraint "do not just paraphrase the error message" is what makes this prompt valuable — a documentation entry that says "INVALID_TOKEN: The token is invalid" adds nothing the developer didn't already know. "INVALID_TOKEN: The JWT signature verification failed. Common causes: wrong signing secret (check you're using the live key, not the test key), token generated by a different environment, or token expired and re-signed with a rotated key" saves 40 minutes of debugging.
Example output
## Error Reference
### AUTH_001 — Invalid or expired token
**Message:** `Authentication failed: token is invalid or has expired`
**What caused it:** The JWT presented in the `Authorization` header failed
signature verification, or the `exp` claim is in the past.
**Most common causes:**
1. **Wrong key environment.** You're using a test-mode key against the live API,
or vice versa. Live keys start with `sk_live_`; test keys start with `sk_test_`.
2. **Token expired.** JWTs from this API expire after 1 hour. If you're caching
tokens, check your cache TTL — it should be shorter than 1 hour to account
for clock skew.
3. **Rotated signing secret.** If you rotated your API secret in the dashboard
after generating the token, tokens signed with the old secret are immediately
invalid.
**Resolution:** Generate a new token. If the error recurs immediately after
generating a fresh token, verify the key environment matches the API endpoint
you're hitting.
**Still stuck?** Open a support ticket with your `request_id` (in the response
header as `X-Request-ID`). Do not include the token itself in the ticket.
---
### RATE_001 — Rate limit exceeded
**Message:** `Rate limit exceeded: too many requests`
**What caused it:** Your integration has exceeded the allowed request rate for
your current plan. The `Retry-After` response header contains the number of
seconds until your rate limit resets.
**Most common causes:**
1. **Retry loop without backoff.** If your error handling retries immediately on
any error, a single 5xx response can trigger a retry storm that trips the rate
limit.
2. **Parallel requests from multiple processes sharing one key.** If you run
multiple workers or servers, they share the same rate limit bucket per API key.
3. **Batch operation sending one request per item.** Use the `/batch` endpoints
for bulk operations — they count as one request regardless of item count.
**Resolution:** Read the `Retry-After` header and wait before retrying. Implement
exponential backoff. If you need a higher rate limit, contact support — limits
are adjustable for production accounts.
Why it works
The "most common causes" pattern is the critical structure. Developers reading error documentation are already frustrated — they want to scan three options and try the most likely one, not read an essay. Numbering by likelihood (most common first) is a small UX choice that significantly reduces time-to-resolution for the average reader. The "still stuck" fallback with support escalation instructions closes the loop entirely.
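The resolution path in the RATE_001 entry — honor the server's `Retry-After` header when present, otherwise fall back to exponential backoff — can be sketched as a small helper. The function name and the 30-second cap are illustrative choices, not part of the documented API:

```javascript
// Delay before retrying a rate-limited request.
// Prefers the server-provided Retry-After (in seconds); otherwise
// uses capped exponential backoff: 1s, 2s, 4s, ... up to 30s.
function backoffDelayMs(attempt, retryAfterHeader) {
  const retryAfter = Number(retryAfterHeader);
  if (Number.isFinite(retryAfter) && retryAfter > 0) {
    return retryAfter * 1000; // the server told us exactly how long to wait
  }
  return Math.min(1000 * 2 ** (attempt - 1), 30000);
}
```

Trusting `Retry-After` first matters: guessing a backoff shorter than the server's reset window just burns the remaining budget on requests guaranteed to fail.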
Pro tip — Pair this with Prompt #6 (audit existing documentation) and ask Claude specifically to identify error messages in the codebase that have no documentation. Every undocumented error is a future support ticket.
The Bigger Principle
Documentation debt compounds the same way technical debt does. One sparse README means one hour of onboarding questions. Ten sparse READMEs, undocumented errors, and no ADRs mean a codebase where institutional knowledge lives exclusively in the heads of whoever was there at the beginning.
The pattern across all 10 prompts is the same as the rest of this series: context + constraint + format. Context means the target reader and what the code actually does. Constraint means the rule that prevents Claude from defaulting to generic output — "document the contract, not the implementation," "explain the why not the what," "do not paraphrase the error message." Format means the specific structure that makes output paste-ready.
None of these prompts require reading ten pieces of documentation first. They require two minutes of context — who's the reader, what does this do, what's non-obvious. That's the trade: two minutes of context in exchange for documentation you'd be proud to ship.
Start with whatever type of documentation you've been putting off the longest. It is probably a README, because it always is.
Want the full pack?
This article covers 10 prompts. The Claude Prompts for Developers pack includes 55 prompts across 6 categories — code review, debugging, architecture, docs, productivity, and 5 multi-step power combos. The Writing & Documentation section alone has 15 prompts: full coverage for API references, SDK guides, runbooks, incident post-mortems, migration guides, and more. One-time download, copy-paste ready.