# The Exact Prompts I Use to Generate Technical Ebook Chapters with Claude
Prompt engineering for ebook generation is not about being clever. It's about setting up constraints that make bad outputs impossible.
Here are the exact prompts I use — system message, user message structure, and the four constraints that produce runnable, consistent, technical content at chapter scale.
🎁 Free: AI Publishing Checklist — 7 steps in Python · Full pipeline: germy5.gumroad.com/l/xhxkzz (pay what you want, min $9.99)
## System Prompt
````
You are a technical author producing chapters for a Python programming book.

Your writing is direct, first-person, and code-forward. Every claim is demonstrated with a working code example. You never use marketing language or hedging phrases.

HARD CONSTRAINTS — these cannot be overridden by the user message:
1. Every code block must use only the Python stdlib unless the chapter topic is specifically about a named third-party library
2. All variable names, function names, and inline comments must be in English
3. Every function must have a docstring
4. The chapter must end with a code example that a reader can run immediately in a clean Python environment

OUTPUT FORMAT:
- Start with the chapter title as H1
- Use H2 for major sections (3–5 sections per chapter)
- Use H3 for subsections only when necessary
- Code blocks must use triple backticks with a language tag: ```python
- Do not include a table of contents
- Do not include a conclusion section — end on the final code example
````
## User Message Template
```python
CHAPTER_PROMPT = """
Chapter {number}: {title}

Learning objective: {learning_objective}
Target length: {word_target} words (±15% is acceptable)

Style guide:
- Voice: {voice}
- Audience: {audience}
- Avoid: {avoid_list}

{notes_section}

Write this chapter now. Include:
1. An opening that states the problem this chapter solves (2–3 paragraphs, no fluff)
2. The core technical concept with a minimal working example
3. The full implementation with edge cases handled
4. A failure scenario and how to handle it
5. A closing runnable script that demonstrates everything covered

The final code block must produce visible output when run with: python3 script.py
"""
```
## Building the prompt in code
```python
import anthropic

client = anthropic.Anthropic()

def generate_chapter(chapter: dict, style_guide: dict) -> str:
    """Generate one chapter from its outline entry and the book's style guide."""
    notes_section = ""
    if chapter.get("notes"):
        notes_section = f"Additional notes for this chapter:\n{chapter['notes']}"

    avoid_list = ", ".join(style_guide.get("avoid", []))

    user_message = CHAPTER_PROMPT.format(
        number=chapter["number"],
        title=chapter["title"],
        learning_objective=chapter["learning_objective"],
        word_target=chapter["word_target"],
        voice=style_guide.get("voice", "direct, technical"),
        audience=style_guide.get("audience", "experienced Python developers"),
        avoid_list=avoid_list or "passive voice, marketing language",
        notes_section=notes_section,
    )

    response = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=8192,
        system=SYSTEM_PROMPT,  # the system prompt shown above, stored as a string
        messages=[{"role": "user", "content": user_message}],
    )
    return response.content[0].text
```
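Before spending tokens, it helps to render the template with sample values and eyeball the result. The chapter data below is made up, and the template is abbreviated from the full CHAPTER_PROMPT above:

```python
# Abbreviated copy of CHAPTER_PROMPT, enough to exercise every placeholder.
CHAPTER_PROMPT = """
Chapter {number}: {title}
Learning objective: {learning_objective}
Target length: {word_target} words
Voice: {voice} / Audience: {audience} / Avoid: {avoid_list}
{notes_section}
"""

# Made-up chapter data, purely for the smoke test.
sample = CHAPTER_PROMPT.format(
    number=3,
    title="Subprocess Pipelines",
    learning_objective="run and validate external commands",
    word_target=2500,
    voice="direct, technical",
    audience="experienced Python developers",
    avoid_list="passive voice, marketing language",
    notes_section="",
)
print(sample)
```

A leftover `{placeholder}` in the rendered prompt means a field is missing from the outline, which is much cheaper to catch here than in a generated chapter.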
## The Four Constraints That Make It Work
### 1. stdlib only — the single biggest improvement
Before adding this constraint: ~30% of generated scripts failed subprocess validation because they imported pandas, numpy, or other packages not present in the clean test environment.
After: failure rate dropped to ~10%, and the remaining failures are logic errors (wrong output format), not import errors.
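The validation step itself isn't shown in this post. A minimal sketch, assuming each generated script has already been extracted to a string; it runs the script with the same interpreter rather than building a fully isolated environment:

```python
import os
import subprocess
import sys
import tempfile

def validate_script(code: str, timeout: int = 30) -> tuple[bool, str]:
    """Run a generated script in a subprocess; return (passed, output).

    Passing means a zero exit code and visible stdout, matching the
    "must produce visible output" constraint.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        ok = result.returncode == 0 and bool(result.stdout.strip())
        return ok, result.stdout or result.stderr
    finally:
        os.unlink(path)

ok, output = validate_script('print("hello from the chapter script")')
bad_ok, bad_output = validate_script("import definitely_not_a_real_module_xyz")
```

The second call is what the stdlib-only constraint prevents: an import of a package that isn't in the clean environment fails here, not in a reader's terminal.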
### 2. English variable names — critical for bilingual output
Without this constraint: the translation LLM sometimes "helpfully" translated variable names like validar_codigo or archivo_salida into Spanish in the Spanish edition.
After: all variable names stay in English in both EN and ES chapters, which is correct behavior for a programming book.
### 3. Every function must have a docstring
This sounds minor. It's not. Docstrings force the model to articulate what a function does before writing it. The generated code is noticeably more coherent when docstrings are mandatory.
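The docstring rule is also cheap to verify mechanically. A sketch of one way to check it with the stdlib ast module (not necessarily how the full pipeline does it):

```python
import ast

def functions_missing_docstrings(code: str) -> list[str]:
    """Return the names of functions in code that lack a docstring."""
    tree = ast.parse(code)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

# Illustrative snippets, one compliant and one not.
good = 'def f():\n    """Docstring."""\n    return 1\n'
bad = "def g():\n    return 2\n"
```

Running this over every extracted code block turns the constraint from a polite request into a gate the chapter has to pass.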
### 4. End on a runnable script
This is the most important constraint. It forces the model to produce a complete, coherent example that exercises everything in the chapter. It also means the final subprocess validation call has a meaningful target.
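That validation target has to be pulled out of the generated markdown first. A minimal sketch, assuming standard triple-backtick fences; the fence string is built indirectly so this example renders cleanly here:

```python
import re

FENCE = "`" * 3  # a literal triple backtick

def last_python_block(chapter_md: str) -> str:
    """Return the body of the last fenced python code block in a chapter."""
    pattern = FENCE + r"python\n(.*?)" + FENCE
    blocks = re.findall(pattern, chapter_md, flags=re.DOTALL)
    if not blocks:
        raise ValueError("chapter contains no python code blocks")
    return blocks[-1]

# A tiny illustrative chapter with two code blocks.
sample_chapter = (
    "# Title\n\n"
    + FENCE + "python\nprint('first')\n" + FENCE + "\n\n"
    + "Prose.\n\n"
    + FENCE + "python\nprint('final demo')\n" + FENCE + "\n"
)
```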
## Translation Prompt
After the English chapter is generated and validated, the translation runs:
````python
TRANSLATION_PROMPT = """
Translate the following technical Python book chapter from English to Spanish.

TRANSLATION RULES — these are mandatory:
1. All prose, headings, and explanations must be translated to Spanish
2. All code blocks must remain EXACTLY as they are — do not translate variable names, function names, comments, or string literals inside code blocks
3. The number of ```python code blocks must be identical to the original
4. Technical terms that are commonly used in English within Spanish technical communities (API, endpoint, pipeline, framework, debug) may remain in English
5. Do not add, remove, or reorder sections

Chapter to translate:
{english_chapter}
"""

def translate_chapter(en_content: str) -> str:
    """Translate a validated English chapter into Spanish."""
    response = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=8192,
        messages=[{
            "role": "user",
            "content": TRANSLATION_PROMPT.format(english_chapter=en_content)
        }]
    )
    return response.content[0].text
````
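Rule 3 is easy to enforce after the fact. A sketch of the parity check with illustrative sample text; the fence string is again built indirectly so the example renders here:

```python
FENCE = "`" * 3  # a literal triple backtick

def count_python_blocks(markdown: str) -> int:
    """Count opening python fences in a markdown chapter."""
    return markdown.count(FENCE + "python")

def translation_block_parity(en: str, es: str) -> bool:
    """Check rule 3: both editions must contain the same number of code blocks."""
    return count_python_blocks(en) == count_python_blocks(es)

# Illustrative one-block chapters in each language.
en = "Intro\n" + FENCE + "python\nx = 1\n" + FENCE + "\n"
es = "Introducción\n" + FENCE + "python\nx = 1\n" + FENCE + "\n"
```

If parity fails, the cheapest fix is to rerun the translation rather than patch the output by hand.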
## Prompt Iteration Log
These are the prompt instructions I removed after they made things worse:
❌ "Make the code production-ready" — produced over-engineered examples with complex error handling that made the learning objective harder to see.
❌ "Write in a conversational tone" — produced chatty prose that felt like a blog post, not a book chapter.
❌ "Include best practices" — vague instruction that added boilerplate comments without improving the code.
❌ "Imagine the reader is a beginner" — conflicted with the "audience: experienced Python developers" constraint and produced inconsistent chapters.
The current prompts are the result of about 40 generation runs across 4 books. They're conservative and explicit. They produce fewer surprises than clever prompts.
Full pipeline (including all prompt templates in PIPELINE_PROMPT_v4.md): germy5.gumroad.com/l/xhxkzz — pay what you want, min $9.99.