Most people use LLMs for writing like this:
“Rewrite this to sound better.”
And then they wonder why the output feels:
- generic
- too long
- weirdly confident
- inconsistent with their voice
Here’s a better approach that works extremely well for developers:
Treat writing like code.
You already know how to get consistent code quality:
- a style guide
- a linter
- a review checklist
You can build the same thing for writing with a simple prompt + a repeatable workflow.
Step 1: write your “style spec” (copy/paste)
Create a short style spec you can reuse. Keep it explicit.
STYLE SPEC
- Voice: knowledgeable, casual, senior dev. No hype.
- Sentences: mostly short. Avoid long chains of commas.
- Structure: strong headings, bullets, concrete examples.
- Avoid: buzzwords, vague claims, "revolutionary", "game-changing".
- Prefer: active voice, specific nouns, numbers.
- Audience: developers.
- Goal: teach something practical.
This alone fixes most of the typical “LLM writing” problems: the model stops guessing at your voice.
Step 2: create a writing checklist (the linter rules)
These are my default checks:
- Does the intro explain the problem in 2–3 sentences?
- Are there copy/paste examples?
- Are claims backed by specifics?
- Is the conclusion actionable?
- Are there any “assistant-y” phrases? (remove them)
Now we turn that into a prompt.
Step 3: the “AI linter” prompt
Paste your draft, get a structured review.
You are my writing linter.
STYLE SPEC:
<PASTE STYLE SPEC>
TASK:
Review the draft and output:
1) A list of issues (bullet list) grouped by: Clarity, Structure, Tone, Specificity.
2) A suggested rewrite of the intro (max 120 words).
3) A list of 5 concrete edits (each as "Before -> After").
RULES:
- Do not rewrite the whole article.
- Be direct. No compliments.
- If something is vague, propose a specific replacement.
DRAFT:
<PASTE DRAFT>
This keeps the model in “review mode” instead of “rewrite everything mode”.
Step 4: apply edits with a second prompt (safely)
Once you have review feedback, apply it with constraints.
Apply the following edits to the draft.
Rules:
- Keep meaning.
- Keep headings.
- Do not add new sections.
- Keep total length within +/- 10%.
Draft:
<PASTE>
Edits to apply:
<PASTE LINTER OUTPUT>
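If you end up assembling the Step 4 prompt by hand a lot, it's trivial to build in code. A minimal sketch in plain Node.js (the function name is mine, not from any tool):

```javascript
// build-apply-prompt.js
// Builds the constrained "apply edits" prompt from a draft
// plus the linter output produced in Step 3.
function buildApplyPrompt(draft, linterOutput) {
  return [
    "Apply the following edits to the draft.",
    "Rules:",
    "- Keep meaning.",
    "- Keep headings.",
    "- Do not add new sections.",
    "- Keep total length within +/- 10%.",
    "Draft:",
    draft,
    "Edits to apply:",
    linterOutput,
  ].join("\n");
}

console.log(buildApplyPrompt("<your draft here>", "<linter output here>"));
```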
This two-step flow produces writing that feels like you, not like a generic content bot.
Automate it: run the linter from the CLI
Here’s a minimal Node.js script that:
- reads a markdown file
- prints the linter prompt with your style spec included
// lint-prompt.js
import fs from "node:fs";

// Fail fast if no draft file was given.
if (!process.argv[2]) {
  console.error("Usage: node lint-prompt.js <draft.md>");
  process.exit(1);
}

const style = fs.readFileSync("style-spec.txt", "utf8");
const draft = fs.readFileSync(process.argv[2], "utf8");

const prompt = `You are my writing linter.
STYLE SPEC:
${style}
TASK:
Review the draft and output:
1) A list of issues grouped by: Clarity, Structure, Tone, Specificity.
2) A suggested rewrite of the intro (max 120 words).
3) 5 concrete edits (Before -> After).
RULES:
- Do not rewrite the whole article.
- Be direct. No compliments.
DRAFT:
${draft}`;

console.log(prompt);
Usage (pbcopy is macOS; use xclip -selection clipboard on Linux or clip on Windows):
node lint-prompt.js article.md | pbcopy
You can do the same in Python, Makefiles, or whatever you already use.
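If you want to skip the clipboard step entirely, you can call a model straight from the script. A sketch, not a definitive implementation: the endpoint, model name, and env var below assume an OpenAI-compatible chat API; swap in whatever provider you use. The model call is passed in as a function so the prompt plumbing stays testable:

```javascript
// lint-run.js
// Sketch: run the linter prompt through a model directly.
// buildLinterPrompt is pure; callModel is injected so you can
// swap providers (or use a stub in tests).
function buildLinterPrompt(style, draft) {
  return `You are my writing linter.
STYLE SPEC:
${style}
TASK:
Review the draft and output:
1) A list of issues grouped by: Clarity, Structure, Tone, Specificity.
2) A suggested rewrite of the intro (max 120 words).
3) 5 concrete edits (Before -> After).
RULES:
- Do not rewrite the whole article.
- Be direct. No compliments.
DRAFT:
${draft}`;
}

async function lintDraft(style, draft, callModel) {
  return callModel(buildLinterPrompt(style, draft));
}

// Example callModel for an OpenAI-compatible endpoint
// (endpoint, model, and env var are assumptions — adjust to taste).
async function callOpenAI(prompt) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Injecting the model call also means the review step and the apply step can share one pipeline later.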
The “final pass” prompt: catch last-mile problems
Before publishing, I run a final pass with a short checklist:
Final pass.
Check the draft for:
- unclear sentences
- missing context
- passive voice
- repeated phrases
- paragraphs longer than 5 lines
Output ONLY a bullet list of fixes. No rewrites.
Draft:
<PASTE>
It’s boring. It works.
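The linter, the apply step, and the final pass all have the same shape: fixed instructions plus a pasted draft. One tiny helper covers all three. A sketch — the {{DRAFT}} placeholder convention here is my own, not a standard:

```javascript
// render-template.js
// Substitutes a draft into any prompt template containing
// a {{DRAFT}} placeholder (my own convention, nothing standard).
function renderTemplate(template, draft) {
  return template.replace("{{DRAFT}}", draft);
}

// Example with the final-pass prompt as a template:
const finalPass = `Final pass.
Check the draft for:
- unclear sentences
- passive voice
Output ONLY a bullet list of fixes. No rewrites.
Draft:
{{DRAFT}}`;

console.log(renderTemplate(finalPass, "your draft text here"));
```

Pair it with fs.readFileSync, like lint-prompt.js above, and every prompt becomes a plain text file you can keep in your repo and version like code.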
Why this works
You’re not asking the model to “be creative.”
You’re asking it to:
- follow a spec
- run a checklist
- propose diffs
That’s exactly the kind of task LLMs are good at.
Want the templates I actually use?
I keep a curated set of prompt templates for dev + productivity workflows in my Prompt Engineering Cheatsheet.
- Free sample: https://getnovapress.gumroad.com/l/prompt-sample
- Full shop (Cheatsheet $9+): https://getnovapress.gumroad.com