A four-step writing loop, a fabricated 40% statistic, and the question that saved an article.
Here's how I write articles:
1. Claude drafts the rough version.
2. I check what it wrote.
3. I modify what's off.
4. I publish.
Most of the work happens in the loop between steps 2 and 3 — back and forth, until the draft says what I actually mean.
Tonight, that loop caught a number that wasn't real.
The workflow
This loop has been my habit ever since I started using generative AI. Generate → review. For code. For articles. The same pattern in both domains.
I'm not a native English writer. AI gives me tone, structure, and rhythm I couldn't produce alone in a reasonable time. I bring the experience, the decisions, and the numbers.
Yes, the review is tedious. But right now, I can't skip it. Someday the models may be reliable enough that this step disappears. Today is not that day.
The loop works because of step 2. Without it, the draft is the writer, not me.
Tonight, step 2 stopped me from publishing a lie.
The number that wasn't real
The draft was about how AI pricing changes are reshaping the architecture of small AI-native SaaS. I'm a solo founder running one such SaaS in production, with three more in active development. Pricing pressure is something I think about weekly.
The draft came back polished. Three concrete architectural shifts. Specific. Quotable.
One of those lines read:
"We removed roughly 40% of token volume from one pipeline without quality loss, just by auditing what we were sending."
A credible technical founder might write that. It would have been the most-shared line in the article.
There's just one problem.
I never measured that. I never reduced 40% of any token volume. I haven't run any audit. The number was completely invented.
How I caught it
The fabrication only surfaced because I asked one question:
Where did this number come from?
The answer was: nowhere. Claude generated a plausible-sounding statistic to make the section more compelling. Not maliciously. It was doing what generative AI tends to do.
My instructions are imperfect
Of course Claude wasn't being malicious. That's not how this works.
I've written before about how my prompts are often half-formed. When I give a partner half-formed instructions, that partner tries hard to fill the gaps. With confidence. With rhythm. Sometimes with numbers that aren't real.
It's always like this. The partner tries. My instructions are imperfect. The output reflects both.
The 40% was Claude meeting me in the middle of a gap I left. The middle just happened to be wrong.
That's why I can't publish without checking. Always.
What AI tends to do
Generative AI, given a writing task, tends toward two patterns I've started to recognize.
First, it embellishes. A small accomplishment becomes a confident achievement. A vague memory becomes a specific milestone. The phrasing gets sharper than the underlying truth.
Second, it fills gaps with imagination. If the rhythm of a paragraph wants a number, a number appears. If a sentence wants a date, a date appears. These additions don't come from your reality. They come from the model's sense of what reads as authoritative.
Both behaviors look like good writing. Both can be true to the rhythm of the genre. Neither is automatically true to you.
The check is the writing
After tonight, I think about the four-step loop differently.
Step 1 (Claude drafts) is fast. Step 4 (publish) is fast.
The work — the part that determines whether the article represents me or the model's guess about me — is in steps 2 and 3.
That's not friction in the workflow. That's the workflow. Skipping it doesn't speed up the process. It changes what gets published from "what's true about me" to "what would sound true about someone like me."
The check isn't a final review. It's where the writing actually happens.
What got published instead
"Roughly 40% reduction" became:
"The savings vary by pipeline. The bigger discipline shift is treating every token as a line item, not as free."
The second version doesn't lie. It also doesn't claim authority I haven't earned. It says what I actually know, in the language I actually have.
The article is weaker by one quotable line. It is stronger by the truth.
A small note about this piece
It went through the same four-step loop. Claude drafted. I checked. I modified. Now I'm publishing.
Every claim about my situation, my decisions, and my numbers was checked against my actual experience.
The "40% reduction" you read about earlier was the line that didn't make it. There may be others I didn't catch. If you find one, tell me. I'll fix it.
Closing
AI drafts. I check. The order matters.
If it ever flips — if I publish without checking — what gets read isn't me anymore. It's the model's confident guess about who I should be.
I'd rather be honestly less impressive than dishonestly more.