Everyone in AI circles eventually discovers the Magic Prompt Formula. If you haven't, here it is. It's the most widely shared structured approach to prompting, and it genuinely works.
The formula has five parts:
Role (Who): Assign a specific expert persona: "You are a senior content strategist" or "Act as a seasoned Python developer."
Action (What): Use a clear verb that defines what the AI should do: "Draft," "Refactor," "Analyze," "Summarize."
Context (Why): Provide relevant background so the model doesn't give generic answers: your audience, your product, your use case.
Examples (How): Give one or two samples of the expected output style; this is few-shot prompting in practice.
Constraints & Format (Boundaries): Set limits and define structure: "Under 200 words," "No jargon," "Single block of copy," "Use bullet points."
Instead of typing "write me a LinkedIn post," you write:
You are a senior content strategist specializing in B2B SaaS. Write a LinkedIn post announcing [product]. My audience is AI developers and technical founders. Here's an example of the tone I want: [example]. Keep it under 200 words, no buzzword fluff, single block of copy.
That's Role, Action, Context, Examples, and Constraints all in one prompt. You've gone from a generic instruction to a specialized brief. The output quality difference is real and immediate.
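If you use the formula a lot, it's worth treating those five parts as a reusable template instead of retyping them every time. Here's a minimal sketch in Python; the MagicPrompt class and its field names are my own illustration, not part of any prompting library or of PQS.

```python
from dataclasses import dataclass, field

@dataclass
class MagicPrompt:
    # Illustrative container for the five Magic Formula parts (names are my own, not a standard).
    role: str                                         # Who: the expert persona
    action: str                                       # What: the task verb and deliverable
    context: str                                      # Why: audience, product, use case
    examples: list = field(default_factory=list)      # How: sample outputs to imitate
    constraints: list = field(default_factory=list)   # Boundaries: limits and format

    def build(self) -> str:
        # Assemble the parts in the order the formula lists them.
        parts = [f"You are {self.role}.", self.action, self.context]
        parts += [f"Here's an example of the tone I want: {e}" for e in self.examples]
        if self.constraints:
            parts.append("Constraints: " + ", ".join(self.constraints) + ".")
        return " ".join(parts)

prompt = MagicPrompt(
    role="a senior content strategist specializing in B2B SaaS",
    action="Write a LinkedIn post announcing [product].",
    context="My audience is AI developers and technical founders.",
    examples=["[example]"],
    constraints=["under 200 words", "no buzzword fluff", "single block of copy"],
)
print(prompt.build())
```

Running it reproduces a brief very close to the one above, which is exactly the point: the structure stays constant while the content changes.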
So what's the problem?
The Magic Prompt Formula covers about half of what actually determines prompt quality.
I know this because I built a tool that measures it.
PQS (Prompt Quality Score) scores prompts across 8 dimensions before you send them to a model. The five ingredients of the Magic Formula map cleanly onto five of those dimensions: clarity, specificity, context, examples, and constraints. A well-structured Magic Formula prompt typically scores in the 47–52 range out of 80. Solid. Grade A territory, even.
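To make the 80-point scale concrete, here's a rough sketch of how an 8-dimension rubric could roll up into a single score. The two unnamed dimensions, the aggregation logic, and the per-dimension numbers are placeholders I've invented for illustration; this is not how PQS is actually implemented, only the totals and ranges come from this post.

```python
# Hypothetical rubric: eight dimensions, each scored 0-10, for an 80-point total.
# Dimension names beyond the six mentioned in this post are placeholders.
DIMENSIONS = [
    "clarity", "specificity", "context", "examples",
    "constraints", "cot_structure", "other_dimension_7", "other_dimension_8",
]

def total_score(per_dimension: dict) -> int:
    """Sum per-dimension scores (each clamped to 0-10) into a 0-80 total."""
    return sum(min(max(int(per_dimension.get(d, 0)), 0), 10) for d in DIMENSIONS)

# A well-built Magic Formula prompt covers the first five dimensions well
# but typically leaves cot_structure in the 3-4 range (illustrative numbers only).
example = {"clarity": 9, "specificity": 8, "context": 8, "examples": 8,
           "constraints": 8, "cot_structure": 3,
           "other_dimension_7": 3, "other_dimension_8": 3}
print(total_score(example))  # 50: inside the 47-52 band described above
```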
But there's a dimension the formula doesn't touch at all: chain-of-thought structure.
CoT structure measures whether your prompt scaffolds the model's reasoning: numbered steps, analysis frameworks, structured output sequences, decision trees. It's the difference between asking an expert to answer a question and asking them to walk you through how they'd think about it.
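CoT structure is also the easiest dimension to spot-check mechanically. The toy heuristic below is my own rough approximation, not the PQS scoring logic; it just counts the scaffolding signals mentioned above.

```python
import re

# Toy heuristic, not PQS internals: count reasoning-scaffold signals in a prompt.
SCAFFOLD_PATTERNS = [
    r"\d+[.)]\s",                      # numbered steps ("1. ", "2) ")
    r"\bstep\s*\d+\b",                 # explicit "Step 1", "Step 2"
    r"\b(first|then|next|finally)\b",  # ordered-sequence language
    r"\bstructure your\b",             # instructions to follow a structure
    r"\bwalk (me )?through\b",         # asking for the reasoning path
]

def cot_signals(prompt: str) -> int:
    """Return how many scaffold patterns appear at least once in the prompt."""
    text = prompt.lower()
    return sum(bool(re.search(p, text)) for p in SCAFFOLD_PATTERNS)

magic = "Keep it under 200 words, no buzzword fluff, single block of copy."
scaffolded = "Structure your post: 1. Hook. 2. Problem. 3. Solution. 4. Benefit. 5. CTA."
print(cot_signals(magic), cot_signals(scaffolded))  # 0 vs. 2
```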
Magic Formula prompts score 3 or 4 out of 10 on CoT structure. Consistently. Across every vertical we've tested.
I ran three Magic Formula prompts through PQS this week. One content prompt, one software prompt, one crypto analysis prompt.
All three were well-constructed. All three scored Grade A before optimization.
After PQS optimization, all three jumped by 63–71%.
The CoT dimension went from an average of 3.7 to 9.0 across all three. Every other dimension moved 1–4 points. CoT moved 5–6.
Here's what that looks like in practice. The content prompt went from:
"Keep it under 200 words, no buzzword fluff, end with a clear CTA, format as a single block of copy"
To: "Structure your post: 1. Hook: open with a relatable developer pain point. 2. Problem: briefly explain the cost of discovering prompt issues too late. 3. Solution: introduce PQS as the pre-inference fix. 4. Benefit: one concrete outcome. 5. CTA: direct readers to pqs.onchainintel.net"
Same constraints. Completely different reasoning scaffolding. The model doesn't just know what to write; it knows how to think through the writing.
This is the AI 'input quality' problem in concrete form.
The Magic Formula solves the WHO and the WHAT. It tells the model who it is and what you want. What it doesn't solve is the HOW: the reasoning path the model should follow to get there.
Most people find out their prompt was weak after the output disappoints them. By then you've already burned tokens, lost time, and often shipped something mediocre. PQS scores the prompt before inference. It catches the CoT gap, and every other gap, before you run it.
If you're already using the Magic Formula, you're ahead of most people! Prompt Quality Score shows you exactly how much further ahead you could be.
Score your next prompt before you send it → https://pqs.onchainintel.net
About OnChainIntel — AI-powered on-chain wallet analysis. We decode the behavioral patterns, hidden biases, and implicit bets behind any wallet's transaction history. Try it free at onchainintel.net · Follow us on X: @onchainaiintel · TikTok: @onchainintel · YouTube: @onchainaiintel

