How to Write Better AI Prompts: A Signal Processing Approach
By Mario Alexandre
March 21, 2026
sinc-LLM
Prompt Engineering
The Problem with Prompt Advice
Most prompt advice is vague: "be specific," "give context," "use examples." This advice is not wrong, but it is incomplete. It does not tell you how specific, how much context, or which examples. It provides no way to verify that your prompt is complete.
The sinc-LLM framework replaces vague advice with a formal specification. A prompt is complete when it samples all 6 specification bands at sufficient resolution. No more guessing.
The 6 Things Every Prompt Needs
x(t) = Σ x(nT) · sinc((t - nT) / T)
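This is the Whittaker-Shannon interpolation formula: a band-limited signal sampled at interval T can be rebuilt exactly from its samples. A quick NumPy check of the formula itself (plain signal processing, not sinc-LLM code):

```python
import numpy as np

T = 0.1                                    # sample interval: 10 Hz, Nyquist limit 5 Hz
n = np.arange(50)
samples = np.sin(2 * np.pi * 1.0 * n * T)  # 1 Hz sine, well below Nyquist

def reconstruct(t: float) -> float:
    """x(t) = sum over n of x(nT) * sinc((t - nT) / T)."""
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

t = 2.42  # a point between samples, near the middle of the window
print(abs(reconstruct(t) - np.sin(2 * np.pi * t)))  # small truncation error
```

The error is nonzero only because the sum is truncated to 50 samples; with infinitely many samples the reconstruction is exact.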
Based on research analyzing 275 production prompts, every effective prompt explicitly addresses 6 components:
PERSONA: Tell the AI who it is. Not "helpful assistant" but "senior data scientist specializing in time series forecasting."
CONTEXT: Give the situation. What project, what stage, what has been tried, what environment.
DATA: Provide the actual inputs. Code, numbers, documents, examples.
CONSTRAINTS: The most important band (42.7% of output quality). Rules, limits, exclusions, requirements, edge cases. Allocate the most words here.
FORMAT: Specify exactly what the output looks like. JSON? Markdown? Bullet points? How long?
TASK: The instruction itself. With the other five bands in place, this is usually one sentence.
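The 6 components can be sketched as a simple assembly step. This is a minimal illustration, not the sinc-LLM API; `build_prompt` and the band order are assumptions for the sketch.

```python
# The 6 specification bands, in the order the article presents them.
BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def build_prompt(bands: dict) -> str:
    """Join the 6 bands into one prompt; refuse to build an undersampled one."""
    missing = [b for b in BANDS if not bands.get(b)]
    if missing:
        raise ValueError(f"Undersampled prompt; missing bands: {missing}")
    return "\n".join(f"{name}: {bands[name]}" for name in BANDS)

prompt = build_prompt({
    "PERSONA": "Senior business analyst for e-commerce",
    "CONTEXT": "Q4 2025 sales data, comparing to Q3.",
    "DATA": "[attached CSV: date, revenue, churn_rate, new_signups]",
    "CONSTRAINTS": "Focus on churn trends only. Max 500 words.",
    "FORMAT": "Executive summary (3 bullets), then a Q3 vs Q4 table.",
    "TASK": "Analyze churn trends and identify the top 3 findings.",
})
```

The point of the `ValueError` is the framework's central claim: a prompt missing any band is incomplete by construction, not by taste.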
Before and After: Real Examples
Bad Prompt (1 band sampled)
"Analyze this data and give me insights."
This covers only TASK. The model must guess persona, context, what data, constraints, and format. Every guess is a potential hallucination.
Good Prompt (6 bands sampled)
PERSONA: Senior business analyst for e-commerce
CONTEXT: Q4 2025 sales data, comparing to Q3. Company sells B2B SaaS.
DATA: [attached CSV with columns: date, revenue, churn_rate, new_signups]
CONSTRAINTS: Focus on churn trends only. Do not speculate on causes
without data support. Flag any metric that changed more than 15%.
Max 500 words.
FORMAT: Executive summary (3 bullets), then detailed analysis
with one table comparing Q3 vs Q4 metrics.
TASK: Analyze churn trends and identify the top 3 actionable findings.
The CONSTRAINTS-First Method
The counterintuitive finding from 275 observations: start writing your prompt with CONSTRAINTS, not TASK. Here is why:
CONSTRAINTS carry 42.7% of output quality weight
Writing constraints forces you to think about edge cases before the model does
Constraints are the specification dimensions most often left to the model's imagination
A prompt with excellent constraints and a mediocre task outperforms a prompt with an excellent task and no constraints
Practical rule: if your CONSTRAINTS section is shorter than your TASK section, your prompt is undersampled in the highest-energy band.
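The practical rule above is mechanical enough to automate. A quick sanity check, assuming bands are stored as labeled strings (the helper name is illustrative, not sinc-LLM API):

```python
def constraints_undersampled(bands: dict) -> bool:
    """True when the CONSTRAINTS band has fewer words than the TASK band."""
    n_constraints = len(bands.get("CONSTRAINTS", "").split())
    n_task = len(bands.get("TASK", "").split())
    return n_constraints < n_task

bands = {
    "CONSTRAINTS": "Max 500 words.",
    "TASK": "Analyze churn trends and identify the top 3 actionable findings.",
}
print(constraints_undersampled(bands))  # True: grow the CONSTRAINTS band
```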
Tools to Help
You do not need to do this manually every time:
sinc-LLM Transformer: a free online tool that decomposes any raw prompt into 6 bands automatically
sinc-LLM GitHub: the open-source framework, which you can integrate into your workflow
Full Research Paper: 275 observations, mathematical proofs, and ablation studies
The difference between a good prompt and a great prompt is not creativity; it is completeness. Sample all 6 bands, and the model does the rest.
Real sinc-LLM Prompt Example
This is the exact JSON format that sinc-LLM uses. Paste any raw prompt at tokencalc.pro to generate one automatically.
{
"formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
"T": "specification-axis",
"fragments": [
{
"n": 0,
"t": "PERSONA",
"x": "You are a practical AI coach who helps beginners get dramatically better results from ChatGPT, Claude, and Gemini. You focus on immediate, actionable improvements."
},
{
"n": 1,
"t": "CONTEXT",
"x": "The reader has been using AI chatbots for 6 months and gets 'okay' results. They know their prompts could be better but do not know where to start. They have never heard of sinc-LLM or prompt frameworks."
},
{
"n": 2,
"t": "DATA",
"x": "Average user prompt: 12 words. Average sinc prompt: 68 words. Quality improvement: 24% (composite score). The single biggest improvement comes from adding CONSTRAINTS (42.7% of quality)."
},
{
"n": 3,
"t": "CONSTRAINTS",
"x": "Write at a 10th-grade reading level. No jargon. Every tip must include a before/after example. Limit to 5 tips maximum. Each tip must be usable in under 30 seconds. Do not mention the word 'framework' or 'methodology.'"
},
{
"n": 4,
"t": "FORMAT",
"x": "Return: (1) The One Thing Wrong With Your Prompts (1 paragraph). (2) 5 Tips with Before/After examples. (3) The Template they can copy-paste."
},
{
"n": 5,
"t": "TASK",
"x": "Write 5 practical tips for writing better AI prompts that a beginner can apply immediately, based on the sinc-LLM finding that CONSTRAINTS carry 42.7% of quality."
}
]
}
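Turning this JSON back into a flat prompt is a one-liner: sort the fragments by n and join each band's text. A sketch (the `json_doc` string is an abbreviated stand-in for the full structure above):

```python
import json

json_doc = """
{
  "fragments": [
    {"n": 1, "t": "CONTEXT", "x": "The reader has used AI chatbots for 6 months."},
    {"n": 0, "t": "PERSONA", "x": "You are a practical AI coach."}
  ]
}
"""

doc = json.loads(json_doc)
# Order by sample index n, then emit "BAND: text" blocks.
fragments = sorted(doc["fragments"], key=lambda frag: frag["n"])
prompt = "\n\n".join(f"{frag['t']}: {frag['x']}" for frag in fragments)
print(prompt.splitlines()[0])  # PERSONA: You are a practical AI coach.
```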
Originally published at tokencalc.pro