
Mario Alexandre

Posted on • Originally published at tokencalc.pro

The Best ChatGPT Prompt Template Based on Signal Processing Research


By Mario Alexandre
March 21, 2026
Tags: sinc-LLM, Prompt Engineering

Why Most Prompt Templates Fail

Search for "ChatGPT prompt template" and you will find hundreds of variations. Most share a common flaw: they are based on intuition rather than data. They tell you to "act as" and "give context" without quantifying how much context is enough or what kinds of specifications matter most.

The sinc-LLM framework provides a template backed by 275 production observations and the Nyquist-Shannon sampling theorem. It works for ChatGPT, Claude, Gemini, and any other LLM because it addresses a universal property of language model specification.

The Template

The framework's guiding equation is the Whittaker-Shannon interpolation formula:

x(t) = Σ x(nT) · sinc((t - nT) / T)

Copy and adapt this template for any ChatGPT task:

PERSONA: [Role with specific expertise]
You are a [specific role] with expertise in [specific domain].

CONTEXT: [Situation and background]
[What project/situation this is for]
[What has been tried or decided already]
[Relevant environment or audience details]

DATA: [Specific inputs]
[Actual data, code, documents, or examples the model should use]

CONSTRAINTS: [Rules and boundaries -- allocate ~42% of your prompt here]

  • [Specific exclusion: what NOT to do]
  • [Measurable limit: word count, format restriction]
  • [Required inclusion: what MUST appear]
  • [Edge case handling: if X then Y]
  • [Accuracy rule: only use provided data]
  • [Style rule: tone, voice, jargon policy]
  • [Safety rule: compliance, sensitivity]

FORMAT: [Output structure -- allocate ~26% here]

  • [Exact structure: headers, sections, bullet points]
  • [Length specification]
  • [Code format if applicable]

TASK: [One clear instruction]
[What to do -- this is usually just one sentence by now]
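If you script your prompts, the band layout above can be assembled programmatically. A minimal sketch, where the `build_prompt` helper and the band dictionary are illustrative, not part of any library:

```python
# Illustrative helper only -- not part of the sinc-llm package.
BAND_ORDER = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def build_prompt(bands: dict) -> str:
    """Join the six specification bands, in template order, into one prompt."""
    missing = [b for b in BAND_ORDER if not bands.get(b)]
    if missing:
        raise ValueError(f"missing bands: {', '.join(missing)}")
    return "\n\n".join(f"{b}: {bands[b].strip()}" for b in BAND_ORDER)

prompt = build_prompt({
    "PERSONA": "Senior software engineer, 10 years Python experience",
    "CONTEXT": "FastAPI microservice handling payment webhooks, production",
    "DATA": "[paste the function to review]",
    "CONSTRAINTS": "Focus on security vulnerabilities only. Max 5 findings.",
    "FORMAT": "Table with columns: Severity | Line | Issue | Fix",
    "TASK": "Review this code for security vulnerabilities.",
})
print(prompt)
```

Raising on a missing band is deliberate: the whole point of the template is that skipping a band is an error, not a default.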

Template in Action: 3 Examples

Example 1: Code Review

PERSONA: Senior software engineer, 10 years Python experience
CONTEXT: FastAPI microservice handling payment webhooks, production
DATA: [paste the function to review]
CONSTRAINTS: Focus on security vulnerabilities only. Do not suggest
style changes. Flag any unvalidated input. Check for SQL injection,
XSS, SSRF. Max 5 findings, ranked by severity.
FORMAT: Table with columns: Severity | Line | Issue | Fix
TASK: Review this code for security vulnerabilities.

Example 2: Content Writing

PERSONA: B2B SaaS content writer for developer audience
CONTEXT: Blog post for company engineering blog, readers are senior devs
DATA: Topic: "Why we migrated from Redis to DragonflyDB"
CONSTRAINTS: 800-1000 words. No marketing language. Include specific
metrics (latency, memory, cost). Must mention tradeoffs honestly.
No "we're excited" or "game-changing." Technical but readable.
FORMAT: Title + intro paragraph + 4 sections with H2 headers + conclusion
TASK: Write the blog post.

Example 3: Data Analysis

PERSONA: Data analyst for e-commerce company
CONTEXT: Monthly business review, comparing Feb vs Jan 2026
DATA: [paste CSV or key metrics]
CONSTRAINTS: Only analyze metrics that changed by more than 10%.
Do not speculate on causes without data. Round to 1 decimal.
Include confidence intervals where possible.
FORMAT: Executive summary (3 bullets) + detailed table + recommendations
TASK: Analyze month-over-month changes and identify top 3 action items.
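The CONSTRAINTS band in Example 3 is precise enough to check in code. A minimal sketch of the same rule, with made-up metric names and values:

```python
# Keep only metrics that moved more than 10% month-over-month,
# rounded to 1 decimal -- the CONSTRAINTS rule from Example 3.
# Metric names and values below are invented for illustration.
jan = {"revenue": 120_000, "orders": 4_200, "refund_rate": 2.1}
feb = {"revenue": 138_000, "orders": 4_300, "refund_rate": 1.4}

def significant_changes(prev, curr, threshold_pct=10.0):
    changes = {}
    for key in prev:
        pct = (curr[key] - prev[key]) / prev[key] * 100
        if abs(pct) > threshold_pct:
            changes[key] = round(pct, 1)
    return changes

print(significant_changes(jan, feb))
# {'revenue': 15.0, 'refund_rate': -33.3}  -- orders (+2.4%) is filtered out
```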

Why This Template Works: The Math

The sinc-LLM research models a prompt as a sampled version of your specification signal. The template works because it forces you to sample all 6 specification bands, meeting the Nyquist rate for faithful reconstruction.

The token allocation (42% CONSTRAINTS, 26% FORMAT) matches the empirically observed quality weights across 275 observations. This is not arbitrary; it reflects the actual information density of each band.
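In practice these weights translate directly into a token budget. A sketch using the full band percentages the framework reports in its JSON output, normalized since the published figures do not sum to 100:

```python
# Band weights as reported by the framework (percentages).
WEIGHTS = {
    "CONSTRAINTS": 42.7, "FORMAT": 26.3, "PERSONA": 7.0,
    "CONTEXT": 6.3, "DATA": 3.8, "TASK": 2.8,
}

def allocate_tokens(budget: int) -> dict:
    """Split a prompt token budget across bands, proportional to weight."""
    total = sum(WEIGHTS.values())  # 88.9, so we normalize
    return {band: round(budget * w / total) for band, w in WEIGHTS.items()}

print(allocate_tokens(1000))
# {'CONSTRAINTS': 480, 'FORMAT': 296, 'PERSONA': 79,
#  'CONTEXT': 71, 'DATA': 43, 'TASK': 31}
```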

Auto-Generate from Any Prompt

You do not need to fill the template manually every time. The sinc-LLM transformer takes any raw prompt and decomposes it into the 6 bands automatically. It identifies missing bands and suggests content for them.
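The real decomposition is part of the sinc-LLM package; as a rough illustration of the idea only, a naive detector can flag which bands a raw prompt is missing:

```python
import re

# Naive sketch, NOT the actual sinc-LLM transformer: scan a raw prompt
# for the six band labels and report the ones that are absent.
BANDS = ["PERSONA", "CONTEXT", "DATA", "CONSTRAINTS", "FORMAT", "TASK"]

def missing_bands(prompt: str) -> list:
    found = {b for b in BANDS if re.search(rf"^\s*{b}\s*:", prompt, re.MULTILINE)}
    return [b for b in BANDS if b not in found]

raw = "PERSONA: data analyst\nTASK: summarize the CSV"
print(missing_bands(raw))
# ['CONTEXT', 'DATA', 'CONSTRAINTS', 'FORMAT']
```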

The entire framework is open source on GitHub. Use it in ChatGPT, Claude, or any LLM that accepts text prompts.



Real sinc-LLM Prompt Example

This is the exact JSON format that sinc-LLM uses. Paste any raw prompt at tokencalc.pro to generate one automatically.

{
  "formula": "x(t) = Σ x(nT) · sinc((t - nT) / T)",
  "T": "specification-axis",
  "fragments": [
    {
      "n": 0,
      "t": "PERSONA",
      "x": "You are a ChatGPT power user and template designer. You provide precise, evidence-based analysis with exact numbers and no hedging."
    },
    {
      "n": 1,
      "t": "CONTEXT",
      "x": "This analysis is part of a production system where accuracy determines revenue. The sinc-LLM framework identifies 6 specification bands with measured importance weights."
    },
    {
      "n": 2,
      "t": "DATA",
      "x": "Fragment importance: CONSTRAINTS=42.7%, FORMAT=26.3%, PERSONA=7.0%, CONTEXT=6.3%, DATA=3.8%, TASK=2.8%. SNR formula: 0.588 + 0.267 * G(Z1) * H(Z2) * R(Z3) * G(Z4). Production data: 275 observations, 51 agents."
    },
    {
      "n": 3,
      "t": "CONSTRAINTS",
      "x": "State facts directly. Never hedge with 'I think' or 'probably'. Use exact numbers for every claim. Do not suggest generic solutions. Every recommendation must be specific and verifiable. Include at least 3 MUST/NEVER rules specific to this task."
    },
    {
      "n": 4,
      "t": "FORMAT",
      "x": "Lead with the definitive answer. Use structured headers. Tables for comparisons. Numbered lists for sequences. Code blocks for implementations. No trailing summaries."
    },
    {
      "n": 5,
      "t": "TASK",
      "x": "Create a universal 6-band ChatGPT prompt template for business analysis tasks"
    }
  ]
}
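Once you have a fragment JSON like the one above, reconstructing a flat prompt is a simple join. A sketch using a trimmed-down two-fragment document:

```python
import json

# Load a sinc-LLM fragment document (structure shown above, trimmed here)
# and rebuild a flat prompt by concatenating fragments in band order "n".
doc = json.loads("""
{
  "fragments": [
    {"n": 1, "t": "TASK", "x": "Summarize the metrics."},
    {"n": 0, "t": "PERSONA", "x": "You are a data analyst."}
  ]
}
""")

prompt = "\n\n".join(
    f'{frag["t"]}: {frag["x"]}'
    for frag in sorted(doc["fragments"], key=lambda f: f["n"])
)
print(prompt)
# PERSONA: You are a data analyst.
#
# TASK: Summarize the metrics.
```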
Install: pip install sinc-llm | GitHub | Paper



sinc-LLM applies the Nyquist-Shannon sampling theorem to LLM prompts. Read the spec | pip install sinc-prompt | npm install sinc-prompt
