Odinaka Joy
How To Use LLMs: Advanced Prompting Techniques + Framework for Reliable LLM Outputs

In my previous post, I introduced the basics of Prompt Engineering as a way to talk to Large Language Models (LLMs) so they understand us better and give more useful results.

That post focused on simple foundations:

  • Role/Context - Who the model should be (System)
  • Task/Goal - What the user is asking (Input)
  • Format/Constraints - How the response should look (Output)

But when prompts get longer, tasks become more complex, or outputs must be reliable, we need more advanced techniques.

This post will cover:

  1. The S-I-O → Eval framework for structured prompt design
  2. How advanced prompting techniques fit within this framework
  3. How to test and evaluate prompts (practical examples)

💡 The S-I-O → Eval Framework (Overview)

The S-I-O → Eval framework gives structure to your prompts, ensuring you don’t miss critical elements:

| Component | Purpose |
| --- | --- |
| S → Setup | Defines system message, persona, and context. It primes the model to think and respond correctly. |
| I → Instruction | The task you want the model to perform and how it should approach it (step-by-step reasoning, examples, etc.). |
| O → Output | Specifies output format, level of detail, and constraints. |
| Eval → Evaluation | Measures correctness, consistency, and reliability of the output. |

💡 Advanced Prompting Techniques

Advanced prompting techniques enhance the S-I-O → Eval framework, making instructions more reliable, outputs more structured, and evaluation easier.

🎯 1. Advanced Priming

Priming is like giving the model a warm-up before the real task so it knows how to respond. You set the tone, style, or level of detail first.

Example (step-by-step priming):

  • Set persona: "You are a friendly teacher."
  • Set style: "Use simple words and examples a 12-year-old can follow."
  • Give task: "Explain JavaScript promises."

👉 Output will sound more like a teacher talking to a child, not a textbook.
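The layered priming above can be sketched as a chat message list. This is a minimal, API-agnostic sketch; `build_primed_messages` is a hypothetical helper, and the role/content dictionary shape simply mirrors common chat APIs.

```python
# Sketch: layering persona and style into a system message before the task.
# The message format mirrors common chat APIs; nothing here is model-specific.

def build_primed_messages(persona: str, style: str, task: str) -> list[dict]:
    """Combine persona and style into one system message, then add the task."""
    return [
        {"role": "system", "content": f"{persona} {style}"},
        {"role": "user", "content": task},
    ]

messages = build_primed_messages(
    persona="You are a friendly teacher.",
    style="Use simple words and examples a 12-year-old can follow.",
    task="Explain JavaScript promises.",
)
print(messages[0]["content"])
# → You are a friendly teacher. Use simple words and examples a 12-year-old can follow.
```

Keeping persona and style in the system message means every later user turn inherits the same priming.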

🎯 2. Chain of Density

Chain of Density means starting with a short answer and then progressively enriching it by expanding on key entities. Each step adds more facts, context, or depth. This is great for summarization tasks.

Example:

  • Step 1: "Summarize this blog post in 1 sentence."
  • Step 2: "Now expand it into a paragraph with key examples."
  • Step 3: "Now add more technical details and statistics."
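The three steps above can be run as a loop that feeds each summary back into the next, denser request. One loud assumption: `call_llm` is a placeholder for a real model call, so the strings it returns are stand-ins.

```python
# Sketch: Chain of Density as an iterative loop.
# ASSUMPTION: call_llm is a stand-in for a real LLM client call.

def call_llm(prompt: str) -> str:
    return f"[model response to: {prompt[:40]}...]"  # placeholder response

def chain_of_density(text: str, steps: list[str]) -> str:
    # Start with the shortest possible summary...
    summary = call_llm(f"Summarize this in 1 sentence:\n{text}")
    for instruction in steps:
        # ...then feed it back in and ask for more density each round.
        summary = call_llm(f"{instruction}\nCurrent summary:\n{summary}")
    return summary

final = chain_of_density(
    "… blog post text …",
    steps=[
        "Expand it into a paragraph with key examples.",
        "Add more technical details and statistics.",
    ],
)
```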

🎯 3. Prompt Variables and Templates

Prompt variables are placeholders (like blanks) in a prompt that you can fill in later. This makes the same prompt reusable for many different situations.

Example:

You are a {role}.
Explain {topic} to a {audience_level}.

Fill-ins:

  • {role} = "math tutor"
  • {topic} = "linear regression"
  • {audience_level} = "beginner"

👉 Output: As your math tutor, let me explain linear regression in simple beginner terms…
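Filling the template is plain string substitution; in Python, `str.format` is enough for a sketch like this.

```python
# Sketch: a reusable prompt template with named placeholders.

TEMPLATE = "You are a {role}.\nExplain {topic} to a {audience_level}."

def fill(template: str, **variables: str) -> str:
    """Substitute the placeholder values into the template."""
    return template.format(**variables)

prompt = fill(
    TEMPLATE,
    role="math tutor",
    topic="linear regression",
    audience_level="beginner",
)
print(prompt)
# → You are a math tutor.
#   Explain linear regression to a beginner.
```

Swap in different fill-ins and the same template serves a chemistry professor, an expert audience, or any other combination.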

🎯 4. Prompt Chaining

Prompt chaining means solving a big task in smaller steps, where each answer feeds into the next prompt. This helps the model stay focused and produce more accurate results.

Example (step-by-step chaining):

  • Prompt 1: "Extract the key points from this research paper."
  • Prompt 2: "Summarize those key points in plain English."
  • Prompt 3: "Turn that summary into a blog post."

👉 Each step builds on the last, making the final blog post clearer and more reliable.
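The chain above can be written as a small pipeline where each step's output becomes the next step's input. `call_llm` is again a hypothetical stand-in for a real model call.

```python
# Sketch: prompt chaining as a pipeline.
# ASSUMPTION: call_llm is a stand-in for a real LLM client call.

def call_llm(prompt: str) -> str:
    # Echo the instruction line so the flow is visible without a real model.
    return f"<answer to: {prompt.splitlines()[0]}>"

def chain(document: str, steps: list[str]) -> str:
    result = document
    for step in steps:
        # Each prompt carries the previous step's output as its input.
        result = call_llm(f"{step}\n\n{result}")
    return result

post = chain(
    "… research paper text …",
    steps=[
        "Extract the key points from this research paper.",
        "Summarize those key points in plain English.",
        "Turn that summary into a blog post.",
    ],
)
```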

🎯 5. Compressing Prompts

You can save tokens by inventing short codes that stand for longer instructions. Once you define a code, you can reuse it anywhere in your prompts.

Example:

  • Long: "Simulate a job interview for a backend developer role. Ask me 5 questions one by one and give feedback after each answer."
  • Short: "Simul8: Backend dev interview, 5 Qs, give feedback each time."

👉 This way, the model still knows what to do, but your prompt is shorter and cheaper.
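A quick way to see the saving is to compare rough word counts (a crude proxy for tokens). The `Simul8` shorthand only pays off because its definition is sent once, then reused across many prompts.

```python
# Sketch: comparing the long instruction with its shorthand.
# Word counts via whitespace split are only a rough proxy for tokens.

long_prompt = (
    "Simulate a job interview for a backend developer role. "
    "Ask me 5 questions one by one and give feedback after each answer."
)
short_prompt = "Simul8: Backend dev interview, 5 Qs, give feedback each time."

print(len(long_prompt.split()), "words vs", len(short_prompt.split()), "words")
```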

🎯 6. Emotional Stimuli

By adding an emotional cue, you signal to the model how serious or sensitive the task is. This often makes responses more careful and precise.

Example:

If your explanation is wrong, I might lose my job.
Please explain how to safely deploy a Node.js app to production, step by step.

👉 The emotional phrase “I might lose my job” pushes the model to give a more cautious and reliable answer.

🎯 7. Self-Consistency

Self-consistency means asking the model to generate multiple answers and then choosing the most consistent one. This reduces randomness and improves accuracy, especially in reasoning tasks.

Example:

Solve 27 Γ— 14.
Generate 3 different reasoning paths and return the most consistent answer.

👉 If two answers say 378 and one says something else, the model goes with the majority (378).
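The majority vote itself is simple to sketch. Here the `sampled` list stands in for answers drawn from three separate model runs.

```python
# Sketch: self-consistency as a majority vote over sampled answers.
from collections import Counter

def majority_answer(answers: list[str]) -> str:
    """Return the answer that appears most often across reasoning paths."""
    return Counter(answers).most_common(1)[0][0]

# Stand-in for three sampled reasoning paths for 27 × 14:
sampled = ["378", "378", "368"]
print(majority_answer(sampled))
# → 378
```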

🎯 8. ReAct Prompting

ReAct combines reasoning (thinking step by step) with actions (like calling an API or tool) to solve problems. This keeps the model logical while also letting it interact with the outside world.

Example:

```
Q: If there are 12 apples and you give away 4, how many are left?
A:
Thought: This is a simple subtraction problem. I should compute how many remain.
Action: Calculate 12 - 4.
Observation: 12 - 4 = 8.
Final Answer: 8

---

Now solve:
Q: If you have 15 books and lend out 6, how many are left?
A:
```

👉 The model doesn’t just guess; it reasons, takes an action, and then answers. This Thought ➞ Action ➞ Observation loop may repeat until the Final Answer is returned.
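The loop itself can be sketched in a few lines. In a real agent the thought and action come from the model on each turn; here a hard-coded rule stands in, and the only tool is a toy calculator.

```python
# Sketch: one Thought -> Action -> Observation turn of ReAct.
# ASSUMPTION: a real agent would get thought/action from the model each turn.

def calculator(expression: str) -> int:
    return eval(expression)  # toy tool only; never eval untrusted input

def react(question: str) -> int:
    thought = "This is a simple subtraction problem. I should compute it."
    action = "15 - 6"                 # Action: send to the calculator tool
    observation = calculator(action)  # Observation: the tool's result
    print(f"Thought: {thought}")
    print(f"Action: Calculate {action}")
    print(f"Observation: {observation}")
    return observation                # Final Answer

print("Final Answer:", react("If you have 15 books and lend out 6, how many are left?"))
```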

🎯 9. ReAct + CoT-SC (ReAct + Chain-of-Thought-Self-Consistency)

This method means letting the model think + act (ReAct) several times, then choosing the answer that shows up most often. By combining step-by-step reasoning with a majority vote, the final result is more accurate and reliable.

How:

  • System Prompt:

```
You are a highly capable AI assistant. For every question or task:
1. Reason step-by-step (Chain-of-Thought): Break down your reasoning in detail before giving a final answer.
2. Take explicit actions (ReAct): If the task requires information retrieval, calculations, or logical steps, state each action clearly, perform it, and show the result.
3. Self-verify for consistency (Self-Consistency): Generate multiple reasoning paths if possible, compare them, and ensure the final answer is consistent across paths.
4. Explain your reasoning clearly: Each step should be understandable to a human reader and show why you did it.
5. Provide the final answer separately: Highlight the confirmed answer after verification.

Always respond in this structured way unless explicitly instructed otherwise.
```
  • Example Input for the LLM:

```
Question: Solve 27 × 14 and show your reasoning.
```
  • Expected Output:

```
Step 1: Path 1 – Standard multiplication...
Step 2: Path 2 – Using distribution...
Step 3: Path 3 – Using decomposition...
✅ Consistent Answer: 378
```

🎯 10. Tree of Thought (ToT)

ToT branches the model’s reasoning into several alternative paths, explores each one, then prunes or combines them to pick the best option. It brainstorms multiple solutions before choosing. This avoids tunnel vision and finds more creative or robust answers for complex problems.

How:

  • System Prompt:

```
You are a highly capable AI business advisor. For any startup or business planning question:

1. Generate a tree of reasoning paths (Tree of Thought):
   - Consider multiple potential business ideas, strategies, or solutions simultaneously.
   - Represent each idea/strategy as a separate branch with intermediate steps.
2. Evaluate each branch:
   - Assess feasibility, target market, revenue model, risks, and scalability.
   - Prune weak or unrealistic branches, and highlight promising ones.
3. Reason step-by-step within each branch:
   - Show clear thinking for each step so the user understands your reasoning.
4. Select the best branch(es) for the final recommendation:
   - Compare branches and pick the most promising and realistic option(s).
5. Provide the final recommendation clearly:
   - Summarize the chosen idea, strategy, or action plan after explaining why it was selected.

Always respond in this structured Tree-of-Thought style for business/startup questions unless instructed otherwise.
```
  • Example Input for the LLM:

```
I want to start a new tech startup. Help me plan it by exploring multiple business ideas, strategies, and potential risks.
```
  • Expected Output:

```
Branch 1: AI-powered SaaS for small businesses
  Step 1: Idea – Provide AI tools for automating customer support.
  Step 2: Target Market – Small businesses with limited tech teams.
  Step 3: Revenue Model – Subscription-based pricing.
  Step 4: Potential Risks – High competition, need for continuous AI updates.
  Step 5: Evaluation – Promising niche, moderate risk, scalable.

Branch 2: Marketplace for freelance developers
  Step 1: Idea – Connect companies with vetted freelance developers globally.
  Step 2: Target Market – Startups and SMEs needing flexible dev talent.
  Step 3: Revenue Model – Commission on projects + premium membership.
  Step 4: Potential Risks – Platform trust/reliability, network effects.
  Step 5: Evaluation – High growth potential, requires strong marketing.

Branch 3: Green tech solution for energy efficiency
  Step 1: Idea – IoT devices for monitoring and reducing energy consumption.
  Step 2: Target Market – Offices and commercial buildings.
  Step 3: Revenue Model – Hardware + subscription for analytics.
  Step 4: Potential Risks – High initial costs, slower adoption rate.
  Step 5: Evaluation – Strong social impact, capital-intensive.

✅ Selected Branch: Branch 1 – AI-powered SaaS for small businesses.
Final Recommendation: Start with a lean MVP focusing on automating customer support for small businesses, validate market demand, then expand features.
```

👉 Use ToT when you want the model to explore alternatives instead of committing to the first idea.
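Stripped of the model calls, the ToT control flow is generate, evaluate, prune, select. The scores below are hand-assigned placeholders for a model's branch evaluations.

```python
# Sketch: Tree of Thought control flow with placeholder branch scores.

branches = [
    {"idea": "AI-powered SaaS for small businesses", "score": 0.8},
    {"idea": "Marketplace for freelance developers", "score": 0.7},
    {"idea": "Green tech for energy efficiency", "score": 0.5},
]

def prune(branches: list[dict], threshold: float = 0.6) -> list[dict]:
    """Drop branches whose evaluation falls below the threshold."""
    return [b for b in branches if b["score"] >= threshold]

def select_best(branches: list[dict]) -> dict:
    return max(branches, key=lambda b: b["score"])

best = select_best(prune(branches))
print(best["idea"])
# → AI-powered SaaS for small businesses
```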


💡 Advanced Prompting Techniques within the Framework

Here’s how techniques map to different parts of the framework:

Setup (S) – Priming default behavior

  • Advanced Priming: Set persona, tone, or style upfront.
  • ReAct + CoT-SC: Make the model reason, act, and self-verify automatically.
  • Emotional Stimuli: Encourage careful, precise answers by signaling importance or risk.

Example (combined Setup prompt):

```
You are a highly capable AI assistant. For every task:
1. Reason step-by-step (Chain-of-Thought)
2. Take explicit actions if needed (ReAct)
3. Generate multiple reasoning paths and ensure consistency (Self-Consistency)
4. Explain each step clearly
5. Provide the final answer separately
```


Instruction (I) – Task-specific guidance

  • Prompt Variables and Templates: Make prompts reusable for different roles, topics, or audience levels.
  • Prompt Chaining: Break complex tasks into smaller steps; feed each output into the next prompt.
  • Chain of Density: Gradually expand answers from short to detailed for summarization or explanation tasks.

Example (Instruction using chaining & variables):

```
You are a {role}.
"""
TASK: Explain {topic} to a {audience_level}
"""
Example Output:
As your {role}, let me explain {topic} in simple {audience_level} terms…
```


Output (O) – Structuring results

  • Format enforcement: JSON, Markdown, tables, bullet points.
  • Length/detail control: Short summary vs full explanation.
  • Restrictions: Avoid hallucinations, personal opinions, or off-topic content.

Example:

```
Step 1: Path 1 – Standard multiplication
Step 2: Path 2 – Using distribution
Step 3: Path 3 – Using decomposition
✅ Consistent Answer: 378
```


Evaluation (Eval) – Testing and refining

  • Check vulnerabilities: hallucinations, bias, math/logic errors, weak sourcing.
  • Prompt testing: Run with multiple inputs and edge cases. Refine instructions if outputs fail.

Example Test Cases:

```
Input: const numbers = [1, 2, 3]
Expected: [2, 4, 6]
Edge Case: const numbers = []
Check: Model explains correctly without hallucinating.
```
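Those test cases can be run as a tiny harness. `run_prompt` is a hypothetical stand-in for sending the prompt plus input to a model and parsing its answer; here it simply doubles the list so the assertions can execute.

```python
# Sketch: a minimal prompt test harness.
# ASSUMPTION: run_prompt stands in for a model call plus answer parsing.

def run_prompt(numbers: list[int]) -> list[int]:
    return [n * 2 for n in numbers]  # placeholder for the model's answer

test_cases = [
    ([1, 2, 3], [2, 4, 6]),  # expected case
    ([], []),                # edge case: empty input
]

for given, expected in test_cases:
    got = run_prompt(given)
    assert got == expected, f"{given} -> {got}, expected {expected}"
print("all prompt tests passed")
```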

💡 Examples of Techniques in Action

Here’s a brief mapping of common advanced techniques and where they fit:

| Technique | Framework Focus | How it helps |
| --- | --- | --- |
| ReAct | Setup + Instruction | Combines reasoning + actions for reliable problem-solving |
| Chain-of-Thought (CoT) | Setup + Instruction | Guides step-by-step reasoning |
| Self-Consistency (SC) | Setup | Reduces randomness; chooses the majority answer across multiple reasoning paths |
| Prompt Chaining | Instruction | Handles complex tasks in smaller, manageable steps |
| Prompt Variables/Templates | Instruction | Makes prompts reusable and flexible |
| Chain of Density | Instruction | Builds richer, more detailed answers gradually |
| Tree of Thought (ToT) | Setup + Instruction | Explores multiple reasoning paths, evaluates, and selects the best option |
| Emotional Stimuli | Setup | Encourages careful, high-stakes reasoning |
| Compressing Prompts | Instruction | Saves tokens while preserving meaning |

Summary

Advanced prompting techniques do not work in isolation; they are most effective within the S-I-O → Eval framework.

  • Setup primes the model with reasoning and action defaults.
  • Instruction shapes task-specific reasoning and guides step-by-step solutions.
  • Output ensures results are structured and consistent.
  • Evaluation tests prompts and allows continuous refinement.

These strategies help move from simply talking to LLMs to building reliable AI workflows, especially in multi-step reasoning, RAG systems, or production-grade applications.

To keep this post focused, I left out how to test and evaluate prompts with real-world tools (like PromptFoo). That will be the topic for another post.

Happy coding!!!
