Advanced Prompting Techniques — Automating Reasoning & Persona-Based AI (Part 3)

In Part 1, we learned what LLMs are and how they work.
In Part 2, we explored basic prompting and Chain-of-Thought reasoning.

Now in Part 3, we go deeper into advanced prompting techniques that help you build Agentic AI systems instead of simple chatbots.

In this section, we dive into one of the most important skills in the Agentic AI journey: Prompting.

A good prompt can improve your LLM’s output by 10x–20x in quality and accuracy.
A bad prompt leads to vague, unpredictable, or wrong answers.

In this guide, you’ll learn:

  • What prompting really means
  • Why system prompts matter
  • Zero-shot prompting
  • Few-shot prompting
  • Chain-of-Thought (CoT) prompting

1. What Is Prompting?

A prompt is the instruction you give to an LLM to control how it behaves and responds.

Without a prompt, the model behaves like a free-flowing chatbot:

“Hey, who are you?”
→ It can answer anything: math, jokes, code, history… anything.

That’s not ideal in real applications.

Instead, we give a System Prompt — a special instruction that sets context and boundaries.

Example: System Prompt

You are an expert in mathematics.
Only answer math-related questions.
If the user asks anything else, say "Sorry, I can only help with math."
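Here’s a minimal sketch of how that system prompt is attached to an API call (the API key is a placeholder, and the model name follows the examples later in this post; any chat-completions-compatible model works the same way):

from openai import OpenAI

client = OpenAI(api_key="API_KEY")  # placeholder key

response = client.chat.completions.create(
    model="gemini-3-flash-preview",
    messages=[
        {
            "role": "system",
            "content": (
                "You are an expert in mathematics. "
                "Only answer math-related questions. "
                "If the user asks anything else, say "
                "'Sorry, I can only help with math.'"
            )
        },
        {"role": "user", "content": "What is the derivative of x^2?"}
    ]
)

print(response.choices[0].message.content)

With the system prompt in place, the same model that would happily tell jokes now refuses anything outside mathematics.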

2. Zero-Shot Prompting

Zero-shot prompting means giving direct instructions with no examples.

You tell the model exactly what to do.

Example:

from openai import OpenAI

client = OpenAI(api_key="API_KEY")

def ask_model(message):
    # Zero-shot prompt: give the instruction directly, with no examples
    SYSTEM_PROMPT = (
        "You should only answer coding-related questions. "
        "If the user asks anything else, reply: "
        "'Sorry, I can only answer coding-related questions.'"
    )

    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message}
        ]
    )

    return response.choices[0].message.content

print(ask_model("Hey, I am Prabhas"))
print(ask_model("Write a program to calculate the factorial of 5 in Java"))

Output:
1. Sorry, I can only answer coding-related questions. Please let me know if you have any questions regarding programming, algorithms, or software development.

2. Here is a simple Java program to calculate the factorial of 5 using a `for` loop:

public class Main {
    public static void main(String[] args) {
        int number = 5;
        long factorial = 1;

        for (int i = 1; i <= number; i++) {
            factorial *= i;
        }

        System.out.println("The factorial of " + number + " is: " + factorial);
    }
}
  • No examples
  • Direct command
  • Fast, but less accurate than other methods

Use Zero-Shot when:

  • The task is simple
  • You don’t need much reasoning

3. Few-Shot Prompting

Few-shot prompting provides examples along with instructions.

This teaches the model how you want it to behave.

Example:

from openai import OpenAI

client = OpenAI(api_key="API_KEY")

SYSTEM_PROMPT = """
You should only and only answer coding related questions.
If the user asks something other than coding, return JSON with:
- code: null
- isCodingQuestion: false

Rules:
- Strictly follow the output in JSON format.

Output format:
{
  "code": "string" or null,
  "isCodingQuestion": boolean
}

Examples:
Q. Can you explain a + b whole square?
A. {
  "code": null,
  "isCodingQuestion": false
}

Q. Write a Python function to add two numbers.
A. {
  "code": "def add(a, b): return a + b",
  "isCodingQuestion": true
}
"""

def ask_model(user_message: str) -> str:
    """
    Sends a prompt to the LLM and returns the response.
    """
    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content


# Example usage
print(ask_model("Hey, I am Prabhas"))
print(ask_model("Write a Python function to reverse a string"))

Output:
1. {
  "code": null,
  "isCodingQuestion": false
}

2. {
  "code": "def reverse_string(s): return s[::-1]",
  "isCodingQuestion": true
}
  • Uses examples
  • Much higher accuracy
  • Widely used in production systems

Use Few-Shot when:

  • You want consistency
  • You need better control over answers
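Because the few-shot prompt pins the output to strict JSON, downstream code can consume it directly. Here’s a minimal sketch (it assumes the model returns bare JSON with no markdown fences, which is not guaranteed in practice):

import json

raw = ask_model("Write a Python function to reverse a string")
result = json.loads(raw)  # raises ValueError if the model adds ``` fences

if result["isCodingQuestion"]:
    print(result["code"])
else:
    print("Not a coding question.")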

4. Chain-of-Thought (CoT) Prompting

This is my personal favorite 💡

Chain-of-Thought prompting forces the model to think step-by-step before answering.

Instead of jumping straight to the answer, the model:

  • Analyzes the problem
  • Plans the solution
  • Executes step by step
  • Produces the final output

Example System Prompt:

import json 
from openai import OpenAI

client = OpenAI(api_key="API_KEY")

# Chain-of-Thought prompt: asking model to think step by step
SYSTEM_PROMPT = """
You are a helpful coding assistant. When solving problems, think step by step and explain your reasoning clearly.

Follow this format:
1. Understand the problem
2. Break it down into steps
3. Write the solution
4. Explain the approach

Always provide code in JSON format:
{
  "thinking": "Your step-by-step reasoning",
  "code": "Your code solution",
  "explanation": "Brief explanation of the solution"
}
"""

def ask_model_with_cot(user_message: str) -> str:
    """
    Sends a Chain-of-Thought style prompt to the LLM and returns the response.
    """
    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content


# Example usage
response = ask_model_with_cot("How do I reverse a string in Python?")
result = json.loads(response)  # the function already returns the message content

print("Thinking:", result["thinking"])
print("\nCode:")
print(result["code"])
print("\nExplanation:", result["explanation"])


Output:

Thinking: To reverse a string in Python, the most efficient and common method is using string slicing. Since strings are immutable, we create a new string that traverses the original string from the last character to the first. 

1. **Slicing Method**: Use the syntax `[start:stop:step]`. By setting the step to `-1`, Python moves through the string backwards.
2. **Built-in Function**: Use `reversed()` which returns an iterator, and then `join()` it back into a string.
3. **Looping**: Manually build a new string by prepending characters (though this is less efficient).

Code:
# Method 1: String Slicing (Recommended)
original_string = "Hello World"
reversed_string = original_string[::-1]
print(f"Slicing method: {reversed_string}")

# Method 2: reversed() and join()
reversed_string_2 = "".join(reversed(original_string))
print(f"Reversed function method: {reversed_string_2}")

Explanation: The most Pythonic way is `string[::-1]`. This slicing notation starts at the end of the string and moves toward the beginning with a step of -1. Alternatively, `"".join(reversed(string))` uses a built-in function to iterate backwards and joins the characters into a new string. 

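One practical caveat before parsing: models often wrap JSON output in markdown fences (```json … ```), which makes a bare json.loads call fail. A small tolerant parser (a sketch, not part of the original example) handles both cases:

import json

def parse_json_response(text: str) -> dict:
    """Parse model output that may or may not be wrapped in markdown fences."""
    text = text.strip()
    if text.startswith("```"):
        # Keep only what sits between the opening and closing fences,
        # dropping an optional "json" language tag
        text = text.split("```")[1]
        if text.startswith("json"):
            text = text[len("json"):]
    return json.loads(text)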

Why CoT Is Powerful
Instead of this: "Here’s the answer."

You get this: “Let me think… first I’ll analyze the problem… then plan… then solve…”

  • More human-like
  • Higher reasoning accuracy
  • Used in reasoning models like OpenAI o1/o3 and DeepSeek-R1

5. Automating Chain-of-Thought (Agent-Style Reasoning)

In the CoT part, we manually asked the model to “think step by step.”
That works — but it’s not scalable.

The real problem is this:
You can’t keep adding prompts again and again by hand.

So the next step is obvious… automate it.

Instead of treating the LLM like a one-shot chatbot, we turn it into a small reasoning engine that plans, thinks, and then answers.

The Core Idea

We give the model a fixed reasoning framework and tell it:

  • Always analyze first
  • Always plan before answering
  • Always justify the solution

Step 1: Create an Auto-Reasoning System Prompt

Here’s the automated Chain-of-Thought system prompt:

SYSTEM_PROMPT = """
You are an intelligent reasoning engine. For any problem given, follow this automatic reasoning process:

REASONING FRAMEWORK:
1. Problem Analysis: Break down the problem into components
2. Information Gathering: Identify what information is needed
3. Hypothesis Formation: Generate potential solutions
4. Evaluation: Compare solutions against criteria
5. Conclusion: Recommend the best approach

RESPONSE FORMAT:
{
    "problem": "restate the problem",
    "analysis": {
        "key_components": ["component 1", "component 2"],
        "constraints": ["constraint 1"],
        "requirements": ["requirement 1"]
    },
    "reasoning_steps": [
        {
            "step": 1,
            "action": "what to consider",
            "finding": "what we discover"
        }
    ],
    "solution": {
        "approach": "chosen approach",
        "code": "implementation",
        "reasoning": "why this approach is best"
    },
    "alternatives": [
        {
            "approach": "alternative approach",
            "pros": ["pro 1"],
            "cons": ["con 1"]
        }
    ]
}

IMPORTANT: Always show your reasoning explicitly. Do not jump to conclusions.
"""

Step 2: Call the Model from Python

Now plug this prompt into the program:

from openai import OpenAI

client = OpenAI(api_key="API_KEY")

response = client.chat.completions.create(
    model="gemini-3-flash-preview",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How do I optimize a React Native app?"}
    ]
)

print(response.choices[0].message.content)


What’s Happening Internally

For every question, the model:

  1. Restates the problem
  2. Breaks it into parts
  3. Generates multiple solution ideas
  4. Evaluates them
  5. Then gives the final answer

With this pattern, your LLM now:

  • Thinks before answering
  • Plans instead of guessing
  • Evaluates instead of reacting
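To make this pattern reusable, wrap the call in a small helper that always applies the framework and returns parsed JSON. A minimal sketch (it reuses SYSTEM_PROMPT from Step 1 and assumes the model returns bare JSON; in practice you may want the tolerant parser from the CoT section):

import json
from openai import OpenAI

client = OpenAI(api_key="API_KEY")

def reason(question: str) -> dict:
    """Run any question through the fixed reasoning framework."""
    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # the Step 1 framework prompt
            {"role": "user", "content": question}
        ]
    )
    return json.loads(response.choices[0].message.content)

result = reason("How do I optimize a React Native app?")
for step in result["reasoning_steps"]:
    print(f"Step {step['step']}: {step['action']} -> {step['finding']}")
print("\nRecommended approach:", result["solution"]["approach"])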

Final Thoughts

Prompting isn’t about clever wording.
It’s about designing thought processes.

When you move from “Give me the answer” to
“Here’s how to analyze, reason, and decide,”
you unlock a completely different level of output quality.

Manual Chain-of-Thought is a great start.
But automated reasoning is where real systems are built.

So next time you write a prompt, don’t ask:

❌ What’s the answer?

Ask instead:

✅ How should the model think?

Because the future of AI isn’t better responses —
it’s better reasoning.
