I didn’t start learning Prompt Engineering to become an expert. I started because I kept getting inconsistent results while using AI tools in real projects. While reading about Prompt Engineering, I came across several prompting techniques that I found interesting and useful, and I wanted to share what I learned.
We are all very familiar with prompts. We use them almost every day while working with ChatGPT, Gemini, Claude, and other AI models. Every AI model needs some form of input to generate an output. Let's start with the basics.
What is a prompt?
It is an instruction given to an AI model to produce an output. It could be as simple as “Which is the best smartphone of 2026?” or as complex as “Act as a frontend engineer and create a portfolio website with these specifications.”
What is Prompt Engineering?
Prompt engineering is the process of crafting and refining prompts to get more accurate and useful outputs from generative AI models.
Why is Prompt Engineering important?
AI models respond based on how clearly the task is defined. Prompt engineering helps bridge the gap between vague, open-ended queries and precise, goal-oriented instructions.
For example, which one would give a better result?
“What are Samsung’s sales in 2026?”
vs
“Act as a senior sales manager, analyze Samsung’s Q1 2026 performance, identify trends and shortcomings, and present the data in a table.”
The second prompt provides context, a role, and the expected output, making it far more actionable.
Don't give up when your prompt doesn't bring you the desired output. Instead, try one of the prompting techniques I shared below.
Good Prompting Techniques
Here are 8 prompting techniques that I liked and have used while working with AI models to get better results.
- Zero-shot Prompting
In this technique, no examples are provided; the model relies solely on its existing knowledge.
Prompt:
Translate this to French: “Hello”
- One-shot Prompting
In this technique, the model is given a single example so it can infer the pattern and apply it to the new input.
Prompt:
Translate English to French.
Example:
English: “Good morning” → French: “Bonjour”
Now translate:
English: “Hello”
- Few-shot Prompting
In this technique, the model is given multiple examples, which provides richer context and usually yields a more accurate answer.
Prompt:
Translate English to French.
English: “Good morning” → French: “Bonjour”
English: “Thank you” → French: “Merci”
English: “Good night” → French: “Bonne nuit”
Now translate:
English: “Hello”
I noticed this works especially well when the model’s first response feels slightly off.
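The few-shot pattern above is easy to generate programmatically instead of typing the examples by hand each time. Here is a minimal Python sketch; the function name and formatting are my own, not from any particular library:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples,
    then the new input to translate."""
    lines = [task]
    for source, target in examples:
        lines.append(f'English: "{source}" → French: "{target}"')
    lines.append("Now translate:")
    lines.append(f'English: "{query}"')
    return "\n".join(lines)

examples = [
    ("Good morning", "Bonjour"),
    ("Thank you", "Merci"),
    ("Good night", "Bonne nuit"),
]
prompt = build_few_shot_prompt("Translate English to French.", examples, "Hello")
print(prompt)
```

Keeping the examples in a list like this also makes it easy to swap in domain-specific examples when the model's answers drift.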
- Chain-of-Thought Prompting
This technique encourages the model to reason step by step before answering, which helps with more complex tasks.
Prompt:
Translate the following sentence to French.
Think step by step before giving the final answer.
Sentence: “Good morning”
I found this useful for logic-heavy or multi-step problems.
- Emotion Prompting
In this technique, the model is given emotional or situational context so it responds in a more thoughtful way rather than a direct or insensitive one.
Prompt:
Translate the following sentence to French.
Imagine you are helping a beginner who is excited to learn a new language. This is important as I have a French exam tomorrow.
Sentence: “Good morning”
This helped improve tone when the default response felt too robotic.
- Role-based Prompting
This is one of the most common prompting techniques. It usually starts with phrases like “Act as __” or “You are a __”. It helps control tone, style, and depth.
Prompt:
Act as a professional French language teacher.
Translate the following sentence to French.
Sentence: “Good morning”
I used this often when I needed more structured or professional responses.
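In chat-style APIs, role-based prompting usually maps onto the system message: the role goes in the system message and the task in the user message. A sketch of that mapping, assuming an OpenAI-style message format (the helper function is my own):

```python
def build_messages(role, instruction, text):
    """Map a role-based prompt onto chat-API message roles:
    the persona goes in the system message, the task in the user message."""
    return [
        {"role": "system", "content": role},
        {"role": "user", "content": f'{instruction}\nSentence: "{text}"'},
    ]

messages = build_messages(
    "Act as a professional French language teacher.",
    "Translate the following sentence to French.",
    "Good morning",
)
# This list can then be passed as the `messages` argument to a
# chat completion endpoint.
```

Separating the role into the system message tends to keep the persona stable across a multi-turn conversation, rather than restating it in every user message.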
- Rephrase and Respond (RaR) Prompting
Instead of trying to write the perfect prompt, we let the model rephrase the task in its own words and then answer it.
Prompt:
First, rephrase the task in your own words.
Then, translate the sentence to French.
Sentence: “Good morning”
This helped when I wasn’t sure how to phrase my request clearly.
- Chain-of-Dictionary (CoD) Prompting
A less common technique where explicit word meanings are provided before forming the final translation. It’s useful in multilingual contexts.
Prompt:
Translate the sentence using a dictionary-style breakdown.
Step 1: List key words and their French meanings.
Step 2: Combine them into a natural French sentence.
Sentence: “Good morning”
An AI is like a high-performance engine; prompt engineering is how you learn to drive it. All of these techniques work well individually, but combining one or two thoughtfully can significantly improve results.
Combining Prompting Techniques
Combining prompts means adding the best parts of multiple techniques into a single prompt to help the AI understand complex tasks better and generate more accurate outputs.
Example: Role-based + Instructions
Prompt:
Role: You are a professional French language teacher who specializes in clear, beginner-friendly translations.
Instruction: Translate the following English sentence into French, keeping the tone simple and natural for beginners.
Text: “Hello”
Example: Context + Instruction + Few-shot Prompting
Prompt:
Context: You are translating short English sentences into French for a travel phrasebook. The translations should be natural, conversational, and suitable for everyday use.
Instruction: Translate the English sentence into French following the style of the examples below.
English: “Good morning” → French: “Bonjour”
English: “Thank you” → French: “Merci”
English: “Good night” → French: “Bonne nuit”
Now translate:
English: “Hello”
I found this approach useful when tasks became more complex and needed consistency.
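In code, combining techniques is just concatenating the labeled sections in a sensible order. A minimal sketch of the Context + Instruction + Few-shot example above (the `combine` helper and section layout are my own convention, not a standard):

```python
def combine(*sections):
    """Join prompt sections (role, context, instruction, examples, query)
    into one prompt, skipping any that are empty."""
    return "\n\n".join(s for s in sections if s)

role = "Role: You are a professional French language teacher."
context = ("Context: You are translating short English sentences into French "
           "for a travel phrasebook.")
examples = "\n".join([
    'English: "Good morning" → French: "Bonjour"',
    'English: "Thank you" → French: "Merci"',
])
prompt = combine(
    role,
    context,
    "Instruction: Translate the English sentence into French "
    "following the style of the examples below.",
    examples,
    'Now translate:\nEnglish: "Hello"',
)
print(prompt)
```

Because each section is a separate argument, you can drop or swap pieces (remove the role, add chain-of-thought) without rewriting the whole prompt.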
Bad Prompting Techniques (Things I Learned to Avoid)
Avoid these as much as possible to get better results.
- Overly Vague Prompts
Too broad and usually leads to generic output.
Prompt:
Build me a website
- Contradictory Instructions
Confuses the model.
Prompt:
Make it minimal but add a lot of animations and details.
- Overloading a Single Prompt
Trying to solve design, logic, deployment, and testing in one prompt often leads to messy results.
Prompt:
Act as a senior full-stack developer.
Design a portfolio website with animations, build the frontend in React, create the backend APIs, deploy it to the cloud, write tests, optimize performance, fix any errors, and make it SEO-friendly.
I realized smaller prompts worked better.
- Blind Trust in Output
Assuming the model is always right without reviewing or validating the result.
Prompt:
Generate production-ready React code for authentication.
Do not explain anything. Just give the final answer.
This usually required manual fixes later.
I learned that prompt engineering isn't about writing clever prompts; it's about clear thinking. I'm still learning, and I'm sure there are better ways to do this. For now, breaking problems down, giving context, and reviewing outputs has worked best for me.
Would love to know: What prompting techniques have worked best for you, and which ones completely failed?