In my previous article, I explained what LLMs are and how to use them to build smart applications.
But just using an LLM isn’t enough. We need to communicate with it clearly and strategically.
That’s where Prompt Engineering comes in.
💡 What is Prompt Engineering?
Prompts are instructions and context (clear and structured inputs) provided to a language model for a certain task.
Prompt engineering is the practice of crafting and refining prompts so a language model can generate outputs that are useful, accurate, and relevant.
LLMs are very powerful assistants, but they need smart instructions to produce quality results: the quality of what we get depends on how we ask.
Why it matters:
- It saves time and frustration
- It reduces irrelevant or wrong answers
- It unlocks advanced LLM capabilities like reasoning, coding, creativity
- It is an essential skill for developers building AI-powered apps 😊
📌 Understanding How LLMs Respond to Prompts
Before writing prompts, know that LLMs:
- Predict the next token based on probability - they don't "understand" like humans; they pattern-match words to generate content.
- Rely heavily on context - the quality of your input determines the quality of the output. Better prompts = better outputs.
- Don't know your intent unless you tell them - LLMs don't read our minds, and ambiguity leads to confusion. If you are vague, the model will be vague 😄.
📌 Hierarchy of Instructions
When you give an LLM instructions, they are not all treated equally. There’s a hierarchy that decides which rules the model follows first and which ones get ignored in the case of conflicts.
1. System Instructions (Highest Priority) - Set by the model provider (e.g., OpenAI, Anthropic) and invisible to the user. They define core behavior, safety rules, and identity, and cannot be overridden.
2. Developer Instructions - Set by the app developer through the API or an integration. They control tone, style, and behavior for a specific app.
3. User Instructions - Direct requests from the person interacting with the model. They can override some developer rules but never system rules.
4. Contextual/Embedded Instructions (Lowest Priority) - Found in documents, chat history, or examples. These are the weakest in priority and are easily overridden.
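In practice, when you call a chat-style API such as OpenAI's Chat Completions (used later in this post), the developer and user layers map onto message roles; the provider-level system instructions are applied by the model provider and are not something you send yourself. Here's a minimal sketch, with a hypothetical recipe-app assistant as the example:

```javascript
import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    // Developer instructions: set by the app, control tone and behavior
    {
      role: "system",
      content: "You are a concise assistant for a recipe app. Always answer in plain English.",
    },
    // User instructions: what the person actually asked
    { role: "user", content: "How do I make pancakes fluffier?" },
    // Contextual/embedded content (lowest priority) would typically be pasted
    // into a user message, e.g. a document or earlier chat history.
  ],
});

console.log(response.choices[0]?.message?.content);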
📌 Core Components of a Good Prompt
A good prompt often has 3 parts:
| Component | Purpose | Example |
| --- | --- | --- |
| Role/Context | Tell the model who it is or what perspective to take | "You are a professional backend engineer…" |
| Task/Goal | The exact thing you want done | "Explain microservices in simple terms." |
| Format/Constraints | How you want the output delivered | "Use bullet points, under 200 words." |
Example:
You are a career coach with 10 years of experience.
Explain to a fresh graduate how to prepare for a software engineering interview.
Give me 5 bullet points and a short motivational ending.
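If you are building prompts programmatically, the same three parts can be assembled from a simple template. A minimal sketch, where `buildPrompt` is just an illustrative helper, not a library function:

```javascript
// Hypothetical helper that joins the three components into one prompt string.
function buildPrompt({ role, task, format }) {
  return `${role}\n\n${task}\n\n${format}`;
}

const prompt = buildPrompt({
  role: "You are a career coach with 10 years of experience.",
  task: "Explain to a fresh graduate how to prepare for a software engineering interview.",
  format: "Give me 5 bullet points and a short motivational ending.",
});

console.log(prompt);
```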
📌 Basic Prompting Techniques (With Examples)
Prompting techniques are styles or strategies for writing prompts.
1. Zero-shot prompting
Ask the model to perform a task with no example, just instructions.
Translate this sentence to French: "I love programming."
Use this when the task is simple and clear.
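With the OpenAI SDK, a zero-shot prompt is just a single user message, roughly like this (reusing the `openai` client from the sketch above; the French output in the comment is only an example):

```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    // No examples, just the instruction and the input
    { role: "user", content: 'Translate this sentence to French: "I love programming."' },
  ],
});

console.log(response.choices[0]?.message?.content); // e.g. "J'adore programmer."
```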
2. One-shot prompting
Give one example before asking the model to perform the same task again.
Translate this sentence to French:
English: "I love cats."
French: "J'aime les chats."
English: "I love programming."
French:
This is good for moderately complex tasks where one example helps show the pattern.
3. Few-shot prompting
Provide a few examples to help the model understand the expected format or logic. Research suggests that between five and eight examples is usually ideal.
Convert the following to a formal business email tone:
Casual: "Need the report by tomorrow."
Formal: "Kindly ensure the report is ready by tomorrow."
Casual: "Can't make the meeting."
Formal: "Unfortunately, I won’t be able to attend the meeting."
Casual: "What's the update on the task?"
Formal:
This is great when:
- We need consistency.
- The task involves writing style.
- We want the model to follow a specific structure.
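With a chat API, one common way to implement few-shot prompting is to send the examples as alternating user/assistant messages so the model can infer the pattern. A rough sketch using the formal-email example above (again reusing the `openai` client from the earlier sketch):

```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: "Convert casual messages to a formal business email tone." },
    // Each pair shows an input (user) and the expected output (assistant)
    { role: "user", content: "Need the report by tomorrow." },
    { role: "assistant", content: "Kindly ensure the report is ready by tomorrow." },
    { role: "user", content: "Can't make the meeting." },
    { role: "assistant", content: "Unfortunately, I won't be able to attend the meeting." },
    // The actual input we want converted
    { role: "user", content: "What's the update on the task?" },
  ],
});

console.log(response.choices[0]?.message?.content);
```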
4. Chain-of-thought prompting
Ask the model to work through the reasoning steps before answering, for example by adding "think step by step" to the prompt.
Question: If Sarah has 3 apples and buys 4 more, then gives 2 to her friend, how many apples does she have?
Answer: Let's think step by step.
This is best for tasks that require reasoning, calculation, or logic.
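In code, chain-of-thought prompting is often just a matter of appending the reasoning instruction to the question, something like this (reusing the `openai` client from earlier):

```javascript
const question =
  "If Sarah has 3 apples and buys 4 more, then gives 2 to her friend, how many apples does she have?";

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    // Appending "Let's think step by step." nudges the model to show its reasoning
    { role: "user", content: `${question}\n\nLet's think step by step.` },
  ],
});

console.log(response.choices[0]?.message?.content); // reasoning steps followed by the answer (5 apples)
```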
5. Role prompting
Give the model a role or identity to respond from.
You are a senior software engineer. Explain the difference between GraphQL and REST to a junior developer.
This is perfect for:
- Customer support bots
- Teaching/educational apps
- Task-specific assistants like lawyer, doctor, manager
There are more advanced prompting techniques, with more emerging as research continues, but I will cover those in a separate post.
📌 Practical Tips for Better Prompts
1️⃣ Be clear and specific, not vague.
- ❌ "Tell me about AI."
- ✅ "Explain AI in under 150 words for a 10-year-old."
2️⃣ Break down complex requests
- ❌ "Write me a business plan for a bakery"
- ✅ "List 5 business model options for a bakery" 👉🏼 "Write an executive summary for model #3"
3️⃣ Use iteration: Your first prompt is rarely perfect. Tweak, re-run, and refine.
4️⃣ Set output boundaries
- Word count (under 150 words)
- Style (formal, casual, humorous)
- Language tone (beginner-friendly, expert-level)
5️⃣ Use bullet points or steps if possible.
6️⃣ Provide examples if the task has a pattern.
7️⃣ Use delimiters like `"""` to separate instructions from data.
8️⃣ Use XML tags like `<article>...</article>` to group data within the instruction.
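For tips 7️⃣ and 8️⃣, the idea is simply to wrap the data so the model cannot confuse it with the instructions. A small sketch of both (the article text is just a placeholder):

```javascript
const article = "LLMs predict the next token based on probability...";

// Tip 7: triple-quote delimiters separate the instruction from the data
const delimitedPrompt = `Summarize the text between the triple quotes in two sentences.

"""
${article}
"""`;

// Tip 8: XML-style tags group the data inside the instruction
const taggedPrompt = `Summarize the following article in two sentences.

<article>
${article}
</article>`;
```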
📌 How to Choose the Right Prompting Technique
| Use Case | Suggested Prompt Type |
| --- | --- |
| Simple data transformation | Zero-shot |
| Text classification | Few-shot |
| Reasoning tasks | Chain-of-thought |
| Needs personality or tone | Role prompt |
| New use cases, no examples | Zero-shot + instructions |
| Task where examples help | One-shot or Few-shot |
📌 Prompt Engineering in Real Projects
- Chatbots: Role prompts + output format for consistent replies
- Content Generation: Few-shot prompts for tone consistency
- Code Assistants: Chain-of-thought for debugging explanations
- Data Extraction: Instruction-based prompts returning JSON
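For the data-extraction case, you can instruct the model to return JSON only and then parse the reply. A minimal sketch (reusing the `openai` client from the earlier sketches; without a dedicated JSON mode the model may occasionally add extra text, so the parse is wrapped in a try/catch):

```javascript
const review = "The delivery was late, but the support team resolved my issue quickly.";

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "user",
      content: `Extract the sentiment ("positive", "negative", or "mixed") and the main topics from this review.
Respond with valid JSON only, using the keys "sentiment" and "topics".

Review: """${review}"""`,
    },
  ],
});

// The model should reply with something like {"sentiment": "mixed", "topics": ["delivery", "support"]}
try {
  const data = JSON.parse(response.choices[0]?.message?.content ?? "{}");
  console.log(data.sentiment, data.topics);
} catch {
  console.error("Model did not return valid JSON");
}
```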
📌 Real Example: Job Description Analyzer
I built a project called Job Application Assistant, which helps users understand and respond to job listings. Before I integrated function calling, I used prompting techniques with the OpenAI API to extract structured data from job descriptions.
Here’s how I did it using a combination of Few-shot and Role-based prompting:
```javascript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const jobDescriptionExample =
  "We need a frontend developer skilled in React, JavaScript, and TailwindCSS. You will build UIs and collaborate with backend teams. 2+ years experience required.";

async function analyzeJobDescription(jobDescription) {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        // Role-based prompting: the system message sets the assistant's identity
        role: "system",
        content:
          "You are an AI assistant that extracts key skills, responsibilities, and experience from job descriptions.",
      },
      {
        // The user message states the task and supplies the job description as context
        role: "user",
        content: `Extract the following from this job description:\n
1. Required Skills
2. Responsibilities
3. Required Experience\n\n${jobDescription}`,
      },
    ],
    max_tokens: 200,
  });

  return response.choices[0]?.message?.content?.trim() || "";
}

console.log(await analyzeJobDescription(jobDescriptionExample));
```
Sample output:
Skills: React, JavaScript, TailwindCSS
Responsibilities: Build UIs, collaborate with backend
Experience: 2+ years
📌 Some sample projects that illustrate Prompt Engineering
- Soul Sync - A safe space where users can check in emotionally, express themselves, and receive gentle, AI-powered guidance that helps them reconnect with their inner self.
- Therabot - A web app where users can chat with an AI-powered therapist for emotional support.
I also use this guide when prompting LLMs.
Prompting is about clear communication, iteration, and testing.
The more intentional your prompt, the more reliable your LLM becomes.
Happy coding!!!