Your First Superpower in Tech
Have you ever tried to get help from an AI chatbot, like ChatGPT, and received a completely useless answer? It feels frustrating. You know it's powerful, but it doesn't seem to understand you.
Think of a Large Language Model (LLM) as a brilliant intern. They have read almost every book and website in existence. They are incredibly fast and knowledgeable. But they are also very literal. They have no real-world experience and will do exactly what you ask, even if it's not what you meant.
Your instruction to this intern is called a "prompt". Learning to write good prompts is like learning a new language for communicating with computers. It is one of the most valuable skills you can build today, whether you are a developer, a student, or just curious about technology. It's a true superpower.
The Core of a Good Prompt
Getting a great result from an LLM isn't about secret tricks or magic words. It's about being clear and providing the right information. Let's break down the four key ingredients of a perfect prompt.
1. Be Specific and Clear
Imagine walking into a coffee shop and saying, "Give me coffee." You might get a black filter coffee, an espresso, or a latte. It's a gamble. Instead, you say, "I'd like a large iced Americano with no sugar." Now you get exactly what you want.
Prompts work the same way. Vague prompts lead to vague answers.
A Bad Prompt:
"Write code for a button."
This is like asking for "coffee". What language? What does it look like? What does it do?
A Good Prompt:
"Write the HTML and CSS code for a clickable button. The button's text should be 'Download Report'. It should have a blue background (#3498db), white text, rounded corners (5px), and a light gray border. When a user hovers over it, the background should change to a darker blue (#2980b9)."
See the difference? We specified the language (HTML/CSS), the text, the colors, the shape, and the behavior. There is no room for guessing.
2. Provide Context
The LLM does not know what you are working on or what you were thinking about five minutes ago. You have to provide all the necessary background information in your prompt.
Think of it like asking a friend for directions. You wouldn't just ask, "How do I get to the library?" You'd say, "I'm currently at the corner of Main Street and Park Avenue. What's the quickest way to walk to the central library from here?"
A Bad Prompt:
"How can I make this function faster?"
The LLM has no idea what "this function" is.
A Good Prompt:
"I'm working on a Python project. I have the following function that searches for a user in a list of a million user objects. It's very slow. How can I make it faster? Here is the code:

```python
def find_user_by_email(users_list, email):
    for user in users_list:
        if user.email == email:
            return user
    return None
```
"
By providing the code and explaining the problem (it's slow with a large list), you give the LLM the context it needs to provide a helpful answer, like suggesting a dictionary for faster lookups.
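To make that suggestion concrete, here is a minimal sketch of the dictionary-based lookup an LLM might propose. The `User` class and sample data are hypothetical stand-ins for the project's real objects:

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str
    name: str

def build_email_index(users_list):
    # Build the index once: O(n) up front, then O(1) per lookup.
    return {user.email: user for user in users_list}

def find_user_by_email(index, email):
    # Dictionary lookup instead of scanning the whole list.
    return index.get(email)

users = [User("a@example.com", "Ada"), User("b@example.com", "Bo")]
index = build_email_index(users)
print(find_user_by_email(index, "b@example.com").name)  # Bo
```

The trade-off is extra memory for the index, which is usually well worth it when the same list is searched many times.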
3. Define the Persona and Format
One of the most powerful features of LLMs is that you can tell them who to be and how to answer.
Persona: Ask the LLM to adopt a role. This helps shape the tone and style of the response.
- "Act as a senior software engineer conducting a code review."
- "You are a friendly tutor explaining a complex topic to a beginner."
- "Pretend you are a skeptical project manager and point out potential flaws in my plan."
Format: Tell the LLM exactly how you want the output structured.
- "Provide the answer as a JSON object."
- "Explain the steps in a numbered list."
- "Create a markdown table comparing these two databases."
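If you ask for JSON so you can process the answer in code, it pays to validate the response before using it. A minimal sketch, where the response string is a hypothetical stand-in for model output:

```python
import json

# Hypothetical model output from a "Provide the answer as a JSON object" prompt.
raw_response = '{"database": "PostgreSQL", "strength": "ACID compliance"}'

try:
    data = json.loads(raw_response)
    print(data["database"])
except json.JSONDecodeError:
    # Models occasionally wrap JSON in prose; ask them to resend only the object.
    print("Model did not return valid JSON; ask it to fix the format.")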
A Good Prompt Combining Both:
"Explain the concept of 'git merge' versus 'git rebase'. Act as a patient mentor talking to a junior developer. Use a simple analogy to explain each one. Finally, summarize the pros and cons of each in a markdown table."
This prompt tells the LLM its role (mentor), the target audience (junior developer), the content required (explanation, analogy), and the final output format (a markdown table). This level of detail almost guarantees a high-quality answer.
4. Iterate and Refine
Your first prompt is often just the beginning of a conversation. Don't be discouraged if the first answer isn't perfect. Use it as a starting point and refine your request.
Think of it like sculpting. You start with a block of clay and slowly shape it.
A typical conversation might look like this:
You: "Give me some ideas for a new mobile app."
LLM: (Gives a generic list: a to-do list app, a fitness tracker, etc.)
You: "Those are okay, but I want something more unique. Focus on apps for people who enjoy gardening."
LLM: (Gives better ideas: a plant identification app, a watering schedule tracker, a community for local gardeners.)
You: "I like the plant identification idea. Can you list the key features for an app like that? Present it as a bulleted list."
Each follow-up prompt gets you closer to the perfect result.
A Couple of Simple "Pro" Tricks
Once you master the basics, you can try these slightly more advanced techniques.
Few-Shot Prompting: Show, Don't Just Tell
Sometimes, the best way to get the format you want is to show the LLM a few examples. This is called "few-shot prompting".
Example:
"I need to extract the main subject from these sentences. Follow the pattern below.
Sentence: The quick brown fox jumps over the lazy dog.
Subject: fox

Sentence: My team is shipping a new feature tomorrow.
Subject: team

Sentence: Artificial intelligence is transforming the world.
Subject:"
By providing examples, you show the model the exact output pattern you expect, without any actual retraining. It will almost certainly answer "Artificial intelligence".
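If you assemble few-shot prompts programmatically, a small helper keeps the pattern consistent. This is an illustrative sketch; the helper name, field labels, and example pairs are just the ones used above:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    parts = [instruction, ""]
    for sentence, subject in examples:
        parts.append(f"Sentence: {sentence}")
        parts.append(f"Subject: {subject}")
        parts.append("")
    parts.append(f"Sentence: {query}")
    parts.append("Subject:")  # Leave the final answer blank for the model.
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "I need to extract the main subject from these sentences. Follow the pattern below.",
    [("The quick brown fox jumps over the lazy dog.", "fox"),
     ("My team is shipping a new feature tomorrow.", "team")],
    "Artificial intelligence is transforming the world.",
)
print(prompt)
```

Ending the prompt with the empty `Subject:` label nudges the model to complete the pattern rather than explain it.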
Chain-of-Thought: Ask it to Think Step-by-Step
For problems that require logic or calculation, you can ask the LLM to explain its reasoning process. This often leads to more accurate results because it forces the model to break the problem down. Simply add "Let's think step-by-step" to your prompt.
Example:
"I have 50 apples. I give 10 to my friend, then I buy 25 more. I then split the new total evenly among myself and 4 friends. How many apples does each person get? Let's think step-by-step."
The LLM will first calculate the intermediate steps before giving you the final answer, reducing the chance of a simple math error.
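You can verify the apple puzzle yourself; the intermediate steps a correct step-by-step answer should contain are:

```python
apples = 50
apples -= 10           # give 10 to a friend -> 40
apples += 25           # buy 25 more -> 65
people = 1 + 4         # me plus 4 friends
per_person = apples // people
print(per_person)      # 13
```

Asking the model to show these steps makes each intermediate number visible, so a slip is easy to spot.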
Key Takeaways
Writing a good prompt is a skill, and like any skill, it gets better with practice. Don't worry about getting it perfect on the first try.
Remember these key principles:
- Be Specific: Avoid vague requests. Detail exactly what you need.
- Give Context: Provide all the necessary background information. The LLM knows nothing but what you tell it.
- Assign a Persona and Format: Tell the AI who to be and how to structure its answer.
- Iterate: Treat it as a conversation. Refine your prompts based on the answers you get.
- Show, Don't Tell: Use examples (few-shot) to guide the AI's output for complex formatting.
- Think Step-by-Step: For logic problems, ask the AI to show its work.
Mastering this skill will not only help you get better answers from AI but also make you a clearer communicator and a more effective problem-solver. Now go ahead and give that brilliant intern some clear instructions!
About the Author
Hi, I'm Qudrat Ullah, an Engineering Lead with 10+ years building scalable systems across fintech, media, and enterprise. I write about Node.js, cloud infrastructure, AI, and engineering leadership.
Find me online: LinkedIn · qudratullah.net
If you found this useful, share it with a fellow engineer or drop your thoughts in the comments.
Originally published at qudratullah.net.
