AI Fundamentals, No Fluff — Day 3/10
For the first few months I used AI, my prompts looked like Google searches. Short, vague, and hoping the machine would figure out what I meant. "Write me a marketing email." "Explain this code." "Help me with my resume."
The results were fine. Generic, but fine. I assumed that was just what AI was capable of.
Then I started being more specific, and the difference was dramatic. Same tool, completely different output. The AI didn't get smarter. I got better at talking to it.
The search engine trap
The natural instinct is to interact with AI the way you interact with a search engine: type a few keywords and hope for the best. This works for simple questions ("What is the capital of France?") but it falls apart for anything that requires judgment, context, or a specific format.
A search engine retrieves information. An AI generates a response based on everything you give it. The better information you give it, the better the response. That shift in mental model is the single biggest thing you can do to improve your results.
What a good prompt actually looks like
There are three things that consistently make the difference between a vague response and a useful one: context, task, and format.
Context is the background information the AI needs to understand your situation. Without it, the AI has to guess, and its guesses are going to be generic.
Bad: "Write me a marketing email."
Better: "I run a small landscaping company in Denver. I want to send an email to past customers who have not used our services in over a year, offering a 15% discount on spring cleanup."
The second prompt gives the AI a company, a location, an audience, a goal, and a specific offer. The output will be dramatically more useful.
Task is what you actually want the AI to do. Be specific about the action, not just the topic.
Bad: "Help me with my resume."
Better: "Review my resume and suggest three specific changes that would make it stronger for a senior product manager role at a mid-size tech company."
"Help me with my resume" could mean anything. The second version tells the AI exactly what kind of help, for what role, and at what level.
Format is how you want the response structured. This is the part that is easiest to skip entirely, and it makes a bigger difference than you might expect.
"Give me the response as a bulleted list with no more than five items."
"Write this as a professional email, three paragraphs max."
"Explain this like I am a smart person who has never seen this technology before."
When you skip format, the AI defaults to whatever structure it thinks is most likely. That default is often a wall of text. Telling it how you want the response saves you from having to reshape the output yourself.
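For readers who build prompts in code, the three ingredients above can be assembled with a plain template. A minimal sketch in Python; the part labels and example text are illustrative, not a fixed API:

```python
def build_prompt(context: str, task: str, fmt: str) -> str:
    """Combine the three ingredients of a useful prompt into one message."""
    return (
        f"Context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Format: {fmt}"
    )

# Illustrative values taken from the landscaping example above.
prompt = build_prompt(
    context=("I run a small landscaping company in Denver. I want to reach "
             "past customers who have not used our services in over a year."),
    task="Draft an email offering a 15% discount on spring cleanup.",
    fmt="A professional email, three paragraphs max.",
)
print(prompt)
```

The point is not the helper function; it is that forcing yourself to fill in all three fields catches the missing piece before you hit send.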
Show, don't tell
One of the most powerful things you can do in a prompt is give the AI an example of what you want. In most cases, showing works better than describing.
Instead of: "Write a product description in a casual, friendly tone."
Try: "Write a product description in a tone similar to this example: 'Meet the backpack that actually fits your life. Three compartments, laptop sleeve, and a water bottle pocket that does not pretend to be something it is not.'"
The AI now has a concrete reference point instead of interpreting your idea of "casual and friendly," which might be very different from its default.
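If you are scripting this, embedding a reference sample is usually called few-shot prompting: you paste one or more examples into the prompt before the instruction. A hedged sketch, using the backpack copy above as the sample:

```python
def few_shot_prompt(instruction: str, examples: list[str]) -> str:
    """Prepend concrete reference examples so the model imitates them
    instead of guessing what a vague tone word means."""
    shots = "\n\n".join(
        f"Example {i + 1}:\n{ex}" for i, ex in enumerate(examples)
    )
    return f"{shots}\n\nNow, {instruction}"

prompt = few_shot_prompt(
    "write a product description for a travel mug in the same tone.",
    ["Meet the backpack that actually fits your life. Three compartments, "
     "laptop sleeve, and a water bottle pocket that does not pretend to be "
     "something it is not."],
)
print(prompt)
```

One good example usually beats a paragraph of adjectives; two or three examples are even better if you have them.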
Iterating is the process, not the problem
Your first prompt is rarely your best prompt. That is not a failure; it is how the process works. Think of it as iterative refinement.
You send a prompt. The response is close but not quite right. You adjust: add more context, change the format, give a better example. The second attempt is better. Maybe a third round gets it where you need it.
This is normal. Expecting a perfect result from a single prompt is like expecting a perfect first draft of anything. The back-and-forth IS the process. Getting comfortable with that iteration, instead of getting frustrated by it, is what separates people who find AI useful from people who tried it once and gave up.
As you get comfortable iterating, the next skill is knowing when to keep refining and when to start over in a fresh conversation.
Stop commanding, start conversing
One shift that makes a noticeable difference: stop treating prompts as commands and start treating them as the beginning of a conversation.
Think about how you would brief someone who has no context about what you are about to discuss: you would work out what they need to know and tell them that first. You wouldn't walk up to a coworker and say "fix this for me" cold; you would explain the situation before asking for anything.
"Write me a marketing email" is a command. You fire it and hope for the best.
"I run a landscaping company and I need to re-engage past customers. Here is what I am thinking, but I am not sure about the approach. What would you suggest before I commit to a draft?" That is a conversation.
You are inviting the AI to think with you, not just execute for you.
Two techniques I use constantly:
The first is letting the AI interview you. Instead of trying to anticipate everything it needs to know, just say: "Ask me questions until you are confident you understand what I need." The AI will ask clarifying questions you would not have thought to answer, and the result is almost always better than what you would have gotten by trying to write the perfect prompt upfront.
The second is giving the AI a role or a stance. "Be selfish and tell me what you would actually recommend." "Play devil's advocate on this plan." "Debate me on the pros and cons before I commit." These frames push the AI out of its default helpful-but-generic mode and into something more specific and honest. I use variations of this in almost every serious conversation I have with AI, and the difference in quality is significant.
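In code, both techniques are just preambles prepended to the task. A minimal sketch; the stance strings mirror the ones above, and nothing here is a special API:

```python
# Reusable stance preambles that push the model out of its default mode.
STANCES = {
    "honest": "Be selfish and tell me what you would actually recommend.",
    "devil": "Play devil's advocate on this plan.",
    "debate": "Debate me on the pros and cons before I commit.",
    "interview": ("Ask me questions until you are confident you "
                  "understand what I need."),
}

def framed_prompt(stance: str, task: str) -> str:
    """Lead with a stance so the response is specific, not generic."""
    return f"{STANCES[stance]}\n\n{task}"

print(framed_prompt("devil", "I want to quit my job and freelance full time."))
```

Keeping the stances in one place makes it easy to reuse them across every serious conversation, which is exactly how they earn their value.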
The best results I have gotten from AI have come from treating it as a collaborator, not a vending machine. The quality of the collaboration is directly proportional to the quality of the conversation.
Next time: your prompt collection is growing. How do you keep it all organized?
If there is anything I left out or could have explained better, tell me in the comments.