DEV Community

Ananya S

Intermediate strategies (Prompt Engineering for Devs - Part: 2)

You can instruct an LLM to write code. But can you instruct it to act like a Senior Principal Software Architect reviewing code for security and best practices?

If the answer is no, this post is for you. Building on our introduction to Prompt Engineering (Part 1), we’re now moving into the intermediate strategies that unlock true control. We'll explore how simple additions—like assigning a role or providing just one perfect example—can radically transform an LLM's performance and consistency. Let’s dive in and learn how to make your LLMs work smarter, not just harder.

Provide Specific Context

This strategy involves providing the LLM with essential background information or specific constraints before the main request. It helps the model tailor its output, making it more relevant and targeted to your specific needs or audience.

I am a person with zero knowledge of Artificial Intelligence. Explain this topic using analogies.
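In code, this pattern can be captured by a small helper that prepends background information to the main request before it is sent to a model. This is a minimal sketch; the helper name `with_context` is my own and not part of any SDK:

```python
def with_context(context: str, request: str) -> str:
    """Prepend background info or constraints to the main request."""
    return f"Context: {context}\n\nRequest: {request}"

prompt = with_context(
    "I have zero knowledge of Artificial Intelligence.",
    "Explain this topic using analogies.",
)
print(prompt)
```

The resulting string is what you would pass as the user message to whichever LLM API you use.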

Assign a role (Role-Playing)

By assigning the LLM a specific persona (e.g., 'expert code reviewer,' 'creative director,' 'cynical history professor'), you force it to adopt a corresponding tone, knowledge base, and style of thinking. This is highly effective for tasks requiring specialized expertise or a particular viewpoint.

You are a Python developer and code reviewer. Review the following code and provide:

- Bugs or issues

- Best practices

- Security concerns

{Code block}

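If you reuse the same persona across many reviews, it helps to template it. Here is a minimal sketch of such a template; the function name `review_prompt` and its parameters are my own, not from any library:

```python
def review_prompt(role: str, checks: list[str], code: str) -> str:
    """Build a persona-based code-review prompt from a role, a checklist, and a code block."""
    bullets = "\n".join(f"- {c}" for c in checks)
    return (
        f"You are a {role}. Review the following code and provide:\n"
        f"{bullets}\n\n"
        f"{code}"
    )

prompt = review_prompt(
    "Python developer and code reviewer",
    ["Bugs or issues", "Best practices", "Security concerns"],
    "def add(a, b):\n    return a + b",
)
print(prompt)
```

Swapping the `role` argument ("creative director", "cynical history professor") reuses the same structure with a different persona.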

Zero-shot prompting

In zero-shot prompting, we provide no examples and expect the model to answer directly.
It relies solely on the vast knowledge the model acquired during training. This works well for straightforward classification or summarization tasks.
It is the default behavior of LLMs, and it serves as the starting point for the "shot" techniques.

Task: Classify the following text into one of two labels: Positive or Negative
Input: "Customer review: This food tastes bad."
Output:

One-shot prompting

Provide one example to show the LLM how you want your answer to be. This single 'shot' is usually enough to define the required format, tone, or style for the model to follow for subsequent inputs.

Task: Classify the sentiment of the following customer reviews as Positive or Negative.

Example:
Review: "This food tastes bad."
Sentiment: Negative

New Input:
Review: "The movie was amazing!"
Sentiment:


Few-shot prompting

For some tasks, a single example is not enough: the LLM needs multiple examples to understand what the user expects from it. This is especially true for complex tasks, where the model performs better with more context. The model analyzes the provided examples, identifying patterns in how inputs are transformed into outputs.

Provide taglines for famous companies:
Myntra: Shopping partner
Gucci: Luxury partner
Lenskart:

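Zero-, one-, and few-shot prompting differ only in how many examples you include, so one builder can cover all three. This is a sketch under my own naming (`shot_prompt` is not a library function):

```python
def shot_prompt(task: str, examples: list[tuple[str, str]], new_input: str) -> str:
    """Build a zero-, one-, or few-shot prompt depending on how many examples are given."""
    parts = [f"Task: {task}"]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    # The final input is left open so the model completes the Output line.
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: no examples at all.
zero = shot_prompt(
    "Classify the sentiment as Positive or Negative.",
    [],
    'Customer review: "This food tastes bad."',
)

# Few-shot: two worked examples before the new input.
few = shot_prompt(
    "Classify the sentiment as Positive or Negative.",
    [
        ('"This food tastes bad."', "Negative"),
        ('"The movie was amazing!"', "Positive"),
    ],
    '"Great service, will come again."',
)
print(few)
```

Passing an empty list gives the zero-shot prompt, one tuple gives one-shot, and more tuples give few-shot, so the format stays consistent across all three techniques.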

These intermediate strategies move us beyond basic instruction by incorporating context and examples into the prompt structure. Mastering them is key to unlocking the true potential of modern LLMs.

Which of these strategies have you found most effective in your projects? Drop a comment below!
Different models provide different outputs using these strategies. Try it out yourself and let me know your favorite LLM and the strategy that works best for you.
You can also check out the article ChatGPT vs DeepSeek vs Copilot vs Claude: Who Wins the AI Crown? for a comparison of different models.
Stay tuned for the final post in this series, where we'll dive into advanced techniques like Chain-of-Thought and Tree-of-Thought prompting!
