Raunak ALI
Article 1: Intro to Gen AI, LLMs and LangChain Frameworks (Part C)

Chapter C: Basics of Prompt Engineering


1. What Is Prompt Engineering?

Prompt Engineering is basically how you “speak AI” so it actually understands you.

  • As a student or fresher, think of it like framing your questions to a teacher clearly so you get the best answer.
  • As a business leader, it’s about giving your AI “assistant” the exact instructions it needs, so there’s no guesswork, just action.

2. Why Prompt Engineering Is Important

  • Accuracy — You get relevant, sharp responses, not hallucinations.
  • Consistency — The AI understands what you want, every time.
  • Efficiency — Fewer rewrites, faster results.
  • Control — You guide tone, structure, and style.

3. Core Prompting Techniques

(a) Zero-Shot Prompting

  • Ask directly; no setup.
  • Example:

What is the capital of Brazil?

  • Great for simple questions.
  • Not always reliable if the prompt is vague.

(b) Few-Shot Prompting

  • Show a couple of demonstrations, then ask.
  • Example:

Translate to French:

  1. I love programming. → J'adore la programmation.
  2. This food is delicious. → Cette nourriture est délicieuse.

Now translate: “Where is the nearest train station?”
  • Excellent for establishing format.
  • Builds pattern understanding quickly.
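A few-shot prompt like the one above is easy to assemble programmatically. Here is a minimal sketch; the `build_few_shot_prompt` helper is hypothetical, not a library function:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, worked examples, then the new query."""
    lines = [instruction]
    for source, target in examples:
        lines.append(f"{source} → {target}")
    lines.append(f"Now translate: {query}")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate to French:",
    [
        ("I love programming.", "J'adore la programmation."),
        ("This food is delicious.", "Cette nourriture est délicieuse."),
    ],
    "Where is the nearest train station?",
)
print(prompt)
```

The examples establish the input → output pattern, so the model is far more likely to answer with just the translation instead of extra chatter.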

(c) Role Prompting

  • Assign a persona or expertise.
  • Example:

You are a professional business consultant.
Suggest three cost-saving strategies for a small retail store.

  • Directs tone and domain; powerful for business output.
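With chat-style APIs (OpenAI-compatible and many others), the persona usually goes in a `system` message rather than the user prompt. A sketch assuming that common message format:

```python
# Role prompting in the common chat-message format: the persona lives in the
# "system" message, the actual task in the "user" message.
messages = [
    {"role": "system", "content": "You are a professional business consultant."},
    {"role": "user", "content": "Suggest three cost-saving strategies for a small retail store."},
]
```

Keeping the role in the system message means it persists across a multi-turn conversation without being repeated in every user message.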

(d) Chain-of-Thought (CoT) Prompting

  • Encourage step-by-step reasoning.
  • Example:

A train leaves at 3 PM traveling 60 km/h. Another leaves at 4 PM traveling 80 km/h.
When does the second catch up? Think step by step.

  • Great for math and logic tasks.
  • Small models may struggle to maintain coherence.
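For CoT tasks like this, it helps to know the correct answer before judging the model’s reasoning. The worked arithmetic for the train example:

```python
# First train: departs 3 PM at 60 km/h. Second: departs 4 PM at 80 km/h.
head_start_km = 60 * 1               # first train's lead when the second departs (1 h at 60 km/h)
closing_speed_kmh = 80 - 60          # how fast the gap shrinks
hours_to_catch = head_start_km / closing_speed_kmh
catch_up_hour = 16 + hours_to_catch  # 24-hour clock: second train leaves at 16:00

print(hours_to_catch)  # 3.0 hours after 4 PM
print(catch_up_hour)   # 19.0, i.e. 7 PM
```

If a small model’s step-by-step answer doesn’t land on 7 PM, the chain of thought broke somewhere, which is exactly the failure mode noted above.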

(e) Instruction-Tuning vs. Prompting

  • Instruction-tuned models (e.g., Mistral-Instruct, Falcon-Instruct, Gemini Flash) follow concise prompts better.
  • How you ask still matters; even tuned models can misinterpret sloppy instructions.

4. The PROMPT Method — A Practical Framework

  • P — Provide context (who, what, why)
  • R — Role (persona or expertise level)
  • O — Output format (bullet list, essay, JSON, etc.)
  • M — Models/examples (few-shot if needed)
  • P — Point out constraints (length, style, tone)
  • T — Test & tweak iteratively

Sample Business Prompt (using PROMPT):

You are a marketing strategist. Write a LinkedIn post (100 words max, professional tone) about why small businesses should start using AI-powered chatbots. Include 3 bullet points with key benefits at the end.

That covers all the elements: context, role, format, and constraints.
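The PROMPT checklist can be wrapped in a small helper that assembles the pieces in order. This is only an illustrative sketch (the function name and fields are not a standard API), and the final “Test & tweak” step stays manual:

```python
def build_prompt(context, role, output_format, examples=None, constraints=None):
    """Assemble a prompt from the PROMPT elements: context, role, format, examples, constraints."""
    parts = [f"You are {role}.", context, f"Output format: {output_format}"]
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

linkedin_prompt = build_prompt(
    context="Write a LinkedIn post about why small businesses should start using AI-powered chatbots.",
    role="a marketing strategist",
    output_format="short post ending with 3 bullet points of key benefits",
    constraints="100 words max, professional tone",
)
print(linkedin_prompt)
```

A helper like this keeps every prompt in your project consistent, which directly serves the “Consistency” benefit from Section 2.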


5. Use Cases — Who Benefits and How?

| Audience | Use Cases |
| --- | --- |
| Business | Client proposals, marketing copy, market summaries, brainstorming ideas |
| Students | Summarizing lectures, generating practice questions, debugging code, translating and simplifying concepts |

6. Common Prompting Mistakes (and Fixes)

  • Too vague:
    “Write something about AI.”
    → Better: “Write a 200-word introduction to AI for high-school students, with three real-world examples.”

  • No structure:
    “Summarize this article.”
    → Better: “Summarize this article in five bullet points, each under 15 words.”

  • Overloaded prompt:
    Asking multiple unrelated tasks at once
    → Better: Break them into separate prompts to stay clear.


Limitations of Prompt Engineering (With Models)

| Issue | Example Models | What Happens |
| --- | --- | --- |
| Hallucinations | LLaMA-2-7B, Mistral-7B | AI confidently states incorrect “facts.” |
| Weak reasoning | DistilGPT-2, GPT4All | Chain-of-Thought fails; logic falls apart. |
| Poor instruction following | Falcon-7B, LLaMA-2-Base | Ignores role or tone instructions. |
| Small context window | GPT-NeoX-20B, DistilGPT2 | Cannot summarize long documents. |
| Bias / tone issues | RedPajama-INCITE, Pythia | Unfiltered models may produce off-color responses. |
| Free-tier limitations | Google AI Studio free tier, Hugging Face Spaces | Lower rate limits or slow responses during peak usage. |

This is why prompt engineering isn’t optional — it helps you get more out of limited or smaller models.


7. Prompt Chaining — Breaking Down Complex Tasks

Sometimes a single prompt isn’t enough. That’s where Prompt Chaining comes in—breaking a complex request into smaller steps, feeding outputs from one step into the next.

Think of it as building a pipeline: Prompt → Response → Refined Prompt → Final Output.

  1. Example 1 — Business Case

    Task: “Write a business strategy for launching an eco-friendly fashion brand.”

    Chained Approach:

    • Prompt 1 → “List 5 challenges eco-fashion startups face.”
    • Prompt 2 → “Suggest 3 strategies to overcome each challenge.”
    • Prompt 3 → “Combine into a polished 500-word strategy report.”

    Instead of dumping one big prompt, you steer the model step by step, ensuring quality at each stage.
  2. Example 2 — Student Case

    Task: “Write an essay on climate change.”

    Chained Approach:

    • Prompt 1 → “List the top 5 causes of climate change.”
    • Prompt 2 → “Explain each cause in detail.”
    • Prompt 3 → “Summarize into a 1000-word essay with intro and conclusion.”

    This modular approach reduces hallucination, improves structure, and gives you checkpoints to verify accuracy.

When to Use Prompt Chaining

  • Long research reports
  • Multi-step reasoning (financial forecasts, legal summaries)
  • Structured workflows (customer journey mapping, educational guides)

Limitation: Models with short context windows (like DistilGPT2, GPT-NeoX-20B) may “forget” earlier steps if the chain grows too long. Larger models (like GPT-4, Gemini 1.5 Pro, Claude 3.5) handle this far better.
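Framework aside, the chaining pattern is just function composition: each prompt template is filled with the previous step’s output. A minimal sketch with a stubbed model (the `run_chain` and `fake_llm` names are illustrative; swap `fake_llm` for a real API call):

```python
def run_chain(llm, prompt_templates):
    """Run prompts in sequence, feeding each step the previous step's output."""
    output = ""
    for template in prompt_templates:
        output = llm(template.format(previous=output))
    return output

# Stub model so the sketch runs offline; a real llm would call an API here.
def fake_llm(prompt):
    return f"[model answer to: {prompt}]"

result = run_chain(fake_llm, [
    "List the top 5 causes of climate change.",
    "Explain each cause in detail: {previous}",
    "Summarize into a 1000-word essay with intro and conclusion: {previous}",
])
print(result)
```

Because each step is a separate call, you can inspect (and correct) intermediate outputs before they propagate, which is where the quality gain over one giant prompt comes from.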


Simple Example (LangChain Style):

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Step 1: Generate outline
outline_prompt = PromptTemplate.from_template("Give me 3 bullet points on climate change.")
outline_chain = LLMChain(llm=llm, prompt=outline_prompt)

# Step 2: Expand outline into essay
essay_prompt = PromptTemplate.from_template("Expand this into a 300-word essay:\n{outline}")
essay_chain = LLMChain(llm=llm, prompt=essay_prompt)

outline = outline_chain.run({})
essay = essay_chain.run({"outline": outline})
print(essay)
  • The first prompt creates an outline; the second expands it into an essay. This is Prompt Chaining, and it’s especially powerful for business workflows (e.g., first create meeting notes → then turn them into action items → then draft emails).

Demo notebook link (Gemini model):

Google Colab Demo Notebook


Previous Chapter (Chapter B: Introduction to LLMs And Free LLM Resources)

Now that you know how to work with prompts, you can unlock an LLM’s full capability. Next, we begin the transition from Gen AI to Agentic AI.

Next Chapter Chapter D: Transition from Gen AI to Agentic AI


Got questions or ideas? Drop a comment below; I’d love to hear your thoughts.

Let’s connect: 🔗 My LinkedIn

