Two types of LLMs
- Base LLM (Pre-trained LLMs) - Base LLMs undergo pre-training on massive unlabeled text datasets to predict the next word or token, gaining broad language understanding and knowledge. These models excel at tasks like text completion or generation but often respond loosely to prompts, producing generalized or unpredictable outputs without specific guidance.
- Instruction Tuned LLM - Instruction-tuned LLMs start from a base model and undergo supervised fine-tuning on datasets of instruction-response pairs, teaching them to follow explicit user directives like “summarize” or “explain.” This process, often enhanced by reinforcement learning from human feedback (RLHF), improves precision, reduces hallucinations, and enables better handling of diverse tasks such as question-answering or step-by-step reasoning.
Guidelines
Principle-1: Write clear and specific instructions
- Use delimiters - such as triple quotes, backticks, or angle brackets - to separate the instructions from the actual text the task should be performed on in your prompt
- Ask for structured output
- Ask the model to check whether the conditions are satisfied and whether the assumptions required to do the task hold
- Few-shot prompting - Give successful examples of completing a task, then ask the model to perform a similar task
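The Principle 1 tactics above can be sketched as plain prompt strings. This is a minimal illustration; the text, the book-title task, and the child/grandparent dialogue are example wording, not the course's exact prompts.

```python
# Illustrative prompt strings for the Principle 1 tactics.

text = "Large language models are trained on massive text corpora to predict the next token."

# Tactic: delimiters (here, triple backticks) mark off the data
# so the model cannot confuse it with the instructions.
delimited_prompt = (
    "Summarize the text delimited by triple backticks into a single sentence.\n"
    f"```{text}```"
)

# Tactic: ask for structured output (JSON with named keys) so the
# response is easy to parse programmatically.
structured_prompt = (
    "Generate a list of three made-up book titles. "
    "Provide them in JSON format with the keys: book_id, title, genre."
)

# Tactic: few-shot prompting -- show one completed example of the task,
# then ask the model to continue in the same style.
few_shot_prompt = (
    "Your task is to answer in a consistent style.\n"
    "<child>: Teach me about patience.\n"
    "<grandparent>: The river that carves the deepest valley "
    "flows from a modest spring.\n"
    "<child>: Teach me about resilience."
)
```

Any of these strings would then be sent as the user message in a chat completion request.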
Principle-2: Give model time to think
1. Specify the steps to complete a task
Step-1: ______
Step-2: ______
……
Step-N: ______
2. Instruct the model to work out its own solution before rushing to a conclusion
Ask the model to solve the problem itself first, then compare its answer with the solution you provide.
For example, for a maths problem, give the question, ask the model to solve it on its own, and include your solution in the prompt for comparison.
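Both Principle 2 tactics can likewise be sketched as prompt strings. The step list and the checking instruction below are illustrative wording under the assumption of an OpenAI-style chat model, not the course's exact prompts.

```python
# Illustrative prompt strings for the Principle 2 tactics.

# Tactic: spell out the exact steps the model should follow,
# Step-1 ... Step-N, instead of asking for the final answer directly.
steps_prompt = (
    "Perform the following actions on the text below:\n"
    "Step 1: Summarize the text in one sentence.\n"
    "Step 2: Translate the summary into French.\n"
    "Step 3: List each name mentioned in the summary.\n"
    "Step 4: Output a JSON object with keys: french_summary, num_names.\n\n"
    "Text: <the passage to process goes here>"
)

# Tactic: make the model work out its own solution first, and only
# then compare it with the provided (possibly wrong) solution.
check_solution_prompt = (
    "Determine whether the student's solution below is correct.\n"
    "First work out your own solution to the problem, "
    "then compare it with the student's solution. "
    "Do not decide if the student's solution is correct "
    "until you have solved the problem yourself.\n\n"
    "Question: <the maths question goes here>\n"
    "Student's solution: <the solution to check goes here>"
)
```

Without the "work out your own solution first" instruction, models often skim the student's answer and declare it correct; forcing the intermediate computation gives the model time to think.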
Model Limitations
- An LLM may not have reliably memorized the information it saw during training, so it sometimes gives a response that sounds plausible but is not actually true
- Hallucinations → statements that sound plausible but are not true
Iterative Prompt Development
Prompt guidelines
- Be clear and specific
- Analyze why result does not give desired output
- Refine the idea and the prompt
- Repeat
→ Iteratively refine the prompt for the length and style
Iterative Process
- Try something
- Analyze where the result does not give what you want
- Clarify instructions, give more time to think
- Refine prompts with a batch of examples
LLM Capabilities
- Summarizing
- Inferring → Identifying sentiment, identifying emotions (e.g. anger), extracting information, inferring topics from a text
- Transforming → Translation, Tone transformation, Format conversion (e.g. - JSON to HTML), Spellcheck/grammar check
- Expanding → e.g. Customize the automated reply to a customer email
- Chatbot → Setting roles: system/assistant/user while chatting with the model
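The chatbot roles above map onto the message format used by OpenAI-style chat APIs: a list of dicts, each with a role and its content. The pizza-bot persona and wording here are illustrative.

```python
# A minimal sketch of role-based chat messages (OpenAI-style format).

messages = [
    # "system" sets the assistant's overall behavior and persona.
    {"role": "system", "content": "You are a friendly pizza-ordering bot."},
    # "user" carries the human's turns.
    {"role": "user", "content": "Hi, I'd like to order a pizza."},
    # "assistant" holds the model's earlier replies; including them
    # in each request is what gives the chat its memory of context.
    {"role": "assistant", "content": "Great! What size would you like?"},
    {"role": "user", "content": "A large, please."},
]

# With the official openai Python client, this list would be passed
# as the `messages` argument of a chat completion call.
```

Because the API itself is stateless, the full `messages` history must be resent on every turn for the model to remember the conversation.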
Reference: https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/

