<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: neha</title>
    <description>The latest articles on DEV Community by neha (@neha).</description>
    <link>https://dev.to/neha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F555462%2F6ab7d3c7-62b1-4494-8c50-e1058de6de4b.jpg</url>
      <title>DEV Community: neha</title>
      <link>https://dev.to/neha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/neha"/>
    <language>en</language>
    <item>
      <title>Long Long Ago — The History of Generative AI</title>
      <dc:creator>neha</dc:creator>
      <pubDate>Fri, 17 Oct 2025 16:11:56 +0000</pubDate>
      <link>https://dev.to/neha/long-long-ago-the-history-of-generative-ai-5dmm</link>
      <guid>https://dev.to/neha/long-long-ago-the-history-of-generative-ai-5dmm</guid>
      <description>&lt;p&gt;We have all seen it and are mesmerised by it. ChatGPT can write new essays, generate images, create stories, create art, write code, be our friend, and give advice as a mentor — &lt;strong&gt;It is doing it all&lt;/strong&gt;. It seems like magic, but this is a result of decades of ideas, technological evolutions and small steps taken behind the scenes — while the &lt;strong&gt;Generative AI&lt;/strong&gt; models emerged as winners on the grand stage of AI.&lt;/p&gt;

&lt;p&gt;How did we get here? I will trace the story of how we arrived at these powerful LLMs. It is a journey that draws many parallels with attempts to mimic how the human mind actually works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Symbolic AI: In the Beginning [1950s - 1980s]&lt;/strong&gt;&lt;br&gt;
The very first experiments with AI were not smart, but they were the front-benchers: the very obedient students who followed rules exactly as told.&lt;/p&gt;

&lt;p&gt;These early versions of AI followed statements that read like if-else clauses: “&lt;em&gt;If X happens, do Y.&lt;/em&gt;” This approach is referred to as &lt;strong&gt;Symbolic AI&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For example —
We give the robot some rules to follow:

If it is yellow and curved → it is a banana.
If it is red and round → it is an apple.
If it is orange and round → it is an orange.

Now,
If the fruit is yellow and curved → “It’s a banana!”
If the fruit is red and round → “It’s an apple!”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
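&lt;p&gt;Translated into code, rule-following is just a hard-coded if-else chain. A minimal Python sketch (the rules and the &lt;em&gt;classify_fruit&lt;/em&gt; helper are toy illustrations, not from any real system):&lt;/p&gt;

```python
# Symbolic AI in miniature: hand-written rules, no learning anywhere.
def classify_fruit(colour, shape):
    if colour == "yellow" and shape == "curved":
        return "banana"
    if colour == "red" and shape == "round":
        return "apple"
    if colour == "orange" and shape == "round":
        return "orange"
    return "unknown"  # anything outside the rules stumps the system

print(classify_fruit("yellow", "curved"))  # banana
print(classify_fruit("green", "round"))    # unknown: no rule covers it
```

&lt;p&gt;The last line shows the brittleness: a green, round fruit has no matching rule, so the system simply gives up.&lt;/p&gt;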



&lt;p&gt;Symbolic AI uses two concepts — Symbolic representation and Rule-based Inference. Some real-world examples of Symbolic AI include —&lt;/p&gt;

&lt;p&gt;Expert systems for medical diagnosis&lt;br&gt;
Knowledge graphs and Ontologies&lt;br&gt;
Early chatbots: ELIZA (1960s) was the first chatbot, designed to mimic a therapist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine Learning — Learning Takes Over the Rules [1990s–2010s]&lt;/strong&gt;&lt;br&gt;
The next major shift came from the question: “Can machines learn from examples instead of a definitive set of rules?”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yhgw8pdep6qkjutxefc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yhgw8pdep6qkjutxefc.jpeg" alt="A learning robot — Image generated using AI (Gemini)" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This question and experimentation led to machine learning: instead of being given rules, systems were fed data to figure out patterns and learn from them, such as emails labelled as spam or brain images labelled as showing a tumour. This resulted in task-specific intelligent systems like —&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Spam filters&lt;/li&gt;
&lt;li&gt;Recommendation engines&lt;/li&gt;
&lt;li&gt;Image Recognition&lt;/li&gt;
&lt;li&gt;Speech Recognition&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Why is this better than before? These systems are more adaptive to real-world scenarios.&lt;br&gt;
But what was missing? These models were very specific to their task and could navigate only their own area of expertise. They had no general intelligence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Big Shift: Learning From Everything [2017–2020]&lt;/strong&gt;&lt;br&gt;
The next paradigm shift was to learn from everything. But what changed? Why weren't we training models on everything before?&lt;/p&gt;

&lt;p&gt;As technology evolved, two ingredients of LLMs came together: the brain and the muscle.&lt;br&gt;
The muscle: GPUs became faster, and thousands of cores made parallel computation possible.&lt;br&gt;
The brain: in 2017, Google researchers published “Attention Is All You Need”, the paper that introduced transformers.&lt;/p&gt;

&lt;p&gt;Enter — &lt;strong&gt;Transformers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfb4xm0zbpq6xd4a5vhy.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfb4xm0zbpq6xd4a5vhy.jpeg" alt="I think therefore I am — Image generated using AI (Gemini)" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before 2017, models relied on RNNs (Recurrent Neural Networks), which read everything sequentially, word by word. Transformers “transformed” this approach: they could look at whole sentences at once.&lt;/p&gt;

&lt;p&gt;Let us look at the key concepts which made this possible —&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Self-attention&lt;/li&gt;
&lt;li&gt;Multi-head Attention&lt;/li&gt;
&lt;li&gt;Positional Encoding&lt;/li&gt;
&lt;li&gt;Feedforward Layers&lt;/li&gt;
&lt;li&gt;Layer Normalization&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's get into the details of each of these. I will use the sentence below to walk through each of them.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Jim Halpert loves the mountains but Dwight Schrute loves the beach”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgva68jmr7y2sxnadoe1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgva68jmr7y2sxnadoe1.jpeg" alt="Image generated using AI (Gemini)" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-Attention: Understanding the importance&lt;/strong&gt;&lt;br&gt;
In the context of each word, which other words are most important? Self-attention allows the model to understand which words matter in a particular word's context: for every word, it computes an “attention” score against every other word in the sentence.&lt;/p&gt;

&lt;p&gt;Let's see this with our example sentence. Here, the attention score for &lt;em&gt;“Jim”&lt;/em&gt; is high for the word &lt;em&gt;“mountains”&lt;/em&gt;, and the attention score for &lt;em&gt;“Dwight”&lt;/em&gt; is high for the word &lt;em&gt;“beach”&lt;/em&gt;. The model gives more weight to &lt;em&gt;“mountains”&lt;/em&gt; in the context of &lt;em&gt;“Jim”&lt;/em&gt; and more weight to &lt;em&gt;“beach”&lt;/em&gt; in the context of &lt;em&gt;“Dwight”&lt;/em&gt;. At every word, the model asks: “Which other words matter the most to me right now?” With this, the model concludes &lt;em&gt;“Jim Halpert → mountains”&lt;/em&gt; and &lt;em&gt;“Dwight Schrute → beach”&lt;/em&gt;.&lt;/p&gt;
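&lt;p&gt;A tiny, self-contained sketch of this idea in Python: dot products between word vectors stand in for attention scores, and a softmax turns them into weights. The 3-dimensional “embeddings” are hand-made for illustration; real models learn query/key/value projections, which are omitted here.&lt;/p&gt;

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hand-made toy embeddings: "mountains" is placed near "Jim",
# "beach" near "Dwight", so the scores tell the intended story.
words = ["Jim", "loves", "mountains", "Dwight", "beach"]
emb = [
    [1.0, 0.2, 0.0],   # Jim
    [0.1, 1.0, 0.1],   # loves
    [0.9, 0.3, 0.1],   # mountains
    [0.0, 0.2, 1.0],   # Dwight
    [0.1, 0.3, 0.9],   # beach
]

def attention_weights(i):
    # Score word i against every word (itself included), then normalize.
    scores = [sum(a * b for a, b in zip(emb[i], e)) for e in emb]
    return softmax(scores)

jim = attention_weights(0)
print({w: round(s, 2) for w, s in zip(words, jim)})
# "Jim" puts noticeably more weight on "mountains" than on "beach"
```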

&lt;p&gt;&lt;strong&gt;Multi-head attention: Many perspectives one problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-head attention is self-attention done multiple times, with a different aspect as the focus point in each iteration. This brings different patterns into the model. It is similar to different people interpreting the sentence based on their expertise: one focuses on syntax, another on meaning, and so on. Let's use the same example —&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The first head might focus on &lt;em&gt;“Jim Halpert → loves → mountains”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The second head might focus on the symmetry between the two phrases, &lt;em&gt;“Jim Halpert → loves → mountains”&lt;/em&gt; and &lt;em&gt;“Dwight Schrute → loves → beach”&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The third head might focus on the &lt;em&gt;“but”&lt;/em&gt; clause, &lt;em&gt;“but Dwight Schrute loves the beach”&lt;/em&gt;, and the contrast the sentence is trying to bring out.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This can go on: the more heads, the more viewpoints the model has. GPT-3 used 96 attention heads.&lt;/p&gt;
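&lt;p&gt;A rough sketch of the idea in Python: two hand-made “heads” score the same words on different features. In a real transformer each head has learned projection matrices; the per-feature scoring below is a deliberately simplified stand-in.&lt;/p&gt;

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Each word gets two hand-set features; each "head" attends using one.
# Feature 0 loosely encodes "person/place", feature 1 "action-ness".
emb = {"Jim": [1.0, 0.2], "loves": [0.1, 1.0], "mountains": [0.9, 0.1]}

def head_scores(query, feature):
    words = list(emb)
    scores = [emb[query][feature] * emb[w][feature] for w in words]
    return dict(zip(words, softmax(scores)))

# The same query word produces a different attention pattern per head;
# concatenating per-head outputs gives the model several viewpoints.
print(head_scores("Jim", 0))  # head 1: "Jim" lines up with "mountains"
print(head_scores("Jim", 1))  # head 2: "Jim" lines up with "loves"
```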

&lt;p&gt;&lt;strong&gt;Positional Encoding: Giving a sense of order&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The concepts above, by themselves, don't understand the sequence of words in a sentence. The model initially doesn't know the difference between &lt;em&gt;“Jim Halpert loves mountains”&lt;/em&gt; and &lt;em&gt;“mountains love Jim Halpert”&lt;/em&gt;. Positional encoding makes sure that Jim comes first, then the mountains, then Dwight, and finally the beach.&lt;/p&gt;
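&lt;p&gt;The original transformer paper solved this with sinusoidal positional encodings: every position gets a unique vector of sines and cosines that is added to the word embedding. A minimal sketch:&lt;/p&gt;

```python
import math

def positional_encoding(position, d_model):
    # Sinusoidal scheme from "Attention Is All You Need": alternating
    # sine and cosine at geometrically spaced frequencies.
    pe = []
    for i in range(0, d_model, 2):
        angle = position / (10000 ** (i / d_model))
        pe.append(math.sin(angle))
        pe.append(math.cos(angle))
    return pe[:d_model]

# Every position yields a distinct pattern, so once these vectors are
# added to the embeddings, word order is baked into the input itself.
for pos in range(3):
    print(pos, [round(v, 3) for v in positional_encoding(pos, 4)])
```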

&lt;p&gt;&lt;strong&gt;Feedforward layers: Generation is here&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the attention layers, the &lt;em&gt;feedforward&lt;/em&gt; layer registers the learnings as abstractions, which get used for generation, summarization, etc. The feedforward layer leaves out the details and captures the essence: in our example, something as simple as “two people with different tastes”. The model arrives at these abstractions after learning from many examples.&lt;/p&gt;
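&lt;p&gt;Structurally, the feedforward block is small: expand each token's vector into a wider hidden layer, apply a non-linearity, and project back down. The weights below are hand-set purely for illustration; in a trained model they encode the learned abstractions.&lt;/p&gt;

```python
def relu(xs):
    return [max(0.0, x) for x in xs]

def linear(x, weights, bias):
    # One matrix-vector multiply plus bias, in plain Python.
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def feed_forward(x, w1, b1, w2, b2):
    hidden = relu(linear(x, w1, b1))   # expand (2 dims to 3 here)
    return linear(hidden, w2, b2)      # project back down to 2

x = [0.5, -0.2]                            # a token vector after attention
w1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 2 dims to 3
b1 = [0.0, 0.0, 0.0]
w2 = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]]    # 3 dims back to 2
b2 = [0.0, 0.0]
print(feed_forward(x, w1, b1, w2, b2))
```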

&lt;p&gt;&lt;strong&gt;Layer normalization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Training with so many layers can be extremely unstable and unpredictable. A specific set of data can add bias into a layer, which can percolate and grow through the later layers. Layer normalization normalizes the values at each layer. It is applied after the attention and feedforward layers.&lt;/p&gt;
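&lt;p&gt;The operation itself is simple: shift each vector to zero mean and scale it to unit variance. A minimal sketch (the &lt;em&gt;eps&lt;/em&gt; term guards against dividing by zero; trained models also learn a gain and bias, omitted here):&lt;/p&gt;

```python
import math

def layer_norm(x, eps=1e-5):
    # Normalize one token's vector to zero mean and unit variance.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

# A vector with one runaway value gets pulled back to a tame scale,
# so extreme activations cannot snowball through later layers.
print([round(v, 3) for v in layer_norm([1.0, 2.0, 300.0])])
```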

&lt;p&gt;Here is the final sequence in which LLMs are trained —&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy99agqzyokch9qjpqb9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy99agqzyokch9qjpqb9.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;*A residual connection adds a step's output back to its input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion: The Journey Has Just Begun&lt;/strong&gt;&lt;br&gt;
From the obedient rule-followers of Symbolic AI, to machines that can learn from data, and now to models that can generate and create: the journey of Generative AI has been a series of small, powerful steps coming together.&lt;/p&gt;

&lt;p&gt;Transformers have changed the game. With self-attention, multi-head perspectives, and an understanding of context, these models can now write stories, answer questions, generate art, and mimic creativity.&lt;/p&gt;

&lt;p&gt;We are just at the beginning. The systems are evolving fast, and there is a lot more coming next: Agentic AI, humans in the loop, personalized AI and more.&lt;/p&gt;

&lt;p&gt;The story so far has been magical, and what awaits next is yet to be imagined.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
      <category>genai</category>
    </item>
    <item>
      <title>Top 5 Prompt Engineering Techniques for LLMs in 2025</title>
      <dc:creator>neha</dc:creator>
      <pubDate>Thu, 16 Oct 2025 15:40:08 +0000</pubDate>
      <link>https://dev.to/neha/top-5-prompt-engineering-techniques-for-llms-in-2025-1gb4</link>
      <guid>https://dev.to/neha/top-5-prompt-engineering-techniques-for-llms-in-2025-1gb4</guid>
      <description>&lt;p&gt;As LLMs continue to get better, the debate — Is prompt engineering dead or becoming even more important is totally on. With today’s models, the extent to which prompting is needed is drastically reducing, but no matter how smart LLMs become, they still need directions on various ‘whats’, ‘hows’, 'whys’, and 'whens.' The guidelines become important to get the outputs you are looking for. We all have experiences with LLMs where the results are outright unacceptable. *&lt;em&gt;Here are 5 prompt engineering techniques that have always worked and are very much relevant with the newest models, be it GPT, Gemini or Claude : *&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Few Shots&lt;/strong&gt;&lt;br&gt;
Few-shot is a prompting technique where you include examples as part of your prompt. This gives the LLM guidelines on how to approach a task and the format in which you expect the output. Few-shot prompting typically produces far better results than zero-shot prompts (prompts without any examples).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Template&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Task&amp;gt; Use examples below as ref :
&amp;lt;Example1&amp;gt;
&amp;lt;Example2&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
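&lt;p&gt;In practice the template is often assembled in code. A minimal Python sketch; the task text, the example pairs, and the &lt;em&gt;build_few_shot_prompt&lt;/em&gt; helper are all invented for illustration:&lt;/p&gt;

```python
def build_few_shot_prompt(task, examples):
    # Stitch the task and worked examples into one prompt string.
    lines = [task, "Use the examples below as reference:"]
    for i, (inp, out) in enumerate(examples, start=1):
        lines.append(f"Example {i}:")
        lines.append(f"  Input: {inp}")
        lines.append(f"  Output: {out}")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of a review as positive or negative.",
    [("Loved every minute of it.", "positive"),
     ("The plot made no sense.", "negative")],
)
print(prompt)
```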


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqgwy35q8ktq3vwgo8ly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqgwy35q8ktq3vwgo8ly.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How many examples are the right number of examples? — The goal with few-shot prompts is to cover enough so that LLM can respond to a similar new task. Trying out and experimenting is the best approach here. Starting with 2 or 3 examples is a good idea, and depending on the complexity and different use cases that have to be covered, more can be included. More complex tasks, which need to cover a lot of cases, can use techniques like dynamic few-shot. For dynamic few-shot inclusion, refer to —&lt;br&gt;


&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://www.thoughtworks.com/en-us/radar/techniques/dynamic-few-shot-prompting?source=post_page-----f114d4958b4d---------------------------------------" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.thoughtworks.com%2Fen-us%2Fradar%2Ftechniques%2Fdynamic-few-shot-prompting%2Fmeta.jpeg" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://www.thoughtworks.com/en-us/radar/techniques/dynamic-few-shot-prompting?source=post_page-----f114d4958b4d---------------------------------------" rel="noopener noreferrer" class="c-link"&gt;
             Dynamic few-shot prompting | Technology Radar | Thoughtworks United States
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            Dynamic few-shot prompting builds upon few-shot prompting by dynamically including specific examples in the prompt to guide the model's responses. Adjusting the number and relevance [...]
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.thoughtworks.com%2Fetc.clientlibs%2Fthoughtworks%2Fclientlibs%2Fclientlib-site%2Fresources%2Fimages%2Ffavicon.ico"&gt;
          thoughtworks.com
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;





&lt;p&gt;With dynamic few-shot, we provide the LLM just the relevant examples instead of overloading it with many examples and leaving it to figure out which one to use as a reference. The dynamic few-shot approach is great for complex tasks, resulting in better outputs while reducing cost and latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Persona / Role-Based Prompting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Persona or Role-based prompting, you direct the LLM to assume the role of a domain expert or a specific stakeholder, for example, a potential client or an employer, and then respond to the task. This allows the LLM to wear a specific hat and focus on the task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Template&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are an expert in &amp;lt;Persona&amp;gt;. &amp;lt;Task&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3w8zxrx7zap1jj9ro18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3w8zxrx7zap1jj9ro18.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without a persona, LLMs are free to attempt the task from multiple directions, not all of which may be relevant to you. Defining a persona pushes the LLM to attempt the task through a more focused, relevant lens and hence produce better outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Chain of Thought&lt;/strong&gt;&lt;br&gt;
With Chain of Thought (COT) prompting, you prompt the LLM to think step by step while generating the output for a task. This technique is very effective for logic-based tasks, which are usually a weak area for LLMs. COT allows LLMs to break down a problem into smaller problems before solving them. Explicitly prompting the LLM to think through each step lets it engage in logical reasoning at every stage, making the final output literally “add up”. COT is great for tasks which need reasoning and logic, like SQL generation or maths problems, or tasks which need multiple steps to arrive at the correct output. The most common way of making your prompt COT-driven is to ask the LLM to explain or walk through each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Template&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. &amp;lt;Task prompt &amp;gt;. Explain your answer step by step.
2. &amp;lt;Task prompt &amp;gt;. Think through the output step by step.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kgta3lv39j7gqz6a1x8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kgta3lv39j7gqz6a1x8.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While COT is great for complex reasoning tasks, it can also give deeper insight into the LLM's thought process while arriving at the output. Experiment with mathematical problems to see how LLMs tend to give wrong answers without COT.&lt;br&gt;
COT can be combined with few-shot prompting (Few-Shot COT), which usually produces more consistent outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. ReAct — Reasoning and Acting&lt;/strong&gt;&lt;br&gt;
ReAct prompting combines reasoning and action: it allows the LLM to reason and then take an action based on that reasoning. The ReAct technique becomes extremely important when a task needs a series of tools or functions to be invoked. While LLMs now natively support tool/function calling, making ReAct part of your prompt has many advantages: the reasoning and actions become explicit, which makes debugging easier. ReAct combined with COT is a powerful pairing that allows reasoning at each sub-problem level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Template&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Task
Follow this exact loop:
Thought: &amp;lt;brief reasoning about next step&amp;gt;
Action: &amp;lt;ToolName&amp;gt;[Arguments]
Observation: &amp;lt;result returned by the tool&amp;gt;

Tools available
&amp;lt;tools&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Multiple iterations of thought-action-observation can be enforced until the task is completed. I have not included an example screenshot, as the comparison here needs to trigger actual LLM function calling. I will be attaching a sample code link here soon.&lt;/p&gt;
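&lt;p&gt;To make the loop concrete, here is a bare-bones runnable sketch. The &lt;em&gt;call_llm&lt;/em&gt; function is a scripted stand-in for a real model (so there is no API dependency), and the calculator tool is invented; only the thought-action-observation control flow is the point:&lt;/p&gt;

```python
def call_llm(history):
    # Scripted stand-in for a model that follows the ReAct format.
    if "Observation:" in history:
        return "Thought: I have the result.\nFinal Answer: 42"
    return "Thought: I need to compute.\nAction: calculator[6 * 7]"

def calculator(expr):
    a, op, b = expr.split()
    return str(int(a) * int(b)) if op == "*" else "unsupported"

def react_loop(task, max_steps=5):
    tools = {"calculator": calculator}
    history = f"Task: {task}"
    for _ in range(max_steps):
        reply = call_llm(history)
        history += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        if "Action:" in reply:
            # Parse "Action: tool[args]", run the tool, feed the
            # result back as an Observation for the next step.
            action = reply.split("Action:")[1].strip()
            tool, args = action.split("[", 1)
            history += "\nObservation: " + tools[tool](args.rstrip("]"))
    return "no answer"

print(react_loop("What is 6 times 7?"))
```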

&lt;p&gt;&lt;strong&gt;5. Tree of Thoughts (TOT)&lt;/strong&gt;&lt;br&gt;
Tree of Thoughts, or TOT, is a supercharged COT. While COT follows a single chain of reasoning, TOT branches out and weighs multiple possible options before arriving at the output. TOT is extremely useful for tasks which need to consider multiple variations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Template&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Task&amp;gt;
Consider multiple options to approach the task
Option 1: &amp;lt;Option&amp;gt;
Option 2: &amp;lt;Option&amp;gt;
Evaluate the options and decide on the best one.
&amp;lt;Result&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkrjzqtb60vljgaxzcjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkrjzqtb60vljgaxzcjx.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TOT is ideal for examining multiple scenarios, weighing pros and cons, and picking the output which is apt for the given task. TOT combined with Persona/Role-based prompting can also be used to approach a task from multiple personas to get a holistic picture before arriving at the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary Snapshot&lt;/strong&gt;&lt;br&gt;
Below is a snapshot of the 5 prompting techniques, along with when to use each and which combinations yield even better results. Always keep the prompt clear and concise, provide enough context for the LLM to respond well, and always &lt;strong&gt;Verify — Fix — Repeat.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtrogo94t1gx6pahnljs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtrogo94t1gx6pahnljs.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>genai</category>
      <category>promptengineering</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
