<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prabhakar</title>
    <description>The latest articles on DEV Community by Prabhakar (@prabhasg56).</description>
    <link>https://dev.to/prabhasg56</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1184701%2F26de2455-b269-49cd-a97a-91d228eec1f8.JPG</url>
      <title>DEV Community: Prabhakar</title>
      <link>https://dev.to/prabhasg56</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prabhasg56"/>
    <language>en</language>
    <item>
      <title>Advanced Prompting Techniques — Automating Reasoning &amp; Persona-Based AI (Part 3)</title>
      <dc:creator>Prabhakar</dc:creator>
      <pubDate>Sat, 31 Jan 2026 19:41:48 +0000</pubDate>
      <link>https://dev.to/prabhasg56/advanced-prompting-techniques-automating-reasoning-persona-based-aipart-3-1mh6</link>
      <guid>https://dev.to/prabhasg56/advanced-prompting-techniques-automating-reasoning-persona-based-aipart-3-1mh6</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/prabhasg56/what-is-an-llm-how-chatgpt-gpt-ai-language-models-really-work-beginner-guide-1l1a"&gt;Part 1&lt;/a&gt;, we learned what LLMs are and how they work.&lt;br&gt;
In &lt;a href="https://dev.to/prabhasg56/how-tokenization-embeddings-attention-work-in-llms-part-2-35i2"&gt;Part 2&lt;/a&gt;, we explored basic prompting and Chain-of-Thought reasoning.&lt;/p&gt;

&lt;p&gt;Now in Part 3, we go deeper into advanced prompting techniques that help you build Agentic AI systems instead of simple chatbots.&lt;/p&gt;

&lt;p&gt;In this section, we dive into one of the most important skills in the Agentic AI journey: Prompting.&lt;/p&gt;

&lt;p&gt;A well-designed prompt can dramatically improve the quality and accuracy of your LLM’s output.&lt;br&gt;
A bad prompt leads to vague, unpredictable, or wrong answers.&lt;/p&gt;

&lt;p&gt;In this guide, you’ll learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What prompting really means&lt;/li&gt;
&lt;li&gt;Why system prompts matter&lt;/li&gt;
&lt;li&gt;Zero-shot prompting&lt;/li&gt;
&lt;li&gt;Few-shot prompting&lt;/li&gt;
&lt;li&gt;Chain-of-Thought (CoT) prompting&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  1. What Is Prompting?
&lt;/h2&gt;

&lt;p&gt;A prompt is the instruction you give to an LLM to control how it behaves and responds.&lt;/p&gt;

&lt;p&gt;Without a prompt, the model behaves like a free-flowing chatbot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;“Hey, who are you?”
→ It can answer anything: math, jokes, code, history… anything.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s not ideal in real applications.&lt;/p&gt;

&lt;p&gt;Instead, we give a System Prompt — a special instruction that sets context and boundaries.&lt;/p&gt;

&lt;p&gt;Example: System Prompt&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are an expert in mathematics.
Only answer math-related questions.
If the user asks anything else, say "Sorry, I can only help with math."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Zero-Shot Prompting
&lt;/h2&gt;

&lt;p&gt;Zero-shot prompting means giving direct instructions with no examples.&lt;/p&gt;

&lt;p&gt;You tell the model exactly what to do.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from openai import OpenAI

def ask_model(message):

    client = OpenAI(api_key="API_KEY")

    # Zero-shot prompt: give the instruction directly, with no examples
    SYSTEM_PROMPT = (
        "You should answer only coding-related questions. "
        "If the question is not about coding, reply: "
        "'Sorry, I can only answer coding-related questions.'"
    )

    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message}
        ]
    )

    return response.choices[0].message.content

print(ask_model("Hey, I am Prabhas"))
print(ask_model("Please write a program to calculate the factorial of 5 in Java"))

Output:
1. Sorry, I can only answer coding-related questions. Please let me know if you have any questions regarding programming, algorithms, or software development.

2. Here is a simple Java program to calculate the factorial of 5 using a `for` loop:

public class Main {
    public static void main(String[] args) {
        int number = 5;
        long factorial = 1;

        for (int i = 1; i &amp;lt;= number; i++) {
            factorial *= i;
        }

        System.out.println("The factorial of " + number + " is: " + factorial);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;No examples&lt;/li&gt;
&lt;li&gt;Direct command&lt;/li&gt;
&lt;li&gt;Fast, but less accurate than other methods&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Zero-Shot when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The task is simple&lt;/li&gt;
&lt;li&gt;You don’t need much reasoning&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Few-Shot Prompting
&lt;/h2&gt;

&lt;p&gt;Few-shot prompting provides examples along with instructions.&lt;/p&gt;

&lt;p&gt;This teaches the model how you want it to behave.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from openai import OpenAI

client = OpenAI(api_key="API_KEY")

SYSTEM_PROMPT = """
You should only answer coding-related questions.
If the user asks something other than coding, return JSON with:
- code: null
- isCodingQuestion: false

Rules:
- Strictly follow the output in JSON format.

Output format:
{
  "code": "string" or null,
  "isCodingQuestion": boolean
}

Example:
Q. Can you explain a + b whole square?
A. {
  "code": null,
  "isCodingQuestion": false
}
"""

def ask_model(user_message: str) -&amp;gt; str:
    """
    Sends a prompt to the LLM and returns the response.
    """
    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content


# Example usage
print(ask_model("Hey, I am Prabhas"))
print(ask_model("Write a Python function to reverse a string"))

Output:
1. {
  "code": null,
  "isCodingQuestion": false
}

2. {
  "code": "def reverse_string(s): return s[::-1]",
  "isCodingQuestion": true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Uses examples&lt;/li&gt;
&lt;li&gt;Much higher accuracy&lt;/li&gt;
&lt;li&gt;Widely used in production systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Few-Shot when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want consistency&lt;/li&gt;
&lt;li&gt;You need better control over answers&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Chain-of-Thought (CoT) Prompting
&lt;/h2&gt;

&lt;p&gt;This is my personal favorite 💡&lt;/p&gt;

&lt;p&gt;Chain-of-Thought prompting forces the model to think step-by-step before answering.&lt;/p&gt;

&lt;p&gt;Instead of jumping straight to the answer, the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analyzes the problem&lt;/li&gt;
&lt;li&gt;Plans the solution&lt;/li&gt;
&lt;li&gt;Executes step by step&lt;/li&gt;
&lt;li&gt;Produces the final output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example System Prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json 
from openai import OpenAI

client = OpenAI(api_key="API_KEY")

# Chain-of-Thought prompt: asking model to think step by step
SYSTEM_PROMPT = """
You are a helpful coding assistant. When solving problems, think step by step and explain your reasoning clearly.

Follow this format:
1. Understand the problem
2. Break it down into steps
3. Write the solution
4. Explain the approach

Always provide code in JSON format:
{
  "thinking": "Your step-by-step reasoning",
  "code": "Your code solution",
  "explanation": "Brief explanation of the solution"
}
"""

def ask_model_with_cot(user_message: str) -&amp;gt; str:
    """
    Sends a Chain-of-Thought style prompt to the LLM and returns the response.
    """
    response = client.chat.completions.create(
        model="gemini-3-flash-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content


# Example:
response = ask_model_with_cot("How do I reverse a string in Python?")
result = json.loads(response)  # the helper already returns the message content

print("Thinking:", result["thinking"])
print("\nCode:")
print(result["code"])
print("\nExplanation:", result["explanation"])


# Output:

Thinking: To reverse a string in Python, the most efficient and common method is using string slicing. Since strings are immutable, we create a new string that traverses the original string from the last character to the first. 

1. **Slicing Method**: Use the syntax `[start:stop:step]`. By setting the step to `-1`, Python moves through the string backwards.
2. **Built-in Function**: Use `reversed()` which returns an iterator, and then `join()` it back into a string.
3. **Looping**: Manually build a new string by prepending characters (though this is less efficient).

Code:
# Method 1: String Slicing (Recommended)
original_string = "Hello World"
reversed_string = original_string[::-1]
print(f"Slicing method: {reversed_string}")

# Method 2: reversed() and join()
reversed_string_2 = "".join(reversed(original_string))
print(f"Reversed function method: {reversed_string_2}")

Explanation: The most Pythonic way is `string[::-1]`. This slicing notation starts at the end of the string and moves toward the beginning with a step of -1. Alternatively, `"".join(reversed(string))` uses a built-in function to iterate backwards and joins the characters into a new string. 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why CoT Is Powerful&lt;/strong&gt;&lt;br&gt;
Instead of this: "Here’s the answer."&lt;/p&gt;

&lt;p&gt;You get this: “Let me think… first I’ll analyze the problem… then plan… then solve…”&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More human-like&lt;/li&gt;
&lt;li&gt;Higher reasoning accuracy&lt;/li&gt;
&lt;li&gt;Used in reasoning models such as OpenAI o3 and DeepSeek-R1&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  5. Automating Chain-of-Thought (Agent-Style Reasoning)
&lt;/h2&gt;

&lt;p&gt;In the CoT part, we manually asked the model to “think step by step.”&lt;br&gt;
That works — but it’s not scalable.&lt;/p&gt;

&lt;p&gt;The real problem is this:&lt;br&gt;
You can’t keep adding prompts again and again by hand.&lt;/p&gt;

&lt;p&gt;So the next step is obvious… &lt;strong&gt;automate it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of treating the LLM like a one-shot chatbot, we turn it into a small reasoning engine that plans, thinks, and then answers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Core Idea&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We give the model a fixed reasoning framework and tell it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always analyze first&lt;/li&gt;
&lt;li&gt;Always plan before answering&lt;/li&gt;
&lt;li&gt;Always justify the solution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create an Auto-Reasoning System Prompt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s an automated Chain-of-Thought prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SYSTEM_PROMPT = """
You are an intelligent reasoning engine. For any problem given, follow this automatic reasoning process:

REASONING FRAMEWORK:
1. Problem Analysis: Break down the problem into components
2. Information Gathering: Identify what information is needed
3. Hypothesis Formation: Generate potential solutions
4. Evaluation: Compare solutions against criteria
5. Conclusion: Recommend the best approach

RESPONSE FORMAT:
{
    "problem": "restate the problem",
    "analysis": {
        "key_components": ["component 1", "component 2"],
        "constraints": ["constraint 1"],
        "requirements": ["requirement 1"]
    },
    "reasoning_steps": [
        {
            "step": 1,
            "action": "what to consider",
            "finding": "what we discover"
        }
    ],
    "solution": {
        "approach": "chosen approach",
        "code": "implementation",
        "reasoning": "why this approach is best"
    },
    "alternatives": [
        {
            "approach": "alternative approach",
            "pros": ["pro 1"],
            "cons": ["con 1"]
        }
    ]
}

IMPORTANT: Always show your reasoning explicitly. Do not jump to conclusions.
"""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Call the Model from Python&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now plug this prompt into a program:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from openai import OpenAI

client = OpenAI(api_key="API_KEY")

response = client.chat.completions.create(
    model="gemini-3-flash-preview",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "How do I optimize React Native App?"}
    ]
)

print(response.choices[0].message.content)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens internally: the model&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Restates the problem&lt;/li&gt;
&lt;li&gt;Breaks it into parts &lt;/li&gt;
&lt;li&gt;Generates multiple solution ideas &lt;/li&gt;
&lt;li&gt;Evaluates them &lt;/li&gt;
&lt;li&gt;Then gives the final answer&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With this pattern, your LLM now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Thinks before answering&lt;/li&gt;
&lt;li&gt;Plans instead of guessing&lt;/li&gt;
&lt;li&gt;Evaluates instead of reacting&lt;/li&gt;
&lt;/ul&gt;
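
&lt;p&gt;A minimal sketch of consuming such a response, assuming the model honors the JSON format above (the response here is hard-coded, hypothetical output rather than a real API call):&lt;/p&gt;

```python
import json

# Hypothetical model output that follows the RESPONSE FORMAT above
raw_response = """
{
    "problem": "Optimize a React Native app",
    "analysis": {"key_components": ["rendering", "bundle size"],
                 "constraints": ["low-end devices"],
                 "requirements": ["smooth 60 fps UI"]},
    "reasoning_steps": [{"step": 1, "action": "profile the app",
                         "finding": "large lists re-render too often"}],
    "solution": {"approach": "memoize list items",
                 "code": "const Row = React.memo(RowImpl);",
                 "reasoning": "avoids wasted re-renders"},
    "alternatives": []
}
"""

result = json.loads(raw_response)
print("Problem:", result["problem"])
for step in result["reasoning_steps"]:
    print(f"Step {step['step']}: {step['action']} -> {step['finding']}")
print("Approach:", result["solution"]["approach"])
```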

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Prompting isn’t about clever wording.&lt;br&gt;
It’s about designing thought processes.&lt;/p&gt;

&lt;p&gt;When you move from “Give me the answer” to&lt;br&gt;
“Here’s how to analyze, reason, and decide,”&lt;br&gt;
you unlock a completely different level of output quality.&lt;/p&gt;

&lt;p&gt;Manual Chain-of-Thought is a great start.&lt;br&gt;
But automated reasoning is where real systems are built.&lt;/p&gt;

&lt;p&gt;So next time you write a prompt, don’t ask:&lt;/p&gt;

&lt;p&gt;❌ What’s the answer?&lt;/p&gt;

&lt;p&gt;Ask instead:&lt;/p&gt;

&lt;p&gt;✅ How should the model think?&lt;/p&gt;

&lt;p&gt;Because the future of AI isn’t better responses —&lt;br&gt;
it’s better reasoning.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How Tokenization, Embeddings &amp; Attention Work in LLMs (Part 2)</title>
      <dc:creator>Prabhakar</dc:creator>
      <pubDate>Mon, 26 Jan 2026 16:14:10 +0000</pubDate>
      <link>https://dev.to/prabhasg56/how-tokenization-embeddings-attention-work-in-llms-part-2-35i2</link>
      <guid>https://dev.to/prabhasg56/how-tokenization-embeddings-attention-work-in-llms-part-2-35i2</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/prabhasg56/what-is-an-llm-how-chatgpt-gpt-ai-language-models-really-work-beginner-guide-1l1a"&gt;Part 1&lt;/a&gt;, we learned what an LLM is and how it generates text.&lt;br&gt;
Now let’s go deeper into how models like ChatGPT actually process language internally.&lt;/p&gt;

&lt;p&gt;This article covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What a token really is&lt;/li&gt;
&lt;li&gt;How tokenization works&lt;/li&gt;
&lt;li&gt;Encoding &amp;amp; decoding with Python&lt;/li&gt;
&lt;li&gt;Vector embeddings&lt;/li&gt;
&lt;li&gt;Positional encoding&lt;/li&gt;
&lt;li&gt;Self-attention &amp;amp; multi-head attention&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  1. What Is a Token?
&lt;/h2&gt;

&lt;p&gt;A token is a piece of text converted into a number that the model understands.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A → 1  
B → 2  
C → 3 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if you type:&lt;br&gt;
B D E → it becomes → 2 4 5&lt;/p&gt;

&lt;p&gt;LLMs don’t understand words.&lt;br&gt;
They understand numbers.&lt;/p&gt;

&lt;p&gt;This process of converting text → numbers is called tokenization.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. What Is Tokenization?
&lt;/h2&gt;

&lt;p&gt;Tokenization means:&lt;/p&gt;

&lt;p&gt;Converting user input into a sequence of numbers that the model can process.&lt;/p&gt;

&lt;p&gt;Workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Text → Tokens → Model → Tokens → Text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;Input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Hey there, my name is Piyush"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Internally becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[20264, 1428, 225216, 3274, ...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These numbers go into the transformer, which predicts the next token again and again.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Encoding &amp;amp; Decoding Tokens in Python
&lt;/h2&gt;

&lt;p&gt;Using the tiktoken library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import tiktoken

encoder = tiktoken.encoding_for_model("gpt-4o")

text = "Hey there, my name is Prabhas Kumar"
tokens = encoder.encode(text)

print(tokens)

decoded = encoder.decode(tokens)
print(decoded)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;encode() → converts text → tokens&lt;/li&gt;
&lt;li&gt;decode() → converts tokens → readable text&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mirrors the first and last steps of what ChatGPT does internally: encode the input, predict tokens, decode the output.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Vector Embeddings – Giving Words Meaning
&lt;/h2&gt;

&lt;p&gt;Tokens alone are just numbers.&lt;br&gt;
Embeddings give them meaning.&lt;/p&gt;

&lt;p&gt;An embedding is a vector (list of numbers) that represents the semantic meaning of a word.&lt;/p&gt;

&lt;p&gt;Example idea:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dog and Cat → close together&lt;/li&gt;
&lt;li&gt;Paris and Delhi → close together&lt;/li&gt;
&lt;li&gt;Eiffel Tower and India Gate → close together&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Words with similar meaning are placed near each other in vector space.&lt;/p&gt;

&lt;p&gt;That’s how LLMs understand relationships like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Paris → Eiffel Tower  
India → Taj Mahal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is called semantic similarity.&lt;/p&gt;
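
&lt;p&gt;As a toy sketch (the 3-dimensional vectors below are made up; real embeddings have hundreds or thousands of dimensions), closeness in vector space is usually measured with cosine similarity:&lt;/p&gt;

```python
import math

# Made-up toy embeddings; real models learn these during training
embeddings = {
    "dog":   [0.90, 0.80, 0.10],
    "cat":   [0.85, 0.75, 0.15],
    "paris": [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means "same direction"
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["dog"], embeddings["cat"]))    # close to 1
print(cosine_similarity(embeddings["dog"], embeddings["paris"]))  # much lower
```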

&lt;h2&gt;
  
  
  5. Positional Encoding – Order Matters
&lt;/h2&gt;

&lt;p&gt;Consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Dog ate cat"&lt;/li&gt;
&lt;li&gt;"Cat ate dog"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same words.&lt;br&gt;
Different meaning.&lt;/p&gt;

&lt;p&gt;Embeddings alone don’t know position.&lt;br&gt;
So the model adds positional encoding.&lt;/p&gt;

&lt;p&gt;Positional encoding tells the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This word is first&lt;/li&gt;
&lt;li&gt;This word is second&lt;/li&gt;
&lt;li&gt;This word is third&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the model understands order and structure.&lt;/p&gt;
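
&lt;p&gt;One common scheme is the sinusoidal encoding from the original Transformer paper, "Attention Is All You Need"; a small sketch:&lt;/p&gt;

```python
import math

def positional_encoding(position, d_model=8):
    # Sinusoidal scheme: even dimensions use sin, odd use cos,
    # at geometrically spaced frequencies, so every position
    # gets a distinct vector
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

# After adding these to the word embeddings, "Dog ate cat" and
# "Cat ate dog" no longer look identical to the model
for pos in range(3):
    print(pos, [round(x, 3) for x in positional_encoding(pos)])
```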

&lt;h2&gt;
  
  
  6. Self-Attention – Words Talking to Each Other
&lt;/h2&gt;

&lt;p&gt;Self-attention lets tokens influence each other.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"river bank"&lt;/li&gt;
&lt;li&gt;"ICICI bank"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same word: bank&lt;br&gt;
Different meaning.&lt;/p&gt;

&lt;p&gt;Self-attention allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"river" → changes meaning of "bank"&lt;/li&gt;
&lt;li&gt;"ICICI" → changes meaning of "bank"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So context decides meaning.&lt;/p&gt;
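
&lt;p&gt;Here is a toy sketch of scaled dot-product attention in pure Python, simplified so that each vector acts as its own query, key, and value (real models learn separate projection matrices for each):&lt;/p&gt;

```python
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    # Each output is a weighted mix of all input vectors,
    # weighted by how strongly the query matches each key
    d = len(vectors[0])
    output = []
    for q in vectors:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, vectors))
                 for i in range(d)]
        output.append(mixed)
    return output

# Toy 2-d vectors: "river" pulls the representation of "bank"
# toward its own meaning, so context reshapes the word
tokens = [[1.0, 0.0],   # "river"
          [0.5, 0.5]]   # "bank"
out = self_attention(tokens)
print(out[1])  # contextualized "bank", nudged toward "river"
```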

&lt;h2&gt;
  
  
  7. Multi-Head Attention – Looking at Many Angles
&lt;/h2&gt;

&lt;p&gt;Multi-head attention means the model looks at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meaning&lt;/li&gt;
&lt;li&gt;Position&lt;/li&gt;
&lt;li&gt;Context&lt;/li&gt;
&lt;li&gt;Relationship&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the same time.&lt;/p&gt;

&lt;p&gt;Like a human observing many things at once.&lt;/p&gt;

&lt;p&gt;This gives the model a deep understanding of the sentence.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Final Flow of an LLM
&lt;/h2&gt;

&lt;p&gt;User enters text&lt;/p&gt;

&lt;p&gt;Tokenization → numbers&lt;/p&gt;

&lt;p&gt;Embeddings → meaning&lt;/p&gt;

&lt;p&gt;Positional encoding → order&lt;/p&gt;

&lt;p&gt;Self + Multi-head attention → context&lt;/p&gt;

&lt;p&gt;Linear + Softmax → probability of next token&lt;/p&gt;

&lt;p&gt;Decode → readable output&lt;/p&gt;
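
&lt;p&gt;The whole flow is an autoregressive loop. As a sketch, with a stand-in &lt;code&gt;toy_model&lt;/code&gt; that replays a canned sequence in place of the transformer stack (all names and token ids here are made up):&lt;/p&gt;

```python
# A stand-in "model": maps the tokens so far to the next token id.
# In a real LLM this is the full transformer pipeline described above.
CANNED = {(): 7, (7,): 3, (7, 3): 9}
END_TOKEN = 9
VOCAB = {7: "Hello", 3: " world"}

def toy_model(tokens):
    return CANNED[tuple(tokens)]

def generate():
    tokens = []
    # Autoregressive loop: predict one token, append it, repeat
    while True:
        next_token = toy_model(tokens)
        if next_token == END_TOKEN:
            break
        tokens.append(next_token)
    # Decode: token ids back to readable text
    return "".join(VOCAB[t] for t in tokens)

print(generate())  # Hello world
```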

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;LLMs don’t know language.&lt;br&gt;
They predict tokens based on probability and patterns.&lt;/p&gt;

&lt;p&gt;Yet the result feels intelligent because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tokens carry meaning (embeddings)&lt;/li&gt;
&lt;li&gt;Order is preserved (positional encoding)&lt;/li&gt;
&lt;li&gt;Context is understood (attention)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s the magic behind ChatGPT.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>programming</category>
      <category>mobile</category>
    </item>
    <item>
      <title>How Tokenization, Embeddings &amp; Attention Work in LLMs (Part 2)</title>
      <dc:creator>Prabhakar</dc:creator>
      <pubDate>Sat, 24 Jan 2026 18:33:01 +0000</pubDate>
      <link>https://dev.to/prabhasg56/how-tokenization-embeddings-attention-work-in-llms-part-2-3i0c</link>
      <guid>https://dev.to/prabhasg56/how-tokenization-embeddings-attention-work-in-llms-part-2-3i0c</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/prabhasg56/what-is-an-llm-how-chatgpt-gpt-ai-language-models-really-work-beginner-guide-1l1a"&gt;Part 1&lt;/a&gt;, we learned what an LLM is and how it generates text.&lt;br&gt;
Now let’s go deeper into how models like ChatGPT actually process language internally.&lt;/p&gt;

&lt;p&gt;This article covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What a token really is&lt;/li&gt;
&lt;li&gt;How tokenization works&lt;/li&gt;
&lt;li&gt;Encoding &amp;amp; decoding with Python&lt;/li&gt;
&lt;li&gt;Vector embeddings&lt;/li&gt;
&lt;li&gt;Positional encoding&lt;/li&gt;
&lt;li&gt;Self-attention &amp;amp; multi-head attention&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  1. What Is a Token?
&lt;/h2&gt;

&lt;p&gt;A token is a piece of text converted into a number that the model understands.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A → 1  
B → 2  
C → 3 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if you type:&lt;br&gt;
B D E → it becomes → 2 4 5&lt;/p&gt;

&lt;p&gt;LLMs don’t understand words.&lt;br&gt;
They understand numbers.&lt;/p&gt;

&lt;p&gt;This process of converting text → numbers is called tokenization.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. What Is Tokenization?
&lt;/h2&gt;

&lt;p&gt;Tokenization means:&lt;/p&gt;

&lt;p&gt;Converting user input into a sequence of numbers that the model can process.&lt;/p&gt;

&lt;p&gt;Workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Text → Tokens → Model → Tokens → Text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;Input:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Hey there, my name is Piyush"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Internally becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[20264, 1428, 225216, 3274, ...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These numbers go into the transformer, which predicts the next token again and again.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Note&lt;/strong&gt;: Every model has its own mechanism for generating tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Encoding &amp;amp; Decoding Tokens in Python
&lt;/h2&gt;

&lt;p&gt;Using the tiktoken library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import tiktoken

encoder = tiktoken.encoding_for_model("gpt-4o")

text = "Hey there, my name is Prabhas Kumar"
tokens = encoder.encode(text)

print(tokens)

decoded = encoder.decode(tokens)
print(decoded)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;encode() → converts text → tokens&lt;/li&gt;
&lt;li&gt;decode() → converts tokens → readable text&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mirrors the first and last steps of what ChatGPT does internally: encode the input, predict tokens, decode the output.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Vector Embeddings – Giving Words Meaning
&lt;/h2&gt;

&lt;p&gt;Tokens alone are just numbers.&lt;br&gt;
Embeddings give them meaning.&lt;/p&gt;

&lt;p&gt;An embedding is a vector (list of numbers) that represents the semantic meaning of a word.&lt;/p&gt;

&lt;p&gt;Example idea:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dog and Cat → close together&lt;/li&gt;
&lt;li&gt;Paris and Delhi → close together&lt;/li&gt;
&lt;li&gt;Eiffel Tower and India Gate → close together&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Words with similar meaning are placed near each other in vector space.&lt;/p&gt;

&lt;p&gt;That’s how LLMs understand relationships like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Paris → Eiffel Tower  
India → Taj Mahal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is called semantic similarity.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Positional Encoding – Order Matters
&lt;/h2&gt;

&lt;p&gt;Consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Dog ate cat"&lt;/li&gt;
&lt;li&gt;"Cat ate dog"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same words.&lt;br&gt;
Different meaning.&lt;/p&gt;

&lt;p&gt;Embeddings alone don’t know position.&lt;br&gt;
So the model adds positional encoding.&lt;/p&gt;

&lt;p&gt;Positional encoding tells the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This word is first&lt;/li&gt;
&lt;li&gt;This word is second&lt;/li&gt;
&lt;li&gt;This word is third&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the model understands order and structure.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Self-Attention – Words Talking to Each Other
&lt;/h2&gt;

&lt;p&gt;Self-attention lets tokens influence each other.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"river bank"&lt;/li&gt;
&lt;li&gt;"ICICI bank"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same word: bank&lt;br&gt;
Different meaning.&lt;/p&gt;

&lt;p&gt;Self-attention allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"river" → changes meaning of "bank"&lt;/li&gt;
&lt;li&gt;"ICICI" → changes meaning of "bank"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So context decides meaning.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Multi-Head Attention – Looking at Many Angles
&lt;/h2&gt;

&lt;p&gt;Multi-head attention means the model looks at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Meaning&lt;/li&gt;
&lt;li&gt;Position&lt;/li&gt;
&lt;li&gt;Context&lt;/li&gt;
&lt;li&gt;Relationship&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the same time.&lt;/p&gt;

&lt;p&gt;Like a human observing many things at once.&lt;/p&gt;

&lt;p&gt;This gives the model a deep understanding of the sentence.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Final Flow of an LLM
&lt;/h2&gt;

&lt;p&gt;User enters text&lt;/p&gt;

&lt;p&gt;Tokenization → numbers&lt;/p&gt;

&lt;p&gt;Embeddings → meaning&lt;/p&gt;

&lt;p&gt;Positional encoding → order&lt;/p&gt;

&lt;p&gt;Self + Multi-head attention → context&lt;/p&gt;

&lt;p&gt;Linear + Softmax → probability of next token&lt;/p&gt;

&lt;p&gt;Decode → readable output&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;LLMs don’t know language.&lt;br&gt;
They predict tokens based on probability and patterns.&lt;/p&gt;

&lt;p&gt;Yet the result feels intelligent because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tokens carry meaning (embeddings)&lt;/li&gt;
&lt;li&gt;Order is preserved (positional encoding)&lt;/li&gt;
&lt;li&gt;Context is understood (attention)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s the magic behind ChatGPT.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>programming</category>
      <category>mobile</category>
    </item>
    <item>
      <title>What Is an LLM? How ChatGPT, GPT &amp; AI Language Models Really Work (Part 1)</title>
      <dc:creator>Prabhakar</dc:creator>
      <pubDate>Sun, 18 Jan 2026 17:12:28 +0000</pubDate>
      <link>https://dev.to/prabhasg56/what-is-an-llm-how-chatgpt-gpt-ai-language-models-really-work-beginner-guide-1l1a</link>
      <guid>https://dev.to/prabhasg56/what-is-an-llm-how-chatgpt-gpt-ai-language-models-really-work-beginner-guide-1l1a</guid>
      <description>&lt;h1&gt;
  
  
  How Large Language Models (LLMs) Work — A Beginner-Friendly Guide
&lt;/h1&gt;

&lt;p&gt;Learn how Large Language Models (LLMs) like ChatGPT work. Understand tokens, GPT, transformers, and how AI generates human-like text in simple terms.&lt;/p&gt;

&lt;p&gt;If you’ve used ChatGPT, Gemini, or Claude, you’ve already interacted with a &lt;strong&gt;Large Language Model (LLM)&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;It feels like chatting with a human, but behind the scenes, it’s all &lt;strong&gt;math, data, tokens, and probabilities&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this article, you will learn:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What an LLM is
&lt;/li&gt;
&lt;li&gt;How LLMs are trained
&lt;/li&gt;
&lt;li&gt;What tokens are and how they work
&lt;/li&gt;
&lt;li&gt;The meaning of GPT
&lt;/li&gt;
&lt;li&gt;How LLMs generate answers step by step
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  1. What Is an LLM?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;LLM = Large Language Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An LLM is an AI system trained to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand human language
&lt;/li&gt;
&lt;li&gt;Generate human-like responses
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Explain recursion like I’m 10.”  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;LLMs allow humans to talk to computers using &lt;strong&gt;natural language instead of code&lt;/strong&gt;, making it easier for anyone to interact with AI systems without learning programming.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. How Are LLMs Trained?
&lt;/h2&gt;

&lt;p&gt;LLMs are trained on massive datasets, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Books
&lt;/li&gt;
&lt;li&gt;Blogs
&lt;/li&gt;
&lt;li&gt;Articles
&lt;/li&gt;
&lt;li&gt;Code repositories
&lt;/li&gt;
&lt;li&gt;Web content
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unlike databases, LLMs &lt;strong&gt;don’t store facts verbatim&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
Instead, they learn &lt;strong&gt;patterns, relationships, and probabilities in language&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Think of it like humans learning a language&lt;/strong&gt; — the more you read, the better you understand how sentences are structured and how to respond appropriately.&lt;/p&gt;




&lt;h2&gt;3. Tokens: How AI Understands Text&lt;/h2&gt;

&lt;p&gt;Computers don’t understand words — they understand &lt;strong&gt;numbers&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;When you type:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Hello world”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It might become something like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;[15496, 995]&lt;/code&gt;  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This process is called &lt;strong&gt;tokenization&lt;/strong&gt;, and it’s how LLMs convert text into something they can process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow of AI Text Generation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Text → Tokens → Model → Tokens → Text&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tokenization:&lt;/strong&gt; Converts text into numbers (tokens).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model processing:&lt;/strong&gt; Predicts the next token based on input and learned patterns.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detokenization:&lt;/strong&gt; Converts the output tokens back into human-readable text.
&lt;/li&gt;
&lt;/ul&gt;
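
&lt;p&gt;The three steps above can be sketched with a toy vocabulary. Real tokenizers such as GPT’s BPE use learned subword vocabularies with tens of thousands of entries; the word-to-ID mapping below is invented purely for illustration.&lt;/p&gt;

```python
# Toy tokenizer: a hand-made word -> ID table stands in for a real
# learned subword vocabulary (e.g. BPE).
VOCAB = {"Hello": 15496, "world": 995, "the": 262, "sky": 6766, "is": 318}
ID_TO_WORD = {i: w for w, i in VOCAB.items()}

def tokenize(text):
    """Tokenization: convert text into numbers (tokens)."""
    return [VOCAB[word] for word in text.split()]

def detokenize(ids):
    """Detokenization: convert token IDs back into readable text."""
    return " ".join(ID_TO_WORD[i] for i in ids)

tokens = tokenize("Hello world")
print(tokens)              # [15496, 995]
print(detokenize(tokens))  # Hello world
```

&lt;p&gt;The model only ever sees the number sequence in the middle; the text on both ends exists for humans.&lt;/p&gt;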




&lt;h2&gt;4. Input Tokens vs Output Tokens&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input Tokens:&lt;/strong&gt; The message or question you send to the AI.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Tokens:&lt;/strong&gt; The AI’s generated response.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI predicts &lt;strong&gt;one token at a time&lt;/strong&gt; and continues until a complete response is generated — similar to an advanced &lt;strong&gt;autocomplete&lt;/strong&gt; system.&lt;/p&gt;




&lt;h2&gt;5. What Does GPT Mean?&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GPT = Generative Pretrained Transformer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Breaking it down:&lt;/p&gt;

&lt;h3&gt;5.1 Generative&lt;/h3&gt;

&lt;p&gt;LLMs &lt;strong&gt;generate responses on the fly&lt;/strong&gt; rather than retrieving pre-written answers from a database.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You: “Call me Captain Dev”&lt;br&gt;&lt;br&gt;
LLM: “Sure, Captain Dev!”  &lt;/p&gt;

&lt;p&gt;This response is &lt;strong&gt;original&lt;/strong&gt;, created by the AI based on patterns it learned — it didn’t exist anywhere before.&lt;/p&gt;

&lt;h3&gt;5.2 Pretrained&lt;/h3&gt;

&lt;p&gt;Before interacting with users, LLMs are trained on &lt;strong&gt;large datasets&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;Like humans, they &lt;strong&gt;learn first, then generate content&lt;/strong&gt;. This pretraining allows them to answer questions accurately and contextually.&lt;/p&gt;

&lt;h3&gt;5.3 Transformer&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;transformer&lt;/strong&gt; is the neural network architecture powering modern LLMs.  &lt;/p&gt;

&lt;p&gt;It allows the model to &lt;strong&gt;process context effectively&lt;/strong&gt; and predict the next token accurately.  &lt;/p&gt;

&lt;p&gt;All major LLMs use transformer-based architectures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GPT (OpenAI)
&lt;/li&gt;
&lt;li&gt;Gemini (Google)
&lt;/li&gt;
&lt;li&gt;Claude (Anthropic)
&lt;/li&gt;
&lt;li&gt;Mistral
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, all of these models are &lt;strong&gt;generative, pretrained, and transformer-based&lt;/strong&gt; — the three ideas behind the name GPT.&lt;/p&gt;




&lt;h2&gt;6. How LLMs Generate Answers Step by Step&lt;/h2&gt;

&lt;p&gt;Think of an LLM as a &lt;strong&gt;super-smart autocomplete system&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You type: “The sky is…”
&lt;/li&gt;
&lt;li&gt;The model predicts: “blue”
&lt;/li&gt;
&lt;li&gt;Then predicts the next token: “today”
&lt;/li&gt;
&lt;li&gt;It continues predicting &lt;strong&gt;one token at a time&lt;/strong&gt; until the full response is generated
&lt;/li&gt;
&lt;/ol&gt;
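
&lt;p&gt;The loop above can be sketched with a toy “most likely next word” table standing in for the trained model. A real LLM computes a probability for every token in its vocabulary at each step using a transformer; the lookup table here is invented for illustration.&lt;/p&gt;

```python
# Toy next-token predictor: a lookup table replaces the transformer.
# "<end>" marks where a real model would emit a stop token.
NEXT_TOKEN = {
    "The": "sky",
    "sky": "is",
    "is": "blue",
    "blue": "today",
    "today": "<end>",
}

def generate(prompt):
    """Greedily predict one token at a time until the end marker."""
    tokens = prompt.split()
    while True:
        next_tok = NEXT_TOKEN.get(tokens[-1], "<end>")
        if next_tok == "<end>":
            return " ".join(tokens)
        tokens.append(next_tok)

print(generate("The sky is"))  # The sky is blue today
```

&lt;p&gt;Notice that each prediction depends only on what has been generated so far — this is exactly the “advanced autocomplete” behaviour described above.&lt;/p&gt;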

&lt;p&gt;This token-by-token generation allows LLMs to create &lt;strong&gt;long, coherent responses&lt;/strong&gt; based on context.&lt;/p&gt;




&lt;h2&gt;7. Real-World Example&lt;/h2&gt;

&lt;p&gt;Let’s say you ask an LLM:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Write a short introduction about yourself for a portfolio website.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The process happens like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Input:&lt;/strong&gt; The AI receives your text (input tokens).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prediction:&lt;/strong&gt; The model predicts the next word or token based on pretraining and context.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iteration:&lt;/strong&gt; It continues token by token until the response is complete.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output:&lt;/strong&gt; Detokenization converts the tokens into readable text for you to copy and use.&lt;/li&gt;
&lt;/ol&gt;
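
&lt;p&gt;The four steps can be strung together in one toy pipeline. The vocabulary and the next-token table are invented for illustration — a real LLM learns both from its training data — but the flow of input, prediction, iteration, and output is the same.&lt;/p&gt;

```python
# Toy end-to-end pipeline: Input -> Prediction -> Iteration -> Output.
VOCAB = {"The": 1, "sky": 2, "is": 3, "blue": 4, "today": 5, "<end>": 0}
ID_TO_WORD = {i: w for w, i in VOCAB.items()}
NEXT_ID = {1: 2, 2: 3, 3: 4, 4: 5, 5: 0}  # stand-in for the trained model

def run_llm(prompt):
    ids = [VOCAB[w] for w in prompt.split()]     # 1. Input: text -> tokens
    while True:
        next_id = NEXT_ID.get(ids[-1], 0)        # 2. Prediction: next token
        if next_id == 0:                         # 3. Iteration: stop at <end>
            break
        ids.append(next_id)
    return " ".join(ID_TO_WORD[i] for i in ids)  # 4. Output: detokenize

print(run_llm("The sky"))  # The sky is blue today
```

&lt;p&gt;Swap the lookup table for a transformer trained on billions of tokens and this same loop is, conceptually, how the AI drafts your portfolio introduction.&lt;/p&gt;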

&lt;p&gt;This is why AI can &lt;strong&gt;generate blog posts, code snippets, summaries, and more&lt;/strong&gt; instantly.&lt;/p&gt;




&lt;h2&gt;8. Final Thoughts&lt;/h2&gt;

&lt;p&gt;LLMs are transforming how humans interact with machines.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instead of humans learning programming languages, machines are learning &lt;strong&gt;human language&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;LLMs are &lt;strong&gt;tools for communication, automation, and creative generation&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;This is just the beginning of what AI can do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With a better understanding of &lt;strong&gt;tokens, GPT, and transformers&lt;/strong&gt;, you can now appreciate &lt;strong&gt;how AI generates intelligent, human-like responses&lt;/strong&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Next in the Series:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Deep Dive into Tokens, Embeddings, and Vector Search in LLMs&lt;/em&gt; — Stay tuned for the next article!&lt;/p&gt;




</description>
      <category>ai</category>
      <category>llm</category>
      <category>chatgpt</category>
      <category>generativeai</category>
    </item>
  </channel>
</rss>
