<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Abheeshta P</title>
    <description>The latest articles on DEV Community by Abheeshta P (@abheeshta).</description>
    <link>https://dev.to/abheeshta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2326605%2F7dcf15c9-56b3-4ff8-9560-fe27617b1e5a.jpeg</url>
      <title>DEV Community: Abheeshta P</title>
      <link>https://dev.to/abheeshta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abheeshta"/>
    <language>en</language>
    <item>
      <title>Classification vs Object detection vs Segmentation</title>
      <dc:creator>Abheeshta P</dc:creator>
      <pubDate>Sun, 23 Feb 2025 13:24:15 +0000</pubDate>
      <link>https://dev.to/abheeshta/classification-vs-object-detection-vs-segmentation-28k5</link>
      <guid>https://dev.to/abheeshta/classification-vs-object-detection-vs-segmentation-28k5</guid>
      <description>&lt;h2&gt;
  
  
  Computer vision classification
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Classification&lt;/strong&gt; 🎯
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Categorizes an image into predefined classes.
&lt;/li&gt;
&lt;li&gt;For a single class, this reduces to a &lt;strong&gt;yes/no&lt;/strong&gt; answer (the image belongs to the class or not).
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Object Detection&lt;/strong&gt; 🔍
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Draws a &lt;strong&gt;bounding box&lt;/strong&gt; around detected objects.
&lt;/li&gt;
&lt;li&gt;Runs &lt;strong&gt;sub-classification&lt;/strong&gt; on each detected region to label it.
&lt;/li&gt;
&lt;li&gt;Made real-time by single-shot detectors such as &lt;strong&gt;YOLO&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Segmentation&lt;/strong&gt; ✂️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No bounding boxes; instead, it creates &lt;strong&gt;masks&lt;/strong&gt; that follow each object's shape.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Types of Segmentation:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image Segmentation&lt;/strong&gt; 🖼: Uses abstract contour-based masking.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic Segmentation&lt;/strong&gt; 🌍: Assigns class-wise masks to all objects.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance Segmentation&lt;/strong&gt; 🔢: Identifies multiple instances of the same class separately.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Panoptic Segmentation&lt;/strong&gt; 🏷: Combines semantic and instance segmentation, identifying both classes and individual instances.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
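&lt;p&gt;The difference between the three tasks is easiest to see in the shape of their outputs. Below is a minimal, hypothetical sketch (a toy 4x4 image with made-up labels), not real model output:&lt;/p&gt;

```python
import numpy as np

# Toy 4x4 "image" containing one cat (top-left) and one dog (bottom-right).

# Classification: a single label for the whole image.
classification_output = "cat"

# Object detection: a label plus a bounding box (x, y, width, height) per object.
detection_output = [
    {"label": "cat", "box": (0, 0, 2, 2)},
    {"label": "dog", "box": (2, 2, 2, 2)},
]

# Semantic segmentation: a class id per pixel (0 = background, 1 = cat, 2 = dog).
semantic_mask = np.zeros((4, 4), dtype=int)
semantic_mask[0:2, 0:2] = 1
semantic_mask[2:4, 2:4] = 2

# Instance segmentation: one boolean mask per object instance.
instance_masks = [semantic_mask == 1, semantic_mask == 2]

print(classification_output)   # one label per image
print(len(detection_output))   # one box per detected object
print(semantic_mask.shape)     # one class per pixel
print(len(instance_masks))     # one mask per instance
```

&lt;p&gt;Panoptic segmentation would pair the per-pixel class ids with the per-instance masks.&lt;/p&gt;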

&lt;p&gt;TL;DR: where each task shows up in &lt;strong&gt;Deep Learning&lt;/strong&gt; and &lt;strong&gt;Image Processing&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Classification&lt;/strong&gt; 📌: Used in tasks like spam detection, medical diagnosis, and species identification.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object Detection&lt;/strong&gt; 🎯: Applied in self-driving cars, surveillance, and facial recognition.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Segmentation&lt;/strong&gt; ✂️: Essential for medical imaging (tumor detection), autonomous vehicles, and augmented reality.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These methods help AI &lt;strong&gt;"see"&lt;/strong&gt; and understand images more effectively! 🚀&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>computervision</category>
      <category>deeplearning</category>
      <category>learning</category>
    </item>
    <item>
      <title>🧠Generative AI - 3</title>
      <dc:creator>Abheeshta P</dc:creator>
      <pubDate>Tue, 24 Dec 2024 15:39:54 +0000</pubDate>
      <link>https://dev.to/abheeshta/generative-ai-3-1g4e</link>
      <guid>https://dev.to/abheeshta/generative-ai-3-1g4e</guid>
      <description>&lt;h2&gt;
  
  
  How Are Generative AI Models Trained? 🏋️‍♂️
&lt;/h2&gt;

&lt;p&gt;Generative AI models like GPT are trained in two main stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Unsupervised Pretraining 📚:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model is fed massive amounts of text data (e.g., 45TB of text for GPT models).&lt;/li&gt;
&lt;li&gt;The model learns patterns, language structures, grammar, semantics and general knowledge by predicting the next word/token in a sentence without labeled data.&lt;/li&gt;
&lt;li&gt;Models like GPT-3 end up with &lt;strong&gt;175 billion parameters&lt;/strong&gt; learned in this stage.&lt;/li&gt;
&lt;/ul&gt;
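&lt;p&gt;The next-word objective in stage 1 can be illustrated with the smallest possible "language model": counting which token follows which in an unlabeled toy corpus (a hypothetical sentence, standing in for the terabytes real models see):&lt;/p&gt;

```python
from collections import Counter, defaultdict

# Tiny unlabeled corpus, standing in for the ~45TB used to pretrain GPT models.
corpus = "the model learns patterns the model learns grammar the model predicts tokens"
tokens = corpus.split()

# Count how often each token follows each other token: a bigram model,
# the simplest form of next-token prediction (no labeled data needed).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Most likely next token according to the 'pretraining' counts."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))    # model
print(predict_next("model"))  # learns
```

&lt;p&gt;A real transformer replaces the count table with billions of learned parameters, but the training signal is the same: guess the next token, no labels required.&lt;/p&gt;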

&lt;p&gt;&lt;strong&gt;2. Supervised Fine-Tuning 🎯:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After pretraining, the model is fine-tuned on smaller, labeled datasets for specific tasks (e.g., summarization, sentiment analysis).&lt;/li&gt;
&lt;li&gt;Fine-tuning ensures the model generates more accurate, task-relevant outputs (e.g., text summarization, language translation).&lt;/li&gt;
&lt;/ul&gt;
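&lt;p&gt;What makes stage 2 "supervised" is the labeled feedback. This is not how GPT is actually fine-tuned, but a toy sentiment classifier with a perceptron-style update over made-up labeled examples shows the same signal at work:&lt;/p&gt;

```python
# Hand-labeled examples (1 = positive, 0 = negative): the "small labeled dataset".
labeled_data = [
    ("great movie loved it", 1),
    ("wonderful great acting", 1),
    ("terrible movie hated it", 0),
    ("awful terrible plot", 0),
]

weights = {}  # per-word weight, adjusted only when the label says we were wrong

for _ in range(10):  # a few passes over the labeled set
    for text, label in labeled_data:
        words = text.split()
        score = sum(weights.get(w, 0.0) for w in words)
        prediction = 1 if score > 0 else 0
        if prediction != label:  # supervised signal: nudge weights toward the label
            delta = 1.0 if label == 1 else -1.0
            for w in words:
                weights[w] = weights.get(w, 0.0) + delta

def classify(text):
    return 1 if sum(weights.get(w, 0.0) for w in text.split()) > 0 else 0

print(classify("great plot"))       # 1
print(classify("terrible acting"))  # 0
```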

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37hdhhvoy8ljto40twlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37hdhhvoy8ljto40twlg.png" alt="GPT-3" width="784" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📝 Stay tuned in this learning journey for more on generative AI! I'd love to discuss this topic further – special thanks to Guvi for the course!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>genai</category>
      <category>gpt3</category>
      <category>gemini</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>🧠Generative AI - 2</title>
      <dc:creator>Abheeshta P</dc:creator>
      <pubDate>Tue, 24 Dec 2024 15:30:14 +0000</pubDate>
      <link>https://dev.to/abheeshta/generative-ai-2-1d45</link>
      <guid>https://dev.to/abheeshta/generative-ai-2-1d45</guid>
      <description>&lt;h2&gt;
  
  
  Transformer Architecture in Generative AI 🤖
&lt;/h2&gt;

&lt;p&gt;The transformer architecture is the foundation of many generative AI models, including language models like GPT and BERT. It consists of two main components: the &lt;strong&gt;encoder&lt;/strong&gt; 📂 and the &lt;strong&gt;decoder&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiadvcbf2aaoyduoo1a7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiadvcbf2aaoyduoo1a7y.png" alt="Basic transformer architecture" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Components:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Encoder 🔄:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The encoder processes input data and generates context-rich representations.&lt;/li&gt;
&lt;li&gt;It consists of:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-Attention Mechanism 🧐:&lt;/strong&gt; Allows the encoder to evaluate relationships between different parts of the input. Each token can attend to every other token, capturing dependencies regardless of distance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feed Forward Layer ➡️:&lt;/strong&gt; Applies transformations to the attended data and passes it to the next encoder layer.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Decoder 🔄:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The decoder generates outputs by attending to both encoder outputs and previously generated tokens.&lt;/li&gt;
&lt;li&gt;It consists of:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Masked Self-Attention 🧐:&lt;/strong&gt; The decoder attends only to the tokens it has already generated when predicting the next one. During training it is fed the target sequence shifted by one position (so it cannot simply copy the input) and learns to produce each new token step by step from what came before.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encoder-Decoder Attention 📈:&lt;/strong&gt; Aligns decoder outputs with encoded representations to refine predictions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feed Forward Layer ➡️:&lt;/strong&gt; Further processes the data and forwards it to the next decoder layer.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc644aq8qo4ycpkvzsn2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc644aq8qo4ycpkvzsn2e.png" alt="Encoder decoder parts" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Important Concepts:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Self-Attention 🧐:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A key mechanism where each input token attends to all other tokens in the sequence.&lt;/li&gt;
&lt;li&gt;Attention scores are computed as dot products between query and key vectors derived from the token embeddings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Challenge:&lt;/strong&gt; Self-attention alone is order-agnostic, so it loses track of each token's original position.&lt;/li&gt;
&lt;/ul&gt;
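&lt;p&gt;Self-attention itself is only a few matrix products. A minimal numpy sketch with random toy weights (not a trained model):&lt;/p&gt;

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # every token scores every other token
    attn = softmax(scores, axis=-1)          # each row sums to 1
    return attn @ V, attn

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))      # 4 toy token embeddings
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape)          # (4, 8): one context-mixed vector per token
print(attn.sum(axis=-1))  # [1. 1. 1. 1.]
```

&lt;p&gt;Shuffling the rows of X simply shuffles the output rows the same way, which is exactly the lost-position problem: positional encoding exists to fix it.&lt;/p&gt;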

&lt;p&gt;&lt;strong&gt;2. Feed Forwarding ➡️:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After attention, the data is passed through a fully connected layer for further processing.&lt;/li&gt;
&lt;li&gt;In encoders, this forwards data to the next encoder layer.&lt;/li&gt;
&lt;li&gt;In decoders, it contributes to generating the final output.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Encoder-Decoder Attention 📈:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A layer in the decoder that allows it to attend to the encoder's output.&lt;/li&gt;
&lt;li&gt;This helps the decoder extract insights from the encoded input for better output generation.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Positional Encoding 📊:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To address the issue of lost positional information in self-attention, transformers use positional encoding.&lt;/li&gt;
&lt;li&gt;Positional encodings are added to input embeddings, providing context about token positions.&lt;/li&gt;
&lt;li&gt;This ensures sequential relationships are maintained, making output more coherent and human-like.&lt;/li&gt;
&lt;/ul&gt;
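&lt;p&gt;The sinusoidal scheme from the original Transformer paper is easy to generate directly: each position gets a unique pattern of sines and cosines that is simply added to the token embedding.&lt;/p&gt;

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings (Vaswani et al., 2017)."""
    pos = np.arange(seq_len)[:, None]        # positions 0..seq_len-1
    i = np.arange(d_model)[None, :]          # embedding dimensions
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle[:, 0::2])     # even dimensions use sine
    pe[:, 1::2] = np.cos(angle[:, 1::2])     # odd dimensions use cosine
    return pe

pe = positional_encoding(seq_len=6, d_model=8)
token_embeddings = np.zeros((6, 8))          # stand-in for real embeddings
model_input = token_embeddings + pe          # positions are added, not concatenated
print(pe.shape)   # (6, 8)
print(pe[0])      # position 0: [0. 1. 0. 1. 0. 1. 0. 1.]
```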

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w1bjpzc52nhdma8654s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w1bjpzc52nhdma8654s.png" alt="Detailed architecture1" width="800" height="828"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Do You Need Both Encoder and Decoder? 🤔
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;No, not always!&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Encoder-Only Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used when you don't need to generate new data but instead analyze or classify input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Examples:&lt;/strong&gt; Sentiment analysis, text classification (like BERT).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Decoder-Only Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used primarily for generative tasks where new data needs to be created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Examples:&lt;/strong&gt; Chatbots, text generation (like GPT and Gemini).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Both Encoder and Decoder:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Required when the task involves transforming an input sequence into a different output sequence, like translating between languages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Examples:&lt;/strong&gt; Machine translation (like T5 and the original Transformer model).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m45vl79x8sd5nqljrg8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m45vl79x8sd5nqljrg8.png" alt="Detailed architecture2" width="630" height="904"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Summary 📊:
&lt;/h3&gt;

&lt;p&gt;The transformer architecture's ability to capture long-range dependencies, align encoder and decoder outputs, and maintain positional context is what makes it powerful for generative AI tasks. These mechanisms together allow models to generate human-like text, translate languages, and perform various NLP tasks with high accuracy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📝 Stay tuned in this learning journey to know about GENAI training! I'd love to discuss this topic further – special thanks to Guvi for the course!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>genai</category>
      <category>gpt3</category>
      <category>gemini</category>
      <category>learning</category>
    </item>
    <item>
      <title>How does chat bot work?</title>
      <dc:creator>Abheeshta P</dc:creator>
      <pubDate>Sun, 24 Nov 2024 14:43:27 +0000</pubDate>
      <link>https://dev.to/abheeshta/how-does-chat-bot-work-358f</link>
      <guid>https://dev.to/abheeshta/how-does-chat-bot-work-358f</guid>
      <description>&lt;p&gt;A chatbot operates using a blend of natural language processing (NLP) and machine learning to understand user inputs and deliver relevant responses. &lt;/p&gt;

&lt;h3&gt;
  
  
  Steps to Train a Chatbot:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Import Corpus&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gather the required data that the chatbot will use for training, ensuring it is relevant and comprehensive.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Process the Data&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean the data by removing redundant or irrelevant entries.
&lt;/li&gt;
&lt;li&gt;Ensure the data is well-organized, clear, and beneficial for training.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Text Case Handling&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standardize the text by converting it to all uppercase or lowercase.
&lt;/li&gt;
&lt;li&gt;Address potential misinterpretations or misrepresentations to improve accuracy.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Tokenization&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Break down sentences into individual words.

&lt;ul&gt;
&lt;li&gt;Example: &lt;em&gt;"This is a blog"&lt;/em&gt; → &lt;em&gt;["This", "is", "a", "blog"]&lt;/em&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stemming&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extract the root form (stem) of each word.
&lt;/li&gt;
&lt;li&gt;Identify and group similar words across different tenses or variations.

&lt;ul&gt;
&lt;li&gt;Example: &lt;em&gt;"running" → "run"&lt;/em&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Generating Bag of Words (BoW)&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Represent each sentence as a numeric vector of word counts.
&lt;/li&gt;
&lt;li&gt;Compare these vectors with dot products to measure similarity or relationships.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;One-Hot Encoding&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Convert categorical variables into a format that machine learning algorithms can understand.
&lt;/li&gt;
&lt;li&gt;Ensure data is clean and structured for better model performance.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
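&lt;p&gt;Steps 3–6 above can be strung together into a tiny retrieval chatbot. A hypothetical pure-Python sketch (made-up corpus, deliberately crude stemming):&lt;/p&gt;

```python
# Stored question -> answer pairs: the chatbot's "corpus" (made up for illustration).
corpus = {
    "what is a chatbot": "A chatbot is a program that talks with users.",
    "how is a chatbot trained": "It is trained on cleaned, tokenized text data.",
}

def preprocess(text):
    # case handling + tokenization + punctuation stripping
    tokens = [t.strip("?.!,") for t in text.lower().split()]
    # very crude "stemming": chop a trailing -ing
    return [t[:-3] if t.endswith("ing") else t for t in tokens]

vocab = sorted({w for q in corpus for w in preprocess(q)})

def bow_vector(text):
    words = preprocess(text)
    return [words.count(v) for v in vocab]   # bag-of-words counts over the vocab

def answer(question):
    qv = bow_vector(question)
    # dot product between count vectors as the similarity score
    best = max(corpus, key=lambda q: sum(a * b for a, b in zip(qv, bow_vector(q))))
    return corpus[best]

print(answer("What is a CHATBOT?"))
```

&lt;p&gt;Real chatbots replace the count vectors with learned embeddings, but the pipeline shape is the same: normalize, tokenize, stem, vectorize, compare.&lt;/p&gt;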

&lt;h2&gt;
  
  
  Architecture:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshuph3j90k0yyjdxgk71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshuph3j90k0yyjdxgk71.png" alt="chatbot architecture" width="752" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🧠Generative AI - 1</title>
      <dc:creator>Abheeshta P</dc:creator>
      <pubDate>Sat, 02 Nov 2024 08:39:13 +0000</pubDate>
      <link>https://dev.to/abheeshta/generative-ai-1-ano</link>
      <guid>https://dev.to/abheeshta/generative-ai-1-ano</guid>
      <description>&lt;h3&gt;
  
  
  🤖 What is AI?
&lt;/h3&gt;

&lt;p&gt;Artificial Intelligence (AI) is a branch of computer science aimed at enabling machines to think, act, and behave like humans. Through AI, computers can learn from data, recognize patterns, and make informed decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✨ What is Generative AI?
&lt;/h3&gt;

&lt;p&gt;Generative AI is an advanced form of AI trained on vast datasets, often containing trillions of data points. It learns complex patterns, features, and relationships, allowing it to create new content, like text, images, or even music. It goes well beyond traditional deep learning (itself a subset of machine learning) in its ability to generate diverse, contextually relevant outputs.&lt;/p&gt;

&lt;h3&gt;
  
  
  🌍 Where is Generative AI Used?
&lt;/h3&gt;

&lt;p&gt;Generative AI has become integral to various applications, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌐 Text translation&lt;/li&gt;
&lt;li&gt;🎨 Image, audio, and video generation&lt;/li&gt;
&lt;li&gt;🗣️ Advanced Natural Language Processing (NLP) tasks, such as summarization and conversational AI&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔍 How is Generative AI Different from Deep Learning?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scale of Data and Parameters&lt;/strong&gt;: Most deep learning models operate with millions of parameters, while large generative AI models have billions of parameters and are trained on trillions of tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context Range&lt;/strong&gt;: Deep learning models, like RNNs and LSTMs, are designed to handle shorter sequences. They’re useful in sequential tasks but store only limited history, such as a brief context in NLP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk55xea0uxur6p2v70tou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk55xea0uxur6p2v70tou.png" alt="Deep Learning Model Diagram" width="479" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Long-Range Understanding&lt;/strong&gt;: Generative AI models, on the other hand, are capable of retaining extensive context, making them highly accurate and context-aware.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohtwq8flevm5d8ylsyu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohtwq8flevm5d8ylsyu6.png" alt="Generative AI Model Diagram" width="443" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  ⚙️ Architecture of Generative AI
&lt;/h3&gt;

&lt;p&gt;Generative AI models primarily rely on &lt;strong&gt;transformer architecture&lt;/strong&gt;. Transformers process inputs in parallel, efficiently managing long-range dependencies. This approach enables generative AI to produce coherent, context-rich outputs across diverse applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📝 Stay tuned in this learning journey for more on generative AI architecture! I'd love to discuss this topic further – special thanks to &lt;strong&gt;Guvi&lt;/strong&gt; for the course!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>learninginpublic</category>
      <category>machinelearning</category>
      <category>codenewbie</category>
    </item>
  </channel>
</rss>
