<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rashmi Roy</title>
    <description>The latest articles on DEV Community by Rashmi Roy (@rashmi_roy_447a69fec6d340).</description>
    <link>https://dev.to/rashmi_roy_447a69fec6d340</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3010572%2F14fdfee3-d839-4f48-be92-641caf6925eb.jpg</url>
      <title>DEV Community: Rashmi Roy</title>
      <link>https://dev.to/rashmi_roy_447a69fec6d340</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rashmi_roy_447a69fec6d340"/>
    <language>en</language>
    <item>
      <title>How Transformer Models Actually Work</title>
      <dc:creator>Rashmi Roy</dc:creator>
      <pubDate>Wed, 08 Apr 2026 03:19:47 +0000</pubDate>
      <link>https://dev.to/rashmi_roy_447a69fec6d340/how-transformer-models-actually-work-23h9</link>
      <guid>https://dev.to/rashmi_roy_447a69fec6d340/how-transformer-models-actually-work-23h9</guid>
      <description>&lt;p&gt;If you’ve been hearing about GPT, LLMs, or AI models everywhere and wondering &lt;em&gt;“what’s actually happening under the hood?”&lt;/em&gt; — this article is for you.&lt;/p&gt;

&lt;p&gt;Let’s break down transformer models in the simplest way possible, without heavy math or jargon.&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 The Big Idea
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;transformer model&lt;/strong&gt; is a type of neural network designed to understand and generate language by looking at &lt;strong&gt;relationships between words in a sentence&lt;/strong&gt; — all at once.&lt;/p&gt;

&lt;p&gt;Unlike older models that read text &lt;strong&gt;word by word&lt;/strong&gt;, transformers read &lt;strong&gt;the entire sentence simultaneously&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;👉 That’s the core superpower.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 Step 1: Turning Words into Numbers (Embeddings)
&lt;/h2&gt;

&lt;p&gt;Computers don’t understand words — they understand numbers.&lt;/p&gt;

&lt;p&gt;So the first step is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Convert each word into a &lt;strong&gt;vector (a list of numbers)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"I love AI"
↓
[I] → [0.2, 0.8, ...]
[love] → [0.9, 0.1, ...]
[AI] → [0.7, 0.6, ...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These vectors capture meaning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"king" and "queen" will have similar vectors&lt;/li&gt;
&lt;li&gt;"cat" and "car" will be very different&lt;/li&gt;
&lt;/ul&gt;
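&lt;p&gt;A tiny sketch of that idea, with made-up 2-D vectors (real models learn vectors with hundreds of dimensions), measuring closeness with cosine similarity:&lt;/p&gt;

```python
import math

# Made-up 2-D embeddings, purely for illustration.
vectors = {
    "king":  [0.90, 0.80],
    "queen": [0.85, 0.82],
    "cat":   [0.10, 0.90],
    "car":   [0.80, 0.05],
}

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    # Cosine similarity: values near 1.0 mean "pointing the same way".
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (norm(a) * norm(b))

print(cosine(vectors["king"], vectors["queen"]))  # close to 1.0
print(cosine(vectors["cat"], vectors["car"]))     # much smaller
```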




&lt;h2&gt;
  
  
  🔍 Step 2: Understanding Context with Attention
&lt;/h2&gt;

&lt;p&gt;This is the &lt;strong&gt;heart of transformers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of reading left to right, the model asks:&lt;/p&gt;

&lt;p&gt;👉 &lt;em&gt;“Which words in this sentence are important for understanding each word?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"The animal didn’t cross the road because it was tired"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What does &lt;strong&gt;“it”&lt;/strong&gt; refer to?&lt;/p&gt;

&lt;p&gt;The model uses &lt;strong&gt;attention&lt;/strong&gt; to connect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"it" → "animal" (not "road")&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How Attention Works (Conceptually)
&lt;/h3&gt;

&lt;p&gt;For every word:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It looks at all other words&lt;/li&gt;
&lt;li&gt;Assigns importance scores&lt;/li&gt;
&lt;li&gt;Builds a richer understanding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every word is having a conversation with every other word.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🔁 Step 3: Self-Attention (The Magic Layer)
&lt;/h2&gt;

&lt;p&gt;This process is called &lt;strong&gt;self-attention&lt;/strong&gt; because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The sentence is paying attention to &lt;em&gt;itself&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each word gets updated based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Its own meaning&lt;/li&gt;
&lt;li&gt;Context from other words&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So after attention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Words are no longer isolated&lt;/li&gt;
&lt;li&gt;They become &lt;strong&gt;context-aware&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
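&lt;p&gt;Here is a minimal self-attention sketch in plain Python. The embeddings are made up, and it skips the learned query/key/value projections that real transformers apply first:&lt;/p&gt;

```python
import math

# Minimal self-attention: every word scores every other word, the scores
# become weights via softmax, and each word's new vector is a weighted
# sum of all the vectors in the sentence.
words = ["the", "animal", "was", "tired"]
x = [[1.0, 0.0], [0.9, 0.4], [0.0, 1.0], [0.8, 0.5]]  # made-up embeddings

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

output = []
for qi in x:                                        # each word...
    weights = softmax([dot(qi, kj) for kj in x])    # ...attends to all words
    new_vec = [sum(w * vec[d] for w, vec in zip(weights, x))
               for d in range(len(qi))]
    output.append(new_vec)

for word, vec in zip(words, output):
    print(word, vec)  # each vector is now blended with its context
```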




&lt;h2&gt;
  
  
  🧩 Step 4: Multi-Head Attention
&lt;/h2&gt;

&lt;p&gt;Instead of doing attention once, transformers do it &lt;strong&gt;multiple times in parallel&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each “head” focuses on different things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Grammar&lt;/li&gt;
&lt;li&gt;Meaning&lt;/li&gt;
&lt;li&gt;Relationships&lt;/li&gt;
&lt;li&gt;Position&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 This is called &lt;strong&gt;multi-head attention&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think of it like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One head looks at subject-verb relation&lt;/li&gt;
&lt;li&gt;Another looks at sentiment&lt;/li&gt;
&lt;li&gt;Another looks at long-distance dependencies&lt;/li&gt;
&lt;/ul&gt;
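&lt;p&gt;Mechanically, "heads" just means slicing each vector into chunks, running attention on each chunk independently, then concatenating the results. A sketch of the split step (the sizes here are invented):&lt;/p&gt;

```python
# Split a 4-dim vector into 2 heads of 2 dims each. A real model might
# split a 768-dim vector into 12 heads of 64 dims.
def split_heads(vec, n_heads):
    size = len(vec) // n_heads
    return [vec[i * size:(i + 1) * size] for i in range(n_heads)]

v = [0.2, 0.8, 0.5, 0.1]
print(split_heads(v, 2))  # [[0.2, 0.8], [0.5, 0.1]]
```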




&lt;h2&gt;
  
  
  📍 Step 5: Positional Encoding
&lt;/h2&gt;

&lt;p&gt;Since transformers read everything at once, they need to know:&lt;/p&gt;

&lt;p&gt;👉 &lt;em&gt;“What is the order of words?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So we add &lt;strong&gt;positional encoding&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Special numbers added to each word vector&lt;/li&gt;
&lt;li&gt;Helps the model understand sequence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"dog bites man" ≠ "man bites dog"&lt;/li&gt;
&lt;/ul&gt;
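&lt;p&gt;The original transformer paper used fixed sine/cosine patterns, one per position, added element-wise to each word's vector. A small sketch:&lt;/p&gt;

```python
import math

# Sinusoidal positional encoding: each position gets a unique pattern
# of sines (even indices) and cosines (odd indices), which is added to
# the word's embedding so the model can tell positions apart.
def positional_encoding(position, dim):
    pe = []
    for i in range(dim):
        angle = position / (10000 ** (2 * (i // 2) / dim))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

print(positional_encoding(0, 4))  # position 0: [0.0, 1.0, 0.0, 1.0]
print(positional_encoding(1, 4))  # position 1: a different pattern
```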




&lt;h2&gt;
  
  
  🏗️ Step 6: Feedforward Layers
&lt;/h2&gt;

&lt;p&gt;After attention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The data goes through simple neural network layers&lt;/li&gt;
&lt;li&gt;These refine the understanding further&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Processing the “insights” gathered from attention&lt;/p&gt;
&lt;/blockquote&gt;
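&lt;p&gt;Concretely, each word's vector passes through two small linear layers with a nonlinearity in between. The weights below are invented just to show the shape of the computation; real models learn them:&lt;/p&gt;

```python
# Position-wise feedforward sketch: expand, apply ReLU, project back.
def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, weights):
    # Each row of weights produces one output dimension.
    return [sum(w * x for w, x in zip(row, v)) for row in weights]

w1 = [[0.5, -0.3], [0.8, 0.1], [-0.2, 0.9]]   # 2 dims up to 3 dims
w2 = [[0.4, 0.7, -0.5], [0.3, -0.6, 0.2]]     # 3 dims back to 2 dims

vec = [0.7, 0.6]                               # one word's vector
print(linear(relu(linear(vec, w1)), w2))
```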




&lt;h2&gt;
  
  
  🔄 Step 7: Stacking Layers
&lt;/h2&gt;

&lt;p&gt;A transformer is not just one layer — it’s many layers stacked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input → Attention → Feedforward → Attention → Feedforward → ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each layer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Builds deeper understanding&lt;/li&gt;
&lt;li&gt;Refines context&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ✍️ Step 8: Generating Output (For GPT-like Models)
&lt;/h2&gt;

&lt;p&gt;When generating text:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The model looks at previous words&lt;/li&gt;
&lt;li&gt;Predicts the most likely next word&lt;/li&gt;
&lt;li&gt;Repeats the process&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: "AI is"
Prediction → "powerful"
Next → "AI is powerful"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This continues, one word at a time, until the model produces a stop signal or reaches a length limit.&lt;/p&gt;
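&lt;p&gt;The loop above can be sketched with a made-up probability table standing in for the neural network:&lt;/p&gt;

```python
# Toy next-word prediction loop: pick the most likely next word from an
# invented probability table, append it, repeat. GPT-style models run
# the same loop, with a neural network producing the probabilities.
next_word_probs = {
    "AI":       {"is": 0.9, "was": 0.1},
    "is":       {"powerful": 0.7, "new": 0.3},
    "powerful": {"[END]": 1.0},
}

sentence = ["AI"]
while sentence[-1] in next_word_probs:
    candidates = next_word_probs[sentence[-1]]
    best = max(candidates, key=candidates.get)   # greedy choice
    if best == "[END]":
        break
    sentence.append(best)

print(" ".join(sentence))  # AI is powerful
```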




&lt;h2&gt;
  
  
  ⚡ Why Transformers Are So Powerful
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ Understand context better than older models&lt;/li&gt;
&lt;li&gt;✅ Handle long sentences efficiently&lt;/li&gt;
&lt;li&gt;✅ Train in parallel (faster than RNNs)&lt;/li&gt;
&lt;li&gt;✅ Scale massively (billions of parameters)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s why they power:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chatbots (like ChatGPT)&lt;/li&gt;
&lt;li&gt;Translation systems&lt;/li&gt;
&lt;li&gt;Code generators&lt;/li&gt;
&lt;li&gt;Search engines&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧠 Simple Analogy
&lt;/h2&gt;

&lt;p&gt;Think of a transformer like a &lt;strong&gt;smart meeting room&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every word = a person&lt;/li&gt;
&lt;li&gt;Everyone listens to everyone else&lt;/li&gt;
&lt;li&gt;Important voices get more attention&lt;/li&gt;
&lt;li&gt;Multiple discussions happen in parallel&lt;/li&gt;
&lt;li&gt;Final decision = best understanding of the whole conversation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎯 Final Takeaway
&lt;/h2&gt;

&lt;p&gt;A transformer model:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reads all words together → figures out relationships → builds context → predicts meaningful output&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No magic — just &lt;strong&gt;attention, layers, and lots of training data&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  💬 Closing Thought
&lt;/h2&gt;

&lt;p&gt;You don’t need to memorize equations to understand transformers.&lt;/p&gt;

&lt;p&gt;If you remember just one thing:&lt;br&gt;
👉 &lt;strong&gt;“Transformers understand language by learning how words relate to each other.”&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;If you're building AI products or exploring LLMs, understanding this foundation will give you a huge edge 🚀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>gpt3</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>AWS Key Services Every Developer Should Know — A Practical Guide</title>
      <dc:creator>Rashmi Roy</dc:creator>
      <pubDate>Thu, 11 Dec 2025 03:37:00 +0000</pubDate>
      <link>https://dev.to/rashmi_roy_447a69fec6d340/aws-key-services-every-developer-should-know-a-practical-guide-47jh</link>
      <guid>https://dev.to/rashmi_roy_447a69fec6d340/aws-key-services-every-developer-should-know-a-practical-guide-47jh</guid>
      <description>&lt;p&gt;This article summarizes the essential AWS services you must know to build, deploy, secure, and scale modern applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  Compute
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EC2&lt;/strong&gt; — VM instances with OS-level control.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda&lt;/strong&gt; — Event-driven serverless compute.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS/EKS/Fargate&lt;/strong&gt; — Container orchestration and serverless container runtime.&lt;/li&gt;
&lt;/ul&gt;
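&lt;p&gt;To get a feel for the Lambda programming model, here is a minimal Python handler. Only the &lt;code&gt;handler(event, context)&lt;/code&gt; signature is the Lambda contract; the event shape and greeting are invented for illustration:&lt;/p&gt;

```python
# Minimal AWS Lambda-style handler. Lambda invokes handler(event, context):
# event is the JSON payload, context carries runtime metadata (unused here).
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local smoke test, invoking it the way Lambda would.
print(handler({"name": "dev"}, None))
```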

&lt;h2&gt;
  
  
  Storage
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;S3&lt;/strong&gt; — Object storage for binaries, logs, backups.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EBS&lt;/strong&gt; — Block-level storage for EC2.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EFS&lt;/strong&gt; — Distributed NFS file system.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Databases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RDS&lt;/strong&gt; — Managed SQL engines (MySQL, PostgreSQL, etc.).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB&lt;/strong&gt; — Fully managed NoSQL key-value database.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ElastiCache&lt;/strong&gt; — Redis/Memcached for caching.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Networking
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VPC&lt;/strong&gt; — Isolated cloud network environment.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route53&lt;/strong&gt; — DNS and traffic routing.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Gateway&lt;/strong&gt; — REST/WebSocket interface for serverless and microservices.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IAM&lt;/strong&gt; — Identity &amp;amp; access management.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KMS&lt;/strong&gt; — Encryption key management.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets Manager&lt;/strong&gt; — Secure credential storage.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shield/WAF&lt;/strong&gt; — DDoS and app-layer protection.&lt;/li&gt;
&lt;/ul&gt;
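&lt;p&gt;IAM policies are plain JSON. As a sketch, here is a minimal read-only policy for a single S3 bucket (the bucket name is a placeholder):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```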

&lt;h2&gt;
  
  
  DevOps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CodePipeline&lt;/strong&gt; — CI/CD orchestration.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch&lt;/strong&gt; — Monitoring and observability.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudTrail&lt;/strong&gt; — Audit log of API activity across the AWS account.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Mastering these services provides a solid foundation for building scalable cloud-native applications on AWS.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
    </item>
    <item>
      <title>10 React.js Interview Questions That Stumped Me (With Answers)</title>
      <dc:creator>Rashmi Roy</dc:creator>
      <pubDate>Sat, 12 Apr 2025 04:36:34 +0000</pubDate>
      <link>https://dev.to/rashmi_roy_447a69fec6d340/10-reactjs-interview-questions-that-stumped-me-with-answers-24mf</link>
      <guid>https://dev.to/rashmi_roy_447a69fec6d340/10-reactjs-interview-questions-that-stumped-me-with-answers-24mf</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfujvcubiv15hsxd8qm7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfujvcubiv15hsxd8qm7.jpg" alt="React js code interview" width="800" height="533"&gt;&lt;/a&gt;Interviews are like open-book exams where the book is &lt;em&gt;in your brain&lt;/em&gt; — and sometimes, mind blanked out. 😅&lt;/p&gt;

&lt;p&gt;After 7+ years in frontend, I thought I’d seen it all… until these React.js questions made me pause.&lt;/p&gt;

&lt;p&gt;I’m sharing them with clear explanations so you don’t get stumped like I did. Let’s go! 👇&lt;/p&gt;




&lt;h3&gt;
  
  
  1. 🔄 What's the difference between &lt;code&gt;useEffect(() =&amp;gt; {}, [])&lt;/code&gt; and &lt;code&gt;useLayoutEffect(() =&amp;gt; {}, [])&lt;/code&gt;?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Hint&lt;/strong&gt;: Timing matters.&lt;/p&gt;

&lt;p&gt;✅ &lt;code&gt;useEffect&lt;/code&gt; runs &lt;em&gt;after&lt;/em&gt; the DOM paints.&lt;br&gt;
✅ &lt;code&gt;useLayoutEffect&lt;/code&gt; runs &lt;em&gt;before&lt;/em&gt; paint, so it can block rendering.&lt;br&gt;
Use &lt;code&gt;useLayoutEffect&lt;/code&gt; only when you need to measure or mutate the DOM before the user sees it.&lt;/p&gt;


&lt;h3&gt;
  
  
  2. 🧠 Why is &lt;code&gt;key&lt;/code&gt; prop important in lists?
&lt;/h3&gt;

&lt;p&gt;It helps React identify which items have changed, been added, or been removed. Without stable keys, React may re-render or re-create list items unnecessarily.&lt;/p&gt;


&lt;h3&gt;
  
  
  3. ⚡ Can you explain React reconciliation?
&lt;/h3&gt;

&lt;p&gt;React compares the new virtual DOM with the previous one. It tries to &lt;strong&gt;minimally update&lt;/strong&gt; the actual DOM using keys and diffing.&lt;/p&gt;


&lt;h3&gt;
  
  
  4. 🌀 What is a closure, and how can it cause bugs in React hooks?
&lt;/h3&gt;

&lt;p&gt;A closure captures variables from the scope where it was created. In hooks, a callback created during an earlier render can hold on to outdated values, causing &lt;strong&gt;stale state bugs&lt;/strong&gt;, especially in &lt;code&gt;setInterval&lt;/code&gt; callbacks or event handlers.&lt;/p&gt;
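&lt;p&gt;A minimal sketch (plain JavaScript, no React, names invented) of how a snapshot-style closure goes stale, the same trap behind &lt;code&gt;useEffect&lt;/code&gt; + &lt;code&gt;setInterval&lt;/code&gt;:&lt;/p&gt;

```javascript
// Each "render" takes a fresh snapshot of state, but a callback
// registered during a render keeps the snapshot it closed over.
let state = 0;
const callbacks = [];

function render() {
  const count = state;            // snapshot, like a render's state
  callbacks.push(() => count);    // "effect" closes over this snapshot
}

render();          // render #1 sees count = 0
state = 5;         // state updated elsewhere
render();          // render #2 sees count = 5

console.log(callbacks[0]());  // 0  -- stale: still render #1's value
console.log(callbacks[1]());  // 5
```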


&lt;h3&gt;
  
  
  5. ❓ What's the difference between controlled and uncontrolled components?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Controlled&lt;/strong&gt;: React manages the input state (&lt;code&gt;value&lt;/code&gt;, &lt;code&gt;onChange&lt;/code&gt;)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uncontrolled&lt;/strong&gt;: DOM handles it via &lt;code&gt;ref&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  6. 🧪 How would you test a component using React Testing Library?
&lt;/h3&gt;

&lt;p&gt;Use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;MyComponent&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/hello/i&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;toBeInTheDocument&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. 🚀 What is code-splitting, and how do you implement it in React?
&lt;/h3&gt;

&lt;p&gt;Using &lt;code&gt;React.lazy()&lt;/code&gt; and &lt;code&gt;Suspense&lt;/code&gt; to load components only when needed, improving performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. ⚙️ What happens if you update state inside useEffect?
&lt;/h3&gt;

&lt;p&gt;It can cause re-renders. Be cautious of infinite loops if dependencies aren't correctly defined.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. 🛑 What are some common mistakes with useState?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Setting the initial value incorrectly&lt;/li&gt;
&lt;li&gt;Forgetting that state updates are asynchronous&lt;/li&gt;
&lt;li&gt;Updating state based on the current state without the callback form: &lt;code&gt;setCount(prev =&amp;gt; prev + 1);&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  10. 👻 Why do people still use Redux if we have Context API?
&lt;/h3&gt;

&lt;p&gt;Redux offers predictable state, middleware support, and better dev tools. Context is fine for low-frequency updates, but not large-scale state.&lt;/p&gt;

&lt;h3&gt;
  
  
  💡 Final Thought
&lt;/h3&gt;

&lt;p&gt;Interviews test your thinking more than your syntax. These questions helped me level up, and I hope they help you too.&lt;/p&gt;

&lt;p&gt;What’s the trickiest React question you’ve faced? Drop it below! ⬇️&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
