<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nilavukkarasan R</title>
    <description>The latest articles on DEV Community by Nilavukkarasan R (@rnilav).</description>
    <link>https://dev.to/rnilav</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3772087%2Fb8707010-ec72-4401-bf94-a0595c046a4d.jpg</url>
      <title>DEV Community: Nilavukkarasan R</title>
      <link>https://dev.to/rnilav</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rnilav"/>
    <language>en</language>
    <item>
      <title>The Transformer: The Architecture Behind Modern AI</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Thu, 07 May 2026 02:44:04 +0000</pubDate>
      <link>https://dev.to/rnilav/the-transformer-the-architecture-behind-modern-ai-36ia</link>
      <guid>https://dev.to/rnilav/the-transformer-the-architecture-behind-modern-ai-36ia</guid>
      <description>&lt;p&gt;&lt;em&gt;"Attention Is All You Need."&lt;/em&gt;   -- &lt;strong&gt;Vaswani, 2017&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Path So Far
&lt;/h2&gt;

&lt;p&gt;We started with a single neuron drawing a line. Added hidden layers to bend it. Taught the network to learn its own weights. Scaled training with mini-batches and Adam. Fought overfitting with dropout. Built filters for images. Gave networks memory for sequences. Replaced compression with attention.&lt;/p&gt;

&lt;p&gt;Each architecture solved a problem the previous one couldn't. Each carried forward what worked and discarded what didn't.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y4r1xia0g9nxcpuknw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y4r1xia0g9nxcpuknw1.png" alt="Architecture evolution: MLP → CNN → RNN → Transformer" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Personal Connect
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/rnilav/attention-mechanisms-stop-compressing-start-looking-back-1bol"&gt;Attention&lt;/a&gt; post, I described how I used to compose sentences in Tamil first, then translate word by word into English. It was slow, sequential, and lossy. When I finally started thinking directly in English, everything changed. I wasn't translating anymore. I was processing meaning, grammar, and context all at once, shaped by everything I'd read and heard before.&lt;/p&gt;

&lt;p&gt;That shift, from sequential translation to parallel understanding, is exactly what the Transformer does. And the core idea is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;P(next token | all previous tokens)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What is the probability of the next token, given everything that came before? That single equation is the foundation of GPT, Claude, and every modern language model. Everything you produce is shaped by your past and present context, conscious or not. The Transformer makes that idea computational.&lt;/p&gt;

&lt;h2&gt;
  
  
  Breaking Down the Decoder
&lt;/h2&gt;

&lt;p&gt;The decoder-only Transformer (used by GPT, Claude, and most generative AI models) is a stack of identical layers. Each layer has four components, and we've seen every one of them before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Token + Position Embedding:&lt;/strong&gt; Each token becomes a vector (say, 128 numbers). Since attention doesn't care about order, a position signal is added. Token "slow" at position 3 gets a different embedding than "slow" at position 6. The model learns that position matters.&lt;/p&gt;
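
&lt;p&gt;A minimal NumPy sketch of that step (the table sizes and token ids here are illustrative, not the playground's actual code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

vocab_size, max_len, d_model = 1000, 64, 128

# learned lookup tables; random initialization stands in for trained values
token_table = np.random.randn(vocab_size, d_model) * 0.02
pos_table   = np.random.randn(max_len, d_model) * 0.02

def embed(token_ids):
    # the same token id at a different position gets a different combined vector
    positions = np.arange(len(token_ids))
    return token_table[token_ids] + pos_table[positions]

x = embed(np.array([42, 7, 42]))   # e.g. "slow" ... "slow" at positions 0 and 2
print(x.shape)                     # (3, 128)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;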

&lt;p&gt;&lt;strong&gt;Masked Multi-Head Self-Attention:&lt;/strong&gt; This is the core. Every token computes how relevant every previous token is to it, then blends their information accordingly.&lt;/p&gt;

&lt;p&gt;Consider the sentence from the &lt;a href="https://dev.to/rnilav/understanding-recurrent-neural-networks-from-forgetting-to-remembering-5f7"&gt;RNN&lt;/a&gt; post: "My teacher said I was slow, but &lt;strong&gt;he&lt;/strong&gt; didn't know I was just getting started."&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;When predicting what "he" refers to:
  "My"       → low relevance (possessive, context)
  "teacher"  → high relevance (the subject — "he" refers back here)
  "said"     → low relevance (verb, not a referent)
  "I"        → medium relevance (another person in the sentence)
  "was"      → low relevance (auxiliary verb)
  "slow"     → low relevance (adjective)
  "but"      → low relevance (conjunction)
  "he"       → current position
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://dev.to/rnilav/understanding-recurrent-neural-networks-from-forgetting-to-remembering-5f7"&gt;RNN&lt;/a&gt; had to compress everything into a fixed-size hidden state and hope "teacher" survived the journey. Here, attention reaches back directly. No compression, no forgetting.&lt;/p&gt;

&lt;p&gt;The attention formula from the &lt;a href="https://dev.to/rnilav/attention-mechanisms-stop-compressing-start-looking-back-1bol"&gt;Attention&lt;/a&gt; post:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tex"&gt;&lt;code&gt;Attention(Q, K, V) = softmax(Q·Kᵀ / √d) · V
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each token generates a &lt;strong&gt;Query&lt;/strong&gt; ("what am I looking for?"), a &lt;strong&gt;Key&lt;/strong&gt; ("what do I offer?"), and a &lt;strong&gt;Value&lt;/strong&gt; ("what information do I carry?"). The dot product Q·Kᵀ scores how well each key matches the query. Softmax turns scores into weights. The weighted sum of values produces the output. The &lt;strong&gt;causal mask&lt;/strong&gt; ensures token 5 only sees tokens 1 through 4. No peeking ahead.&lt;/p&gt;
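
&lt;p&gt;Here is a small NumPy sketch of that computation, assuming a single head and toy dimensions (the projection matrices are random stand-ins for learned weights):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def masked_self_attention(x, Wq, Wk, Wv):
    # x: (seq_len, d) token vectors; Wq, Wk, Wv: (d, d) learned projections
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # query-key match scores
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -1e9                            # causal mask: no peeking ahead
    weights = softmax(scores)                      # each row sums to 1
    return weights @ V                             # weighted blend of values

d = 8
x = np.random.randn(5, d)
out = masked_self_attention(x, np.random.randn(d, d), np.random.randn(d, d), np.random.randn(d, d))
print(out.shape)   # (5, 8): one context-blended vector per token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;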

&lt;p&gt;&lt;strong&gt;Multi-head attention&lt;/strong&gt; runs this operation multiple times in parallel with different learned projections. Conceptually similar to CNN's multiple filters: in a CNN, each filter detects a different spatial pattern (edges, textures). In a Transformer, each head detects a different relationship (grammar, coreference, meaning). Eight heads, eight perspectives, same total computation.&lt;/p&gt;
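
&lt;p&gt;A sketch of the head-splitting bookkeeping, under the same toy dimensions (reshape only; each head would apply its own learned projection upstream):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def split_heads(x, n_heads):
    # (seq_len, d_model) becomes (n_heads, seq_len, d_head):
    # eight smaller attention computations instead of one large one
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    return x.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

x = np.random.randn(6, 128)
print(split_heads(x, 8).shape)   # (8, 6, 16)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;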

&lt;p&gt;&lt;strong&gt;Add &amp;amp; LayerNorm:&lt;/strong&gt; The residual connection from Post 07. The input bypasses the attention layer and gets added back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LayerNorm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nc"&gt;Attention&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This keeps gradients alive through deep stacks. Layer normalization stabilizes the signal between layers. Without these, a 12-layer Transformer wouldn't train.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feed-Forward Network:&lt;/strong&gt; A two-layer MLP with GELU activation, applied to each position independently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nc"&gt;FFN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GELU&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="err"&gt;·&lt;/span&gt; &lt;span class="n"&gt;W&lt;/span&gt;&lt;span class="err"&gt;₁&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="err"&gt;₁&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="err"&gt;·&lt;/span&gt; &lt;span class="n"&gt;W&lt;/span&gt;&lt;span class="err"&gt;₂&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="err"&gt;₂&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where the non-linearity lives. Attention itself is a weighted sum (linear). The FFN transforms what each token learned from attention through a non-linear function, the same principle from Post 02. Without it, stacking attention layers would collapse to a single linear operation.&lt;/p&gt;

&lt;p&gt;These four components repeat N times. Each layer refines the representation. By the final layer, the vector for each token encodes its meaning in the full context of the sequence.&lt;/p&gt;

&lt;p&gt;A final linear layer followed by softmax produces the probability distribution over the next token. This last layer is intentionally linear. Its job is to project the rich representations into vocabulary space. The non-linearity has already done its work in the layers below.&lt;/p&gt;
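
&lt;p&gt;A hedged sketch of that final step (the vocabulary size and names are illustrative, and the projection matrix is a random stand-in for learned weights):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

vocab_size, d_model = 1000, 128
W_vocab = np.random.randn(d_model, vocab_size) * 0.02   # stand-in for the learned projection

h_last = np.random.randn(d_model)        # final-layer vector for the last token
logits = h_last @ W_vocab                # linear projection into vocabulary space
probs = softmax(logits)                  # P(next token | all previous tokens)
next_token = int(np.argmax(probs))       # greedy pick; real models usually sample
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;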

&lt;h2&gt;
  
  
  How It Learns
&lt;/h2&gt;

&lt;p&gt;All weights start random. The Transformer knows nothing. Training uses the same loop from this series: backprop computes gradients, Adam updates weights, dropout prevents memorization.&lt;/p&gt;

&lt;p&gt;What's different is what it learns &lt;em&gt;from&lt;/em&gt;. No labels. No human annotations. Just raw text. "Given these tokens, predict the next one." Billions of times. The model learns grammar, facts, reasoning, style, all as a side effect of next-token prediction.&lt;/p&gt;

&lt;p&gt;This is called &lt;strong&gt;self-supervised learning&lt;/strong&gt;. The training signal comes from the data itself. Every sentence is both the input and the answer. Predict the next word, check if you were right, adjust. The same try-miss-adjust loop from the &lt;a href="https://dev.to/rnilav/3-backpropagation-errors-flow-backward-knowledge-flows-forward-5320"&gt;Backpropagation&lt;/a&gt; post, at a scale that would have seemed impossible when we started with XOR.&lt;/p&gt;
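
&lt;p&gt;A sketch of how the training pairs come straight from the text itself (the token ids are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

tokens = np.array([17, 4, 92, 8, 51, 3])   # one tokenized sentence

inputs  = tokens[:-1]    # "given these tokens..."
targets = tokens[1:]     # "...predict the next one", shifted by one position

# every position is a training example: the prefix ending here, answered by the next token
for i in range(len(inputs)):
    print(inputs[:i + 1], "should predict", targets[i])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;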

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/10-transformer" rel="noopener noreferrer"&gt;playground&lt;/a&gt;. Two models pretrained on Shakespeare: a small one (112K params) and a larger one (826K params). Type a prompt like "ROMEO:" and generate text instantly. Both models are tiny, so the output will still be rough, not real Shakespeare. But compare the two side by side and you'll see the 826K model produces noticeably better structure: dialogue format, character names, verse-like line breaks. Scale matters, even at this toy level.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Series, Complete
&lt;/h2&gt;

&lt;p&gt;This series started because I was building with AI tools but didn't understand how any of it worked. Ten posts later, I understand the foundations. Not because I memorised the formulas, but because I recreated each piece, watched it work, and saw how it connects to the next. There is still plenty to learn. The journey continues.&lt;/p&gt;

&lt;p&gt;The Transformer didn't invent any of these pieces. It composed them. The genius was in what it removed, not what it added.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We've built the architecture. But architecture alone doesn't make intelligence. Training is what brings it to life: how data is prepared, how models scale, how they're fine-tuned, how they learn to follow instructions. That's a separate series.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Vaswani, A., et al. (2017). &lt;em&gt;Attention Is All You Need&lt;/em&gt;. NeurIPS.&lt;br&gt;
Radford, A., et al. (2018). &lt;em&gt;Improving Language Understanding by Generative Pre-Training&lt;/em&gt;. (GPT-1)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/10-transformer" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gpt</category>
      <category>transformer</category>
      <category>llm</category>
      <category>ai</category>
    </item>
    <item>
      <title>Attention Mechanisms: Stop Compressing, Start Looking Back</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Sun, 19 Apr 2026 05:32:31 +0000</pubDate>
      <link>https://dev.to/rnilav/attention-mechanisms-stop-compressing-start-looking-back-1bol</link>
      <guid>https://dev.to/rnilav/attention-mechanisms-stop-compressing-start-looking-back-1bol</guid>
      <description>&lt;p&gt;&lt;em&gt;"The art of being wise is the art of knowing what to overlook."&lt;/em&gt; &lt;br&gt;
--&lt;strong&gt;William James&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Three Problems LSTM Didn't Solve
&lt;/h2&gt;

&lt;p&gt;LSTMs gave networks memory. But I didn't fully understand what was still missing until I thought about my own experience learning English.&lt;/p&gt;

&lt;p&gt;I studied in &lt;strong&gt;Tamil medium&lt;/strong&gt; all the way through school. English was a subject, not a language I lived in. When I started my first job 20 years ago, I had to learn to actually speak it and write it. Client emails. Professional communication.&lt;/p&gt;

&lt;p&gt;My strategy: compose the sentence in Tamil first, then &lt;strong&gt;translate word by word&lt;/strong&gt; into English. It worked for simple things. It broke down in three specific ways. Those three breakdowns map exactly onto the three problems attention was built to solve.&lt;/p&gt;
&lt;h2&gt;
  
  
  Problem 1: The Compressed Summary
&lt;/h2&gt;

&lt;p&gt;Long emails broke me. I'd compose a full paragraph in Tamil mentally, then try to hold it all in my head while translating into English. By the third sentence, the first one had blurred. I'd lose the subject I'd introduced. The English output would drift from the original Tamil thought.&lt;/p&gt;

&lt;p&gt;The problem: I was trying to carry a &lt;strong&gt;compressed summary&lt;/strong&gt; of a long paragraph in working memory, and that summary wasn't big enough.&lt;/p&gt;

&lt;p&gt;That's exactly what an RNN encoder does. It reads the entire input and compresses it into a &lt;strong&gt;single fixed-size vector&lt;/strong&gt;. The decoder uses only that compressed summary. For short sentences, fine. For long ones, something always gets lost.&lt;/p&gt;

&lt;p&gt;The fix (Bahdanau): &lt;strong&gt;don't compress&lt;/strong&gt;. Keep every hidden state the encoder produced, one per input word. Let the decoder &lt;strong&gt;look back&lt;/strong&gt; at any of them when generating each output word.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Without attention:  decoder sees only h_final (compressed summary)
With attention:     decoder sees h₁, h₂, ..., hₙ and picks what it needs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem 2: Word Order
&lt;/h2&gt;

&lt;p&gt;Tamil is &lt;strong&gt;verb-final&lt;/strong&gt;. "Can you send the report by tomorrow?" in Tamil is roughly "Tomorrow-by that report send can-you?" I'd start translating left to right and end up with "By tomorrow the report send" before realizing "Can you" needed to come first.&lt;/p&gt;

&lt;p&gt;Attention solves this. The decoder can look at &lt;strong&gt;any encoder position in any order&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Tamil:    நாளைக்குள்  அந்த  report-ஐ  அனுப்ப  முடியுமா
              h₁        h₂      h₃       h₄       h₅

English output → attention focus:
"Can"      → h₅  (முடியுமா — can you?)
"send"     → h₄  (அனுப்ப — send)
"the report" → h₃  (report-ஐ)
"by tomorrow" → h₁  (நாளைக்குள்)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The decoder doesn't follow the Tamil order. It follows the English order, looking back at whatever Tamil position it needs. This is what the Q/K/V formulation captures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Query (Q)&lt;/strong&gt;: what the decoder is asking for right now&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key (K)&lt;/strong&gt;: what each encoder position offers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value (V)&lt;/strong&gt;: the content retrieved when you attend to that position
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tex"&gt;&lt;code&gt;Attention(Q, K, V) = softmax(Q·Kᵀ / √d) · V
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reading it piece by piece: &lt;strong&gt;&lt;code&gt;Q·Kᵀ&lt;/code&gt;&lt;/strong&gt; computes a score between every query and every key, measuring how well what I'm asking for matches what each position offers. &lt;strong&gt;Softmax&lt;/strong&gt; turns those scores into weights that sum to 1, so the decoder distributes its focus across positions. Multiplying by &lt;strong&gt;&lt;code&gt;V&lt;/code&gt;&lt;/strong&gt; retrieves a weighted blend of the actual content. &lt;strong&gt;&lt;code&gt;√d&lt;/code&gt;&lt;/strong&gt; (where d is the dimension of the key vectors) is a scaling factor that prevents dot products from growing too large in high dimensions, which would push softmax into extreme values where gradients vanish.&lt;/p&gt;
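
&lt;p&gt;A tiny NumPy sketch of the same formula, small enough to check the pieces by hand (the vectors are random stand-ins for encoder states and a decoder query):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d = 4
K = np.random.randn(5, d)        # one key per encoder position (h1..h5)
V = np.random.randn(5, d)        # one value per encoder position
q = np.random.randn(d)           # decoder query: "what am I looking for right now?"

scores = K @ q / np.sqrt(d)      # scaled match between the query and every key
weights = softmax(scores)        # focus distribution over positions
context = weights @ V            # weighted blend of the content actually retrieved

print(weights.round(2))          # the five weights always sum to 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;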

&lt;h2&gt;
  
  
  Problem 3: Speed
&lt;/h2&gt;

&lt;p&gt;The third breakdown was about conversation. Word-by-word translation is sequential. Think in Tamil, translate, speak. Listen in English, translate back to Tamil, formulate response, translate to English, speak. For a fast-moving technical discussion, completely unworkable. By the time I'd finished translating, the conversation had moved on.&lt;/p&gt;

&lt;p&gt;The bottleneck wasn't comprehension. It was that the process was &lt;strong&gt;sequential&lt;/strong&gt;. Each step waited for the previous one.&lt;/p&gt;

&lt;p&gt;RNNs have the same problem. Step 2 waits for step 1. For 100 tokens, that's 100 sequential operations. &lt;strong&gt;Self-attention&lt;/strong&gt; breaks this entirely. Instead of processing word by word, it computes relationships between &lt;strong&gt;all positions simultaneously&lt;/strong&gt;. No sequential chain. The entire sequence processed at once.&lt;/p&gt;

&lt;p&gt;When I started thinking directly in English, the same shift happened. Grammar, meaning, context, all processed in parallel, automatically. Self-attention is the architectural version of that shift.&lt;/p&gt;

&lt;h2&gt;
  
  
  Self-Attention: Every Word Sees Every Other Word
&lt;/h2&gt;

&lt;p&gt;Consider: "The report that the client who called yesterday requested is ready."&lt;/p&gt;

&lt;p&gt;What is "ready"? The report. Which report? The one the client requested. Which client? The one who called yesterday. These connections span many positions. An RNN carries all of this through its hidden state, hoping nothing gets lost.&lt;/p&gt;

&lt;p&gt;Self-attention resolves them in &lt;strong&gt;one operation&lt;/strong&gt;. Every word attends to every other word, &lt;strong&gt;regardless of distance&lt;/strong&gt;. "Ready" looks back at "report." "Requested" looks back at "client." No sequential chain, no compression bottleneck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multiple attention heads&lt;/strong&gt; run in parallel, each learning to notice different relationships. One head tracks grammar. Another tracks what pronouns refer to. Another tracks meaning. Eight heads, eight perspectives, same computation cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://rnilav-perceptrons-to-t-09-attentionattention-playground-7cap4z.streamlit.app" rel="noopener noreferrer"&gt;playground&lt;/a&gt;. Five concept demos that follow this post's narrative. No training loops, no waiting. Every slider updates instantly because it's all matrix math.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b1to6j32rbl61ba6kj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b1to6j32rbl61ba6kj8.png" alt="Tamil to English attention alignment: naive left-to-right vs learned reordering" width="800" height="362"&gt;&lt;/a&gt;&lt;br&gt;
On the left, naive left-to-right alignment: "Can" looks at "by-tmrw," which is wrong. On the right, learned attention: "Can" jumps to "can-you?" at position 5, "send" jumps to position 4, "tomorrow" reaches back to position 1. The non-diagonal pattern is the reordering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rnilav/perceptrons-to-transformers/blob/main/09-attention/ATTENTION_MATH_DEEP_DIVE.md" rel="noopener noreferrer"&gt;ATTENTION_MATH_DEEP_DIVE&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Attention solves the bottleneck. But the architecture still has an RNN encoder underneath. It's still sequential at its core.&lt;/p&gt;

&lt;p&gt;What if we removed the RNN entirely? What if the whole architecture was just attention, stacked?&lt;/p&gt;

&lt;p&gt;That's the Transformer.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Bahdanau, D., Cho, K., &amp;amp; Bengio, Y. (2014). &lt;em&gt;Neural Machine Translation by Jointly Learning to Align and Translate&lt;/em&gt;.&lt;br&gt;
Vaswani, A., et al. (2017). &lt;em&gt;Attention Is All You Need&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/09-attention" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>selfattention</category>
      <category>ai</category>
      <category>transformer</category>
      <category>multiheadattention</category>
    </item>
    <item>
      <title>Recurrent Neural Networks: Giving Networks Memory</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Mon, 13 Apr 2026 14:43:48 +0000</pubDate>
      <link>https://dev.to/rnilav/understanding-recurrent-neural-networks-from-forgetting-to-remembering-5f7</link>
      <guid>https://dev.to/rnilav/understanding-recurrent-neural-networks-from-forgetting-to-remembering-5f7</guid>
      <description>&lt;p&gt;&lt;em&gt;"The present contains nothing more than the past, and what is found in the effect was already in the cause."&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Henri Bergson&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Everything So Far Assumed a Snapshot
&lt;/h2&gt;

&lt;p&gt;Every network we've built treats the input as a static snapshot. Feed it in, get a prediction out. The order doesn't matter. There's no before or after.&lt;/p&gt;

&lt;p&gt;That works for isolated images. A digit is a digit regardless of what came before it. But even for images, context matters in complex scenes. A round object next to a table is a plate. The same round object in the sky is the moon. CNNs detect the shape but don't understand the surrounding context.&lt;/p&gt;

&lt;p&gt;For language, the problem is even more fundamental. Text is inherently sequential. What came before changes the meaning of what comes after.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"My teacher said I was slow, but &lt;strong&gt;he&lt;/strong&gt; didn't know I was just getting started."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What does "he" refer to? The teacher. But only because you held "my teacher" in mind while reading the rest. You carried context forward, unconsciously, effortlessly.&lt;/p&gt;

&lt;p&gt;Every architecture we've built so far would fail this. None of them carry anything forward.&lt;/p&gt;
&lt;h2&gt;
  
  
  Learning to Read, Letter by Letter
&lt;/h2&gt;

&lt;p&gt;I remember learning to read. Not the fluent reading I do now. The early, effortful kind.&lt;/p&gt;

&lt;p&gt;Each letter had to be identified consciously. Then combined with the next to form a sound. Then sounds stitched into a word. Then words assembled into meaning. It was slow, sequential, and exhausting. By the time I reached the end of a long sentence, I'd often forgotten how it started.&lt;/p&gt;

&lt;p&gt;That's a vanilla RNN. It processes sequences one step at a time, maintaining a hidden state, a running summary of everything seen so far:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;At each step t:
  hidden(t) = tanh( W_h × hidden(t-1) + W_x × input(t) )
  output(t) = W_o × hidden(t)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The hidden state is the memory. It blends the new input with what came before. The same weights are reused at every step. One set of weights, applied repeatedly across time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;h(0) ──► h(1) ──► h(2) ──► h(3) ──► ...
  ▲         ▲         ▲         ▲
  │         │         │         │
x(0)      x(1)      x(2)      x(3)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works for short sequences. Just like the early reader who handles a short word fine but loses the thread of a long sentence.&lt;/p&gt;
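
&lt;p&gt;A minimal sketch of that loop (dimensions and weights here are toy values, not the playground model):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

d_in, d_hidden = 10, 16
W_x = np.random.randn(d_hidden, d_in) * 0.1
W_h = np.random.randn(d_hidden, d_hidden) * 0.1

def rnn_forward(inputs):
    h = np.zeros(d_hidden)                      # empty memory before the first word
    for x in inputs:                            # one step per word, strictly in order
        h = np.tanh(W_h @ h + W_x @ x)          # blend the old summary with the new input
    return h                                    # running summary of everything seen

sentence = [np.random.randn(d_in) for _ in range(8)]
print(rnn_forward(sentence).shape)              # (16,)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;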

&lt;h2&gt;
  
  
  The Long Sentence Problem
&lt;/h2&gt;

&lt;p&gt;Training uses backpropagation unrolled across time steps. And here's where the familiar problem returns: the vanishing gradient from &lt;a href="https://dev.to/rnilav/understanding-internal-covariate-shift-and-residual-connections-beyond-activation-functions-and-2c8"&gt;Post 07&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For a sequence of 50 words, the gradient gets multiplied by the weight matrix at each step backward. That's 50 multiplications. The gradient reaching step 1 is effectively zero. The network forgets the beginning of the sentence. Like my early reading days: by the end of a long sentence, I'd forgotten how it started.&lt;/p&gt;

&lt;p&gt;In Post 07, skip connections fixed vanishing gradients by adding a direct additive path. We need the same idea, but for time.&lt;/p&gt;

&lt;h2&gt;
  
  
  LSTM: Learning to Read Fluently
&lt;/h2&gt;

&lt;p&gt;Think about what changes when reading becomes fluent. You stop processing letter by letter. You chunk into words, phrases, meaning. More importantly, you become selective. You don't hold every word in memory with equal weight. You retain what matters: the subject, the tension, the unresolved question. You discard the filler.&lt;/p&gt;

&lt;p&gt;That selectivity is what the Long Short-Term Memory network introduced.&lt;/p&gt;

&lt;p&gt;An LSTM has two states: a hidden state (what it's currently working with) and a cell state (long-term memory). The cell state runs through the sequence with only small, controlled modifications, an additive path that lets gradients flow backward without decaying.&lt;/p&gt;

&lt;p&gt;Three gates control what happens to memory at each step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Forget gate:  f = sigmoid( W_f × [h(t-1), x(t)] )   → keep or erase old memory?
Input gate:   i = sigmoid( W_i × [h(t-1), x(t)] )   → is this input worth storing?
Output gate:  o = sigmoid( W_o × [h(t-1), x(t)] )   → what to expose right now?

Cell update:  c(t) = f × c(t-1)  +  i × candidate
Hidden:       h(t) = o × tanh( c(t) )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each gate outputs a value between 0 and 1. Near 1 means "yes, do this." Near 0 means "no, skip it." Consider reading "My teacher said I was slow, but &lt;strong&gt;he&lt;/strong&gt; didn't know I was just getting started." When the network reads "my teacher," the input gate fires high to store the subject. As it reads "said I was slow," the forget gate stays high to keep "teacher" in memory. When it reaches "he," the output gate surfaces "teacher" from memory to resolve the reference.&lt;/p&gt;

&lt;p&gt;All three gates are learned from data. Nobody programs when to remember or forget.&lt;/p&gt;

&lt;p&gt;The cell state update is additive: old memory plus new information. That additive structure is what saves the gradient. Instead of multiplying through a squashing function at every step, gradients flow through the cell state with far less decay. Same idea as the ResNet skip connection from Post 07, applied to time instead of depth.&lt;/p&gt;
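
&lt;p&gt;A sketch of one LSTM step following those gate equations (the weights are random stand-ins; a real cell learns them):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

d_in, d_hidden = 10, 16
d_cat = d_hidden + d_in
W_f, W_i, W_o, W_c = (np.random.randn(d_hidden, d_cat) * 0.1 for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([h_prev, x])            # [h(t-1), x(t)]
    f = sigmoid(W_f @ z)                       # forget gate: keep or erase old memory?
    i = sigmoid(W_i @ z)                       # input gate: is this worth storing?
    o = sigmoid(W_o @ z)                       # output gate: what to expose right now?
    candidate = np.tanh(W_c @ z)
    c = f * c_prev + i * candidate             # additive update: this is what saves the gradient
    h = o * np.tanh(c)
    return h, c

h, c = np.zeros(d_hidden), np.zeros(d_hidden)
h, c = lstm_step(np.random.randn(d_in), h, c)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;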

&lt;p&gt;The hidden state isn't a recording of the past. It's a compressed summary of the parts that seem relevant for predicting what comes next. Just like a fluent reader doesn't remember the exact words from three pages ago, but does remember that the detective is suspicious of the butler.&lt;/p&gt;

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://rnilav-perceptrons-to-transformers-08-rnnrnn-playground-cj8yxy.streamlit.app" rel="noopener noreferrer"&gt;playground&lt;/a&gt;. Train both a vanilla RNN and an LSTM, then pick a sentence length and watch the confidence bars update word by word. You'll see the exact step where the vanilla RNN changes its mind and the LSTM doesn't.&lt;/p&gt;

&lt;p&gt;That's the difference between letter-by-letter reading and fluent reading. One forgets. The other holds on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmog269zrbzzhyp8vwxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmog269zrbzzhyp8vwxu.png" alt="RNN vs LSTM: confidence in the subject fades for RNN, holds for LSTM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;RNNs gave networks memory. But they process sequences step by step. Step 2 waits for step 1. Step 50 waits for step 49. For a sequence of 100 tokens, that's 100 sequential operations. You can't parallelize.&lt;/p&gt;

&lt;p&gt;There's a deeper problem too. The hidden state has to compress everything seen so far into a fixed-size vector. For long sequences, that bottleneck loses information no matter how good the gating is.&lt;/p&gt;

&lt;p&gt;What if the network could look back at any part of the input directly, regardless of distance? No compression. No sequential chain.&lt;/p&gt;

&lt;p&gt;That's attention. And it's what made Transformers possible.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Hochreiter, S., &amp;amp; Schmidhuber, J. (1997). &lt;em&gt;Long Short-Term Memory&lt;/em&gt;. Neural Computation, 9(8).&lt;br&gt;
Cho, K., et al. (2014). &lt;em&gt;Learning Phrase Representations using RNN Encoder-Decoder&lt;/em&gt;. EMNLP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/08-rnn" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rnn</category>
      <category>lstm</category>
      <category>ai</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Batch Normalization and Residual Connections: Going Deeper Without Breaking</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Sat, 11 Apr 2026 14:46:14 +0000</pubDate>
      <link>https://dev.to/rnilav/understanding-internal-covariate-shift-and-residual-connections-beyond-activation-functions-and-2c8</link>
      <guid>https://dev.to/rnilav/understanding-internal-covariate-shift-and-residual-connections-beyond-activation-functions-and-2c8</guid>
      <description>&lt;p&gt;&lt;em&gt;"No man ever steps in the same river twice, for it's not the same river and he's not the same man."&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Heraclitus&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  When Deeper Made Things Worse
&lt;/h2&gt;

&lt;p&gt;CNNs rethought how networks process images. Filters, weight sharing, spatial structure. The natural next step: go deeper. Early layers detect edges, middle layers combine them into shapes, deeper layers recognize objects. To go from recognizing handwritten digits to understanding complex scenes, faces, medical scans, you need that depth. More layers, more abstraction, more power.&lt;/p&gt;

&lt;p&gt;Researchers took a 20-layer network and added 36 more layers. The 56-layer network should have been better. Instead, it was worse. Not just on test data. On training data too.&lt;/p&gt;

&lt;p&gt;That's not overfitting. Overfitting means you're too good on training data. This was the opposite: a bigger network that couldn't even fit the data it was trained on.&lt;/p&gt;

&lt;p&gt;Two things were broken. Fixing them required two ideas.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Signal Drifts
&lt;/h2&gt;

&lt;p&gt;Each layer transforms its input and passes it to the next. But as weights update during training, each layer's output distribution shifts. The next layer was calibrated for the old distribution. Now it's receiving something different.&lt;/p&gt;

&lt;p&gt;A small shift in layer 3 gets amplified by layer 4, amplified again by layer 5. After 20 layers, the signal has either exploded into enormous numbers or collapsed to near zero.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Without batch norm:
  Layer 5 output:  mean=2.3,  std=4.7
  Layer 10 output: mean=18.4, std=31.2   ← exploding
  Layer 20 output: mean=NaN              ← collapsed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every layer is chasing a moving target. That's the problem batch normalization solves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Batch Normalization
&lt;/h2&gt;

&lt;p&gt;Before each layer processes its input, normalize it to zero mean and unit variance. Then let the network re-scale with two learned parameters (γ and β) so it can undo the normalization if needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;x_norm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nf"&gt;sqrt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;variance&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;γ&lt;/span&gt; &lt;span class="err"&gt;×&lt;/span&gt; &lt;span class="n"&gt;x_norm&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;β&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now every layer starts from a stable baseline. Activations stay stable (no explosions), you can use higher learning rates, and weight initialization matters less.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;With batch norm:
  Layer 5:  mean≈0, std≈1
  Layer 10: mean≈0, std≈1
  Layer 20: mean≈0, std≈1    ← stable all the way down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One detail: batch norm computes statistics from the current mini-batch during training. At inference, there's no batch, so it uses running averages accumulated during training.&lt;/p&gt;
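
&lt;p&gt;A sketch of that training-versus-inference detail (the momentum value and shapes are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

gamma, beta = 1.0, 0.0                 # learned scale and shift (scalars here for simplicity)
running_mean, running_var = 0.0, 1.0   # accumulated during training, used at inference
momentum = 0.9

def batch_norm(x, training, eps=1e-5):
    global running_mean, running_var
    if training:
        mean, var = x.mean(), x.var()                      # statistics from this mini-batch
        running_mean = momentum * running_mean + (1 - momentum) * mean
        running_var  = momentum * running_var  + (1 - momentum) * var
    else:
        mean, var = running_mean, running_var              # no batch at inference
    x_norm = (x - mean) / np.sqrt(var + eps)
    return gamma * x_norm + beta

out = batch_norm(np.random.randn(32), training=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;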

&lt;h2&gt;
  
  
  The Gradient Vanishes
&lt;/h2&gt;

&lt;p&gt;Batch norm fixes the forward pass. But there's a second problem in the backward pass.&lt;/p&gt;

&lt;p&gt;Backpropagation multiplies derivatives together as it moves backward. Each layer contributes a factor. If those factors are consistently less than 1, the gradient shrinks at every layer. By the time it reaches layer 1 of a 50-layer network, the gradient is effectively zero.&lt;/p&gt;

&lt;p&gt;This is why the 56-layer network performed worse than the 20-layer one. The early layers weren't getting any useful gradient signal. They were frozen. It's like studying so much for an exam that your brain goes blank. More preparation, worse performance. Not because you lack knowledge, but because the signal got lost somewhere along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Residual Connections: The Shortcut
&lt;/h2&gt;

&lt;p&gt;Instead of learning a full transformation, a residual block learns the difference from identity:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Normal layer:    output = F(x)
Residual block:  output = F(x) + x     ← add the input back
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;+ x&lt;/code&gt; is the skip connection. The input bypasses the transformation and gets added to the output.&lt;/p&gt;

&lt;p&gt;Why this fixes vanishing gradients: in a normal layer, the gradient gets multiplied by F'(x) at every step. If F'(x) is 0.1, after 50 layers you're multiplying fifty 0.1s together. The gradient is gone.&lt;/p&gt;

&lt;p&gt;With a residual block, the chain rule becomes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Normal:    ∂L/∂x = ∂L/∂output × F'(x)
Residual:  ∂L/∂x = ∂L/∂output × (F'(x) + 1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;+ 1&lt;/code&gt; comes from the skip connection. Instead of multiplying values less than 1 at every layer, the skip connection keeps each factor close to 1. The gradient stays alive all the way back to layer 1.&lt;/p&gt;
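
&lt;p&gt;The arithmetic is easy to check (0.1 stands in for a small per-layer factor):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;layers = 50
plain    = 0.1 ** layers          # ~1e-50: the gradient is gone
residual = (0.1 + 1) ** layers    # ~117: the +1 from the skip keeps it alive

print(plain, residual)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;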

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70ayycxib35q3zhhfsac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70ayycxib35q3zhhfsac.png" alt="Gradient flow: normal network vs ResNet" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left, a 30-layer normal network. The gradient starts at 1.0 at the output and shrinks at every layer. By layer 1, it's 0.007. The early layers are frozen. On the right, the same depth with skip connections. The gradient stays close to 1.0 across all layers because the skip provides a direct path that doesn't decay.&lt;/p&gt;

&lt;p&gt;Before ResNets, the practical limit was around 20 layers. After, researchers trained networks with over 1,000 layers.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Everything Fits Together
&lt;/h2&gt;

&lt;p&gt;Seven posts in, it can feel like an ever-growing list of techniques. It's not. Each solved a specific failure: hidden layers for non-linearity (02), backprop for learning (03), mini-batches for scale (04), dropout for overfitting (05), convolutions for spatial data (06), batch norm for signal drift (07), skip connections for vanishing gradients (07). Each patches a gap the others can't cover. Together, they make modern deep networks trainable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We can now train deep networks on images. But images are static. What about data where order matters? Text, audio, time series, where what came before changes the meaning of what comes after.&lt;/p&gt;

&lt;p&gt;A fully connected network has no concept of sequence. A CNN has no concept of time. We need an architecture with memory.&lt;/p&gt;

&lt;p&gt;That's where recurrent neural networks come in. And the vanishing gradient problem we just solved for depth comes back for length.&lt;/p&gt;




</description>
      <category>deeplearning</category>
      <category>residualconnections</category>
      <category>batchnormalization</category>
      <category>cnn</category>
    </item>
    <item>
      <title>Convolutional Neural Networks: Teaching Networks to See</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Tue, 31 Mar 2026 13:28:17 +0000</pubDate>
      <link>https://dev.to/rnilav/from-generalists-to-specialists-the-cnn-shift-1h1d</link>
      <guid>https://dev.to/rnilav/from-generalists-to-specialists-the-cnn-shift-1h1d</guid>
      <description>&lt;p&gt;&lt;em&gt;"Vision is the art of seeing what is invisible to others."&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Jonathan Swift&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  150 Million Parameters for One Layer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/rnilav/regularization-fighting-overfitting-2pj"&gt;Post 05&lt;/a&gt; ended with a number: a 224×224 color photograph has 150,528 input values (224 × 224 × 3). Connect each of them to 1,000 neurons in a fully connected layer and you need 150 million weights. Just for the first layer. Before the network has learned anything.&lt;/p&gt;

&lt;p&gt;That's not a training problem. That's an architecture problem. Fully connected networks treat every pixel as equally related to every other pixel. A pixel in the top-left corner connects to the same neurons as a pixel in the bottom-right. But images don't work that way. Nearby pixels form edges, textures, shapes. Distant pixels are usually unrelated.&lt;/p&gt;

&lt;p&gt;We need an architecture that knows this.&lt;/p&gt;
&lt;h2&gt;
  
  
  One Small Filter, Everywhere
&lt;/h2&gt;

&lt;p&gt;Instead of connecting every pixel to every neuron, a CNN slides a small filter (say 3×3) across the image. At each position, it multiplies the 9 filter weights by the 9 pixel values underneath and sums them up, exactly like the perceptron's weighted sum from &lt;a href="https://dev.to/rnilav/understanding-perceptrons-the-foundation-of-modern-ai-2g04"&gt;Post 01&lt;/a&gt;, just applied to a small patch instead of the whole input. Then it moves one pixel over and repeats.&lt;/p&gt;

&lt;p&gt;The same 9 weights are used at every position. One filter, applied everywhere. If it learns to detect vertical edges, it detects them in the top-left, the center, and the bottom-right, all with the same 9 weights.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FC first layer:   150,528 × 1,000 = 150 million parameters
CNN first layer:  32 filters × 3×3×3 = 864 parameters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the core idea. Instead of one giant layer that sees everything, many small filters that each detect one pattern locally. The technical term is weight sharing, and it's why CNNs are practical for images.&lt;/p&gt;
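
&lt;p&gt;A minimal sketch of sliding one 3×3 filter over a grayscale image (plain loops for clarity, not an efficient implementation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)   # same 9 weights at every position
    return out

vertical_edge = np.array([[-1, 0, 1],
                          [-2, 0, 2],
                          [-1, 0, 1]])
feature_map = convolve2d(np.random.rand(28, 28), vertical_edge)
print(feature_map.shape)   # (26, 26): where vertical edges were detected
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;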

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthsxo07r7q1bzj1tvpss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthsxo07r7q1bzj1tvpss.png" alt="FC vs CNN: how they see an image" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left, the FC network flattens the image into a row of 784 pixels. Spatial structure destroyed. Every pixel connects to every neuron. On the right, the CNN keeps the image as a 2D grid and slides a small 3×3 filter across it. Same 9 weights, applied everywhere. Spatial structure preserved.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Filters Learn
&lt;/h2&gt;

&lt;p&gt;A filter is just a small grid of numbers. Backpropagation (same algorithm from &lt;a href="https://dev.to/rnilav/3-backpropagation-errors-flow-backward-knowledge-flows-forward-5320"&gt;Post 03&lt;/a&gt;) adjusts these numbers until the filter detects something useful. Nobody designs the filters by hand. The network learns them from data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Learned vertical edge filter:    Learned horizontal edge filter:
[-1  0  1]                        [-1 -2 -1]
[-2  0  2]                        [ 0  0  0]
[-1  0  1]                        [ 1  2  1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Slide a vertical edge filter across a digit and you get a feature map: a heat map showing where vertical edges were detected and how strongly. Stack 32 filters and you get 32 feature maps, 32 different views of the same image.&lt;/p&gt;

&lt;p&gt;Early layers learn edges. Deeper layers combine edges into shapes. Even deeper layers combine shapes into parts of objects. It's a hierarchy: simple patterns compose into complex ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pooling: Summarize, Don't Memorize
&lt;/h2&gt;

&lt;p&gt;After detecting features, we don't need to know exactly where they appeared. Just roughly where. Max pooling takes a small window (2×2) and keeps only the strongest activation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before pooling:        After 2×2 max pooling:
[1  3  2  4]           [6  4]
[5  6  1  2]     →     [8  7]
[3  8  4  7]
[1  2  6  3]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shrinks the spatial size (fewer parameters downstream) and makes the network tolerant to small shifts. A digit shifted 2 pixels left still produces similar pooled features. The network recognizes the pattern regardless of exact position.&lt;/p&gt;
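
&lt;p&gt;A sketch that reproduces the 2×2 example above (it assumes the input height and width are divisible by 2):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def max_pool_2x2(x):
    h, w = x.shape
    # group pixels into 2x2 windows and keep the strongest activation in each
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [3, 8, 4, 7],
              [1, 2, 6, 3]])
print(max_pool_2x2(x))   # [[6 4]
                         #  [8 7]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;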

&lt;h2&gt;
  
  
  The Full Pipeline
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input Image
    ↓
[Conv → ReLU → Pool]    ← detect edges, textures
    ↓
[Conv → ReLU → Pool]    ← combine into shapes
    ↓
Flatten
    ↓
[Fully Connected]        ← classify based on learned features
    ↓
Softmax → Prediction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Everything we built still applies. The ReLU activation adds non-linearity (Post 02). Backpropagation trains the filters (Post 03). Adam optimizes the updates (Post 04). Dropout regularizes (Post 05). CNNs didn't replace what we built. They added a smarter way to process spatial data on top of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;The original image is just pixels. After the first convolution, you see edges. After the second, you see shapes. The network is building its own understanding of the digit, layer by layer, from nothing but raw pixels and backpropagation.&lt;/p&gt;

&lt;p&gt;Open the &lt;a href="https://rnilav-perceptrons-to-transformers-06-cnncnn-playground-7vlhgb.streamlit.app" rel="noopener noreferrer"&gt;playground&lt;/a&gt;. Train both an FC network and a CNN on the same MNIST subset. The CNN reaches higher accuracy than the FC network.&lt;/p&gt;

&lt;p&gt;In the second tab, pick a digit and watch what each filter detects. One filter lights up along vertical strokes. Another responds to horizontal edges. Another catches curves.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgozuii6i9lq7zlgjbovm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgozuii6i9lq7zlgjbovm.png" alt="CNN pipeline: original digit → feature maps → pooled output" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The top row shows the digit after four different filters. The vertical edge filter lights up along the strokes of the 3. The horizontal edge filter catches the top and bottom curves. Each filter sees something different in the same image. The bottom row shows the same feature maps after max pooling: smaller, coarser, but the important patterns survive.&lt;/p&gt;

&lt;h2&gt;
  
  
  CNNs Are Built for Spatial Data
&lt;/h2&gt;

&lt;p&gt;CNNs work because of two assumptions: nearby inputs are related, and the same pattern can appear anywhere. Images satisfy both. So do audio spectrograms and video frames. Text doesn't. The word "not" next to "good" means something completely different from "not" next to "bad." That's why text needs different architectures, which we'll get to.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;CNNs solve the parameter problem for images. But go deep enough and training breaks down again. Gradients vanish through many layers. Adding more layers actually hurts accuracy, not from overfitting, but because the gradient signal can't reach the early layers.&lt;/p&gt;

&lt;p&gt;Two ideas fixed this: batch normalization (stabilize activations between layers) and residual connections (let gradients skip layers entirely). Together, they made 50-layer and 100-layer networks trainable.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Krizhevsky, A., et al. (2012). &lt;em&gt;ImageNet Classification with Deep Convolutional Neural Networks&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/06-cnn" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>convolutionalnetworks</category>
      <category>computervision</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Neural Network Regularization: Fighting Overfitting</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Fri, 13 Mar 2026 14:35:02 +0000</pubDate>
      <link>https://dev.to/rnilav/regularization-fighting-overfitting-2pj</link>
      <guid>https://dev.to/rnilav/regularization-fighting-overfitting-2pj</guid>
      <description>&lt;p&gt;&lt;em&gt;"Learning without thought is labor lost."&lt;/em&gt; --&lt;strong&gt;Confucius&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  99% Accuracy That Means Nothing
&lt;/h2&gt;

&lt;p&gt;Train a network on MNIST with Adam, mini-batches, 100 epochs. Training accuracy climbs past 99%. It feels like we've solved it.&lt;/p&gt;

&lt;p&gt;Now check test accuracy. It hit 97% around epoch 50, then slowly dropped as training continued.&lt;/p&gt;

&lt;p&gt;Imagine studying for an exam by memorizing that "Question 5 is always B" instead of understanding why B is correct. You'd ace the practice test but fail when questions are reordered. Neural networks do the same thing. With enough capacity and enough passes over the same data, they memorize training examples instead of learning the underlying patterns.&lt;/p&gt;

&lt;p&gt;That's overfitting.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gap
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Epoch 1:   Train: 87%  Test: 87%  Gap: 0%
Epoch 10:  Train: 97%  Test: 97%  Gap: 0%
Epoch 50:  Train: 99%  Test: 97%  Gap: 2%
Epoch 100: Train: 99.7% Test: 97%  Gap: 3%
Epoch 200: Train: 99.9% Test: 96%  Gap: 4%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Early on, both accuracies rise together. The network is learning real patterns. But past epoch 50, training accuracy keeps climbing while test accuracy stalls. The network has shifted from learning to memorizing.&lt;/p&gt;

&lt;p&gt;Why? The network has 100,000 weights and only 60,000 training examples. It has enough capacity to memorize every single example. Given enough epochs, it will. The loss function only rewards getting training predictions right. It doesn't care whether the network learned a general rule or memorized each answer individually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dropout: Randomly Breaking the Network
&lt;/h2&gt;

&lt;p&gt;What if we randomly disabled neurons during training? Not permanently. Just for each mini-batch, randomly turn off some neurons.&lt;/p&gt;

&lt;p&gt;This sounds like chaos engineering. And that's exactly what it is. It forces the network to build redundancy.&lt;/p&gt;

&lt;p&gt;Think of a team with a frontend developer, a backend developer, and a database specialist. To ship a feature, all three must contribute. If the database specialist is out sick, the feature stalls because nobody else knows the database layer. The team is fragile because each person owns one piece and nobody else can cover for them.&lt;/p&gt;

&lt;p&gt;Now imagine the manager randomly rotates people out of the team each sprint. Nobody can afford to be the only person who knows their piece. Knowledge spreads. The frontend developer picks up some backend. The backend developer learns the database. The team becomes resilient because no single person is a bottleneck.&lt;/p&gt;

&lt;p&gt;That's dropout. During training, each neuron has a chance (say 20%) of being turned off for that mini-batch. The network can't rely on any single neuron, so it spreads useful information across multiple pathways. At test time, all neurons are active, and the network benefits from all those redundant pathways working together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;training&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;mask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;binomial&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;dropout_rate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;mask&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;dropout_rate&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="c1"&gt;# scale to keep expected value the same
&lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;                                &lt;span class="c1"&gt;# no dropout at test time
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The scaling by &lt;code&gt;1 / (1 - dropout_rate)&lt;/code&gt; matters. Say you have 10 neurons and drop 20%. During training, only 8 are active. Their outputs sum to, say, 8.0. At test time, all 10 are active, so the sum jumps to 10.0. The next layer suddenly sees larger numbers than it was trained on, and predictions break.&lt;/p&gt;

&lt;p&gt;The fix: during training, scale the surviving neurons' outputs up by &lt;code&gt;1 / (1 - 0.2) = 1.25&lt;/code&gt;. Now those 8 neurons produce 8.0 × 1.25 = 10.0, matching what the next layer will see at test time. Training and testing stay consistent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weight Decay: Preferring Simple Explanations
&lt;/h2&gt;

&lt;p&gt;Dropout prevents neurons from co-depending. Weight decay takes a different approach: it penalizes large weights.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Total Loss = Prediction Loss + λ × (sum of squared weights)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Say your network has three weights: [3.0, -4.0, 2.0]. The sum of their squares is 9 + 16 + 4 = 29. With λ = 0.001, the penalty is 0.029, added on top of the prediction loss.&lt;/p&gt;

&lt;p&gt;Now compare a network with weights [0.3, -0.4, 0.2]. Sum of squares: 0.09 + 0.16 + 0.04 = 0.29. Penalty: 0.00029. Ten times smaller weights, hundred times smaller penalty.&lt;/p&gt;
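
&lt;p&gt;The same arithmetic as a quick check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

lam = 0.001
big   = np.array([3.0, -4.0, 2.0])
small = np.array([0.3, -0.4, 0.2])

print(lam * np.sum(big ** 2))    # ≈ 0.029
print(lam * np.sum(small ** 2))  # ≈ 0.00029
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;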

&lt;p&gt;The optimizer now faces a trade-off: reduce prediction loss, but also keep weights small. A weight can grow large if it genuinely helps predictions enough to justify the penalty. But weights that grew large just to memorize noise get pulled back toward zero because the penalty outweighs their benefit.&lt;/p&gt;

&lt;p&gt;Why does this help? Large weights make the network sensitive to small input changes, creating sharp decision boundaries that fit noise in the training data. Small weights produce smoother boundaries that generalize better.&lt;/p&gt;

&lt;p&gt;In practice, most people use both. Start with dropout at 0.2 and weight decay at 0.0001, then adjust based on the gap between training and test accuracy. If the gap is still large, increase dropout. If training accuracy drops too much, ease off.&lt;/p&gt;

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://rnilav-percep-05-regularizationregularization-playground-z72npw.streamlit.app" rel="noopener noreferrer"&gt;playground&lt;/a&gt;. Train with no regularization and watch the gap between train and test accuracy widen. Then add dropout at 0.2. The gap shrinks. Add weight decay at 0.0001. It shrinks further.&lt;/p&gt;

&lt;p&gt;The visual is the two curves: training accuracy climbing, test accuracy following or falling behind. Regularization is what keeps them close together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny2wplx7j8tisrveu5oz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny2wplx7j8tisrveu5oz.png" alt="No regularization vs dropout + weight decay" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We can now train networks that generalize. But we're still using fully connected networks where every input connects to every neuron. For MNIST's 28×28 images, that's 784 inputs. For a real photograph at 224×224×3, that's 150,528 inputs. With even 1,000 hidden neurons, the first layer alone would need roughly 150 million weights.&lt;/p&gt;

&lt;p&gt;We need an architecture that understands spatial structure. One where nearby pixels matter more than distant ones, and the same pattern can be detected anywhere in the image.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Hinton, G. E., et al. (2012). &lt;em&gt;Improving neural networks by preventing co-adaptation of feature detectors&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/05-regularization" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>regularization</category>
      <category>weightdecay</category>
    </item>
    <item>
      <title>Neural Network Optimizers: Training at Scale</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Wed, 04 Mar 2026 15:06:50 +0000</pubDate>
      <link>https://dev.to/rnilav/neural-network-optimizers-from-baby-steps-to-intelligent-learning-44po</link>
      <guid>https://dev.to/rnilav/neural-network-optimizers-from-baby-steps-to-intelligent-learning-44po</guid>
      <description>&lt;p&gt;&lt;em&gt;"Adapt what is useful, reject what is useless, and add what is specifically your own."&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Bruce Lee&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  From 4 Examples to 60,000
&lt;/h2&gt;

&lt;p&gt;Backpropagation learned XOR from 4 training examples. Compute the gradient using all 4, update the weights, repeat. Every update sees the complete picture.&lt;/p&gt;

&lt;p&gt;Now consider MNIST: 60,000 handwritten digit images, each 28×28 pixels. The task is to look at an image and predict which digit (0-9) it represents. The network needs 784 inputs, a hidden layer, and 10 outputs. Roughly 100,000 weights.&lt;/p&gt;

&lt;p&gt;Computing the gradient using all 60,000 examples requires 60,000 forward and backward passes per update. On a simple NumPy implementation, that's a few seconds per update. Training for 100 epochs takes several minutes.&lt;/p&gt;

&lt;p&gt;That's just MNIST. Models like GPT-4o and Claude train on trillions of tokens. Full-batch gradient descent doesn't scale.&lt;/p&gt;
&lt;h2&gt;
  
  
  You Don't Need Every Example
&lt;/h2&gt;

&lt;p&gt;Think about cooking. You don't taste every grain of rice to know if you need more salt. A spoonful tells you enough.&lt;/p&gt;

&lt;p&gt;That's mini-batch gradient descent. Instead of computing the gradient from all 60,000 examples, grab a small batch (say 64), compute the gradient from those, update the weights, grab the next 64, repeat.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for each epoch:
    shuffle training data
    for each mini-batch of 64:
        forward pass
        compute loss
        backward pass
        update weights
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each mini-batch gradient is noisy, not the exact direction from all 60,000 examples. But it points roughly right. And it's fast. Instead of one slow update per epoch using all data, you get hundreds of quick updates. Training that took minutes with full-batch finishes in seconds with mini-batches.&lt;/p&gt;

&lt;p&gt;One complete pass through all the data is an &lt;strong&gt;epoch&lt;/strong&gt;. With 60,000 examples and batch size 64, one epoch is 937 updates. We shuffle the data before each epoch so the mini-batches differ every time. This randomness (the "stochastic" in stochastic gradient descent) prevents the network from memorizing the order of examples.&lt;/p&gt;
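
&lt;p&gt;Here is that loop as a self-contained NumPy sketch. Plain linear regression on synthetic data stands in for the real network, because the part that matters here is the shuffling and batching, not the model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-in for MNIST: 60,000 examples, 784 features, linear target
n, d, batch_size, lr = 60_000, 784, 64, 0.01
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + rng.normal(scale=0.1, size=n)

w = np.zeros(d)                                  # the "model": plain linear regression
for epoch in range(3):
    perm = rng.permutation(n)                    # shuffle before every epoch
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]     # grab the next mini-batch
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / len(idx)   # gradient from this batch only
        w -= lr * grad                           # one quick, slightly noisy update
    print(f"epoch {epoch}: loss {np.mean((X @ w - y) ** 2):.4f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each printed loss should drop sharply: hundreds of cheap, noisy updates per epoch beat one exact update per epoch.&lt;/p&gt;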

&lt;h2&gt;
  
  
  Not All Weights Need the Same Push
&lt;/h2&gt;

&lt;p&gt;Mini-batches solve the speed problem. But remember the radio from the last post? Seed 5 with a small network got stuck between stations. At MNIST scale, this problem gets worse. With 100,000 weights, some are tuning into strong signals and getting large gradients on every update. Others are listening for faint signals and barely getting any gradient at all. One learning rate can't serve both. The loud signals overshoot while the faint ones barely move.&lt;/p&gt;

&lt;p&gt;This is where optimizers diverge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SGD&lt;/strong&gt; applies the same learning rate to every weight. It's the basic radio dial. Turn at one speed, hope for the best. If the station is strong, you find it. If it's faint, you might turn right past it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Momentum&lt;/strong&gt; keeps a running average of past gradients. If the last ten updates all pushed a weight in the same direction, momentum makes the next push bigger. If the updates keep flipping direction (up, down, up, down), momentum cancels them out and the weight stays steady. It smooths out the noise from mini-batches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Adam&lt;/strong&gt; does what momentum does, plus one more thing: it tracks how large each weight's gradients typically are. A weight that always gets big gradients is already moving fast, so Adam gives it smaller steps to avoid overshooting. A weight that gets tiny gradients is barely moving, so Adam gives it bigger steps to catch up. Every weight gets its own learning rate, adjusted automatically.&lt;/p&gt;

&lt;p&gt;In practice, Adam is the default choice for most problems. It converges faster and is less sensitive to the initial learning rate.&lt;/p&gt;
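
&lt;p&gt;Stripped down to their update rules, the three differ by only a few lines. This is a schematic, hand-rolled sketch rather than any library's API; beta1, beta2, and eps are the usual defaults from the Adam paper:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8

def sgd(w, grad):
    # one step size for every weight
    return w - lr * grad

def momentum(w, grad, velocity, beta=0.9):
    # running average of past gradients: consistent directions build speed
    velocity = beta * velocity + grad
    return w - lr * velocity, velocity

def adam(w, grad, m, v, t):
    # momentum, plus a per-weight step size based on typical gradient magnitude
    m = beta1 * m + (1 - beta1) * grad          # running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2     # running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias correction for the first steps
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The only real difference is how much per-weight history each rule keeps: none, one running average, or two.&lt;/p&gt;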

&lt;p&gt;For the math behind each of these update rules, see the &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/blob/main/04-optimization/OPTIMIZERS_SIMPLIFIED.md" rel="noopener noreferrer"&gt;optimizers&lt;/a&gt; notes.&lt;/p&gt;

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://rnilav-perceptron-04-optimizationoptimization-playground-anxskw.streamlit.app/" rel="noopener noreferrer"&gt;playground&lt;/a&gt; and train all three optimizers on MNIST side by side. Watch which one pulls ahead first.&lt;/p&gt;

&lt;p&gt;The gap is most visible in the first few epochs. Adam adapts quickly because it adjusts per weight. SGD treats every weight the same and takes longer to find its footing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cysxo55tdld429ju9nf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cysxo55tdld429ju9nf.png" alt="Optimizer comparison on MNIST" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We can now train on real data, efficiently. Backprop computes gradients, mini-batches make it fast, Adam adapts the learning rate per weight. 99% accuracy on MNIST in minutes.&lt;/p&gt;

&lt;p&gt;But train longer and something breaks. Training accuracy keeps climbing, past 99%. Test accuracy stalls and drops. The network isn't learning patterns anymore. It's memorizing training examples.&lt;/p&gt;

&lt;p&gt;That gap between training and test accuracy is called overfitting. Closing it is the next problem.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Kingma, D. P., &amp;amp; Ba, J. (2014). &lt;em&gt;Adam: A Method for Stochastic Optimization&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/04-optimization" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>neuralnetworks</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>Backpropagation: How Neural Networks Learn From Mistakes</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Thu, 19 Feb 2026 14:37:13 +0000</pubDate>
      <link>https://dev.to/rnilav/3-backpropagation-errors-flow-backward-knowledge-flows-forward-5320</link>
      <guid>https://dev.to/rnilav/3-backpropagation-errors-flow-backward-knowledge-flows-forward-5320</guid>
      <description>&lt;p&gt;&lt;em&gt;"The backpropagation algorithm was a key historical step in demonstrating that deep neural networks could be trained effectively."&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Geoffrey Hinton&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  From Hand-Crafted to Learned
&lt;/h2&gt;

&lt;p&gt;A network that recognizes handwritten digits has hundreds of thousands of weights. A language model has billions. You can't hand-pick billions of numbers. There has to be a way for the network to find its own weights.&lt;/p&gt;

&lt;p&gt;That's what backpropagation does. It starts with random weights and adjusts them automatically, using the errors to figure out which direction to nudge each weight.&lt;/p&gt;
&lt;h2&gt;
  
  
  Try, Miss, Adjust
&lt;/h2&gt;

&lt;p&gt;Think about learning to throw darts. Your first throw misses the bullseye by a foot. You don't start over with a completely random throw. You adjust. A little less force, slightly different angle. The error (how far you missed) tells you which way to correct.&lt;/p&gt;

&lt;p&gt;Backpropagation does the same thing, for every weight in the network, simultaneously:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Forward pass:   feed input through the network, get a prediction
2. Compute error:  how far off was the prediction?
3. Backward pass:  trace the error back, figure out each weight's share of blame
4. Update weights: nudge each weight to reduce the error
5. Repeat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 3 is where the name comes from. The error at the output is clear: prediction minus target. But how much did each hidden neuron contribute to that error?&lt;/p&gt;

&lt;p&gt;Think of it like a relay race where the team finishes 10 seconds too slow. The coach doesn't just blame the last runner. She works backward: the last runner lost 3 seconds, the one before lost 5, the first lost 2. Each runner's share of blame is traced back through the chain.&lt;/p&gt;

&lt;p&gt;Backpropagation does the same thing. It starts at the output error and works backward through each layer, computing how much each weight contributed. This is the chain rule from calculus applied layer by layer. Each weight gets a gradient. &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/blob/main/03-backpropagation/BACKPROPAGATION_CALCULUS.md" rel="noopener noreferrer"&gt;Backpropagation calculus explained&lt;/a&gt;.&lt;/p&gt;
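
&lt;p&gt;Here is the whole loop as a minimal NumPy sketch for the same 2-4-1 XOR setup used in the playground. The variable names and the squared-error loss are my choices for this sketch, not the repo's code, and the learning rate knob is what the next section is about:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(123)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-4-1 network, random starting weights
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
learning_rate = 0.5

for epoch in range(10000):
    # 1. forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # 2. compute error (squared error here)
    loss = np.mean((y_hat - y) ** 2)

    # 3. backward pass: chain rule, layer by layer
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # blame at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)        # blame traced back to the hidden layer

    # 4. update weights: nudge each one against its gradient
    W2 -= learning_rate * h.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0)
    W1 -= learning_rate * X.T @ d_hid
    b1 -= learning_rate * d_hid.sum(axis=0)

print(y_hat.round(2))   # typically close to [[0], [1], [1], [0]]; some seeds get stuck
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;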

&lt;h2&gt;
  
  
  The Learning Rate
&lt;/h2&gt;

&lt;p&gt;The gradient tells you which direction to move. The learning rate tells you how big a step to take.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new_weight = old_weight - learning_rate × gradient
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Too high (1.0) and the network overshoots. The loss bounces around, never settling. Too low (0.01) and training crawls. Each update barely moves the weights. For this small network, a learning rate around 0.3 to 0.5 gives steady progress.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://rnilav-perceptrons-03-backpropagationbackprop-playground-tlr5ql.streamlit.app/" rel="noopener noreferrer"&gt;playground&lt;/a&gt;, try training with learning rate 0.5 and seed 123. Watch the loss drop smoothly. Then try learning rate 1.0. Watch it bounce. The learning rate is the difference between a network that converges and one that thrashes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Loss Curve
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/rnilav/understanding-ai-from-first-principles-multi-layer-perceptrons-and-the-hidden-layer-breakthrough-44pl"&gt;MLP-Post&lt;/a&gt;, I hand-crafted weights and got 100% accuracy instantly. No learning, no process.&lt;/p&gt;

&lt;p&gt;With backpropagation, you start with random weights. The network gets everything wrong. The loss is high. Then, epoch by epoch, the loss drops. The predictions get closer. The decision boundary shifts from random noise to something that actually separates the classes.&lt;/p&gt;

&lt;p&gt;That curve going down is learning happening in real time.&lt;/p&gt;

&lt;p&gt;Open the &lt;a href="https://rnilav-perceptrons-03-backpropagationbackprop-playground-tlr5ql.streamlit.app/" rel="noopener noreferrer"&gt;playground&lt;/a&gt; and train a 2-4-1 network with learning rate 0.5 and seed 123. Watch the loss curve drop. &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/blob/main/03-backpropagation/explore_hyperparameters.py" rel="noopener noreferrer"&gt;Explore hyperparameters&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Same Algorithm, Any Scale
&lt;/h2&gt;

&lt;p&gt;Every modern neural network learned its weights through backpropagation. Image classifiers, language models, speech recognition. The algorithm that learned 9 weights for XOR is the same one that trains GPT-4's reported 1.76 trillion parameters. Forward pass, compute loss, backward pass, update weights. The scale changes. The principle doesn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Starting Point Matters
&lt;/h2&gt;

&lt;p&gt;Backpropagation starts with random weights. The random seed controls which random numbers you start with. Think of tuning an old analog radio. You turn the dial looking for a clear signal. Where you start turning from (the seed) decides which station you find first. Sometimes you land on a strong station. Sometimes you get stuck between two stations, hearing nothing but static, and no small turn of the dial fixes it.&lt;/p&gt;

&lt;p&gt;A small network (2-2-1) is like a radio with a narrow dial. The stations are packed tight, and a tiny turn jumps past the one you wanted. Very sensitive to where you start. A larger network (2-4-1) is a wider dial with more room between stations. Easier to land on a clear signal from almost any starting position.&lt;/p&gt;

&lt;p&gt;In the playground, seed 5 with a 2-2-1 network gets stuck at 75%. Switch to 2-4-1 with the same seed and it converges to 100%. More neurons don't just add capacity. They add alternative routes to the solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80wzku5d6fdtfddiyvse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80wzku5d6fdtfddiyvse.png" alt="Same seed, different architecture: 2-2-1 gets stuck, 2-4-1 converges" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We can now train networks automatically. But XOR has 4 training examples. Real datasets have thousands, even millions. Computing the gradient using all examples at once is slow. And a single learning rate for every weight isn't ideal: some weights need bigger steps, others smaller.&lt;/p&gt;

&lt;p&gt;Training a network is one thing. Training it efficiently, at scale, is a different problem. That's where optimizers come in.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Rumelhart, D. E., Hinton, G. E., &amp;amp; Williams, R. J. (1986). &lt;em&gt;Learning representations by back-propagating errors&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/03-backpropagation" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>backpropagation</category>
      <category>machinelearning</category>
      <category>neuralnetworks</category>
    </item>
    <item>
      <title>Multi-Layer Perceptron: Where One Line Becomes Two</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Tue, 17 Feb 2026 15:14:10 +0000</pubDate>
      <link>https://dev.to/rnilav/understanding-ai-from-first-principles-multi-layer-perceptrons-and-the-hidden-layer-breakthrough-44pl</link>
      <guid>https://dev.to/rnilav/understanding-ai-from-first-principles-multi-layer-perceptrons-and-the-hidden-layer-breakthrough-44pl</guid>
      <description>&lt;p&gt;&lt;em&gt;"The perceptron has many limitations... the most serious is its inability to learn even the simplest nonlinear functions."&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;Marvin Minsky&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  XOR Needs More Than One Line
&lt;/h2&gt;

&lt;p&gt;The perceptron solved AND, OR, and NAND. The natural next question: what can't it do?&lt;/p&gt;

&lt;p&gt;XOR. Output 1 when inputs differ, 0 when they match.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[0, 0] → 0    [0, 1] → 1
[1, 0] → 1    [1, 1] → 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The class 1 points sit diagonally opposite each other. Unlike AND or OR, where one straight line cleanly separates the classes, XOR needs at least two lines to carve out the right regions.&lt;/p&gt;

&lt;p&gt;The obvious fix? Add more neurons. Stack another layer. Surely more layers means more power.&lt;/p&gt;

&lt;p&gt;It doesn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Stacking Layers Alone Changes Nothing
&lt;/h2&gt;

&lt;p&gt;A perceptron computes w·x + b and draws a line. Stack two layers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Layer 1:  z₁ = w₁·x + b₁
Layer 2:  z₂ = w₂·z₁ + b₂
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expand it: z₂ = w₂·(w₁·x + b₁) + b₂ = (w₂·w₁)·x + (w₂·b₁ + b₂)&lt;/p&gt;

&lt;p&gt;That's just W·x + B. A single line with different numbers. Two layers collapsed into one. Stack ten, a hundred, the math always simplifies to one straight line. More layers feel like more power. But without something to break the linearity between them, depth is an illusion.&lt;/p&gt;
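
&lt;p&gt;You can verify the collapse numerically. With random weights, the two-layer linear stack and the single collapsed layer give identical outputs (a small NumPy check, not playground code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)   # layer 1: 2 inputs → 3 units
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)   # layer 2: 3 units → 1 output

x = rng.normal(size=2)
two_layers = W2 @ (W1 @ x + b1) + b2

W, B = W2 @ W1, W2 @ b1 + b2                            # the collapsed single layer
one_layer = W @ x + B

print(np.allclose(two_layers, one_layer))               # True: the depth bought nothing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;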

&lt;h2&gt;
  
  
  The Carry
&lt;/h2&gt;

&lt;p&gt;When I was a kid, single digit addition was simple. 3 + 5 = 8. One step, done.&lt;/p&gt;

&lt;p&gt;Then came 27 + 15. I kept getting it wrong. I'd add 2 + 1 = 3, then 7 + 5 = 12, and write 312. Two separate problems stacked together. I was missing something invisible.&lt;/p&gt;

&lt;p&gt;The breakthrough: 7 + 5 doesn't just equal 12. It creates a 1 that carries over to the next column. That carry doesn't stay where it was computed. It transforms into a 1 in a different column, changing what comes next.&lt;/p&gt;

&lt;p&gt;Without the carry, stacking columns is useless. Each column is independent, and you get 312. With the carry, the columns interact, and you get 42.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sigmoid: The Carry Between Layers
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Perceptron:   output = w·x + b
MLP neuron:   output = sigmoid(w·x + b)

sigmoid(z) = 1 / (1 + e^(-z))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sigmoid takes any number and squashes it between 0 and 1. Feed it −5, you get 0.007. Feed it 0, you get 0.5. Feed it +5, you get 0.993. It takes one layer's output and transforms it into a new range before the next layer sees it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Layer 1:  h = sigmoid(w₁·x + b₁)
Layer 2:  y = sigmoid(w₂·h + b₂)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try to simplify this into a single W·x + B. You can't. The sigmoid in the middle prevents the layers from collapsing. The hidden layer matters not because it adds more neurons, but because the activation function between layers stops them from collapsing into one.&lt;/p&gt;
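
&lt;p&gt;Repeat the earlier check with sigmoid in the middle, and the collapsed weights that reproduced the linear stack no longer reproduce the output (again just a small NumPy check with random weights):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), rng.normal(size=2)
W2, b2 = rng.normal(size=(1, 2)), rng.normal(size=1)

W, B = W2 @ W1, W2 @ b1 + b2                     # the would-be collapsed layer
for x in np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]]):
    stacked   = W2 @ sigmoid(W1 @ x + b1) + b2   # sigmoid between the layers
    collapsed = W @ x + B
    print(np.allclose(stacked, collapsed))       # False every time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;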

&lt;h2&gt;
  
  
  Beyond Sigmoid
&lt;/h2&gt;

&lt;p&gt;Sigmoid isn't the only activation function. There are others, each with a different shape:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sigmoid(z) = 1 / (1 + e^(-z))       → squashes to (0, 1)
tanh(z)    = (e^z - e^-z)/(e^z+e^-z) → squashes to (-1, 1)
ReLU(z)    = max(0, z)                → passes positives, zeros out negatives
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Think of them as different volume knobs. Sigmoid only turns between 0 and 1. Tanh turns between -1 and 1, which is useful when you need the output centered around zero. ReLU is the simplest: if the signal is positive, pass it through unchanged. If negative, silence it.&lt;/p&gt;

&lt;p&gt;ReLU is the default for hidden layers in modern networks. It's fast to compute and avoids a problem called vanishing gradients, where sigmoid and tanh squash large values so flat that the gradient nearly disappears, making learning extremely slow in deep networks. We'll see this problem firsthand at a later stage.&lt;/p&gt;
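
&lt;p&gt;You can see the vanishing part directly. Sigmoid's gradient is sigmoid(z) × (1 − sigmoid(z)): it peaks at 0.25 and all but disappears for large inputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for z in [0.0, 2.0, 5.0, 10.0]:
    s = sigmoid(z)
    print(z, round(s * (1 - s), 6))
# 0.0   0.25       (steepest point)
# 2.0   0.104994
# 5.0   0.006648
# 10.0  4.5e-05    (the gradient has all but vanished)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;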

&lt;p&gt;For output layers, the choice depends on the task: sigmoid for binary yes/no, softmax (a generalization of sigmoid) for picking one class out of many.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Solves XOR
&lt;/h2&gt;

&lt;p&gt;A 2-2-1 network (2 inputs, 2 hidden neurons with sigmoid, 1 output) solves XOR. Each hidden neuron draws its own line. These two parallel lines create a band, and the region between them is where exactly one input is 1.&lt;/p&gt;

&lt;p&gt;The output neuron combines them: neuron 1's signal (OR) minus neuron 2's signal (AND). What's left is OR but NOT AND, which is XOR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [0,0] → both neurons low  → output low  → class 0 ✓
  [0,1] → neuron 1 high, neuron 2 low → output high → class 1 ✓
  [1,0] → neuron 1 high, neuron 2 low → output high → class 1 ✓
  [1,1] → both neurons high → they cancel → output low → class 0 ✓
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this scales beyond a single hidden layer. Stack another layer on top, and the second layer doesn't see the original inputs. It sees the transformed outputs of the first layer. So the first layer draws boundaries, the second layer combines those boundaries into shapes, a third could combine shapes into patterns. Each layer builds on the previous one's transformation. That's why they're called &lt;em&gt;deep&lt;/em&gt; neural networks.&lt;/p&gt;
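
&lt;p&gt;To make the hand-crafted version concrete, here is one set of 9 numbers that works. These particular values are my own illustration (many other sets work), chosen so neuron 1 behaves like OR and neuron 2 like AND, as described above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# hidden neuron 1 (first column) fires like OR, neuron 2 (second column) like AND
W1 = np.array([[10.0, 10.0],
               [10.0, 10.0]])
b1 = np.array([-5.0, -15.0])

# output: OR minus a heavily weighted AND, i.e. "OR but NOT AND"
W2 = np.array([[10.0], [-20.0]])
b2 = np.array([-5.0])

h = sigmoid(X @ W1 + b1)
y = sigmoid(h @ W2 + b2)
print(y.round(2).ravel())   # ≈ [0, 1, 1, 0] — XOR, from 9 hand-picked numbers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;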

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://rnilav-perceptro-02-multi-layer-perceptronmlp-playground-f24pnx.streamlit.app/" rel="noopener noreferrer"&gt;playground&lt;/a&gt;. Perceptron on the left, stuck with one line, failing. MLP on the right, two hidden neurons creating a band that captures the XOR region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wlf9ds9vfyxdfmr8y4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wlf9ds9vfyxdfmr8y4d.png" alt="Perceptron vs MLP on XOR" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Two hidden neurons needed 9 hand-crafted weights to solve XOR. A network that recognizes handwritten digits needs thousands of neurons and hundreds of thousands of weights. One that understands language needs billions. The architecture scales, but hand-picking weights doesn't.&lt;/p&gt;

&lt;p&gt;There has to be a way for the network to find its own weights. That's the next post.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Minsky, M., &amp;amp; Papert, S. (1969). &lt;em&gt;Perceptrons: An Introduction to Computational Geometry&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; From Perceptrons to Transformers | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/02-multi-layer-perceptron" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>mlp</category>
    </item>
    <item>
      <title>Perceptron: The Foundation of Modern AI</title>
      <dc:creator>Nilavukkarasan R</dc:creator>
      <pubDate>Sun, 15 Feb 2026 08:40:21 +0000</pubDate>
      <link>https://dev.to/rnilav/understanding-perceptrons-the-foundation-of-modern-ai-2g04</link>
      <guid>https://dev.to/rnilav/understanding-perceptrons-the-foundation-of-modern-ai-2g04</guid>
      <description>&lt;p&gt;&lt;em&gt;"We now have a new kind of programming paradigm. Instead of telling the computer what to do, we show it examples of what we want, and it figures out how to do it."&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;-- &lt;strong&gt;Michael Nielsen&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Back to the Beginning
&lt;/h2&gt;

&lt;p&gt;My first encounter with AI was in college. I memorised more than I understood. None of what I memorised appeared in the exam. So I wrote whatever I could, and I'm sure the professor didn't understand my answers either.&lt;/p&gt;

&lt;p&gt;Fast forward twenty years of building software systems. In all that time, I barely touched AI/ML. Sure, I designed applications that integrated with black-box AI/ML systems for OCR, but that was it.&lt;/p&gt;

&lt;p&gt;Then &lt;strong&gt;ChatGPT&lt;/strong&gt; happened. &lt;/p&gt;

&lt;p&gt;Like many of you, I started experimenting. RAG chatbots, embedding models, agents, agentic patterns. I was building with these tools, but something bothered me. I didn't understand how any of it actually worked.&lt;/p&gt;

&lt;p&gt;So I went back. Not to the latest paper, but to the very beginning. To the first artificial neuron.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five Lines That Changed Programming
&lt;/h2&gt;

&lt;p&gt;A perceptron takes inputs, multiplies each by a weight, adds them up, and makes a decision.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;perceptron&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;bias&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;weighted_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;weights&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;weighted_sum&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;bias&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;weighted_sum&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Each input has a weight (how important is this input?). We sum them up, add a bias, and if the result is positive, output 1. Otherwise, output 0.&lt;/p&gt;

&lt;p&gt;Now consider the AND logic gate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: [0, 0] → Output: 0
Input: [0, 1] → Output: 0
Input: [1, 0] → Output: 0
Input: [1, 1] → Output: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The traditional way? Write an if/else. The perceptron way? Show it examples and let it figure out the weights.&lt;/p&gt;

&lt;p&gt;With learned weights [0.5, 0.5] and bias of −0.7, the perceptron solves this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[0, 0]: 0×0.5 + 0×0.5 − 0.7 = −0.7 → Output: 0 ✓&lt;/li&gt;
&lt;li&gt;[0, 1]: 0×0.5 + 1×0.5 − 0.7 = −0.2 → Output: 0 ✓&lt;/li&gt;
&lt;li&gt;[1, 0]: 1×0.5 + 0×0.5 − 0.7 = −0.2 → Output: 0 ✓&lt;/li&gt;
&lt;li&gt;[1, 1]: 1×0.5 + 1×0.5 − 0.7 = 0.3  → Output: 1 ✓&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The if/else is hardcoded. The perceptron learned these numbers from examples.&lt;/p&gt;

&lt;p&gt;How? It starts with random weights. It feeds in [1,1], gets the wrong answer, and nudges the weights a little in the direction that would have been correct. Feeds in [0,1], checks again, nudges again. After a few passes through all four examples, the weights settle at values that get everything right. That's the entire learning algorithm. Try, fail, adjust.&lt;/p&gt;
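
&lt;p&gt;That try-fail-adjust loop fits in a few lines. This is a sketch of the classic perceptron learning rule, not the playground's exact code; the numbers it settles on depend on where it starts, so they won't necessarily be 0.5, 0.5, and −0.7:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def perceptron(inputs, weights, bias):
    # the same five lines as above
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    weighted_sum += bias
    return 1 if weighted_sum &amp;gt; 0 else 0

# the four AND examples and their targets
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for epoch in range(10):
    for inputs, target in examples:
        error = target - perceptron(inputs, weights, bias)         # -1, 0, or +1
        # nudge each weight toward what would have been correct
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print(weights, bias)                                               # one working set of numbers
print([perceptron(i, weights, bias) for i, _ in examples])         # [0, 0, 0, 1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;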

&lt;p&gt;That shift, from writing rules to showing examples, is what Nielsen meant. And it's the same shift that powers every modern AI system.&lt;/p&gt;

&lt;h2&gt;
  
  
  It Draws a Line
&lt;/h2&gt;

&lt;p&gt;A perceptron draws a line. The weights control the angle. The bias controls where it sits. Everything on one side is class 0, everything on the other is class 1. Training just means nudging the line until it separates the classes correctly.&lt;/p&gt;

&lt;p&gt;For AND, the line puts [1,1] on one side and everything else on the other. Easy. For OR, it puts [0,0] alone on one side. Also easy.&lt;/p&gt;

&lt;p&gt;For XOR (output 1 when inputs differ, 0 when they match), the class 1 points sit diagonally opposite each other. Try drawing one straight line that separates them. You can't. It's geometrically impossible.&lt;/p&gt;

&lt;p&gt;That's the perceptron's entire story. If your problem lives on opposite sides of a line, it works beautifully. If not, no amount of training will help.&lt;/p&gt;

&lt;h2&gt;
  
  
  See It
&lt;/h2&gt;

&lt;p&gt;Open the &lt;a href="https://perceptrons-to-transformers-23pcqtappwdfpdoywndoewt.streamlit.app/" rel="noopener noreferrer"&gt;playground&lt;/a&gt; and train on AND. Watch the red dashed line settle into place, cleanly separating the orange dots from the blue ones. The error count drops to zero. Done.&lt;/p&gt;

&lt;p&gt;Now switch to XOR. The line thrashes around, never settling. The error count never hits zero. The perceptron keeps trying, keeps adjusting, and keeps failing.&lt;/p&gt;

&lt;p&gt;That contrast is the concept. Stare at it until it sticks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedrbxa9pija8mv1yb30c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedrbxa9pija8mv1yb30c.png" alt="AND vs XOR: the line settles for AND, fails for XOR" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Every single neuron in GPT-4, in every transformer you've ever used, works on these same principles. The perceptron isn't history. It's the foundation.&lt;/p&gt;

&lt;p&gt;But there's one simple logic gate it cannot learn. No matter how you adjust the weights, that single straight line can never solve it.&lt;/p&gt;

&lt;p&gt;The problem is called XOR. And solving it required an idea that changed everything.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
Nielsen, M. (2015). &lt;em&gt;Neural Networks and Deep Learning&lt;/em&gt;. &lt;a href="http://neuralnetworksanddeeplearning.com/" rel="noopener noreferrer"&gt;neuralnetworksanddeeplearning.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; Learning AI from First Principles | &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/rnilav/perceptrons-to-transformers/tree/main/01-perceptron" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>perceptron</category>
      <category>neuralnetworks</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
