<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thokozani Buthelezi</title>
    <description>The latest articles on DEV Community by Thokozani Buthelezi (@thokozani_buthelezi_2cd41).</description>
    <link>https://dev.to/thokozani_buthelezi_2cd41</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3908811%2Ff6d8e313-f93a-4c40-baa1-d2f31d776099.png</url>
      <title>DEV Community: Thokozani Buthelezi</title>
      <link>https://dev.to/thokozani_buthelezi_2cd41</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thokozani_buthelezi_2cd41"/>
    <language>en</language>
    <item>
      <title>Reproducing Chinchilla Scaling on a Budget</title>
      <dc:creator>Thokozani Buthelezi</dc:creator>
      <pubDate>Sat, 02 May 2026 11:58:58 +0000</pubDate>
      <link>https://dev.to/thokozani_buthelezi_2cd41/reproducing-chinchilla-scaling-on-a-budget-227b</link>
      <guid>https://dev.to/thokozani_buthelezi_2cd41/reproducing-chinchilla-scaling-on-a-budget-227b</guid>
      <description>&lt;p&gt;Training a 70B parameter model costs millions of dollars. Scaling laws exist so you don't have to guess how to spend that budget. Here's what I learned reproducing them on a free GPU.&lt;/p&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Scaling laws describe how model performance improves as you increase quantities such as model size, dataset size, and compute.&lt;/p&gt;

&lt;p&gt;Instead of guessing that "bigger models = better", scaling laws give a mathematical relationship between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;model size (N, number of parameters)&lt;/li&gt;
&lt;li&gt;dataset size (D, number of tokens) &lt;/li&gt;
&lt;li&gt;compute (C, number of training FLOPs)&lt;/li&gt;
&lt;li&gt;loss (L, how wrong the model is)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The core idea&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;L(N, D) = A/N^α + B/D^β + E&lt;/code&gt;&lt;/p&gt;


&lt;p&gt;This looks intimidating, but it's simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;increasing N (model size) -&amp;gt; loss goes down&lt;/li&gt;
&lt;li&gt;increasing D (data) -&amp;gt; loss goes down&lt;/li&gt;
&lt;li&gt;both have &lt;strong&gt;diminishing returns&lt;/strong&gt;, governed by the scaling exponents (α, β)&lt;/li&gt;
&lt;li&gt;E is the irreducible loss: the entropy of the data itself, which no amount of scaling can remove&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The relationship between loss and these quantities is not linear; it is a power law.&lt;/p&gt;
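&lt;p&gt;To make the formula concrete, here is a minimal sketch of the loss form in Python. The constants are the fitted values reported in the Chinchilla paper (Hoffmann et al., 2022); they are illustrative, not something this post's experiment re-estimated.&lt;/p&gt;

```python
# Sketch of the Chinchilla loss form L(N, D) = A/N**alpha + B/D**beta + E.
# Constants are the parametric fit reported by Hoffmann et al. (2022),
# used here purely for illustration.
A, B, E = 406.4, 410.7, 1.69
alpha, beta = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss for a model with n_params parameters trained on n_tokens tokens."""
    return A / n_params**alpha + B / n_tokens**beta + E

# Doubling parameters lowers the loss, but with diminishing returns,
# and the loss can never drop below the irreducible term E.
for n in (1e8, 2e8, 4e8):
    print(f"N={n:.0e}, D=4e9 -&gt; L={chinchilla_loss(n, 4e9):.3f}")
```

&lt;p&gt;Each doubling of N shaves off less loss than the previous one, which is exactly the diminishing-returns behaviour the bullet points describe.&lt;/p&gt;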

&lt;h2&gt;The Kaplan vs Chinchilla disagreement&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kaplan et al. (2020) concluded you should scale model size faster than dataset size&lt;/li&gt;
&lt;li&gt;Chinchilla (Hoffmann et al., 2022) concluded you should scale both in equal proportion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why did they disagree?&lt;/strong&gt;&lt;br&gt;
Three experimental choices in Kaplan's setup pushed the conclusion towards scaling model size faster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;counting only non-embedding parameters when scaling&lt;/li&gt;
&lt;li&gt;undertraining the large models&lt;/li&gt;
&lt;li&gt;omitting the offset term in the compute-loss fit&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Chinchilla corrected each of these:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;all of the model's parameters are counted&lt;/li&gt;
&lt;li&gt;models are trained fully, to their compute-optimal point&lt;/li&gt;
&lt;li&gt;the offset term is included in the compute-loss fit&lt;/li&gt;
&lt;/ol&gt;
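&lt;p&gt;The practical consequence of the corrected setup can be sketched in a few lines, assuming the two common approximations that come out of the Chinchilla analysis: training compute C ≈ 6·N·D FLOPs, and roughly 20 training tokens per parameter at the compute-optimal point.&lt;/p&gt;

```python
# Sketch of the Chinchilla compute-optimal split, assuming C = 6*N*D
# training FLOPs and the "about 20 tokens per parameter" rule of thumb.
import math

def compute_optimal(c_flops: float):
    """Return (params, tokens) that roughly exhaust a FLOP budget c_flops."""
    # Substituting D = 20*N into C = 6*N*D gives C = 120*N**2.
    n = math.sqrt(c_flops / 120.0)
    d = 20.0 * n
    return n, d

n, d = compute_optimal(5.76e23)  # roughly Chinchilla's own training budget
print(f"params = {n:.2e}, tokens = {d:.2e}")
```

&lt;p&gt;For that budget the sketch gives about 7e10 parameters and 1.4e12 tokens, which is in the right ballpark for the actual Chinchilla model (70B parameters, 1.4T tokens).&lt;/p&gt;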

&lt;p&gt;&lt;strong&gt;one clean takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kaplan didn’t “get it wrong”; the setup just made model scaling look more effective than it actually is.&lt;br&gt;
Chinchilla corrected the setup and revealed the true balance.&lt;/p&gt;

&lt;h2&gt;The experiment&lt;/h2&gt;

&lt;p&gt;In my experiment to reproduce Chinchilla-style scaling, I trained 3 models of different sizes (786K, 4M, and 25M parameters) on the same dataset, &lt;strong&gt;WikiText-2&lt;/strong&gt;, under the same compute budget. All three ran for 500 steps on a T4 GPU on Google Colab, and every 50 steps I logged the validation loss and total FLOPs consumed. Here's what the data showed.&lt;/p&gt;
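&lt;p&gt;For readers who want to reproduce the FLOP axis, here is a minimal sketch of the accounting, assuming the standard approximation of 6 FLOPs per parameter per training token. The batch and sequence shapes below are hypothetical, chosen only for illustration; the actual settings are in the linked repo.&lt;/p&gt;

```python
# Minimal FLOP accounting sketch: 6 FLOPs per parameter per training token
# (forward + backward pass), a standard approximation for transformers.
def training_flops(n_params: int, tokens_per_step: int, steps: int) -> float:
    return 6.0 * n_params * tokens_per_step * steps

# e.g. the 4M-parameter run with a hypothetical batch of 32 sequences
# of 256 tokens, for 500 steps
print(f"{training_flops(4_000_000, 32 * 256, 500):.2e} FLOPs")
```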

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkny13y432u1r7b6k0xf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkny13y432u1r7b6k0xf.png" alt=" " width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75q1sfjx4to384bzhp6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75q1sfjx4to384bzhp6g.png" alt=" " width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graph 1: Validation Loss vs Training Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This graph shows what happens as you let the models practice over time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;size matters immediately&lt;/em&gt;: even at the very first step, the larger model (green) starts with a much lower loss than the small model (blue)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;the "head start" effect&lt;/em&gt;: the large model's starting point is better than the small model's finishing point, which shows that having more parameters makes a model inherently more capable&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;plateauing&lt;/em&gt;: all three lines curve and flatten out. This is &lt;u&gt;diminishing returns&lt;/u&gt;: the longer you train a model, the harder it becomes to squeeze out extra accuracy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Graph 2: Loss vs Compute (Log-Log Scale)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the "Power Law" graph. By plotting the data on a log-log scale, the curves become straight lines.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;predictable progress&lt;/em&gt;: because these lines are straight, you can measure a small model's slope and predict how much more compute is needed to reach a target loss&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;capacity limits&lt;/em&gt;: notice how the green dots (large model) extend further to the right. To reach the lowest loss on the chart you need the large model; the small model simply lacks the capacity to get that "smart", no matter how much compute you throw at it&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;the slope (-α)&lt;/em&gt;: the legend shows slopes like -0.136. This scaling exponent is the "exchange rate" between spending more compute and getting a lower loss&lt;/li&gt;
&lt;/ul&gt;
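&lt;p&gt;A slope like the -0.136 in the legend comes from fitting a straight line to (log compute, log loss). Here is a sketch of that fit; the data points are synthetic, generated from an exact power law rather than taken from my runs.&lt;/p&gt;

```python
# Fit a power-law scaling exponent on a log-log scale.
# The points are synthetic: they sit exactly on loss = 50 * flops**-0.136,
# so the fit should recover the exponent.
import numpy as np

flops = np.array([1e12, 1e13, 1e14, 1e15])
loss = 50.0 * flops ** -0.136

# On a log-log scale a power law is a straight line, so a degree-1
# polynomial fit recovers the exponent as the slope.
slope, intercept = np.polyfit(np.log10(flops), np.log10(loss), deg=1)
print(f"fitted slope = {slope:.3f}")  # about -0.136
```

&lt;p&gt;On real, noisy measurements the fitted slope won't be exact, but the same two lines of NumPy give you the scaling exponent for each model size.&lt;/p&gt;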

&lt;p&gt;&lt;strong&gt;The Big Picture&lt;/strong&gt;&lt;br&gt;
Together, these graphs show that scaling isn't random. If you want a more capable model you don't guess: you use these straight lines to calculate how many parameters and how much compute you need to reach your goal.&lt;/p&gt;

&lt;p&gt;Full code and results are on my GitHub: &lt;a href="https://github.com/Thoki-Buthelezi/elite-ai-systems-engineer-2026" rel="noopener noreferrer"&gt;https://github.com/Thoki-Buthelezi/elite-ai-systems-engineer-2026&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Next:&lt;/strong&gt; I'll be running lm-evaluation-harness across all three model sizes and analysing what benchmarks like HellaSwag and GSM8K actually measure and where they mislead.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
