<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mitchel Novoa Q.</title>
    <description>The latest articles on DEV Community by Mitchel Novoa Q. (@mitchel_novoaq_4f356cf3).</description>
    <link>https://dev.to/mitchel_novoaq_4f356cf3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3604255%2F41d51d17-1d5a-4bf7-a819-1fff0439223b.png</url>
      <title>DEV Community: Mitchel Novoa Q.</title>
      <link>https://dev.to/mitchel_novoaq_4f356cf3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mitchel_novoaq_4f356cf3"/>
    <language>en</language>
    <item>
      <title>Building an ML Language from Scratch: Introducing Charl</title>
      <dc:creator>Mitchel Novoa Q.</dc:creator>
      <pubDate>Mon, 10 Nov 2025 13:39:03 +0000</pubDate>
      <link>https://dev.to/mitchel_novoaq_4f356cf3/building-an-ml-language-from-scratch-introducing-charl-2c39</link>
      <guid>https://dev.to/mitchel_novoaq_4f356cf3/building-an-ml-language-from-scratch-introducing-charl-2c39</guid>
      <description>&lt;p&gt;I spent the last year building Charl - a programming language designed specifically for machine learning. Not a library on top of Python, but a language where tensors and autograd are native features.&lt;/p&gt;

&lt;h2&gt;Why?&lt;/h2&gt;

&lt;p&gt;PyTorch and TensorFlow are excellent, but they're libraries bolted onto general-purpose languages. I wanted to explore: what's possible when ML is built into the language itself?&lt;/p&gt;

&lt;h2&gt;What Works Today&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Native tensor operations&lt;/li&gt;
&lt;li&gt;Automatic differentiation (dynamic graphs)&lt;/li&gt;
&lt;li&gt;Neural network training (validated on MNIST)&lt;/li&gt;
&lt;li&gt;22x faster than PyTorch on CPU (in my benchmarks)&lt;/li&gt;
&lt;li&gt;GPU support via wgpu&lt;/li&gt;
&lt;li&gt;Type-safe (static typing with inference)&lt;/li&gt;
&lt;/ul&gt;
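&lt;p&gt;For readers unfamiliar with the term: "dynamic graphs" means the autodiff graph is rebuilt on every execution, so ordinary runtime control flow can change what gets differentiated. A minimal sketch of the idea, shown here in PyTorch (which follows the same model):&lt;/p&gt;

```python
# Dynamic-graph autodiff sketch: the loop length is a runtime value,
# so every call to power() builds a fresh computation graph.
import torch

def power(x, n):
    out = torch.ones(())
    for _ in range(n):          # graph shape depends on n at runtime
        out = out * x
    return out

x = torch.tensor(2.0, requires_grad=True)
y = power(x, 3)                 # y = x**3 = 8
y.backward()                    # d/dx of x**3 at x = 2 is 3 * 2**2 = 12
print(x.grad.item())            # 12.0
```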

&lt;h2&gt;Example: Training a Neural Network&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Network: 2 -&amp;gt; 4 -&amp;gt; 1
let w1 = tensor_randn([2, 4])
let w2 = tensor_randn([4, 1])

while epoch &amp;lt; 1000 {
    // Forward
    let h = nn_relu(nn_linear(x, w1, b1))
    let pred = nn_sigmoid(nn_linear(h, w2, b2))
    let loss = loss_mse(pred, target)

    // Backward (automatic)
    tensor_backward(loss)

    // Update
    w1 = optim_sgd_step(w1, tensor_grad(w1), 0.5)
    w2 = optim_sgd_step(w2, tensor_grad(w2), 0.5)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: XOR converges from 0.259 → 0.006 loss (99% accuracy).&lt;/p&gt;
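&lt;p&gt;For comparison, here is an illustrative PyTorch translation of the same XOR loop (my sketch, not code from the Charl repo), which makes the syntax side-by-side concrete:&lt;/p&gt;

```python
# Illustrative PyTorch version of the XOR example: a 2-4-1 network
# trained with full-batch SGD, mirroring the Charl snippet above.
import torch

torch.manual_seed(0)
x = torch.tensor([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
target = torch.tensor([[0.0], [1.0], [1.0], [0.0]])

w1 = torch.randn(2, 4, requires_grad=True)
b1 = torch.zeros(4, requires_grad=True)
w2 = torch.randn(4, 1, requires_grad=True)
b2 = torch.zeros(1, requires_grad=True)

losses = []
for epoch in range(1000):
    h = torch.relu(x @ w1 + b1)                        # forward
    pred = torch.sigmoid(h @ w2 + b2)
    loss = torch.nn.functional.mse_loss(pred, target)
    losses.append(loss.item())

    loss.backward()                                    # backward (automatic)

    with torch.no_grad():                              # SGD update, lr 0.5
        for p in (w1, b1, w2, b2):
            p -= 0.5 * p.grad
            p.grad = None

print(losses[0], losses[-1])   # loss at the first and last epoch
```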

&lt;h2&gt;Current Limitations&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Alpha quality - expect bugs&lt;/li&gt;
&lt;li&gt;Small ecosystem&lt;/li&gt;
&lt;li&gt;Missing features (modules, generics)&lt;/li&gt;
&lt;li&gt;Not production-ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a research experiment, not a PyTorch replacement.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Website: &lt;a href="https://charlbase.org" rel="noopener noreferrer"&gt;https://charlbase.org&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/charlcoding-stack/charlcode" rel="noopener noreferrer"&gt;https://github.com/charlcoding-stack/charlcode&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docs: Full API reference + 20+ examples&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Looking For&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Testers (try it, break it, report issues)&lt;/li&gt;
&lt;li&gt;Feedback (what's confusing? what's missing?)&lt;/li&gt;
&lt;li&gt;Ideas (what would make this useful?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Questions?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Is the syntax intuitive for ML work?&lt;/li&gt;
&lt;li&gt;What's the first thing you tried that didn't work?&lt;/li&gt;
&lt;li&gt;What ML use case would make you actually use this?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Genuinely curious about feedback. This is an experiment - let's see where it goes.&lt;/p&gt;

&lt;p&gt;#machinelearning #programming #opensource&lt;/p&gt;

</description>
      <category>tooling</category>
      <category>performance</category>
      <category>programming</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
