<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anicet Nougaret</title>
    <description>The latest articles on DEV Community by Anicet Nougaret (@anicetn).</description>
    <link>https://dev.to/anicetn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1116664%2F99cad9ab-7b2b-4d50-b8ef-e558676ba15e.jpeg</url>
      <title>DEV Community: Anicet Nougaret</title>
      <link>https://dev.to/anicetn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anicetn"/>
    <language>en</language>
    <item>
      <title>How to debug AI generated code?</title>
      <dc:creator>Anicet Nougaret</dc:creator>
      <pubDate>Wed, 16 Apr 2025 01:20:12 +0000</pubDate>
      <link>https://dev.to/anicetn/how-to-debug-ai-generated-code-3434</link>
      <guid>https://dev.to/anicetn/how-to-debug-ai-generated-code-3434</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/ariana-dot-dev/.github/blob/main/ariana_shortdemo.gif" rel="noopener noreferrer"&gt;Watch super short demo gif&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're using AI to generate code, you've probably experienced this: the AI spits out a bunch of code that looks reasonable, but when you run it... nothing works.&lt;/p&gt;

&lt;p&gt;And then begins the tedious process of figuring out why.&lt;/p&gt;

&lt;h2&gt;The Problem with Debugging AI Code&lt;/h2&gt;

&lt;p&gt;When I first started working with AI-generated code, I found debugging to be incredibly frustrating. The AI would write complex functions that looked correct, but tracing through execution was a nightmare.&lt;/p&gt;

&lt;p&gt;Usually I don't even have the beginning of an idea of where to start debugging. That's why I end up with console logs everywhere.&lt;/p&gt;

&lt;p&gt;The worst part? When you go back to the AI with the error message, it just guesses what might be wrong without knowing what actually happened during runtime.&lt;/p&gt;

&lt;h2&gt;A Better Way to Debug&lt;/h2&gt;

&lt;p&gt;This is actually why I built Ariana. I got so tired of this exact problem that I wanted something that would just show me everything happening at runtime without requiring me to modify the code.&lt;/p&gt;

&lt;p&gt;In the cover gif, you can see how I debug an AI-generated fullstack app whose API calls keep failing. Instead of sprinkling &lt;code&gt;console.log()&lt;/code&gt; statements everywhere in the backend and frontend to figure it out, I just:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the backend and frontend with &lt;code&gt;ariana npm run dev&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Click through the execution traces in the side panel&lt;/li&gt;
&lt;li&gt;Notice the red highlighted code (showing where errors happened)&lt;/li&gt;
&lt;li&gt;Hover over the &lt;code&gt;OPENROUTER_API_KEY&lt;/code&gt; variable to see it's still set to "placeholder"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I found the bug in seconds, not hours, without modifying a single line of code or reaching for a regular debugger (which I'd struggle with anyway, since I'd have no instinct for where to start in code that isn't "mine").&lt;/p&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;

&lt;p&gt;Ariana is still experimental, but it's free and works with JavaScript/TypeScript and Python codebases. Here's how to try it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the extension for &lt;a href="https://marketplace.visualstudio.com/items?itemName=Ariana.ariana" rel="noopener noreferrer"&gt;VSCode&lt;/a&gt;, Cursor, or Windsurf&lt;/li&gt;
&lt;li&gt;Install the CLI: &lt;code&gt;npm install -g ariana&lt;/code&gt; or &lt;code&gt;pip install ariana&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run your code with: &lt;code&gt;ariana npm run dev&lt;/code&gt; or &lt;code&gt;ariana python app.py&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. Now you can see what values every expression had during execution by hovering over them in your code editor.&lt;/p&gt;

&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;When debugging AI-generated code, you need to understand what's actually happening inside the program, not just what the code looks like. Having runtime visibility without scattering &lt;code&gt;console.log&lt;/code&gt;/&lt;code&gt;print&lt;/code&gt; statements or stepping through breakpoints by hand makes all the difference.&lt;/p&gt;

&lt;p&gt;You can check out more info at &lt;a href="https://ariana.dev" rel="noopener noreferrer"&gt;ariana.dev&lt;/a&gt;. I'm actively improving it, so expect some rough edges, but I'd love to hear if it helps you debug AI-generated code more easily.&lt;/p&gt;

&lt;p&gt;What's your biggest pain point with debugging (AI-generated) code? Let me know in the comments!&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>python</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>Deep Learning in Rust with my own framework focusing on ergonomics</title>
      <dc:creator>Anicet Nougaret</dc:creator>
      <pubDate>Mon, 10 Jul 2023 12:34:34 +0000</pubDate>
      <link>https://dev.to/anicetn/deep-learning-in-rust-with-my-own-framework-focusing-on-ergonomics-536k</link>
      <guid>https://dev.to/anicetn/deep-learning-in-rust-with-my-own-framework-focusing-on-ergonomics-536k</guid>
<description>&lt;p&gt;Hi! I'm Anicet, a Master's CS student at INSA Lyon in France, and for the past few months I have been building a Deep Learning and data preprocessing framework in Rust. The initial goal was only to learn how these tools and algorithms work, but as it kept growing, it progressively became the perfect opportunity to put something out into the world that would be useful to the community.&lt;/p&gt;

&lt;p&gt;It is currently a work-in-progress framework focused on small to medium-sized workflows and, above all, on ergonomics. Because, sure, everyone loves Rust and compile-time guarantees, and everyone hates ambiguity and run-time shape mismatches. But I simply can't keep my sanity with stuff like &lt;code&gt;SizedMatrix&amp;lt;Rank6&amp;lt;A, B, Dyn, Dyn, Dyn, Dyn&amp;gt;, f64, Backend=Cuda&amp;gt;&lt;/code&gt; that requires &lt;code&gt;impl&amp;lt;'a, const A:usize, const B:usize, T: Scalar, M: MatrixCore&amp;lt;A, B, T&amp;gt;&amp;gt; DotProductTrait&amp;lt;'a, A, B, T, M&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://github.com/AnicetNgrt/jiro-nn" rel="noopener noreferrer"&gt;&lt;code&gt;jiro-nn&lt;/code&gt;&lt;/a&gt;, just rely on auto-complete and keep your sanity while following this King County house sales regression workflow example using a Deep Neural Network:&lt;/p&gt;

&lt;h2&gt;Solving King County house sales regression with JIRO&lt;/h2&gt;

&lt;p&gt;The goal here is to predict the price of a house given a bunch of its features. For this task we'll preprocess and clean the data, then train a Neural Network to make the right guesses. JIRO comes in handy for the whole process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Init the project, import the necessary modules and download the &lt;a href="https://www.kaggle.com/datasets/harlfoxem/housesalesprediction" rel="noopener noreferrer"&gt;dataset&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo new &lt;span class="nt"&gt;--bin&lt;/span&gt; king_county
&lt;span class="nb"&gt;cd &lt;/span&gt;king_county
cargo add jiro-nn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Put the dataset in the project's root directory&lt;/li&gt;
&lt;li&gt;Tweak the compile-time features a little bit to make sure we have everything we need:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[dependencies]&lt;/span&gt;
&lt;span class="py"&gt;jiro-nn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.8.1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"ndarray"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;From now on we'll edit &lt;code&gt;src/main.rs&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Dataset&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;model&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ModelBuilder&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;monitor&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;TM&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Pipeline&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;attach_ids&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AttachIds&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;preprocessing&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;map&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;trainers&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;kfolds&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;KFolds&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;jiro_nn&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;FeatureTags&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now let's "tag" our features. Basically it is telling our framework which column of our dataset needs which  kind of preprocessing, and any kind of metadata that the Network may need after the preprocessing phase. Our preprocessing pipeline would need to consist of the following steps:

&lt;ul&gt;
&lt;li&gt;Remove the features we don't need&lt;/li&gt;
&lt;li&gt;Extract the timestamp and month from the date&lt;/li&gt;
&lt;li&gt;Replace the 0 values of yr_renovated with the yr_built value on the same rows&lt;/li&gt;
&lt;li&gt;Log10 some of the features&lt;/li&gt;
&lt;li&gt;For each feature, add its squared value, so for instance if we have the feature "surface" we'll add the feature "surface^2" alongside it&lt;/li&gt;
&lt;li&gt;Filter out the outliers using Tukey's fence method&lt;/li&gt;
&lt;li&gt;Normalize everything so it's all in the same [0;1] range
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;dataset_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Dataset&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"kc_house_data.csv"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dataset_config&lt;/span&gt;
    &lt;span class="c1"&gt;// The code describes itself&lt;/span&gt;
    &lt;span class="nf"&gt;.remove_features&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"zipcode"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"sqft_living15"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"sqft_lot15"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;IsId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;DateFormat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%Y%m%dT%H%M%S"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="c1"&gt;// The AddExtracted* tags create new features out of &lt;/span&gt;
    &lt;span class="c1"&gt;// existing ones.&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AddExtractedMonth&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AddExtractedTimestamp&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;// Here, we don't care about the date, only the features &lt;/span&gt;
    &lt;span class="c1"&gt;// we will create from it.&lt;/span&gt;
    &lt;span class="c1"&gt;// But we can't remove it unlike the zipcode, because &lt;/span&gt;
    &lt;span class="c1"&gt;// we need it during the pipeline.&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;Not&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;UsedInModel&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="c1"&gt;// This part is a bit trickier: &lt;/span&gt;
    &lt;span class="c1"&gt;// We replace the 0 values of yr_renovated with the yr_built &lt;/span&gt;
    &lt;span class="c1"&gt;// value on the same rows.&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s"&gt;"yr_renovated"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nf"&gt;Mapped&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="nn"&gt;MapSelector&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;equal_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="nn"&gt;MapOp&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;replace_with_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"yr_built"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="c1"&gt;// Indicate which features need to be predicted of course&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_feature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"price"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Predicted&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Log10&lt;/span&gt;&lt;span class="nf"&gt;.only&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"sqft_living"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"sqft_above"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"price"&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
    &lt;span class="c1"&gt;// incl_added_features tells the framework to also tag &lt;/span&gt;
    &lt;span class="c1"&gt;// all the features created previously during the pipeline&lt;/span&gt;
    &lt;span class="c1"&gt;// (e.g. resulting from the AddExtracted* tags)&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;AddSquared&lt;/span&gt;&lt;span class="nf"&gt;.except&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"price"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
        &lt;span class="nf"&gt;.incl_added_features&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;FilterOutliers&lt;/span&gt;&lt;span class="nf"&gt;.except&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="nf"&gt;.incl_added_features&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="nf"&gt;.tag_all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Normalized&lt;/span&gt;&lt;span class="nf"&gt;.except&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="nf"&gt;.incl_added_features&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now that we've specified everything, we can run the pipeline:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Since from now on computations may take a while, we start &lt;/span&gt;
&lt;span class="c1"&gt;// monitoring the tasks.&lt;/span&gt;
&lt;span class="c1"&gt;// This will launch a nice TUI just for that&lt;/span&gt;
&lt;span class="nn"&gt;TM&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;start_monitoring&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// We take a generic pipeline, which will try to do most steps &lt;/span&gt;
&lt;span class="c1"&gt;// if needed. &lt;/span&gt;
&lt;span class="c1"&gt;// But you may need to customize it by appending/prepending &lt;/span&gt;
&lt;span class="c1"&gt;// steps in some cases.&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Pipeline&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;basic_single_pass&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataset_config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt;
    &lt;span class="nf"&gt;.load_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"dataset/kc_house_data.csv"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;dataset_config&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;.run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now that our data is clean, let's build our Neural Network, which consists of 4 hidden layers and one output layer. We use the builder pattern, one way to make your Rust APIs both simple and flexible.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;hidden_neurons&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;output_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// only the price is predicted&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;nn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;ModelBuilder&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dataset_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.neural_network&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="c1"&gt;// We declare our layers and add customization calls like &lt;/span&gt;
        &lt;span class="c1"&gt;// .relu() or .momentum().&lt;/span&gt;
        &lt;span class="c1"&gt;// These calls are optional.&lt;/span&gt;
        &lt;span class="c1"&gt;// Many exist to override all sorts of defaults.&lt;/span&gt;
        &lt;span class="nf"&gt;.full_dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hidden_neurons&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.relu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.momentum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.end&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.full_dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hidden_neurons&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.relu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.momentum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.end&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.full_dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hidden_neurons&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.relu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.momentum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.end&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.full_dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hidden_neurons&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.relu&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.momentum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.end&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.full_dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.linear&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.momentum&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="nf"&gt;.end&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;.end&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;.batch_size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.epochs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// The model and the dataset configurations could be saved &lt;/span&gt;
&lt;span class="c1"&gt;// in a .json file for later use.&lt;/span&gt;
&lt;span class="c1"&gt;// The model really tries to embody everything (dataset + &lt;/span&gt;
&lt;span class="c1"&gt;// network + training).&lt;/span&gt;
&lt;span class="c1"&gt;// The idea is to tie the model to its result, and changing &lt;/span&gt;
&lt;span class="c1"&gt;// any of these things would change its results.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now let's train using K-Folds cross-validation
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;kfold&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;KFolds&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Since we use k-folds, all the training data will get &lt;/span&gt;
&lt;span class="c1"&gt;// predicted as validation over the course of the 4 folds.&lt;/span&gt;
&lt;span class="c1"&gt;// So we both get how the model performed and which predictions &lt;/span&gt;
&lt;span class="c1"&gt;// it made on the last epoch for each fold.&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;preds_and_ids&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_eval&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;kfold&lt;/span&gt;
    &lt;span class="c1"&gt;// Tell it to keep the best model at the end&lt;/span&gt;
    &lt;span class="nf"&gt;.compute_best_model&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;.run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Its the end of long computations, we can stop monitoring.&lt;/span&gt;
&lt;span class="nn"&gt;TM&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;stop_monitoring&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;And save everything necessary to disk
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// The best model is the one from the fold with the &lt;/span&gt;
&lt;span class="c1"&gt;// lowest loss on the last epoch.&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;best_model_params&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;kfold&lt;/span&gt;&lt;span class="nf"&gt;.take_best_model&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="c1"&gt;// Learned parameters of the network can be saved/loaded &lt;/span&gt;
&lt;span class="c1"&gt;// in/from a compressed format.&lt;/span&gt;
&lt;span class="n"&gt;best_model_params&lt;/span&gt;&lt;span class="nf"&gt;.to_binary_compressed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"best_model_params.gz"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Predictions come tied to the ids.&lt;/span&gt;
&lt;span class="c1"&gt;// We need to revert the preprocessing and join the &lt;/span&gt;
&lt;span class="c1"&gt;// predictions with the original data.&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;preds_and_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="nf"&gt;.revert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;preds_and_ids&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="nf"&gt;.revert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;data_and_preds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="nf"&gt;.inner_join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;preds_and_ids&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"pred"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;data_and_preds&lt;/span&gt;&lt;span class="nf"&gt;.to_csv_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"data_and_preds.csv"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;model_eval&lt;/span&gt;&lt;span class="nf"&gt;.to_json_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"model_eval.json"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you go: preprocessing, model building, and training, with extensive customization, in just 80 lines. Yeah, I mean, it's Rust, what did you expect? Sadly I can't do magic, but I've probably saved you hundreds of lines, even compared to other Rust Deep Learning frameworks, which are good at what they do but don't go the extra mile to make preprocessing easy too.&lt;/p&gt;

&lt;p&gt;It is probably neither the fastest nor the most complete framework ever, but it is enough to toy with. My goal for now is to improve it and learn a ton in the process, while giving good ideas and inspiration to the Rust community.&lt;/p&gt;

&lt;p&gt;Check out &lt;a href="https://github.com/AnicetNgrt/jiro-nn" rel="noopener noreferrer"&gt;&lt;code&gt;jiro-nn&lt;/code&gt;&lt;/a&gt; and tell me what you think!&lt;/p&gt;

&lt;p&gt;For more details and a MNIST example, &lt;a href="https://anicetnougaret.fr/blog/introducing-jiro-nn" rel="noopener noreferrer"&gt;a longer article&lt;/a&gt; sprinkled with insights and friendly hot takes about this weird world of Rust awaits you on my personal blog.&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>deeplearning</category>
      <category>rust</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
