<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamed Arbi </title>
    <description>The latest articles on DEV Community by Mohamed Arbi  (@goodnight).</description>
    <link>https://dev.to/goodnight</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1206164%2F62df7320-cb94-42f2-84da-96d65314f25e.png</url>
      <title>DEV Community: Mohamed Arbi </title>
      <link>https://dev.to/goodnight</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/goodnight"/>
    <language>en</language>
    <item>
      <title>1minMLOps #1 : What is MLOps and why should you care?</title>
      <dc:creator>Mohamed Arbi </dc:creator>
      <pubDate>Thu, 07 May 2026 15:08:56 +0000</pubDate>
      <link>https://dev.to/goodnight/1minmlops-1-what-is-mlops-and-why-should-you-care-17an</link>
      <guid>https://dev.to/goodnight/1minmlops-1-what-is-mlops-and-why-should-you-care-17an</guid>
      <description>&lt;p&gt;If you've ever trained a beautiful model in a Jupyter notebook, watched the metrics shine, and then realized you have no idea how to actually put it in front of users, congratulations: you've just discovered why MLOps exists.&lt;/p&gt;

&lt;p&gt;In this series, we are going to walk together from a notebook to a fully deployed, monitored and self-retraining ML system, one tiny step at a time. But before we write any code, let's get the foundations straight.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what is MLOps?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;MLOps&lt;/strong&gt; (short for Machine Learning Operations) is the set of practices, tools and culture that lets you ship machine learning models to production &lt;em&gt;reliably and repeatedly&lt;/em&gt;. Think of it as DevOps' younger sibling: same spirit (automation, reproducibility, monitoring), but adapted to the weirdness of ML, where your code is not the only thing that changes; your &lt;strong&gt;data&lt;/strong&gt; changes, your &lt;strong&gt;model&lt;/strong&gt; changes, and the &lt;strong&gt;world&lt;/strong&gt; your model lives in changes too.&lt;/p&gt;

&lt;p&gt;A useful way to picture it is the ML lifecycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data collection &amp;amp; versioning&lt;/strong&gt; — where does the data come from, and which version did we train on?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Experimentation&lt;/strong&gt; — which features, which model, which hyperparameters?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Training &amp;amp; evaluation&lt;/strong&gt; — does it actually work, and is it better than what we had?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Packaging&lt;/strong&gt; — wrap the model in something deployable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt; — serve predictions to real users (batch or real-time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt; — is it still working? Did the data drift?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retraining&lt;/strong&gt; — close the loop and start again&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Traditional software has steps 4–6. ML has all seven, and steps 1–3 keep coming back to haunt you.&lt;/p&gt;
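&lt;p&gt;The seven steps above can be sketched as one plain-Python loop. Every function name below is illustrative, not a real API; the point is just the shape of the cycle.&lt;/p&gt;

```python
# Toy sketch of the seven-step ML lifecycle. All names are made up.

def collect_data(version):
    # 1. Data collection and versioning: record exactly what we train on
    return {"version": version, "rows": [1, 2, 3]}

def experiment(data):
    # 2. Experimentation: pick features, model, hyperparameters
    return {"model": "logreg", "lr": 0.1}

def train_and_evaluate(data, config):
    # 3. Training and evaluation: is it better than what we had?
    return {"f1": 0.94, "config": config}

def package(model):
    # 4. Packaging: wrap the model in something deployable
    return {"artifact": "model.pkl", "model": model}

def deploy(artifact):
    # 5. Deployment: serve predictions (batch or real-time)
    return "deployed"

def monitor(status):
    # 6. Monitoring: did the data drift? True means "retrain me"
    return True

def lifecycle(data_version):
    data = collect_data(data_version)
    config = experiment(data)
    model = train_and_evaluate(data, config)
    artifact = package(model)
    status = deploy(artifact)
    # 7. Retraining: close the loop and start again
    return monitor(status)

needs_retraining = lifecycle("v1")
```

&lt;p&gt;Notice that the output of step 7 feeds step 1: the lifecycle is a loop, not a pipeline with an end.&lt;/p&gt;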

&lt;h2&gt;
  
  
  Why "it works on my machine" is &lt;em&gt;worse&lt;/em&gt; in ML
&lt;/h2&gt;

&lt;p&gt;In classical software, if your code runs locally, it has a decent chance of running in production. In ML, that's a trap, because the model's behavior depends on &lt;strong&gt;three&lt;/strong&gt; moving things, not one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code&lt;/strong&gt;: the training script, the preprocessing, the inference logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data&lt;/strong&gt;: the exact dataset (and its version) you trained on&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment&lt;/strong&gt;: Python version, library versions, CUDA versions, OS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Change any of these three and your "great model from Tuesday" becomes "mysterious garbage on Friday." This is why ML teams need stricter versioning, tracking and packaging discipline than most web teams.&lt;/p&gt;
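&lt;p&gt;Here is a minimal sketch of what "pinning all three" could look like, using only the standard library. The file name and commit value are made up; DVC and MLflow will do this properly later in the series.&lt;/p&gt;

```python
# Capture code, data and environment in one reproducibility manifest.
import hashlib
import json
import platform
import sys

def file_sha256(path):
    # Hash the exact dataset bytes we trained on
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def manifest(code_commit, data_path):
    return {
        "code": code_commit,                   # e.g. the git commit SHA
        "data": file_sha256(data_path),        # exact dataset fingerprint
        "environment": {
            "python": sys.version.split()[0],  # interpreter version
            "os": platform.system(),
        },
    }

# Usage: write a tiny fake dataset, then fingerprint the training run
with open("train.csv", "w") as f:
    f.write("x,y\n1,0\n")
print(json.dumps(manifest("abc123", "train.csv"), indent=2))
```

&lt;p&gt;If any field of that manifest changes between Tuesday and Friday, you know exactly which of the three moving parts to blame.&lt;/p&gt;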

&lt;h2&gt;
  
  
  What problems does MLOps actually solve?
&lt;/h2&gt;

&lt;p&gt;Here are concrete pains you'll feel without MLOps, all of which we'll fix in this series:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Which dataset gave us that 0.94 F1 score? Nobody remembers."&lt;/li&gt;
&lt;li&gt;"The model works locally but crashes in the Docker container."&lt;/li&gt;
&lt;li&gt;"We retrained the model and accuracy dropped, but we can't roll back."&lt;/li&gt;
&lt;li&gt;"Production is silently degrading and we noticed two weeks later."&lt;/li&gt;
&lt;li&gt;"Every deploy is a hand-crafted artisanal disaster."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these has a tool and a workflow that solves it, and we are going to meet them (almost) one by one.&lt;/p&gt;
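&lt;p&gt;The first pain on that list, for example, disappears the moment you write things down at training time. A toy sketch in plain Python (MLflow does this for real; every name here is hypothetical):&lt;/p&gt;

```python
# The simplest possible experiment log: one dict per training run.
runs = []

def log_run(dataset_version, params, f1):
    # Record what we trained on, how, and what score we got
    runs.append({"dataset": dataset_version, "params": params, "f1": f1})

log_run("data-v1", {"lr": 0.1}, 0.91)
log_run("data-v2", {"lr": 0.05}, 0.94)

# "Which dataset gave us that 0.94 F1?" is now a one-liner
best = max(runs, key=lambda r: r["f1"])
print(best["dataset"])  # prints: data-v2
```

&lt;p&gt;Trivially simple, but it's the same idea a real experiment tracker implements, just with a database, a UI and artifact storage behind it.&lt;/p&gt;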

&lt;h2&gt;
  
  
  The MLOps stack we'll build
&lt;/h2&gt;

&lt;p&gt;Here's a sneak peek of the tools we'll touch in the next articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DVC&lt;/strong&gt; for data versioning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MLflow&lt;/strong&gt; for experiment tracking and the model registry&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI&lt;/strong&gt; for serving&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; for packaging (we'll lean a bit on Clelia's 1minDocker series here)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; for CI/CD&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evidently&lt;/strong&gt; for monitoring data and model drift (we can use Prometheus and Grafana too)&lt;/li&gt;
&lt;li&gt;A cloud provider (we'll pick one later) for actually deploying it all&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don't worry if some of these names sound intimidating; we'll introduce them gently, one per article, and always with a working example.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you need to follow along
&lt;/h2&gt;

&lt;p&gt;Nothing fancy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10+&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;git&lt;/code&gt; installed&lt;/li&gt;
&lt;li&gt;A GitHub account&lt;/li&gt;
&lt;li&gt;Docker installed (I highly recommend following this series: &lt;a href="https://dev.to/astrabert/1mindocker-1-what-is-docker-3baa"&gt;https://dev.to/astrabert/1mindocker-1-what-is-docker-3baa&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;A laptop and ~1 minute per article 😉&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next article, we'll get our hands dirty: we'll take a small dataset, version it with &lt;strong&gt;DVC&lt;/strong&gt;, and finally answer the question &lt;em&gt;"which data did we train on?"&lt;/em&gt; without crying.&lt;/p&gt;

&lt;p&gt;Stay tuned and have fun! &lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>ai</category>
      <category>mlops</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
