<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Girma</title>
    <description>The latest articles on DEV Community by Girma (@girma35).</description>
    <link>https://dev.to/girma35</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1483497%2F8059d7f8-5d37-4d1f-a35e-4758e7811f73.png</url>
      <title>DEV Community: Girma</title>
      <link>https://dev.to/girma35</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/girma35"/>
    <language>en</language>
    <item>
      <title>Starting Point for Kagglers: Customer Churn Prediction Competition</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Tue, 24 Mar 2026 18:44:13 +0000</pubDate>
      <link>https://dev.to/girma35/starting-point-for-kagglers-customer-churn-prediction-competition-37jj</link>
      <guid>https://dev.to/girma35/starting-point-for-kagglers-customer-churn-prediction-competition-37jj</guid>
      <description>&lt;p&gt;You open the Playground Series S6E3 competition, see 250k+ rows of customer data, and think: “Where do I even start?”  &lt;/p&gt;

&lt;p&gt;I’ve been there. This post is exactly the first notebook I wish I had when I jumped in: a dead-simple, copy-paste-ready pipeline that takes you from raw CSV to a solid submission. No theory overload, just the steps that actually work (and why they matter). Let’s go!&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Grab the Tools
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;matplotlib.pyplot&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;plt&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;seaborn&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;sns&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.model_selection&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;train_test_split&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.metrics&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;roc_auc_score&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.ensemble&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RandomForestClassifier&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;lightgbm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LGBMClassifier&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;warnings&lt;/span&gt;
&lt;span class="n"&gt;warnings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filterwarnings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ignore&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are my go-to imports for every tabular comp. LightGBM will be your hero later.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Load &amp;amp; Quick Look
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/kaggle/input/competitions/playground-series-s6e3/train.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Churn&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;y&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Churn&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;df.shape&lt;/code&gt;, &lt;code&gt;df.head()&lt;/code&gt;, &lt;code&gt;df.info()&lt;/code&gt;. Clean data, zero missing values — we’re lucky today!&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Tiny Cleanup (Just in Case)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TotalCharges&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_numeric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TotalCharges&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;coerce&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Always make sure numbers are actually numbers. Note that &lt;code&gt;errors="coerce"&lt;/code&gt; turns any stray strings into NaN, so check for new missing values afterwards.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Know Your Columns
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Numbers&lt;/strong&gt;: tenure, MonthlyCharges, TotalCharges, SeniorCitizen
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Categories&lt;/strong&gt;: gender, Contract, PaymentMethod, streaming stuff, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Models only understand numbers, so categories need love.&lt;/p&gt;
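&lt;p&gt;A quick way to see which columns still need encoding (a minimal sketch on a toy frame, not the competition data):&lt;/p&gt;

```python
import pandas as pd

# Toy stand-in for the competition frame
df = pd.DataFrame({
    "tenure": [1, 24, 60],
    "MonthlyCharges": [29.85, 56.95, 104.80],
    "Contract": ["Month-to-month", "One year", "Two year"],
    "PaymentMethod": ["Mailed check", "Credit card", "Mailed check"],
})

# Numeric columns are ready; object columns need encoding
num_cols = df.select_dtypes(include="number").columns.tolist()
cat_cols = df.select_dtypes(include="object").columns.tolist()
print("numeric:", num_cols)
print("categorical:", cat_cols)
```

&lt;p&gt;Anything that lands in the second list is what sections 5 and 6 below deal with.&lt;/p&gt;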

&lt;h3&gt;
  
  
  5. My Secret Weapon: Merge Columns
&lt;/h3&gt;

&lt;p&gt;This one trick makes everything faster and cleaner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingAny&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingTV&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Yes&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingMovies&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Yes&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingTV&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingMovies&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why I do this every time:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cuts 4–5 columns → 20–40% faster training&lt;/li&gt;
&lt;li&gt;Saves RAM (huge on big datasets)&lt;/li&gt;
&lt;li&gt;Removes confusing duplicate signals&lt;/li&gt;
&lt;li&gt;Model learns real customer habits instead of memorizing noise&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feels like decluttering your code: suddenly everything runs smoother.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Turn Words into Numbers
&lt;/h3&gt;

&lt;p&gt;Easy Yes/No first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;binary_cols&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Partner&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Dependents&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PhoneService&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;PaperlessBilling&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;col&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;binary_cols&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Yes&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;No&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the rest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_dummies&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;drop_first&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All numeric now. Boom.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Split Smart
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;X_val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_val&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;train_test_split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;test_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stratify&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Stratify keeps the churn ratio the same in both splits, which is critical for this competition.&lt;/p&gt;
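&lt;p&gt;You can verify this on a toy target (a sketch; the 80/20 split mimics an imbalanced churn column):&lt;/p&gt;

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 20% positive class, mimicking an imbalanced churn target
y = np.array([0] * 80 + [1] * 20)
X = np.arange(100).reshape(-1, 1)

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(y_tr.mean(), y_val.mean())  # both 0.2
```

&lt;p&gt;Without &lt;code&gt;stratify=y&lt;/code&gt;, a small validation set can end up with a noticeably different churn rate, and your local ROC-AUC gets noisier.&lt;/p&gt;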

&lt;h3&gt;
  
  
  8. Train Two Models (Quick Check + Real Deal)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Baseline (Random Forest):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;rf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RandomForestClassifier&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;rf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;RF ROC-AUC:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;roc_auc_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict_proba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_val&lt;/span&gt;&lt;span class="p"&gt;)[:,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The one that actually scores well (LightGBM):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;lgb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LGBMClassifier&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;random_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;lgb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LGB ROC-AUC:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;roc_auc_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;y_val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lgb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict_proba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_val&lt;/span&gt;&lt;span class="p"&gt;)[:,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;LightGBM usually jumps ahead — this is your starting leaderboard model.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Test Set (Same Steps, No Leaks!)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;test&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/kaggle/input/competitions/playground-series-s6e3/test.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Same cleanup and Yes/No mapping as train
&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TotalCharges&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_numeric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TotalCharges&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;coerce&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;col&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;binary_cols&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Yes&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;No&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="c1"&gt;# Same merge
&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingAny&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingTV&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Yes&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingMovies&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Yes&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingTV&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;StreamingMovies&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Same encoding
&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_dummies&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;drop_first&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;test_X&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reindex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;X&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fill_value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;preds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lgb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict_proba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;test_X&lt;/span&gt;&lt;span class="p"&gt;)[:,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;submission&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;test&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Churn&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;preds&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="n"&gt;submission&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;submission.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
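&lt;p&gt;Why the &lt;code&gt;reindex&lt;/code&gt; line matters: if a category shows up in train but never in test, &lt;code&gt;get_dummies&lt;/code&gt; produces mismatched columns and the model call fails. A tiny sketch (toy frames, not the real data):&lt;/p&gt;

```python
import pandas as pd

train = pd.DataFrame({"Contract": ["Month-to-month", "Two year", "Month-to-month"]})
test = pd.DataFrame({"Contract": ["Month-to-month", "Month-to-month"]})  # "Two year" never appears

train_enc = pd.get_dummies(train, drop_first=True)
test_enc = pd.get_dummies(test, drop_first=True)

# Without reindex, test_enc lacks the "Contract_Two year" column entirely
test_enc = test_enc.reindex(columns=train_enc.columns, fill_value=0)
print(list(test_enc.columns))
```

&lt;p&gt;After &lt;code&gt;reindex&lt;/code&gt;, the missing dummy exists and is all zeros, so the column order matches what the model was trained on.&lt;/p&gt;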



&lt;h3&gt;
  
  
  Want to Level Up Later?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add cross-validation&lt;/li&gt;
&lt;li&gt;Merge more groups (add-ons, contract type)&lt;/li&gt;
&lt;li&gt;Tune LightGBM with Optuna&lt;/li&gt;
&lt;li&gt;Try CatBoost (zero encoding needed)&lt;/li&gt;
&lt;/ul&gt;
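&lt;p&gt;For the first item, here is roughly what swapping the single split for cross-validation looks like. This is a sketch: synthetic data stands in for the competition CSV, and it uses the Random Forest from earlier (swap in &lt;code&gt;LGBMClassifier&lt;/code&gt; for the real thing):&lt;/p&gt;

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in: imbalanced binary target, like churn
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.8], random_state=42)

# Stratified folds keep the class ratio in every fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(RandomForestClassifier(random_state=42),
                         X, y, scoring="roc_auc", cv=cv)
print(scores.mean(), scores.std())
```

&lt;p&gt;Five scores instead of one means your local estimate is far less sensitive to a lucky (or unlucky) split.&lt;/p&gt;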

&lt;h3&gt;
  
  
  One-Sentence Recap
&lt;/h3&gt;

&lt;p&gt;Start with clean loading → merge redundant columns → encode → split → train LGB → apply exact same steps to test → submit.&lt;/p&gt;

&lt;p&gt;That’s the real starting point every Kaggler needs.&lt;/p&gt;

&lt;p&gt;Copy this notebook, run it, and you’re already ahead.  &lt;/p&gt;

&lt;p&gt;Got a score? Hit a bug? Drop it in the comments or tag me; I reply to every one.&lt;/p&gt;

&lt;p&gt;Happy starting!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Girma Wakeyo&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Kaggle → &lt;a href="https://www.kaggle.com/girmawakeyo" rel="noopener noreferrer"&gt;https://www.kaggle.com/girmawakeyo&lt;/a&gt;&lt;br&gt;&lt;br&gt;
GitHub → &lt;a href="https://github.com/Girma35" rel="noopener noreferrer"&gt;https://github.com/Girma35&lt;/a&gt;&lt;br&gt;&lt;br&gt;
X → &lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;https://x.com/Girma880731631&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Follow for more quick-start notebooks and competition tips. Let’s climb those leaderboards together!&lt;/p&gt;

</description>
      <category>kaggle</category>
      <category>machinelearning</category>
      <category>aimodeling</category>
      <category>ai</category>
    </item>
    <item>
      <title>The Heart of Machine Learning: Underfitting, Overfitting, and How Models Actually Learn</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Sat, 21 Mar 2026 06:51:01 +0000</pubDate>
      <link>https://dev.to/girma35/the-heart-of-machine-learning-underfitting-overfitting-and-how-models-actually-learn-2l8j</link>
      <guid>https://dev.to/girma35/the-heart-of-machine-learning-underfitting-overfitting-and-how-models-actually-learn-2l8j</guid>
      <description>&lt;p&gt;Imagine you’re teaching a kid math.&lt;/p&gt;

&lt;p&gt;If the kid just memorizes every single example you give → he aces the homework but bombs the test.&lt;br&gt;
If the kid barely understands anything → he fails both homework and the test.&lt;/p&gt;

&lt;p&gt;That’s exactly what happens with machine learning models.&lt;br&gt;
These three ideas — underfitting, overfitting, and generalization — are the real “physics” behind why some models work in the real world and others don’t.&lt;br&gt;
Let’s break them down in the simplest, clearest way possible (with pictures so your brain doesn’t hurt).&lt;/p&gt;

&lt;h3&gt;1. Generalization – The Only Thing That Actually Matters&lt;/h3&gt;

&lt;p&gt;Generalization means the model performs well on new, unseen data, not just the data it was trained on. You train on one dataset (D_train) and test on another (D_test). If the accuracy is almost the same on both, that’s great generalization; if it crashes on the test set, that’s poor generalization. The model isn’t learning your specific photos, numbers, or sentences: it’s trying to learn the hidden rules (the underlying distribution) of the world.&lt;/p&gt;

&lt;h3&gt;2. Overfitting – The Student Who Memorized Everything&lt;/h3&gt;

&lt;p&gt;The model becomes a parrot. It learns:&lt;/p&gt;

&lt;p&gt;Real patterns ✅&lt;br&gt;
Random noise ❌&lt;/p&gt;

&lt;p&gt;Result? Training error drops to almost zero while test error goes sky high.&lt;/p&gt;

&lt;p&gt;Classic signs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Way too many parameters (a huge neural net)&lt;/li&gt;
&lt;li&gt;Not enough training data&lt;/li&gt;
&lt;li&gt;No regularization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as a student who memorizes every past exam question word-for-word instead of understanding the concepts.&lt;/p&gt;

&lt;h3&gt;3. Underfitting – The Student Who Gave Up&lt;/h3&gt;

&lt;p&gt;The model is too dumb or too lazy. It can’t even capture the basic patterns in the training data. You get:&lt;/p&gt;

&lt;p&gt;High training error&lt;br&gt;
High test error&lt;/p&gt;

&lt;p&gt;Causes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model too simple (tiny linear regression on complex data)&lt;/li&gt;
&lt;li&gt;Training stopped too early&lt;/li&gt;
&lt;li&gt;Bad features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s like trying to predict house prices using only the color of the front door.&lt;/p&gt;
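&lt;p&gt;You can watch both failure modes in a few lines: fit polynomials of increasing degree to noisy data and compare train versus test error (a sketch with synthetic data):&lt;/p&gt;

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (120, 1))
y = np.sin(3 * x).ravel() + rng.normal(0, 0.2, 120)  # true signal + noise
x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)

results = {}
for degree in (1, 5, 20):
    feats = PolynomialFeatures(degree)
    model = LinearRegression().fit(feats.fit_transform(x_tr), y_tr)
    train_mse = mean_squared_error(y_tr, model.predict(feats.transform(x_tr)))
    test_mse = mean_squared_error(y_te, model.predict(feats.transform(x_te)))
    results[degree] = (train_mse, test_mse)
    print(degree, round(train_mse, 3), round(test_mse, 3))
```

&lt;p&gt;Degree 1 underfits (high error everywhere), a middle degree tracks the signal, and a very high degree keeps pushing train error down while it starts chasing noise: the overfitting pattern described above.&lt;/p&gt;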

&lt;h3&gt;4. Bias–Variance Tradeoff (The Real Engine)&lt;/h3&gt;

&lt;p&gt;This is the fundamental law. Bias is error because your model makes wrong assumptions (too simple); variance is error because your model is too sensitive to small changes in the training data (too complex). Plot both against model complexity and total error forms a U: the sweet spot in the middle gives the best generalization.&lt;/p&gt;

&lt;h3&gt;5. The Modern Surprise: Double Descent&lt;/h3&gt;

&lt;p&gt;Old textbooks said: “More complexity = worse generalization after a point.” Deep learning laughed at them. Today we see the double descent curve: test error goes down, then up (classic overfitting), then down again when the model becomes ridiculously huge. This is why GPT-4, Stable Diffusion, etc. work at all.&lt;/p&gt;

&lt;h3&gt;6. The 4 Pillars That Create Generalization&lt;/h3&gt;

&lt;p&gt;Generalization doesn’t come from magic. It emerges from four things working together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data&lt;/strong&gt; → size, quality, diversity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Architecture&lt;/strong&gt; → right inductive bias (CNNs love images, Transformers love sequences)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Objective Function&lt;/strong&gt; → loss + regularization terms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimization&lt;/strong&gt; → SGD, Adam, learning rate tricks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Change any one pillar and the whole building shakes.&lt;/p&gt;

&lt;h3&gt;Best Books – From Zero to Research Level&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Beginner (build intuition)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Hands-On Machine Learning&lt;/em&gt; – Aurélien Géron (practical gold)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Pattern Recognition and Machine Learning&lt;/em&gt; – Christopher Bishop (bias-variance explained perfectly)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Intermediate (theory)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Understanding Machine Learning: From Theory to Algorithms&lt;/em&gt; – Shalev-Shwartz &amp;amp; Ben-David&lt;/li&gt;
&lt;li&gt;&lt;em&gt;The Elements of Statistical Learning&lt;/em&gt; – Hastie, Tibshirani, Friedman (the bible)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Advanced / Research (what experts read)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Deep Learning&lt;/em&gt; – Goodfellow, Bengio, Courville&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Deep Learning Generalization: Theoretical Foundations and Practical Strategies&lt;/em&gt; – Liu Peng (goes deep into double descent, NTK, overparameterization)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Information Theory, Inference, and Learning Algorithms&lt;/em&gt; – David MacKay&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Recommended learning order: Géron → Bishop → Shalev-Shwartz → Goodfellow → Liu Peng&lt;/p&gt;

&lt;p&gt;If you found this article helpful and want to dive deeper into machine learning, deep learning, and practical projects, you can connect with me:&lt;/p&gt;

&lt;p&gt;Kaggle – Explore my notebooks, datasets, and competitions:&lt;br&gt;
    &lt;a href="https://www.kaggle.com/girmawakeyo" rel="noopener noreferrer"&gt;https://www.kaggle.com/girmawakeyo&lt;/a&gt;&lt;br&gt;
GitHub –  Check out my code, experiments, and open-source projects:&lt;br&gt;
&lt;a href="https://github.com/Girma35" rel="noopener noreferrer"&gt;https://github.com/Girma35&lt;/a&gt;&lt;br&gt;
X – Follow for insights, updates, and discussions on AI and software development:&lt;br&gt;
&lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;https://x.com/Girma880731631&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to follow, explore, or reach out. I look forward to sharing knowledge and building projects together.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>modelbuilding</category>
      <category>ai</category>
    </item>
    <item>
      <title>No, the software developer job isn't dead in 2026 but damn, it's changed more in the last couple of years</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Thu, 12 Feb 2026 15:51:22 +0000</pubDate>
      <link>https://dev.to/girma35/no-the-software-developer-job-isnt-dead-in-2026-but-damn-its-changed-more-in-the-last-couple-4pp5</link>
      <guid>https://dev.to/girma35/no-the-software-developer-job-isnt-dead-in-2026-but-damn-its-changed-more-in-the-last-couple-4pp5</guid>
      <description>&lt;p&gt;I've been watching this space closely (hell, we've all been living it), and the headlines screaming "AI KILLS CODING FOREVER" feel like clickbait from people who never shipped real production code. The truth is messier, more interesting, and honestly a bit exciting if you're willing to adapt. Let me break it down honestly, no hype, no doom scrolling.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Panic Was Real (and Partly Right)
&lt;/h3&gt;

&lt;p&gt;Back in 2024–2025, when Anthropic's CEO dropped that bomb about AI writing 90% of code within months, a lot of us rolled our eyes... until it kinda happened. Tools like Claude Opus 4.6, GPT Codex variants, and agentic frameworks (shoutout to stuff like OpenClaw that blew up on GitHub) let one solid dev orchestrate agents to crank out what used to take a small team weeks. Entry-level hiring tanked — Stanford studies showed software jobs for 22–25-year-olds dropping 20% from peaks, with junior postings down 60% in spots. Companies shrunk teams: a 2–3-person crew with AI can now handle what 8–10 used to.&lt;/p&gt;

&lt;p&gt;Layoffs hit hard in big tech, and "vibe coding" became a meme for the sloppy, regret-filled output when people let agents run wild without oversight. Managers who thought "just hire AI" ended up with mountains of unmaintainable slop — hallucinations, security holes, brittle systems that break in prod. That $61 billion technical-debt crisis everyone's whispering about? Not made up.&lt;/p&gt;

&lt;p&gt;So yeah, if your job was mostly boilerplate CRUD, copy-paste from Stack Overflow, or being the 10th guy on a ticket queue... that version of "software developer" is on life support.&lt;/p&gt;

&lt;h3&gt;
  
  
  But Here's the Flip: The Job Didn't Die, It Leveled Up
&lt;/h3&gt;

&lt;p&gt;Look at the actual numbers from people who aren't selling fear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The U.S. Bureau of Labor Statistics still projects ~15–18% growth for software devs through 2034, way above average, adding hundreds of thousands of roles.&lt;/li&gt;
&lt;li&gt;Demand for AI-native engineers (folks who orchestrate agents, design systems, evaluate output, and handle the edge cases AI hallucinates on) has exploded. Salaries for seniors with agentic skills carry an 18–30% premium in many spots.&lt;/li&gt;
&lt;li&gt;World Economic Forum and JetBrains surveys: 4 in 10 devs say AI already expanded their opportunities; 7 in 10 expect their role to evolve further in 2026. We're shifting from "code writer" to "system orchestrator": architecture, agent coordination, strategic decomposition, quality gates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Peter Steinberger (the OpenClaw guy on Lex Fridman's podcast) nailed it: AI agents will probably replace 80% of traditional apps because personal agents handle tasks better than siloed software. But programmers? They evolve into directors — guiding agents through long sessions, voice prompting, refactoring on the fly, integrating tests, even letting agents self-modify safely in sandboxes.&lt;/p&gt;

&lt;p&gt;The skill gap widened, not closed. Bad devs (or lazy ones) get exposed fast — AI makes their weaknesses obvious. Great ones become weapons-grade productive. The market rewards thinkers over typists now.&lt;/p&gt;

&lt;h3&gt;
  
  
  What This Means for You in 2026
&lt;/h3&gt;

&lt;p&gt;If you're a good engineer already dipping into agentic coding (like the workflows we talked about: voice prompting, long autonomous runs, self-modifying agents), you're in a sweet spot. The future isn't fewer jobs; it's fewer rote jobs and way more leverage for those who adapt.&lt;/p&gt;

&lt;p&gt;Juniors/bootcamp folks? Tougher road: the traditional "grind LeetCode → junior role → learn on the job" pipeline shrank. But if you skip straight to mastering AI orchestration, product thinking, and domain expertise, you can leapfrog.&lt;/p&gt;

&lt;p&gt;Everyone else? Upskill or get comfortable being commoditized. Learn to prompt like a pro, build with agents (OpenClaw, Cursor, Aider stacks), and focus on what AI sucks at: real-world judgment, security, ethics, cross-team empathy, and turning business chaos into clean systems.&lt;/p&gt;

&lt;p&gt;Coding isn't dead. Hand-writing every line like it's 2015? Yeah, that's fading fast. But engineering — solving hard problems, building reliable things that matter, directing intelligence at scale — that's thriving.&lt;/p&gt;

&lt;h3&gt;
  
  
  Call to Action (Freelancer-Focused)
&lt;/h3&gt;

&lt;p&gt;The freelance world is booming for adaptable devs right now — companies need quick, high-leverage builds without full-time overhead. If you're shipping AI-augmented work, clients are paying premiums.&lt;/p&gt;

&lt;p&gt;Check me out if you need a reliable partner for web/apps, AI integrations, or full stack projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upwork: &lt;a href="https://www.upwork.com/freelancers/%7E015e94f70259a74e1d?mp_source=share" rel="noopener noreferrer"&gt;https://www.upwork.com/freelancers/~015e94f70259a74e1d?mp_source=share&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Fiverr: &lt;a href="https://www.fiverr.com/s/Q7ArERy" rel="noopener noreferrer"&gt;https://www.fiverr.com/s/Q7ArERy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Portfolio &amp;amp; more: &lt;a href="https://girma.studio/" rel="noopener noreferrer"&gt;https://girma.studio/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Hit me on X: &lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;https://x.com/Girma880731631&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's build something cool in this new era — because the job isn't dead. It's just finally interesting again.&lt;/p&gt;

&lt;p&gt;What do you think — are you feeling the shift, or still riding the old wave? Drop your take below. &lt;/p&gt;

</description>
      <category>agentic</category>
      <category>codepen</category>
      <category>coding</category>
      <category>hacktoberfest23</category>
    </item>
    <item>
      <title>How to Deploy OpenClaw (Moltbot) Securely on DigitalOcean Step-by-Step using 1-Click Droplet &amp; Docker</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Wed, 11 Feb 2026 17:50:34 +0000</pubDate>
      <link>https://dev.to/girma35/how-to-deploy-openclaw-moltbot-securely-on-digitalocean-step-by-step-using-1-click-droplet--5ak8</link>
      <guid>https://dev.to/girma35/how-to-deploy-openclaw-moltbot-securely-on-digitalocean-step-by-step-using-1-click-droplet--5ak8</guid>
      <description>&lt;p&gt;Literally, 2026 is the year autonomous AI agents exploded—OpenClaw (formerly Clawdbot, then Moltbot) went viral with hundreds of thousands of GitHub stars in weeks, millions of agents spawning on platforms like Moltbook, and everyone scrambling to run these "Claude with hands" beasts 24/7 without frying their laptops or exposing everything to prompt-injection nightmares. Running it locally? Battery drain, security holes, no mobile access. The fix: cloud deployment. This guide walks you through DigitalOcean's official 1-click droplet setup (from their Feb 2026 tutorial) so you get a hardened, always-on instance in minutes—sandboxed in Docker, firewalled, token-auth'd, and ready to chat via Telegram like it's 2026.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech Stack&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DigitalOcean Droplets (cloud VM)
&lt;/li&gt;
&lt;li&gt;Docker (containerization + sandboxing)
&lt;/li&gt;
&lt;li&gt;OpenClaw / Moltbot (open-source autonomous AI agent)
&lt;/li&gt;
&lt;li&gt;LLM providers: Anthropic Claude, DigitalOcean Gradient AI, etc.
&lt;/li&gt;
&lt;li&gt;Integrations: Telegram (primary channel), WhatsApp/Slack/Discord optional
&lt;/li&gt;
&lt;li&gt;Web dashboard + TUI (terminal chat)
&lt;/li&gt;
&lt;li&gt;MoltHub (skill marketplace for tools like summarization, browsing)
&lt;/li&gt;
&lt;li&gt;Built-in security: Gateway tokens, UFW firewall, rate limiting, non-root execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Spin Up the 1-Click Droplet&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Head to the DigitalOcean Marketplace → search "OpenClaw" (or "Moltbot").&lt;br&gt;&lt;br&gt;
Select the app → choose region (low-latency pick), size (start with 2GB RAM / 1 vCPU ~$12/mo for smooth agentic tasks), and SSH key (mandatory for security).&lt;br&gt;&lt;br&gt;
Create. Done in ~60 seconds.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SSH In &amp;amp; Grab Config Info&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;code&gt;ssh root@YOUR_DROPLET_IP&lt;/code&gt;&lt;br&gt;&lt;br&gt;
The welcome message spits out your gateway URL, token, and commands. Everything's pre-hardened—no manual firewall tweaks needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pick Your AI Brain&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Enter your API key (e.g., Anthropic) when prompted. The service auto-restarts. Pro move: use a limited-key for cost control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Chat in the Terminal (TUI)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Run the TUI command from the welcome msg.&lt;br&gt;&lt;br&gt;
Test memory: tell it something → exit → come back and ask if it remembers. Persistence just works.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Open the Web Dashboard&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Hit the gateway URL in your browser → auth with the token.&lt;br&gt;&lt;br&gt;
Monitor chats, channels, skills, logs—all in one dark-mode beauty.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pair Telegram for Real-World Access&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Use the CLI script to add Telegram → create bot via BotFather → paste token.&lt;br&gt;&lt;br&gt;
Generate pairing code/QR → scan/message in Telegram DMs. Now your agent lives in your pocket.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Skills &amp;amp; Go Agentic&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
From dashboard or chat: browse MoltHub → install summarizer, browser tool, etc.&lt;br&gt;&lt;br&gt;
Test: paste URL → watch it summarize like a pro.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
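
&lt;p&gt;Once the droplet is up, it's worth scripting a quick health check. Here's a minimal sketch in Python that probes your gateway URL with the token from the welcome message; the exact &lt;code&gt;/health&lt;/code&gt; path is hypothetical, so adjust it to whatever your instance reports.&lt;/p&gt;

```python
# Minimal post-deploy probe: confirm the gateway answers with your token.
# The URL and the /health path below are placeholders (hypothetical) --
# use the gateway URL printed in your droplet's welcome message.
import urllib.error
import urllib.request

def gateway_reachable(url: str, token: str, timeout: float = 5.0) -> bool:
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status in range(200, 400)
    except urllib.error.HTTPError as e:
        # 401/403 still proves the gateway is up and enforcing auth
        return e.code in (401, 403)
    except (urllib.error.URLError, TimeoutError):
        return False
```

&lt;p&gt;A 401/403 response still counts as reachable here: it proves the gateway is running and rejecting bad tokens, which is exactly what you want to see before pairing Telegram.&lt;/p&gt;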

&lt;p&gt;&lt;strong&gt;Common Mistakes (and How the 1-Click Saves You)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In early 2026, everyone is running these agents locally, which is a huge security risk (exposed ports, root execution, no isolation), or hand-rolling Docker setups and forgetting tokens and firewalls. The Marketplace droplet auto-applies best practices: non-root execution, rate limits, gateway auth, Docker sandboxing. Tempted to reuse weak local keys or expose the dashboard publicly? This setup enforces secure pairing instead. Bonus: it scales easily — no more "my Mac Mini is screaming" memes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Result&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You've got a production-grade, cloud-hosted autonomous agent: 24/7 uptime, Telegram DM control, persistent memory, extensible skills via MoltHub. Latency stays snappy, costs predictable, security locked down. Perfect for personal use or client automations in this agentic boom.&lt;/p&gt;

&lt;p&gt;(Imagine screenshots here of: droplet dashboard, Telegram convo summarizing an article, TUI memory test, web overview with green health status.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Call to Action&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 I'm building agentic AI setups like this for clients right now—custom OpenClaw extensions, secure cloud deploys, Telegram bots, skill integrations, and full autonomous workflows. As a freelance Full-Stack &amp;amp; AI Automation Developer, I turn these viral trends into bulletproof production systems.&lt;/p&gt;

&lt;p&gt;🔗 Upwork: &lt;a href="https://www.upwork.com/freelancers/%7E015e94f70259a74e1d?mp_source=share" rel="noopener noreferrer"&gt;https://www.upwork.com/freelancers/~015e94f70259a74e1d?mp_source=share&lt;/a&gt;&lt;br&gt;&lt;br&gt;
🔗 Fiverr: &lt;a href="https://www.fiverr.com/s/Q7ArERy" rel="noopener noreferrer"&gt;https://www.fiverr.com/s/Q7ArERy&lt;/a&gt;&lt;br&gt;&lt;br&gt;
🔗 GitHub/Portfolio: &lt;a href="https://girma.studio/" rel="noopener noreferrer"&gt;https://girma.studio/&lt;/a&gt;&lt;br&gt;&lt;br&gt;
🔗 X: &lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;https://x.com/Girma880731631&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;DM me your idea—let's make your agent go viral (safely). &lt;/p&gt;

</description>
      <category>moltbot</category>
      <category>openclaw</category>
      <category>dohackathon</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Deploying a Vibe-Coded Website Fast, Clean, and Production-Ready</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Thu, 05 Feb 2026 06:43:40 +0000</pubDate>
      <link>https://dev.to/girma35/deploying-a-vibe-coded-website-fast-clean-and-production-ready-2mm3</link>
      <guid>https://dev.to/girma35/deploying-a-vibe-coded-website-fast-clean-and-production-ready-2mm3</guid>
      <description>&lt;p&gt;Deploying a Vibe-Coded Website — Fast, Clean, and Production-Ready&lt;/p&gt;

&lt;p&gt;You’ve built your vibe-coded website. It looks great locally. Now it’s time to launch it to the world using platforms like Vercel and other modern hosting providers. Here’s a simple deployment flow that works for most modern stacks (Next.js, React, static builds, and similar frameworks).&lt;/p&gt;

&lt;p&gt;Step 1 — Prepare the Project&lt;br&gt;
Make sure your project runs without errors locally. Confirm:&lt;br&gt;
• Dependencies are installed&lt;br&gt;
• Build command works (&lt;code&gt;npm run build&lt;/code&gt; or equivalent)&lt;br&gt;
• Environment variables are configured&lt;br&gt;
• No hard-coded local URLs remain&lt;/p&gt;
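
&lt;p&gt;The checklist above is easy to automate. Here's a rough pre-deploy sanity check, sketched in Python; the &lt;code&gt;src&lt;/code&gt; folder, the file extensions, and &lt;code&gt;npm run build&lt;/code&gt; are assumptions, so swap in your stack's equivalents.&lt;/p&gt;

```python
# Pre-deploy sanity sketch: flag hard-coded local URLs and verify the build.
# The "src" folder, file extensions, and "npm run build" are assumptions --
# adjust them to match your own project layout.
import pathlib
import re
import subprocess

LOCAL_URL = re.compile(r"https?://(localhost|127\.0\.0\.1)[:/]?")
EXTENSIONS = {".js", ".jsx", ".ts", ".tsx", ".html", ".css"}

def find_local_urls(root: str = "src") -> list:
    """Return file:line locations that still reference a local URL."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix in EXTENSIONS and path.is_file():
            for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if LOCAL_URL.search(line):
                    hits.append(f"{path}:{i}")
    return hits

def build_ok(cmd=("npm", "run", "build")) -> bool:
    """True when the build command exits cleanly."""
    return subprocess.run(cmd, capture_output=True).returncode == 0
```

&lt;p&gt;Run it before every push: an empty list from &lt;code&gt;find_local_urls()&lt;/code&gt; plus a passing build means you're clear to move on to Step 2.&lt;/p&gt;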

&lt;p&gt;Step 2 — Push to Git&lt;br&gt;
Most modern hosts deploy directly from Git repositories. Push your project to GitHub, GitLab, or Bitbucket. Clean commits make debugging easier later.&lt;/p&gt;

&lt;p&gt;Step 3 — Deploy to Vercel&lt;br&gt;
• Sign in to Vercel&lt;br&gt;
• Click “New Project”&lt;br&gt;
• Import your repository&lt;br&gt;
• Vercel auto-detects most frameworks&lt;br&gt;
• Set build command and output directory if needed&lt;br&gt;
• Add environment variables&lt;br&gt;
• Click Deploy&lt;/p&gt;

&lt;p&gt;Vercel handles SSL, CDN, and global edge delivery automatically.&lt;/p&gt;

&lt;p&gt;Step 4 — Deploy to Other Platforms (Netlify, Cloudflare Pages, Render, etc.)&lt;br&gt;
The process is similar:&lt;br&gt;
• Connect your Git repo&lt;br&gt;
• Set build command&lt;br&gt;
• Set output folder&lt;br&gt;
• Add environment variables&lt;br&gt;
• Trigger deploy&lt;/p&gt;

&lt;p&gt;Step 5 — Custom Domain&lt;br&gt;
Add your domain inside the hosting dashboard and update DNS records at your domain registrar. Most providers give copy-paste DNS values.&lt;/p&gt;
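
&lt;p&gt;To confirm DNS has actually propagated after updating your records, a few lines of Python will do; &lt;code&gt;example.com&lt;/code&gt; is a placeholder for your own domain.&lt;/p&gt;

```python
# DNS propagation check: resolve a domain and list its addresses so you can
# compare them against the values your hosting provider gave you.
# "example.com" is a placeholder -- substitute your own domain.
import socket

def resolve(domain: str) -> list:
    try:
        infos = socket.getaddrinfo(domain, None)
    except socket.gaierror:
        return []  # not resolvable (yet)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve("example.com"))
```

&lt;p&gt;An empty list means the records haven't propagated yet; give it time and re-run.&lt;/p&gt;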

&lt;p&gt;Step 6 — Production Checks&lt;br&gt;
After deployment:&lt;br&gt;
• Test forms and APIs&lt;br&gt;
• Check mobile layout&lt;br&gt;
• Verify SEO metadata&lt;br&gt;
• Confirm performance scores&lt;br&gt;
• Test from multiple devices&lt;/p&gt;

&lt;p&gt;Want someone to handle deployment, optimization, and production hardening for you?&lt;/p&gt;

&lt;p&gt;Get it done here → &lt;a href="http://www.fiverr.com/s/EgAVXPe" rel="noopener noreferrer"&gt;http://www.fiverr.com/s/EgAVXPe&lt;/a&gt;&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>lovable</category>
      <category>automation</category>
      <category>vercel</category>
    </item>
    <item>
      <title>I Built an AI-Powered Business Website in 7 Days Lessons, Mistakes, and Results</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Wed, 04 Feb 2026 11:56:53 +0000</pubDate>
      <link>https://dev.to/girma35/i-built-an-ai-powered-business-website-in-7-days-lessons-mistakes-and-results-4ibg</link>
      <guid>https://dev.to/girma35/i-built-an-ai-powered-business-website-in-7-days-lessons-mistakes-and-results-4ibg</guid>
      <description>&lt;p&gt;This project involved building an AI-powered website designed to help businesses establish a professional online presence quickly and efficiently. The website was created with students, early-stage startups, and freelance clients in mind, especially those who want practical results rather than theoretical demos.&lt;/p&gt;

&lt;p&gt;The core idea was not to experiment with trends, but to deliver a working product under real-world constraints. The focus was speed, clarity, and long-term usability rather than visual excess or unnecessary complexity.&lt;/p&gt;

&lt;p&gt;The main problem was familiar. Many businesses understand they need a website, but traditional development often takes weeks, requires multiple specialists, and demands content they do not yet have. The goal of this project was to reduce friction by using AI to accelerate setup, lower costs, and improve communication, while still keeping human control over quality.&lt;/p&gt;

&lt;p&gt;The timeline was tight. The entire project had to be completed in seven days. Features had to be carefully chosen, and every technical decision needed to support speed, maintainability, and performance. The website needed to be usable by non-technical users and ready for real traffic.&lt;/p&gt;

&lt;p&gt;The solution was to build a lean MVP with AI integrated as a support layer rather than the foundation. The architecture focused on simplicity. A modern responsive frontend was paired with lightweight backend services and AI tools that handled content generation, optimization, and basic analytics. Instead of building complex custom systems, the emphasis was on choosing reliable tools and connecting them cleanly.&lt;/p&gt;

&lt;p&gt;The website included AI-assisted landing page content, dynamic text optimization for clarity and search engines, automated content suggestions, and basic behavior tracking to understand how users interacted with the site. Performance and mobile usability were prioritized from the start.&lt;/p&gt;

&lt;p&gt;Development followed a structured but flexible approach. The business message was defined before any code was written. A minimal structure was built first, then AI tools were introduced gradually. Each feature was tested for usefulness rather than novelty, and unnecessary complexity was removed early.&lt;/p&gt;

&lt;p&gt;Several challenges emerged during development. One early mistake was relying too much on raw AI-generated content. While the text was technically correct, it lacked the tone and emotional clarity needed to connect with real users. Another issue was finding the right balance between automation and control. Too much automation made the site feel generic, while too little reduced the benefits of using AI.&lt;/p&gt;

&lt;p&gt;These problems were solved by introducing human review steps, refining AI prompts, and simplifying the user experience instead of adding more features. The lesson was clear: AI works best as an assistant, not a replacement. It amplifies good decisions and exposes weak ones very quickly.&lt;/p&gt;

&lt;p&gt;The final result was a fully functional AI-powered business website delivered within the seven-day timeline. The site performed well, communicated clearly, and allowed content updates with minimal effort. Early feedback was positive, and the foundation was strong enough to scale without major rewrites.&lt;/p&gt;

&lt;p&gt;This project reinforced an important idea. The value of an AI-powered website comes down to leverage. It allows businesses to move faster, reduce costs, and communicate more effectively. Speed matters because attention is limited. AI makes it possible for a business to exist online immediately rather than waiting weeks for a perfect setup.&lt;/p&gt;

&lt;p&gt;AI also lowers the barrier to entry. Small teams and solo founders often lack designers, writers, or developers. AI helps fill those gaps without removing human oversight. It improves communication, adapts based on user behavior, and increases visibility through better optimization and personalization.&lt;/p&gt;

&lt;p&gt;Businesses once competed on size, then efficiency. Today, they compete on intelligence. AI-powered websites are not about looking futuristic. They are about surviving and growing in an environment where clarity, speed, and focus decide outcomes.&lt;/p&gt;

&lt;p&gt;Technology alone does not make a business successful. But ignoring effective tools has quietly ended many promising ones.&lt;/p&gt;

&lt;p&gt;I build real applications.&lt;br&gt;
If you are a startup founder, business owner, or client looking for a reliable developer who understands both engineering and outcomes,&lt;br&gt;
you can find me here.&lt;/p&gt;

&lt;p&gt;Upwork: &lt;a href="https://www.upwork.com/freelancers/~015e94f70259a74e1d?mp_source=share" rel="noopener noreferrer"&gt;https://www.upwork.com/freelancers/~015e94f70259a74e1d?mp_source=share&lt;/a&gt;&lt;br&gt;
Fiverr: &lt;a href="https://www.fiverr.com/s/Q7ArERy" rel="noopener noreferrer"&gt;https://www.fiverr.com/s/Q7ArERy&lt;/a&gt;&lt;br&gt;
Portfolio and GitHub: &lt;a href="https://girma.studio/" rel="noopener noreferrer"&gt;https://girma.studio/&lt;/a&gt;&lt;br&gt;
X: &lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;https://x.com/Girma880731631&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well-built software saves time. Well-designed systems protect businesses from wasted effort.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>startup</category>
      <category>rpa</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Node.js vs Deno vs Bun A Developer &amp; Tech Perspective</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Tue, 03 Feb 2026 06:26:06 +0000</pubDate>
      <link>https://dev.to/girma35/nodejs-vs-deno-vs-bun-a-developer-tech-perspective-5f3p</link>
      <guid>https://dev.to/girma35/nodejs-vs-deno-vs-bun-a-developer-tech-perspective-5f3p</guid>
      <description>&lt;p&gt;Choosing between &lt;strong&gt;Node.js&lt;/strong&gt;, &lt;strong&gt;Deno&lt;/strong&gt;, and &lt;strong&gt;Bun&lt;/strong&gt; often confuses developers because all three run JavaScript/TypeScript on the server, but they represent different philosophies: Node.js is the established giant with massive ecosystem support, Deno emphasizes secure-by-default and modern web standards, and Bun focuses on blazing speed and all-in-one tooling. This article is for developers, tech enthusiasts, students experimenting with backends, startups building fast prototypes, and freelancers delivering client projects efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Comparison Table:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Node.js&lt;/th&gt;
&lt;th&gt;Deno&lt;/th&gt;
&lt;th&gt;Bun&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;Solid and predictable (★★★☆☆)&lt;/td&gt;
&lt;td&gt;Faster startup &amp;amp; good throughput (★★★★☆)&lt;/td&gt;
&lt;td&gt;Blazing fast, often 2-4x in benchmarks (★★★★★)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning Curve&lt;/td&gt;
&lt;td&gt;Moderate (familiar but dated APIs)&lt;/td&gt;
&lt;td&gt;Steeper initially due to permissions &amp;amp; URL imports&lt;/td&gt;
&lt;td&gt;Low — feels like Node but faster &amp;amp; simpler&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security&lt;/td&gt;
&lt;td&gt;Unrestricted by default (★★★☆☆)&lt;/td&gt;
&lt;td&gt;Secure sandbox with explicit permissions (★★★★★)&lt;/td&gt;
&lt;td&gt;Moderate, better isolation but less strict (★★★☆☆)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Compatibility &amp;amp; Ecosystem&lt;/td&gt;
&lt;td&gt;Massive npm, full legacy support (★★★★★)&lt;/td&gt;
&lt;td&gt;Growing, excellent npm compat in Deno 2 (★★★★☆)&lt;/td&gt;
&lt;td&gt;High npm compat, rapid growth (★★★★☆)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tooling&lt;/td&gt;
&lt;td&gt;Requires extras (npm, tsc, etc.) (★★★☆☆)&lt;/td&gt;
&lt;td&gt;Built-in test runner, formatter, bundler (★★★★★)&lt;/td&gt;
&lt;td&gt;All-in-one: bundler, test runner, package manager (★★★★★)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community Support&lt;/td&gt;
&lt;td&gt;Huge, battle-tested (★★★★★)&lt;/td&gt;
&lt;td&gt;Growing steadily (★★★★☆)&lt;/td&gt;
&lt;td&gt;Rapidly expanding, developer-focused (★★★★☆)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
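
&lt;p&gt;Benchmark numbers vary by machine, so it's worth measuring startup latency yourself. Here's a rough sketch in Python that times a no-op program on each runtime; it assumes the three binaries are on your PATH and skips any that aren't (and note that &lt;code&gt;bun -e&lt;/code&gt; needs a reasonably recent Bun).&lt;/p&gt;

```python
# Rough startup-latency comparison for node/deno/bun: time a no-op eval.
# Runtimes missing from PATH are skipped rather than crashing the script.
import shutil
import subprocess
import sys
import time

def measure_startup(cmd: list, runs: int = 5):
    """Median wall-clock seconds to run cmd, or None if the binary is absent."""
    if shutil.which(cmd[0]) is None:
        return None
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, capture_output=True)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

if __name__ == "__main__":
    runtimes = {
        "node": ["node", "-e", "0"],
        "deno": ["deno", "eval", "0"],
        "bun": ["bun", "-e", "0"],
    }
    for name, cmd in runtimes.items():
        t = measure_startup(cmd)
        if t is None:
            print(name, "not installed")
        else:
            print(name, round(t * 1000), "ms")
```

&lt;p&gt;Startup time mostly matters for CLIs and serverless cold starts; for long-running servers, throughput under load is the number to chase instead.&lt;/p&gt;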

&lt;p&gt;&lt;strong&gt;Real-World Use Case:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;When to choose Node.js&lt;/strong&gt;: For large-scale enterprise apps, legacy codebases, or projects needing maximum library compatibility and hiring ease, like a fintech backend with complex dependencies or a production API serving millions where stability trumps raw speed. In my freelance work, clients with existing Node teams or compliance needs stick here reliably.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;When to choose Deno&lt;/strong&gt;: For security-sensitive apps (e.g., multi-tenant SaaS, edge functions, or scripts handling untrusted input) or TypeScript-first greenfield projects where you want modern standards without node_modules hassle — ideal for clean APIs or serverless deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;When to choose Bun&lt;/strong&gt;: For performance-critical apps (high-throughput APIs, CLI tools, or startups optimizing server costs), rapid prototyping, or when developer experience matters most — like MVPs, real-time services, or bundling full-stack apps quickly. I've seen it shave significant time off build/deploy cycles for client prototypes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;My Recommendation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Based on my experience freelancing on dozens of backend projects, from student MVPs to startup production systems, &lt;strong&gt;Node.js&lt;/strong&gt; remains the safest default for most real-world work in 2026 due to its unmatched ecosystem, stability, and team familiarity. Go with &lt;strong&gt;Bun&lt;/strong&gt; for new projects where speed and modern DX give you an edge (especially if you're optimizing costs or building fast), and pick &lt;strong&gt;Deno&lt;/strong&gt; if security-by-default or web-standard alignment is non-negotiable. No single "winner" exists; the best choice depends on your project's constraints, team skills, and priorities. Experiment with all three on side projects to see what clicks for you.&lt;/p&gt;

&lt;p&gt;As a senior freelance developer, if you're building or scaling a backend and need help choosing the right runtime, migrating code, or optimizing performance for your startup/client project, let's chat! Find me on Upwork, Fiverr, GitHub, or X for consultations, code reviews, or full implementations. Drop a message — I'd love to help turn your idea into production-ready code. &lt;/p&gt;

</description>
      <category>bunjs</category>
      <category>node</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>CLI-Agent vs MCP A Practical Comparison for Students, Startups, and Developers</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Mon, 02 Feb 2026 15:55:50 +0000</pubDate>
      <link>https://dev.to/girma35/cli-agent-vs-mcp-a-practical-comparison-for-students-startups-and-developers-4com</link>
      <guid>https://dev.to/girma35/cli-agent-vs-mcp-a-practical-comparison-for-students-startups-and-developers-4com</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The choice between traditional CLI-based AI agents and the Model Context Protocol (MCP) often creates confusion when building intelligent, autonomous systems. CLI agents rely on existing command-line tools—battle-tested interfaces that humans have refined over decades—while MCP offers a structured, schema-driven protocol for secure, machine-first connections to data and tools. The core tension lies in legibility: should systems remain human-readable and debuggable through familiar text outputs, or prioritize machine guarantees to eliminate ambiguity and parsing errors?&lt;/p&gt;

&lt;p&gt;Students exploring AI agent development, startups prototyping efficient tools, and developers (including freelancers) evaluating production approaches will find this comparison useful. Drawing from real-world implementations in 2025–2026, including benchmarks, client projects, and community debates, this article breaks down the trade-offs clearly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Comparison Table:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;CLI-Agent Position&lt;/th&gt;
&lt;th&gt;MCP Position&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;Superior token efficiency in many cases; agents call tools via shell with minimal context overhead. Benchmarks show up to 33% better efficiency and capabilities in debugging workflows.&lt;/td&gt;
&lt;td&gt;Structured calls reduce round-trips and parsing errors, but tool discovery/schemas can inflate token usage when many tools are exposed. Code execution integrations help optimize.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning Curve&lt;/td&gt;
&lt;td&gt;Gentler for those familiar with terminals; reuse knowledge of git, curl, jq, etc. LLMs excel at --help parsing and piping outputs naturally.&lt;/td&gt;
&lt;td&gt;Steeper upfront: learn JSON schemas, MCP servers/clients, OAuth/auth flows, and protocol specs. Once grasped, interactions become more predictable and typed.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;Generally lower; leverages free/open-source CLIs, requires less prompt engineering for robust calls, and uses fewer tokens overall in practical agent loops.&lt;/td&gt;
&lt;td&gt;Can be higher due to schema overhead and discovery, but scales cost-effectively for complex, multi-tool setups without redundant integrations.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community Support&lt;/td&gt;
&lt;td&gt;Enormous and mature; decades of CLI ecosystem (npm, brew, pip tools), active debates on X/Reddit/GitHub favoring CLI for flexibility and efficiency in coding agents.&lt;/td&gt;
&lt;td&gt;Rapid growth since Anthropic's 2024 open-sourcing; strong in Claude ecosystem, VS Code, enterprise (thousands of MCP servers built), with SDKs in major languages.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tooling &amp;amp; Debuggability&lt;/td&gt;
&lt;td&gt;Outstanding human inspectability—stdout/stderr logging, manual command replay, shared human/agent workflows. Easy to debug by running commands yourself.&lt;/td&gt;
&lt;td&gt;Schema enforcement and typing prevent classes of errors; better security/consent/sandboxes. Debugging requires MCP-specific tools/inspectors, less "vibe-based."&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Real-World Use Case:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;When to choose CLI-Agent:&lt;/em&gt; Opt for CLI approaches in scenarios demanding speed, cost control, and human oversight—like student experiments, quick prototypes, or solo/small-team development. For example, in coding agents (Claude Code, Aider, Gemini CLI, OpenCode), CLI excels at git workflows, test running, debugging, and repo management. One benchmark highlighted CLI winning by 17 points and 33% token savings in developer tasks, completing jobs (e.g., memory profiling) that MCP structurally struggled with due to selective output vs. full dumps. In practice, teams ship CLI + agent skills (e.g., custom scripts piped with jq) faster, with greater control and reliability—especially when humans remain in the loop for approval or fixes.&lt;/p&gt;
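
&lt;p&gt;To make the CLI side concrete, here's a minimal sketch in Python of the tool loop such agents run: the model proposes a shell command, the harness executes it, and the raw stdout/stderr is appended to the context. The &lt;code&gt;propose_command&lt;/code&gt; callback is a hypothetical stand-in for a real LLM call, not any particular framework's API.&lt;/p&gt;

```python
# Minimal CLI-agent loop sketch: an LLM proposes commands, the harness runs
# them and feeds plain-text output back. propose_command is a hypothetical
# stand-in for your model call; returning None means the task is done.
import shlex
import subprocess
import sys

def run_tool(command: str, timeout: int = 30) -> str:
    """Run one command (no shell) and return its combined stdout/stderr."""
    proc = subprocess.run(shlex.split(command), capture_output=True,
                          text=True, timeout=timeout)
    return proc.stdout + proc.stderr

def agent_loop(task: str, propose_command, max_steps: int = 5) -> str:
    """Alternate model proposals with tool runs; the transcript stays readable."""
    transcript = f"task: {task}\n"
    for _ in range(max_steps):
        cmd = propose_command(transcript)  # LLM decides the next command
        if cmd is None:                    # model signals completion
            break
        transcript += f"$ {cmd}\n{run_tool(cmd)}"
    return transcript
```

&lt;p&gt;Because the transcript is plain text, a human can replay any &lt;code&gt;$&lt;/code&gt; line by hand, which is exactly the inspectability advantage the table above describes.&lt;/p&gt;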

&lt;p&gt;&lt;em&gt;When to choose MCP:&lt;/em&gt; Turn to MCP for production systems requiring reliability, security, and autonomous operation across diverse tools/data sources. Examples include enterprise chatbots connecting to databases/APIs, AI-powered IDEs pulling real-time context, or agents handling Figma designs to code generation. MCP's schemas eliminate parsing brittleness, support OAuth for consented access, and standardize integrations (e.g., GitHub MCP server for repo/issues/CI). In scaled setups, it prevents hallucinations from ambiguous text and enables modular ecosystems where agents discover/use tools without custom hacks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Recommendation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From hands-on experience building and benchmarking AI agents in 2025–2026: &lt;strong&gt;Start with CLI-Agent approaches&lt;/strong&gt; for most learning, prototyping, and everyday development work. They deliver faster iteration, lower inference costs, higher token efficiency, and full human legibility—you can always inspect outputs, replay commands, or intervene directly. CLI agents shine in coding tasks (e.g., 100% success in some tool benchmarks with better autonomy), leverage decades of operational knowledge, and compose naturally (pipe outputs, grep/filter). Community momentum (e.g., "CLI + skills &amp;gt;&amp;gt;&amp;gt; MCP" sentiments) and practical wins, such as reducing the risk of dangerous commands through careful guardrail design, make them the pragmatic choice for students and startups.&lt;/p&gt;

&lt;p&gt;Adopt &lt;strong&gt;MCP&lt;/strong&gt; as projects mature toward production, multi-tool complexity, or agent-only execution. It provides guarantees against errors, standardized security, and ecosystem scale (thousands of servers, cross-platform support). Many effective setups hybridize: use MCP for discovery/structured access where needed, but fall back to CLI for execution efficiency.&lt;/p&gt;

&lt;p&gt;Practical tips from projects: Begin with simple CLI agents (e.g., terminal-based with LangChain or custom scripts) to grasp agentic flows quickly. Test token usage rigorously—CLI often wins on cost. Avoid premature schema complexity; add MCP for polish when reliability demands it. In coding, well-configured CLI agents with MCP augmentation (e.g., for specific tools) frequently outperform pure MCP in speed and stability.&lt;/p&gt;

&lt;p&gt;Picking between CLI agents and MCP can dramatically impact your project's efficiency, cost, and reliability.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>cli</category>
      <category>softwareengineering</category>
      <category>programming</category>
    </item>
    <item>
      <title>Backend Mastery from First Principles in the AI Era: (2026 Edition)</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Sat, 31 Jan 2026 12:46:36 +0000</pubDate>
      <link>https://dev.to/girma35/backend-mastery-from-first-principles-in-the-ai-era-2026-edition-5col</link>
      <guid>https://dev.to/girma35/backend-mastery-from-first-principles-in-the-ai-era-2026-edition-5col</guid>
      <description>&lt;p&gt;In today's world of rapid framework hype — Express one week, NestJS the next, then Gin, FastAPI, or Spring Boot — many developers build APIs without truly understanding &lt;strong&gt;why&lt;/strong&gt; the systems behave the way they do. They copy-paste middleware chains, slap on JWT auth, throw in Redis for caching, and call it "production-ready."&lt;/p&gt;

&lt;p&gt;But when things break at scale — race conditions appear, graceful shutdowns fail, observability is missing, or a simple misconfigured route leaks sensitive data — the cracks show. The fix isn't another tutorial; it's going back to &lt;strong&gt;first principles&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First principles thinking&lt;/strong&gt; (inspired by thinkers like Aristotle and popularized by Elon Musk) means breaking complex problems down to their most basic, undeniable truths, then rebuilding upward. In backend engineering, this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Starting with raw bytes on the wire (HTTP)&lt;/li&gt;
&lt;li&gt;Understanding how a single request flows through layers of your system&lt;/li&gt;
&lt;li&gt;Questioning every abstraction: Why do we need middleware? What problem does a request context solve? Why separate handlers from services?&lt;/li&gt;
&lt;li&gt;Building intuition for reliability, security, and performance before touching tools&lt;/li&gt;
&lt;/ul&gt;
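&lt;p&gt;As a taste of what "raw bytes on the wire" means, here is a tiny framework-free sketch (the header choices are illustrative) that assembles an HTTP/1.1 request exactly as it travels over TCP and parses a response status line:&lt;/p&gt;

```python
def build_request(host, path="/"):
    """Assemble a raw HTTP/1.1 GET request as bytes, CRLF line endings and all."""
    lines = [
        "GET " + path + " HTTP/1.1",
        "Host: " + host,
        "Connection: close",
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

def parse_status_line(raw_response):
    """Split a raw response's first line into (version, status code, reason)."""
    status_line = raw_response.split(b"\r\n", 1)[0].decode("ascii")
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason
```

&lt;p&gt;Send those request bytes over a plain TCP socket and feed the reply to the parser: every framework covered in this series is, at bottom, sugar over that exchange.&lt;/p&gt;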

&lt;p&gt;This 32-day series follows exactly that path, based on the outstanding "Backend from First Principles" roadmap by Sriniously — one of the most logically sequenced, comprehensive, and principle-focused curricula available.&lt;/p&gt;

&lt;p&gt;Over the next 32 days, we'll cover &lt;strong&gt;every core concept&lt;/strong&gt; a serious backend engineer must master, explained in depth:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Roadmap intro &amp;amp; high-level understanding
&lt;/li&gt;
&lt;li&gt;HTTP protocol
&lt;/li&gt;
&lt;li&gt;Routing
&lt;/li&gt;
&lt;li&gt;Serialization and deserialization
&lt;/li&gt;
&lt;li&gt;Authentication and authorization
&lt;/li&gt;
&lt;li&gt;Validation and transformation
&lt;/li&gt;
&lt;li&gt;Middlewares
&lt;/li&gt;
&lt;li&gt;Request context
&lt;/li&gt;
&lt;li&gt;Handlers, controllers, and services
&lt;/li&gt;
&lt;li&gt;CRUD deep dive
&lt;/li&gt;
&lt;li&gt;RESTful architecture and best practices
&lt;/li&gt;
&lt;li&gt;Databases
&lt;/li&gt;
&lt;li&gt;Business logic layer (BLL)
&lt;/li&gt;
&lt;li&gt;Caching
&lt;/li&gt;
&lt;li&gt;Transactional emails
&lt;/li&gt;
&lt;li&gt;Task queuing and scheduling
&lt;/li&gt;
&lt;li&gt;Elasticsearch
&lt;/li&gt;
&lt;li&gt;Error handling
&lt;/li&gt;
&lt;li&gt;Config management
&lt;/li&gt;
&lt;li&gt;Logging, monitoring, and observability
&lt;/li&gt;
&lt;li&gt;Graceful shutdown
&lt;/li&gt;
&lt;li&gt;Security
&lt;/li&gt;
&lt;li&gt;Scaling and performance
&lt;/li&gt;
&lt;li&gt;Concurrency and parallelism
&lt;/li&gt;
&lt;li&gt;Object storage and large files
&lt;/li&gt;
&lt;li&gt;Real-time backend systems
&lt;/li&gt;
&lt;li&gt;Testing and code quality
&lt;/li&gt;
&lt;li&gt;12 Factor App
&lt;/li&gt;
&lt;li&gt;OpenAPI standards
&lt;/li&gt;
&lt;li&gt;Webhooks
&lt;/li&gt;
&lt;li&gt;DevOps for backend engineers
&lt;/li&gt;
&lt;li&gt;(Final synthesis: Tying it all together into production systems)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Each daily article will answer three key questions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why?&lt;/strong&gt; — The fundamental problem this concept solves in real distributed systems.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What?&lt;/strong&gt; — A clear, principle-first explanation (no framework bias).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How?&lt;/strong&gt; — Practical implementation patterns that work in any modern backend (Node.js, Go, Python, etc.), with pseudocode, architecture diagrams (described), and real-world gotchas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the end of these 32 days, you'll have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A mental model that lets you read any backend codebase (open-source or proprietary) and instantly understand its structure
&lt;/li&gt;
&lt;li&gt;The ability to design systems that are reliable, observable, secure, and scalable from day one
&lt;/li&gt;
&lt;li&gt;Confidence to switch stacks or invent your own abstractions without fear
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't about becoming a "Node.js developer" or "Go expert." It's about becoming a &lt;strong&gt;backend systems thinker&lt;/strong&gt; — someone who builds software the way engineers build bridges: from bedrock physical truths upward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Tomorrow (Day 1), we start at the absolute foundation: &lt;strong&gt;HTTP Protocol&lt;/strong&gt; — where every backend request truly begins.&lt;/p&gt;

&lt;p&gt;Follow along daily. Build small prototypes as we go. Join discussions in comments or the Sriniously Discord (linked in original videos). By day 32, you'll look back and see how far first-principles thinking has taken you.&lt;/p&gt;

&lt;p&gt;Let's begin.&lt;/p&gt;

</description>
<category>firstprinciple</category>
      <category>programming</category>
      <category>backenddevelopment</category>
      <category>fullstack</category>
    </item>
    <item>
      <title>Build a Production-Ready AI Document Brain: A No-Nonsense Guide to RAG SaaS</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Fri, 30 Jan 2026 16:29:47 +0000</pubDate>
      <link>https://dev.to/girma35/build-a-production-ready-ai-document-brain-a-no-nonsense-guide-to-rag-saas-3jhm</link>
      <guid>https://dev.to/girma35/build-a-production-ready-ai-document-brain-a-no-nonsense-guide-to-rag-saas-3jhm</guid>
      <description>&lt;p&gt;Let’s be real: most companies are drowning in a sea of PDFs. Contracts, handbooks, ancient policy docs—it’s a mess. Usually, employees waste half their day hunting for one specific clause. And if you try to just throw a generic ChatGPT at the problem? You get "hallucinations" (AI-speak for "making stuff up") that could get someone fired.&lt;/p&gt;

&lt;p&gt;That’s the exact pain we’re solving today. We’re building a &lt;strong&gt;RAG-powered (Retrieval-Augmented Generation) SaaS backend&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The goal: Users upload their own docs, and the AI &lt;em&gt;only&lt;/em&gt; answers based on that data. No external guessing, just fast, accurate, cited answers. If you’re a dev looking to move past "Hello World" AI tutorials and build something that actually survives a production environment, you're in the right place.&lt;/p&gt;




&lt;h2&gt;
  
  
  The "Battle-Tested" Tech Stack
&lt;/h2&gt;

&lt;p&gt;I’ve built enough of these to know what breaks. Here’s what we’re using to keep things scalable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Engine&lt;/strong&gt;: &lt;strong&gt;FastAPI&lt;/strong&gt;. It’s fast, handles async like a champ, and the auto-docs save you hours of debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Brain (Vectors)&lt;/strong&gt;: &lt;strong&gt;PostgreSQL + pgvector&lt;/strong&gt;. Don't get distracted by "trendy" vector-only DBs for your first SaaS. &lt;code&gt;pgvector&lt;/code&gt; lets you keep your user data and your embeddings in one place. It’s persistent, SQL-friendly, and scales beautifully.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Muscle&lt;/strong&gt;: &lt;strong&gt;Redis + Celery&lt;/strong&gt;. Generating embeddings is "heavy lifting." You don't want your API hanging while you process a 50-page PDF. Celery handles the dirty work in the background.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Intelligence&lt;/strong&gt;: &lt;strong&gt;OpenAI&lt;/strong&gt; (&lt;code&gt;text-embedding-3-small&lt;/code&gt; + &lt;code&gt;GPT-4o-mini&lt;/code&gt;). It’s the gold standard for a reason, though you can swap in Gemini if you're feeling adventurous.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Glue&lt;/strong&gt;: &lt;strong&gt;Unstructured.io&lt;/strong&gt; for parsing those messy PDFs and &lt;strong&gt;JWT&lt;/strong&gt; for keeping user data private.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Let’s Build It: Step-by-Step
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. The Foundation
&lt;/h3&gt;

&lt;p&gt;First, grab your virtual env (I’m a fan of &lt;code&gt;uv&lt;/code&gt; or &lt;code&gt;poetry&lt;/code&gt; these days—standard &lt;code&gt;pip&lt;/code&gt; is a bit "last decade" for prod).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;fastapi uvicorn sqlalchemy asyncpg psycopg[binary] pgvector openai redis celery python-dotenv python-multipart PyPDF2 &lt;span class="s2"&gt;"unstructured[all-docs]"&lt;/span&gt; slowapi python-jose[cryptography] passlib[bcrypt]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Pro Tip:&lt;/strong&gt; Create your &lt;code&gt;.env&lt;/code&gt; file immediately. &lt;code&gt;OPENAI_API_KEY&lt;/code&gt;, &lt;code&gt;DATABASE_URL&lt;/code&gt;, &lt;code&gt;REDIS_URL&lt;/code&gt;. If I see these hardcoded in your repo, we’re gonna have a talk.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Setting Up the Vector Vault
&lt;/h3&gt;

&lt;p&gt;Spin up Postgres with &lt;code&gt;pgvector&lt;/code&gt; using Docker. It’s the fastest way to get moving.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; pgvector &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-secret &lt;span class="nt"&gt;-p&lt;/span&gt; 5432:5432 ankane/pgvector

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In your models, define a &lt;code&gt;Chunk&lt;/code&gt; table with a &lt;code&gt;Vector(1536)&lt;/code&gt; column. Trust me: keeping your vectors inside Postgres makes joining metadata (like "which user owns this doc?") a breeze.&lt;/p&gt;
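&lt;p&gt;For reference, here is roughly what that table looks like in SQL (everything except the &lt;code&gt;vector(1536)&lt;/code&gt; column and the pgvector extension is an assumed schema; adjust names to your models):&lt;/p&gt;

```python
# Hypothetical DDL for the Chunk table; run it once via your migration tool.
CHUNKS_DDL = """
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE chunks (
    id          BIGSERIAL PRIMARY KEY,
    user_id     BIGINT NOT NULL,   -- ownership: filtered on every query
    document_id BIGINT NOT NULL,   -- join back to the uploaded document
    content     TEXT   NOT NULL,   -- the chunk text itself
    embedding   vector(1536)       -- text-embedding-3-small dimension
);

-- Approximate nearest-neighbor index so retrieval stays fast as data grows.
CREATE INDEX chunks_embedding_idx ON chunks
    USING hnsw (embedding vector_cosine_ops);
"""
```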

&lt;h3&gt;
  
  
  3. Privacy is Non-Negotiable
&lt;/h3&gt;

&lt;p&gt;This is a SaaS, not a personal script. Every document and every text chunk &lt;em&gt;must&lt;/em&gt; be tied to a &lt;code&gt;user_id&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Rule:&lt;/strong&gt; Always filter queries with &lt;code&gt;WHERE user_id = current_user.id&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Level Up:&lt;/strong&gt; Use Postgres Row-Level Security (RLS) to ensure one user can &lt;em&gt;never&lt;/em&gt; peek at another's data.&lt;/li&gt;
&lt;/ul&gt;
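&lt;p&gt;A minimal RLS sketch for plain Postgres (the session-variable name &lt;code&gt;app.current_user_id&lt;/code&gt; is my own convention, not a standard; your API sets it per request before querying):&lt;/p&gt;

```python
# Enable RLS and lock the chunks table to its owner.
RLS_SQL = """
ALTER TABLE chunks ENABLE ROW LEVEL SECURITY;
ALTER TABLE chunks FORCE ROW LEVEL SECURITY;

CREATE POLICY chunks_owner_only ON chunks
    USING (user_id = current_setting('app.current_user_id')::bigint);
"""

# Run per request inside the transaction, parameterized (never string-formatted):
SET_USER_SQL = "SELECT set_config('app.current_user_id', %s, true);"
```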

&lt;h3&gt;
  
  
  4. The Processing Pipeline
&lt;/h3&gt;

&lt;p&gt;When a user hits &lt;code&gt;POST /upload&lt;/code&gt;, don't make them wait.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Parse:&lt;/strong&gt; Use &lt;code&gt;Unstructured&lt;/code&gt;—it’s way better than &lt;code&gt;PyPDF2&lt;/code&gt; at handling tables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chunk:&lt;/strong&gt; Don't just cut text every 500 characters. Use a recursive splitter with overlap (e.g., 800 tokens with 150 token overlap) so you don't lose the context mid-sentence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embed:&lt;/strong&gt; Send those chunks to OpenAI in batches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offload:&lt;/strong&gt; Use Celery. Your API should just say "Got it, I'm working on it!" while the background worker does the heavy lifting.&lt;/li&gt;
&lt;/ol&gt;
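&lt;p&gt;Step 2 is the easiest to get subtly wrong, so here is a character-based sketch of overlapping chunking. Production code should count tokens with a real tokenizer rather than characters; the 800/150 defaults mirror the numbers above.&lt;/p&gt;

```python
def chunk_text(text, size=800, overlap=150):
    """Split text into windows of `size` units, each sharing `overlap` units
    with the previous one so context isn't lost mid-sentence."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text), 1), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```

&lt;p&gt;With the defaults, a 2,000-character document yields three chunks whose boundaries overlap by 150 characters.&lt;/p&gt;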

&lt;h3&gt;
  
  
  5. The Magic "/ask" Endpoint
&lt;/h3&gt;

&lt;p&gt;This is where the RAG happens:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Embed the Question:&lt;/strong&gt; Turn the user's query into a vector.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic Search:&lt;/strong&gt; Use &lt;code&gt;pgvector&lt;/code&gt; to find the 5-10 most relevant chunks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Prompt:&lt;/strong&gt; "Answer ONLY using this context. If it's not there, say you don't know. Cite your sources."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache:&lt;/strong&gt; If someone asks the same thing twice, serve it from Redis. It’s cheaper and faster.&lt;/li&gt;
&lt;/ol&gt;
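&lt;p&gt;Putting steps 1 and 2 together, the retrieval query can look like the sketch below (parameter names are placeholders for your driver). I use pgvector's &lt;code&gt;cosine_distance&lt;/code&gt; function form for readability; in production prefer the equivalent &lt;code&gt;&amp;lt;=&amp;gt;&lt;/code&gt; operator, which the HNSW index is bound to.&lt;/p&gt;

```python
# Nearest-neighbor retrieval scoped to the current user (placeholder params).
TOP_K_SQL = """
SELECT content,
       1 - cosine_distance(embedding, CAST(%(qvec)s AS vector)) AS similarity
FROM chunks
WHERE user_id = %(user_id)s
ORDER BY cosine_distance(embedding, CAST(%(qvec)s AS vector))
LIMIT %(k)s;
"""

# The grounding prompt from step 3, spelled out.
SYSTEM_PROMPT = (
    "Answer ONLY using the provided context. "
    "If the answer is not in the context, say you don't know. "
    "Cite the source chunks you used."
)
```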




&lt;h2&gt;
  
  
  Lessons Learned (The Hard Way)
&lt;/h2&gt;

&lt;p&gt;I’ve made the mistakes so you don't have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The "Prototype Trap":&lt;/strong&gt; Don't use FAISS for a multi-user app. It lives in RAM. If your server restarts, your "brain" disappears. Use &lt;code&gt;pgvector&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Spinning Wheel of Death":&lt;/strong&gt; Never embed synchronously. If a user uploads a book, your API will timeout. &lt;strong&gt;Always use background tasks.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Hallucination Headache":&lt;/strong&gt; Be aggressive with your system prompt. Tell the AI: "If you aren't 100% sure based on the provided text, don't guess."&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Payoff
&lt;/h2&gt;

&lt;p&gt;When you're done, you have a system where retrieval usually takes under &lt;strong&gt;200ms&lt;/strong&gt;, and full, cited answers pop up in less than 3 seconds. It looks incredible in a portfolio because it shows you understand &lt;strong&gt;async flows, data security, and cost management.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;(This is usually where you'd drop a screenshot of your Swagger UI showing those clean &lt;code&gt;/upload&lt;/code&gt; and &lt;code&gt;/ask&lt;/code&gt; endpoints in action!)&lt;/p&gt;




&lt;h2&gt;
  
  
  Want This Built for Your Business?
&lt;/h2&gt;

&lt;p&gt;I’m a freelance developer who lives and breathes this stuff. If you need a custom RAG platform, a high-performance FastAPI backend, or just want to turn your company's messy documentation into a searchable superpower, let's talk.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Portfolio:&lt;/strong&gt; &lt;a href="https://girma.studio/" rel="noopener noreferrer"&gt;girma.studio&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upwork:&lt;/strong&gt; &lt;a href="https://www.upwork.com/freelancers/~015e94f70259a74e1d" rel="noopener noreferrer"&gt;View My Profile&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;X (Twitter):&lt;/strong&gt; &lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;@Girma880731631&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rag</category>
      <category>ai</category>
      <category>vertexai</category>
      <category>agentaichallenge</category>
    </item>
    <item>
<title>Supabase vs Appwrite: A Student &amp; Freelancer Perspective (2026)</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Thu, 29 Jan 2026 18:01:15 +0000</pubDate>
      <link>https://dev.to/girma35/supabase-vs-appwrite-a-student-freelancer-perspective-2026-iaj</link>
      <guid>https://dev.to/girma35/supabase-vs-appwrite-a-student-freelancer-perspective-2026-iaj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Supabase vs Appwrite — A Student &amp;amp; Freelancer Perspective (2026)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Many students, startups, and freelancers get stuck choosing between Supabase and Appwrite. Both are powerful open-source Backend-as-a-Service (BaaS) platforms and excellent Firebase alternatives, offering authentication, databases, storage, real-time capabilities, and more. They differ significantly, though, in database philosophy, ease of use, deployment options, and scaling approach. This article is written for students building side projects or theses, startups launching MVPs on tight budgets, and freelancers delivering client apps quickly and cost-effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Comparison Table:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Supabase&lt;/th&gt;
&lt;th&gt;Appwrite&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Database&lt;/td&gt;
&lt;td&gt;PostgreSQL (relational SQL with strong querying and Row-Level Security)&lt;/td&gt;
&lt;td&gt;MariaDB-based with document-like API (NoSQL-style flexibility)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Real-time&lt;/td&gt;
&lt;td&gt;Native and excellent (Postgres replication + WebSockets)&lt;/td&gt;
&lt;td&gt;Supported via WebSockets, reliable but may need more configuration in complex scenarios&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-hosting&lt;/td&gt;
&lt;td&gt;Possible (open-source) but more complex (full stack management)&lt;/td&gt;
&lt;td&gt;Easy and developer-friendly (Docker-based, straightforward)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud offering&lt;/td&gt;
&lt;td&gt;Managed cloud with generous free tier (projects pause after 1 week inactivity)&lt;/td&gt;
&lt;td&gt;Cloud with reliable free tier (no pausing, solid limits like 75K MAUs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Functions&lt;/td&gt;
&lt;td&gt;Edge functions (primarily Deno/JavaScript)&lt;/td&gt;
&lt;td&gt;Serverless functions with 10+ language support + marketplace/templates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pricing (paid start)&lt;/td&gt;
&lt;td&gt;$25/month Pro (100K MAUs included, 8GB DB, $10 compute credits)&lt;/td&gt;
&lt;td&gt;$25/month Pro (200K MAUs included, 150GB storage, 2TB bandwidth, higher overall limits)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community Support&lt;/td&gt;
&lt;td&gt;Vibrant and massive (~97K GitHub stars, very active ecosystem)&lt;/td&gt;
&lt;td&gt;Active and helpful (~55K GitHub stars, fast community responses)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Real-World Use Case:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;When to choose Supabase:&lt;/strong&gt; Ideal for projects needing complex relational data, advanced SQL queries, robust row-level security, or seamless real-time features out of the box—like collaborative apps, SaaS dashboards, student management systems, or analytics tools where data relationships matter. In my experience with client startups, Supabase shines for production-grade apps where you want SQL power without managing your own Postgres instance, especially if you're already familiar with relational databases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;When to choose Appwrite:&lt;/strong&gt; Best for privacy-focused projects, multi-language backends, full self-hosting control, or when you need built-in messaging (SMS/email/push), a functions marketplace, or integrated hosting. I've used it successfully for freelance mobile/web apps requiring custom logic across languages, offline-first capabilities, or deployment on cheap VPS/Docker without vendor lock-in—perfect for budget-conscious students or startups avoiding cloud costs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;My Recommendation:&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;Based on my experience with clients and student projects: Supabase is better for most student projects, small startups, and freelancers who prioritize ease of setup, powerful real-time SQL capabilities, and a managed cloud experience with excellent documentation—it's often faster to prototype and scale initially without deep infra knowledge. Appwrite edges out for freelancers needing maximum flexibility, self-hosting to control costs/privacy long-term, broader language support, or additional built-in services like messaging—especially in privacy-sensitive or multi-platform apps. If your project involves relational data or heavy real-time, start with Supabase; if you value control and modularity, go with Appwrite.&lt;/p&gt;

&lt;p&gt;Choosing the right tech for your project matters.&lt;br&gt;&lt;br&gt;
If you need help deciding or building your app:&lt;br&gt;&lt;br&gt;
Hire me on Upwork: &lt;a href="https://www.upwork.com/freelancers/%7E015e94f70259a74e1d?mp_source=share" rel="noopener noreferrer"&gt;https://www.upwork.com/freelancers/~015e94f70259a74e1d?mp_source=share&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Fiverr gigs: &lt;a href="https://www.fiverr.com/s/Q7ArERy" rel="noopener noreferrer"&gt;https://www.fiverr.com/s/Q7ArERy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View my GitHub projects: &lt;a href="https://github.com/Girma35" rel="noopener noreferrer"&gt;https://github.com/Girma35&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Follow me on X for dev insights: &lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;https://x.com/Girma880731631&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Check my portfolio: &lt;a href="https://girma.studio/" rel="noopener noreferrer"&gt;https://girma.studio/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>supabase</category>
      <category>appwritehack</category>
      <category>programming</category>
      <category>backend</category>
    </item>
    <item>
      <title>How to Build a Cinematic Ad Campaign Workflow Step-by-Step using FLORA</title>
      <dc:creator>Girma</dc:creator>
      <pubDate>Wed, 28 Jan 2026 16:47:13 +0000</pubDate>
      <link>https://dev.to/girma35/how-to-build-a-cinematic-ad-campaign-workflow-step-by-step-using-flora-1g78</link>
      <guid>https://dev.to/girma35/how-to-build-a-cinematic-ad-campaign-workflow-step-by-step-using-flora-1g78</guid>
      <description>&lt;p&gt;I've been using FLORA (florafauna.ai) a lot lately for client projects, and honestly, it's become my go-to for turning chaotic AI ideas into polished, repeatable creative pipelines. Professional designers, filmmakers, ad folks, and freelancers like us usually bounce between Midjourney, Kling, Runway, ChatGPT, etc.—and it gets messy fast: styles drift, files get lost, time disappears.&lt;/p&gt;

&lt;p&gt;This tutorial walks you through building a full &lt;strong&gt;cinematic ad campaign workflow&lt;/strong&gt; in FLORA—from brand concept all the way to export-ready video clips. It's perfect if you're intermediate/advanced and want one tool that keeps everything consistent, collaborative, and scalable.&lt;/p&gt;



&lt;p&gt;(Here's a glimpse of what a real creative workflow looks like in action—storyboard-style planning with AI outputs flowing in.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Platform&lt;/strong&gt;: FLORA (florafauna.ai) – the browser-based intelligent infinite canvas with drag-and-drop nodes/blocks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrated AI Models&lt;/strong&gt;: Seedream (images), Seedance (motion), Kling (high-quality video), Ideogram (consistent characters &amp;amp; editorial looks), Claude/GPT variants (scripting/research)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FLORA Features I Rely On&lt;/strong&gt;: Node connections, real-time team collab, asset libraries, style reference locking, auto-model picker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output/Deployment&lt;/strong&gt;: Direct export of images/videos or shareable live canvases for client reviews&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step-by-Step Implementation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Account Setup &amp;amp; Workspace&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Head over to &lt;a href="https://www.florafauna.ai/" rel="noopener noreferrer"&gt;https://www.florafauna.ai/&lt;/a&gt; and sign up (quick with Google or email). Start on the free tier to test (credits are limited), then jump to Pro (~$16–$48/month depending on usage) for serious work. Once in, create a fresh workspace and name it something like "Cinematic Ad Campaign – Skincare Brand".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Build the Core Canvas Structure&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
FLORA opens to a beautiful blank infinite canvas. Drag your first block in (a prompt/text node) and start zoning it visually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Top-left → Concept &amp;amp; Scripting (text blocks)&lt;/li&gt;
&lt;li&gt;Center → Image Generation &amp;amp; Styling&lt;/li&gt;
&lt;li&gt;Right → Video &amp;amp; Motion blocks&lt;/li&gt;
&lt;li&gt;Bottom → Variations, upscales, and export area&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layout keeps things sane even when the project grows.&lt;/p&gt;



&lt;p&gt;(Example of generative AI content creation in a visual workflow—similar vibe to what FLORA feels like once populated.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Concept &amp;amp; Script Generation&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Drop a "Text Generation" block and hook it to Claude or GPT. Try a prompt like:&lt;br&gt;&lt;br&gt;
"Write a 30-second cinematic ad script for a luxury skincare brand: emotional storytelling, focus on natural ingredients, elegant and poetic tone."&lt;/p&gt;

&lt;p&gt;The output auto-saves as a text asset. Then I manually pull out key scenes (e.g., "woman in misty forest applying cream at dawn") and turn each into its own prompt node. Connect them so everything flows downstream.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Consistent Character &amp;amp; Image Generation&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Add an "Image Generation" block → pick Ideogram because it's killer for face consistency. Upload a reference photo if you have one, or just seed a style. Connect from your scene prompts. Example prompt:&lt;br&gt;&lt;br&gt;
"Editorial portrait of elegant woman in her 30s, natural soft lighting, consistent face, skincare ad aesthetic, forest background, serene mood."&lt;/p&gt;

&lt;p&gt;Lock the look with a "Style Reference" block (brand colors, mood board vibes). Generate 4–8 variations, pick winners, and upscale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Cinematic Image-to-Video Pipeline&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Connect your best images to a "Video Generation" block. I usually go Seedream + Seedance for buttery motion, or Kling when I need that extra cinematic punch.&lt;br&gt;&lt;br&gt;
Prompt chaining example: Use the image as input + add motion: "slow zoom on face, gentle cream application, soft glowing particles, cinematic 24fps."&lt;/p&gt;

&lt;p&gt;Set clip length to 5–10 seconds. Pro tip: Use "Chain" nodes to automatically feed the best frame forward—saves tons of manual work and keeps sequences seamless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Iteration &amp;amp; Polish&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Throw in "Variation" blocks to A/B test prompts fast. Preview in real-time. For client/team feedback, just share the canvas link—they can drop comments right on nodes.&lt;br&gt;&lt;br&gt;
Once happy, compile clips into a storyboard view or export the full video assets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick Tips from Real Projects&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always start small: nail one scene, then duplicate the mini-workflow.
&lt;/li&gt;
&lt;li&gt;Lean on "Auto Mode" for model selection when you're in flow.
&lt;/li&gt;
&lt;li&gt;Save templates (like "Skincare Character Kit") for repeat clients.
&lt;/li&gt;
&lt;li&gt;Watch your credits—videos eat them fast; Pro gives priority and more generous limits.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common Mistakes (and How I Dodge Them)
&lt;/h3&gt;

&lt;p&gt;New folks usually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dump everything into one massive prompt → canvas turns into spaghetti, control vanishes. &lt;strong&gt;Fix&lt;/strong&gt;: Modular nodes + logical connections = easy debugging.&lt;/li&gt;
&lt;li&gt;Skip style references → characters/faces shift wildly between gens. &lt;strong&gt;Fix&lt;/strong&gt;: Add reference blocks right at the start—FLORA is built for this.&lt;/li&gt;
&lt;li&gt;Generate hundreds of assets without iterating → credits gone in minutes. &lt;strong&gt;Fix&lt;/strong&gt;: Low-res previews first, refine, then go high-quality.&lt;/li&gt;
&lt;li&gt;Forget to share early → client feedback comes too late. &lt;strong&gt;Fix&lt;/strong&gt;: Invite collaborators from day one.&lt;/li&gt;
&lt;li&gt;Lose track of outputs → big projects become chaos. &lt;strong&gt;Fix&lt;/strong&gt;: Use canvas folders/libraries religiously.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I avoid all this by templating early, testing tiny loops, and treating FLORA like code—clean architecture wins.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Result
&lt;/h3&gt;

&lt;p&gt;You get a beautiful, fully connected visual pipeline: script → locked-in characters → styled cinematic images → smooth video clips. Everything stays on-brand, scales to hundreds/thousands of assets, and exports ready for Instagram, TikTok, or client decks.  &lt;/p&gt;

&lt;p&gt;Real speed? I can go from a vague brief to a polished 30-second ad in under an hour—instead of days of tool-hopping. Clients rave about the live collaboration—no endless screenshot threads.&lt;/p&gt;





&lt;p&gt;(Examples of the kind of serene, cinematic skincare ad outputs you can achieve—elegant woman in nature, soft lighting, emotional feel.)&lt;/p&gt;

&lt;p&gt;Browser performance is smooth (no lag on decent machines), queues move fast on Pro, and a full campaign usually costs 200–500 credits depending on video length.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ready to Level Up Your Workflow?
&lt;/h3&gt;

&lt;p&gt;Need a similar FLORA pipeline built for your brand or agency? I’m a freelance AI Creative Workflow Developer (with roots in Flutter &amp;amp; AI-powered apps too).  &lt;/p&gt;

&lt;p&gt;I can set up custom templates, train teams, or handle full campaign production—fast, clean, and client-ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upwork&lt;/strong&gt;: &lt;a href="https://www.upwork.com/freelancers/%7E015e94f70259a74e1d?mp_source=share" rel="noopener noreferrer"&gt;https://www.upwork.com/freelancers/~015e94f70259a74e1d?mp_source=share&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Fiverr&lt;/strong&gt;: &lt;a href="https://www.fiverr.com/s/Q7ArERy" rel="noopener noreferrer"&gt;https://www.fiverr.com/s/Q7ArERy&lt;/a&gt;&lt;br&gt;&lt;br&gt;
🔗 &lt;strong&gt;Portfolio&lt;/strong&gt;: &lt;a href="https://girma.studio/" rel="noopener noreferrer"&gt;https://girma.studio/&lt;/a&gt;&lt;br&gt;&lt;br&gt;
🔗 &lt;strong&gt;X&lt;/strong&gt;: &lt;a href="https://x.com/Girma880731631" rel="noopener noreferrer"&gt;https://x.com/Girma880731631&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;DM me your idea—let's make your creative process unstoppable! &lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
