<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nusrat Begum</title>
    <description>The latest articles on DEV Community by Nusrat Begum (@nusratbegum).</description>
    <link>https://dev.to/nusratbegum</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1504586%2Fdc461ee7-40cb-407f-bd35-bd7b6000d772.jpg</url>
      <title>DEV Community: Nusrat Begum</title>
      <link>https://dev.to/nusratbegum</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nusratbegum"/>
    <language>en</language>
    <item>
      <title>Before You Touch a Neural Network, Master These 3 Classifiers</title>
      <dc:creator>Nusrat Begum</dc:creator>
      <pubDate>Mon, 19 Jan 2026 13:50:26 +0000</pubDate>
      <link>https://dev.to/nusratbegum/before-you-touch-a-neural-network-master-these-3-classifiers-2p81</link>
      <guid>https://dev.to/nusratbegum/before-you-touch-a-neural-network-master-these-3-classifiers-2p81</guid>
      <description>&lt;h2&gt;
  
  
  Why mastering classical classifiers matters before jumping into Deep Learning
&lt;/h2&gt;

&lt;p&gt;Open LinkedIn and you’ll see buzz everywhere — Transformers, LLMs, and Generative AI. It’s easy to feel left behind if you’re not fine-tuning massive models. But here’s the truth: &lt;strong&gt;complex problems don’t always need complex solutions&lt;/strong&gt;. (&lt;a href="https://medium.com/%40rupunzelnusrat5/before-you-touch-a-neural-network-master-these-3-classifiers-817ec9ef8883" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;)&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Complex problems don’t always need complex solutions.&lt;br&gt;
Think of it this way: you don’t use a flamethrower to light a candle.&lt;br&gt;
Before you try to master the “magic” of Deep Learning, you need to master the reliability of the classics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this post, we’ll break down three essential supervised learning algorithms that form the foundation of machine learning. They are fast, effective, and — unlike deep neural networks — &lt;strong&gt;interpretable&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. The Copycat: K-Nearest Neighbors (KNN)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2AkkLofiF6uB4AtMaU" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2AkkLofiF6uB4AtMaU" alt="Photo by Samuel Lopez Cruz on Unsplash&amp;lt;br&amp;gt;
" width="1400" height="933"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Visual Intuition
&lt;/h3&gt;

&lt;p&gt;Imagine you move into a new neighborhood but don’t know if it’s a “party” or “quiet” area. So you look at your three closest neighbors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Neighbor 1: Throws parties every weekend.&lt;/li&gt;
&lt;li&gt;Neighbor 2: Throws parties every weekend.&lt;/li&gt;
&lt;li&gt;Neighbor 3: Reads quietly in the garden.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since most neighbors are partying, you assume you’re in a party neighborhood. That’s the essence of &lt;strong&gt;K-Nearest Neighbors&lt;/strong&gt; — you classify a new point based on the “vote” of the &lt;em&gt;K closest labeled points&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2AdOk5BitsuicZ9UVsWdBPdA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2AdOk5BitsuicZ9UVsWdBPdA.png" alt="Photo by Samuel Lopez Cruz on Unsplash&amp;lt;br&amp;gt;
" width="765" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Implementation (Python)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 1. Import the model
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.neighbors&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;KNeighborsClassifier&lt;/span&gt;

&lt;span class="c1"&gt;# 2. Instantiate the model (choosing K=3 neighbors)
&lt;/span&gt;&lt;span class="n"&gt;knn_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;KNeighborsClassifier&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n_neighbors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 3. Fit the model to your data
&lt;/span&gt;&lt;span class="n"&gt;knn_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 4. Make predictions on new data
&lt;/span&gt;&lt;span class="n"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;knn_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_new_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Using scikit-learn makes this model quick and simple to train and use.&lt;/em&gt;&lt;/p&gt;
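&lt;p&gt;The snippet assumes &lt;code&gt;X_train&lt;/code&gt;, &lt;code&gt;y_train&lt;/code&gt;, and &lt;code&gt;X_new_data&lt;/code&gt; already exist. As a self-contained sketch, here is the same model end to end on the Iris dataset (the dataset choice and the 75/25 split are mine, not part of the original snippet):&lt;/p&gt;

```python
# Self-contained KNN example on Iris; variable names mirror the
# article's snippet (X_train, y_train, ...).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

knn_model = KNeighborsClassifier(n_neighbors=3)
knn_model.fit(X_train, y_train)

# Score on the held-out quarter instead of predicting on unseen data.
accuracy = knn_model.score(X_test, y_test)
print(f"KNN test accuracy: {accuracy:.2f}")
```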




&lt;h2&gt;
  
  
  2. The Probability Calculator: Logistic Regression
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2Ai-p44UKQS9Wm67qX" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2Ai-p44UKQS9Wm67qX" alt="Photo by Samuel Lopez Cruz on Unsplash&amp;lt;br&amp;gt;
" width="1400" height="930"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Visual Intuition
&lt;/h3&gt;

&lt;p&gt;KNN tells you what class something is — but sometimes you also want to know &lt;strong&gt;how confident&lt;/strong&gt; the model is. Logistic Regression works like a &lt;em&gt;dimmer switch&lt;/em&gt; rather than a binary light switch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2A0tiUaY0y-fHSyEW95o1uhw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2A0tiUaY0y-fHSyEW95o1uhw.png" alt="Photo by Samuel Lopez Cruz on Unsplash&amp;lt;br&amp;gt;
" width="766" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of a straight yes/no boundary, it fits an &lt;strong&gt;S-shaped curve (Sigmoid)&lt;/strong&gt; to your data. Predictions are probabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;0.99&lt;/code&gt; → very likely positive&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;0.51&lt;/code&gt; → positive but with low confidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This nuance is crucial in business contexts — e.g., deciding whether to send an email or make a call based on churn probability.&lt;/p&gt;
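&lt;p&gt;Under the hood, the S-shaped curve is just the sigmoid function, which squashes any real-valued score into a probability between 0 and 1. The &lt;code&gt;sigmoid&lt;/code&gt; helper below is illustrative; scikit-learn applies it for you inside &lt;code&gt;predict_proba&lt;/code&gt;:&lt;/p&gt;

```python
import math

def sigmoid(z):
    # Maps any real number into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))            # 0.5   (a score of 0 means "no idea")
print(round(sigmoid(2.0), 3))  # 0.881 (a strongly positive score)
```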

&lt;h3&gt;
  
  
  Code Implementation (Python)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.linear_model&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LogisticRegression&lt;/span&gt;

&lt;span class="n"&gt;log_reg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LogisticRegression&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;log_reg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;log_reg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_new_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;probabilities&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;log_reg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict_proba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_new_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Logistic Regression gives you both class predictions and associated probabilities.&lt;/em&gt;&lt;/p&gt;
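&lt;p&gt;To see those probabilities concretely, here is a self-contained sketch on a synthetic binary problem (the &lt;code&gt;make_classification&lt;/code&gt; dataset is my stand-in for real churn data):&lt;/p&gt;

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# 200 synthetic samples with 4 features and two classes.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

log_reg = LogisticRegression()
log_reg.fit(X, y)

# Each row is [P(class 0), P(class 1)]; the two entries sum to 1.
probabilities = log_reg.predict_proba(X[:3])
print(probabilities)
```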




&lt;h2&gt;
  
  
  3. The Boundary Builder: Support Vector Machine (SVM)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2A6O0wKhcjNgxymBQU" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2A6O0wKhcjNgxymBQU" alt="Photo by Samuel Lopez Cruz on Unsplash&amp;lt;br&amp;gt;
" width="1400" height="933"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Visual Intuition
&lt;/h3&gt;

&lt;p&gt;If Logistic Regression is about probabilities, &lt;strong&gt;SVM is about boundaries&lt;/strong&gt;. Imagine trying to separate red balls from blue balls on a table with a stick. You might place the stick anywhere — but SVM chooses the position that gives &lt;strong&gt;the widest possible margin&lt;/strong&gt; between classes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2AiVHuekYyo_Q6qjUzpJrOcg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F1%2AiVHuekYyo_Q6qjUzpJrOcg.png" alt="Photo by Samuel Lopez Cruz on Unsplash&amp;lt;br&amp;gt;
" width="766" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A wider margin generally helps the model generalize better to new, unseen data.&lt;/p&gt;
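&lt;p&gt;The margin is defined by only a handful of boundary points, the &lt;em&gt;support vectors&lt;/em&gt;. A short sketch on synthetic blobs (the toy dataset is my choice) shows that a fitted &lt;code&gt;SVC&lt;/code&gt; exposes them via its &lt;code&gt;support_vectors_&lt;/code&gt; attribute:&lt;/p&gt;

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters, 60 points in total.
X, y = make_blobs(n_samples=60, centers=2, random_state=0)

svm_model = SVC(kernel="linear")
svm_model.fit(X, y)

# Only the points touching the margin define the boundary.
print("support vectors:", svm_model.support_vectors_.shape[0])
```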

&lt;h3&gt;
  
  
  Code Implementation (Python)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sklearn.svm&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SVC&lt;/span&gt;  &lt;span class="c1"&gt;# "Support Vector Classifier"
&lt;/span&gt;
&lt;span class="n"&gt;svm_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SVC&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;kernel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;linear&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;svm_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_train&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y_train&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;svm_model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;X_new_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Switch kernels (e.g., RBF, polynomial) to capture nonlinear relationships.&lt;/em&gt;&lt;/p&gt;
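&lt;p&gt;To see why kernels matter, compare a linear and an RBF kernel on two concentric circles, a shape no straight line can separate (the synthetic dataset and the training-set accuracies are illustrative):&lt;/p&gt;

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# An inner and an outer ring of points: not linearly separable.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf", gamma="scale").fit(X, y).score(X, y)

print(f"linear kernel accuracy: {linear_acc:.2f}")
print(f"rbf kernel accuracy:    {rbf_acc:.2f}")
```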




&lt;h2&gt;
  
  
  Conclusion: Your New Toolkit
&lt;/h2&gt;

&lt;p&gt;You now have three powerful tools in your machine learning arsenal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;KNN&lt;/strong&gt;: intuitive baseline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logistic Regression&lt;/strong&gt;: probability-aware predictions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SVM&lt;/strong&gt;: clear, robust decision boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, these aren’t magic — they are geometry and code. Next step: &lt;strong&gt;code them yourself&lt;/strong&gt; on a simple dataset like Iris.&lt;/p&gt;
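&lt;p&gt;As a starting point for that exercise, here is a sketch that runs all three classifiers side by side on Iris (hyperparameters are scikit-learn defaults, except &lt;code&gt;max_iter&lt;/code&gt;, raised so Logistic Regression converges):&lt;/p&gt;

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="linear"),
}

# Fit and score each classifier on the same held-out split.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: {model.score(X_test, y_test):.2f}")
```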




&lt;h2&gt;
  
  
  Follow-Up
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Follow me for Part 2, where we will break the rules of geometry with Decision Trees and Neural Networks.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>python</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
