<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vishnu Ajit</title>
    <description>The latest articles on DEV Community by Vishnu Ajit (@vishnu_ajit).</description>
    <link>https://dev.to/vishnu_ajit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2728481%2F3ea834ba-d4fa-4963-91a7-40a088d53586.jpg</url>
      <title>DEV Community: Vishnu Ajit</title>
      <link>https://dev.to/vishnu_ajit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vishnu_ajit"/>
    <language>en</language>
    <item>
      <title>4 reasons why ditching Machine Learning and falling in love with Deep Learning might be a good idea</title>
      <dc:creator>Vishnu Ajit</dc:creator>
      <pubDate>Tue, 20 Jan 2026 11:13:17 +0000</pubDate>
      <link>https://dev.to/vishnu_ajit/4-reasons-why-ditching-machine-learning-and-falling-in-love-with-deep-learning-might-be-a-good-idea-3lm1</link>
      <guid>https://dev.to/vishnu_ajit/4-reasons-why-ditching-machine-learning-and-falling-in-love-with-deep-learning-might-be-a-good-idea-3lm1</guid>
      <description>&lt;p&gt;In this project we make the AI learn how to recognize the difference between two flowers. We train the AI on images upward of 500qty . Then we give a Machine Learning AI model (ML model) the same set of two folders - Rose flower folder and Carnation flower folder.&lt;/p&gt;

&lt;p&gt;We make the ML model (Machine Learning model) analyze both folders.&lt;/p&gt;

&lt;p&gt;We make the DL model (Deep Learning model) analyze both folders.&lt;/p&gt;

&lt;p&gt;Then we interpret the results and see which model achieved higher accuracy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: We are purposefully making things difficult for the AI, which is why we chose two red flowers that are almost the same shape geometrically. If we had chosen Rose vs Jasmine, or Rose vs Tulip, the models would get the advantage of deciding which is which just by looking at the colour difference. Here that is not possible: both AI models, the Machine Learning model &amp;amp; the Deep Learning model, have to figure out which is which with pure hard work and render us our required results.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;TL;DR: Find the complete source code in this notebook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/rose_vs_carnation_ml_vs_dl.ipynb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/rose_vs_carnation_ml_vs_dl.ipynb" rel="noopener noreferrer"&gt;Complete source code in google colab notebook format can be obtained here&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dataset Exploration
&lt;/h2&gt;

&lt;p&gt;We used the &lt;a href="https://www.kaggle.com/datasets/l3llff/flowers" rel="noopener noreferrer"&gt;Flowers Kaggle Dataset&lt;/a&gt; and extracted &lt;strong&gt;Rose&lt;/strong&gt; and &lt;strong&gt;Carnation&lt;/strong&gt; images. To understand the challenge, we previewed 10 random images per class.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95amfba0q3bazs040zmy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95amfba0q3bazs040zmy.png" alt=" " width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even for humans, distinguishing these two red flowers is tricky — imagine how the AI has to work! &lt;/p&gt;

&lt;h2&gt;
  
  
  Machine Learning Approach
&lt;/h2&gt;

&lt;p&gt;We resized images to 128x128 pixels and converted them to grayscale. Then we extracted &lt;strong&gt;HOG features&lt;/strong&gt; to capture the flowers’ textures and shapes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from skimage.feature import hog

# Convert to grayscale
X_gray = np.array([cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in X])

# Extract HOG features
hog_features = []
for img in X_gray:
    features = hog(
        img,
        orientations=9,
        pixels_per_cell=(16,16),
        cells_per_block=(2,2),
        block_norm='L2-Hys'
    )
    hog_features.append(features)

hog_features = np.array(hog_features)
print("HOG features shape:", hog_features.shape)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We trained a &lt;strong&gt;Support Vector Machine (SVM)&lt;/strong&gt; classifier on these features. ML achieved an accuracy of &lt;strong&gt;74.29%&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;
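&lt;p&gt;The SVM below expects X_train, X_test, y_train, y_test from a train/test split of the HOG features; a minimal sketch of that split (using random stand-in features, since the real hog_features array comes from the step above):&lt;/p&gt;

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in for the real HOG feature matrix (n_samples x n_features)
rng = np.random.default_rng(42)
hog_features = rng.normal(size=(1000, 1764))
labels = rng.integers(0, 2, size=1000)  # 0 = rose, 1 = carnation

# 80/20 train/test split, stratified so both flowers appear in both parts
X_train, X_test, y_train, y_test = train_test_split(
    hog_features, labels, test_size=0.2, random_state=42, stratify=labels
)

print(X_train.shape, X_test.shape)  # (800, 1764) (200, 1764)
```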

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize &amp;amp; train SVM
ml_model = SVC(kernel='rbf')
ml_model.fit(X_train, y_train)

# Predictions
ml_preds = ml_model.predict(X_test)

# Accuracy
ml_accuracy = accuracy_score(y_test, ml_preds)
print("Machine Learning Accuracy: {:.2f}%".format(ml_accuracy*100))

# Optional: Confusion matrix
cm = confusion_matrix(y_test, ml_preds)
print("Confusion Matrix:\n", cm)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Machine Learning Accuracy: 74.29%
Confusion Matrix:
 [[137  58]
 [ 41 149]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ML model does reasonably well, but its performance is limited by &lt;strong&gt;handcrafted features&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep Learning Approach
&lt;/h2&gt;

&lt;p&gt;Next, we trained a &lt;strong&gt;Convolutional Neural Network (CNN)&lt;/strong&gt; that can &lt;strong&gt;automatically learn features&lt;/strong&gt; from the images. This CNN extracts petal shapes, textures, and subtle details that ML might miss.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed from the 128x128 resize used in the ML section
IMG_SIZE = 128

dl_model = models.Sequential([
    layers.Conv2D(32, (3,3), activation='relu', input_shape=(IMG_SIZE,IMG_SIZE,3)),
    layers.MaxPooling2D(2,2),

    layers.Conv2D(64, (3,3), activation='relu'),
    layers.MaxPooling2D(2,2),

    layers.Conv2D(128, (3,3), activation='relu'),
    layers.MaxPooling2D(2,2),

    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid')  # binary classification
])

dl_model.summary()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We trained the CNN for 30 epochs with &lt;strong&gt;data augmentation&lt;/strong&gt;. After evaluation, the DL model achieved &lt;strong&gt;85.71% accuracy&lt;/strong&gt;, much higher than ML.&lt;br&gt;
&lt;/p&gt;
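&lt;p&gt;"Data augmentation" here means randomly transforming the training images (flips, shifts, rotations) so the CNN sees more variety. As a framework-free illustration (not the notebook's actual augmentation code), here is the simplest such transform, a horizontal flip, in NumPy:&lt;/p&gt;

```python
import numpy as np

def horizontal_flip(img):
    """Mirror an H x W x C image left-to-right -- one of the simplest augmentations."""
    return img[:, ::-1, :]

# A tiny 2x2 RGB "image" to demonstrate on
img = np.arange(12).reshape(2, 2, 3)
flipped = horizontal_flip(img)

# Flipping twice recovers the original image
assert np.array_equal(horizontal_flip(flipped), img)
```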

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Epoch 1/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.8056 - loss: 0.6186 - val_accuracy: 0.8961 - val_loss: 0.3063
Epoch 2/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 45s 1s/step - accuracy: 0.7519 - loss: 0.4839 - val_accuracy: 0.8929 - val_loss: 0.2516
Epoch 3/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.7734 - loss: 0.4461 - val_accuracy: 0.9286 - val_loss: 0.2301
Epoch 4/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 82s 1s/step - accuracy: 0.8113 - loss: 0.4200 - val_accuracy: 0.8961 - val_loss: 0.2441
Epoch 5/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 48s 1s/step - accuracy: 0.8309 - loss: 0.3724 - val_accuracy: 0.9253 - val_loss: 0.2124
Epoch 6/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.8378 - loss: 0.3800 - val_accuracy: 0.8636 - val_loss: 0.3138
Epoch 7/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.8483 - loss: 0.3569 - val_accuracy: 0.9156 - val_loss: 0.2071
Epoch 8/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.8589 - loss: 0.3541 - val_accuracy: 0.9318 - val_loss: 0.2196
Epoch 9/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.8877 - loss: 0.2699 - val_accuracy: 0.9253 - val_loss: 0.2291
Epoch 10/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.8797 - loss: 0.2974 - val_accuracy: 0.9253 - val_loss: 0.1862
Epoch 11/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.8735 - loss: 0.3012 - val_accuracy: 0.9091 - val_loss: 0.2100
Epoch 12/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 81s 1s/step - accuracy: 0.8671 - loss: 0.2905 - val_accuracy: 0.9318 - val_loss: 0.2048
Epoch 13/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.8845 - loss: 0.2824 - val_accuracy: 0.9188 - val_loss: 0.2141
Epoch 14/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.8760 - loss: 0.2807 - val_accuracy: 0.9481 - val_loss: 0.1933
Epoch 15/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.8828 - loss: 0.2720 - val_accuracy: 0.8831 - val_loss: 0.2699
Epoch 16/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.8917 - loss: 0.2562 - val_accuracy: 0.8994 - val_loss: 0.2205
Epoch 17/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.9075 - loss: 0.2298 - val_accuracy: 0.9058 - val_loss: 0.2410
Epoch 18/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.9200 - loss: 0.2157 - val_accuracy: 0.9351 - val_loss: 0.1972
Epoch 19/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.9239 - loss: 0.1891 - val_accuracy: 0.9123 - val_loss: 0.2217
Epoch 20/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 45s 1s/step - accuracy: 0.9130 - loss: 0.2367 - val_accuracy: 0.8766 - val_loss: 0.2751
Epoch 21/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.9105 - loss: 0.2186 - val_accuracy: 0.9221 - val_loss: 0.2224
Epoch 22/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.9146 - loss: 0.1949 - val_accuracy: 0.9188 - val_loss: 0.2191
Epoch 23/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 49s 1s/step - accuracy: 0.9288 - loss: 0.1793 - val_accuracy: 0.8994 - val_loss: 0.2418
Epoch 24/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.9088 - loss: 0.2447 - val_accuracy: 0.9091 - val_loss: 0.2558
Epoch 25/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.9203 - loss: 0.1905 - val_accuracy: 0.9188 - val_loss: 0.2165
Epoch 26/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.9444 - loss: 0.1624 - val_accuracy: 0.9123 - val_loss: 0.2458
Epoch 27/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.9347 - loss: 0.1750 - val_accuracy: 0.9318 - val_loss: 0.2091
Epoch 28/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 45s 1s/step - accuracy: 0.9213 - loss: 0.1778 - val_accuracy: 0.9091 - val_loss: 0.2497
Epoch 29/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 47s 1s/step - accuracy: 0.9247 - loss: 0.1981 - val_accuracy: 0.9221 - val_loss: 0.2465
Epoch 30/30
39/39 ━━━━━━━━━━━━━━━━━━━━ 46s 1s/step - accuracy: 0.9331 - loss: 0.1870 - val_accuracy: 0.9221 - val_loss: 0.2095
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Accuracy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ML (HOG + SVM)&lt;/td&gt;
&lt;td&gt;74.29%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DL (CNN + Augmentation)&lt;/td&gt;
&lt;td&gt;85.71%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn16rq5d1gsazqg6rulax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn16rq5d1gsazqg6rulax.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;The Machine Learning model obtained an accuracy of 74.29%.&lt;br&gt;
The Deep Learning model obtained an accuracy of 85.71%.&lt;/p&gt;

&lt;p&gt;That is an increase of more than 11 percentage points.&lt;/p&gt;

&lt;h2&gt;
  
  
  4 Reasons to spend more time with DL ❤️
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Feature Engineering Limitations&lt;/strong&gt; – ML needs handcrafted features, DL learns them automatically.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; – DL scales better with large and complex datasets.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pattern Recognition&lt;/strong&gt; – DL captures subtle shapes and textures that ML may miss.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modern Workflows&lt;/strong&gt; – DL integrates seamlessly with images, audio, text, and end-to-end pipelines.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;At the end of the day, the ML model is a machine: it has to be taught what is what and which is which. The DL model, inspired by the way the human brain works, uses artificial neurons to learn things rather than relying on handcrafted features. In scenarios where images and videos are present, the DL model will surely bring about effective results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The ML model has to be told and taught what to do; the DL model recognizes patterns and learns automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep Learning surely wins over Machine Learning.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;⭐️ &lt;a href="https://www.kaggle.com/ruforavishnu" rel="noopener noreferrer"&gt;Vishnu Ajit's Kaggle url&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🛠️ &lt;a href="https://github.com/ruforavishnu" rel="noopener noreferrer"&gt;Vishnu Ajit's Github url&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>python</category>
    </item>
    <item>
      <title>Project - Supervised Learning with Python - Lets use Logistic Regression for Predicting the chances of having a Heart Attack</title>
      <dc:creator>Vishnu Ajit</dc:creator>
      <pubDate>Sat, 18 Jan 2025 12:39:49 +0000</pubDate>
      <link>https://dev.to/vishnu_ajit/project-supervised-learning-with-python-lets-use-logistic-regression-for-predicting-the-chances-4gf</link>
      <guid>https://dev.to/vishnu_ajit/project-supervised-learning-with-python-lets-use-logistic-regression-for-predicting-the-chances-4gf</guid>
      <description>&lt;p&gt;Excited to share my second tutorial along with the python notebook which i made for experimenting with machine learning algorithms! This time we are exploring a project using &lt;strong&gt;LogisticRegression&lt;/strong&gt; . It loads the dataset from csv file (dataset obtained from kaggle) and enables us &lt;strong&gt;to predict probabilities of a patient having Heart Attack&lt;/strong&gt;🧑‍💻📊&lt;/p&gt;

&lt;h3&gt;
  
  
  Concepts Used Include:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;LogisticRegression🌀&lt;/li&gt;
&lt;li&gt;StandardScaler from sklearn.preprocessing library 🎯&lt;/li&gt;
&lt;li&gt;fit_transform() method ➖&lt;/li&gt;
&lt;li&gt;train_test_split() 🌟&lt;/li&gt;
&lt;li&gt;model.predict() 🔄&lt;/li&gt;
&lt;li&gt;model.predict_proba() 🌟&lt;/li&gt;
&lt;li&gt; classification_report() 🌟&lt;/li&gt;
&lt;li&gt;roc_auc_score() 🎯&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why This Notebook:
&lt;/h4&gt;

&lt;p&gt;The main goal of this notebook is to visually understand how to use Logistic Regression, a core machine learning algorithm. Using the beauty of the Python programming language, we try to predict from a patient's hospital data whether they might have a heart attack in the future.&lt;/p&gt;

&lt;p&gt;I’ve included a link to my notebook to guide you through it.&lt;br&gt;
The link to the notebook:  &lt;a href="https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/project-supervised-learning-logistic-regression-heart-disease-prediction.ipynb" rel="noopener noreferrer"&gt;https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/project-supervised-learning-logistic-regression-heart-disease-prediction.ipynb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The link to the dataset : &lt;a href="https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/heart-disease-prediction.csv" rel="noopener noreferrer"&gt;https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/heart-disease-prediction.csv&lt;/a&gt; (Dataset obtained from kaggle)&lt;/p&gt;

&lt;p&gt;Kaggle url to the same above given dataset : &lt;a href="https://www.kaggle.com/datasets/dileep070/heart-disease-prediction-using-logistic-regression" rel="noopener noreferrer"&gt;https://www.kaggle.com/datasets/dileep070/heart-disease-prediction-using-logistic-regression&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  What’s Next:
&lt;/h4&gt;

&lt;p&gt;Over the next week, I’ll be posting more of my notebooks for other concepts in Machine Learning, as recommended by this URL: &lt;a href="https://www.kaggle.com/discussions/getting-started/554563" rel="noopener noreferrer"&gt;https://www.kaggle.com/discussions/getting-started/554563&lt;/a&gt; [Machine Learning Engineer Roadmap for 2025]&lt;br&gt;
We'll especially be looking at Supervised Learning and Unsupervised Learning to get our feet wet before we begin to walk towards the shores of greater Artificial Intelligence.&lt;/p&gt;
&lt;h4&gt;
  
  
  Who's This For:
&lt;/h4&gt;

&lt;p&gt;For anybody who loves Python and has been telling themselves they're going to learn Machine Learning one day: this is Day 2! Let's learn Machine Learning together :) Yesterday we looked at Linear Regression. Today we are exploring the concept called Logistic Regression.&lt;/p&gt;

&lt;p&gt;Feel free to explore the notebook and try out your own machine learning models! 🚀&lt;/p&gt;

&lt;p&gt;The link to the notebook:  &lt;a href="https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/project-supervised-learning-logistic-regression-heart-disease-prediction.ipynb" rel="noopener noreferrer"&gt;https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/project-supervised-learning-logistic-regression-heart-disease-prediction.ipynb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The link to the dataset : &lt;a href="https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/heart-disease-prediction.csv" rel="noopener noreferrer"&gt;https://github.com/ruforavishnu/Project_Machine_Learning/blob/master/heart-disease-prediction.csv&lt;/a&gt; (Dataset obtained from kaggle)&lt;/p&gt;

&lt;p&gt;Kaggle url to the same above given dataset : &lt;a href="https://www.kaggle.com/datasets/dileep070/heart-disease-prediction-using-logistic-regression" rel="noopener noreferrer"&gt;https://www.kaggle.com/datasets/dileep070/heart-disease-prediction-using-logistic-regression&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kaggle References: &lt;a href="https://www.kaggle.com/discussions/getting-started/554563" rel="noopener noreferrer"&gt;https://www.kaggle.com/discussions/getting-started/554563&lt;/a&gt; [Machine Learning Engineer Roadmap for 2025]&lt;/p&gt;
&lt;h2&gt;
  
  
  Now let's begin coding, shall we? :)
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Step 1.
&lt;/h3&gt;
&lt;h5&gt;
  
  
  Load the dataset from our csv file
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd



data = pd.read_csv('heart-disease-prediction.csv')

print(data.head())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  and we get the output
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;male  age  education  currentSmoker  cigsPerDay  BPMeds  prevalentStroke  \
0     1   39        4.0              0         0.0     0.0                0   
1     0   46        2.0              0         0.0     0.0                0   
2     1   48        1.0              1        20.0     0.0                0   
3     0   61        3.0              1        30.0     0.0                0   
4     0   46        3.0              1        23.0     0.0                0   

   prevalentHyp  diabetes  totChol  sysBP  diaBP    BMI  heartRate  glucose  \
0             0         0    195.0  106.0   70.0  26.97       80.0     77.0   
1             0         0    250.0  121.0   81.0  28.73       95.0     76.0   
2             0         0    245.0  127.5   80.0  25.34       75.0     70.0   
3             1         0    225.0  150.0   95.0  28.58       65.0    103.0   
4             0         0    285.0  130.0   84.0  23.10       85.0     85.0   

   TenYearCHD  
0           0  
1           0  
2           0  
3           1  
4           0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 2. Let's explore the data ourselves first
&lt;/h3&gt;
&lt;h5&gt;
  
  
  We try running data.info() on our dataset
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(data.info())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  and we get the output as
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;class 'pandas.core.frame.DataFrame'&amp;gt;
RangeIndex: 4238 entries, 0 to 4237
Data columns (total 16 columns):
 #   Column           Non-Null Count  Dtype  
---  ------           --------------  -----  
 0   male             4238 non-null   int64  
 1   age              4238 non-null   int64  
 2   education        4133 non-null   float64
 3   currentSmoker    4238 non-null   int64  
 4   cigsPerDay       4209 non-null   float64
 5   BPMeds           4185 non-null   float64
 6   prevalentStroke  4238 non-null   int64  
 7   prevalentHyp     4238 non-null   int64  
 8   diabetes         4238 non-null   int64  
 9   totChol          4188 non-null   float64
 10  sysBP            4238 non-null   float64
 11  diaBP            4238 non-null   float64
 12  BMI              4219 non-null   float64
 13  heartRate        4237 non-null   float64
 14  glucose          3850 non-null   float64
 15  TenYearCHD       4238 non-null   int64  
dtypes: float64(9), int64(7)
memory usage: 529.9 KB
None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 3. Now what do we do with missing data?
&lt;/h3&gt;
&lt;h5&gt;
  
  
  What do we do with columns in our dataset that have missing values, and how do we handle them?
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
print(data.isnull().sum())

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  and we get the output
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
male                 0
age                  0
education          105
currentSmoker        0
cigsPerDay          29
BPMeds              53
prevalentStroke      0
prevalentHyp         0
diabetes             0
totChol             50
sysBP                0
diaBP                0
BMI                 19
heartRate            1
glucose            388
TenYearCHD           0
dtype: int64

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h6&gt;
  
  
  Oh, so there are a couple of columns that have Null data or NaN values.
&lt;/h6&gt;
&lt;h5&gt;
  
  
  The fillna() method comes to rescue us.
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data.fillna(data.mean(), inplace=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
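&lt;p&gt;To see what fillna(data.mean()) actually does, here is a toy example (with made-up numbers, not rows from the Kaggle dataset):&lt;/p&gt;

```python
import numpy as np
import pandas as pd

# A tiny frame with one missing glucose reading
df = pd.DataFrame({"glucose": [77.0, np.nan, 103.0], "age": [39, 46, 61]})

# Replace each NaN with its column's mean: (77 + 103) / 2 = 90
df = df.fillna(df.mean())

print(df["glucose"].tolist())  # [77.0, 90.0, 103.0]
```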

&lt;h5&gt;
  
  
  Hmmm, did that work? How do we check? Let's run data.isnull().sum() once again.
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print(data.isnull().sum())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  and we get the output
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;male               0
age                0
education          0
currentSmoker      0
cigsPerDay         0
BPMeds             0
prevalentStroke    0
prevalentHyp       0
diabetes           0
totChol            0
sysBP              0
diaBP              0
BMI                0
heartRate          0
glucose            0
TenYearCHD         0
dtype: int64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  Yes, it worked
&lt;/h5&gt;
&lt;h3&gt;
  
  
  Step 4. Now we need to preprocess the data, don't we?
&lt;/h3&gt;
&lt;h5&gt;
  
  
  How do we do that? Let's see. OK, so what columns do we have?
&lt;/h5&gt;
&lt;h6&gt;
  
  
  data.columns to the rescue
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data.columns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h6&gt;
  
  
  and we get the output
&lt;/h6&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Index(['male', 'age', 'education', 'currentSmoker', 'cigsPerDay', 'BPMeds',
       'prevalentStroke', 'prevalentHyp', 'diabetes', 'totChol', 'sysBP',
       'diaBP', 'BMI', 'heartRate', 'glucose', 'TenYearCHD'],
      dtype='object')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  OK, that's a lot of columns! Kaggle has provided us with plenty. We don't want all of them, do we?
&lt;/h5&gt;

&lt;p&gt;Lets pick and choose.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;['age', 'totChol','sysBP','diaBP', 'cigsPerDay','BMI','glucose']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Aha, now we build our friends. The only ones who have the keys to Logistic Regression. One is a DataFrame and the other is a Series.
&lt;/h4&gt;

&lt;p&gt;Let's call them capital X and small y&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X = data[['age', 'totChol','sysBP','diaBP', 'cigsPerDay','BMI','glucose']]

y = data['TenYearCHD']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Hmmm, let's see what we have now
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X.head()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  and we get the output
&lt;/h5&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;&lt;/th&gt;
      &lt;th&gt;age&lt;/th&gt;
      &lt;th&gt;totChol&lt;/th&gt;
      &lt;th&gt;sysBP&lt;/th&gt;
      &lt;th&gt;diaBP&lt;/th&gt;
      &lt;th&gt;cigsPerDay&lt;/th&gt;
      &lt;th&gt;BMI&lt;/th&gt;
      &lt;th&gt;glucose&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;th&gt;0&lt;/th&gt;
      &lt;td&gt;39&lt;/td&gt;
      &lt;td&gt;195.0&lt;/td&gt;
      &lt;td&gt;106.0&lt;/td&gt;
      &lt;td&gt;70.0&lt;/td&gt;
      &lt;td&gt;0.0&lt;/td&gt;
      &lt;td&gt;26.97&lt;/td&gt;
      &lt;td&gt;77.0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;th&gt;1&lt;/th&gt;
      &lt;td&gt;46&lt;/td&gt;
      &lt;td&gt;250.0&lt;/td&gt;
      &lt;td&gt;121.0&lt;/td&gt;
      &lt;td&gt;81.0&lt;/td&gt;
      &lt;td&gt;0.0&lt;/td&gt;
      &lt;td&gt;28.73&lt;/td&gt;
      &lt;td&gt;76.0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;th&gt;2&lt;/th&gt;
      &lt;td&gt;48&lt;/td&gt;
      &lt;td&gt;245.0&lt;/td&gt;
      &lt;td&gt;127.5&lt;/td&gt;
      &lt;td&gt;80.0&lt;/td&gt;
      &lt;td&gt;20.0&lt;/td&gt;
      &lt;td&gt;25.34&lt;/td&gt;
      &lt;td&gt;70.0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;th&gt;3&lt;/th&gt;
      &lt;td&gt;61&lt;/td&gt;
      &lt;td&gt;225.0&lt;/td&gt;
      &lt;td&gt;150.0&lt;/td&gt;
      &lt;td&gt;95.0&lt;/td&gt;
      &lt;td&gt;30.0&lt;/td&gt;
      &lt;td&gt;28.58&lt;/td&gt;
      &lt;td&gt;103.0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;th&gt;4&lt;/th&gt;
      &lt;td&gt;46&lt;/td&gt;
      &lt;td&gt;285.0&lt;/td&gt;
      &lt;td&gt;130.0&lt;/td&gt;
      &lt;td&gt;84.0&lt;/td&gt;
      &lt;td&gt;23.0&lt;/td&gt;
      &lt;td&gt;23.10&lt;/td&gt;
      &lt;td&gt;85.0&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: We need to normalize for better model performance
&lt;/h3&gt;

&lt;h5&gt;
  
  
  What is a standard scaler?
&lt;/h5&gt;

&lt;p&gt;&lt;strong&gt;Simple explanation&lt;/strong&gt;: A standard scaler lets you compare two items that are presently on &lt;strong&gt;different scales&lt;/strong&gt; by bringing both of them to a &lt;strong&gt;similar scale&lt;/strong&gt;, so they can be compared against each other.&lt;/p&gt;

&lt;p&gt;For example: Two friends are talking about how fast a Ferrari goes and how fast a Porsche goes, but one person is using the &lt;strong&gt;m/s&lt;/strong&gt; scale and the other is using the &lt;strong&gt;km/h&lt;/strong&gt; scale. It's difficult to tell which is faster, right? So we convert both into either &lt;strong&gt;m/s&lt;/strong&gt; or &lt;strong&gt;km/h&lt;/strong&gt;, and the comparison becomes easy.&lt;/p&gt;
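&lt;p&gt;In code terms, StandardScaler rescales each column to mean 0 and standard deviation 1; a small demonstration on made-up numbers:&lt;/p&gt;

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two columns on very different scales (e.g. age in years, cholesterol in mg/dL)
X = np.array([[39.0, 195.0],
              [46.0, 250.0],
              [61.0, 225.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# After scaling, each column has mean ~0 and standard deviation ~1
print(X_scaled.mean(axis=0).round(6))  # [0. 0.]
print(X_scaled.std(axis=0).round(6))   # [1. 1.]
```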

&lt;h5&gt;
  
  
  And, Here comes the StandardScaler to our rescue
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.preprocessing import StandardScaler


scaler = StandardScaler()

X = scaler.fit_transform(X)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Now we need to split the data we have into 2 segments.
&lt;/h3&gt;

&lt;h5&gt;
  
  
  First segment is for training the machine learning model. Second segment is to test the machine learning model we trained using the first segment to really check whether the model did work.
&lt;/h5&gt;

&lt;p&gt;&lt;strong&gt;Simple explanation&lt;/strong&gt;: Kind of like asking a student who learnt from only one textbook questions from another textbook, just to check whether the student really understood the concept or simply memorized the whole thing.&lt;/p&gt;

&lt;h5&gt;
  
  
  And how do we do that? By using train_test_split()
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.model_selection import train_test_split



X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we used the &lt;strong&gt;test_size&lt;/strong&gt; parameter to reserve 20% (0.2 means 20%) of the available data for testing. That means the remaining 80% is used as training data.&lt;/p&gt;

&lt;p&gt;On successful completion of &lt;strong&gt;train_test_split&lt;/strong&gt; we get 4 variables:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;X_train&lt;/strong&gt;: the training features, 80% of the rows of our feature matrix X (remember capital X?)&lt;br&gt;
&lt;strong&gt;X_test&lt;/strong&gt;: the testing features, the remaining 20% of X&lt;br&gt;
&lt;strong&gt;y_train&lt;/strong&gt;: the training labels, 80% of our Series y (remember small y?)&lt;br&gt;
&lt;strong&gt;y_test&lt;/strong&gt;: the testing labels, the remaining 20% of y&lt;/p&gt;
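&lt;p&gt;A quick way to confirm the 80/20 split is to check the sizes; a sketch with a stand-in array of 4,238 rows (the size of this dataset):&lt;/p&gt;

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data with the same number of rows as the heart-disease dataset
X_demo = np.zeros((4238, 7))
y_demo = np.zeros(4238)

X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.2, random_state=42)

# test_size=0.2 rounds up: 848 test rows, 3390 training rows
print(len(X_tr), len(X_te))  # 3390 848
```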

&lt;h3&gt;
  
  
  Step 7. We arrive at our final milestone: training the LogisticRegression model
&lt;/h3&gt;

&lt;h6&gt;
  
  
  Let's train our model using LogisticRegression. (That is technical lingo for: let's use the power of machine learning, along with the beautiful Python programming language, to create an Artificial Intelligence model that can predict what we want it to predict.)
&lt;/h6&gt;

&lt;h5&gt;
  
  
  How do we do that? Oh just three lines of code :) 🤯🤯
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.linear_model import LogisticRegression


model = LogisticRegression()

model.fit(X_train, y_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Oh, you can sit back and pause. It's alright. That is it. Just 3 lines of Python code, and we have created an Artificial Intelligence model for ourselves. Ain't it a beauty??  💛 💛
&lt;/h5&gt;

&lt;h3&gt;
  
  
  Step 8. Let's evaluate the machine learning model we just created
&lt;/h3&gt;

&lt;h6&gt;
  
  
  We save the model's predictions to a variable called y_pred.
&lt;/h6&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y_pred = model.predict(X_test)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h6&gt;
  
  
  We need to evaluate our model.
&lt;/h6&gt;

&lt;p&gt;We use two methods for that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;classification_report&lt;/strong&gt;()&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;roc_auc_score&lt;/strong&gt;() &lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;
  
  
  Let's run that
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from sklearn.metrics import classification_report , roc_auc_score



print(classification_report(y_test, y_pred))

print('ROC-AUC-score:', roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  and we get the output as
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
              precision    recall  f1-score   support

           0       0.86      0.99      0.92       724
           1       0.55      0.05      0.09       124

    accuracy                           0.85       848
   macro avg       0.70      0.52      0.51       848
weighted avg       0.81      0.85      0.80       848

ROC-AUC-score: 0.695252628764926

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
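
&lt;p&gt;If the precision and recall columns above feel abstract, here is a tiny hand-made example (the labels are invented, not from our dataset) showing exactly what they measure for class 1:&lt;/p&gt;

```python
from sklearn.metrics import precision_score, recall_score

# Invented ground truth and predictions for eight samples
y_true = [0, 0, 0, 1, 1, 1, 1, 0]
y_hat  = [0, 0, 1, 1, 0, 1, 1, 0]

# Precision: of everything predicted as 1, how much really is 1?
# Here 4 samples were predicted 1, and 3 of them are correct.
print(precision_score(y_true, y_hat))  # 0.75

# Recall: of everything that truly is 1, how much did we catch?
# There are 4 true 1s and we caught 3 of them.
print(recall_score(y_true, y_hat))  # 0.75
```

Read through that lens, our model's recall of 0.05 for class 1 means it caught only a tiny fraction of the actual heart-disease cases, even though the overall accuracy of 0.85 looks good.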



&lt;h3&gt;
  
  
  Step 9. We are done. The project is over. 💯✅✅ Tada.
&lt;/h3&gt;

&lt;h5&gt;
  
  
  Let's test the machine learning model we created with a real person's data, shall we?
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;patient2 = [[45, 210, 130, 85, 10, 25.1, 95]]

patient2_df = pd.DataFrame(patient2, columns=['age','totChol', 'sysBP','diaBP', 'cigsPerDay', 'BMI','glucose'])

patient2_scaled = scaler.transform(patient2_df)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  We give the model our scaled data and store the result in a variable called prediction.
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prediction = model.predict(patient2_scaled)



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Finally, let's check the result using our old-fashioned print() statement ✅✅
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1=Heart Disease, 0=No Heart Disease



if prediction[0] == 1:

    print('The chances the patient might have a heart disease in the future is: True')

else:

    print('The chances the patient might have a heart disease in the future is: False')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h6&gt;
  
  
  and we get the output
&lt;/h6&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The chances the patient might have a heart disease in the future is: True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  And it feels beautiful to know that we have completed learning one more machine learning concept, doesn't it? :) 💯✅✅
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Yes, it does 💛 💛
&lt;/h5&gt;

&lt;h5&gt;
  
  
  Homework: Now, here are a few more patients' data for you to check on your own.
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;patient3 = [[65, 250, 155, 100, 15, 32.0, 150]]

patient4 = [[55, 240, 140, 90, 10, 29.5, 110]]

patient5 = [[70, 300, 160, 105, 20, 34.0, 180]]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
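
&lt;p&gt;If you want, you can check all three homework rows in one loop. The sketch below trains a stand-in model and scaler on synthetic data purely so it runs on its own; in your notebook you would reuse the model and scaler from the earlier steps:&lt;/p&gt;

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

cols = ['age', 'totChol', 'sysBP', 'diaBP', 'cigsPerDay', 'BMI', 'glucose']

# Stand-in training data (synthetic), just so this snippet is runnable
X_fake, y_fake = make_classification(n_samples=200, n_features=7, random_state=1)
X_fake = pd.DataFrame(X_fake, columns=cols)
scaler = StandardScaler().fit(X_fake)
model = LogisticRegression().fit(scaler.transform(X_fake), y_fake)

patients = {
    'patient3': [[65, 250, 155, 100, 15, 32.0, 150]],
    'patient4': [[55, 240, 140, 90, 10, 29.5, 110]],
    'patient5': [[70, 300, 160, 105, 20, 34.0, 180]],
}

# Scale each homework row exactly like the training data, then predict
for name, row in patients.items():
    pred = model.predict(scaler.transform(pd.DataFrame(row, columns=cols)))[0]
    print(name, '-&gt;', bool(pred))
```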



&lt;h5&gt;
  
  
  Now go! 💨 🏃🏃 Go, open Visual Studio Code and start coding 🤖🤖. And don't forget to come back here tomorrow for our next project. Like somebody once said: you never know what the tide might bring in tomorrow. 🌊 🔮🖥️
&lt;/h5&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Posting my first Dev Post: Project : Using Linear Regression in Machine Learning to predict house prices</title>
      <dc:creator>Vishnu Ajit</dc:creator>
      <pubDate>Fri, 17 Jan 2025 12:25:59 +0000</pubDate>
      <link>https://dev.to/vishnu_ajit/sharing-my-first-notebook-project-using-linear-regression-in-machine-learning-to-predict-house-181n</link>
      <guid>https://dev.to/vishnu_ajit/sharing-my-first-notebook-project-using-linear-regression-in-machine-learning-to-predict-house-181n</guid>
      <description>&lt;h1&gt;
  
  
  Posting My First Dev Post Notebook: Predicting house prices using LinearRegression
&lt;/h1&gt;

&lt;p&gt;Excited to share the notebook I made for experimenting with machine learning algorithms! The notebook contains code and markdown for a project using LinearRegression. It loads the load_boston dataset and lets us predict house prices from the actual house prices available 🧑‍💻📊&lt;/p&gt;

&lt;h3&gt;
  
  
  Concepts Used Include:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Train_Test_Split🌀&lt;/li&gt;
&lt;li&gt;LinearRegression 🎯&lt;/li&gt;
&lt;li&gt;mean_squared_error ➖&lt;/li&gt;
&lt;li&gt;model.coef_ 🌟&lt;/li&gt;
&lt;li&gt;model.intercept_ 🔄&lt;/li&gt;
&lt;li&gt; model.predict 🌟&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why This Notebook:
&lt;/h4&gt;

&lt;p&gt;The main goal of this notebook is to visually understand how to use LinearRegression, a machine learning algorithm, to calculate/predict house prices from the training data we have.&lt;/p&gt;

&lt;p&gt;I’ve included a link to my notebook to guide you through it: &lt;a href="https://colab.research.google.com/drive/1-fGYNuGfMXjq172ErX7TWGoSPguU863f?usp=sharing" rel="noopener noreferrer"&gt;https://colab.research.google.com/drive/1-fGYNuGfMXjq172ErX7TWGoSPguU863f?usp=sharing&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What’s Next:
&lt;/h4&gt;

&lt;p&gt;Over the next week, I’ll be posting more of my notebooks covering other Machine Learning concepts, as recommended by this URL: &lt;a href="https://www.kaggle.com/discussions/getting-started/554563" rel="noopener noreferrer"&gt;https://www.kaggle.com/discussions/getting-started/554563&lt;/a&gt; [Machine Learning Engineer Roadmap for 2025]&lt;/p&gt;

&lt;h4&gt;
  
  
  Who's This For:
&lt;/h4&gt;

&lt;p&gt;For anybody who loves Python and has been telling themselves they're gonna learn Machine Learning one day. This is for them! Let's learn Machine Learning together :)&lt;/p&gt;

&lt;p&gt;Feel free to explore the notebook and try out your own machine learning models! 🚀&lt;/p&gt;

&lt;p&gt;Notebook Link: &lt;a href="https://colab.research.google.com/drive/1-fGYNuGfMXjq172ErX7TWGoSPguU863f?usp=sharing" rel="noopener noreferrer"&gt;https://colab.research.google.com/drive/1-fGYNuGfMXjq172ErX7TWGoSPguU863f?usp=sharing&lt;/a&gt;  [Project ML - Learn Linear Regression in Machine Learning through Python]&lt;br&gt;
Kaggle References: &lt;a href="https://www.kaggle.com/discussions/getting-started/554563" rel="noopener noreferrer"&gt;https://www.kaggle.com/discussions/getting-started/554563&lt;/a&gt; [Machine Learning Engineer Roadmap for 2025]&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>datascience</category>
    </item>
  </channel>
</rss>
