
Janet Wafula


My Take on Supervised Learning: Focus on Classification

Supervised learning is essentially about learning from examples. It’s called “supervised” because the data already contains the answers, guiding the model step by step until it can make predictions on its own. To me, supervised learning mirrors how humans make decisions: we don’t make choices in a vacuum; we rely on past examples. A doctor diagnoses a patient based on years of cases they’ve studied, and machines do the same with labeled data.

When we talk about classification, we’re dealing with problems where outcomes fall into distinct groups or classes. A classic case is fraud detection: the system must decide whether a transaction is fraudulent or legitimate. The challenge lies in finding patterns in the features that point to one class over the other.
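To make the fraud example concrete, here is a minimal sketch in plain Python: a handful of labeled transactions and a 1-nearest-neighbor rule that classifies a new transaction by the closest past example. The features (amount in USD, hour of day) and the data points are made up for illustration.

```python
# Toy illustration: classify a transaction as "fraud" or "legit" by comparing
# its features to labeled past examples (1-nearest-neighbor).
labeled = [
    ((12.0, 14), "legit"),   # small daytime purchase
    ((8.5, 10), "legit"),
    ((950.0, 3), "fraud"),   # large purchase at 3 a.m.
    ((780.0, 2), "fraud"),
]

def classify(tx):
    """Return the label of the closest labeled example (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(labeled, key=lambda ex: dist(ex[0], tx))[1]

print(classify((900.0, 4)))   # near the large late-night examples -> "fraud"
print(classify((15.0, 13)))   # near the small daytime examples   -> "legit"
```

Real systems use far more features and more robust models, but the core idea is the same: the labels in past data supervise the decision on new data.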

Models We Use for Classification

  • KNN (k-nearest neighbors) – classifies a point by “looking at its neighbors”: the majority label among the k closest examples wins.
  • Logistic Regression – works well for binary classification (yes/no, spam/not spam).
  • Decision Trees – a tree-like model that splits the data into smaller groups; clear, visual, and beginner-friendly.
  • Random Forests – an ensemble that builds many decision trees and combines them for better accuracy.
  • SVMs (support vector machines) – effective for complex, high-dimensional datasets.
  • Neural Networks – best for large-scale problems that need deep feature learning.
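A quick sketch of how two of the models listed above look in practice, assuming scikit-learn is installed. The dataset here is synthetic, generated just for the comparison, so the accuracies mean nothing beyond the sketch.

```python
# Train two of the classifiers above on the same synthetic binary dataset
# and compare held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class data: 500 samples, 5 features.
X, y = make_classification(n_samples=500, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(random_state=42)):
    model.fit(X_train, y_train)
    acc = model.score(X_test, y_test)   # mean accuracy on the held-out split
    print(f"{type(model).__name__}: {acc:.2f}")
```

The same `fit`/`score` pattern works for KNN, random forests, and SVMs in scikit-learn, which makes it easy to swap models in and out while you explore a problem.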

Personal Views, Insights, and Challenges I Have Faced with Classification

The biggest difficulty for me has been feature engineering. Sometimes the raw data isn’t enough, and the choice of features makes all the difference. In my last project on car price prediction, I noticed that adding more features could improve the model’s accuracy. What I love about classification is how it simplifies the messiness of life into categories.
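The kind of feature engineering described above can be as simple as deriving new columns from raw records. A minimal sketch, using hypothetical car fields (`year`, `mileage`, `engine_cc`) and a hard-coded reference year, not the actual features from the original project:

```python
# Derive features that often carry more signal than the raw fields alone.
CURRENT_YEAR = 2024  # assumption for the sketch; use the real date in practice

def engineer_features(car):
    """Return a copy of the record with derived features added."""
    features = dict(car)
    features["age"] = CURRENT_YEAR - car["year"]                        # depreciation proxy
    features["km_per_year"] = car["mileage"] / max(features["age"], 1)  # usage intensity
    features["is_high_mileage"] = car["mileage"] > 150_000              # categorical flag
    return features

car = {"year": 2016, "mileage": 96_000, "engine_cc": 1800}
print(engineer_features(car))
```

Each derived column encodes domain knowledge the raw data only implies, which is usually where the accuracy gains come from.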
