
Chanchal Singh
Day 3 — Errors & Loss Functions: Measuring How Wrong a Model Is

You’re trying to guess your monthly electricity bill.

You think:

“Maybe around ₹1,500 this month.”

The bill arrives.

Actual bill: ₹1,620

You smile and say:

“Hmm… close, but not exact.”

That gap between what you guessed and what actually happened
is called error.


So, What Is Error Really?

In simple human language:

Error is how far your guess is from reality.

That’s it.

  • Predicted number → your guess
  • Actual number → truth
  • Difference → error

Every prediction has an error.
Even humans make them.
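In code, this is a one-line subtraction. A tiny sketch using the bill numbers from above (the variable names are just illustrative):

```python
# Error for a single prediction: predicted minus actual.
predicted = 1500   # your guess (₹)
actual = 1620      # the real bill (₹)

error = predicted - actual
print(error)  # -120: you guessed ₹120 too low
```

The sign tells you the direction: negative means you guessed too low, positive means too high.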


Why Errors Are Normal (And Not a Problem)

Real life is not neat.

  • People behave differently
  • Weather changes
  • Markets move randomly

So expecting perfect predictions is unrealistic.

Machine learning doesn’t try to be perfect.
It tries to be less wrong every time.


Absolute Error: “Just Tell Me How Wrong I Am”

Imagine your friend asks:

“I don’t care if you guessed more or less.
Just tell me how off you were.”

That thinking is called Absolute Error.

If:

  • You predicted too high → the error counts
  • You predicted too low → it counts just the same

Only the size of the mistake matters.
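That idea is exactly what `abs()` does. A minimal sketch (the helper function name is my own, not from the article):

```python
# Absolute error ignores direction: too high and too low count the same.
def absolute_error(predicted, actual):
    return abs(predicted - actual)

print(absolute_error(1500, 1620))  # 120 (guessed ₹120 too low)
print(absolute_error(1740, 1620))  # 120 (guessed ₹120 too high -- same size)
```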


One Guess Is Not Enough

Now imagine this:

You guessed the bill every month for a year.

  • Some months: Very close

  • Some months: Way off

Now the question becomes:

“Overall, how good are my guesses?”

To answer that, we need a single score.

That score is called a loss.



Loss Function: The Model’s Report Card

Think of a loss function like a report card.

  • It looks at all mistakes together
  • Gives one number
  • Lower number = better performance

Models don’t feel emotions.
They only understand numbers.

Loss tells them:

“You’re doing okay”
or
“You’re doing badly — improve.”
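One simple way to build that report card is to average the absolute errors over all predictions (this is the "mean absolute error"; the monthly numbers below are made up for illustration):

```python
# A loss function condenses many errors into one score.
# Here: mean absolute error over four months of bill guesses.
predictions = [1500, 1480, 1700, 1550]
actuals     = [1620, 1500, 1650, 1900]

errors = [abs(p - a) for p, a in zip(predictions, actuals)]
mae = sum(errors) / len(errors)
print(mae)  # 135.0 -- on average, ₹135 off per month
```

One number, lower is better. That is all the model needs.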


Mean Squared Error: Why Big Mistakes Hurt More

Now here’s the clever part.

Imagine two mistakes:

  • One mistake of ₹50
  • One mistake of ₹500

Which one should worry you more?

Obviously, ₹500.

Mean Squared Error (MSE) thinks the same way.

It:

  • Keeps small mistakes small
  • Makes big mistakes very big

This forces the model to say:

“I must avoid big blunders.”

That’s why MSE is widely used in linear regression.
Not because it’s fancy.
Because it matches human common sense.

One-Line Memory Hook

"MSE shouts at big mistakes and whispers at small ones."
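You can see the "shouting" directly in the numbers. A hedged sketch (the `mse` helper is my own wording of the standard formula, not code from the article):

```python
# MSE squares each error before averaging, so big mistakes dominate the score.
def mse(preds, actuals):
    return sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(preds)

# A ₹50 mistake contributes 2,500 to the sum; a ₹500 mistake contributes 250,000.
print(50 ** 2, 500 ** 2)   # 2500 250000 -- 10x the error, 100x the penalty
print(mse([1500], [1550])) # 2500.0
```

Squaring is why a single ₹500 blunder hurts the score far more than ten ₹50 slips.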


How This Chooses the Best Line

Remember the straight line for Linear Regression from Day 2?

Linear regression:

  • Tries many possible lines
  • Calculates loss for each line
  • Picks the line with lowest loss

That’s how the “best line” is chosen.

Not by looks.
By least mistake.
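As a toy illustration of "try lines, keep the one with the least mistake", here is a tiny grid search over slopes for a line through the origin. (Real linear regression solves this more cleverly; the data and candidate slopes here are invented for the example.)

```python
# Toy search: try a few slopes for y = w * x, keep the one with the lowest MSE.
xs = [1, 2, 3]
ys = [2.1, 3.9, 6.2]   # roughly y = 2x

def mse_for_slope(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

best_w = min([1.0, 1.5, 2.0, 2.5], key=mse_for_slope)
print(best_w)  # 2.0 -- the slope with the least mistake wins
```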


Tiny Thought Experiment 🧠

If your predictions are:

  • Always off by ₹20 → acceptable
  • Sometimes off by ₹500 → dangerous

Loss functions like MSE capture exactly that intuition.


Final Takeaways (Remember These)

  • Error = mistake for one prediction
  • Loss = overall mistake score
  • MSE punishes big mistakes more

What’s Coming Next 👀

Now the big question:

How does the model actually reduce this loss?

That’s where training begins.

👉 Day 4 — Teaching the Model to Improve (Gradient Descent)

I love breaking down complex topics into simple, easy-to-understand explanations so everyone can follow along. If you're into learning AI in a beginner-friendly way, make sure to follow for more!

Connect on LinkedIn: https://www.linkedin.com/in/chanchalsingh22/
Connect on YouTube: https://www.youtube.com/@Brains_Behind_Bots
