Precision and recall are two core metrics used in evaluating the performance of a binary or multi-class classification model. Both focus on how well the model handles positive cases, but they answer different questions.
Precision
Precision indicates how many of the items your model predicted as positive were actually correct.
• Measures the correctness of positive predictions
• Focuses on being accurate when saying “positive”
• When your model says “Yes” or “Positive,” how often is it right?
• Penalizes False Positives (FP). Precision score is higher when there are fewer false positives.
• Higher when the model is conservative (fewer false alarms)
Precision = True Positives (TP) / (True Positives (TP) + False Positives (FP))
Use Cases:
• Spam detection: You want to avoid flagging legitimate emails as spam.
• Medical diagnostics: Sometimes, you want to avoid false alarms that could cause stress or unnecessary tests.
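As a concrete illustration, here is a minimal sketch that computes precision from raw predictions; the label vectors are made-up toy data for the example.

```python
# Minimal sketch: precision from predicted vs. actual labels (toy data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual labels (1 = positive)
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # model's predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives

precision = tp / (tp + fp)
print(f"Precision: {precision:.2f}")  # 3 / (3 + 1) = 0.75
```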
Recall
Recall indicates how many of the actual positives your model was able to find. Of all the real positive cases, how many did the model catch?
• Measures the coverage of actual positives
• Focuses on finding all real positives
• Of all the cases that are actually positive, how many did the model catch?
• Penalizes False Negatives (FN). Recall score is higher when there are fewer false negatives.
• Higher when the model is aggressive about flagging positives (fewer missed cases)
Recall = True Positives (TP) / (True Positives (TP) + False Negatives (FN))
Use Cases:
• Disease detection: You don’t want to miss any sick patients, even if it means a few healthy ones get flagged.
• Fraud detection: Better to investigate more cases than miss a real fraud.
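Continuing the same toy example, here is a minimal sketch of recall computed from counts. In practice, both metrics are also available off the shelf, for example via scikit-learn's precision_score and recall_score.

```python
# Minimal sketch: recall from the same toy labels as above.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual labels (1 = positive)
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # model's predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

recall = tp / (tp + fn)
print(f"Recall: {recall:.2f}")  # 3 / (3 + 1) = 0.75
```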
Summary
Precision measures how trustworthy the model's positive predictions are (it penalizes false positives), while recall measures how many of the actual positives the model finds (it penalizes false negatives). Which one to prioritize depends on whether false alarms or missed positives are more costly for your use case.