I actually did this yesterday, but I forgot to post.
Anyway, building on day 3's work on ROC curves, today was all about quantifying how good a model actually is. That's where the area under the curve (or AUC) comes in. It boils the whole ROC curve down to a single number: 0.5 means the model is no better than random guessing, and the closer you get to 1, the better your model.
The code
Once you've got your model (any model), split your data into train and test sets, fit the model, and have it predict probabilities on the test set. Then import roc_auc_score and pass it the true labels and those predicted probabilities.
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, y_pred_prob)
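For context, here's a minimal end-to-end sketch of how that fits together. The dataset and the logistic regression model are just placeholders I picked for illustration; any classifier that can output probabilities via predict_proba works the same way:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# toy data and model, purely for illustration
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# roc_auc_score wants probabilities for the positive class, not hard 0/1 predictions
y_pred_prob = model.predict_proba(X_test)[:, 1]
print(roc_auc_score(y_test, y_pred_prob))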