<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arnold Chris</title>
    <description>The latest articles on DEV Community by Arnold Chris (@oduor_arnold).</description>
    <link>https://dev.to/oduor_arnold</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1576187%2F4f3becbc-ae00-4719-8f6b-0670b48422a3.jpg</url>
      <title>DEV Community: Arnold Chris</title>
      <link>https://dev.to/oduor_arnold</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oduor_arnold"/>
    <language>en</language>
    <item>
      <title>ML Model Selection.</title>
      <dc:creator>Arnold Chris</dc:creator>
      <pubDate>Tue, 24 Sep 2024 18:51:29 +0000</pubDate>
      <link>https://dev.to/oduor_arnold/ml-model-selection-1437</link>
      <guid>https://dev.to/oduor_arnold/ml-model-selection-1437</guid>
      <description>&lt;h2&gt;
  
  
  1. Introduction
&lt;/h2&gt;

&lt;p&gt;In this article we will learn how to choose the best model from among multiple candidates with varying hyperparameters. In some cases we may have more than 50 different models, so knowing how to choose among them is important for finding the best-performing one for your dataset.&lt;/p&gt;

&lt;p&gt;We will do model selection both by selecting the best learning algorithm and its best hyperparameters.&lt;/p&gt;

&lt;p&gt;But first, what are &lt;strong&gt;hyperparameters&lt;/strong&gt;? These are additional settings chosen by the user that affect how the model learns its parameters. &lt;strong&gt;Parameters&lt;/strong&gt;, on the other hand, are what models learn during the training process.&lt;/p&gt;
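&lt;p&gt;A minimal sketch of the distinction in scikit-learn (the library used throughout this article): &lt;strong&gt;C&lt;/strong&gt; is a hyperparameter we choose up front, while &lt;strong&gt;coef_&lt;/strong&gt; and &lt;strong&gt;intercept_&lt;/strong&gt; are parameters the model learns:&lt;/p&gt;

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

features, target = load_iris(return_X_y=True)

# C is a hyperparameter: chosen by the user before training
model = LogisticRegression(C=1.0, max_iter=500, solver='liblinear')
model.fit(features, target)

# coef_ and intercept_ are parameters: learned from the data during fit
print(model.coef_.shape)  # one weight per class (one-vs-rest) per feature
```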

&lt;h2&gt;
  
  
  2. Using Exhaustive Search.
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Exhaustive Search&lt;/strong&gt; involves selecting the best model by searching over a range of hyperparameters. To do this we make use of scikit-learn's &lt;strong&gt;&lt;em&gt;GridSearchCV&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;How GridSearchCV works:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User defines sets of possible values for one or multiple hyperparameters.&lt;/li&gt;
&lt;li&gt;GridSearchCV trains a model using every value and/or combination of values.&lt;/li&gt;
&lt;li&gt;The model with the best performance is selected as the best model.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Example&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
We can set up logistic regression as our learning algorithm and tune two hyperparameters (C and the regularization penalty). We also fix two other settings: the solver and the maximum number of iterations.&lt;/p&gt;

&lt;p&gt;Now for each combination of C and regularization penalty values, we train the model and evaluate it using k-fold cross-validation.&lt;br&gt;
Since we have 10 possible values of C and 2 possible penalties, there are 10 x 2 = 20 candidate models; with 5 folds each, that makes 10 x 2 x 5 = 100 model fits, from which the best candidate is selected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')

# Create range of candidate penalty hyperparameter values
penalty = ['l1','l2']

# Create range of candidate regularization hyperparameter values
C = np.logspace(0, 4, 10)

# Create dictionary of hyperparameter candidates
hyperparameters = dict(C=C, penalty=penalty)

# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Show the best model
print(best_model.best_estimator_)

# LogisticRegression(C=7.742636826811269, max_iter=500, penalty='l1',
#                    solver='liblinear')  # Result

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
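&lt;p&gt;We can sanity-check this arithmetic on the fitted grid search: &lt;strong&gt;cv_results_&lt;/strong&gt; records one entry per candidate combination. A minimal, self-contained sketch:&lt;/p&gt;

```python
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV

iris = datasets.load_iris()
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')
hyperparameters = dict(C=np.logspace(0, 4, 10), penalty=['l1', 'l2'])

gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)
gridsearch.fit(iris.data, iris.target)

# 10 values of C x 2 penalties = 20 candidate combinations,
# each fit once per fold (5 folds) = 100 total fits
print(len(gridsearch.cv_results_['params']))      # 20
print(len(gridsearch.cv_results_['params']) * 5)  # 100
```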



&lt;p&gt;&lt;strong&gt;Getting the best model&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])

# Best Penalty: l1 #Result
# Best C: 7.742636826811269 # Result

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Using Randomized Search.
&lt;/h2&gt;

&lt;p&gt;This is commonly used when you want a computationally cheaper method than exhaustive search to select the best model.&lt;/p&gt;

&lt;p&gt;It's worth noting that RandomizedSearchCV isn't inherently faster than GridSearchCV; it often achieves comparable performance in less time simply by testing fewer combinations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;How RandomizedSearchCV works&lt;/em&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user supplies hyperparameter values as lists or distributions (e.g. normal, uniform).&lt;/li&gt;
&lt;li&gt;The algorithm randomly samples a specified number of combinations from the given values or distributions (sampling without replacement when all values are given as lists).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')

# Create range of candidate regularization penalty hyperparameter values
penalty = ['l1', 'l2']

# Create distribution of candidate regularization hyperparameter values
C = uniform(loc=0, scale=4)

# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)

# Create randomized search
randomizedsearch = RandomizedSearchCV(
    logistic, hyperparameters, random_state=1, n_iter=100, cv=5, verbose=0,
    n_jobs=-1)

# Fit randomized search
best_model = randomizedsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# LogisticRegression(C=1.668088018810296, max_iter=500, penalty='l1',
#                    solver='liblinear')  # Result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Getting the best model:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])

# Best Penalty: l1 # Result
# Best C: 1.668088018810296 # Result

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The number of candidate models trained is specified by the &lt;strong&gt;n_iter&lt;/strong&gt; (number of iterations) setting.&lt;/p&gt;
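&lt;p&gt;For example (a reduced sketch, using &lt;strong&gt;n_iter=10&lt;/strong&gt; rather than the 100 used above), we can confirm that &lt;strong&gt;n_iter&lt;/strong&gt; controls how many candidates are sampled:&lt;/p&gt;

```python
from scipy.stats import uniform
from sklearn import linear_model, datasets
from sklearn.model_selection import RandomizedSearchCV

iris = datasets.load_iris()
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')
hyperparameters = dict(C=uniform(loc=0, scale=4), penalty=['l1', 'l2'])

# n_iter=10 means only 10 sampled candidates are trained
randomizedsearch = RandomizedSearchCV(
    logistic, hyperparameters, random_state=1, n_iter=10, cv=5, verbose=0)
best_model = randomizedsearch.fit(iris.data, iris.target)

print(len(best_model.cv_results_['params']))  # 10
```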

&lt;h2&gt;
  
  
  4. Selecting the Best Models from Multiple Learning Algorithms.
&lt;/h2&gt;

&lt;p&gt;In this part we will look at how to select the best model by searching over a range of learning algorithms and their respective hyperparameters.&lt;/p&gt;

&lt;p&gt;We can do this by simply creating a dictionary of candidate learning algorithms and their hyperparameters to use as the search space for &lt;strong&gt;&lt;em&gt;GridSearchCV&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We can define a search space that includes two learning algorithms.&lt;/li&gt;
&lt;li&gt;We specify the hyperparameters and define their candidate values using the format &lt;strong&gt;&lt;em&gt;classifier__[hyperparameter name]&lt;/em&gt;&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Set random seed
np.random.seed(0)

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create a pipeline
pipe = Pipeline([("classifier", RandomForestClassifier())])

# Create dictionary with candidate learning algorithms and their hyperparameters
search_space = [{"classifier": [LogisticRegression(max_iter=500,
                                                   solver='liblinear')],
                 "classifier__penalty": ['l1', 'l2'],
                 "classifier__C": np.logspace(0, 4, 10)},
                {"classifier": [RandomForestClassifier()],
                 "classifier__n_estimators": [10, 100, 1000],
                 "classifier__max_features": [1, 2, 3]}]

# Create grid search
gridsearch = GridSearchCV(pipe, search_space, cv=5, verbose=0)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# Pipeline(steps=[('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=500,
#                                     penalty='l1', solver='liblinear'))])  # Result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The best model:&lt;/strong&gt;&lt;br&gt;
After the search is complete, we can use &lt;strong&gt;best_estimator_&lt;/strong&gt; to view the best model's learning algorithm and hyperparameters.&lt;/p&gt;
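&lt;p&gt;As a sketch (with a deliberately reduced search space so it runs quickly; the full space above works the same way), we can pull the winning algorithm out of the fitted pipeline via &lt;strong&gt;named_steps&lt;/strong&gt;:&lt;/p&gt;

```python
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

iris = datasets.load_iris()
pipe = Pipeline([("classifier", RandomForestClassifier())])

# Reduced search space: 4 logistic regression candidates + 1 small forest
search_space = [{"classifier": [LogisticRegression(max_iter=500,
                                                   solver='liblinear')],
                 "classifier__C": np.logspace(0, 4, 4)},
                {"classifier": [RandomForestClassifier(n_estimators=10)]}]

best_model = GridSearchCV(pipe, search_space, cv=5).fit(iris.data, iris.target)

# The winning algorithm is the "classifier" step of the best pipeline
print(type(best_model.best_estimator_.named_steps["classifier"]).__name__)
```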
&lt;h2&gt;
  
  
  5. Selecting the Best Model When Preprocessing.
&lt;/h2&gt;

&lt;p&gt;Sometimes we might want to include a preprocessing step during model selection.&lt;br&gt;
The best solution is to create a pipeline that includes the preprocessing step and any of its parameters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The First Challenge&lt;/strong&gt;:&lt;br&gt;
GridSearchCV uses cross-validation to determine the model with the highest performance.&lt;/p&gt;

&lt;p&gt;However, in cross-validation we pretend that the fold held out as the test set has never been seen, and thus is not part of fitting any preprocessing steps (e.g. scaling or standardization).&lt;/p&gt;

&lt;p&gt;For this reason the preprocessing steps must be a part of the set of actions taken by GridSearchCV.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution&lt;/strong&gt;&lt;br&gt;
Scikit-learn provides &lt;strong&gt;FeatureUnion&lt;/strong&gt;, which allows us to combine multiple preprocessing actions properly.&lt;br&gt;
&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We use &lt;strong&gt;FeatureUnion&lt;/strong&gt; to combine two preprocessing steps: standardizing the feature values (&lt;strong&gt;&lt;em&gt;StandardScaler&lt;/em&gt;&lt;/strong&gt;) and principal component analysis (&lt;strong&gt;&lt;em&gt;PCA&lt;/em&gt;&lt;/strong&gt;). This object is called preprocess and contains both of our preprocessing steps.&lt;/li&gt;
&lt;li&gt;Next we include preprocess in our pipeline with our learning algorithm.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This allows us to outsource the proper handling of fitting, transforming, and training the models with combinations of hyperparameters to scikit-learn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second Challenge:&lt;/strong&gt;&lt;br&gt;
Some preprocessing methods, such as PCA, have their own parameters. Dimensionality reduction with PCA requires the user to define the number of principal components used to produce the transformed feature set. Ideally, we would choose the number of components that produces the best model for some evaluation metric.&lt;br&gt;
&lt;strong&gt;Solution.&lt;/strong&gt;&lt;br&gt;
In scikit-learn, when we include candidate component values in the search space, they are treated like any other hyperparameter to be searched over.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Set random seed
np.random.seed(0)

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create a preprocessing object that includes StandardScaler features and PCA
preprocess = FeatureUnion([("std", StandardScaler()), ("pca", PCA())])

# Create a pipeline
pipe = Pipeline([("preprocess", preprocess),
               ("classifier", LogisticRegression(max_iter=1000,
               solver='liblinear'))])

# Create space of candidate values
search_space = [{"preprocess__pca__n_components": [1, 2, 3],
                 "classifier__penalty": ["l1", "l2"],
                 "classifier__C": np.logspace(0, 4, 10)}]

# Create grid search
clf = GridSearchCV(pipe, search_space, cv=5, verbose=0, n_jobs=-1)

# Fit grid search
best_model = clf.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# Pipeline(steps=[('preprocess',
#                  FeatureUnion(transformer_list=[('std', StandardScaler()),
#                                                 ('pca', PCA(n_components=1))])),
#                 ('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=1000,
#                                     penalty='l1', solver='liblinear'))])  # Result


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the model selection is complete we can view the preprocessing values that produced the best model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preprocessing steps that produced the best model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# View best n_components

best_model.best_estimator_.get_params() 
# ['preprocess__pca__n_components'] # Results

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Speeding Up Model Selection with Parallelization.
&lt;/h2&gt;

&lt;p&gt;Sometimes you need to reduce the time it takes to select a model.&lt;br&gt;
We can do this by training multiple models simultaneously, using all the cores on our machine by setting &lt;strong&gt;n_jobs=-1&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, 
                                           solver='liblinear')

# Create range of candidate regularization penalty hyperparameter values
penalty = ["l1", "l2"]

# Create range of candidate values for C
C = np.logspace(0, 4, 1000)

# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)

# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, n_jobs=-1, 
                             verbose=1)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# Fitting 5 folds for each of 2000 candidates, totalling 10000 fits
# LogisticRegression(C=5.926151812475554, max_iter=500, penalty='l1',
#                    solver='liblinear')

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Speeding Up Model Selection (Algorithm-Specific Methods).
&lt;/h2&gt;

&lt;p&gt;This is a way to speed up model selection without using additional compute power.&lt;/p&gt;

&lt;p&gt;This is possible because scikit-learn has model-specific cross-validation hyperparameter tuning.&lt;/p&gt;

&lt;p&gt;Sometimes the characteristics of a learning algorithm allow us to search for the best hyperparameters significantly faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;LogisticRegression&lt;/strong&gt; is used to train a standard logistic regression classifier.&lt;br&gt;
&lt;strong&gt;LogisticRegressionCV&lt;/strong&gt; implements an efficient cross-validated logistic regression classifier that can identify the optimum value of the hyperparameter C.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn import linear_model, datasets

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create cross-validated logistic regression
logit = linear_model.LogisticRegressionCV(Cs=100, max_iter=500,
                                            solver='liblinear')

# Train model
logit.fit(features, target)

# Print model
print(logit)

# LogisticRegressionCV(Cs=100, max_iter=500, solver='liblinear')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; A major downside to LogisticRegressionCV is that it can only search a range of values for C. This limitation is common to many of scikit-learn's model-specific cross-validated approaches.&lt;/p&gt;
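&lt;p&gt;After fitting, the chosen value of C can be read from the &lt;strong&gt;C_&lt;/strong&gt; attribute (one entry per class for multiclass one-vs-rest problems):&lt;/p&gt;

```python
from sklearn import linear_model, datasets

iris = datasets.load_iris()

# Cross-validated logistic regression: C is tuned internally
logit = linear_model.LogisticRegressionCV(Cs=100, max_iter=500,
                                          solver='liblinear')
logit.fit(iris.data, iris.target)

# C_ holds the C value selected by cross-validation for each class
print(logit.C_)
```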

&lt;p&gt;I hope this article was helpful as a quick overview of how to select a machine learning model.&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Strategies in Evaluating Machine Learning Models.</title>
      <dc:creator>Arnold Chris</dc:creator>
      <pubDate>Sat, 21 Sep 2024 11:29:11 +0000</pubDate>
      <link>https://dev.to/oduor_arnold/strategies-in-evaluating-machine-learning-models-3jeg</link>
      <guid>https://dev.to/oduor_arnold/strategies-in-evaluating-machine-learning-models-3jeg</guid>
      <description>&lt;p&gt;We are going to look at the strategies in evaluating regression, classification and clustering models at the end we will also look at creating reports of evaluation metrics and visualizing the effects of hyperparameter values.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Models are only as useful as the quality of their predictions, and thus fundamentally our goal is not to create models, but to create high-quality models.&lt;br&gt;
Let's begin:&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;1. Cross-Validating Models&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Our method of evaluation should help us understand how well our models are able to make predictions from data they have never seen before.&lt;br&gt;
One strategy might be to hold out a slice of data for testing. This is called &lt;strong&gt;validation&lt;/strong&gt; (or hold-out). In validation our observations (features and targets) are split into two sets: the &lt;strong&gt;&lt;em&gt;training set&lt;/em&gt;&lt;/strong&gt; and the &lt;strong&gt;&lt;em&gt;test set&lt;/em&gt;&lt;/strong&gt;. Next we train the model using the training set, using the features and target vector to teach the model how to make the best predictions. Finally we simulate having never-before-seen external data by evaluating how our model performs on our test set.&lt;/p&gt;

&lt;p&gt;The following sample code explains, step by step, how to achieve this using the digits dataset, except we've used an improved version of this approach as explained below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn import datasets
from sklearn import metrics
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Load digits dataset
digits = datasets.load_digits()

# Create features matrix
features = digits.data

# Create target vector
target = digits.target

# Create standardizer
standardizer = StandardScaler()

# Create logistic regression object
logit = LogisticRegression()

# Create a pipeline that standardizes, then runs logistic regression
pipeline = make_pipeline(standardizer, logit)

# Create k-fold cross-validation
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# Conduct k-fold cross-validation
cv_results = cross_val_score(pipeline,        # Pipeline
                             features,        # Feature matrix
                             target,          # Target vector
                             cv=kf,           # Cross-validation technique
                             scoring="accuracy",  # Performance metric
                             n_jobs=-1)       # Use all CPU cores

# Calculate mean
cv_results.mean()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case I got a mean of 0.96995&lt;/p&gt;

&lt;h3&gt;
  
  
  Weaknesses of this approach
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;The performance of the model can be highly dependent on which few observations were selected for the test set.&lt;/li&gt;
&lt;li&gt;The model is not being trained using all the available data, and it's not being evaluated on all the available data.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  K-fold Cross Validation (KFCV)
&lt;/h3&gt;

&lt;p&gt;In this method we split the data into k parts called &lt;strong&gt;&lt;em&gt;folds&lt;/em&gt;&lt;/strong&gt;. The model is trained using k - 1 folds (combined into one training set), with the remaining fold used as the test set. The performance of the model across each of the k iterations is then averaged to produce an overall measurement.&lt;/p&gt;

&lt;p&gt;In our code sample above we conducted k-fold cross-validation using five folds and outputted the evaluation scores to &lt;strong&gt;cv_results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The result is an array containing the score for each of the 5 folds.&lt;br&gt;
I got:&lt;/p&gt;

&lt;p&gt;array([0.96111111, 0.96388889, 0.98050139, 0.97214485, 0.97214485])&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Points to consider when using KFCV&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It assumes that each observation was created independently of the others (i.e. the data is independent and identically distributed [IID]). If the data is IID, it is better to shuffle observations when assigning them to folds. In scikit-learn we can set &lt;strong&gt;shuffle=True&lt;/strong&gt; to perform shuffling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When using KFCV to evaluate a classifier, it's beneficial to have folds containing roughly the same percentage of observations from each of the different target classes (called &lt;em&gt;&lt;strong&gt;stratified k-fold&lt;/strong&gt;&lt;/em&gt;). For example, if our target vector contained gender and 80% were male, then each fold would contain 80% male and 20% female observations. In scikit-learn this is done by replacing the &lt;strong&gt;KFold&lt;/strong&gt; class with &lt;strong&gt;StratifiedKFold&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When using validation sets or cross-validation, it is important to preprocess data based on the training set and then apply those transformations to both the training and test sets. E.g. when we &lt;strong&gt;fit&lt;/strong&gt; our standardization object, &lt;strong&gt;standardizer&lt;/strong&gt;, we calculate the mean and variance of only the training set. Then we apply that transformation (using &lt;em&gt;&lt;strong&gt;transform&lt;/strong&gt;&lt;/em&gt;) to both the training and test sets, as shown in the code block below:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Import library
from sklearn.model_selection import train_test_split

# Create training and test sets
features_train, features_test, target_train, target_test = train_test_split(
features, target, test_size=0.1, random_state=1)

# Fit standardizer to training set
standardizer.fit(features_train)

# Apply to both training and test sets which can then be used to train models
features_train_std = standardizer.transform(features_train)
features_test_std = standardizer.transform(features_test)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The reason for this is that we are pretending the test set is unknown data.&lt;br&gt;
If we fit our preprocessors using observations from both the training and test sets, some of the information from the test set leaks into our training set.&lt;br&gt;
This rule applies to any preprocessing step, such as feature selection.&lt;/p&gt;
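&lt;p&gt;The stratified variant mentioned in point 2 can be sketched as follows; note that putting the standardizer inside the pipeline also takes care of fitting it on the training folds only:&lt;/p&gt;

```python
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

digits = datasets.load_digits()

# The standardizer is fit on the training folds only, inside cross_val_score
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# StratifiedKFold keeps each fold's class proportions close to the dataset's
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
cv_results = cross_val_score(pipeline, digits.data, digits.target,
                             cv=skf, scoring="accuracy", n_jobs=-1)

print(cv_results.mean())
```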
&lt;h2&gt;
  
  
  2. Creating a Baseline Regression Model.
&lt;/h2&gt;

&lt;p&gt;This is a common method where we create a baseline regression model to use as a comparison against other models that we train.&lt;br&gt;
We can use scikit-learn's &lt;strong&gt;DummyRegressor&lt;/strong&gt;. This often can be useful to simulate a "naive" existing prediction process in a product or system.&lt;/p&gt;

&lt;p&gt;For example, a product might have been originally hardcoded to assume that all new users will spend $100 in the first month, regardless&lt;br&gt;
of their features. &lt;br&gt;
If we encode that assumption into a baseline model, we are able to concretely state the benefits of using a machine learning approach by comparing the dummy model’s score with that of a trained model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn.datasets import load_wine
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import train_test_split

# Load data
wine = load_wine()

# Create features
features, target = wine.data, wine.target

# Make test and training split
features_train, features_test, target_train, target_test = train_test_split(
features, target, random_state=0)

# Create a dummy regressor
dummy = DummyRegressor(strategy='mean')

# "Train" dummy regressor
dummy.fit(features_train, target_train)

# Get R-squared score
dummy.score(features_test, target_test)

-0.0480213580840978  #Result.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To compare, we train our model and evaluate the performance score:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load library
from sklearn.linear_model import LinearRegression

# Train simple linear regression model
ols = LinearRegression()
ols.fit(features_train, target_train)

# Get R-squared score
ols.score(features_test, target_test)

0.804353263176954 #Result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DummyRegressor&lt;/strong&gt; uses the &lt;strong&gt;strategy&lt;/strong&gt; parameter to set the method of making predictions, including the mean or median value of the training set. Furthermore, if we set &lt;strong&gt;strategy&lt;/strong&gt; to constant and use the constant parameter, we can make the dummy regressor predict some constant value for every observation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create dummy regressor that predicts 1s for everything
clf = DummyRegressor(strategy='constant', constant=1)
clf.fit(features_train, target_train)

# Evaluate score
clf.score(features_test, target_test)
-0.06299212598425186  #Result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One small note regarding &lt;strong&gt;score&lt;/strong&gt;. By default it returns the coefficient of determination (R-squared).&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Creating a Baseline Classification Model.
&lt;/h2&gt;

&lt;p&gt;This is basically the same concept as creating a regression baseline model with a few changes.&lt;/p&gt;

&lt;p&gt;Note that a common measure of a classifier's performance is how much better it is than random guessing.&lt;/p&gt;

&lt;p&gt;Scikit-learn's &lt;strong&gt;DummyClassifier&lt;/strong&gt; makes this comparison easy.&lt;br&gt;
The following code block shows how to effectively create the dummy classifier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn.datasets import load_iris
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split

# Load data
iris = load_iris()

# Create target vector and feature matrix
features, target = iris.data, iris.target

# Split into training and test set
features_train, features_test, target_train, target_test = train_test_split(
features, target, random_state=0)

# Create dummy classifier
dummy = DummyClassifier(strategy='uniform', random_state=1)

# "Train" model
dummy.fit(features_train, target_train)
# Get accuracy score

dummy.score(features_test, target_test)

0.42105263157894735  # Result.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By comparing the baseline classifier to our trained classifier, we can see the improvement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load the library
from sklearn.ensemble import RandomForestClassifier

# Create classifier
classifier = RandomForestClassifier()

# Train model.
classifier.fit(features_train, target_train)

# Get accuracy score.
classifier.score(features_test, target_test)

0.9736842105263158   # Result.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;strategy&lt;/strong&gt; parameter gives us a number of options for generating values.&lt;br&gt;
There are two particularly useful strategies.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stratified&lt;/strong&gt; makes predictions proportional to the class proportions of the training set's target vector (e.g. if 20% of the observations in the training data are women, then &lt;strong&gt;DummyClassifier&lt;/strong&gt; will predict women 20% of the time).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Uniform&lt;/strong&gt; will generate predictions uniformly at random between the different classes. E.g. if 20% of observations are women and 80% are men, &lt;strong&gt;uniform&lt;/strong&gt; will produce predictions that are 50% women and 50% men.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
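&lt;p&gt;A small illustrative sketch of the two strategies on a made-up 80/20 target:&lt;/p&gt;

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# A toy imbalanced target: 80% class 0, 20% class 1
X = np.zeros((1000, 1))
y = np.array([0] * 800 + [1] * 200)

stratified = DummyClassifier(strategy='stratified', random_state=0).fit(X, y)
uniform = DummyClassifier(strategy='uniform', random_state=0).fit(X, y)

# 'stratified' predicts class 1 roughly 20% of the time;
# 'uniform' predicts each class roughly 50% of the time
print(stratified.predict(X).mean())
print(uniform.predict(X).mean())
```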
&lt;h2&gt;
  
  
  4. Evaluating Binary Classifier Predictions.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Given a trained classification model, you want to evaluate its quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We can define one of a number of performance metrics, including accuracy, precision, recall and F1.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Accuracy&lt;/strong&gt;&lt;/em&gt; is a common performance metric, it's simply the proportion of observations predicted correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3e77nj5sgxnxgy0j4dub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3e77nj5sgxnxgy0j4dub.png" alt="Image description" width="347" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Where:&lt;br&gt;
    &lt;em&gt;&lt;strong&gt;TP&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Number of true positives, observations that are part of the &lt;em&gt;&lt;strong&gt;positive&lt;/strong&gt;&lt;/em&gt; class that we predicted correctly.&lt;br&gt;
   &lt;em&gt;&lt;strong&gt;TN&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
True negatives, observations that are part of the &lt;em&gt;&lt;strong&gt;negative&lt;/strong&gt;&lt;/em&gt; class that we predicted correctly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;FP&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The number of false positives, also called a &lt;em&gt;Type I error&lt;/em&gt;. These are observations predicted to be part of the positive class but that are actually part of the &lt;em&gt;&lt;strong&gt;negative&lt;/strong&gt;&lt;/em&gt; class.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;FN&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
The number of false negatives, also called a &lt;em&gt;&lt;strong&gt;Type II error&lt;/strong&gt;&lt;/em&gt;. These are observations that are predicted to be part of the &lt;em&gt;&lt;strong&gt;negative&lt;/strong&gt;&lt;/em&gt; class but are actually part of the &lt;em&gt;&lt;strong&gt;positive&lt;/strong&gt;&lt;/em&gt; class.&lt;/p&gt;
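&lt;p&gt;These four counts can be read directly off a confusion matrix, and accuracy follows from them. A minimal sketch with made-up label vectors (purely illustrative):&lt;br&gt;
&lt;/p&gt;

```python
# Count TP, TN, FP, FN for a toy set of true and predicted labels
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# For binary labels 0/1, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tp, tn, fp, fn)  # 3 3 1 1

# Accuracy is the proportion of correct predictions
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 0.75
```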

&lt;p&gt;We can measure accuracy using cross-validation (scikit-learn defaults to five folds) by setting &lt;strong&gt;scoring="accuracy"&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Generate features matrix and target vector
X, y = make_classification(n_samples = 10000,
                           n_features = 3,
                           n_informative = 3,
                           n_redundant = 0,
                           n_classes = 2,
                           random_state = 1)

# Create logistic regression
logit = LogisticRegression()

# Cross-validate model using accuracy
cross_val_score(logit, X, y, scoring="accuracy")

# array([0.9555, 0.95 , 0.9585, 0.9555, 0.956 ])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Accuracy works well with balanced data. However, in the presence of imbalanced classes (say 99.9% of observations belong to one class and 0.1% to the other), accuracy suffers from a paradox where a model can be highly accurate yet lack any predictive power.&lt;/p&gt;

&lt;p&gt;For example, imagine we are trying to predict the presence of a very rare cancer that occurs in 0.1% of the population. &lt;/p&gt;

&lt;p&gt;After training our model, we find the accuracy is at 95%. However, 99.9% of people do not have the cancer: if we simply created a&lt;br&gt;
model that "&lt;em&gt;&lt;strong&gt;predicted&lt;/strong&gt;&lt;/em&gt;" that nobody had that form of cancer, our &lt;em&gt;&lt;strong&gt;naive&lt;/strong&gt;&lt;/em&gt; model would be 4.9% more accurate, but it clearly is not able to predict anything. For this reason, we are often motivated to use other metrics such as precision, recall, and the F1 score.&lt;/p&gt;
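&lt;p&gt;The paradox is easy to reproduce. In this sketch we build a hypothetical label vector with 0.1% positives and score a "model" that always predicts the majority class:&lt;br&gt;
&lt;/p&gt;

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical population: 10 positives (has cancer) among 10,000 people
y_true = np.array([1] * 10 + [0] * 9990)

# A naive "model" that predicts nobody has cancer
y_naive = np.zeros_like(y_true)

print(accuracy_score(y_true, y_naive))  # 0.999 -- looks excellent
print(recall_score(y_true, y_naive))    # 0.0   -- detects no cases at all
```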

&lt;p&gt;&lt;strong&gt;Precision&lt;/strong&gt; is the proportion of observations predicted to be &lt;em&gt;&lt;strong&gt;positive&lt;/strong&gt;&lt;/em&gt; that are actually positive, i.e. how likely we are to be right when we predict something is positive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kpbagy2z4rfsgl4isq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6kpbagy2z4rfsgl4isq1.png" alt="Image description" width="225" height="77"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cross-validate model using precision
cross_val_score(logit, X, y, scoring="precision")

# array([0.95963673, 0.94820717, 0.9635996 , 0.96149949, 0.96060606])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Models with high precision are &lt;em&gt;&lt;strong&gt;pessimistic&lt;/strong&gt;&lt;/em&gt; in that they predict an observation is of the positive class only when they are very certain about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recall&lt;/strong&gt; is the proportion of truly positive observations that the model predicts correctly. It measures the model's ability to identify observations of the positive class. Models with high recall are &lt;em&gt;&lt;strong&gt;optimistic&lt;/strong&gt;&lt;/em&gt; in that they have a low bar for predicting that an observation is of the positive class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cross-validate model using recall
cross_val_score(logit, X, y, scoring="recall")

# array([0.951, 0.952, 0.953, 0.949, 0.951])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszfyfziv8pawxg0q81lc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fszfyfziv8pawxg0q81lc.png" alt="Image description" width="183" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Precision and recall each tell only half the story, so we often want some balance between the two, and this role is filled by the F1 score.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The F1 score&lt;/strong&gt; is the &lt;em&gt;&lt;strong&gt;harmonic mean&lt;/strong&gt;&lt;/em&gt; (a kind of average used for ratios) of precision and recall.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cross-validate model using F1
cross_val_score(logit, X, y, scoring="f1")

# array([0.95529884, 0.9500998 , 0.95827049, 0.95520886, 0.95577889])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfydm7ffdphz663npaag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfydm7ffdphz663npaag.png" alt="Image description" width="254" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This score is a measure of correctness achieved in positive prediction; that is, of the observations labelled as positive, how many are actually positive.&lt;/p&gt;
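&lt;p&gt;When predictions are already in hand, precision, recall, and F1 can also be computed directly with &lt;strong&gt;precision_score&lt;/strong&gt;, &lt;strong&gt;recall_score&lt;/strong&gt;, and &lt;strong&gt;f1_score&lt;/strong&gt;, and the harmonic-mean relationship checked by hand (toy labels for illustration):&lt;br&gt;
&lt;/p&gt;

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]

p = precision_score(y_true, y_pred)  # 0.75
r = recall_score(y_true, y_pred)     # 0.75
f1 = f1_score(y_true, y_pred)        # 0.75

# F1 is the harmonic mean of precision and recall
assert f1 == 2 * p * r / (p + r)
```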

&lt;p&gt;As an alternative to using cross_val_score, if we already have the true y values and the predicted y values, we can calculate metrics such as accuracy directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Create training and test split
X_train, X_test, y_train, y_test = train_test_split(X, y, 
                                            test_size=0.1,
                                            random_state=1)

# Predict values for training target vector
y_hat = logit.fit(X_train, y_train).predict(X_test)

# Calculate accuracy
accuracy_score(y_test, y_hat)
#   0.947
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Evaluating Binary Classifier Thresholds.
&lt;/h2&gt;

&lt;p&gt;This is when we want to evaluate a binary classifier at various probability thresholds.&lt;/p&gt;

&lt;p&gt;To do this we can use the &lt;em&gt;&lt;strong&gt;receiver operating characteristic (ROC)&lt;/strong&gt;&lt;/em&gt; curve to evaluate the quality of the binary classifier. &lt;strong&gt;roc_curve&lt;/strong&gt; helps us calculate the true and false positives at each threshold and then plot them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

# Create feature matrix and target vector
features, target = make_classification(n_samples=10000,
                         n_features=10,
                         n_classes=2,
                         n_informative=3,
                         random_state=3)

# Split into training and test sets
features_train, features_test, target_train, target_test = train_test_split(features, target, test_size=0.1, random_state=1)

# Create classifier
logit = LogisticRegression()

# Train model
logit.fit(features_train, target_train)

# Get predicted probabilities
target_probabilities = logit.predict_proba(features_test)[:,1]

# Create true and false positive rates
false_positive_rate, true_positive_rate, threshold = roc_curve(
    target_test,
    target_probabilities
)

# Plot ROC curve
plt.title("Receiver Operating Characteristic")
plt.plot(false_positive_rate, true_positive_rate)
plt.plot([0, 1], ls="--")
plt.plot([0, 0], [1, 0] , c=".7"), plt.plot([1, 1] , c=".7")
plt.ylabel("True Positive Rate")
plt.xlabel("False Positive Rate")
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The graph should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy60pfq9pulg74sjkrz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpy60pfq9pulg74sjkrz2.png" alt="Image description" width="489" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ROC compares the presence of true positives and false positives at every probability threshold ( probability that an observation is predicted to be a class ).&lt;/p&gt;

&lt;p&gt;A classifier that predicts every observation &lt;strong&gt;correctly&lt;/strong&gt; would look like the solid light gray line in the ROC output in the previous&lt;br&gt;
figure, going straight up to the top immediately. &lt;br&gt;
A classifier that predicts at &lt;strong&gt;random&lt;/strong&gt; will appear as the diagonal line. The better the model, the closer it is to the solid line.&lt;/p&gt;
&lt;h3&gt;
  
  
  Predicted Probabilities.
&lt;/h3&gt;

&lt;p&gt;Until now we have only examined models based on the values they predict. &lt;br&gt;
However, in many learning algorithms, those predicted values are based on probability estimates. That is, each observation is given an explicit probability of belonging in each class. &lt;br&gt;
In our solution, we can use &lt;strong&gt;predict_proba&lt;/strong&gt; to see the predicted probabilities for the first observation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get predicted probabilities
logit.predict_proba(features_test)[0:1]

# array([[0.86891533, 0.13108467]])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the classes using &lt;strong&gt;classes_:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;logit.classes_
# array([0, 1])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example the first observation has ~87% chance of being in the negative class (0) and a 13% chance of being in the positive class (1).&lt;/p&gt;

&lt;p&gt;By default scikit-learn predicts an observation is part of the positive class if the probability is greater than 0.5 (the &lt;em&gt;&lt;strong&gt;threshold&lt;/strong&gt;&lt;/em&gt;). However, instead of the middle ground, we might want to explicitly bias our model toward a different threshold for substantive reasons, e.g. if a false positive is very costly to our company, we might prefer a model with a high probability threshold.&lt;/p&gt;
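&lt;p&gt;scikit-learn does not expose the 0.5 cutoff as a parameter, but we can impose our own threshold on the output of &lt;strong&gt;predict_proba&lt;/strong&gt;. A sketch (the 0.8 cutoff and the data are invented for illustration):&lt;br&gt;
&lt;/p&gt;

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=1)
logit = LogisticRegression().fit(X, y)

# Probability of the positive class for each observation
probs = logit.predict_proba(X)[:, 1]

# Raise the threshold from the default 0.5 to a stricter 0.8
strict_preds = np.greater_equal(probs, 0.8).astype(int)

# The stricter cutoff labels fewer observations as positive
print((strict_preds == 1).sum(), (logit.predict(X) == 1).sum())
```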

&lt;p&gt;We fail to predict some positives, but when an observation is predicted to be positive, we can be very confident that the prediction is correct. The trade-off is represented in the &lt;em&gt;&lt;strong&gt;true positive rate (TPR)&lt;/strong&gt;&lt;/em&gt; and the &lt;em&gt;&lt;strong&gt;false positive rate (FPR)&lt;/strong&gt;&lt;/em&gt;.&lt;br&gt;
The TPR is the number of observations correctly predicted true divided by all true positive observations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesyqnu0xgiojezwesf8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesyqnu0xgiojezwesf8j.png" alt="Image description" width="179" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FPR&lt;/strong&gt; is the number of incorrectly predicted positives divided by all true negative observations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjql5yelavtc3zesup8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjql5yelavtc3zesup8o.png" alt="Image description" width="211" height="74"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ROC curve represents the respective &lt;strong&gt;TPR&lt;/strong&gt; and &lt;strong&gt;FPR&lt;/strong&gt; for every probability threshold.&lt;br&gt;
In our solution a threshold of roughly 0.50 has a &lt;strong&gt;TPR&lt;/strong&gt; of ~0.83 and an &lt;strong&gt;FPR&lt;/strong&gt; of ~0.16&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("Threshold:", threshold[124])
print("True Positive Rate:", true_positive_rate[124])
print("False Positive Rate:", false_positive_rate[124])

# Threshold: 0.5008252732632008
# True Positive Rate: 0.8346938775510204
# False Positive Rate: 0.1607843137254902

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, if we increase the threshold to &lt;em&gt;&lt;strong&gt;~80%&lt;/strong&gt;&lt;/em&gt; (i.e., increase how certain the model has to be before it predicts an observation as positive) the TPR &lt;em&gt;&lt;strong&gt;drops significantly&lt;/strong&gt;&lt;/em&gt; but so does the FPR:&lt;/p&gt;

&lt;p&gt;This is because our higher requirement for being predicted to be in the positive class has caused the model to not identify a number of positive observations (the lower TPR) but has also reduced the noise from negative observations being predicted as positive (the lower FPR).&lt;/p&gt;

&lt;p&gt;The ROC curve can also serve as a general metric for a model: the better a model is, the higher its curve and thus the greater the area under the curve.&lt;br&gt;
Thus it's common to calculate the area under the ROC curve &lt;strong&gt;(AUC ROC)&lt;/strong&gt; to judge the overall quality of a model at all possible thresholds. The closer the AUC ROC is to 1, the better the model.&lt;/p&gt;

&lt;p&gt;We can make this calculation like shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Calculate area under curve
roc_auc_score(target_test, target_probabilities)

# 0.9073389355742297

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Evaluating Multiclass Classifier Predictions.
&lt;/h2&gt;

&lt;p&gt;This is useful when we have a model that predicts three or more classes and want to evaluate the model's performance.&lt;/p&gt;

&lt;p&gt;The solution is to use cross-validation with an evaluation metric capable of handling more than two classes like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Generate features matrix and target vector
features, target = make_classification(n_samples = 10000,
                                         n_features = 3,
                                         n_informative = 3,
                                         n_redundant = 0,
                                         n_classes = 3,
                                         random_state = 1)

# Create logistic regression
logit = LogisticRegression()

# Cross-validate model using accuracy
cross_val_score(logit, features, target, scoring='accuracy')

# array([0.841 , 0.829 , 0.8265, 0.8155, 0.82 ])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For &lt;strong&gt;balanced classes&lt;/strong&gt; (roughly equal numbers of observations in each class of the target vector) we should consider &lt;strong&gt;accuracy&lt;/strong&gt; as a simple and interpretable choice of evaluation metric. However, if the classes are imbalanced we should be inclined to use other evaluation metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Many of scikit-learn's built-in metrics, including precision, recall, and the F1 score, were originally designed for binary classifiers. However, we can apply them to multiclass settings by treating the data as a set of binary problems.&lt;/p&gt;

&lt;p&gt;Thus we can apply the metrics to each class as if it were the only class in the data, and then aggregate the evaluation scores for all the classes by averaging them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cross-validate model using macro averaged F1 score
cross_val_score(logit, features, target, scoring='f1_macro')

# array([0.84061272, 0.82895312, 0.82625661, 0.81515121, 0.81992692])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, &lt;strong&gt;macro&lt;/strong&gt; refers to the method used to average the evaluation scores from the classes.&lt;/p&gt;

&lt;p&gt;The options are macro, weighted, and micro:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;macro&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Calculate the mean of metric scores for each class, weighting each class equally.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;weighted&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Calculate the mean of metric scores for each class, weighting each class proportional to its size in the data.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;micro&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Calculate the mean of metric scores for each observation-class combination.&lt;/p&gt;
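&lt;p&gt;The three strategies can be compared on the same predictions by passing the &lt;strong&gt;average&lt;/strong&gt; parameter to a metric function such as &lt;strong&gt;f1_score&lt;/strong&gt;. A sketch on a toy three-class problem:&lt;br&gt;
&lt;/p&gt;

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy three-class problem (parameters chosen only for illustration)
X, y = make_classification(n_samples=1000, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

y_hat = LogisticRegression().fit(X_train, y_train).predict(X_test)

# Same predictions, three ways of averaging the per-class scores
for average in ["macro", "weighted", "micro"]:
    print(average, f1_score(y_test, y_hat, average=average))
```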
&lt;h2&gt;
  
  
  7. Visualizing a Classifier's Performance.
&lt;/h2&gt;

&lt;p&gt;We do this when we have predicted classes and true classes of the test data and we want to visually compare the model's quality.&lt;/p&gt;

&lt;p&gt;We can start by creating a confusion matrix, which compares the predicted classes and true classes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import pandas as pd

# Load data
iris = datasets.load_iris()

# Create features matrix
features = iris.data

# Create target vector
target = iris.target

# Create list of target class names
class_names = iris.target_names

# Create training and test set
features_train, features_test, target_train, target_test = train_test_split(features, target, random_state=2)

# Create logistic regression
classifier = LogisticRegression()

# Train model and make predictions
# Train model and make predictions
target_predicted = classifier.fit(features_train,
                                  target_train).predict(features_test)

# Create confusion matrix
matrix = confusion_matrix(target_test, target_predicted)

# Create pandas dataframe
dataframe = pd.DataFrame(matrix, index=class_names, columns=class_names)

# Create heatmap
sns.heatmap(dataframe, annot=True, cbar=None, cmap="Blues")
plt.title("Confusion Matrix"), plt.tight_layout()
plt.ylabel("True Class"), plt.xlabel("Predicted Class")
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhefev8y94n8nke6mpqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjhefev8y94n8nke6mpqq.png" alt="Image description" width="506" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the major benefits of confusion matrices is their interpretability. Each column of the matrix (often represented as a heatmap) represents predicted classes, while every row shows true classes.&lt;/p&gt;

&lt;p&gt;In the solution, the top-left cell is the number of observations predicted to be Iris setosa (indicated by the column) that are actually Iris setosa (indicated by the row). This means the model accurately predicted all Iris setosa flowers.&lt;/p&gt;

&lt;p&gt;However, the model does not do as well at predicting Iris virginica. The bottom-right cell indicates that the model successfully predicted eleven observations were Iris virginica, but (looking one cell up) predicted one flower to be virginica that was actually Iris versicolor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Noteworthy things about confusion matrices.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A perfect model will have values along the diagonal and zeros everywhere else. A bad model will have the observation counts spread  evenly around cells.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A confusion matrix helps us see where the model was wrong and how wrong it was, i.e we can look at the patterns of misclassification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Confusion matrices work with any number of classes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
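&lt;p&gt;Point 2 in practice: once the matrix is computed, per-class recall is simply the diagonal divided by each row's total, which makes the misclassification patterns easy to quantify. A sketch with a hypothetical 3×3 matrix:&lt;br&gt;
&lt;/p&gt;

```python
import numpy as np

# Hypothetical confusion matrix: rows are true classes, columns predictions
matrix = np.array([[13, 0, 0],
                   [0, 15, 1],
                   [0, 1, 8]])

# Correct predictions sit on the diagonal; each row sums to a true-class count
per_class_recall = matrix.diagonal() / matrix.sum(axis=1)
print(per_class_recall)  # 1.0, 0.9375, 0.888...
```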

&lt;h2&gt;
  
  
  8. Evaluating Regression Models.
&lt;/h2&gt;

&lt;p&gt;The simplest method of evaluating a regression model is to calculate the &lt;em&gt;&lt;strong&gt;Mean Squared Error (MSE)&lt;/strong&gt;&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

# Generate features matrix, target vector
features, target = make_regression(n_samples = 100,
                                   n_features = 3,
                                   n_informative = 3,
                                   n_targets = 1,
                                   noise = 50,
                                   coef = False,
                                   random_state = 1)

# Create a linear regression object
ols = LinearRegression()

# Cross-validate the linear regression using (negative) MSE
cross_val_score(ols, features, target, scoring='neg_mean_squared_error')

# array([-1974.65337976, -2004.54137625, -3935.19355723, -1060.04361386, -1598.74104702])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another common regression metric is the coefficient of determination, &lt;em&gt;&lt;strong&gt;(R squared)&lt;/strong&gt;&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cross-validate the linear regression using R-squared
cross_val_score(ols, features, target, scoring='r2')

# array([0.8622399 , 0.85838075, 0.74723548, 0.91354743, 0.84469331])

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;MSE&lt;/strong&gt; is a measurement of the squared sum of all distances between predicted and true values. The higher the value of MSE, the greater the total squared error and thus the worse the model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F704779ceu0msj0bao0ii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F704779ceu0msj0bao0ii.png" alt="Image description" width="210" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mathematical benefits of squaring the error term&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Forces all the error values to be positive.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It penalizes a few large errors more than many small errors, even if the total absolute error is the same.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
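&lt;p&gt;The second point is easy to verify numerically: two residual vectors with the same total absolute error (made-up numbers) get different MSEs, and the single large error is penalized more heavily:&lt;br&gt;
&lt;/p&gt;

```python
import numpy as np

# Two sets of residuals, both with a total absolute error of 4.0
many_small = np.array([2.0, 2.0, 0.0, 0.0])
one_large = np.array([4.0, 0.0, 0.0, 0.0])

print(np.mean(many_small ** 2))  # 2.0
print(np.mean(one_large ** 2))   # 4.0 -- one big error costs twice as much
```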

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;
By default in scikit-learn, arguments of the &lt;strong&gt;&lt;em&gt;scoring&lt;/em&gt;&lt;/strong&gt; parameter assume that higher values are better than lower values.&lt;/p&gt;

&lt;p&gt;However, this is not the case for MSE, where higher values mean a worse model. For this reason, scikit-learn looks at the negative MSE via the&lt;br&gt;
&lt;strong&gt;&lt;em&gt;neg_mean_squared_error&lt;/em&gt;&lt;/strong&gt; argument.&lt;/p&gt;
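&lt;p&gt;In practice this just means the returned scores are negative, and we flip the sign back when reporting. A sketch (regression data generated as above):&lt;br&gt;
&lt;/p&gt;

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

features, target = make_regression(n_samples=100, n_features=3, noise=50,
                                   random_state=1)
scores = cross_val_score(LinearRegression(), features, target,
                         scoring="neg_mean_squared_error")

# All scores come back negative; negate to recover the mean MSE
print(-scores.mean())
```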

&lt;p&gt;&lt;strong&gt;R squared&lt;/strong&gt; measures the amount of variance in the target vector that is explained by the model:-&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vdy8s8y4ij97t8u9zl1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vdy8s8y4ij97t8u9zl1.png" alt="Image description" width="217" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;where &lt;strong&gt;yi&lt;/strong&gt; is the true target value of the &lt;strong&gt;ith&lt;/strong&gt; observation, &lt;strong&gt;ŷi&lt;/strong&gt; is the predicted value for the &lt;strong&gt;ith&lt;/strong&gt; observation, and &lt;strong&gt;ȳ&lt;/strong&gt; is the mean value of the target vector. The closer R² is to &lt;strong&gt;1.0&lt;/strong&gt;, the better the model.&lt;/p&gt;
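&lt;p&gt;The formula can be checked against scikit-learn's &lt;strong&gt;r2_score&lt;/strong&gt; with a few toy numbers:&lt;br&gt;
&lt;/p&gt;

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 2.0, 7.0])
y_pred = np.array([2.5, 5.0, 2.0, 8.0])

# R squared = 1 - (residual sum of squares) / (total sum of squares)
rss = np.sum((y_true - y_pred) ** 2)
tss = np.sum((y_true - y_true.mean()) ** 2)
manual_r2 = 1 - rss / tss

print(round(manual_r2, 4))                 # 0.9153
print(round(r2_score(y_true, y_pred), 4))  # 0.9153
```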
&lt;h2&gt;
  
  
  9. Evaluating Clustering Models.
&lt;/h2&gt;

&lt;p&gt;Involves evaluating the performance of an unsupervised learning algorithm.&lt;/p&gt;

&lt;p&gt;We can use the &lt;strong&gt;&lt;em&gt;silhouette coefficient&lt;/em&gt;&lt;/strong&gt; to measure the quality of the clusters (not the predictive performance).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate features matrix
features, _ = make_blobs(n_samples = 1000,
                         n_features = 10,
                         centers = 2,
                         cluster_std = 0.5,
                         shuffle = True,
                         random_state = 1)
# Cluster data using k-means to predict classes
model = KMeans(n_clusters=2, random_state=1).fit(features)

# Get predicted classes
target_predicted = model.labels_

# Evaluate model
silhouette_score(features, target_predicted)

# 0.8916265564072141

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Supervised model evaluation&lt;/em&gt;&lt;/strong&gt; compares predictions (e.g classes or quantitative values) with the corresponding true values in the target vector.&lt;/p&gt;

&lt;p&gt;The most common motivation for using clustering is that your data doesn't have a target vector.&lt;/p&gt;

&lt;p&gt;While we cannot evaluate predictions versus true values if we don't have a target vector, we can evaluate the nature of the clusters themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Silhouette coefficients&lt;/em&gt;&lt;/strong&gt; provide a single value for measuring both of the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;"Good" clusters which have very small distances between observations in the same cluster (dense clusters).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Large distances between differnt clusters (i.e well spaced clusters).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Formally, the &lt;em&gt;&lt;strong&gt;ith&lt;/strong&gt;&lt;/em&gt; observation’s silhouette coefficient&lt;br&gt;
is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0ud0xf18kzprxcuficb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0ud0xf18kzprxcuficb.png" alt="Image description" width="152" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;where &lt;strong&gt;&lt;em&gt;si&lt;/em&gt;&lt;/strong&gt; is the silhouette coefficient for observation &lt;strong&gt;&lt;em&gt;i&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;ai&lt;/em&gt;&lt;/strong&gt; is the mean distance between &lt;strong&gt;&lt;em&gt;i&lt;/em&gt;&lt;/strong&gt; and all observations of the same class, and &lt;strong&gt;&lt;em&gt;bi&lt;/em&gt;&lt;/strong&gt; is the mean distance between &lt;strong&gt;&lt;em&gt;i&lt;/em&gt;&lt;/strong&gt; and all observations from the closest cluster of a different class.&lt;/p&gt;

&lt;p&gt;The value returned by &lt;strong&gt;&lt;em&gt;silhouette_score&lt;/em&gt;&lt;/strong&gt; is the mean silhouette coefficient for all observations. Silhouette coefficients range between –1 and 1, with 1 indicating dense, well-separated clusters.&lt;/p&gt;
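&lt;p&gt;Per-observation coefficients are also available via &lt;strong&gt;silhouette_samples&lt;/strong&gt;, and their mean is exactly what &lt;strong&gt;silhouette_score&lt;/strong&gt; returns. A sketch on the same kind of blob data as above:&lt;br&gt;
&lt;/p&gt;

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_samples, silhouette_score

features, _ = make_blobs(n_samples=1000, n_features=10, centers=2,
                         cluster_std=0.5, random_state=1)
labels = KMeans(n_clusters=2, random_state=1, n_init=10).fit_predict(features)

# One coefficient per observation; their mean is the overall score
per_observation = silhouette_samples(features, labels)
print(np.isclose(per_observation.mean(), silhouette_score(features, labels)))
# True
```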
&lt;h2&gt;
  
  
  10. Creating a Custom Evaluation Metric.
&lt;/h2&gt;

&lt;p&gt;Sometimes you might want to evaluate a model using a metric you created.&lt;/p&gt;

&lt;p&gt;Create the metric as a function and convert it into a scorer function using scikit-learn’s &lt;strong&gt;&lt;em&gt;make_scorer&lt;/em&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn.metrics import make_scorer, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.datasets import make_regression

# Generate features matrix and target vector
features, target = make_regression(n_samples = 100,
                                   n_features = 3,
                                   random_state = 1)

# Create training set and test set
features_train, features_test, target_train, target_test = train_test_split(features, target, test_size=0.10, random_state=1)

# Create custom metric
def custom_metric(target_test, target_predicted):
    # Calculate R-squared score
    r2 = r2_score(target_test, target_predicted)

    # Return R-squared score
    return r2

# Make scorer and define that higher scores are better
score = make_scorer(custom_metric, greater_is_better=True)

# Create ridge regression object
classifier = Ridge()

# Train ridge regression model
model = classifier.fit(features_train, target_train)

# Apply custom scorer
score(model, features_test, target_test)

#  0.9997906102882058
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we define a function that takes in two targets - the ground truth target vector and our predicted values - and outputs some score.&lt;/p&gt;

&lt;p&gt;Second, we use &lt;strong&gt;make_scorer&lt;/strong&gt; to create a scorer object, making sure to specify whether higher or lower scores are desirable (using the &lt;strong&gt;greater_is_better&lt;/strong&gt; parameter).&lt;/p&gt;

&lt;p&gt;The custom metric in the solution (&lt;strong&gt;custom_metric&lt;/strong&gt;) is a toy example since it simply wraps a built-in metric for calculating the R2 score. In a real-world situation, we would replace the &lt;strong&gt;custom_metric&lt;/strong&gt; function with whatever custom metric we wanted. However, we can see that the custom metric that calculates R2 does work by comparing the results to scikit-learn’s built-in &lt;strong&gt;r2_score&lt;/strong&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Predict values
target_predicted = model.predict(features_test)

# Calculate R-squared score
r2_score(target_test, target_predicted)

#  0.9997906102882058

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
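&lt;p&gt;To make the idea more concrete, here is a small sketch of a genuinely custom metric: root mean squared error (RMSE), where lower scores are better, so we pass &lt;code&gt;greater_is_better=False&lt;/code&gt; (scikit-learn then negates the score so that "higher is better" still holds for model selection utilities):&lt;/p&gt;

```python
# A sketch of a custom metric where LOWER is better: RMSE
import numpy as np
from sklearn.metrics import make_scorer
from sklearn.linear_model import Ridge
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

def rmse(target_test, target_predicted):
    # Root mean squared error
    return np.sqrt(np.mean((target_test - target_predicted) ** 2))

# greater_is_better=False makes the scorer return the NEGATED RMSE
rmse_scorer = make_scorer(rmse, greater_is_better=False)

features, target = make_regression(n_samples=100, n_features=3,
                                   random_state=1)
features_train, features_test, target_train, target_test = train_test_split(
    features, target, test_size=0.10, random_state=1)

model = Ridge().fit(features_train, target_train)

# Returns the negated RMSE, so the result is always &lt;= 0
score = rmse_scorer(model, features_test, target_test)
```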



&lt;h2&gt;
  
  
  11. Visualizing the Effect of Training Set Size.
&lt;/h2&gt;

&lt;p&gt;In some cases you would like to evaluate the effect of the number of observations in your training set on some metric (accuracy, F1, etc.).&lt;/p&gt;

&lt;p&gt;We can then plot the accuracy against the training set size:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve

# Load data
digits = load_digits()
# Create feature matrix and target vector
features, target = digits.data, digits.target
# Create CV training and test scores for various training set sizes
train_sizes, train_scores, test_scores = learning_curve(
              # Classifier
              RandomForestClassifier(),
              # Feature matrix
              features,
              # Target vector
              target,
              # Number of folds
              cv=10,
              # Performance metric
              scoring='accuracy',
              # Use all computer cores
              n_jobs=-1,
              # 50 training set sizes, from 1% to 100% of the data
              train_sizes=np.linspace(0.01, 1.0, 50))

# Create means and standard deviations of training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)

# Create means and standard deviations of test set scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

# Draw lines
plt.plot(train_sizes, train_mean, '--', color="#111111", label="Training score")
plt.plot(train_sizes, test_mean, color="#111111", label="Cross-validation score")

# Draw bands
plt.fill_between(train_sizes, train_mean - train_std,
                 train_mean + train_std, color="#DDDDDD")
plt.fill_between(train_sizes, test_mean - test_std,
                 test_mean + test_std, color="#DDDDDD")

# Create plot
plt.title("Learning Curve")
plt.xlabel("Training Set Size")
plt.ylabel("Accuracy Score")
plt.legend(loc="best")
plt.tight_layout()
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6uyw1zyoaannve33kjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6uyw1zyoaannve33kjq.png" alt="Image description" width="562" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning curves&lt;/strong&gt; visualize the performance (e.g., accuracy, recall) of a model on the training set and during cross-validation as the number of observations in the training set increases. &lt;br&gt;
They are commonly used to determine if our learning algorithms would benefit from gathering additional training data.&lt;/p&gt;

&lt;p&gt;In our solution, we plot the accuracy of a random forest classifier at 50 different training set sizes, ranging from 1% of observations to 100%. &lt;br&gt;
The increasing accuracy score of the cross-validated models tells us that we would likely benefit from additional observations (although in practice this might not be feasible).&lt;/p&gt;
&lt;h2&gt;
  
  
  12. Creating a Text Report of Evaluation Metrics.
&lt;/h2&gt;

&lt;p&gt;Text reports are important when we want a quick description of a classifier's performance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Load data
iris = datasets.load_iris()

# Create features matrix
features = iris.data

# Create target vector
target = iris.target

# Create list of target class names
class_names = iris.target_names

# Create training and test set
features_train, features_test, target_train, target_test = train_test_split(features, target, random_state=0)

# Create logistic regression
classifier = LogisticRegression()

# Train model and make predictions
model = classifier.fit(features_train, target_train)
target_predicted = model.predict(features_test)

# Create a classification report
print(classification_report(target_test, target_predicted, target_names=class_names))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwek7nom1ekxni9nl72x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwek7nom1ekxni9nl72x.png" alt="Image description" width="410" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  13. Visualizing the Effects of Hyperparameter Values
&lt;/h2&gt;

&lt;p&gt;We want to understand how the performance of a model changes as the value of some hyperparameter changes.&lt;/p&gt;

&lt;p&gt;We can plot the hyperparameter against the model accuracy (validation curve).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Load libraries
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

# Load data
digits = load_digits()

# Create feature matrix and target vector
features, target = digits.data, digits.target

# Create range of values for parameter
param_range = np.arange(1, 250, 2)

# Calculate accuracy on training and test set using range of parameter values 
train_scores, test_scores = validation_curve(
               # Classifier
               RandomForestClassifier(),
               # Feature matrix
               features,
               # Target vector
               target,
               # Hyperparameter to examine
               param_name="n_estimators",
               # Range of hyperparameter's values
               param_range=param_range,
               # Number of folds
               cv=3,
               # Performance metric
               scoring="accuracy",
               # Use all computer cores
               n_jobs=-1)

# Calculate mean and standard deviation for training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)

# Calculate mean and standard deviation for test set scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)

# Plot mean accuracy scores for training and test sets
plt.plot(param_range, train_mean, label="Training score", 
                      color="black")
plt.plot(param_range, test_mean, label="Cross-validation score",
                      color="dimgrey")

# Plot accuracy bands for training and test sets
plt.fill_between(param_range, train_mean - train_std,
                 train_mean + train_std, color="gray")
plt.fill_between(param_range, test_mean - test_std,
                 test_mean + test_std, color="gainsboro")

# Create plot
plt.title("Validation Curve With Random Forest")
plt.xlabel("Number Of Trees")
plt.ylabel("Accuracy Score")
plt.tight_layout()
plt.legend(loc="best")
plt.show()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjhtv82gph6f070gq0w0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjhtv82gph6f070gq0w0.png" alt="Image description" width="546" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most training algorithms contain &lt;strong&gt;hyperparameters&lt;/strong&gt; that must be chosen before the training process begins. For example, a random forest classifier creates a “forest” of decision trees, each of which votes on the predicted class of an observation.&lt;/p&gt;

&lt;p&gt;One hyperparameter in random forest classifiers is the number of trees in the forest. Most often hyperparameter values are selected during model selection. However, it is occasionally useful to visualize how model performance changes as the hyperparameter value changes.&lt;/p&gt;

&lt;p&gt;In our solution, we plot the changes in accuracy for a random forest classifier on the training set and during cross-validation as the number of trees increases. When we have a small number of trees, both the training and cross-validation scores are &lt;strong&gt;low&lt;/strong&gt;, suggesting the model is &lt;strong&gt;underfitted&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;As the number of trees increases to 250, the accuracy of both levels off, suggesting there is probably not much value in the computational cost of training a massive forest.&lt;/p&gt;

&lt;p&gt;In scikit-learn, we can calculate the validation curve using &lt;strong&gt;validation_curve&lt;/strong&gt;, which takes three important parameters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;param_name&lt;/strong&gt;&lt;br&gt;
Name of the hyperparameter to vary&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;param_range&lt;/strong&gt;&lt;br&gt;
Values of the hyperparameter to try&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;scoring&lt;/strong&gt;&lt;br&gt;
Evaluation metric used to judge the model&lt;/p&gt;
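&lt;p&gt;Once we have these scores, we can also read off the best hyperparameter value programmatically: take the value whose mean cross-validation score is highest. A minimal sketch, using the iris dataset and a deliberately small, hypothetical range of tree counts so it runs quickly:&lt;/p&gt;

```python
# A sketch: pick the hyperparameter value with the best mean CV score
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

features, target = load_iris(return_X_y=True)

# Hypothetical, deliberately small range so this runs quickly
param_range = np.array([1, 5, 25, 50])

train_scores, test_scores = validation_curve(
    RandomForestClassifier(random_state=0),
    features,
    target,
    param_name="n_estimators",
    param_range=param_range,
    cv=3,
    scoring="accuracy")

# Mean cross-validation score per hyperparameter value
test_mean = test_scores.mean(axis=1)

# Value whose mean CV score is highest
best_n_estimators = param_range[np.argmax(test_mean)]
```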

&lt;p&gt;This is a quick view of some of the best practices in evaluating machine learning models; I'd recommend reading books specifically on &lt;strong&gt;model evaluation&lt;/strong&gt; to get a more in-depth explanation of the concepts discussed above.&lt;/p&gt;

&lt;p&gt;Happy Coding :)&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What do we REALLY mean by immutable data types?</title>
      <dc:creator>Arnold Chris</dc:creator>
      <pubDate>Sun, 11 Aug 2024 15:46:15 +0000</pubDate>
      <link>https://dev.to/oduor_arnold/what-do-we-really-mean-by-immutable-data-types-edk</link>
      <guid>https://dev.to/oduor_arnold/what-do-we-really-mean-by-immutable-data-types-edk</guid>
<description>&lt;p&gt;Why are data types either mutable or immutable?&lt;br&gt;
Let's look at Python as an example.&lt;/p&gt;

&lt;p&gt;Data types in Python are basically objects or classes: int is a class, and so are float, list, etc.&lt;/p&gt;

&lt;p&gt;Therefore,  writing &lt;code&gt;x=6&lt;/code&gt; creates a new &lt;strong&gt;integer object&lt;/strong&gt; with a value of 6 and points a reference called x at this object.&lt;/p&gt;

&lt;p&gt;Now we need to look into classes. Classes basically group data and functions together; these functions are called methods, and they come in two types: &lt;strong&gt;accessor&lt;/strong&gt; and &lt;strong&gt;mutator&lt;/strong&gt; methods.&lt;/p&gt;

&lt;p&gt;Accessor methods access the current state of an object but don't change the object itself, e.g.:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;x = "hello"&lt;br&gt;
y = x.upper()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here the method &lt;code&gt;upper&lt;/code&gt; is called on the object that x refers to. The accessor then returns a &lt;em&gt;new&lt;/em&gt; str object that is an upper-cased version of the original string; the original object is left unchanged.&lt;/p&gt;

&lt;p&gt;Mutator methods, on the other hand, change the values in the existing object, and a good example is the list type (class).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;newList = [1,2,3]&lt;br&gt;
newList.reverse()&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This method mutates the existing object in place; once a mutator method runs, the object's original state is gone.&lt;/p&gt;

&lt;p&gt;Data types that lack these mutator methods (and hence only contain accessor methods) are said to be immutable; those that have mutator methods are mutable.&lt;/p&gt;
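&lt;p&gt;One quick way to see this in practice is Python's built-in &lt;code&gt;id&lt;/code&gt; function, which returns an object's identity. A small sketch checking whether a method created a new object or changed the existing one:&lt;/p&gt;

```python
# id() returns an object's identity, so we can check whether a
# method created a new object (accessor) or changed it (mutator)
s = "hello"
t = s.upper()            # accessor: returns a NEW str object
assert id(t) != id(s)
assert s == "hello"      # the original string is untouched

lst = [1, 2, 3]
lst_id = id(lst)
lst.reverse()            # mutator: changes the list in place
assert id(lst) == lst_id # same object as before
assert lst == [3, 2, 1]
```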

&lt;p&gt;Hope this helped, stay curious :)&lt;/p&gt;

</description>
      <category>python</category>
      <category>datastructures</category>
    </item>
  </channel>
</rss>
