1. Introduction
In this article we will learn how to choose the best model from among multiple candidate models with varying hyperparameters. In some cases we can have more than 50 different models, so knowing how to choose among them is important for finding the best-performing one for your dataset.
We will do model selection both by selecting the best learning algorithm and by selecting its best hyperparameters.
But first, what are hyperparameters? These are additional settings chosen by the user that affect how the model learns its parameters. Parameters, on the other hand, are what the model learns during the training process.
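As a quick illustration, here is a minimal sketch (using the same iris data and logistic regression that appear in the examples below): C is a hyperparameter we choose before training, while the coefficients stored in coef_ are parameters the model learns from the data.
# Minimal sketch: hyperparameters are set by the user, parameters are learned
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
iris = datasets.load_iris()
# C is a hyperparameter: we choose its value before training
model = LogisticRegression(C=1.0, max_iter=500, solver='liblinear')
model.fit(iris.data, iris.target)
# coef_ and intercept_ are parameters: the model learned them during fit
print(model.coef_.shape)       # (3, 4) - one weight per class and feature
print(model.intercept_.shape)  # (3,)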
2. Using Exhaustive Search.
Exhaustive Search involves selecting the best model by searching over a range of hyperparameters. To do this we make use of scikit-learn's GridSearchCV.
How GridSearchCV works:
- User defines sets of possible values for one or multiple hyperparameters.
- GridSearchCV trains a model using every value and /or combination of values.
- The model with the best performance is selected as the best model.
Example
We can set up logistic regression as our learning algorithm and tune two hyperparameters (C and the regularization penalty). We also fix two other settings, the solver and the maximum number of iterations, which we will not search over.
Now for each combination of C and regularization penalty values, we train the model and evaluate it using k-fold cross-validation.
Since we have 10 possible values of C, 2 possible values of the regularization penalty, and 5 folds, we have 10 x 2 = 20 candidate models and 10 x 2 x 5 = 100 training runs, from which the best model is selected.
# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV
# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target
# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')
# Create range of candidate penalty hyperparameter values
penalty = ['l1','l2']
# Create range of candidate regularization hyperparameter values
C = np.logspace(0, 4, 10)
# Create dictionary of hyperparameter candidates
hyperparameters = dict(C=C, penalty=penalty)
# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)
# Fit grid search
best_model = gridsearch.fit(features, target)
# Show the best model
print(best_model.best_estimator_)
# LogisticRegression(C=7.742636826811269, max_iter=500, penalty='l1',
#                    solver='liblinear') # Result
Viewing the best hyperparameters:
# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])
# Best Penalty: l1 #Result
# Best C: 7.742636826811269 # Result
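If you also want the cross-validated score of the winning combination, GridSearchCV stores it in the best_score_ attribute (the mean cross-validated accuracy here); the exact number will depend on your data and scikit-learn version.
# View the mean cross-validated score of the best model
print('Best score:', best_model.best_score_)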
3. Using Randomized Search.
This is commonly used when you want a computationally cheaper method than exhaustive search to select the best model.
It's worth noting that RandomizedSearchCV isn't inherently faster than GridSearchCV; it often achieves comparable performance in less time simply because it tests fewer combinations.
How RandomizedSearchCV works:
- The user supplies candidate hyperparameter values or distributions to sample from (e.g. normal, uniform; see the short sketch after this list).
- The algorithm randomly samples a specific number (n_iter) of combinations of the given hyperparameter values without replacement and evaluates each one.
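To make "supplying a distribution" concrete, here is a small sketch of drawing candidate C values from a uniform distribution with scipy.stats, the same distribution object used in the example below.
# Load libraries
from scipy.stats import uniform
# Define a uniform distribution between 0 and 4, then draw a few samples
samples = uniform(loc=0, scale=4).rvs(5)
print(samples)  # five random values between 0 and 4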
Example
# Load libraries
from scipy.stats import uniform
from sklearn import linear_model, datasets
from sklearn.model_selection import RandomizedSearchCV
# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target
# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')
# Create range of candidate regularization penalty hyperparameter values
penalty = ['l1', 'l2']
# Create distribution of candidate regularization hyperparameter values
C = uniform(loc=0, scale=4)
# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)
# Create randomized search
randomizedsearch = RandomizedSearchCV(
    logistic, hyperparameters, random_state=1, n_iter=100, cv=5, verbose=0,
    n_jobs=-1)
# Fit randomized search
best_model = randomizedsearch.fit(features, target)
# Print best model
print(best_model.best_estimator_)
# LogisticRegression(C=1.668088018810296, max_iter=500, penalty='l1',
#                    solver='liblinear') # Result
Viewing the best hyperparameters:
# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])
# Best Penalty: l1 # Result
# Best C: 1.668088018810296 # Result
Note: The number of hyperparameter combinations sampled (and therefore trained) is specified by the n_iter (number of iterations) setting.
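For example, a sketch reusing the objects defined above: with n_iter=10 and cv=5, only 10 combinations are sampled, so 10 x 5 = 50 models are fit instead of 500.
# Sample only 10 combinations instead of 100
randomizedsearch_small = RandomizedSearchCV(
    logistic, hyperparameters, random_state=1, n_iter=10, cv=5, verbose=0,
    n_jobs=-1)
best_model_small = randomizedsearch_small.fit(features, target)
print(best_model_small.best_estimator_)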
4. Selecting the Best Model from Multiple Learning Algorithms.
In this part we will look at how to select the best model by searching over a range of learning algorithms and their respective hyperparameters.
We can do this by simply creating a dictionary of candidate learning algorithms and their hyperparameters to use as the search space for GridSearchCV.
Steps:
- We can define a search space that includes two learning algorithms.
- We specify the hyperparameters and define their candidate values using the format classifier__[hyperparameter name] (the pipeline step name, a double underscore, then the hyperparameter name).
# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
# Set random seed
np.random.seed(0)
# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target
# Create a pipeline
pipe = Pipeline([("classifier", RandomForestClassifier())])
# Create dictionary with candidate learning algorithms and their hyperparameters
search_space = [{"classifier": [LogisticRegression(max_iter=500,
solver='liblinear')],
"classifier__penalty": ['l1', 'l2'],
"classifier__C": np.logspace(0, 4, 10)},
{"classifier": [RandomForestClassifier()],
"classifier__n_estimators": [10, 100, 1000],
"classifier__max_features": [1, 2, 3]}]
# Create grid search
gridsearch = GridSearchCV(pipe, search_space, cv=5, verbose=0)
# Fit grid search
best_model = gridsearch.fit(features, target)
# Print best model
print(best_model.best_estimator_)
# Pipeline(steps=[('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=500,
#                                     penalty='l1', solver='liblinear'))]) # Result
The best model:
After the search is complete, we can use best_estimator_ to view the best model's learning algorithm and hyperparameters.
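For example, we can pull the winning learning algorithm out of the pipeline's classifier step (a small sketch reusing best_model from the search above):
# View the learning algorithm selected by the search
print(best_model.best_estimator_.get_params()['classifier'])
# LogisticRegression(C=7.742636826811269, max_iter=500, penalty='l1',
#                    solver='liblinear') # Result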
5. Selecting the Best Model When Preprocessing.
Sometimes we might want to include a preprocessing step during model selection.
The best solution is to create a pipeline that includes the preprocessing step and any of its parameters:
The First Challenge:
GridSearchCV uses cross-validation to determine the model with the highest performance.
However, in cross-validation we are pretending that the fold held out as the test set has not been seen, and thus is not part of fitting any preprocessing steps (e.g. scaling or standardization).
For this reason the preprocessing steps must be a part of the set of actions taken by GridSearchCV.
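To make the difference concrete, here is a minimal sketch (the variable names are illustrative) contrasting the leaky approach, where the scaler is fit on all rows before cross-validation, with the correct approach, where scaling happens inside the pipeline that GridSearchCV refits on each training fold.
# Minimal sketch: keep preprocessing inside the pipeline searched by GridSearchCV
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
iris = datasets.load_iris()
features, target = iris.data, iris.target
# Leaky: the scaler sees every row, including rows later held out as test folds
# scaled = StandardScaler().fit_transform(features)
# GridSearchCV(LogisticRegression(max_iter=500, solver='liblinear'),
#              {"C": np.logspace(0, 4, 10)}, cv=5).fit(scaled, target)
# Correct: the scaler is refit on each training fold only
pipe = Pipeline([("std", StandardScaler()),
                 ("classifier", LogisticRegression(max_iter=500, solver='liblinear'))])
clf = GridSearchCV(pipe, {"classifier__C": np.logspace(0, 4, 10)}, cv=5)
clf.fit(features, target)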
The Solution
Scikit-learn provides the FeatureUnion which allows us to combine multiple preprocessing actions properly.
Steps:
- We use FeatureUnion to combine two preprocessing steps: standardizing the feature values (StandardScaler) and principal component analysis (PCA). This object, called preprocess, contains both of our preprocessing steps.
- Next we include preprocess in our pipeline with our learning algorithm.
This allows us to outsource the proper handling of fitting, transforming, and training the models with combinations of hyperparameters to scikit-learn.
The Second Challenge:
Some preprocessing methods, such as PCA, have their own parameters. Dimensionality reduction with PCA requires the user to define the number of principal components used to produce the transformed feature set. Ideally we would choose the number of components that produces the model with the greatest performance on some evaluation metric.
The Solution
In scikit-learn when we include candidate component values in the search space, they are treated like any other hyperparameter to be searched over.
# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# Set random seed
np.random.seed(0)
# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target
# Create a preprocessing object that includes StandardScaler and PCA
preprocess = FeatureUnion([("std", StandardScaler()), ("pca", PCA())])
# Create a pipeline
pipe = Pipeline([("preprocess", preprocess),
("classifier", LogisticRegression(max_iter=1000,
solver='liblinear'))])
# Create space of candidate values
search_space = [{"preprocess__pca__n_components": [1, 2, 3],
"classifier__penalty": ["l1", "l2"],
"classifier__C": np.logspace(0, 4, 10)}]
# Create grid search
clf = GridSearchCV(pipe, search_space, cv=5, verbose=0, n_jobs=-1)
# Fit grid search
best_model = clf.fit(features, target)
# Print best model
print(best_model.best_estimator_)
# Pipeline(steps=[('preprocess',
#                  FeatureUnion(transformer_list=[('std', StandardScaler()),
#                                                 ('pca', PCA(n_components=1))])),
#                 ('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=1000,
#                                     penalty='l1', solver='liblinear'))]) # Result
After the model selection is complete we can view the preprocessing values that produced the best model.
Preprocessing steps that produced the best model:
# View best n_components
print(best_model.best_estimator_.get_params()['preprocess__pca__n_components'])
# 1 # Result
6. Speeding Up Model Selection with Parallelization.
Sometimes you need to reduce the time it takes to select a model.
We can do this by training multiple models simultaneously, using all the cores on our machine by setting n_jobs=-1.
# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV
# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target
# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500,
                                           solver='liblinear')
# Create range of candidate regularization penalty hyperparameter values
penalty = ["l1", "l2"]
# Create range of candidate values for C
C = np.logspace(0, 4, 1000)
# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)
# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, n_jobs=-1,
                          verbose=1)
# Fit grid search
best_model = gridsearch.fit(features, target)
# Print best model
print(best_model.best_estimator_)
# Fitting 5 folds for each of 2000 candidates, totalling 10000 fits
# LogisticRegression(C=5.926151812475554, max_iter=500, penalty='l1',
#                    solver='liblinear') # Result
7. Speeding Up Model Selection (Algorithm-Specific Methods).
This is a way to speed up model selection without using additional compute power.
It is possible because scikit-learn provides model-specific cross-validated hyperparameter tuning for some estimators.
Sometimes the characteristics of a learning algorithm allow us to search for the best hyperparameters significantly faster.
Example:
LogisticRegression trains a standard logistic regression classifier.
LogisticRegressionCV implements an efficient cross-validated logistic regression classifier that can identify the optimum value of the hyperparameter C.
# Load libraries
from sklearn import linear_model, datasets
# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target
# Create cross-validated logistic regression
logit = linear_model.LogisticRegressionCV(Cs=100, max_iter=500,
                                          solver='liblinear')
# Train model
logit.fit(features, target)
# Print model
print(logit)
# LogisticRegressionCV(Cs=100, max_iter=500, solver='liblinear')
Note: A major downside to LogisticRegressionCV is that it can only search a range of values for C. This limitation is common to many of scikit-learn's model-specific cross-validated approaches.
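For what it's worth, Cs accepts either an integer (scikit-learn then chooses that many values on a logarithmic scale between 1e-4 and 1e4) or an explicit list of candidate values; here is a small sketch reusing the data loaded above.
# Cs can also be an explicit list of candidate C values
logit = linear_model.LogisticRegressionCV(Cs=[0.01, 0.1, 1, 10, 100],
                                          max_iter=500, solver='liblinear')
logit.fit(features, target)
# C_ holds the best C found for each class
print(logit.C_)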
I hope this article was helpful as a quick overview of how to select a machine learning model.