Let’s take a look at the different steps to build a prediction model and go over the what, when, why, and how of each one.
Below are the steps required to solve a machine learning use case and to build a model.
- Define the Objective
- Data Gathering
- Data Cleaning
- Exploratory Data Analysis (EDA)
- Feature Engineering
- Feature Selection
- Model Building
- Model Evaluation
- Model Optimization
Defining the objective means deciding on the use case you want to predict or learn more about.
It is the first step, and it is driven by business requirements.
Defining the objective sheds light on what kind of data should be gathered. It also helps us in judging what kind of observations are important while doing exploratory data analysis.
An objective should be clear and precise. To define a clear objective, we follow a few steps:
- Understand the business (e.g., a grocery store)
- Identify the problem (e.g., low profits)
- List all the possible solutions to the problem (e.g., increasing sales, reducing costs, or managing inventory better)
- Decide on one solution (e.g., managing inventory; we can reach this conclusion by talking with the respective business people back and forth)
By following the above steps, we’ve clearly defined that the objective is to build a model to manage inventory in order to increase store profits.
Data Gathering is nothing but collecting the data required as per the defined objective.
Once the objective is defined, we will collect data.
Without past data, we cannot predict the future, hence Data Gathering is necessary. In general, a dataset is created by gathering data from various resources based on the objective. One reason for gathering data from multiple resources is to get more accurate results: in general, the more data we have, the more accurate the results will be.
Data can be collected in one of the following ways:
- APIs (like Google, Amazon, Twitter, New York Times, etc.)
- Databases (like AWS, GCP, etc.)
- Open-source repositories (Kaggle, UCI Machine Learning Repository, etc.)
- Web scraping (use with caution, as it may violate a site’s terms of service)
The order of the Defining the Objective and Data Gathering steps can be swapped. Sometimes we already have the data and define the objective afterwards; other times we decide the objective first and then gather the data.
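For example, once a CSV file has been downloaded from an open-source repository, it can be loaded with Pandas. A minimal sketch (the column names and values here are invented to stand in for a real download):

```python
import io
import pandas as pd

# Simulate a small CSV file downloaded from an open-source repository.
# In practice you would pass a real file path or URL to pd.read_csv.
csv_text = """customer_id,tenure,monthly_charges,churn
1,12,29.85,No
2,1,56.95,Yes
3,24,42.30,No
"""

df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)  # (3, 4)
```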
Data cleaning is the process of removing, modifying or formatting data that is incorrect, irrelevant or duplicated.
Once we have the dataset ready, we will clean the data.
Data Cleaning helps in preparing the data for Exploratory Data Analysis.
We use libraries like Pandas and Numpy for Data Cleaning, and apply the following key steps to determine whether the dataset needs cleaning:
- Check how many rows and columns are in the dataset.
- Look for duplicate features by going through the meta info provided.
- Identify Numerical and Categorical features in the gathered data and check if formatting is required or not.
Formatting can include changing the data types of features, correcting typos, or removing special characters from the data if there are any.
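The checklist above can be sketched with Pandas on a toy dataset (the columns and values are invented for illustration):

```python
import pandas as pd

# Toy dataset with the kinds of problems the checklist looks for:
# a duplicated row, a numerical feature stored as text, and a stray
# special character.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "monthly_charges": ["29.85", "56.95", "56.95", "$42.30"],
    "plan": ["basic", "premium", "premium", "basic"],
})

print(df.shape)           # how many rows and columns are in the dataset
df = df.drop_duplicates() # remove the duplicated record

# Strip the special character, then fix the data type of the feature
df["monthly_charges"] = (
    df["monthly_charges"].str.replace("$", "", regex=False).astype(float)
)
print(df.dtypes["monthly_charges"])  # float64
```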
If you are working with real-time data, it’s recommended to save the cleaned dataset in a cloud database before moving on to the next steps.
In simple terms, EDA is nothing but understanding and analyzing the data by using various Statistical Measures (like mean, median) and Visualization Techniques(like Univariate Analysis, Bivariate Analysis etc.).
We perform EDA immediately after the data cleaning stage, on the cleaned data.
Exploratory Data Analysis is considered a fundamental and crucial step in solving any Machine Learning use case, as it helps us identify trends or patterns in the data.
There are Python libraries like Pandas, Numpy, Statsmodels, Matplotlib, Seaborn, Plotly etc, to perform Exploratory Data Analysis.
While doing EDA, some of the basic common questions we ask are:
- What are the independent and dependent features/labels in the collected data?
- Is the selected label/dependent feature Categorical or Numerical?
- Are there any missing values in the features/variables?
- What are the summary statistics (like mean etc.) for Numerical features?
- What are the summary statistics (like mode etc.) for Categorical features?
- Are the features/variables normally distributed or skewed?
- Are there any outliers in the features/variables?
- Which independent features are correlated with the dependent feature?
- Is there any correlation between the independent features?

So, we will try to understand the data by finding answers to the above questions, both visually (by plotting graphs) and statistically (hypothesis testing, like normality tests).
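Several of these questions can be answered with a few lines of Pandas. A minimal sketch on a made-up dataset:

```python
import pandas as pd

# Invented data: one numerical feature with a missing value, one
# numerical feature, and a categorical label.
df = pd.DataFrame({
    "tenure": [1, 12, 24, 5, None],
    "monthly_charges": [56.95, 29.85, 42.30, 70.10, 99.00],
    "churn": ["Yes", "No", "No", "Yes", "Yes"],
})

missing = df.isna().sum()            # are there any missing values?
numeric_summary = df.describe()      # summary statistics for numerical features
mode_churn = df["churn"].mode()[0]   # summary statistic for a categorical feature
skewness = df["monthly_charges"].skew()            # normally distributed or skewed?
corr = df["tenure"].corr(df["monthly_charges"])    # correlation between features
print(missing["tenure"], mode_churn)
```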
When we are dealing with larger datasets, it can be difficult to draw insights directly. Hence, at this stage we sometimes use unsupervised learning techniques like clustering to identify hidden groups/clusters in the data, which helps us understand it better.
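As a rough sketch of that idea, K-means from Scikit-learn can separate an obvious pair of groups in a single made-up feature:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious groups (e.g. low spenders vs high spenders), used here
# only to show how clustering can surface hidden segments during EDA.
X = np.array([[10.0], [11.0], [12.0], [90.0], [91.0], [92.0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_
# Points in the same group receive the same cluster label
print(labels)
```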
A feature refers to a column in a dataset, and engineering means manipulating, transforming, or constructing; together they’re known as Feature Engineering. Simply put, Feature Engineering is transforming existing features or constructing new ones.
Feature Engineering is done immediately after Exploratory Data Analysis (EDA).
Feature Engineering transforms the raw data/features into features suitable for machine learning algorithms. This step is necessary because it helps improve the machine learning model’s performance and accuracy.
Algorithm: Algorithms are mathematical procedures applied to given data.
Model: The outcome of a machine learning algorithm is a generalized equation for the given data, and this generalized equation is called a model.
We use libraries like Pandas, Numpy, Scikit-learn to do Feature Engineering. Feature Engineering techniques include:
- Handling Missing Values
- Handling Skewness
- Treating Outliers
- Handling Imbalanced data
- Scaling down the features
- Creating new features from the existing features
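A minimal sketch of a few of these techniques with Pandas, Numpy, and Scikit-learn (the feature names and values are invented):

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "tenure": [1, 12, 24, None],
    "total_charges": [57.0, 358.2, 1015.2, 140.0],
})

# Handling missing values: fill tenure with its median
df["tenure"] = df["tenure"].fillna(df["tenure"].median())

# Handling skewness: log-transform a right-skewed feature
df["log_total_charges"] = np.log1p(df["total_charges"])

# Creating a new feature from the existing features
df["charges_per_month"] = df["total_charges"] / df["tenure"]

# Scaling down the features
scaled = StandardScaler().fit_transform(df[["tenure", "log_total_charges"]])
print(df.isna().sum().sum())  # 0: no missing values remain
```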
Feature Selection is the process of selecting the best set of independent features or columns that are required to train a machine learning algorithm.
Feature Selection is performed right after the feature engineering step.
Feature Selection is necessary for the following reasons:
- It improves machine learning model performance.
- It reduces the training time of machine learning algorithms.
- It improves the generalization of the model.
We use Python libraries like Statsmodels or Scikit-learn to do feature selection.
Each of the following methods can be used for selecting the best independent features:
- Filter methods
- Wrapper methods
- Embedded or intrinsic methods
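As a small example of a filter method, Scikit-learn’s SelectKBest scores each feature independently and keeps the top k. The dataset here is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic data: 10 features, of which only a few are informative
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

# Filter method: keep the 3 features with the strongest univariate
# relationship to the label
selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)
X_selected = selector.transform(X)
print(X_selected.shape)  # (200, 3)
```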
If the number of selected input features is very large (perhaps greater than the number of rows/records in the dataset), then we can use unsupervised learning techniques like Dimensionality Reduction at this stage to reduce the total number of inputs to the model.
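A minimal sketch of that idea with PCA from Scikit-learn, on random synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))  # 50 input features

# Reduce the 50 inputs to 10 principal components
pca = PCA(n_components=10).fit(X)
X_reduced = pca.transform(X)
print(X_reduced.shape)  # (100, 10)
```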
Building a machine learning model is about coming up with a generalized equation for data using machine learning algorithms.
Machine learning algorithms are not only used to build models but sometimes they are also used for filling missing values, detecting outliers, etc.
We start building the model immediately after feature selection, using the selected independent features.
Building a machine learning model helps businesses in predicting the future.
Scikit-learn is used to build machine learning models.
Basic Steps to create a machine learning model:
- Create two variables to store Dependent and Independent Features separately.
- Split the variable (which stores independent features) into train, validation, and test sets, or use cross-validation techniques to split the data.
Train set: To train the algorithms
Validation set: To optimize the model
Test set: To evaluate the model.
Cross-validation techniques are used to split the data when working with small datasets.
- Build a model on a training set.
- What models can you build? Machine Learning algorithms are broadly categorized into two types: Supervised and Unsupervised. Predictive models are built using supervised machine learning algorithms, and the models built this way are known as Supervised Machine Learning Models. There are two types of Supervised Machine Learning Models that can be built:
  - Regression models: Linear Regression, Decision Tree Regressor, Random Forest Regressor, Support Vector Regression.
  - Classification models: Logistic Regression, K-Nearest Neighbors, Decision Tree Classifier, Support Vector Machine (classifier), Random Forest Classifier, XGBoost.

Unsupervised machine learning algorithms are not used to build predictive models; instead, they are used either to identify hidden groups/clusters in the data or to reduce the dimensions of the data. Some of the unsupervised learning algorithms are Clustering Algorithms (like K-means clustering) and Dimensionality Reduction techniques (like PCA).
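The basic steps above can be sketched with Scikit-learn. The data is synthetic, and Logistic Regression stands in for whichever classifier you choose:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the engineered, selected features
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Split the independent features into train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Build a classification model on the training set
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = model.predict(X_test)
print(len(predictions))  # 60
```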
In simple terms, model evaluation means checking how accurate the model’s predictions are, that is, determining how well the model performs on the train and test datasets.
As soon as model building is done, the next step is to evaluate it.
In general, we build many machine learning models using different machine learning algorithms, so evaluating them helps us choose the model that gives the best results.
We use the Scikit-learn library to evaluate models using evaluation metrics.
Metrics are divided into two categories as shown:
Regression Model Metrics: Mean Squared Error, Root Mean Squared Error, Mean Absolute Error
Classification Model Metrics: Accuracy (Confusion Matrix), Recall, Precision, F1-Score, Specificity, ROC (Receiver Operating Characteristic), AUC (Area Under the Curve).
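A minimal sketch of computing classification metrics with Scikit-learn (the labels and predictions are made up for illustration):

```python
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, f1_score)

# Hypothetical true labels and model predictions for a churn classifier
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)    # fraction of correct predictions
prec = precision_score(y_true, y_pred)  # of predicted churners, how many churned
rec = recall_score(y_true, y_pred)      # of actual churners, how many were found
f1 = f1_score(y_true, y_pred)           # harmonic mean of precision and recall
cm = confusion_matrix(y_true, y_pred)   # counts of TP, FP, TN, FN
print(acc)  # 0.75
```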
Most machine learning models have hyperparameters that can be tuned or adjusted. For example, Ridge Regression has a regularization-strength hyperparameter, and a Decision Tree model has hyperparameters like the maximum depth or the number of leaves in the tree.
The process of tuning these hyperparameters to find the combination that maximizes the model’s performance is known as hyperparameter optimization or hyperparameter tuning.
After calculating the Evaluation Metrics, we will choose the models with the best results and then tune hyperparameters to enhance the results.
Optimization improves the performance of machine learning models, which in turn increases their accuracy and yields better predictions.
We use libraries like Scikit-learn, or frameworks like Optuna, to optimize models by tuning their hyperparameters.
Hyperparameter tuning approaches include:
- Grid Search
- Random Search
- Bayesian Optimization
- Genetic Algorithms
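As a small example of grid search, Scikit-learn’s GridSearchCV tries every combination in a parameter grid with cross-validation. The data and grid here are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Grid search: evaluate every combination of the listed hyperparameter
# values with 5-fold cross-validation and keep the best one
param_grid = {"max_depth": [2, 4, 6], "min_samples_leaf": [1, 5]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5).fit(X, y)
print(search.best_params_)
```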
Finally, we will choose our hyperparameter optimized model with the best metrics and use that model for production.
After all these steps, if you are still not happy with the machine learning model’s performance, you can repeat the entire process starting from Data Gathering through Model Optimization. Remember, Machine Learning is an iterative, trial-and-error process, and its performance also depends on the sample of data we gathered.
That’s it for this blog. I tried my best to keep it as simple as possible, and I hope you all got an idea on how to build and optimize a machine learning model.
As part of this series, we will implement all the above mentioned steps on Telco Customer data and come up with the best model to predict whether a customer churns.
Thanks for reading!
This guest blog was written by Jaanvi. Learn more about her on LinkedIn.