Data Preprocessing Techniques for Machine Learning in Python

Data preprocessing is a critical step in machine learning workflows. It is the set of steps you carry out on a dataset to improve its quality before it is used for machine learning or other tasks. These steps include cleaning, transformation, normalization and handling outliers, all of which make the dataset suitable for its intended purpose (in this case, machine learning). A clean, high-quality dataset improves a machine learning model's performance.

Common issues with low-quality data include:

  • Missing values
  • Inconsistent formats
  • Duplicate values
  • Irrelevant features

In summary, these are the steps in data preprocessing for machine learning:

  • Import necessary libraries
  • Load and inspect the dataset
  • Data cleaning
    • Handling missing values
    • Duplicate removal
    • Dealing with outliers
  • Data transformation
    • Normalization
    • Standardization

To follow this guide, you need basic knowledge of Python and of the Python libraries commonly used for data preprocessing.

Requirements:
The following are required for data preprocessing in this guide:

  • Python
  • The NumPy, SciPy, pandas and scikit-learn libraries
  • Jupyter Notebook
  • The Melbourne Housing Dataset

You can also check out the output of each code block in these Jupyter notebooks on GitHub.

Import necessary libraries

If you haven't installed Python already, you can download it from the Python website and follow the instructions to install it.

Once Python has been installed, install the required libraries:

pip install numpy scipy pandas scikit-learn

Install Jupyter Notebook.

pip install notebook

After installation, start Jupyter Notebook with the following command:

jupyter notebook

This will launch Jupyter Notebook in your default web browser. If it doesn't open automatically, check the terminal for a link you can paste into your browser.

Open a new notebook from the File menu, then import the required libraries and run the cell:

import numpy as np
import pandas as pd
import scipy

Load and Inspect the Data

Go to the Melbourne Housing Dataset page and download the dataset. Load it into the notebook with the following code: either pass the file's full path on your computer to the read_csv function, or put the CSV file in the same folder as the notebook and load it by name, as shown below.

data = pd.read_csv(r"melb_data.csv")

# View the first 5 rows of the dataset
data.head()
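
head() gives only a quick glance at the data. A few other standard pandas checks are worth running at this stage; a minimal sketch:

# Column names, data types and non-null counts
data.info()

# Summary statistics for the numeric columns
data.describe()

# Number of missing values per column
data.isnull().sum()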

Split the data into training and validation sets

from sklearn.model_selection import train_test_split

# Set the target
y = data['Price']

# Drop the target column from the features
melb_features = data.drop(['Price'], axis=1)

# Keep only the numeric features by excluding categorical (object) columns
X = melb_features.select_dtypes(exclude=['object'])

# Divide data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)

You have to split the data into training and validation sets to prevent data leakage. Each preprocessing step below is fitted on the training features only, and the same fitted transformation is then applied to the validation features, so no information from the validation set leaks into training.

Now the dataset is ready for preprocessing!

Data Cleaning

Handling missing values
Missing values in a dataset are like holes in the fabric meant for sewing a dress: they spoil the dress before it is even made.

There are 3 ways to handle missing values in a dataset.

  • Drop the rows or columns with empty cells

# Drop the rows with empty cells from the original data frame
data.dropna(inplace=True)

# Drop the columns with empty cells
# First, get the names of the columns that contain empty cells
cols_with_empty_cells = [col for col in X_train.columns if X_train[col].isnull().any()]

# Then drop those columns from both the training and validation sets
removed_X_train_cols = X_train.drop(cols_with_empty_cells, axis=1)
removed_X_valid_cols = X_valid.drop(cols_with_empty_cells, axis=1)


The issue with this method is that you may lose valuable information you could have trained your model with. Unless most of the values in a row or column are missing, dropping it is usually not worth the information you lose.
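
One way to act on that rule of thumb is to drop only the columns in which a large share of the values is missing. A minimal sketch, where the 0.5 cutoff is an arbitrary choice and not part of the original workflow:

# Fraction of missing values per column in the training set
missing_fraction = X_train.isnull().mean()

# Columns where more than half of the values are missing (0.5 is an arbitrary threshold)
cols_mostly_missing = missing_fraction[missing_fraction > 0.5].index

# Drop those columns from both the training and validation sets
X_train_reduced = X_train.drop(cols_mostly_missing, axis=1)
X_valid_reduced = X_valid.drop(cols_mostly_missing, axis=1)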

  • Impute values in the empty cells

You can impute, or fill in, the empty cells with the mean, median or mode of the data in that particular column. SimpleImputer from scikit-learn is used here to impute values in the empty cells; by default it fills each column with that column's mean.

from sklearn.impute import SimpleImputer

# Impute values
imputer = SimpleImputer()
imputed_X_train_values = pd.DataFrame(imputer.fit_transform(X_train))
imputed_X_valid_values = pd.DataFrame(imputer.transform(X_valid))

# Imputation removed column names so we put them back
imputed_X_train_values.columns = X_train.columns
imputed_X_valid_values.columns = X_valid.columns

# Set the imputed values to X_train
X_train = imputed_X_train_values
X_valid = imputed_X_valid_values

X_train.head()
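
To impute the median or the most frequent value (the mode) instead of the default mean, pass a strategy argument to SimpleImputer; a minimal sketch:

# Impute the column median instead of the mean
median_imputer = SimpleImputer(strategy='median')
X_train_median = pd.DataFrame(median_imputer.fit_transform(X_train), columns=X_train.columns)
X_valid_median = pd.DataFrame(median_imputer.transform(X_valid), columns=X_valid.columns)

# Impute the most frequent value (the mode) instead of the mean
mode_imputer = SimpleImputer(strategy='most_frequent')
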
  • Impute and notify

How this works is that you impute values in the empty cells, but you also add a column for each affected feature indicating whether that cell was initially empty. (Because the previous approach already overwrote X_train and X_valid with imputed values, re-run the train/validation split before trying this one.)

# Make new columns indicating what will be imputed
# The column will have booleans as values
for col in cols_with_empty_cells:
    X_train[col + '_was_missing'] = X_train[col].isnull()
    X_valid[col + '_was_missing'] = X_valid[col].isnull()

# Impute values
imputer = SimpleImputer()
imputed_X_train_values = pd.DataFrame(imputer.fit_transform(X_train))
imputed_X_valid_values = pd.DataFrame(imputer.transform(X_valid))

# Imputation removed column names so we put them back
imputed_X_train_values.columns = X_train.columns
imputed_X_valid_values.columns = X_valid.columns

# Set the imputed values to X_train
X_train = imputed_X_train_values
X_valid = imputed_X_valid_values

# See the new columns and their values
X_train.head() 


Duplicate removal
Duplicate rows mean repeated data, which can distort model accuracy. The standard way to deal with them is to drop them.

# Check for the number of duplicate rows in the dataset
X_train.duplicated().sum()

# Drop the duplicate rows
X_train.drop_duplicates(inplace=True)
X_valid.drop_duplicates(inplace=True)
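
Note that drop_duplicates above removes rows from the feature sets only, which leaves y_train and y_valid misaligned with the features. A minimal sketch of an alternative that keeps features and targets in sync, shown as a replacement for the two inplace drops above and assuming the features and targets still share the same row order:

# Boolean masks of duplicate rows (as NumPy arrays so the indexing is positional)
dup_mask_train = X_train.duplicated().to_numpy()
dup_mask_valid = X_valid.duplicated().to_numpy()

# Keep only the non-duplicate rows in both the features and the targets
X_train, y_train = X_train[~dup_mask_train], y_train[~dup_mask_train]
X_valid, y_valid = X_valid[~dup_mask_valid], y_valid[~dup_mask_valid]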

Dealing with outliers
Outliers are values that are significantly different from the other values in the dataset. They can be unusually high or low compared to the rest of the data, and they can arise from entry errors or be genuine extreme values.

It is important to deal with outliers, or they will distort your analysis and your models. One method of detecting outliers is to calculate z-scores.

A z-score measures how many standard deviations a data point is from the mean of its column: z = (x - mean) / standard deviation. The score is calculated for every data point, and a point whose absolute z-score is 3 or more is commonly treated as an outlier. For example, if a column has a mean of 100 and a standard deviation of 10, a value of 135 has a z-score of 3.5 and would be flagged.

from scipy import stats

# Calculate z-scores column by column for the training and validation sets
X_train_zscores = stats.zscore(X_train, axis=0)
X_valid_zscores = stats.zscore(X_valid, axis=0)

# Define the threshold for outlier detection
threshold = 3

# Flag rows that contain at least one z-score above the threshold (i.e. rows with outliers)
outliers_train = (np.abs(X_train_zscores) > threshold).any(axis=1)
outliers_valid = (np.abs(X_valid_zscores) > threshold).any(axis=1)

# Remove the flagged rows from X_train and X_valid (~ means NOT)
X_train_no_outliers = X_train[~outliers_train]
X_valid_no_outliers = X_valid[~outliers_valid]

# Display the results
print("Original X_train shape:", X_train.shape)
print("X_train shape after removing outliers:", X_train_no_outliers.shape)

print("Original X_valid shape:", X_valid.shape)
print("X_valid shape after removing outliers:", X_valid_no_outliers.shape)


Data Transformation

Normalization
Normalization rescales each feature to a fixed range, typically 0 to 1. Min-max scaling does this by subtracting the column minimum from each value and dividing by the column range: x_scaled = (x - min) / (max - min).

Note that min-max scaling only changes the scale of a feature, not the shape of its distribution, so it does not turn the data into a normal (Gaussian) distribution, the bell-curve-shaped distribution whose values are spread roughly symmetrically around the mean. Normalization is most useful when the model you plan to train is sensitive to the scale of the features but does not assume any particular distribution, for example k-nearest neighbours or neural networks.

from sklearn.preprocessing import MinMaxScaler

# Initialize the MinMaxScaler
scaler = MinMaxScaler()

# Fit the scaler on the training data and transform it
X_train_normalized = scaler.fit_transform(X_train)

# Transform the validation data using the same scaler
X_valid_normalized = scaler.transform(X_valid)

# Convert the normalized data back into DataFrames to keep column names
X_train_normalized = pd.DataFrame(X_train_normalized, columns=X_train.columns, index=X_train.index)
X_valid_normalized = pd.DataFrame(X_valid_normalized, columns=X_valid.columns, index=X_valid.index)

# Display the results
print("First few rows of normalized X_train:")
print(X_train_normalized.head())

print("First few rows of normalized X_valid:")
print(X_valid_normalized.head())

Standardization
Standardization transforms each feature to have a mean of 0 and a standard deviation of 1: x_scaled = (x - mean) / standard deviation. This puts all the features on a comparable scale, so each one contributes more equally to model training.

You use standardization when:

  • The features in your data are on different scales or units.
  • The machine learning model you want to use is based on distance or gradient-based optimizations (e.g., linear regression, logistic regression, K-means clustering).

You use StandardScaler() from scikit-learn to standardize features.

from sklearn.preprocessing import StandardScaler

# Initialize the StandardScaler
scaler = StandardScaler()

# Fit the scaler on the training data, then transform both the training and validation data
X_train_standardized = scaler.fit_transform(X_train)
X_valid_standardized = scaler.transform(X_valid)

# Convert the standardized data back into DataFrames to keep column names
X_train_standardized = pd.DataFrame(X_train_standardized, columns=X_train.columns, index=X_train.index)
X_valid_standardized = pd.DataFrame(X_valid_standardized, columns=X_valid.columns, index=X_valid.index)

# Display the results
print("First few rows of standardized X_train:")
print(X_train_standardized.head())

print("First few rows of standardized X_valid:")
print(X_valid_standardized.head())

Conclusion

Data preprocessing is not just a preliminary stage; it is part of building accurate machine learning models, and it can be adapted to the needs of the dataset you are working with.

Like with most activities, practice makes perfect. As you continue to practise these data preprocessing techniques, your skills will improve, and so will your models.

Thank you for reading through. I would love to read your thoughts on this 👇
