Sai Vishwa B
How to preprocess your Dataset

Introduction

The Titanic dataset is a classic dataset used in data science and machine learning projects. It contains information about the passengers on the Titanic, and the goal is often to predict which passengers survived the disaster. Before building any predictive model, it's crucial to preprocess the data to ensure it's clean and suitable for analysis. This blog post will guide you through the essential steps of preprocessing the Titanic dataset using Python.

Step 1: Loading the Data

The first step in any data analysis project is loading the dataset. We use the pandas library to read the CSV file containing the Titanic data. This dataset includes features like Name, Age, Sex, Ticket, Fare, and whether the passenger survived (Survived).

import pandas as pd
import numpy as np

# Load the Titanic dataset
titanic = pd.read_csv('titanic.csv')
titanic.head()

Understanding the Data

The dataset contains the following variables related to passengers on the Titanic:

  • Survived: Indicates whether the passenger survived.

    • 0 = No
    • 1 = Yes
  • Pclass: Ticket class of the passenger.

    • 1 = 1st class
    • 2 = 2nd class
    • 3 = 3rd class
  • Sex: Gender of the passenger.

  • Age: Age of the passenger in years.

  • SibSp: Number of siblings or spouses aboard the Titanic.

  • Parch: Number of parents or children aboard the Titanic.

  • Ticket: Ticket number.

  • Fare: Passenger fare.

  • Cabin: Cabin number.

  • Embarked: Port of embarkation.

    • C = Cherbourg
    • Q = Queenstown
    • S = Southampton
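As a quick sanity check, the coded values above can be mapped to readable labels before any modeling. A minimal sketch, where the sample rows are hypothetical stand-ins for the real dataset:

```python
import pandas as pd

# Hypothetical sample rows mirroring the Titanic schema described above
sample = pd.DataFrame({
    'Survived': [0, 1, 1],
    'Pclass': [3, 1, 2],
    'Embarked': ['S', 'C', 'Q'],
})

# Map the coded values to human-readable labels for inspection
survived_labels = sample['Survived'].map({0: 'No', 1: 'Yes'})
ports = sample['Embarked'].map(
    {'C': 'Cherbourg', 'Q': 'Queenstown', 'S': 'Southampton'}
)
print(survived_labels.tolist())
print(ports.tolist())
```

Mapping codes to labels like this is purely for readability during exploration; the numeric codes are what the model will eventually consume.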

Step 2: Exploratory Data Analysis (EDA)

Exploratory Data Analysis (EDA) involves examining the dataset to understand its structure and the relationships between different variables. This step helps identify any patterns, trends, or anomalies in the data.

Overview of the Dataset

We start by displaying the first few rows of the dataset and getting a summary of the statistics. This gives us an idea of the data types, the range of values, and the presence of any missing values.

# Display the first few rows
print(titanic.head())

# Summary statistics
print(titanic.describe(include='all'))
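Since `describe()` focuses on summary statistics, it helps to also check dtypes and missing values explicitly. A minimal sketch, using a hypothetical mini-frame standing in for the loaded dataset:

```python
import pandas as pd
import numpy as np

# Hypothetical mini-frame with the same missing-value pattern as titanic.csv
titanic = pd.DataFrame({
    'Age': [22.0, np.nan, 38.0],
    'Cabin': [np.nan, 'C85', np.nan],
    'Embarked': ['S', 'C', np.nan],
})

# info() reports dtypes and non-null counts per column
titanic.info()

# isnull().sum() counts the missing values per column
missing = titanic.isnull().sum()
print(missing)
```

On the real dataset, this is where Age, Cabin, and Embarked stand out as the columns needing attention in the cleaning step.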

Step 3: Data Cleaning

Data cleaning is the process of handling missing values, correcting data types, and removing any inconsistencies. In the Titanic dataset, features like Age, Cabin, and Embarked have missing values.

Handling Missing Values

To handle missing values, we can fill them with appropriate values or drop rows/columns with missing data. For example, we can fill missing Age values with the median age and drop rows with missing Embarked values.

# Fill missing 'Age' values with the median age
titanic['Age'] = titanic['Age'].fillna(titanic['Age'].median())

# Drop rows with missing 'Embarked' values
titanic = titanic.dropna(subset=['Embarked'])

# Check remaining missing values
print(titanic.isnull().sum())
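The Cabin column is missing for most passengers, so rather than imputing it, a common choice is to drop the column entirely. A minimal sketch, using a hypothetical mini-frame in place of the full dataset:

```python
import pandas as pd
import numpy as np

# Hypothetical mini-frame; in the real dataset 'Cabin' is mostly missing
titanic = pd.DataFrame({
    'Age': [22.0, 38.0],
    'Cabin': [np.nan, 'C85'],
    'Embarked': ['S', 'C'],
})

# Drop the sparsely populated 'Cabin' column
titanic = titanic.drop(columns=['Cabin'])
print(titanic.columns.tolist())
```

Whether to drop or impute depends on how much signal you believe the column carries; for Cabin, the high missing rate usually makes dropping the pragmatic choice.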

Step 4: Feature Engineering

Feature engineering involves creating new features or transforming existing ones to improve model performance. This step can include encoding categorical variables and scaling numerical features.

Encoding Categorical Variables

Machine learning algorithms require numerical input, so we need to convert categorical features into numerical ones. For a binary feature like Sex, label encoding works well; for multi-category features like Embarked, one-hot encoding is a common choice.

# Convert the 'Sex' column to numerical labels
from sklearn import preprocessing

le = preprocessing.LabelEncoder()

# Fit the encoder on the column to be transformed
le.fit(titanic['Sex'])
titanic['Sex'] = le.transform(titanic['Sex'])
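For the multi-category Embarked column, and for scaling a numerical feature like Fare, a sketch along these lines could work (the mini-frame here is a hypothetical stand-in for the cleaned dataset):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical mini-frame standing in for the cleaned Titanic data
titanic = pd.DataFrame({
    'Embarked': ['S', 'C', 'Q', 'S'],
    'Fare': [7.25, 71.28, 8.46, 53.10],
})

# One-hot encode 'Embarked' into indicator columns
titanic = pd.get_dummies(titanic, columns=['Embarked'], prefix='Embarked')

# Scale 'Fare' to zero mean and unit variance
scaler = StandardScaler()
titanic['Fare'] = scaler.fit_transform(titanic[['Fare']]).ravel()
print(titanic.columns.tolist())
```

One-hot encoding avoids imposing an artificial ordering on the ports, which label encoding would do, and scaling keeps large-valued features like Fare from dominating distance-based models.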

Conclusion

Preprocessing is a critical step in any data science project. In this blog post, we covered the essential steps of loading data, performing exploratory data analysis, cleaning the data, and feature engineering. These steps help ensure our data is ready for analysis or model building. The next step is to use this preprocessed data to build predictive models and evaluate their performance. For further insights, take a look at my Colab notebook.

By following these steps, beginners can get a solid foundation in data preprocessing, setting the stage for more advanced data analysis and machine learning tasks. Happy coding!
