
Davis David

Why You Need to Explore Your Data & How You Can Start

We live in a world where massive amounts of data are generated every single day: from the smartphones we use, what we search on Google or Bing, what we post, like, comment on, or share on social media platforms, what we buy on e-commerce sites, and data generated by machines and other sources. We are in the Data Age, and data is the new oil.


Quick Fact: An article in Forbes states that "The amount of data we produce every day is truly mind-boggling. There are 2.5 quintillion bytes of data created each day at our current pace."

Data has a lot of potential if you can extract insights from it: it allows you to make data-driven decisions in whatever business you are doing instead of relying on experience alone. Companies large and small have started to use data to better understand their customers and their sales and marketing behavior, and to make more accurate decisions for their business.

The question is: how can you start finding insights from your data in order to make data-driven decisions?

It all starts with exploring your data to find and understand the hidden patterns, knowledge, and facts that can help you make better decisions.

In this article, you will learn:

  • What exploratory data analysis is.
  • Why exploratory data analysis is important.
  • Python packages you can use to explore your data.
  • A practical example with a real-world dataset.

What is Exploratory Data Analysis?

Exploratory Data Analysis (EDA) refers to the critical process of performing initial investigations on data in order to discover patterns, spot anomalies, test hypotheses, and check assumptions with the help of summary statistics and graphical representations.

It is good practice to understand the data first and try to gather as many insights from it as possible.

Why is Exploratory Data Analysis Important?

By exploring your data, you can benefit in several ways:

  • Identifying the most important variables/features in your dataset.
  • Testing hypotheses or checking assumptions related to the dataset.
  • Checking the quality of the data for further processing and cleaning.
  • Delivering data-driven insights to business stakeholders.
  • Verifying that expected relationships actually exist in the data.
  • Finding unexpected structures or patterns in the data.

Python packages for Exploratory Data Analysis

The following Python packages will help you start exploring your dataset (an install command is shown after the list if you need it).

  • Pandas: a Python package focused on data manipulation and analysis.
  • NumPy: a general-purpose array-processing package.
  • Matplotlib: a Python 2D plotting library that produces publication-quality figures in a variety of formats.
  • Seaborn: a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
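
If you don't already have these packages installed, you can usually install them all at once with pip (the exact command may vary depending on your Python setup):

pip install pandas numpy matplotlib seaborn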

Now that you know what EDA is and why it matters, let's move on and explore the Financial Inclusion in Africa dataset from Zindi Africa, so that you can understand the important steps to follow when you analyze your own dataset.

Exploratory Data Analysis for the Financial Inclusion in Africa Dataset

The first important step is to understand the problem statement behind the dataset you are going to analyze. This will help you generate hypotheses or assumptions about the dataset.

1. Understand the Problem Statement

Financial Inclusion remains one of the main obstacles to economic and human development in Africa. For example, across Kenya, Rwanda, Tanzania, and Uganda only 9.1 million adults (or 13.9% of the adult population) have access to or use commercial bank accounts.

Traditionally, access to bank accounts has been regarded as an indicator of financial inclusion. Despite the proliferation of mobile money in Africa and the growth of innovative fintech solutions, banks still play a pivotal role in facilitating access to financial services. Access to bank accounts enables households to save and facilitate payments while also helping businesses build up their credit-worthiness and improve their access to other financial services. Therefore, access to bank accounts is an essential contributor to long-term economic growth.

To learn more about the problem statement, visit the Zindi Africa competition on Financial Inclusion in Africa.

2. Type of the Problem

After going through the problem statement, you can see that this is a classification problem: you have to predict whether or not an individual is likely to have or use a bank account. However, you will not apply any machine learning techniques in this article.

3. Hypothesis Generation

This is a very important stage of data exploration. It involves understanding the problem in detail and brainstorming as many factors as possible that could impact the outcome. It is done after understanding the problem statement thoroughly and before looking at the data.

Below are some of the factors that I think can affect a person's chance of having a bank account:

  • People who have mobile phones are less likely to use bank accounts because of mobile money services.
  • People who are employed are more likely to have bank accounts than people who are unemployed.
  • People with low education levels are less likely to have bank accounts.
  • People in rural areas are less likely to have bank accounts.
  • Females are less likely to have bank accounts.

Now let's load and analyze our dataset to see whether the assumptions we generated are valid. You can download the dataset and notebook here.
Load Python Packages

We import all the important Python packages to start analyzing our dataset.

# import important modules  
import pandas as pd 
import numpy as np 
import matplotlib.pyplot as plt 
import seaborn as sns  

# make axis labels easier to read
plt.rcParams["axes.labelsize"] = 18 

# hide warnings to keep the notebook output clean
import warnings 
warnings.filterwarnings('ignore')

# show plots inline in the notebook
%matplotlib inline

Load the Financial Inclusion Dataset

# Import data
data = pd.read_csv('../data/financial_inclusion.csv')

Let’s see the shape of our data.

# print shape 
print('train data shape :', data.shape)
train data shape : (23524, 13)

In our dataset, we have 23,524 rows and 13 columns.

We can observe the first five rows of our dataset by using the head() method from the pandas library.

# Inspect Data by showing the first five rows 
data.head()

It is important to understand the meaning of each feature so you can really understand the dataset. Click here to get the definition of each feature presented in the dataset.

We can get more information about the features by using the info() method from pandas.

# show some information about the dataset 
print(data.info())

The output shows the list of variables/features, their sizes, whether they contain missing values, and the data type of each variable. This dataset has no missing values, and it contains 3 features of integer type and 10 features of object type.

If you want to learn how to handle missing data in your dataset, I recommend reading the article "How to Handle Missing Data with Python" by Jason Brownlee.
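
Even though this dataset has no missing values, it is good practice to verify that yourself before moving on. Here is a minimal sketch, using the data object loaded above, that counts missing values per column:

# count missing values in each column (all zeros for this dataset)
print(data.isnull().sum())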

4. Univariate Analysis

In this section, we will do univariate analysis. It is the simplest form of analyzing data, where we examine each variable individually. For categorical features, we can use frequency tables or bar plots to count the observations in each category of a particular variable. For numerical features, probability density plots can be used to look at the distribution of the variable (a small sketch is shown below).
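
As a quick illustration of a probability density plot, here is a minimal sketch (using the data object and packages loaded above) that plots the distribution of age_of_respondent, one of the numerical features in this dataset. The sections below use bar plots and histograms instead, so treat this as an optional extra view:

# probability density plot for a numerical feature
plt.figure(figsize=(16, 6))
sns.kdeplot(data['age_of_respondent'])
plt.xlabel('Age of Respondent')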

The following code shows the counts of the unique values in the bank_account variable, where Yes means the person has a bank account and No means the person doesn't.

# Frequency table: count of each category in the target variable
data['bank_account'].value_counts()


# Explore Target distribution 
sns.catplot(x="bank_account", kind="count", data= data)

The plot shows that we have far more No than Yes examples in our target variable, which means that the majority of people don't have bank accounts.
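
To put a number on this imbalance instead of only eyeballing the plot, you can look at the class proportions. A minimal sketch, assuming the same data object:

# proportion of each class in the target variable
print(data['bank_account'].value_counts(normalize=True))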

# Explore Country distribution 
sns.catplot(x="country", kind="count", data=data)

The country plot above shows that most of the data were collected in Rwanda and the least in Uganda.

# Explore Location distribution 
sns.catplot(x="location_type", kind="count", data=data)

In the location_type feature, more people live in rural areas than in urban areas.

# Explore Years distribution 
sns.catplot(x="year", kind="count", data=data)

In the year feature, most of the data were collected in 2016.

# Explore cellphone_access distribution 
sns.catplot(x="cellphone_access", kind="count", data=data)

In the cellphone_access feature, most of the participants have access to a cellphone.

# Explore gender_of_respondents distribution 
sns.catplot(x="gender_of_respondent", kind="count", data=data)

In the gender_of_respondent feature, we have more females than males.

# Explore relationship_with_head distribution 
sns.catplot(x="relationship_with_head", kind="count", data=data);
plt.xticks(
    rotation=45, 
    horizontalalignment='right',
    fontweight='light',
    fontsize='x-large'  
)

In the relationship_with_head feature, most participants are heads of household, while very few are other non-relatives.

# Explore marital_status  distribution 
sns.catplot(x="marital_status", kind="count", data=data);
plt.xticks(
    rotation=45, 
    horizontalalignment='right',
    fontweight='light',
    fontsize='x-large'  
)

In the marital_status feature, most of the participants are married/living together.

# Explore education_level  distribution 
sns.catplot(x="education_level", kind="count", data=data); 

plt.xticks(
    rotation=45, 
    horizontalalignment='right',
    fontweight='light',
    fontsize='x-large'  
)

In the education_level feature, most of the participants have a primary level of education.

# Explore job_type distribution 
sns.catplot(x="job_type", kind="count", data=data); 

plt.xticks(
    rotation=45, 
    horizontalalignment='right',
    fontweight='light',
    fontsize='x-large'  
)

In the job_type feature, most of the participants are self-employed.

# Explore household_size distribution 

plt.figure(figsize=(16, 6))
data.household_size.hist() 
plt.xlabel('Household  size')

household_size is not normally distributed, and the most common number of people living in a household is 2.

# Explore age_of_respondent distribution 
plt.figure(figsize=(16, 6))
data.age_of_respondent.hist() 
plt.xlabel('Age of Respondent')

In our last feature, age_of_respondent, most of the participants are between 25 and 35 years old.
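
To complement the histograms, summary statistics give you exact numbers for the numerical features. A small sketch, assuming the same data object:

# summary statistics for the numerical features
print(data[['household_size', 'age_of_respondent']].describe())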

5. Bivariate Analysis

Bivariate analysis is the simultaneous analysis of two variables (attributes). It explores the relationship between two variables: whether an association exists and how strong it is, or whether there are differences between the two variables and how significant those differences are.

After looking at every variable individually in Univariate analysis, we will now explore them again with respect to the target variable.
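
The plots below use count plots for this. As a complementary approach (not part of the original walkthrough), a cross-tabulation can express the same comparison as a table, which makes the proportions easier to read. A minimal sketch, assuming the same data object:

# cross-tabulation of location type against the target variable,
# normalized so that each row sums to 1
print(pd.crosstab(data['location_type'], data['bank_account'], normalize='index'))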

# Explore location_type vs bank account 
plt.figure(figsize=(16, 6))
sns.countplot(x='location_type', hue='bank_account', data=data)
plt.xticks(
    fontweight='light',
    fontsize='x-large'  
)

From the above plot, you can see that the majority of people living in rural areas don't have bank accounts. Therefore, the assumption we made during hypothesis generation is valid: people living in rural areas are less likely to have bank accounts.

# Explore gender_of_respondent vs bank account 
plt.figure(figsize=(16, 6))
sns.countplot(x='gender_of_respondent', hue='bank_account', data=data)
plt.xticks(
    fontweight='light',
    fontsize='x-large'  
)

In the above plot, we compare the target variable (bank_account) against gender_of_respondent. The plot shows that there is a small difference between males and females who have bank accounts (the number of males with accounts is greater than the number of females). This supports our assumption that females are less likely to have bank accounts.

# Explore cellphone_access vs bank account 
plt.figure(figsize=(16, 6))
sns.countplot(x='cellphone_access', hue='bank_account', data=data)
plt.xticks(
    fontweight='light',
    fontsize='x-large'  
)

The cellphone_access plot shows that the majority of people who have cellphone access don't have bank accounts. This supports our assumption that people who have access to a cellphone are less likely to use bank accounts. One of the reasons is the availability of mobile money services, which are more accessible and affordable, especially for people living in rural areas.


# Explore education_level vs bank account 
plt.figure(figsize=(16, 6))
sns.countplot(x='education_level', hue='bank_account', data=data)
plt.xticks(
    rotation=45, 
    horizontalalignment='right',
    fontweight='light',
    fontsize='x-large'  
)

The education_level plot shows that the majority of people have primary education, and most of them don't have bank accounts. This also supports our assumption that people with lower education levels are less likely to have bank accounts.

# Explore job_type vs bank account 
plt.figure(figsize=(16, 6))
sns.countplot(x='job_type', hue='bank_account', data=data)
plt.xticks(
    rotation=45, 
    horizontalalignment='right',
    fontweight='light',
    fontsize='x-large'  
)

The job_type plot shows that the majority of people who are self-employed don't have bank accounts, followed by those who are informally employed and those in farming and fishing.

Now you understand the important steps you can take to explore your dataset and find insights and hidden patterns. You can go further by comparing the relationships among the independent features in the dataset (a small sketch is shown below).
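
For example, here is a minimal sketch, assuming the same data object and imports as above, that compares two independent features against each other (education_level split by location_type):

# relationship between two independent features
plt.figure(figsize=(16, 6))
sns.countplot(x='education_level', hue='location_type', data=data)
plt.xticks(rotation=45, horizontalalignment='right')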

But what if you have a dataset with more than 100 features (columns)? Do you think analyzing each feature one by one would be the best way? Having many features in your dataset means it will take a lot of time to analyze and find insights.

A better way to solve this problem is to use a Python package called pandas-profiling, a data profiling package. This package will speed up the exploratory data analysis steps.

DATA PROFILING PACKAGE

Profiling is a process that helps you understand your data, and pandas-profiling is a Python package that does exactly that. It is a simple and fast way to perform exploratory data analysis on a pandas DataFrame.

The pandas df.describe() and df.info() functions are normally used as a first step in the EDA process. However, they only give a very basic overview of the data and don't help much with large datasets. pandas-profiling, on the other hand, extends the pandas DataFrame with a df.profile_report() method for quick data analysis.

Pandas profiling generates a complete report for your dataset, which includes:

  • Basic data type information.
  • Descriptive statistics (mean, median, etc.).
  • Most common and extreme values.
  • Quantile statistics (how your data is distributed).
  • Histograms of your data (for visualizing distributions).
  • Correlations (features that are related to each other).

How to install the package

There are three ways you can install pandas-profiling on your computer.

You can install it using the pip package manager by running:

pip install pandas-profiling

Alternatively, you can install it directly from GitHub:

pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip

Or you can install it using the conda package manager by running:

conda install -c conda-forge pandas-profiling

After installing the package, you need to import it with the following code.

# import the package 
import pandas_profiling

Now let’s do the EDA using the package that we have just imported. We can either print the output in the notebook environment or save it to an HTML file that can be downloaded and shared with anyone.

# generate report 
eda_report = pandas_profiling.ProfileReport(data)
eda_report

In the code above, we pass our data object to the ProfileReport method, which generates the report.

If you want to generate an HTML report file, save the ProfileReport to an object and use the to_file() function:

# save the generated report to an HTML file
eda_report.to_file("eda_report.html")

Now you can open the eda_report.html file in your browser and observe the output generated by the package.

The image above shows the first section of the generated report. You can access the entire report here.

Conclusion

You can follow the steps provided in this article to perform exploratory data analysis on your own dataset and start discovering insights and hidden patterns. Keep in mind that datasets come from different sources and contain different data types, which means that some of them, such as time series and text datasets, will require different exploration techniques.

If you learned something new or enjoyed reading this article, please share it so that others can see it. Feel free to leave a comment too. Till then, see you in the next post!

This article first appeared on Medium.
