
kiplimo patrick


UNDERSTANDING YOUR DATA: THE ESSENTIALS OF EXPLORATORY DATA ANALYSIS

Introduction
Whether the end goal for your data is a machine learning model, a set of visualizations, or a user-friendly application, developing fluency in the data at the beginning of the project will bolster its final success.
Essentials of EDA
This is where we learn why data preprocessing is so necessary and beneficial to data analysts.
Because of its vastness and varied sources, today's data is more likely to contain anomalies. Data preprocessing has become the foundational stage of data science, since high-quality data yields more robust models and predictions.
Exploratory data analysis is a data scientist's tool for seeing what the data can reveal beyond the formal modelling or hypothesis-testing task.
Data scientists must always perform EDA to ensure that results are reliable and applicable to the intended outcomes and objectives. It also assists scientists and analysts in confirming that they are on the proper track to achieve the desired results.
Some examples of research questions that guide such a study are:
1. Do data preprocessing approaches (handling missing values, aggregating values, filtering data, treating outliers, transforming variables, and reducing variables) have a significant effect on the accuracy of analysis results?
2. At what significance level is data preprocessing necessary in research studies?
Exploratory Data Analysis Techniques and Their Importance
1. Data Filtering
This is the practice of picking a smaller section of a dataset and using that subset for viewing or analysis. The full dataset is kept, but only a subset of it is used for calculation; filtering is typically a temporary procedure. Common uses of filters include removing inaccurate, incorrect, or subpar observations from a study, extracting data for a specific group of interest, or pulling information for a specific period. During filtering, the data scientist must specify a rule or logic that extracts the cases for the study.
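
As a minimal sketch, a filtering rule like this might be expressed in pandas (the df DataFrame and its columns are hypothetical, made up for illustration):

```python
import pandas as pd

# Hypothetical sales records, made up for illustration
df = pd.DataFrame({
    "region": ["East", "West", "East", "South"],
    "revenue": [1200, 950, -40, 3100],
})

# Filtering rule: keep East-region rows with plausible (non-negative) revenue.
# The full DataFrame is untouched; only the extracted subset is analyzed.
east_valid = df[(df["region"] == "East") & (df["revenue"] >= 0)]
print(east_valid)
```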

2. Data Aggregation
Data aggregation involves gathering unprocessed data into a single location and summarizing it for analysis. Aggregation increases the informational, practical, and usable value of data. The term is often defined from the perspective of a technical user: for an analyst or engineer, data aggregation is the process of integrating unprocessed data from many databases or data sources into a centralized database. The aggregate figures are then created by combining the raw data; a sum or an average is a straightforward example of an aggregate value. Aggregated data feeds analysis, reporting, dashboards, and other data products, and aggregation can improve productivity, decision-making, and time to insight.
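
To make this concrete, here is a minimal sketch of aggregation in pandas, computing a sum and an average per group (the transaction data is hypothetical):

```python
import pandas as pd

# Hypothetical transaction records, made up for illustration
df = pd.DataFrame({
    "region": ["East", "West", "East", "West"],
    "revenue": [1200, 950, 700, 3100],
})

# Combine raw rows into aggregate values per region (a sum and an average)
summary = df.groupby("region")["revenue"].agg(["sum", "mean"])
print(summary)
```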

3. Missing Data
In data analytics, missing values (also called missing data) occur when specific variables or responses are left out or skipped. Omissions can happen due to incorrect data entry, lost files, or broken equipment. Depending on their type, missing values can introduce bias into a model, which makes them problematic. Missing data also implies that, because the sample may be unrepresentative, outcomes may only be generalizable within the study's parameters. To ensure consistency across the entire dataset, it is necessary to recode all missing values with a single label such as "N/A" (short for "not applicable").
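
As a minimal sketch of this in pandas, assuming a small hypothetical survey dataset, one might count the gaps and then recode them with a single consistent label as described above:

```python
import numpy as np
import pandas as pd

# Hypothetical survey responses with gaps, made up for illustration
df = pd.DataFrame({
    "age": [25, np.nan, 41, 33],
    "city": ["Nairobi", "Eldoret", None, "Kisumu"],
})

# Count missing values per column
print(df.isna().sum())

# Recode every missing value with the consistent "N/A" label
df_labeled = df.fillna("N/A")
print(df_labeled)
```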

4. Data Transformation
A transformation rescales the data by applying a function or other mathematical operation to each observation. We sometimes transform the data to make it easier to model when it is strongly skewed (either positively or negatively). In other words, if a variable does not fit a normal distribution, a transformation is worth trying in order to satisfy the assumptions of a parametric statistical test. The most popular data transformation is the log (or natural log), frequently used when all of the observations are positive and most of the values cluster near zero relative to the larger values in the dataset.
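
A minimal sketch of a log transformation with NumPy, assuming a small set of strictly positive, positively skewed values:

```python
import numpy as np

# Hypothetical positively skewed, strictly positive observations
values = np.array([1.2, 0.8, 2.5, 40.0, 150.0])

# The natural-log transform pulls in the long right tail,
# bringing the distribution closer to normal
log_values = np.log(values)
print(log_values)
```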


Visualization techniques in EDA
Visualization techniques play an essential role in EDA, enabling us to explore and understand complex data structures and relationships visually. Some common visualization techniques used in EDA are:
1. Histograms: Histograms are graphical representations that show the distribution of a numerical variable. They help in understanding the central tendency and spread of the data by visualizing its frequency distribution.
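
A minimal sketch in matplotlib, plotting a histogram of a hypothetical skewed sample:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical skewed sample, made up for illustration
data = np.random.default_rng(42).exponential(scale=2.0, size=500)

# Frequency distribution of the variable
plt.hist(data, bins=30, edgecolor="black")
plt.xlabel("value")
plt.ylabel("frequency")
plt.title("Histogram of a skewed variable")
plt.show()
```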

2. Boxplots: A boxplot is a graph showing the distribution of a numerical variable. This visualization technique helps identify any outliers and understand the spread of the data by visualizing its quartiles.
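
A minimal boxplot sketch in matplotlib, using hypothetical data with two injected outliers:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical measurements plus two obvious outliers
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(50, 10, 200), [120, 130]])

# The box shows the quartiles; points beyond the whiskers flag outliers
plt.boxplot(data)
plt.ylabel("value")
plt.title("Boxplot with quartiles and outliers")
plt.show()
```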

3. Heatmaps: Heatmaps are graphical representations of data in which colors represent values. They are often used to display complex datasets, providing a quick and easy way to visualize patterns and trends in large amounts of data.
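
A minimal sketch using seaborn to draw a correlation heatmap over a hypothetical numeric dataset:

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical numeric dataset, made up for illustration
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=list("ABCD"))

# Color encodes each pairwise correlation value
sns.heatmap(df.corr(), annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation heatmap")
plt.show()
```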


4. Bar charts: A bar chart is a graph that shows the distribution of a categorical variable. It is used to visualize the frequency distribution of the data, which helps to understand the relative frequency of each category.
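
A minimal bar-chart sketch in matplotlib, using hypothetical category counts:

```python
import matplotlib.pyplot as plt

# Hypothetical counts per category, made up for illustration
categories = ["East", "West", "North", "South"]
counts = [42, 31, 18, 27]

plt.bar(categories, counts)
plt.xlabel("region")
plt.ylabel("count")
plt.title("Bar chart of a categorical variable")
plt.show()
```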

5. Line charts: A line chart is a graph that shows the trend of a numerical variable over time. It is used to visualize the changes in the data over time and to identify any patterns or trends.
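
A minimal line-chart sketch in matplotlib, plotting a hypothetical monthly series:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical monthly sales series, made up for illustration
dates = pd.date_range("2023-01-01", periods=12, freq="MS")
sales = [10, 12, 11, 15, 18, 17, 21, 24, 23, 26, 30, 33]

plt.plot(dates, sales, marker="o")
plt.xlabel("month")
plt.ylabel("sales")
plt.title("Line chart of a variable over time")
plt.show()
```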

6. Pie charts: A pie chart is a graph that shows the proportions of a categorical variable. It is used to visualize each category's relative proportion and to understand the distribution of the data.
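
A minimal pie-chart sketch in matplotlib, using the same hypothetical category counts:

```python
import matplotlib.pyplot as plt

# Hypothetical shares per category, made up for illustration
labels = ["East", "West", "North", "South"]
shares = [42, 31, 18, 27]

# Each wedge shows a category's relative proportion
plt.pie(shares, labels=labels, autopct="%1.0f%%")
plt.title("Pie chart of category proportions")
plt.show()
```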
