Akhilesh

Dirty Data: How to Find It and What to Do

You loaded your data. You ran head(). Everything looks fine.

It is not fine.

The data that looks fine in head() hides its problems. The missing values are three thousand rows down. The duplicates are in the middle. The date column that looks like a date is actually a string and will break your model silently. The salary column has a value of negative forty thousand that nobody caught.

Every real dataset has these problems. Every single one without exception. The question is not whether your data is dirty. The question is whether you found the dirt before you built a model on top of it.

This post is about finding it. Systematically. All of it.


The Dataset We Are Working With

Create this CSV first so you have something realistic to clean.

import pandas as pd
import numpy as np

data = {
    "id":         [1, 2, 3, 4, 5, 4, 6, 7, 8, 9, 10],
    "name":       ["Alex", "priya", "SAM", "Jordan", None, "Jordan", "Lisa", "Ravi ", "Tom", "Nina", "Oscar"],
    "age":        [25, 30, 22, 35, 28, 35, -5, 150, 31, 29, 27],
    "salary":     [55000, 82000, None, 95000, 67000, 95000, 48000, 71000, None, 63000, 59000],
    "join_date":  ["2022-01-15", "2021-03-22", "2023-06-01", "20/04/2019",
                   "2022-11-30", "20/04/2019", "2021-07-14", "invalid_date",
                   "2023-02-28", "2022-08-10", "2021-12-05"],
    "department": ["Eng", "Marketing", "eng", "Sales", "Marketing",
                   "Sales", "Eng", "Engineering", "Eng", "marketing", "Sales"]
}

df = pd.DataFrame(data)
df.to_csv("messy_data.csv", index=False)
df = pd.read_csv("messy_data.csv")
print(df)

Look at what is wrong in this data. Duplicate rows (row 3 and row 5 are the same person). Missing values in name, salary. Negative age (-5). Impossible age (150). Inconsistent department names (Eng, eng, Engineering, marketing, Marketing). Inconsistent date formats. A trailing space in "Ravi ". An invalid date string.

This is mild compared to real datasets.


Step 1: The Full Audit

Before fixing anything, see everything that needs fixing.

print("SHAPE:", df.shape)
print("\nDTYPES:")
print(df.dtypes)

print("\nMISSING VALUES:")
missing = df.isnull().sum()
missing_pct = (df.isnull().sum() / len(df) * 100).round(1)
missing_report = pd.DataFrame({"count": missing, "percent": missing_pct})
print(missing_report[missing_report["count"] > 0])

print("\nDUPLICATE ROWS:", df.duplicated().sum())

print("\nUNIQUE VALUES PER COLUMN:")
for col in df.columns:
    print(f"  {col}: {df[col].nunique()} unique values")

Output:

SHAPE: (11, 6)

DTYPES:
id             int64
name          object
age            int64
salary        float64
join_date      object
department    object

MISSING VALUES:
        count  percent
name        1      9.1
salary      2     18.2

DUPLICATE ROWS: 1

UNIQUE VALUES PER COLUMN:
  id: 10 unique values
  name: 9 unique values
  age: 10 unique values
  salary: 8 unique values
  join_date: 10 unique values
  department: 6 unique values

One duplicate row. Missing values in name and salary. Department has 6 unique values but should probably have 3. Already suspicious.


Step 2: Duplicate Rows

Always handle duplicates before anything else. Every statistic you compute later, including the medians you fill with, gets skewed by repeated rows, and other fixes can make distinct rows identical so you can no longer tell real duplicates from manufactured ones.

print("Duplicate rows:")
print(df[df.duplicated(keep=False)])

df = df.drop_duplicates()
print(f"\nAfter removing duplicates: {df.shape}")

Output:

Duplicate rows:
   id    name  age   salary   join_date department
3   4  Jordan   35  95000.0  20/04/2019      Sales
5   4  Jordan   35  95000.0  20/04/2019      Sales

After removing duplicates: (10, 6)

keep=False shows you all duplicated rows, not just the second occurrence. Good for verifying what you are about to delete.

Sometimes duplicate IDs are the issue, not duplicate rows. A row might have the same ID but different values, which means data corruption or a merge that went wrong.

duplicate_ids = df[df["id"].duplicated(keep=False)]
if len(duplicate_ids) > 0:
    print("Rows with duplicate IDs:")
    print(duplicate_ids)
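When duplicate IDs do turn up, dropping blindly loses information. One possible policy, sketched here on a hypothetical two-row conflict (not the dataset above), is to keep the most complete row per ID:

```python
import pandas as pd

# Hypothetical conflict: two rows share id 4, but one is missing salary.
conflicts = pd.DataFrame({
    "id":     [1, 4, 4],
    "salary": [55000.0, 95000.0, None],
})

# Keep the row with the fewest missing values for each id.
conflicts["n_missing"] = conflicts.isnull().sum(axis=1)
resolved = (conflicts.sort_values("n_missing")
                     .drop_duplicates(subset="id", keep="first")
                     .drop(columns="n_missing")
                     .sort_values("id"))
print(resolved)
```

Whether "most complete wins" is the right policy depends on where the data came from; sometimes the newest record should win instead.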

Step 3: Missing Values

You saw the counts. Now decide what to do with each one.

print("Rows with any missing value:")
print(df[df.isnull().any(axis=1)])

Output:

    id name  age   salary   join_date department
2    3  SAM   22      NaN  2023-06-01        eng
4    5  NaN   28  67000.0  2022-11-30  Marketing
8    8  Tom   31      NaN  2023-02-28        Eng

Three options for each missing value: drop the row, fill with a calculated value, fill with a placeholder.

df["salary"] = df["salary"].fillna(df["salary"].median())

df["name"] = df["name"].fillna("Unknown")

print("\nMissing values after filling:")
print(df.isnull().sum())

Output:

Missing values after filling:
id            0
name          0
age           0
salary        0
join_date     0
department    0
dtype: int64

Why median for salary and not mean? Because salary distributions are skewed. A few very high earners inflate the mean. Median is more representative of a typical value.
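You can see the effect with a quick, hypothetical example: a single outlier salary moves the mean far more than the median.

```python
import pandas as pd

# Five typical salaries plus one executive outlier.
salaries = pd.Series([48000, 55000, 59000, 63000, 67000, 1_500_000])

print(salaries.mean())    # pulled far above every typical salary
print(salaries.median())  # 61000.0, still a typical value
```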


Step 4: Impossible Values

Missing values are marked. Impossible values are hiding as normal numbers.

print("Age distribution:")
print(df["age"].describe())

print("\nSuspicious ages:")
print(df[(df["age"] < 16) | (df["age"] > 100)][["id", "name", "age"]])

Output:

Age distribution:
count     10.000000
mean      37.200000
std       41.144...
min       -5.000000
25%       25.500000
50%       28.500000
75%       30.750000
max      150.000000

Suspicious ages:
   id  name  age
6   6  Lisa   -5
7   7  Ravi  150

A mean of 37.2 when most values cluster around 25-35 already tells you something is off. The min of -5 and max of 150 confirm it.

age_median = df[(df["age"] >= 16) & (df["age"] <= 100)]["age"].median()

df.loc[(df["age"] < 16) | (df["age"] > 100), "age"] = age_median
print(f"Replaced impossible ages with median: {age_median}")
print(df[["id", "name", "age"]])

Replace with median of the valid values only. Using the contaminated median would pull the central value toward the impossible numbers.
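If several columns need this treatment, it is worth wrapping in a small helper. replace_out_of_range below is a hypothetical function written for this post, not pandas API:

```python
import pandas as pd

def replace_out_of_range(s: pd.Series, lo, hi) -> pd.Series:
    """Replace values outside [lo, hi] with the median of the in-range values."""
    valid = s.between(lo, hi)
    return s.where(valid, s[valid].median())

ages = pd.Series([25, 30, -5, 150, 28])
print(replace_out_of_range(ages, 16, 100).tolist())  # [25.0, 30.0, 28.0, 28.0, 28.0]
```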


Step 5: Inconsistent Strings

The department column has six unique values but should have three.

print("Raw department values:")
print(df["department"].value_counts())

Output:

Eng            3
Marketing      2
Sales          2
eng            1
Engineering    1
marketing      1

Three real departments written six different ways. Strip whitespace, lowercase everything, then map to canonical names.

df["name"] = df["name"].str.strip().str.title()
df["department"] = df["department"].str.strip().str.lower()

dept_map = {
    "eng": "Engineering",
    "engineering": "Engineering",
    "marketing": "Marketing",
    "sales": "Sales"
}

df["department"] = df["department"].map(dept_map)

print("Cleaned department values:")
print(df["department"].value_counts())

Output:

Engineering    5
Marketing      3
Sales          2

Always strip whitespace on string columns immediately after loading. Trailing spaces are invisible and cause "Ravi " != "Ravi" comparisons to fail silently.
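Rather than stripping column by column, you can strip every string column in one pass right after loading. A small sketch on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({
    "name":       ["Ravi ", " Alex"],
    "department": ["Sales ", "Eng"],
    "age":        [31, 25],
})

# Strip whitespace from every object (string) column; numeric columns are untouched.
str_cols = df.select_dtypes(include="object").columns
df[str_cols] = df[str_cols].apply(lambda s: s.str.strip())
print(df)
```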


Step 6: Date Columns

Pandas reads dates as strings unless you tell it otherwise. Mixed date formats are a common disaster.

print("Current join_date dtype:", df["join_date"].dtype)
print("Sample values:")
print(df["join_date"].head(5))

Output:

Current join_date dtype: object
Sample values:
0    2022-01-15
1    2021-03-22
2    2023-06-01
3    20/04/2019
4    2022-11-30

Two formats: YYYY-MM-DD and DD/MM/YYYY. One invalid string.

df["join_date"] = pd.to_datetime(df["join_date"], errors="coerce", dayfirst=False)

print("\nAfter conversion:")
print(df["join_date"].dtype)
print(df[["name", "join_date"]])

Output:

After conversion:
datetime64[ns]

        name  join_date
0       Alex 2022-01-15
1      Priya 2021-03-22
2        Sam 2023-06-01
3     Jordan        NaT
4    Unknown 2022-11-30
...
7       Ravi        NaT

errors="coerce" converts anything it cannot parse to NaT (Not a Time, the datetime equivalent of NaN). The invalid date and the misformatted date became NaT. Now you know exactly where the problems are.
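Relying on format inference works here, but it is fragile. A more explicit approach parses each known format separately and combines the results, so only genuinely unparseable strings stay NaT:

```python
import pandas as pd

dates = pd.Series(["2022-01-15", "20/04/2019", "invalid_date"])

# Parse ISO dates first, then fill the gaps with a day-first parse.
iso = pd.to_datetime(dates, format="%Y-%m-%d", errors="coerce")
dayfirst = pd.to_datetime(dates, format="%d/%m/%Y", errors="coerce")
parsed = iso.fillna(dayfirst)

print(parsed)  # only the invalid string is left as NaT
```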


Step 7: Data Type Fixes

print("Before dtype fixes:")
print(df.dtypes)

# IDs are labels, not quantities; as strings they can't be accidentally summed or averaged
df["id"] = df["id"].astype(str)

# salary became float only because of the NaNs; after filling them, restore int
df["salary"] = df["salary"].astype(int)

print("\nAfter dtype fixes:")
print(df.dtypes)

Sometimes columns load as the wrong type because of mixed values. A column that should be integer loads as float because it had NaN values (NaN forces float). After filling the NaNs, convert back.
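If you want an integer column that can still hold missing values, pandas' nullable Int64 dtype (capital I) avoids the float round-trip entirely:

```python
import pandas as pd

# Nullable integer dtype: a missing value no longer forces the column to float.
s = pd.Series([55000, None, 63000], dtype="Int64")
print(s.dtype)          # Int64
print(s.isna().sum())   # 1
```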


Step 8: Final Validation

After all cleaning, verify the result.

def final_check(df):
    print("=" * 50)
    print("FINAL DATA QUALITY REPORT")
    print("=" * 50)
    print(f"Shape: {df.shape}")
    print(f"Missing values: {df.isnull().sum().sum()}")
    print(f"Duplicates: {df.duplicated().sum()}")
    print(f"\nDtypes:")
    print(df.dtypes)
    print(f"\nNumerical column stats:")
    print(df.describe())
    print("=" * 50)

final_check(df)

Run this after every cleaning operation. If anything unexpected appears, you catch it before it contaminates your analysis or your model.
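For rules you never want silently violated, hard assertions are stricter than a printed report: the script stops the moment one fails. A minimal sketch on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"age": [25, 30, 28], "salary": [55000, 82000, 63000]})

# Fail loudly instead of letting bad data slip into the model.
assert df["age"].between(16, 100).all(), "age out of range"
assert df["salary"].notnull().all(), "salary has missing values"
assert not df.duplicated().any(), "duplicate rows present"
print("All checks passed")
```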


The Thinking Behind Every Decision

Cleaning decisions are not mechanical. They require judgment.

When you drop a row versus fill a missing value, you are making a choice about what to assume. Dropping is safer but loses data. Filling is riskier but keeps samples.

When you replace an impossible age with the median, you are assuming the measurement was an error. But what if age 150 was a test record that should be deleted entirely?

When you standardize department names, you are assuming "eng" and "Engineering" mean the same thing. Usually true. Sometimes not.

Document every decision you make. Not for anyone else. For yourself, when you come back to this code in three months and wonder why you did what you did.


A Blog That Goes Deep on This

Chris Albon, creator of the Machine Learning Flashcards, maintains a well-known collection of practical data cleaning recipes at chrisalbon.com. His notes on handling missing values in Pandas are among the most referenced pieces on the topic. Very code-first, very practical. Search "Chris Albon pandas missing data" and it comes right up.

Towards Data Science published a piece by Jeff Hale called "The Ultimate Guide to Data Cleaning" that covers this topic across multiple tools and with real datasets. Widely shared in the data science community. Search "Jeff Hale ultimate guide data cleaning".


Try This

Create cleaning_practice.py.

Download the Titanic dataset from Kaggle (it is free, search "Titanic dataset Kaggle" and download the CSV). Load it with Pandas.

Do the full audit: shape, dtypes, missing values, duplicates.

Fix every problem you find:

  • Missing Age values: fill with median age grouped by Pclass (different passenger classes had different age distributions)
  • Missing Cabin values: fill with "Unknown"
  • Missing Embarked values: fill with the most common port
  • Check for impossible values in Fare and Age
  • Standardize any string columns that need it
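The grouped-median fill from the first bullet can be sketched like this, on a toy frame that borrows the Titanic column names (Age, Pclass):

```python
import pandas as pd

df = pd.DataFrame({
    "Pclass": [1, 1, 2, 2, 3, 3],
    "Age":    [38.0, None, 30.0, 24.0, None, 22.0],
})

# Fill each missing Age with the median Age of that passenger's class.
df["Age"] = df["Age"].fillna(df.groupby("Pclass")["Age"].transform("median"))
print(df)
```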

After cleaning, verify with the final check function. The cleaned dataset should have zero missing values except possibly in columns you intentionally left.

Write a short comment above each cleaning step explaining why you made the choice you made.


What's Next

Your data is clean now. Next up is filtering and selecting specific rows and columns efficiently using loc and iloc. You have seen these briefly, but the next post goes deep on the patterns that come up constantly in real analysis.
