Moonlit Capy

Posted on • Edited on • Originally published at blog.elunari.uk

Data Pipeline Design: From Messy CSV to Clean Database


Most real-world data starts messy — inconsistent CSV exports from legacy systems, spreadsheets with merged cells, date formats that vary by row. A good data pipeline transforms this chaos into clean, normalized database records.

Pipeline Architecture

The classic ETL (Extract, Transform, Load) pattern remains the foundation. Each stage has a clear boundary and failure mode.

Extract     → Read raw data from source (CSV, API, FTP)
Transform   → Normalize, validate, enrich, deduplicate
Load        → Write clean records to target database
Monitor     → Log anomalies, send alerts on threshold breaches
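The stages above can be sketched as a small typed pipeline. This is a minimal illustration, not a production parser — the naive CSV split and the `RawRow`/`CleanRow` shapes are assumptions for the example:

```typescript
// Minimal ETL sketch: each stage is a plain function with a clear boundary.
type RawRow = Record<string, string>;
type CleanRow = { id: string; email: string };

function extract(csv: string): RawRow[] {
  // Naive CSV split for illustration; real pipelines should use a CSV parser
  // that handles quoting and embedded commas.
  const [header, ...rows] = csv.trim().split("\n");
  const cols = header.split(",");
  return rows.map((r) => {
    const vals = r.split(",");
    return Object.fromEntries(cols.map((c, i) => [c, vals[i] ?? ""] as [string, string]));
  });
}

function transform(rows: RawRow[]): CleanRow[] {
  // Normalize casing, drop rows missing required fields, deduplicate by id.
  const seen = new Set<string>();
  const out: CleanRow[] = [];
  for (const r of rows) {
    const id = r.id?.trim();
    const email = r.email?.trim().toLowerCase();
    if (!id || !email || seen.has(id)) continue;
    seen.add(id);
    out.push({ id, email });
  }
  return out;
}

function load(rows: CleanRow[]): void {
  // Stand-in for the database write; log record count for monitoring.
  console.log(`loading ${rows.length} clean records`);
}

load(transform(extract("id,email\n1,A@X.COM\n1,a@x.com\n2,b@y.com\n,missing@z.com")));
```

Keeping each stage a pure function makes failures easy to localize: a bad row surfaces in `transform`, a connection error in `load`, and neither contaminates the other.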

Handling Messy Data

  • Date normalization: Parse multiple formats (MM/DD/YYYY, DD-MM-YYYY, ISO 8601) into a single canonical format
  • Address standardization: Normalize street abbreviations (St → Street, Ave → Avenue) and validate against postal databases
  • Deduplication: Use fuzzy matching (Levenshtein distance, Jaro-Winkler) to identify duplicate records with slight variations
  • Type coercion: Convert "yes"/"no"/"1"/"0"/"true"/"false" to consistent booleans
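Two of these — date normalization and type coercion — can be sketched with plain rule-based functions. The format list here is deliberately short; anything unrecognized returns `null` so it can be quarantined rather than guessed at:

```typescript
// Normalize a handful of common date formats to canonical ISO 8601 (YYYY-MM-DD).
function normalizeDate(raw: string): string | null {
  const s = raw.trim();
  // Already ISO 8601
  let m = s.match(/^(\d{4})-(\d{2})-(\d{2})$/);
  if (m) return s;
  // MM/DD/YYYY
  m = s.match(/^(\d{1,2})\/(\d{1,2})\/(\d{4})$/);
  if (m) return `${m[3]}-${m[1].padStart(2, "0")}-${m[2].padStart(2, "0")}`;
  // DD-MM-YYYY
  m = s.match(/^(\d{1,2})-(\d{1,2})-(\d{4})$/);
  if (m) return `${m[3]}-${m[2].padStart(2, "0")}-${m[1].padStart(2, "0")}`;
  return null; // unknown format → quarantine, don't guess
}

// Coerce loose boolean strings to real booleans; null for anything ambiguous.
function coerceBool(raw: string): boolean | null {
  const truthy = new Set(["yes", "y", "1", "true"]);
  const falsy = new Set(["no", "n", "0", "false"]);
  const s = raw.trim().toLowerCase();
  if (truthy.has(s)) return true;
  if (falsy.has(s)) return false;
  return null;
}
```

Note the ambiguity lurking in mixed sources: "03-04-2021" is a different date under MM/DD and DD-MM conventions, so per-source format configuration beats a one-size-fits-all parser.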

AI-Assisted Normalization

For particularly messy data, LLMs can classify and normalize fields that rule-based systems struggle with — company name variations, free-text product descriptions, and ambiguous category labels.

async function normalizeWithAI(rawValue: string, field: string): Promise<string> {
  // `llm` is assumed to be a pre-configured client; swap in your provider's SDK.
  const response = await llm.complete({
    prompt: `Normalize this ${field} value into a standard format.\nInput: "${rawValue}"\nOutput (just the normalized value, nothing else):`,
    maxTokens: 50,
    temperature: 0, // deterministic output, so repeated runs normalize consistently
  })
  return response.trim()
}
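Messy exports repeat the same raw values thousands of times, so it's worth caching AI normalizations rather than paying for a model call per row. A hedged sketch of a generic caching wrapper (the `Normalizer` type and `withCache` name are mine, not from any particular SDK):

```typescript
// Wrap any async normalizer so identical (field, value) pairs hit the model once.
type Normalizer = (rawValue: string, field: string) => Promise<string>;

function withCache(normalize: Normalizer): Normalizer {
  const cache = new Map<string, string>();
  return async (rawValue, field) => {
    // NUL byte separator avoids key collisions between field and value.
    const key = `${field}\u0000${rawValue}`;
    const hit = cache.get(key);
    if (hit !== undefined) return hit;
    const result = await normalize(rawValue, field);
    cache.set(key, result);
    return result;
  };
}

// Usage: const normalize = withCache(normalizeWithAI);
```

For long-running pipelines, the in-memory `Map` could be replaced with a persistent store so normalizations survive restarts.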

Validation Layer

Every record should pass through validation before loading. Reject invalid records into a quarantine table for manual review rather than silently dropping data.

Wrapping Up

Data pipelines are unglamorous but critical. The difference between a good pipeline and a bad one shows up in data quality — and data quality directly impacts every business decision downstream.




Keep Leveling Up

If you found this useful, check out my Dev Reference Card - a downloadable cheat sheet with essential developer patterns, shortcuts, and references condensed into one page.

Grab it here for the price of a coffee.

Browse all my tools and templates at moonlitcapy.gumroad.com
