<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shiva</title>
    <description>The latest articles on DEV Community by Shiva (@shivakoreddi).</description>
    <link>https://dev.to/shivakoreddi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3927769%2Fa4724265-d485-49be-90f6-54c2e4fd076c.png</url>
      <title>DEV Community: Shiva</title>
      <link>https://dev.to/shivakoreddi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shivakoreddi"/>
    <language>en</language>
    <item>
      <title>Why every data quality tool tells you what broke — but leaves you alone to figure out why</title>
      <dc:creator>Shiva</dc:creator>
      <pubDate>Tue, 12 May 2026 18:48:32 +0000</pubDate>
      <link>https://dev.to/shivakoreddi/why-every-data-quality-tool-tells-you-what-broke-but-leaves-you-alone-to-figure-out-why-2fk</link>
      <guid>https://dev.to/shivakoreddi/why-every-data-quality-tool-tells-you-what-broke-but-leaves-you-alone-to-figure-out-why-2fk</guid>
      <description>&lt;p&gt;Most data quality tools describe what the error is. None of them describes why. Last year, I found that out the hard way — I opened a notebook, ran some queries, dug through pipeline logs, and eventually traced it back to a test account that had been deleted without cleaning up its associated orders. The fix took ten minutes. Finding the cause took three hours.&lt;/p&gt;

&lt;p&gt;What bothered me wasn't that the tool missed it — it caught it. What bothered me was that the tool handed me a one-line error and expected me to do all the detective work myself.&lt;/p&gt;

&lt;h2&gt;Every DQ tool has the same blind spot&lt;/h2&gt;

&lt;p&gt;I've used Great Expectations, Soda Core, and dbt tests across different teams. They're all good tools. But they all answer the same question: &lt;em&gt;did this check pass or fail?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's genuinely useful. You know something is wrong. But knowing something is wrong is only the first step, and in my experience it's the easy step.&lt;/p&gt;

&lt;p&gt;The hard part is what comes after the alert fires. You open a notebook and query the failing table. You look at the bad rows and figure out what they have in common. You check pipeline logs to find when the bad data arrived. You trace it upstream to find which source or transform wrote it. Then you decide what to do — fix the data, alert a source team, add a guard.&lt;/p&gt;

&lt;p&gt;None of the tooling helps with any of that. You get &lt;code&gt;FAILED&lt;/code&gt; and then you're on your own.&lt;/p&gt;

&lt;p&gt;For the engineer who built the pipeline, this is annoying but manageable. For whoever is on call at 2am rebuilding context from scratch, it's genuinely painful.&lt;/p&gt;
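
&lt;p&gt;To make that concrete, here's roughly what the detective work looks like as queries. A minimal sketch, assuming a DuckDB warehouse with orders and customers tables like the ones in this post; the &lt;code&gt;created_at&lt;/code&gt; column is my assumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Rough sketch of the manual triage loop, not part of any tool.
import duckdb

con = duckdb.connect("warehouse.duckdb")

# 1. Pull the failing rows and look for what they have in common.
bad = con.execute("""
    SELECT o.*
    FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.id
    WHERE c.id IS NULL
""").fetchall()
print(f"{len(bad)} orphaned orders:", bad)

# 2. Narrow down when the bad data arrived
#    (assumes orders has a created_at column).
print(con.execute("""
    SELECT MIN(o.created_at), MAX(o.created_at)
    FROM orders o
    LEFT JOIN customers c ON o.customer_id = c.id
    WHERE c.id IS NULL
""").fetchall())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;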

&lt;h2&gt;What good diagnosis actually looks like&lt;/h2&gt;

&lt;p&gt;Here's the same foreign key failure, described the way a senior engineer would explain it to you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Rule:    orders_customer_fk  (critical)
Table:   orders
Failed:  3 rows

Explanation:
  3 orders reference customer_id=99 which does not exist in the
  customers table. Downstream revenue attribution for these orders
  will be silently dropped in any JOIN-based report.

Likely cause:
  customer_id=99 appears to be a test account that was deleted from
  the customers table without cleaning up associated orders.

Recommended action:
  1. SELECT * FROM orders WHERE customer_id = 99
  2. Check customers table: was id=99 recently deleted?
  3. If test data: DELETE FROM orders WHERE customer_id = 99
  4. Add a cleanup job or FK constraint to prevent recurrence

Proposed SQL:
  DELETE FROM orders WHERE customer_id NOT IN (SELECT id FROM customers);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same failure. Completely different experience. A bare &lt;code&gt;FAILED&lt;/code&gt; stops you. This output gives you a path forward.&lt;/p&gt;

&lt;h2&gt;The approach: rule validation + LLM diagnosis&lt;/h2&gt;

&lt;p&gt;The idea is straightforward — keep the deterministic rule engine (fast, free, reliable) and add an LLM layer that only runs when rules fail.&lt;/p&gt;

&lt;p&gt;The LLM doesn't replace the validator. It interprets the failure. It receives the rule definition, the failing rows, the table schema, and any common causes you've documented — and produces the explanation you'd normally piece together manually.&lt;/p&gt;
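
&lt;p&gt;In code, the diagnosis step is mostly careful prompt assembly. An illustrative sketch of that hand-off; the function and field names are my assumptions, not Aegis's internals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Illustrative only: how a diagnosis prompt might be assembled from
# a failed rule. Names are assumptions, not Aegis's actual API.
def build_diagnosis_prompt(rule, failing_rows, schema, known_causes):
    return f"""You are a senior data engineer. A data quality rule failed.

Rule definition:
{rule}

Table schema:
{schema}

Sample failing rows (capped to keep the prompt small):
{failing_rows[:20]}

Documented common causes for this table:
{known_causes}

Explain what broke, the most likely root cause, the downstream impact,
and a safe remediation SQL statement."""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;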

&lt;p&gt;The pipeline runs like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;validate → classify → diagnose → root cause → SQL fix → report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tables run in parallel. For each failure, a classifier decides whether it's worth an LLM call based on severity and failure rate. Only failures that cross the threshold get diagnosed. If everything passes, nothing is charged.&lt;/p&gt;
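
&lt;p&gt;The gating logic can be as simple as a severity-aware threshold. A sketch of the idea; the severity labels and cutoffs are my assumptions, not the framework's defaults:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of a classifier that decides whether a failure earns an LLM
# call. Severity labels and thresholds here are assumptions.
SEVERITY_THRESHOLDS = {"critical": 0.0, "high": 0.01, "medium": 0.05}

def worth_diagnosing(severity, failed, total):
    threshold = SEVERITY_THRESHOLDS.get(severity)
    if threshold is None:   # unknown or low severity: never spend tokens
        return False
    if failed == 0:         # passing rules never trigger a call
        return False
    return failed / total &amp;gt; threshold
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;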

&lt;p&gt;Every LLM call is logged to a local SQLite audit trail — the exact prompt, response, cost, and latency. Nothing is a black box.&lt;/p&gt;
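
&lt;p&gt;The standard library's &lt;code&gt;sqlite3&lt;/code&gt; is enough for that kind of trail. A minimal sketch of what gets recorded per call; the table and column names are my assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal audit trail for LLM calls using stdlib sqlite3.
# Table and column names are assumptions, not Aegis's schema.
import sqlite3
import time

con = sqlite3.connect("aegis_audit.db")
con.execute("""CREATE TABLE IF NOT EXISTS llm_calls (
    ts REAL, rule TEXT, prompt TEXT, response TEXT,
    cost_usd REAL, latency_s REAL)""")

def log_call(rule, prompt, response, cost_usd, latency_s):
    con.execute("INSERT INTO llm_calls VALUES (?, ?, ?, ?, ?, ?)",
                (time.time(), rule, prompt, response, cost_usd, latency_s))
    con.commit()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;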

&lt;p&gt;I built this as &lt;a href="https://github.com/aegis-dq/aegis-dq" rel="noopener noreferrer"&gt;Aegis DQ&lt;/a&gt;, an open-source Python framework. Here's what a full run looks like against a demo database with intentional dirty data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;╭──────────────── Validation Summary ─────────────────╮
│  Rules checked  │  12                               │
│  Passed         │  1   │  Failed  │  11             │
│  Pass rate      │  8%  │  Cost    │  $0.005576      │
╰─────────────────────────────────────────────────────╯

LLM Diagnoses
  orders_customer_fk  →  Order placed with customer_id=99 that does not exist.
                         Likely cause: customer deleted or test record not cleaned up.
                         Action: Verify customer_id=99; check recent deletions.

  products_sku_unique →  Duplicate SKU-001 — two products share the same identifier.
                         Likely cause: duplicate import from supplier feed.

Remediation SQL
  orders_status_valid     UPDATE orders SET status = 'SHIPPED' WHERE status = 'DISPATCHED';
  products_price_positive UPDATE products SET price = ABS(price) WHERE price &amp;lt; 0;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;11 failures, full diagnosis, proposed SQL for each fixable failure — $0.005 with Claude Haiku.&lt;/p&gt;

&lt;h2&gt;Cost is not the problem you think it is&lt;/h2&gt;

&lt;p&gt;The first question most people ask is whether running an LLM on every failure is expensive.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;--no-llm&lt;/code&gt; it's free — pure rule validation, no API needed. With Claude Haiku, the demo above (12 rules, 4 tables, 11 failures) costs half a cent per run. With Ollama it's $0 at any scale because everything runs locally on your machine.&lt;/p&gt;

&lt;p&gt;The key design decision: only failed rules trigger LLM calls. In a healthy pipeline, most runs cost nothing. The LLM cost scales with your actual failure rate, not your rule count.&lt;/p&gt;
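
&lt;p&gt;The back-of-envelope math is simple. The demo above works out to roughly $0.0005 per diagnosis; plugging that into a hypothetical healthy pipeline (the rule count and failure rate below are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Cost scales with diagnosed failures, not with rule count.
cost_per_diagnosis = 0.0005   # approx. per-call Haiku cost from the demo
rules = 200                   # hypothetical rule count
failure_rate = 0.02           # hypothetical 2% of rules failing per run
runs_per_day = 24             # hourly schedule

daily_cost = rules * failure_rate * cost_per_diagnosis * runs_per_day
print(f"${daily_cost:.3f}/day")   # $0.048/day
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;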

&lt;h2&gt;Generating rules from your schema&lt;/h2&gt;

&lt;p&gt;Writing rules by hand for every new table always felt like unnecessary work. The schema already tells you most of what you need — which columns can't be null, what the value ranges look like, which columns should be unique.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aegis generate&lt;/code&gt; introspects your table and writes the first draft:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aegis generate orders &lt;span class="nt"&gt;--db&lt;/span&gt; warehouse.duckdb &lt;span class="nt"&gt;--output&lt;/span&gt; rules.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
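
&lt;p&gt;Under the hood, the structural half of generation is plain schema introspection. A minimal sketch against DuckDB's &lt;code&gt;information_schema&lt;/code&gt;; the rule-dict shape is illustrative, not Aegis's exact format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch of structural rule generation via schema introspection.
# The rule dicts are illustrative, not Aegis's exact YAML schema.
import duckdb

def draft_rules(db_path, table):
    con = duckdb.connect(db_path)
    cols = con.execute(
        "SELECT column_name, is_nullable "
        "FROM information_schema.columns WHERE table_name = ?",
        [table]).fetchall()
    rules = []
    for name, nullable in cols:
        if nullable == "NO":   # NOT NULL columns become not_null rules
            rules.append({"column": name,
                          "logic": {"type": "not_null"},
                          "status": "draft"})
    return rules
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;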



&lt;p&gt;Pass a plain-text policy document with your business logic and it generates business validation rules alongside the structural ones:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aegis generate orders &lt;span class="nt"&gt;--db&lt;/span&gt; warehouse.duckdb &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--kb&lt;/span&gt; docs/orders_policy.md &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; rules.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A policy file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- status must be one of: placed, confirmed, shipped, delivered, cancelled
- revenue must be greater than 0
- discount_pct must be between 0 and 0.5
- email must be a valid email format
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Produces rules like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;logic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;accepted_values&lt;/span&gt;
    &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;placed&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;confirmed&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;shipped&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;delivered&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;cancelled&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;logic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sql_expression&lt;/span&gt;
    &lt;span class="na"&gt;expression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;revenue&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;0"&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;logic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;between&lt;/span&gt;
    &lt;span class="na"&gt;min_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
    &lt;span class="na"&gt;max_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.5&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;logic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;regex_match&lt;/span&gt;
    &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;^[^@]+@[^@]+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;.[^@]+$"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generated rules come out stamped &lt;code&gt;status: draft&lt;/code&gt; — you review them, promote the ones you want to enforce to &lt;code&gt;active&lt;/code&gt;, and commit to version control. The LLM handles the boilerplate. You handle the judgement.&lt;/p&gt;
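
&lt;p&gt;Enforcement then reduces to filtering on that status field. A sketch, assuming &lt;code&gt;rules.yaml&lt;/code&gt; is a list of rule mappings with a top-level &lt;code&gt;status&lt;/code&gt; key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Sketch: keep only reviewed rules at validation time.
# Assumes each rule mapping carries a top-level "status" field.
import yaml

with open("rules.yaml") as f:
    rules = yaml.safe_load(f)

active = [r for r in rules if r.get("status") == "active"]
print(f"{len(active)} active rules out of {len(rules)} total")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;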

&lt;h2&gt;Where this doesn't replace what you have&lt;/h2&gt;

&lt;p&gt;If you have an existing Great Expectations suite or dbt tests, there's no reason to replace them. Aegis has a dbt manifest parser that converts your existing tests to Aegis rules if you want to layer diagnosis on top — but that's additive, not a migration.&lt;/p&gt;
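
&lt;p&gt;The conversion itself is a walk over dbt's &lt;code&gt;manifest.json&lt;/code&gt;, mapping built-in generic tests onto equivalent rule types. A rough sketch based on my reading of the manifest format, not Aegis's actual parser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Rough sketch: translate dbt generic tests into draft rules.
# Based on my reading of manifest.json, not Aegis's actual parser.
import json

TEST_MAP = {"not_null": "not_null", "unique": "unique",
            "accepted_values": "accepted_values"}

def rules_from_manifest(path):
    with open(path) as f:
        nodes = json.load(f)["nodes"]
    rules = []
    for node in nodes.values():
        if node.get("resource_type") != "test":
            continue
        meta = node.get("test_metadata") or {}
        rule_type = TEST_MAP.get(meta.get("name"))
        if rule_type:
            rules.append({"column": node.get("column_name"),
                          "logic": {"type": rule_type},
                          "status": "draft"})
    return rules
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;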

&lt;p&gt;If you need a business-user UI or an enterprise support contract, Aegis isn't the right fit today. It's a CLI and Python framework, not a SaaS platform.&lt;/p&gt;




&lt;p&gt;The gap between "what failed" and "why it failed" is where a lot of data engineering time quietly disappears. Tooling has gotten very good at detection. Diagnosis is still mostly manual. That feels like the wrong place to spend engineering hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/aegis-dq/aegis-dq" rel="noopener noreferrer"&gt;https://github.com/aegis-dq/aegis-dq&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Install:&lt;/strong&gt; &lt;code&gt;pip install aegis-dq&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Docs:&lt;/strong&gt; &lt;a href="https://aegis-dq.dev" rel="noopener noreferrer"&gt;https://aegis-dq.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy to answer questions in the comments — especially curious what people are using for DQ today and what pain points I might have missed.&lt;/p&gt;

</description>
      <category>dataquality</category>
      <category>agents</category>
      <category>opensource</category>
      <category>dataengineering</category>
    </item>
  </channel>
</rss>
