Arpit Pandey
BiasLens: A Plug-and-Play Fairness Auditor for AI Systems

Detect AI Bias Without Being a Machine Learning Expert

Artificial intelligence is increasingly making decisions that impact people's lives - hiring, lending, healthcare, and more. But what happens when those decisions are biased?
Amazon had to scrap an AI recruiting tool that discriminated against women. Apple Card faced backlash for offering lower credit limits to women. Healthcare systems have shown bias against Black patients compared to white patients with identical conditions.
The pattern is clear: bias is often discovered too late.
And here's the real issue - most tools designed to detect AI bias are built for machine learning engineers, not the people responsible for decisions.
That's the gap I wanted to close.


What is BiasLens?
BiasLens is a web application that helps non-technical users detect and understand bias in AI systems.
Instead of requiring expertise in machine learning, it allows users to simply upload a dataset and receive fairness insights instantly.
What it does:
Upload a CSV file
Automatically detect sensitive attributes (like gender or race)
Compute fairness metrics
Visualize results clearly
Export a compliance-ready report


Why This Matters
Most organizations don't intentionally deploy biased systems. They just don't have the tools - or visibility - to detect bias early.
BiasLens shifts fairness auditing from "reactive damage control" to "proactive evaluation".


The Stack (and Why It Matters)
Frontend
React 18
TypeScript
Tailwind CSS
Recharts

Backend
Python
FastAPI
Pandas & NumPy
IBM AI Fairness 360
ReportLab

Key decisions:
FastAPI for speed, validation, and clean APIs
AIF360 for proven fairness metrics
TypeScript to catch errors early
React for scalable UI

Nothing fancy for the sake of it - just practical, reliable choices.


How It Works
Frontend   →   FastAPI   →   Metrics Engine
   |              |               |
 Upload        Validate        Compute
   |              |               |
 Preview       Schema         Visualize
   |              |               |
 Confirm       Analyze        Export
Flow:
Upload dataset
System validates and suggests schema
User confirms inputs
Metrics are computed
Results are visualized
Report is exported

Simple, fast, and intentional.
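The flow above can be sketched as one small pipeline. This is an illustrative stand-in, not BiasLens's actual code: the column names, the single selection-rate metric, and the JSON report shape are all assumptions.

```python
import csv
import io
import json


def audit_pipeline(csv_text: str, outcome_col: str, group_col: str) -> str:
    """Sketch of upload -> validate -> compute -> export (hypothetical)."""
    # 1. Upload + parse the CSV
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # 2. Validate the schema before computing anything
    if not rows or outcome_col not in rows[0] or group_col not in rows[0]:
        raise ValueError("Missing required columns")
    # 3. Compute one stand-in metric: the selection-rate gap between groups
    by_group: dict[str, list[int]] = {}
    for r in rows:
        by_group.setdefault(r[group_col], []).append(int(r[outcome_col]))
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    # 4. Export a machine-readable report
    return json.dumps({"selection_rates": rates, "max_gap": round(gap, 3)})
```

In the real app each stage is a separate endpoint with user confirmation in between; collapsing them here just shows the data path.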


The Core: 4 Fairness Metrics
BiasLens focuses on four widely accepted fairness metrics.

1. Statistical Parity Difference
Do different groups receive outcomes at the same rate?
Fair range: -0.1 to 0.1
Example: 70% of men hired vs 50% of women → Fail

2. Disparate Impact
Based on the "four-fifths rule" in employment law.
Fair range: 0.8 to 1.25
Example: selection rates of 60% vs 80% (ratio 0.75) → Fail

3. Equal Opportunity Difference
Are qualified individuals treated equally?
Fair range: -0.1 to 0.1
Example: true positive rates of 90% vs 70% → Fail

4. Predictive Parity Difference
Are predictions equally accurate across groups?
Fair range: -0.1 to 0.1
Example: precision of 85% vs 65% → Fail
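The four metrics above have short, standard definitions. BiasLens computes them via AIF360, but a minimal pure-Python sketch makes the math concrete (the function name and group encoding here are my own, not the library's API):

```python
def rate(vals: list[int]) -> float:
    """Fraction of 1s in a list; 0.0 for an empty list."""
    return sum(vals) / len(vals) if vals else 0.0


def fairness_metrics(y_true: list[int], y_pred: list[int], group: list[int]) -> dict:
    """group[i] == 1 marks the privileged group, 0 the unprivileged group."""
    priv = [i for i, g in enumerate(group) if g == 1]
    unpriv = [i for i, g in enumerate(group) if g == 0]

    def sel_rate(idx):  # P(prediction = 1) within a group
        return rate([y_pred[i] for i in idx])

    def tpr(idx):       # true positive rate: recall among the truly qualified
        pos = [i for i in idx if y_true[i] == 1]
        return rate([y_pred[i] for i in pos])

    def ppv(idx):       # positive predictive value: precision of positive calls
        pred_pos = [i for i in idx if y_pred[i] == 1]
        return rate([y_true[i] for i in pred_pos])

    return {
        "statistical_parity_difference": sel_rate(unpriv) - sel_rate(priv),
        "disparate_impact": sel_rate(unpriv) / sel_rate(priv)
        if sel_rate(priv) else float("inf"),
        "equal_opportunity_difference": tpr(unpriv) - tpr(priv),
        "predictive_parity_difference": ppv(unpriv) - ppv(priv),
    }
```

In production, prefer AIF360's `BinaryLabelDatasetMetric` and `ClassificationMetric`, which implement these same quantities with proper edge-case handling.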

Making It Understandable
Each metric is labeled as:
Pass → acceptable
Warning → borderline
Fail → problematic

Because raw numbers don't help non-technical users - decisions do.
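Turning a raw number into one of these three labels is a simple range check. A sketch, where the `tolerance` margin that separates Warning from Fail is my assumption rather than BiasLens's actual threshold:

```python
def label_metric(value: float, fair_low: float, fair_high: float,
                 tolerance: float = 0.05) -> str:
    """Map a raw metric value to Pass / Warning / Fail.

    Pass    -> value inside the fair range
    Warning -> within `tolerance` of a range boundary (hypothetical margin)
    Fail    -> clearly outside the range
    """
    if fair_low <= value <= fair_high:
        return "Pass"
    if fair_low - tolerance <= value <= fair_high + tolerance:
        return "Warning"
    return "Fail"
```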


Key Engineering Decisions

1. No Login System
BiasLens uses session-based storage instead of accounts:
No authentication
No database overhead
Sessions expire automatically

This keeps friction low and usability high.


2. Smart Attribute Detection
The system suggests protected attributes automatically based on:
Column names
Number of unique values

Users don't need to "know what to select" - the system guides them.
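A heuristic like this can be a keyword match on column names combined with a cardinality cap. A sketch, where the hint list and the cutoff of 10 unique values are my assumptions:

```python
# Keywords that commonly indicate a protected attribute (illustrative list)
SENSITIVE_HINTS = {"gender", "sex", "race", "ethnicity", "age", "religion",
                   "nationality", "marital", "disability"}


def suggest_protected_attributes(columns: list[str],
                                 unique_counts: dict[str, int],
                                 max_unique: int = 10) -> list[str]:
    """Flag columns whose name matches a sensitive keyword and whose
    cardinality is low enough to form comparable groups."""
    suggestions = []
    for col in columns:
        name = col.lower()
        if any(hint in name for hint in SENSITIVE_HINTS) \
                and unique_counts[col] <= max_unique:
            suggestions.append(col)
    return suggestions
```

The cardinality check matters: a column named `race_track_id` with thousands of values should not be suggested just because its name matches.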


3. Strict Validation
One mistake users kept making: selecting non-binary outcome columns (like age).
The fix:
Only allow binary outcome columns
Filter UI options
Show clear errors

Simple constraint, huge improvement.
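The core of that constraint is a single check on the distinct values of the chosen column. A minimal sketch (function name and error wording are mine):

```python
def validate_outcome_column(values: list) -> list:
    """Reject outcome columns that are not binary (e.g. a raw age column)."""
    distinct = set(values)
    if len(distinct) != 2:
        raise ValueError(
            f"Outcome column must have exactly 2 distinct values, "
            f"found {len(distinct)}"
        )
    return sorted(distinct, key=str)  # stable ordering for display
```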


4. Performance Trade-offs
Large datasets (>500K rows) slowed things down. Instead of over-engineering, I:
Added progress indicators
Warned users
Suggested sampling

Sometimes the right solution isn't optimization - it's expectation management.
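The "warn and sample" approach can be as simple as a row-count guard. A sketch under my own assumptions (the function, the fixed seed, and the warning text are illustrative; only the 500K threshold comes from the post):

```python
import random

ROW_LIMIT = 500_000  # threshold mentioned in the post


def prepare_dataset(rows: list, limit: int = ROW_LIMIT, seed: int = 42):
    """Return (rows_to_analyze, warning). Over the limit, take a random
    sample instead of crunching everything, and tell the user why."""
    if len(rows) <= limit:
        return rows, None
    random.seed(seed)  # fixed seed so repeated audits see the same sample
    sample = random.sample(rows, limit)
    warning = (f"Dataset has {len(rows):,} rows; analyzing a random sample "
               f"of {limit:,} to keep the audit responsive.")
    return sample, warning
```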


Real Example: Adult Income Dataset
~48K rows
Attribute: gender
Outcome: income >50K

Results:
Statistical Parity: 0.197 → Fail
Disparate Impact: 0.27 → Fail
Equal Opportunity: 0.098 → Pass
Predictive Parity: -0.053 → Pass

Insight:
The model is biased in overall outcomes
 - but fair in evaluating qualified candidates.
That distinction matters.


What You Should Take Away
If you're a developer:
Don't reinvent fairness metrics - use proven libraries
Validate early
Build for the actual user, not yourself

If you're an organization:
Audit before deployment, not after controversy
Bias isn't rare - it's expected
Documentation matters

If you're a researcher:
Your work needs usable interfaces
Theory without accessibility doesn't scale


Getting Started
git clone https://github.com/arpitpandey0307/bias-lens.git
cd bias-lens
docker-compose up
Or run manually using FastAPI + React.
Then:
Open localhost
Upload dataset
Analyze fairness
Export report


Final Thoughts
Bias in AI isn't going away.
But ignorance about it can.
BiasLens isn't about solving fairness completely - 
 it's about making it visible, understandable, and actionable.
And that's where real progress starts.


Resources
IBM AI Fairness 360
Microsoft Fairlearn
UCI ML Repository


About the Author
Arpit Pandey
Full Stack and AI/ML Developer
