丁久

Originally published at dingjiu1989-hue.github.io

Responsible AI Development Practices

This article was originally published on AI Study Room. For the full version with working code examples and related articles, visit the original post.

Introduction

As AI systems make increasingly consequential decisions, from loan approvals to medical diagnoses, responsible AI development is no longer optional. Regulations like the EU AI Act, emerging AI liability frameworks, and growing public scrutiny demand that developers implement systematic fairness, transparency, and safety practices. This article covers practical techniques for building responsible AI applications.

Bias Detection and Fairness Metrics

Quantify bias across demographic groups using standard fairness metrics. The auditor below computes demographic parity (the gap between a group's positive-prediction rate and the overall rate), equal opportunity (the gap in true positive rates across groups), and equalized odds (the gap in false positive rates):

import numpy as np
from sklearn.metrics import confusion_matrix
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class FairnessReport:
    group: str
    sample_size: int
    positive_rate: float
    true_positive_rate: float
    false_positive_rate: float
    false_negative_rate: float
    demographic_parity: float  # Difference from overall positive rate

class BiasAuditor:
    def __init__(self, protected_attributes: List[str]):
        # Recorded so audit output documents which attributes were checked.
        self.protected_attributes = protected_attributes

    def evaluate_fairness(
        self,
        y_true: np.ndarray,
        y_pred: np.ndarray,
        groups: Dict[str, np.ndarray],
    ) -> Dict[str, FairnessReport]:
        """Evaluate fairness metrics across all groups."""
        overall_positive_rate = y_pred.mean()
        reports = {}
        for group_name, group_mask in groups.items():
            group_pred = y_pred[group_mask]
            group_true = y_true[group_mask]
            # Pin labels so ravel() always unpacks a 2x2 matrix,
            # even when a group contains only one class.
            tn, fp, fn, tp = confusion_matrix(
                group_true, group_pred, labels=[0, 1]
            ).ravel()
            reports[group_name] = FairnessReport(
                group=group_name,
                sample_size=int(group_mask.sum()),
                positive_rate=float(group_pred.mean()),
                true_positive_rate=tp / (tp + fn) if (tp + fn) > 0 else 0.0,
                false_positive_rate=fp / (fp + tn) if (fp + tn) > 0 else 0.0,
                false_negative_rate=fn / (fn + tp) if (fn + tp) > 0 else 0.0,
                demographic_parity=abs(
                    float(group_pred.mean()) - overall_positive_rate
                ),
            )
        return reports

    def check_thresholds(
        self, reports: Dict[str, FairnessReport]
    ) -> List[str]:
        """Check fairness metrics against thresholds."""
        violations = []
        # Demographic parity: max difference < 0.1
        max_parity = max(r.demographic_parity for r in reports.values())
        if max_parity > 0.1:
            violations.append(
                f"Demographic parity violation: {max_parity:.3f} > 0.1"
            )
        # Equal opportunity: TPR difference < 0.1
        tpr_values = [r.true_positive_rate for r in reports.values()]
        if max(tpr_values) - min(tpr_values) > 0.1:
            violations.append("Equal opportunity violation: TPR gap > 0.1")
        # Equalized odds: FPR difference < 0.1
        fpr_values = [r.false_positive_rate for r in reports.values()]
        if max(fpr_values) - min(fpr_values) > 0.1:
            violations.append("Equalized odds violation: FPR gap > 0.1")
        return violations
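To see the auditor end to end, here is a minimal usage sketch on synthetic data. The group names, base rates, and injected bias below are illustrative assumptions, not part of the original article:

import numpy as np

rng = np.random.default_rng(42)
n = 1000
y_true = rng.integers(0, 2, size=n)
# Simulate a model that predicts positive more often for "group_a".
group_a = rng.random(n) < 0.5
bias = np.where(group_a, 0.15, 0.0)
y_pred = (rng.random(n) < (0.4 + bias)).astype(int)

auditor = BiasAuditor(protected_attributes=["group"])
reports = auditor.evaluate_fairness(
    y_true, y_pred,
    groups={"group_a": group_a, "group_b": ~group_a},
)
for name, report in reports.items():
    print(name, f"positive_rate={report.positive_rate:.3f}",
          f"parity_gap={report.demographic_parity:.3f}")
print(auditor.check_thresholds(reports))

With the injected 0.15 gap in positive-prediction rates, check_thresholds should flag a demographic parity violation; with no bias injected, the violations list should come back empty.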

Model Explainability

SHAP (SHapley Additive exPlanations)

SHAP explains individual predictions by computing feature contributions:

import shap
import xgboost as xgb
import matplotlib.pyplot as plt
import numpy as np
from typing import List

class ModelExplainer:
    def __init__(self, model, feature_names: List[str]):
        self.model = model
        self.feature_names = feature_names
        self.explainer = shap.TreeExplainer(model)

    def explain_prediction(self, instance: np.ndarray) -> dict:
        """Generate SHAP explanation for a single prediction."""
        # TreeExplainer expects a 2D array; reshape the single instance.
        shap_values = self.explainer.shap_values(instance.reshape(1, -1))
        explanation = {
            "prediction": float(self.model.predict(instance.reshape(1, -1))[0]),
            "base_value": float(self.explainer.expected_value),
            "feature_contributions": [],
        }
        # Sort features by absolute contribution
        for name, value in sorted(
            zip(self.feature_names, shap_values[0]),
            key=lambda x: abs(x[1]),
            reverse=True,
        ):
            explanation["feature_contributions"].append({
                "feature": name,
                "value": float(value),
                "direction": "positive" if value > 0 else "negative",
                "magnitude": "high" if abs(value) > 0.1 else "low",
            })
        return explanation
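A quick usage sketch, assuming an XGBoost regressor trained on synthetic data; the feature names ("income", "age", "tenure"), data shapes, and hyperparameters here are illustrative, not from the original post:

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Target leans heavily on the first feature, so "income" should dominate.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = xgb.XGBRegressor(n_estimators=50, max_depth=3)
model.fit(X, y)

explainer = ModelExplainer(model, feature_names=["income", "age", "tenure"])
result = explainer.explain_prediction(X[0])
print(f"prediction={result['prediction']:.3f}, base={result['base_value']:.3f}")
for contribution in result["feature_contributions"]:
    print(contribution)

Because contributions are sorted by absolute SHAP value, "income" should appear first in the output, which is the property that makes these explanations useful for human review of individual decisions.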

Read the full article on AI Study Room for complete code examples, comparison tables, and related resources.

Found this useful? Check out more developer guides and tool comparisons on AI Study Room.
