BAder82t
CipherExplain: Encrypted Explainable AI for Privacy-Preserving ML Interpretability

The Problem Nobody Talks About

GDPR and HIPAA require sensitive data to be protected — in practice, kept encrypted.

The EU AI Act (Article 13) requires AI systems to explain their decisions.

These two rules are on a collision course.

A credit bureau cannot decrypt applicant data to compute an explanation — but regulators say it must provide one.

What CipherExplain Does

CipherExplain computes SHAP feature attributions entirely under Fully Homomorphic Encryption (FHE). The server never sees plaintext. The client gets a complete explanation of which features drove the prediction — all on encrypted data.
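To ground the SHAP side of this, here is a plaintext sketch of exact Shapley attribution for a tiny model — no encryption, feasible only for small d, and not CipherExplain's API. It illustrates the efficiency axiom that the metrics below report as exact: the attributions sum to f(x) minus f(baseline). The function `exact_shapley` and the toy linear model are assumptions for illustration.

```python
from itertools import combinations
from math import factorial

import numpy as np

def exact_shapley(f, x, baseline):
    """Exact Shapley attributions for f(x) relative to a baseline input.

    Evaluates f on every feature coalition (2^d of them), so this is
    only practical for small d; KernelSHAP and similar methods sample
    coalitions instead.
    """
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(d - k - 1) / factorial(d)
                with_i = baseline.copy()
                with_i[list(S) + [i]] = x[list(S) + [i]]
                without_i = baseline.copy()
                without_i[list(S)] = x[list(S)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy linear model: for f(v) = w.v, the Shapley value of feature i
# is w_i * (x_i - baseline_i).
w = np.array([2.0, -1.0, 0.5])
f = lambda v: float(w @ v)
x = np.array([1.0, 2.0, 3.0])
b = np.zeros(3)
phi = exact_shapley(f, x, b)

# Efficiency axiom: attributions sum exactly to f(x) - f(baseline).
assert abs(phi.sum() - (f(x) - f(b))) < 1e-9
```

CipherExplain's claim is that an equivalent computation runs with `x` and the model inputs held under CKKS ciphertexts throughout.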

| Metric | Value |
| --- | --- |
| SHAP accuracy (MAE vs KernelSHAP) | 0.009 |
| Efficiency axiom error | 0.0 (machine epsilon) |
| Coalition compression (d=50) | 2^50 → 390 evaluations |
| CKKS latency (d=50, M1) | ~10 s end-to-end |
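The coalition-compression figure reflects the standard KernelSHAP idea of replacing exhaustive coalition enumeration with a fixed sampling budget. The sketch below shows only that generic idea — paired (antithetic) coalition sampling to a 390-evaluation budget — not CipherExplain's specific compression scheme, and the `budget` value is taken from the table above.

```python
import random

d = 50            # feature dimension
full = 2 ** d     # exhaustive coalition count: ~1.1e15 model evaluations
budget = 390      # fixed evaluation budget instead

random.seed(0)
masks = []
for _ in range(budget // 2):
    # Random coalition as a boolean mask over the d features...
    m = [random.random() < 0.5 for _ in range(d)]
    masks.append(m)
    # ...paired with its complement, a common variance-reduction trick.
    masks.append([not bit for bit in m])

assert len(masks) == budget
print(f"{full} coalitions reduced to {len(masks)} evaluations")
```

Each mask selects which features take their real values versus the baseline's; the model is then evaluated once per mask.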

Why it matters

  • Explanations stay private
  • Still fully interpretable by authorized users
  • Ideal for healthcare, finance, and security use cases

Get started

👉 GitHub: https://github.com/VaultBytes/CipherExplain
📦 PyPI: pip install cipherexplain

Licensed AGPL-3.0. Patent-pending: PCT/IB2026/053378, PCT/IB2026/053405.

Patent licensing: b@vaultbytes.com
