
wintrover

Posted on • Originally published at wintrover.github.io

Upgrading Face Recognition: From DeepFace to InsightFace — Performance, Quality, and Integration

Why the migration?

Early experiments with DeepFace hit practical limits in GPU parallelism and multi-image throughput. Moving to InsightFace helped me standardize multi-image processing on GPU, cut per-image latency, and increase throughput — making my pipeline ready for production traffic.
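The multi-image flow boils down to grouping inputs into fixed-size batches before they hit the GPU. A minimal sketch of that grouping step (the `batched` helper and batch size are illustrative; in the real pipeline each chunk is fed to InsightFace on GPU):

```python
from typing import Iterator, List, TypeVar

T = TypeVar("T")

def batched(items: List[T], size: int) -> Iterator[List[T]]:
    """Yield fixed-size chunks so the GPU processes several images per call."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Example: 10 image paths grouped into batches of 4 -> sizes [4, 4, 2]
paths = [f"img_{i}.jpg" for i in range(10)]
sizes = [len(b) for b in batched(paths, 4)]
print(sizes)  # [4, 4, 2]
```

Keeping batching separate from inference makes it trivial to tune batch size against GPU memory without touching model code.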

Quality and Operations

  • SonarQube for automated detection of bugs, code smells, and hotspots.
  • SOLID refactoring across service/repository/gateway layers to stabilize dependencies and isolate responsibilities.
  • Dockerized GPU runtime for consistent environments across dev/test/CI.
  • CI pipeline with linting and lightweight checks on each change.
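For the Dockerized GPU runtime, a sketch of the kind of image definition involved (base tag, package list, and `serve.py` entrypoint are all illustrative; pin versions to what your cluster actually provides):

```dockerfile
# Illustrative CUDA base tag; match your driver/toolkit versions.
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip libgl1 libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

# onnxruntime-gpu provides the CUDA execution provider InsightFace runs on
RUN pip3 install --no-cache-dir insightface onnxruntime-gpu opencv-python-headless

WORKDIR /app
COPY . /app
CMD ["python3", "serve.py"]
```

Running the same image in dev, test, and CI is what makes the "consistent environments" claim hold in practice.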

Benchmarking and Visualization

  • LFW was used for objective validation (Accuracy, AUC).
  • Introduced W&B to track experiments and visualize metrics (accuracy curves, ROC/AUC) per run — making regressions visible and reproducible.

Service Integration (React + FastAPI)

  • While wiring the real service, I fixed file-upload bugs and request/response type mismatches between the React client and the FastAPI backend.
  • Established clear contracts and defensive error paths (timeouts, invalid payloads) for stable data exchange.
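As one example of a defensive error path for invalid payloads, a minimal server-side validation sketch (the helper name, size cap, and accepted formats are illustrative; inside the FastAPI handler this would raise an HTTP 4xx instead of a bare `ValueError`):

```python
MAX_BYTES = 5 * 1024 * 1024  # illustrative upload cap

# Magic-byte prefixes for the formats the endpoint accepts
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
}

def validate_image(data: bytes) -> str:
    """Return the detected format, or raise ValueError for invalid payloads."""
    if len(data) > MAX_BYTES:
        raise ValueError("payload too large")
    for magic, fmt in SIGNATURES.items():
        if data.startswith(magic):
            return fmt
    raise ValueError("unsupported or corrupt image payload")

print(validate_image(b"\xff\xd8\xff\xe0" + b"\x00" * 16))  # jpeg
```

Checking magic bytes instead of trusting the client's content type catches renamed or truncated files before they reach the model.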

Example snippets

1) InsightFace: GPU-backed embeddings (single image)

from insightface.app import FaceAnalysis
import cv2

# buffalo_l bundles detection + ArcFace recognition models
app = FaceAnalysis(name='buffalo_l')
app.prepare(ctx_id=0, det_size=(640, 640))  # ctx_id=0: first GPU; -1 falls back to CPU

img = cv2.imread('face.jpg')
if img is None:
    raise FileNotFoundError('face.jpg could not be read')
faces = app.get(img)  # detect, align, and embed in one call
embeddings = [f.normed_embedding for f in faces]  # L2-normalized 512-d vectors
print(len(embeddings), 'embeddings computed')

2) CI pipeline: lint + build (GitHub Actions)

name: ci
on: [push, pull_request]
jobs:
  web:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20' }
      - run: npm ci
      - run: npm run lint || echo "lint warnings"  # non-blocking for now
      - run: npm run build

3) Log LFW metrics with W&B

import numpy as np
import wandb
from sklearn.metrics import accuracy_score, roc_auc_score

wandb.init(project='face-benchmark', config={'dataset': 'LFW'})
# y_true (0/1 pair labels) and y_score (similarity scores) are NumPy arrays
# prepared from the LFW verification pairs
acc = accuracy_score(y_true, (y_score > 0.5).astype(int))  # 0.5 is an example threshold
auc = roc_auc_score(y_true, y_score)
wandb.log({'lfw/accuracy': acc, 'lfw/auc': auc})
wandb.finish()

4) FastAPI endpoint + React upload

# fastapi
from fastapi import FastAPI, UploadFile, File
app = FastAPI()

@app.post('/api/images')
async def upload_image(file: UploadFile = File(...)):
    data = await file.read()
    # validate & store
    return {"ok": True, "size": len(data)}
// react
async function upload(blob: Blob) {
  const fd = new FormData();
  fd.append('file', blob, 'image.jpg');
  const res = await fetch('/api/images', { method: 'POST', body: fd });
  if (!res.ok) throw new Error('Upload failed');
}

Results: LFW + W&B (InsightFace–ArcFace)

The figure below is the run dashboard I captured from Weights & Biases while evaluating InsightFace (ArcFace) on the LFW verification pairs. It visualizes ROC/AUC and accuracy/precision/recall across epochs, alongside threshold drift and basic GPU telemetry during the run. The curves help me select a sensible decision threshold and quickly spot regressions in later experiments.

[Figure: W&B dashboard showing the LFW ROC curve approaching the top-left, AUC and accuracy trends, precision/recall, and GPU metrics during InsightFace ArcFace benchmarking]



LFW verification with InsightFace–ArcFace tracked in W&B: (top) ROC and AUC/accuracy trends; (middle) precision/recall and epoch progression; (bottom) GPU memory/clock/power. I use the ROC and class-wise curves to derive a stable threshold and verify improvements hold across runs.
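Deriving a stable threshold from these curves can be sketched without any framework: sweep each observed score as a candidate threshold and keep the one that maximizes verification accuracy (the scores below are toy values; the real sweep runs over the LFW pair scores logged to W&B):

```python
def best_threshold(y_true, y_score):
    """Sweep each score as a candidate threshold; return (threshold, accuracy)."""
    best_t, best_acc = 0.0, 0.0
    for t in sorted(set(y_score)):
        preds = [1 if s >= t else 0 for s in y_score]
        acc = sum(p == y for p, y in zip(preds, y_true)) / len(y_true)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy cosine-similarity scores: 1 = same identity, 0 = different
y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.82, 0.74, 0.61, 0.45, 0.30, 0.58]
t, acc = best_threshold(y_true, y_score)
print(t, acc)  # 0.61 1.0
```

Running the same sweep per experiment, and logging the chosen threshold alongside accuracy, is what makes threshold drift visible across runs.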

What I learned

  • Measure first: LFW + W&B made improvements and regressions explicit.
  • Make quality repeatable: SonarQube + SOLID + CI kept changes safe.
  • Prepare for service: InsightFace GPU flow and clear contracts reduced unexpected runtime issues.

Next

Scale out batched/stream flows, add inference caching, and iterate based on user-centric metrics to evolve the pipeline responsibly.
