UniFace: An All-in-One Face Analysis Library in Python

In 2025, I built UniFace to solve a problem I kept running into: I wanted one Python toolkit that could handle the “full face pipeline” (detection → alignment/landmarks → embeddings → attributes → privacy), without gluing together half a dozen repos, model formats, and inconsistent inputs/outputs.

UniFace is a lightweight, ONNX Runtime–based library that aims to be production-ready: minimal dependencies, hardware acceleration where available, typed outputs, and a modular architecture that makes it easy to use only what you need.

If you’ve ever wanted to go from “image” to “faces + embeddings + attributes + optional privacy” in a few lines, UniFace is for you.


What UniFace supports today

UniFace covers a wide range of face analysis tasks:

  • Face detection (RetinaFace, SCRFD, YOLOv5-Face with 5-point landmarks)
  • Face recognition (ArcFace, AdaFace, MobileFace, SphereFace embeddings)
  • Facial landmarks (106-point landmarker)
  • Face parsing (semantic segmentation of facial components)
  • Gaze estimation (MobileGaze family)
  • Attributes (age/gender, FairFace demographics; optional emotion with PyTorch)
  • Anti-spoofing / liveness detection (MiniFASNet)
  • Privacy / anonymization (gaussian, median, pixelate, blackout, elliptical blur)

A key goal is flexibility: you can swap models per task depending on your latency/accuracy needs, and you can run on CPU or accelerate on supported hardware.


Design goals (the “why”)

1) ONNX-first, cross-platform by default

ONNX Runtime gives UniFace a practical deployment story: the same models run on macOS, Linux, and Windows, and on supported systems UniFace can take advantage of acceleration backends such as CUDA and CoreML.

2) Minimal dependencies

The core stack stays small (NumPy, OpenCV, ONNX Runtime + a few utilities). That makes it easier to ship and easier to debug.

3) A simple, consistent API

Most modules follow a predictable pattern:

  • Input: NumPy image arrays (OpenCV-style)
  • Output: typed result objects (e.g., Face, GazeResult, etc.)
  • Composable: detection feeds the other stages

Installation

Requires Python 3.11+.

pip install uniface

For NVIDIA CUDA acceleration (when your environment supports it):

pip install "uniface[gpu]"

Want to try it without installing anything? Open the ready-to-run Google Colab notebooks: https://yakhyo.github.io/uniface/notebooks/


Quick start: face detection in a few lines

import cv2
from uniface import RetinaFace

image = cv2.imread("photo.jpg")  # OpenCV loads BGR by default

detector = RetinaFace()  # models auto-download on first use
faces = detector.detect(image)

for i, face in enumerate(faces, start=1):
    print(f"[{i}] conf={face.confidence:.3f} bbox={face.bbox} landmarks={face.landmarks.shape}")

You get back Face objects (bbox, confidence, landmarks), and you can pass those into recognition/attributes/landmarks modules—or use the all-in-one analyzer.
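
For a quick visual sanity check, you can draw the detections with plain OpenCV. A minimal sketch, assuming face.bbox is an [x1, y1, x2, y2] box (check the Face type in the docs for the exact layout):

import cv2
from uniface import RetinaFace

image = cv2.imread("photo.jpg")
detector = RetinaFace()

for face in detector.detect(image):
    # Assuming bbox order is [x1, y1, x2, y2]; adjust if your version differs.
    x1, y1, x2, y2 = map(int, face.bbox)
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("photo_detections.jpg", image)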


Face recognition: embeddings + similarity

import cv2
import numpy as np
from uniface import RetinaFace, ArcFace, compute_similarity

detector = RetinaFace()
recognizer = ArcFace()

img1 = cv2.imread("person1.jpg")
img2 = cv2.imread("person2.jpg")

f1 = detector.detect(img1)
f2 = detector.detect(img2)

if f1 and f2:
    emb1 = recognizer.get_normalized_embedding(img1, f1[0].landmarks)
    emb2 = recognizer.get_normalized_embedding(img2, f2[0].landmarks)

    similarity = float(compute_similarity(emb1, emb2, normalized=True))
    print("similarity:", similarity)
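
In practice you'll compare the similarity score against a threshold to decide whether two faces match. The right cutoff depends on the embedding model and your data, so treat the value below as a hypothetical starting point to tune on a validation set, not a recommendation:

# Hypothetical cutoff -- tune this on a labeled validation set for your model.
THRESHOLD = 0.35

if similarity > THRESHOLD:
    print("likely the same person")
else:
    print("likely different people")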

The convenience layer: FaceAnalyzer

When you want a single entry point that orchestrates multiple tasks, use FaceAnalyzer. You pass in the building blocks (detector, optional recognizer, optional attributes), and it enriches each Face accordingly.

import cv2
from uniface import ArcFace, FaceAnalyzer, RetinaFace
from uniface.attribute import AgeGender, FairFace

image = cv2.imread("group_photo.jpg")

detector = RetinaFace()
recognizer = ArcFace()
age_gender = AgeGender()
fairface = FairFace()

analyzer = FaceAnalyzer(
    detector=detector,
    recognizer=recognizer,
    age_gender=age_gender,
    fairface=fairface,
)

faces = analyzer.analyze(image)

for face in faces:
    print("bbox:", face.bbox)
    print("sex:", face.sex, "age:", face.age)
    if face.embedding is not None:
        print("embedding shape:", face.embedding.shape)

Privacy matters: anonymize faces (one-liner or full control)

Sometimes you don’t want identity at all—you want privacy-preserving output. UniFace includes multiple anonymization methods.

One-liner:

import cv2
from uniface.privacy import anonymize_faces

image = cv2.imread("crowd.jpg")
out = anonymize_faces(image, method="pixelate")
cv2.imwrite("crowd_anonymized.jpg", out)

Manual control (detect first, anonymize second):

import cv2
from uniface import RetinaFace
from uniface.privacy import BlurFace

image = cv2.imread("crowd.jpg")

detector = RetinaFace()
blurrer = BlurFace(method="gaussian", blur_strength=5.0)

faces = detector.detect(image)
out = blurrer.anonymize(image, faces)
cv2.imwrite("crowd_anonymized.jpg", out)

Model management that’s friendly to real deployments

UniFace handles a lot of “unsexy but necessary” details:

  • Models download automatically on first use
  • They are cached locally at ~/.uniface/models for subsequent runs
  • SHA-256 checksums are verified
  • You can pre-download models for offline / air-gapped deployments (e.g., verify_model_weights in a setup step; see the sketch below)
  • Advanced: pass a custom root to verify_model_weights if you want a different cache path
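
A setup step for an offline deployment might look like the sketch below. I'm assuming the model name string and the root keyword here; check the docs for the exact identifiers your version accepts:

from uniface import verify_model_weights

# Pre-download and checksum-verify a model during setup (hypothetical model name).
verify_model_weights("retinaface_mnet_v2")

# Or cache into a custom directory instead of ~/.uniface/models.
verify_model_weights("retinaface_mnet_v2", root="/opt/models/uniface")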

This is one of those features that sounds small until you deploy to a server fleet or ship a demo to someone else.


Hardware acceleration (when available)

UniFace uses ONNX Runtime execution providers. In practice that means:

  • NVIDIA GPU: CUDA execution provider
  • Apple Silicon: CoreML execution provider
  • CPU fallback everywhere

You can inspect what your environment supports:

import onnxruntime as ort
print(ort.get_available_providers())
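
And if you want to confirm which provider a model actually lands on, you can open a session directly. This is plain ONNX Runtime API rather than anything UniFace-specific (UniFace manages its sessions internally), shown here with a placeholder model path:

import onnxruntime as ort

# Prefer CUDA when present; ONNX Runtime silently falls back to CPU otherwise.
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)

print(session.get_providers())  # the providers actually in use for this session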

How I think about choosing models

Different workloads need different tradeoffs (speed vs. accuracy vs. memory). UniFace is designed so you can pick models per task.

As a general approach:

  • Start with a balanced detector for most applications
  • Switch to a smaller detector for real-time webcam/video
  • Use a stronger detector when recall really matters
  • Keep recognition separate from detection so you can scale them independently

(If you’re curious, the UniFace docs include a Model Zoo with comparisons and recommendations.)
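
Because detectors share the same detect() interface, swapping is a one-line change. A sketch, assuming SCRFD is importable alongside RetinaFace as in the supported-models list (constructor arguments for specific variants may differ; see the Model Zoo docs):

import cv2
from uniface import RetinaFace, SCRFD

image = cv2.imread("photo.jpg")

# Balanced default for most applications.
detector = RetinaFace()

# Or swap in a lighter detector for real-time webcam/video -- same API.
# detector = SCRFD()

faces = detector.detect(image)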


Responsible use

Face analysis is powerful and can be misused. If you use UniFace (or any face tech) in real products:

  • Get explicit consent where required
  • Follow local laws and regulations
  • Be cautious with demographic inference (bias and error rates are real)
  • Prefer privacy-preserving workflows when you can (e.g., anonymization or on-device processing)

What’s next

UniFace is actively evolving, and here's what I plan to improve next:

  • better benchmarks and reproducible evaluation recipes
  • more end-to-end examples (batch pipelines, streaming/video, face search)
  • additional models and deployment patterns

If you’d like to contribute—bug reports, docs improvements, new models, or examples—PRs and issues are welcome.

Repo: https://github.com/yakhyo/uniface
Docs: https://yakhyo.github.io/uniface/

Thanks for reading.
