AI Engine

Posted on • Originally published at ai-engine.net

Emotion Detection: Face Analyzer API vs DeepFace

You want to detect emotions from facial expressions. Customer feedback kiosks, driver monitoring, telehealth, HR mood tracking. Two approaches: install DeepFace, the most popular open-source face analysis library (22,000+ GitHub stars), or call a cloud face analysis API. This guide tests both on the same images and compares what they get right, what they get wrong, and what it takes to run each in production.

Want to see how it performs on your photos? Try the Face Analyzer API on your own images.

Quick Comparison

| | Face Analyzer API | DeepFace |
|---|---|---|
| Emotions detected | 8 (happy, sad, angry, surprised, disgusted, calm, confused, fear) | 7 (angry, disgust, fear, happy, sad, surprise, neutral) |
| Output format | Dominant emotion(s); can return more than one | Percentage scores for all 7 |
| Extra features | Age, gender, smile, glasses, sunglasses, landmarks, face comparison | Age, gender, race |
| Setup | API key (2 min) | TensorFlow + tf-keras + ~1GB model weights |
| Latency (CPU) | 600-700ms | 500-5,000ms (cold vs. warm) |
| License | Commercial | MIT |

What DeepFace Does

DeepFace is a Python library that wraps multiple face analysis models. It uses TensorFlow under the hood and downloads model weights on first run (over 1GB once the emotion, age, and gender models are fetched).

from deepface import DeepFace

# analyze() returns a list with one entry per detected face
result = DeepFace.analyze(
    "photo.jpg",
    actions=["emotion", "age", "gender"],
    silent=True,
)

r = result[0]
print(f"Emotion: {r['dominant_emotion']}")
for emo, score in sorted(r["emotion"].items(), key=lambda x: -x[1]):
    print(f"  {emo}: {score:.1f}%")

The first call is slow (5-10 seconds) because it downloads model weights from GitHub. Subsequent calls take 500-800ms on CPU.
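Because DeepFace reports percentage scores for all seven emotions, you can emulate the API's multi-emotion output by thresholding the distribution. A small helper along these lines (the function name and threshold are ours, not part of DeepFace):

```python
def dominant_emotions(scores, threshold=20.0):
    """Return every emotion whose score clears the threshold, strongest first.

    `scores` is shaped like DeepFace's result[0]["emotion"]: a dict of
    emotion name -> percentage.
    """
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    picked = [emo for emo, s in ranked if s >= threshold]
    # Always return at least the top emotion, even if nothing clears the bar.
    return picked or [ranked[0][0]]


# Scores like those from our fear-expression test:
print(dominant_emotions({"surprise": 60.2, "fear": 35.1, "happy": 4.7}))
# → ['surprise', 'fear']
```

With a threshold like this, an ambiguous face yields two labels, similar to the API returning SURPRISED + FEAR on test 2 below.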

What the Face Analyzer API Does

import requests

url = "https://faceanalyzer-ai.p.rapidapi.com/faceanalysis"
headers = {
    "x-rapidapi-host": "faceanalyzer-ai.p.rapidapi.com",
    "x-rapidapi-key": "YOUR_API_KEY",
}

with open("photo.jpg", "rb") as f:
    # timeout guards against a hung connection; raise_for_status surfaces HTTP errors
    response = requests.post(url, headers=headers, files={"image": f}, timeout=30)
response.raise_for_status()

for face in response.json()["body"]["faces"]:
    ff = face["facialFeatures"]
    print(f"Emotions: {ff['Emotions']}")
    print(f"Gender: {ff['Gender']}, Age: {ff['AgeRange']['Low']}-{ff['AgeRange']['High']}")
    print(f"Smile: {ff['Smile']}, Glasses: {ff['Eyeglasses']}")

One API call returns emotion + age + gender + smile + glasses + sunglasses + landmarks. DeepFace requires separate analysis passes and does not detect smile, glasses, or landmarks.
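The nested `body.faces[].facialFeatures` shape is easy to flatten into something your application can pass around. A sketch using only the field names shown in the response above (the helper itself and its output keys are ours):

```python
def summarize_face(face):
    """Flatten one entry of response["body"]["faces"] into a plain dict.

    Field names follow the Face Analyzer response shown above.
    """
    ff = face["facialFeatures"]
    return {
        "emotions": ff["Emotions"],
        "gender": ff["Gender"],
        "age_range": (ff["AgeRange"]["Low"], ff["AgeRange"]["High"]),
        "smile": ff["Smile"],
        "glasses": ff["Eyeglasses"],
    }


sample = {
    "facialFeatures": {
        "Emotions": ["SURPRISED", "FEAR"],
        "Gender": "Male",
        "AgeRange": {"Low": 27, "High": 35},
        "Smile": False,
        "Eyeglasses": False,
    }
}
print(summarize_face(sample)["age_range"])  # → (27, 35)
```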

See real benchmarks side by side, full code for both tools, and the install gotchas in the complete guide.


Testing Both on the Same Images

Test environment: Intel Core i7-7700HQ @ 2.80GHz, 4 cores, no GPU.

Test 1: Surprised expression

  • API: SURPRISED. Female, 31-39. Latency: ~600ms.
  • DeepFace: surprise 95.9%, happy 2.9%, fear 1.1%. Woman, 32. Latency: ~500ms (warm).
  • Verdict: both correct.

Test 2: Fear expression

  • API: SURPRISED + FEAR (two emotions). Male, 27-35. Latency: 665ms.
  • DeepFace: FaceNotDetected crash. With enforce_detection=False: sad 99.7%. Wrong emotion.
  • Verdict: API wins.

Test 3: Angry expression

  • API: ANGRY. Male, 47-55. Latency: 633ms.
  • DeepFace: FaceNotDetected crash. With fallback: neutral 77.5%. Wrong emotion, wrong gender (Woman 52.1%), age 31 vs API 47-55. Latency: 5,672ms.
  • Verdict: API wins. DeepFace got emotion, gender, and age all wrong.

Summary: API correct on 3/3 images. DeepFace crashed on 2/3, wrong emotion on 2/3, wrong gender on 2/3.
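If you stay with DeepFace, you can at least keep the FaceNotDetected crash from propagating: DeepFace signals a missed detection as a `ValueError`, so a wrapper can retry with detection relaxed and flag that it did so. In this sketch `analyze_fn` is injected (pass `DeepFace.analyze` in real use) so the pattern is easy to test without the library:

```python
def analyze_with_fallback(analyze_fn, image_path):
    """Try strict detection first; retry with enforce_detection=False.

    Returns (result, used_fallback). `analyze_fn` stands in for
    DeepFace.analyze here.
    """
    try:
        return analyze_fn(image_path, actions=["emotion"]), False
    except ValueError:
        # DeepFace raises ValueError when no face is detected.
        return analyze_fn(image_path, actions=["emotion"],
                          enforce_detection=False), True
```

Log `used_fallback`: in our tests the relaxed pass often returned the wrong emotion (sad 99.7%, neutral 77.5%), so treat those results as low confidence.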

The Installation Problem

DeepFace's biggest friction point is setup. Here's what we encountered:

  1. pip install deepface installs TensorFlow (~500MB of dependencies)
  2. First run crashes with ValueError: You have tensorflow 2.21.0 and this requires tf-keras package. Fix: pip install tf-keras
  3. First analysis call downloads 3 model files from GitHub (emotion 6MB, age 539MB, gender 537MB). Total: over 1GB of model weights
  4. File paths with non-ASCII characters (e.g., French "Téléchargements") throw ValueError: Input image must not have non-english characters
  5. Without a GPU, TensorFlow prints warnings about missing CUDA drivers on every single run

The API requires pip install requests and an API key. That's it.

When to Choose DeepFace

  • You need emotion score distributions. DeepFace returns percentage scores for all 7 emotions.
  • Offline processing. No network dependency.
  • You need race estimation. DeepFace offers it; the API does not.

When to Choose the API

  • Reliable face detection. The API detected all 3 test images. DeepFace crashed on 2 of 3.
  • Multi-feature analysis. One API call returns emotion + age + gender + smile + glasses + landmarks.
  • Face comparison and repositories for identity verification workflows.
  • No dependency headaches. No TensorFlow, no tf-keras, no 1GB model downloads.
  • Consistent latency. 600-700ms regardless of input vs DeepFace ranging from 500ms (warm) to 5,000ms+ (cold start).
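The latency figures above came from timing repeated calls on the same machine. A minimal harness you can reuse to reproduce them (the function name and output keys are ours):

```python
import time


def benchmark(fn, runs=5):
    """Call fn repeatedly and report per-call latency in milliseconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1000.0)
    return {
        "min_ms": min(times),
        "max_ms": max(times),
        "mean_ms": sum(times) / len(times),
    }


# Example:
# stats = benchmark(lambda: DeepFace.analyze("photo.jpg", actions=["emotion"]))
```

Run it once to absorb the cold start, then again for warm numbers; the gap between the two runs is what the 500ms vs. 5,000ms+ spread above reflects.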

Read the full guide with image-by-image breakdown, install gotchas explained, and JavaScript examples on ai-engine.net.
