DEV Community

TildAlice

Posted on • Originally published at tildalice.io

Whisper.cpp vs Faster-Whisper: WER Accuracy Test on LibriSpeech

The 0.3% WER Gap Nobody Talks About

Running Whisper on a Raspberry Pi 4 should be straightforward in 2026. It isn't. I compiled whisper.cpp with NEON optimizations, ran it against LibriSpeech test-clean, and got 4.7% WER. Then I ran the same audio through faster-whisper on the same Pi. 4.4% WER.
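For anyone reproducing the comparison: the article doesn't show how the WER figures were computed, but WER is just word-level Levenshtein distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal stdlib sketch, which you'd run over each (reference, hypothesis) pair from test-clean:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (subs + ins + dels) / reference word count,
    computed as word-level Levenshtein distance via a one-row DP table."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[j] holds distance(ref[:i], hyp[:j]) for the current row i
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]          # distance(ref[:i-1], hyp[:j-1]) for j = 1
        d[0] = i             # deleting all i reference words so far
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,              # deletion
                       d[j - 1] + 1,          # insertion
                       prev + (r != h))       # substitution or match
            prev = cur
    return d[-1] / len(ref)

print(wer("a b c", "a x c"))  # one substitution over three words -> 0.333...
```

In practice you'd normalize both sides first (lowercase, strip punctuation) the way LibriSpeech transcripts are formatted, since Whisper's casing and punctuation would otherwise inflate the score; tools like `jiwer` or NIST `sclite` handle that for you.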

That 0.3% difference cost me a week to understand.

Typical edge AI benchmarks focus on latency and memory. But when you're building a voice interface for an AMR (autonomous mobile robot) or a field device, WER determines whether the system actually works in production. A transcription that arrives 50 ms faster but drops three more words per hundred is worse than useless.


Why Two Whisper Implementations Exist

OpenAI's original Whisper runs on PyTorch. Great for servers, unusable on edge devices. Two projects emerged to fix this:


Continue reading the full article on TildAlice
