
TildAlice

Posted on • Originally published at tildalice.io

TensorFlow Lite vs PyTorch Mobile: Cortex-A Latency Test

TensorFlow Lite wins on Cortex-A — but PyTorch Mobile catches up with XNNPACK

I just ran MobileNetV2 inference 10,000 times on a Raspberry Pi 4 (Cortex-A72). TensorFlow Lite clocked 23ms average latency. PyTorch Mobile hit 41ms. That's a 78% slowdown.

But here's the twist: enable XNNPACK in PyTorch Mobile and its average latency drops to 26ms, just 13% slower than TFLite. The default PyTorch build ships without ARM-optimized kernels. Most tutorials don't mention this.

This post compares both frameworks on the same hardware, same model, same quantization settings. I'll show you where each one fails, what the actual bottlenecks are, and when you'd pick one over the other.


The test setup: same model, same Pi, same INT8 quant

Hardware: Raspberry Pi 4 Model B (4GB RAM, Cortex-A72 quad-core @ 1.5GHz). OS: Raspberry Pi OS Lite (64-bit, Debian 12). No active cooling, ambient 22°C.
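The timing harness itself is framework-agnostic: warm up first, then time each invocation individually so you get percentiles, not just a mean. A minimal sketch of that loop — the `run_inference` callable is a stand-in; on the Pi it wraps `interpreter.invoke()` for TFLite or a forward pass for PyTorch Mobile:

```python
import time
import statistics

def benchmark(run_inference, n_runs=10_000, warmup=100):
    """Time a zero-arg inference callable; return (mean, p50, p99) in ms."""
    for _ in range(warmup):              # warm caches and lazy init
        run_inference()
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return (
        statistics.fmean(samples),
        samples[len(samples) // 2],        # median
        samples[int(len(samples) * 0.99)], # p99 tail latency
    )

# Dummy workload so the sketch runs anywhere; swap in real inference.
mean_ms, p50_ms, p99_ms = benchmark(lambda: sum(range(1000)),
                                    n_runs=1000, warmup=10)
```

Reporting p99 alongside the mean matters on a passively cooled Pi: thermal drift over 10,000 runs shows up in the tail long before it moves the average.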


