
TildAlice

Posted on • Originally published at tildalice.io

ONNX Export Pitfalls: 7 PyTorch Production Gotchas

The Export Worked. The Inference Failed.

Your PyTorch model exports to ONNX without errors. The file loads in ONNX Runtime. Then inference produces NaN outputs, silently wrong predictions, or crashes with cryptic shape errors.

I've debugged this cycle enough times to spot the patterns. ONNX export doesn't fail loudly — it fails in production, after you've already committed to the deployment strategy. Most tutorials stop at torch.onnx.export() succeeding, but that's where the real problems start.

Here are seven gotchas I've hit moving PyTorch models to ONNX, complete with the specific errors, fixes, and when each one bites you.


Photo by Konrad Ciężki on Pexels

1. Dynamic Axes Break When You Don't Expect Them

You specify dynamic_axes={'input': {0: 'batch'}} during export. The model runs fine with batch sizes 1, 8, and 32 during validation. Then you deploy with batch size 7 and get a shape mismatch error buried six ops deep in the graph.


```python
import torch
import torch.nn as nn
import onnx
import onnxruntime as ort
import numpy as np
```


---

*Continue reading the full article on [TildAlice](https://tildalice.io/onnx-export-pitfalls-pytorch-production-gotchas/)*
