metriclogic26

CVE-2025-32434: PyTorch's "safe" model loading flag isn't safe

The assumption that broke

For years, the PyTorch documentation said this:

Use weights_only=True to avoid arbitrary code execution
when loading untrusted models.

That assumption is now broken.

CVE-2025-32434 was published on April 17, 2025. CVSS score: 9.3
(Critical). Researcher Ji'an Zhou demonstrated that torch.load()
with weights_only=True can still achieve remote code execution
on PyTorch versions ≤ 2.5.1.

If your team loads models from Hugging Face, TorchHub, or any
community repository, and you haven't updated to PyTorch 2.6.0,
you are exposed.


How the attack works

PyTorch uses Python's pickle format to serialize model weights.
The weights_only=True parameter was designed to restrict
deserialization to safe types only — tensors, primitives,
basic containers.
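To see why pickle is dangerous in the first place, here is a minimal, self-contained sketch (pure stdlib, no PyTorch required): a pickled object can name any callable to be invoked at load time.

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # whatever callable it returns is invoked during pickle.loads().
    # A real attacker would return os.system or similar instead of eval.
    def __reduce__(self):
        return (eval, ("1 + 1",))

blob = pickle.dumps(Payload())
print(pickle.loads(blob))  # the callable runs on load; prints 2
```

`weights_only=True` was supposed to close exactly this hole by refusing to invoke arbitrary callables during deserialization.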

Zhou demonstrated that an attacker can craft a model file that
exploits inconsistencies in PyTorch's serialization validation
to bypass these restrictions entirely. When a victim loads the
malicious model, arbitrary code executes in their environment.

The attack vector is network-accessible (AV:N), requires no
privileges (PR:N), and no user interaction beyond the normal
model loading workflow (UI:N). In cloud-based ML environments
this could mean lateral movement or data exfiltration.
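Until every environment is upgraded, one cheap stopgap is to refuse to load untrusted files on a vulnerable interpreter. A minimal sketch (the helper name and the string-based version check are mine, not a PyTorch API):

```python
def torch_version_is_vulnerable(version: str) -> bool:
    """True for torch <= 2.5.1, the range affected by CVE-2025-32434."""
    base = version.split("+")[0]  # drop local build tags like "+cu121"
    parts = [p for p in base.split(".")[:3] if p.isdigit()]
    nums = tuple(int(p) for p in parts) + (0,) * (3 - len(parts))
    return nums <= (2, 5, 1)

# In a real pipeline you would pass torch.__version__ and raise
# before calling torch.load() on anything from a public repo.
print(torch_version_is_vulnerable("2.5.1"))  # True
print(torch_version_is_vulnerable("2.6.0"))  # False
```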


Who is affected

Any pipeline that does this:

# You were told this was safe. It wasn't.
model = torch.load('model.pt', weights_only=True)

Specifically:

  • Transfer learning pipelines pulling from public model repos
  • Automated training pipelines that download community models
  • Inference servers loading third-party weights
  • Anyone on torch ≤ 2.5.1

The fix

pip install --upgrade "torch>=2.6.0"

Or pin in your requirements.txt:

torch>=2.6.0
torchvision>=0.21.0

The broader problem: your full ML stack

PyTorch is one package. Most ML stacks have 20-50 dependencies,
many pinned at versions from 2022-2023 when the model was first
built and never touched again.

Here's what a typical ML requirements.txt looks like after
a real CVE scan:

torch==2.5.1          # CRITICAL CVE-2025-32434
pillow==9.5.0         # HIGH CVE-2023-50447 (Arbitrary Code Execution)
pyyaml==5.3.1         # CRITICAL CVE-2020-14343
cryptography==36.0.0  # HIGH CVE-2023-49083
requests==2.28.0      # MEDIUM CVE-2023-32681

Every one of those has a known CVE. Most ML engineers have no
idea because they haven't scanned their dependencies since
the model was first trained.


How to check your stack right now

Paste your requirements.txt into
PackageFix — free, browser-based,
no signup, no CLI install. It queries the OSV database live
so CVE-2025-32434 and any CVEs published this week are included.
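If you'd rather script the check yourself, the same data is available from OSV's public HTTP API (a POST to api.osv.dev/v1/query). A rough sketch using only the standard library; the helper names are mine:

```python
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def build_osv_query(name: str, version: str) -> dict:
    # Request payload shape defined by the OSV v1 query API.
    return {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}

def known_vuln_ids(name: str, version: str) -> list:
    data = json.dumps(build_osv_query(name, version)).encode()
    req = urllib.request.Request(
        OSV_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return [v["id"] for v in json.load(resp).get("vulns", [])]

if __name__ == "__main__":
    # Network call; the advisory for CVE-2025-32434 should appear here.
    print(known_vuln_ids("torch", "2.5.1"))
```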

Test file you can paste immediately:

torch==2.5.1
pillow==9.5.0
pyyaml==5.3.1
cryptography==36.0.0
requests==2.28.0
transformers==4.30.0
numpy==1.24.0

Recommended minimum versions for ML stacks in 2026

torch>=2.6.0
torchvision>=0.21.0
pillow>=10.2.0
pyyaml>=6.0.1
cryptography>=42.0.5
requests>=2.32.0
transformers>=4.36.0
numpy>=1.26.0

Pin to these minimums and add a monthly audit to your calendar.
The OSV database updates daily, and new CVEs for packages already
in your production stack appear regularly; nobody notifies you
unless you're actively checking.
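That monthly audit is easy to script. A minimal sketch that flags pins falling below the minimums listed above (the parser and function names are mine; it ignores comments and anything that isn't an `==` pin):

```python
# Recommended minimums from the list above, as version tuples.
MINIMUMS = {
    "torch": (2, 6, 0), "torchvision": (0, 21, 0), "pillow": (10, 2, 0),
    "pyyaml": (6, 0, 1), "cryptography": (42, 0, 5), "requests": (2, 32, 0),
    "transformers": (4, 36, 0), "numpy": (1, 26, 0),
}

def parse_pin(line: str):
    """Return (name, version_tuple) for 'name==x.y.z' lines, else None."""
    line = line.split("#")[0].strip()
    if "==" not in line:
        return None
    name, _, ver = line.partition("==")
    parts = [p for p in ver.strip().split(".")[:3] if p.isdigit()]
    nums = tuple(int(p) for p in parts) + (0,) * (3 - len(parts))
    return name.strip().lower(), nums

def below_minimum(requirements_text: str) -> list:
    """Names of pinned packages older than their recommended minimum."""
    flagged = []
    for line in requirements_text.splitlines():
        pin = parse_pin(line)
        if pin and pin[0] in MINIMUMS and pin[1] < MINIMUMS[pin[0]]:
            flagged.append(pin[0])
    return flagged

print(below_minimum("torch==2.5.1\npillow==9.5.0\nnumpy==1.26.0"))
# → ['torch', 'pillow']  (numpy meets the minimum)
```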


The uncomfortable truth about ML security

The ML community has a dependency hygiene problem. We obsess
over model accuracy, training efficiency, and inference speed.
Almost nobody runs a CVE scanner on their requirements.txt.

CVE-2025-32434 is a critical RCE in the most widely used ML
framework in the world. It affects the exact workflow the
documentation told us was safe.

Check your stack. Update torch. Scan your full requirements.txt.

The attack surface for ML systems is larger than most teams realize.
