<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: metriclogic26</title>
    <description>The latest articles on DEV Community by metriclogic26 (@metriclogic26).</description>
    <link>https://dev.to/metriclogic26</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3828124%2F607e8b6d-7dc8-42b7-8b03-15d79c2ed2f6.png</url>
      <title>DEV Community: metriclogic26</title>
      <link>https://dev.to/metriclogic26</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/metriclogic26"/>
    <language>en</language>
    <item>
      <title>CVE-2025-32434: PyTorch's "safe" model loading flag isn't safe</title>
      <dc:creator>metriclogic26</dc:creator>
      <pubDate>Thu, 19 Mar 2026 03:19:18 +0000</pubDate>
      <link>https://dev.to/metriclogic26/cve-2025-32434-pytorchs-safe-model-loading-flag-isnt-safe-55a</link>
      <guid>https://dev.to/metriclogic26/cve-2025-32434-pytorchs-safe-model-loading-flag-isnt-safe-55a</guid>
      <description>&lt;h2&gt;
  
  
  The assumption that broke
&lt;/h2&gt;

&lt;p&gt;For years, the PyTorch documentation said this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use &lt;code&gt;weights_only=True&lt;/code&gt; to avoid arbitrary code execution &lt;br&gt;
when loading untrusted models.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That assumption is now broken.&lt;/p&gt;

&lt;p&gt;CVE-2025-32434 was published on April 17, 2025. CVSS score: 9.3 &lt;br&gt;
(Critical). Researcher Ji'an Zhou demonstrated that &lt;code&gt;torch.load()&lt;/code&gt; &lt;br&gt;
with &lt;code&gt;weights_only=True&lt;/code&gt; can still achieve remote code execution &lt;br&gt;
on PyTorch versions ≤ 2.5.1.&lt;/p&gt;

&lt;p&gt;If your team loads models from Hugging Face, TorchHub, or any &lt;br&gt;
community repository, and you haven't updated to PyTorch 2.6.0, &lt;br&gt;
you are exposed.&lt;/p&gt;


&lt;h2&gt;
  
  
  How the attack works
&lt;/h2&gt;

&lt;p&gt;PyTorch uses Python's pickle format to serialize model weights. &lt;br&gt;
The &lt;code&gt;weights_only=True&lt;/code&gt; parameter was designed to restrict &lt;br&gt;
deserialization to safe types only — tensors, primitives, &lt;br&gt;
basic containers.&lt;/p&gt;

&lt;p&gt;Zhou demonstrated that an attacker can craft a model file that &lt;br&gt;
exploits inconsistencies in PyTorch's serialization validation &lt;br&gt;
to bypass these restrictions entirely. When a victim loads the &lt;br&gt;
malicious model, arbitrary code executes in their environment.&lt;/p&gt;

&lt;p&gt;The attack vector is network-accessible (AV:N), requires no &lt;br&gt;
privileges (PR:N), and no user interaction beyond the normal &lt;br&gt;
model loading workflow (UI:N). In cloud-based ML environments &lt;br&gt;
this could mean lateral movement or data exfiltration.&lt;/p&gt;
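
&lt;p&gt;To see why pickle-based loading is risky at all, here's a minimal, harmless sketch. The class below is hypothetical: a real exploit hides its payload inside a model file, and the actual &lt;code&gt;weights_only&lt;/code&gt; bypass relies on validation inconsistencies that are not reproduced here.&lt;/p&gt;

```python
import pickle

class NotATensor:
    # stand-in for a malicious object embedded in a model file;
    # pickle calls __reduce__ during load, so the attacker chooses the callable
    def __reduce__(self):
        return (eval, ("21 * 2",))

payload = pickle.dumps(NotATensor())
# "loading" the file runs eval before any weights ever exist
result = pickle.loads(payload)
```

&lt;p&gt;Pickle executes the callable returned by &lt;code&gt;__reduce__&lt;/code&gt; during deserialization, which is why an allowlist that can be bypassed is equivalent to no allowlist at all.&lt;/p&gt;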


&lt;h2&gt;
  
  
  Who is affected
&lt;/h2&gt;

&lt;p&gt;Any pipeline that does this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# You were told this was safe. It wasn't.
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;model.pt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;weights_only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transfer learning pipelines pulling from public model repos&lt;/li&gt;
&lt;li&gt;Automated training pipelines that download community models&lt;/li&gt;
&lt;li&gt;Inference servers loading third-party weights&lt;/li&gt;
&lt;li&gt;Anyone on torch ≤ 2.5.1&lt;/li&gt;
&lt;/ul&gt;
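
&lt;p&gt;A quick way to flag affected environments in CI, sketched with a hand-rolled version comparison (a real pipeline would likely use &lt;code&gt;packaging.version&lt;/code&gt; instead, and this sketch doesn't handle pre-release tags):&lt;/p&gt;

```python
def parse_version(v):
    # "2.5.1" or "2.5.1+cu121" into the tuple (2, 5, 1)
    return tuple(int(p) for p in v.split("+")[0].split("."))

def is_vulnerable(installed, patched="2.6.0"):
    # vulnerable when the installed version sorts strictly before the patched one
    iv, pv = parse_version(installed), parse_version(patched)
    return iv != pv and min(iv, pv) == iv
```

&lt;p&gt;Feed it &lt;code&gt;torch.__version__&lt;/code&gt; at startup and fail fast if it returns &lt;code&gt;True&lt;/code&gt;.&lt;/p&gt;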




&lt;h2&gt;
  
  
  The fix
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--upgrade&lt;/span&gt; torch&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;2.6.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or pin in your requirements.txt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;torch&amp;gt;=2.6.0
torchvision&amp;gt;=0.21.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The broader problem: your full ML stack
&lt;/h2&gt;

&lt;p&gt;PyTorch is one package. Most ML stacks have 20-50 dependencies, &lt;br&gt;
many pinned at versions from 2022-2023 when the model was first &lt;br&gt;
built and never touched again.&lt;/p&gt;

&lt;p&gt;Here's what a typical ML requirements.txt looks like after &lt;br&gt;
a real CVE scan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=2.5.1          # CRITICAL CVE-2025-32434&lt;/span&gt;
&lt;span class="py"&gt;pillow&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=9.5.0         # HIGH CVE-2023-50447 (Arbitrary Code Execution)&lt;/span&gt;
&lt;span class="py"&gt;pyyaml&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=5.3.1         # CRITICAL CVE-2020-14343&lt;/span&gt;
&lt;span class="py"&gt;cryptography&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=36.0.0  # HIGH CVE-2023-49083&lt;/span&gt;
&lt;span class="py"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=2.28.0      # MEDIUM CVE-2023-32681&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every one of those has a known CVE. Most ML engineers have no &lt;br&gt;
idea because they haven't scanned their dependencies since &lt;br&gt;
the model was first trained.&lt;/p&gt;


&lt;h2&gt;
  
  
  How to check your stack right now
&lt;/h2&gt;

&lt;p&gt;Paste your &lt;code&gt;requirements.txt&lt;/code&gt; into &lt;br&gt;
&lt;a href="https://packagefix.dev" rel="noopener noreferrer"&gt;PackageFix&lt;/a&gt; — free, browser-based, &lt;br&gt;
no signup, no CLI install. It queries the OSV database live &lt;br&gt;
so CVE-2025-32434 and any CVEs published this week are included.&lt;/p&gt;

&lt;p&gt;Test file you can paste immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;torch&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=2.5.1&lt;/span&gt;
&lt;span class="py"&gt;pillow&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=9.5.0&lt;/span&gt;
&lt;span class="py"&gt;pyyaml&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=5.3.1&lt;/span&gt;
&lt;span class="py"&gt;cryptography&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=36.0.0&lt;/span&gt;
&lt;span class="py"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=2.28.0&lt;/span&gt;
&lt;span class="py"&gt;transformers&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=4.30.0&lt;/span&gt;
&lt;span class="py"&gt;numpy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;=1.24.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Recommended minimum versions for ML stacks in 2026
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="err"&gt;torch&amp;gt;=2.6.0&lt;/span&gt;
&lt;span class="err"&gt;torchvision&amp;gt;=0.21.0&lt;/span&gt;
&lt;span class="err"&gt;pillow&amp;gt;=10.2.0&lt;/span&gt;
&lt;span class="err"&gt;pyyaml&amp;gt;=6.0.1&lt;/span&gt;
&lt;span class="err"&gt;cryptography&amp;gt;=42.0.5&lt;/span&gt;
&lt;span class="err"&gt;requests&amp;gt;=2.32.0&lt;/span&gt;
&lt;span class="err"&gt;transformers&amp;gt;=4.36.0&lt;/span&gt;
&lt;span class="err"&gt;numpy&amp;gt;=1.26.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pin to these minimums. Add a monthly audit to your calendar. &lt;br&gt;
The OSV database updates daily — CVEs for packages already &lt;br&gt;
in your production stack appear regularly without any &lt;br&gt;
notification unless you're actively checking.&lt;/p&gt;
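
&lt;p&gt;If you'd rather script that audit than use a UI, the OSV API accepts one small JSON query per package. Here's a sketch of building the request body; actually sending it is a POST to &lt;code&gt;https://api.osv.dev/v1/query&lt;/code&gt;, omitted here to keep the example offline:&lt;/p&gt;

```python
import json

def osv_query_body(name, version, ecosystem="PyPI"):
    # JSON body for a POST to the OSV query endpoint (api.osv.dev/v1/query);
    # the response lists known vulnerabilities for that exact version
    return json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    })

body = osv_query_body("torch", "2.5.1")
```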




&lt;h2&gt;
  
  
  The uncomfortable truth about ML security
&lt;/h2&gt;

&lt;p&gt;The ML community has a dependency hygiene problem. We obsess &lt;br&gt;
over model accuracy, training efficiency, and inference speed. &lt;br&gt;
Almost nobody runs a CVE scanner on their requirements.txt.&lt;/p&gt;

&lt;p&gt;CVE-2025-32434 is a critical RCE in the most widely used ML &lt;br&gt;
framework in the world. It affects the exact workflow the &lt;br&gt;
documentation told us was safe.&lt;/p&gt;

&lt;p&gt;Check your stack. Update torch. Scan your full requirements.txt.&lt;/p&gt;

&lt;p&gt;The attack surface for ML systems is larger than most teams realize.&lt;/p&gt;

</description>
      <category>python</category>
      <category>security</category>
      <category>machinelearning</category>
      <category>pytorch</category>
    </item>
    <item>
      <title>Running Ollama locally? These 5 server misconfigs can expose your instance to the internet</title>
      <dc:creator>metriclogic26</dc:creator>
      <pubDate>Tue, 17 Mar 2026 15:34:40 +0000</pubDate>
      <link>https://dev.to/metriclogic26/running-ollama-locally-these-5-server-misconfigs-can-expose-your-instance-to-the-internet-3nc7</link>
      <guid>https://dev.to/metriclogic26/running-ollama-locally-these-5-server-misconfigs-can-expose-your-instance-to-the-internet-3nc7</guid>
      <description>&lt;p&gt;Ollama binds to 0.0.0.0 by default on port 11434. That means if you're running it on a VPS or home server, your entire model API is publicly accessible — no authentication required.&lt;/p&gt;

&lt;p&gt;Here's what to check before your Ollama instance becomes someone else's free GPU.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Ollama port exposed via Docker
&lt;/h2&gt;

&lt;p&gt;If you're running Ollama in Docker with:&lt;br&gt;
&lt;code&gt;ports: "11434:11434"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That binding bypasses UFW entirely. Docker inserts rules directly into iptables PREROUTING — before UFW even sees the traffic.&lt;/p&gt;

&lt;p&gt;The fix:&lt;br&gt;
&lt;code&gt;ports: "127.0.0.1:11434:11434"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or skip the port mapping entirely and use Docker's internal network if only other containers need access.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. UFW showing blocked but port still open
&lt;/h2&gt;

&lt;p&gt;Run &lt;code&gt;curl http://your-server-ip:11434&lt;/code&gt; from a machine outside your network.&lt;br&gt;
If you get a response, your Ollama API is public regardless of what UFW status shows.&lt;/p&gt;

&lt;p&gt;Review your &lt;code&gt;ufw status verbose&lt;/code&gt; output and check for IPv4/IPv6 mismatches — a common gap that leaves ports open on one protocol.&lt;/p&gt;
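
&lt;p&gt;The same reachability check can be scripted. This is a generic TCP probe, not an Ollama-specific API call; run it from a machine outside your network to see what the internet sees:&lt;/p&gt;

```python
import socket

def port_open(host, port=11434, timeout=2.0):
    # True if a TCP connection to host:port succeeds
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```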

&lt;h2&gt;
  
  
  3. Cron jobs colliding with model pulls
&lt;/h2&gt;

&lt;p&gt;Scheduled model pulls + backup jobs firing at the same time = server load spike = hung process. Visualize your full cron timeline before adding Ollama maintenance tasks.&lt;/p&gt;
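
&lt;p&gt;A rough way to spot same-minute collisions before they bite. This sketch supports only a tiny subset of cron's minute syntax and is illustrative, not a cron parser:&lt;/p&gt;

```python
def fires_at(minute_field, minute):
    # handles "*", "*/n", and comma lists of minutes only
    if minute_field == "*":
        return True
    if minute_field.startswith("*/"):
        return minute % int(minute_field[2:]) == 0
    return str(minute) in minute_field.split(",")

def minute_collisions(jobs):
    # jobs maps a job name to its cron minute field;
    # returns the minutes where two or more jobs fire together
    out = {}
    for m in range(60):
        hits = [name for name, field in jobs.items() if fires_at(field, m)]
        if len(hits) not in (0, 1):
            out[m] = hits
    return out
```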

&lt;h2&gt;
  
  
  4. SSL not covering your Ollama web interface
&lt;/h2&gt;

&lt;p&gt;If you're proxying Open WebUI or any Ollama frontend through Nginx or Traefik — that cert will expire. Check expiry across all your domains at once, not just the main one.&lt;/p&gt;
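
&lt;p&gt;Fetching the certificate takes a TLS connection (for example &lt;code&gt;ssl.get_server_certificate&lt;/code&gt; or a socket's &lt;code&gt;getpeercert()&lt;/code&gt;, not shown here); the expiry math itself is just this:&lt;/p&gt;

```python
import datetime

def days_until_expiry(not_after):
    # not_after uses the format ssl.getpeercert() returns,
    # e.g. "Jun  1 12:00:00 2026 GMT"
    exp = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    now = datetime.datetime.now(datetime.timezone.utc).replace(tzinfo=None)
    return (exp - now).days
```

&lt;p&gt;Negative means the cert already expired; alert well before it reaches zero.&lt;/p&gt;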

&lt;h2&gt;
  
  
  5. Dependencies in your Ollama extensions
&lt;/h2&gt;

&lt;p&gt;Building custom tools or scripts on top of Ollama? Your requirements.txt or package.json likely has CVEs you don't know about — the OSV database updates daily and AI training data is always stale.&lt;/p&gt;




&lt;h2&gt;
  
  
  The tools
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ConfigClarity&lt;/strong&gt; — Docker, firewall, cron, SSL, reverse proxy audits. Paste your config, get the exact fix. No signup, nothing leaves your browser.&lt;br&gt;
&lt;a href="https://configclarity.dev" rel="noopener noreferrer"&gt;https://configclarity.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PackageFix&lt;/strong&gt; — Paste your manifest, get a fixed version back. Live CVE scan via OSV + CISA KEV.&lt;br&gt;
&lt;a href="https://packagefix.dev" rel="noopener noreferrer"&gt;https://packagefix.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both MIT licensed, open source, client-side only.&lt;/p&gt;




&lt;p&gt;Running Ollama on a VPS or home server? What's your current setup for keeping the API locked down?&lt;/p&gt;

</description>
      <category>security</category>
      <category>docker</category>
      <category>selfhosted</category>
    </item>
    <item>
      <title>Running NemoClaw or OpenClaw locally? Audit your server before you give an AI agent the keys.</title>
      <dc:creator>metriclogic26</dc:creator>
      <pubDate>Mon, 16 Mar 2026 21:30:13 +0000</pubDate>
      <link>https://dev.to/metriclogic26/running-nemoclaw-or-openclaw-locally-audit-your-server-before-you-give-an-ai-agent-the-keys-35n4</link>
      <guid>https://dev.to/metriclogic26/running-nemoclaw-or-openclaw-locally-audit-your-server-before-you-give-an-ai-agent-the-keys-35n4</guid>
      <description>&lt;p&gt;NVIDIA just announced NemoClaw at GTC 2026 today. If you're in the OpenClaw community, you're probably already thinking about running it locally on a dedicated machine.&lt;/p&gt;

&lt;p&gt;Before you do — your server needs to be clean first.&lt;/p&gt;

&lt;p&gt;An always-on AI agent with access to your files, tools, and network is only as secure as the infrastructure it runs on. Here's what to check before you hand over the keys.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Your Docker ports might be publicly exposed
&lt;/h2&gt;

&lt;p&gt;NemoClaw and OpenClaw both run in Docker. The most common misconfiguration in any Docker setup is this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ports: "11434:11434"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That binds to 0.0.0.0 — meaning your AI agent's inference port is accessible from the public internet, not just localhost. UFW won't catch it. Docker bypasses UFW entirely by inserting rules directly into iptables PREROUTING.&lt;/p&gt;

&lt;p&gt;The fix:&lt;br&gt;
&lt;code&gt;ports: "127.0.0.1:11434:11434"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Check every port mapping in your compose file before NemoClaw goes live.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Your firewall has IPv4/IPv6 mismatches
&lt;/h2&gt;

&lt;p&gt;You locked down IPv4. IPv6 is wide open. Same result — your agent's ports are reachable from outside.&lt;/p&gt;

&lt;p&gt;Review your &lt;code&gt;ufw status verbose&lt;/code&gt; output and check for rules that only apply to one protocol.&lt;/p&gt;
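
&lt;p&gt;Scripted, that check can look like the sketch below. It assumes the rule layout &lt;code&gt;ufw status&lt;/code&gt; prints today, where IPv6 rows are tagged &lt;code&gt;(v6)&lt;/code&gt;:&lt;/p&gt;

```python
def ufw_protocol_gaps(status_lines):
    # status_lines: rule rows from `ufw status`, e.g.
    #   "11434/tcp        ALLOW    Anywhere"
    #   "11434/tcp (v6)   ALLOW    Anywhere (v6)"
    # returns ports that have a rule on only one of IPv4/IPv6
    v4, v6 = set(), set()
    for line in status_lines:
        parts = line.split()
        if not parts:
            continue
        if "(v6)" in line:
            v6.add(parts[0])
        else:
            v4.add(parts[0])
    return sorted(v4 ^ v6)
```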

&lt;h2&gt;
  
  
  3. Your cron jobs will collide with agent tasks
&lt;/h2&gt;

&lt;p&gt;Always-on agents schedule their own tasks. If you already have cron jobs running backups, &lt;br&gt;
updates, or maintenance — you need to know exactly when they fire.&lt;/p&gt;

&lt;p&gt;Three jobs hitting the same minute = server load spike = agent timeout = failed task with no error.&lt;/p&gt;

&lt;p&gt;Visualize your full cron timeline before adding agent workloads on top.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Your SSL certificates need monitoring
&lt;/h2&gt;

&lt;p&gt;NemoClaw runs a local web interface. If you're proxying it through Nginx or Traefik with SSL — that cert will expire. Set up monitoring across all your domains now, not after the renewal window passes.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Your dependencies have CVEs you don't know about
&lt;/h2&gt;

&lt;p&gt;Building on top of NemoClaw? Extending OpenClaw with custom skills? Your package.json or requirements.txt may have vulnerabilities that ChatGPT can't tell you about — because the OSV database updates daily and AI training data is always stale.&lt;/p&gt;

&lt;p&gt;Paste your manifest and get a live CVE scan against today's vulnerability database. CISA KEV flags actively exploited packages first.&lt;/p&gt;




&lt;h2&gt;
  
  
  The tools
&lt;/h2&gt;

&lt;p&gt;All of the above checks are what I built for the MetricLogic network:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ConfigClarity&lt;/strong&gt; — Docker, firewall, cron, SSL, reverse proxy audits. Paste your config, get the fix. No signup, nothing leaves your browser.&lt;br&gt;
&lt;a href="https://configclarity.dev" rel="noopener noreferrer"&gt;https://configclarity.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PackageFix&lt;/strong&gt; — Paste your manifest, get a fixed version back. Live CVE scan via OSV + CISA KEV. npm, PyPI, Ruby, PHP.&lt;br&gt;
&lt;a href="https://packagefix.dev" rel="noopener noreferrer"&gt;https://packagefix.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both MIT licensed, open source, client-side only.&lt;/p&gt;

&lt;p&gt;If you're building an always-on AI agent setup, run these before you go live. An agent with access to a misconfigured server is worse than no agent at all.&lt;/p&gt;




&lt;p&gt;Building something with NemoClaw or OpenClaw? &lt;br&gt;
Drop a comment — would love to know what stacks people are running this on.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>devops</category>
      <category>selfhosted</category>
    </item>
  </channel>
</rss>
