<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: T Obias</title>
    <description>The latest articles on DEV Community by T Obias (@t_obias_12538a44ba441aacc).</description>
    <link>https://dev.to/t_obias_12538a44ba441aacc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3766614%2F776d70d0-ed21-48ea-a12a-60e08591a0d4.jpg</url>
      <title>DEV Community: T Obias</title>
      <link>https://dev.to/t_obias_12538a44ba441aacc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/t_obias_12538a44ba441aacc"/>
    <language>en</language>
    <item>
      <title>I Got Tired of Googling "Can My GPU Run This LLM?" So I Built This</title>
      <dc:creator>T Obias</dc:creator>
      <pubDate>Wed, 11 Feb 2026 15:03:45 +0000</pubDate>
      <link>https://dev.to/t_obias_12538a44ba441aacc/i-got-tired-of-googling-can-my-gpu-run-this-llm-so-i-built-this-208j</link>
      <guid>https://dev.to/t_obias_12538a44ba441aacc/i-got-tired-of-googling-can-my-gpu-run-this-llm-so-i-built-this-208j</guid>
      <description>&lt;p&gt;Free tool that instantly tells you if your GPU can run DeepSeek, Llama 3, Mistral, and 50+ other AI models. No more guessing.&lt;/p&gt;



&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;You want to run LLMs locally (DeepSeek, Llama 3, Mistral, whatever).&lt;/p&gt;

&lt;p&gt;You Google: &lt;strong&gt;"Can RTX 3060 run Llama 3?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10 Reddit threads with different answers&lt;/li&gt;
&lt;li&gt;Someone says "yeah probably"&lt;/li&gt;
&lt;li&gt;Someone else says "no way"&lt;/li&gt;
&lt;li&gt;A YouTube video from 2023&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You download 40GB anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It doesn't fit.&lt;/strong&gt; 😤&lt;/p&gt;




&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;I built a simple tool that gives you the answer in &lt;strong&gt;5 seconds&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://canirunllms.com" rel="noopener noreferrer"&gt;canirunllms.com&lt;/a&gt;&lt;/strong&gt; 👈&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pick your GPU&lt;/li&gt;
&lt;li&gt;See which models work (green = yes, red = no)&lt;/li&gt;
&lt;li&gt;Done.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it.&lt;/p&gt;
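&lt;p&gt;Under the hood, that green/red check boils down to one comparison: does the model's memory footprint fit in the card's VRAM? Here's a minimal sketch of the idea (the GPU and model numbers below are illustrative placeholders, not the site's actual data):&lt;/p&gt;

```python
# Minimal sketch of the "pick your GPU -> yes/no" check.
# VRAM figures are illustrative placeholders, not the site's database.
GPU_VRAM_GB = {
    "RTX 3060": 12,
    "RTX 4090": 24,
}

# Approximate footprints for illustration: one 4-bit quantized 8B model,
# one large mixture-of-experts model at full 16-bit precision.
MODEL_VRAM_GB = {
    "llama-3-8b-q4": 5.5,
    "mixtral-8x7b-fp16": 93.0,
}

def can_run(gpu: str, model: str) -> bool:
    """Green/red check: does the model's footprint fit in the card's VRAM?"""
    return MODEL_VRAM_GB[model] <= GPU_VRAM_GB[gpu]

print(can_run("RTX 3060", "llama-3-8b-q4"))      # True  (green)
print(can_run("RTX 4090", "mixtral-8x7b-fp16"))  # False (red)
```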




&lt;h2&gt;Why I Built This&lt;/h2&gt;

&lt;p&gt;I was buying a GPU for AI stuff and had no idea what I needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Questions I couldn't answer:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Will my RTX 3060 run Llama 3?&lt;/li&gt;
&lt;li&gt;Do I need 16GB or 24GB VRAM?&lt;/li&gt;
&lt;li&gt;Can my MacBook run local LLMs?&lt;/li&gt;
&lt;li&gt;What about DeepSeek-R1?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every answer I found online was &lt;strong&gt;"it depends"&lt;/strong&gt; or &lt;strong&gt;"maybe"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So I built a database of every GPU and every popular LLM, and made it searchable.&lt;/p&gt;




&lt;h2&gt;What Makes This Different&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Other tools:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Require you to understand "quantization levels"&lt;/li&gt;
&lt;li&gt;Show you complicated formulas&lt;/li&gt;
&lt;li&gt;Don't include Apple Silicon&lt;/li&gt;
&lt;li&gt;Haven't been updated since 2023&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This tool:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Just shows you: YES or NO&lt;/li&gt;
&lt;li&gt;✅ Covers 50+ models (DeepSeek, Llama, Mistral, Mixtral, Gemma, etc.)&lt;/li&gt;
&lt;li&gt;✅ Includes MacBook Pro, Mac Studio, RTX 3060, 4090, AMD, Intel—everything&lt;/li&gt;
&lt;li&gt;✅ Updated February 2026&lt;/li&gt;
&lt;li&gt;✅ 100% free, no signup&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Examples (Try These)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"Can RTX 3060 run DeepSeek-R1 8B?"&lt;/strong&gt;&lt;br&gt;
👉 &lt;a href="https://canirunllms.com/gpu/nvidia-rtx-3060" rel="noopener noreferrer"&gt;Click here&lt;/a&gt;&lt;br&gt;
Answer: &lt;strong&gt;Yes&lt;/strong&gt; (4-bit quantized)&lt;/p&gt;
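&lt;p&gt;(That "4-bit quantized" part is what makes it fit. A rough rule of thumb, not the tool's exact formula: weight memory ≈ parameters × bits per weight ÷ 8, plus ~20% headroom for KV cache and activations.)&lt;/p&gt;

```python
def est_vram_gb(params_billion: float, bits_per_weight: int,
                overhead: float = 1.2) -> float:
    """Back-of-envelope VRAM estimate: weights plus ~20% headroom.

    A rough heuristic for illustration, not the tool's exact math.
    """
    return params_billion * bits_per_weight / 8 * overhead

# An 8B model at 4-bit: comfortably inside a 12 GB RTX 3060.
print(round(est_vram_gb(8, 4), 1))   # 4.8

# Mixtral 8x7B (~46.7B params) at 16-bit: nowhere near a 24 GB card.
print(round(est_vram_gb(46.7, 16)))  # 112
```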

&lt;p&gt;&lt;strong&gt;"Can RTX 4090 run Mixtral 8x7B?"&lt;/strong&gt;&lt;br&gt;
👉 &lt;a href="https://canirunllms.com/gpu/nvidia-rtx-4090-24gb" rel="noopener noreferrer"&gt;Click here&lt;/a&gt;&lt;br&gt;
Answer: &lt;strong&gt;No&lt;/strong&gt; (needs 90GB VRAM)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"RTX 4090 vs RTX 3060 – which is better for LLMs?"&lt;/strong&gt;&lt;br&gt;
👉 &lt;a href="https://canirunllms.com/compare/nvidia-rtx-4090-24gb-vs-nvidia-rtx-3060" rel="noopener noreferrer"&gt;Compare them&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Who This Is For&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;You should use this if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🛒 You're buying a GPU and want to know what you can run&lt;/li&gt;
&lt;li&gt;💻 You already have a GPU and want to know which models fit&lt;/li&gt;
&lt;li&gt;🍎 You have a Mac and everyone online only talks about NVIDIA&lt;/li&gt;
&lt;li&gt;🤔 You're tired of Googling and getting vague answers&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Real Use Cases&lt;/h2&gt;

&lt;h3&gt;Before buying a GPU:&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"I'm choosing between RTX 4070 and RTX 4090. Let me check which models each one can run..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Before downloading a 40GB model:&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"Wait, will Llama 3 70B even fit on my GPU? Let me check first..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;When someone asks you "what GPU do I need?":&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"Just go to canirunllms.com and type in your model. Done."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;The Best Part: It's Free&lt;/h2&gt;

&lt;p&gt;No signup. No email. No credit card.&lt;/p&gt;

&lt;p&gt;Just a tool that answers one question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"Can my GPU run this LLM?"&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Try it: 👉 &lt;strong&gt;&lt;a href="https://canirunllms.com" rel="noopener noreferrer"&gt;canirunllms.com&lt;/a&gt;&lt;/strong&gt; 👈&lt;/p&gt;




&lt;h2&gt;I'd Love Your Feedback&lt;/h2&gt;

&lt;p&gt;Since you're here, I have 3 quick questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Did you ever download an LLM that didn't fit on your GPU?&lt;/strong&gt; (I did this 3 times before building this tool 😅)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What GPU do you use?&lt;/strong&gt; (Curious what the Dev.to crowd runs)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Missing anything?&lt;/strong&gt; (GPUs, models, features?)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Drop a comment! 👇&lt;/p&gt;




&lt;h2&gt;Quick Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🏠 &lt;strong&gt;Main tool:&lt;/strong&gt; &lt;a href="https://canirunllms.com" rel="noopener noreferrer"&gt;canirunllms.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📊 &lt;strong&gt;GPU comparison:&lt;/strong&gt; &lt;a href="https://canirunllms.com/compare/nvidia-rtx-4090-24gb-vs-nvidia-rtx-3060" rel="noopener noreferrer"&gt;Compare any 2 GPUs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📖 &lt;strong&gt;VRAM guide:&lt;/strong&gt; &lt;a href="https://canirunllms.com/guide/vram-requirements" rel="noopener noreferrer"&gt;How VRAM actually works&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;ℹ️ &lt;strong&gt;About:&lt;/strong&gt; &lt;a href="https://canirunllms.com/about" rel="noopener noreferrer"&gt;How I verify the data&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;P.S.&lt;/strong&gt; If this saved you from buying the wrong GPU or downloading a model that doesn't fit, share it with someone who needs it. 🙏&lt;/p&gt;


&lt;h2&gt;Comments? Questions? Roasts?&lt;/h2&gt;

&lt;p&gt;Let me know below! 👇&lt;/p&gt;

&lt;p&gt;(And yes, I know the design is simple. That's on purpose. Just wanted something fast that works.)&lt;/p&gt;

</description>
      <category>llm</category>
      <category>canirunllms</category>
    </item>
  </channel>
</rss>
