<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luciano Ballerano</title>
    <description>The latest articles on DEV Community by Luciano Ballerano (@runlocal).</description>
    <link>https://dev.to/runlocal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3935624%2F40a6f427-a24b-448c-a2eb-d7e1ff665598.png</url>
      <title>DEV Community: Luciano Ballerano</title>
      <link>https://dev.to/runlocal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/runlocal"/>
    <language>en</language>
    <item>
      <title>Built an open-source picker that recommends the right self-hosted LLM for your hardware</title>
      <dc:creator>Luciano Ballerano</dc:creator>
      <pubDate>Sun, 17 May 2026 00:09:22 +0000</pubDate>
      <link>https://dev.to/runlocal/built-an-open-source-picker-that-recommends-the-right-self-hosted-llm-for-your-hardware-2p4f</link>
      <guid>https://dev.to/runlocal/built-an-open-source-picker-that-recommends-the-right-self-hosted-llm-for-your-hardware-2p4f</guid>
      <description>&lt;p&gt;Built this because every "which LLM should I self-host on my [hardware]"
thread ends with "depends" without anyone actually doing the math.&lt;/p&gt;

&lt;p&gt;You tell it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Platform (NVIDIA, AMD, Apple Silicon, Intel Arc, CPU-only)&lt;/li&gt;
&lt;li&gt;Available VRAM or unified memory&lt;/li&gt;
&lt;li&gt;Use case (chat, code, long-context, math)&lt;/li&gt;
&lt;li&gt;License preference (any vs permissive-only)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You get a ranked list of open-weight models that actually fit in your
memory budget with a 15% safety margin, the right GGUF quantization picked
automatically, and copy-paste install commands for Ollama or llama.cpp.
The picker runs entirely in your browser; nothing is sent to a server.&lt;/p&gt;
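
&lt;p&gt;If you're curious, the fit check is just arithmetic: estimate the quantized weight size
from parameter count and bits per weight, add an allowance for the KV cache and runtime
buffers, then require that total plus the 15% margin to fit in your memory budget. A minimal
sketch in TypeScript (the constants are illustrative, not the picker's exact values):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Rough VRAM estimate for a quantized model; constants are illustrative.
function fitsInBudget(paramsB: number, bitsPerWeight: number, budgetGiB: number): boolean {
  const weightsGiB = (paramsB * 1e9 * bitsPerWeight / 8) / (1024 ** 3);
  const overheadGiB = weightsGiB * 0.10 + 1.0;           // KV cache + runtime buffers (rough)
  const requiredGiB = (weightsGiB + overheadGiB) * 1.15; // 15% safety margin
  return budgetGiB &gt;= requiredGiB;
}

// Example: a 7B model at Q4_K_M (roughly 4.8 bits/weight) on a 12 GiB GPU
console.log(fitsInBudget(7, 4.8, 12)); // true
&lt;/code&gt;&lt;/pre&gt;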

&lt;p&gt;The site also has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A curated model directory with explicit, colour-coded license labels
(permissive / open-weight / non-commercial)&lt;/li&gt;
&lt;li&gt;Three install guides for Ollama, llama.cpp and LM Studio&lt;/li&gt;
&lt;li&gt;A glossary in plain English for newcomers&lt;/li&gt;
&lt;li&gt;A live trending section from Hugging Face, refreshed weekly via a
GitHub Action that commits the snapshot back to the repo (full diff
history in git); see the sketch after this list&lt;/li&gt;
&lt;/ul&gt;
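
&lt;p&gt;The weekly refresh is nothing fancy: a small script runs in CI and the Action commits
whatever changed. Roughly this shape, though the query parameters, response fields and
output path here are simplified stand-ins, not the exact ones in the repo:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch of the weekly snapshot script; paths and field names are simplified.
import { writeFile } from "node:fs/promises";

async function refreshTrending() {
  const url = new URL("https://huggingface.co/api/models");
  url.searchParams.set("sort", "downloads"); // the Hub API also supports sort=likes
  url.searchParams.set("limit", "20");
  const res = await fetch(url);
  const models = await res.json();
  const snapshot = models.map((m: any) =&gt; ({
    id: m.id,
    downloads: m.downloads,
    likes: m.likes,
  }));
  // The GitHub Action then commits this file, so git keeps the full diff history.
  await writeFile("data/trending.json", JSON.stringify(snapshot, null, 2));
}

refreshTrending().catch(console.error);
&lt;/code&gt;&lt;/pre&gt;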

&lt;p&gt;Source code is MIT; content is CC BY 4.0. No accounts, no analytics,
no ads, no affiliate links.&lt;/p&gt;

&lt;p&gt;Picker: &lt;a href="https://runlocal.blog/picker" rel="noopener noreferrer"&gt;https://runlocal.blog/picker&lt;/a&gt;&lt;br&gt;
Site: &lt;a href="https://runlocal.blog" rel="noopener noreferrer"&gt;https://runlocal.blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feedback welcome, especially on the memory estimates and the picker
scoring formula (downloads + likes + recency, weighted). If a model
you'd want is missing from the catalog, drop the name in the comments.&lt;/p&gt;
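
&lt;p&gt;To make that last point concrete, "scoring formula" means something in this shape;
the weights and log damping below are placeholders, not the exact numbers in the source:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Placeholder weights; the real numbers live in the picker source.
function modelScore(downloads: number, likes: number, daysSinceUpdate: number): number {
  const downloadTerm = Math.log10(downloads + 1);       // dampen very large counts
  const likeTerm = Math.log10(likes + 1);
  const recencyTerm = 1 / (1 + daysSinceUpdate / 30);   // decays over roughly a month
  return 0.5 * downloadTerm + 0.3 * likeTerm + 0.2 * recencyTerm;
}
&lt;/code&gt;&lt;/pre&gt;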

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>localai</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
