<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fayaz Bin Salam</title>
    <description>The latest articles on DEV Community by Fayaz Bin Salam (@fayazbuilds_n5f2t7).</description>
    <link>https://dev.to/fayazbuilds_n5f2t7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3924876%2Ff2dd5563-eb4e-4eef-8e67-2d2fddff94a8.png</url>
      <title>DEV Community: Fayaz Bin Salam</title>
      <link>https://dev.to/fayazbuilds_n5f2t7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fayazbuilds_n5f2t7"/>
    <language>en</language>
    <item>
      <title>Ollama Models Explorer — a clean Next.js UI to browse and filter local LLMs</title>
      <dc:creator>Fayaz Bin Salam</dc:creator>
      <pubDate>Mon, 11 May 2026 11:34:06 +0000</pubDate>
      <link>https://dev.to/fayazbuilds_n5f2t7/ollama-models-explorer-a-clean-nextjs-ui-to-browse-and-filter-local-llms-15a8</link>
      <guid>https://dev.to/fayazbuilds_n5f2t7/ollama-models-explorer-a-clean-nextjs-ui-to-browse-and-filter-local-llms-15a8</guid>
      <description>&lt;p&gt;If you're running local LLMs through Ollama, finding the right model is annoying. The official model page scrolls forever, capability tags are inconsistent, and there's no way to sort by context window or size without doing it in your head.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;Ollama Models Explorer&lt;/strong&gt; — a small Next.js + Tailwind app that pulls the full Ollama model catalog and gives you a real, fast table. Search by name, filter by capability (chat, vision, embedding), sort by name, size, or context length by clicking the column header. Dark-themed because that's where I live.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live demo:&lt;/strong&gt; &lt;a href="https://ollama-models-explorer.vercel.app/" rel="noopener noreferrer"&gt;https://ollama-models-explorer.vercel.app/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/p32929/ollama_models_explorer" rel="noopener noreferrer"&gt;https://github.com/p32929/ollama_models_explorer&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Next.js (app router) + TypeScript&lt;/li&gt;
&lt;li&gt;Tailwind CSS + shadcn/ui for the table and inputs&lt;/li&gt;
&lt;li&gt;Lucide for icons&lt;/li&gt;
&lt;li&gt;Model data loaded from a local JSON file, so the deploy is fully static and instant (see the sketch after this list)&lt;/li&gt;
&lt;/ul&gt;
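
&lt;p&gt;For the curious, the data layer is about as simple as it sounds. Here's a minimal sketch of the idea (the &lt;code&gt;models.json&lt;/code&gt; name and the &lt;code&gt;Model&lt;/code&gt; fields are illustrative, not the repo's exact schema):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// models.ts: a minimal sketch. The file name and the Model fields are
// illustrative assumptions, not the repo's exact schema.
import models from "./models.json";

export type Model = {
  name: string;
  size: string;            // e.g. "7b"
  contextWindow: string;   // e.g. "128k", "1M", "32768"
  capabilities: string[];  // e.g. ["chat", "vision"]
};

// A plain JSON import is resolved at build time, so the deployed
// site is fully static: no API route, no runtime fetch.
export const allModels = models as Model[];
&lt;/code&gt;&lt;/pre&gt;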

&lt;h2&gt;One non-obvious thing&lt;/h2&gt;

&lt;p&gt;The capability filter ANDs the pills together instead of ORing them. Most "filter by tag" UIs default to OR ("any of these tags") which is almost never what you want when you're hunting for a model that does &lt;em&gt;both&lt;/em&gt; vision AND chat. Small detail, surprisingly different feel.&lt;/p&gt;
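
&lt;p&gt;In code that's one word: &lt;code&gt;every&lt;/code&gt; instead of &lt;code&gt;some&lt;/code&gt;. A sketch, reusing the assumed &lt;code&gt;Model&lt;/code&gt; type from above (not the repo's actual code):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// AND semantics: keep a model only if it has EVERY selected capability.
// The usual "filter by tag" default is OR, i.e. selected.some(...).
function filterByCapabilities(models: Model[], selected: string[]): Model[] {
  if (selected.length === 0) return models; // no pills active: show everything
  return models.filter((m) =&gt;
    selected.every((cap) =&gt; m.capabilities.includes(cap))
  );
}
&lt;/code&gt;&lt;/pre&gt;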

&lt;p&gt;Also: sorting context window numerically meant parsing strings like &lt;code&gt;128k&lt;/code&gt;, &lt;code&gt;1M&lt;/code&gt;, &lt;code&gt;32768&lt;/code&gt; into a consistent unit before comparing. Looks trivial, but the Ollama catalog mixes formats freely.&lt;/p&gt;
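
&lt;p&gt;The normalizer is small but fiddly. Here's my reconstruction of the idea, not the repo's actual code; it treats &lt;code&gt;k&lt;/code&gt; and &lt;code&gt;M&lt;/code&gt; as binary multiples so &lt;code&gt;32k&lt;/code&gt; and &lt;code&gt;32768&lt;/code&gt; compare as equal:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Normalize mixed context-window strings ("128k", "1M", "32768") into a
// single token count so the column sorts numerically.
function parseContextWindow(raw: string): number {
  const match = raw.trim().match(/^([\d.]+)\s*([kKmM]?)$/);
  if (!match) return 0; // unparseable values all sort together
  const value = parseFloat(match[1]);
  const unit = match[2].toLowerCase();
  if (unit === "k") return value * 1024;
  if (unit === "m") return value * 1024 * 1024;
  return value;
}

// parseContextWindow("128k")  =&gt; 131072
// parseContextWindow("1M")    =&gt; 1048576
// parseContextWindow("32768") =&gt; 32768
&lt;/code&gt;&lt;/pre&gt;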

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;It's bare-bones on purpose — would love feedback on what filters or columns you'd actually use day-to-day. Stars / forks welcome if it's useful.&lt;/p&gt;

&lt;p&gt;Open to building with sharp teams + solo founders — DMs and email open.&lt;/p&gt;

&lt;p&gt;— Fayaz (&lt;a href="https://github.com/p32929" rel="noopener noreferrer"&gt;github.com/p32929&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>ollama</category>
      <category>ai</category>
      <category>opensource</category>
      <category>nextjs</category>
    </item>
  </channel>
</rss>
