<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jesus Fernandez</title>
    <description>The latest articles on DEV Community by Jesus Fernandez (@jfernandez27).</description>
    <link>https://dev.to/jfernandez27</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1557742%2F685b6616-9358-43ae-a64e-eb9784055e3c.png</url>
      <title>DEV Community: Jesus Fernandez</title>
      <link>https://dev.to/jfernandez27</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jfernandez27"/>
    <language>en</language>
    <item>
      <title>🚀 Optimizing Local LLM Development with Docker &amp; NVIDIA GPUs</title>
      <dc:creator>Jesus Fernandez</dc:creator>
      <pubDate>Thu, 07 Aug 2025 02:20:02 +0000</pubDate>
      <link>https://dev.to/jfernandez27/optimizing-local-llm-development-with-docker-nvidia-gpus-4fbf</link>
      <guid>https://dev.to/jfernandez27/optimizing-local-llm-development-with-docker-nvidia-gpus-4fbf</guid>
      <description>&lt;p&gt;Working with large language models (LLMs) locally is exciting—but also messy. Between GPU drivers, container configs, and model juggling, it’s easy to lose hours just getting things to run. That’s why I created &lt;strong&gt;ollama-dev-env&lt;/strong&gt;: an experimental project designed to streamline local LLM development using Docker, NVIDIA GPUs, and open-source models like DeepSeek Coder.&lt;/p&gt;




&lt;h2&gt;🧪 Why This Project Exists&lt;/h2&gt;

&lt;p&gt;This started as a personal experiment.&lt;/p&gt;

&lt;p&gt;I wanted to see how far I could push local development with LLMs—without relying on cloud APIs or heavyweight setups. The goals were simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Run models like DeepSeek Coder and CodeLlama entirely on my own hardware
&lt;/li&gt;
&lt;li&gt;✅ Automate the setup with Docker and shell scripts
&lt;/li&gt;
&lt;li&gt;✅ Create a reusable environment for testing, coding, and learning
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What began as a weekend project turned into a full-featured dev environment I now use daily for prototyping and AI-assisted coding.&lt;/p&gt;




&lt;h2&gt;🌟 Key Features&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🔧 &lt;strong&gt;Experimental but practical&lt;/strong&gt;: Built for tinkering, stable enough for real use
&lt;/li&gt;
&lt;li&gt;🧠 &lt;strong&gt;Pre-installed LLMs&lt;/strong&gt;: DeepSeek Coder, CodeLlama, Llama 2, Mixtral, Phi, Mistral, Neural Chat
&lt;/li&gt;
&lt;li&gt;🚀 &lt;strong&gt;GPU Acceleration&lt;/strong&gt;: Optimized for RTX 3050 and compatible cards
&lt;/li&gt;
&lt;li&gt;🛠️ &lt;strong&gt;Dev Script Automation&lt;/strong&gt;: One CLI to manage everything
&lt;/li&gt;
&lt;li&gt;🌐 &lt;strong&gt;Web UI&lt;/strong&gt;: Chat and interact with models visually
&lt;/li&gt;
&lt;li&gt;🔐 &lt;strong&gt;Security-first&lt;/strong&gt;: Non-root containers, health checks, resource limits
&lt;/li&gt;
&lt;/ul&gt;
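
&lt;p&gt;Before the first run, it’s worth confirming that Docker can actually see your GPU. A quick sanity check, assuming the NVIDIA Container Toolkit is installed (the CUDA image tag here is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Should print the same GPU table as running nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If this fails, fix the container toolkit setup first; nothing GPU-related in the stack will work without it.&lt;/p&gt;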




&lt;h2&gt;🛠️ Setup in Seconds&lt;/h2&gt;

&lt;p&gt;Full instructions are in the &lt;a href="https://github.com/Jfernandez27/ollama-dev-env" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;, but here’s the short version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Jfernandez27/ollama-dev-env.git
cd ollama-dev-env
./scripts/ollama-dev.sh start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🧠 Ollama API: &lt;a href="http://localhost:11434" rel="noopener noreferrer"&gt;http://localhost:11434&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🌐 Web UI: &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
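
&lt;p&gt;Once the containers are up, you can hit the Ollama API directly. For example, a minimal non-streaming completion request with curl (this assumes the deepseek-coder model has already been pulled):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# "stream": false returns one JSON object instead of a token stream
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-coder", "prompt": "Write a Python function that reverses a string.", "stream": false}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;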




&lt;h2&gt;🧩 What You Can Do With It&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🧪 Experiment with LLMs locally
&lt;/li&gt;
&lt;li&gt;💬 Chat with models via CLI or browser
&lt;/li&gt;
&lt;li&gt;🧠 Analyze code with DeepSeek Coder
&lt;/li&gt;
&lt;li&gt;🧱 Pull and switch between models
&lt;/li&gt;
&lt;li&gt;🔍 Monitor GPU usage and container health
&lt;/li&gt;
&lt;li&gt;🧰 Extend the environment with your own tools
&lt;/li&gt;
&lt;/ul&gt;
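
&lt;p&gt;Day-to-day model management goes through the Ollama CLI inside the container. A quick sketch (the container name &lt;code&gt;ollama&lt;/code&gt; is an assumption; check &lt;code&gt;docker ps&lt;/code&gt; for yours):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pull a new model and list what's installed
docker exec -it ollama ollama pull codellama
docker exec -it ollama ollama list

# Start an interactive chat with a specific model
docker exec -it ollama ollama run deepseek-coder

# Watch GPU utilization while a model is generating
watch -n 1 nvidia-smi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;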




&lt;h2&gt;⚙️ Built for Developers Like Me&lt;/h2&gt;

&lt;p&gt;As a backend-focused dev working in EdTech and SaaS, I needed a local playground for AI tools—something fast, secure, and flexible. This project reflects that need. While it’s experimental, it’s already powering real workflows.&lt;/p&gt;




&lt;h2&gt;🤝 Want to Collaborate?&lt;/h2&gt;

&lt;p&gt;If you're building something similar, exploring LLMs, or just want to geek out over Docker and GPUs, feel free to reach out or contribute. The repo is open-source and MIT licensed:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/Jfernandez27/ollama-dev-env" rel="noopener noreferrer"&gt;github.com/Jfernandez27/ollama-dev-env&lt;/a&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>docker</category>
      <category>opensource</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
