<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Raveendiran RR</title>
    <description>The latest articles on DEV Community by Raveendiran RR (@raveendiran).</description>
    <link>https://dev.to/raveendiran</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2805062%2F77caa5b0-5ec6-4742-a371-6714e29a2204.png</url>
      <title>DEV Community: Raveendiran RR</title>
      <link>https://dev.to/raveendiran</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/raveendiran"/>
    <language>en</language>
    <item>
      <title>🚀Empowering Developers with Docker Model Runner: Run AI inference Models Locally with Enhanced Privacy and GPU Acceleration</title>
      <dc:creator>Raveendiran RR</dc:creator>
      <pubDate>Thu, 03 Apr 2025 18:30:48 +0000</pubDate>
      <link>https://dev.to/raveendiran/empowering-developers-with-docker-model-runner-run-ai-inference-models-locally-with-enhanced-bb9</link>
      <guid>https://dev.to/raveendiran/empowering-developers-with-docker-model-runner-run-ai-inference-models-locally-with-enhanced-bb9</guid>
      <description>&lt;p&gt;Hey there, tech enthusiasts! 👋&lt;/p&gt;

&lt;p&gt;If you’ve ever thought:&lt;/p&gt;

&lt;p&gt;“Wouldn’t it be cool if I could just run an AI model locally with zero setup pain?”&lt;/p&gt;

&lt;p&gt;Well, let me introduce you to something magical: Docker Model Runner.&lt;/p&gt;

&lt;p&gt;This tool is about to become your best friend — whether you’re a developer building ML apps, a DevOps engineer managing workflows, or a leader figuring out how to scale AI integration in your org.&lt;/p&gt;

&lt;p&gt;Ready to roll? Let’s go!&lt;/p&gt;

&lt;h1&gt;
  
  
  🧠 What is Docker Model Runner?
&lt;/h1&gt;

&lt;p&gt;In plain terms, Docker Model Runner lets you run open models like Llama, Mistral, Gemma, or even DeepSeek locally on your machine using Docker Desktop — without worrying about dependencies, GPU setup, or cloud costs. (Do keep in mind that a local GPU can boost performance.)&lt;/p&gt;

&lt;p&gt;It’s like giving your laptop a magic AI engine that works out of the box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79813u2pzajn67r32nk2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F79813u2pzajn67r32nk2.jpg" alt="Docker Model Runner -&amp;gt; Architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  🔧 Why Should You Care? (Even if You’re Not a Dev)
&lt;/h1&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;What You Gain&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Developer&lt;/td&gt;
&lt;td&gt;Run and test models locally in minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevOps&lt;/td&gt;
&lt;td&gt;Integrate AI model runs into CI/CD pipelines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Manager&lt;/td&gt;
&lt;td&gt;Understand how teams can innovate faster, safely&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Science&lt;/td&gt;
&lt;td&gt;Try models without wrangling Python environments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Product Lead&lt;/td&gt;
&lt;td&gt;Explore AI integration early in the product lifecycle&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h1&gt;
  
  
  ✅ Prerequisites
&lt;/h1&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• 🐳 Docker Desktop (v4.27 or later)
• 💻 macOS | Windows  | Linux (chipset :Apple Silicon or Intel)
• 🧠 Some curiosity about how AI models can power your tools
• Optional: An OpenAI API key or similar if you plan to do tool calling
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  🔥 Step-by-Step: Setting Up Docker Model Runner
&lt;/h1&gt;

&lt;p&gt;Let’s do this.&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Enable Model Runner in Docker Desktop
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1.  Open Docker Desktop
2.  Navigate to Settings &amp;gt; Experimental Features
3.  Toggle ON “Model Runner”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
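&lt;p&gt;If you prefer the terminal, Docker's docs also describe enabling it from the CLI — a hedged sketch, as the exact command and flags may vary by Docker Desktop version (the optional &lt;code&gt;--tcp&lt;/code&gt; flag exposes the API on a host port):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Enable Model Runner from the CLI (optionally exposing it on host port 12434)
docker desktop enable model-runner --tcp 12434
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;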
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqiolby7ymchvwkr8hjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqiolby7ymchvwkr8hjj.png" alt="Enable Docker Model Runner" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Pull a Model
&lt;/h2&gt;

&lt;p&gt;Docker makes this ridiculously easy. Open a terminal and run:&lt;/p&gt;

&lt;p&gt;(refer to the &lt;a href="https://hub.docker.com/u/ai" rel="noopener noreferrer"&gt;Docker AI namespace on Docker Hub&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model pull &amp;lt;model name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
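&lt;p&gt;For example, to grab the small model used later in this post:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pulls the SmolLM2 model from Docker's ai namespace
docker model pull ai/smollm2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;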



&lt;p&gt;Check that the model has been downloaded:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model list 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Run the Model Locally
&lt;/h2&gt;

&lt;p&gt;Run the command below and Docker spins up a containerized AI model, ready to answer questions in an interactive chat. The container layer itself is abstracted away by Docker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model run &amp;lt;model name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
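&lt;p&gt;You can also pass a one-off prompt instead of entering the interactive chat — a sketch assuming a recent Docker Desktop release (the model name and prompt are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One-shot prompt: prints the model's answer and exits
docker model run ai/smollm2 "Explain Docker in one sentence."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;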



&lt;p&gt;AI models from Docker’s ai namespace:&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 Docker AI Models: At-a-Glance Comparison
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Here’s a snapshot of the top models available via Docker's &lt;code&gt;ai&lt;/code&gt; namespace, perfect for local GenAI experiments or production-grade setups.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model Name&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Parameters&lt;/th&gt;
&lt;th&gt;Quantization&lt;/th&gt;
&lt;th&gt;Context Window&lt;/th&gt;
&lt;th&gt;Key Features&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/llama3.1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Meta's LLama 3.1: Chat-focused, benchmark-strong, multilingual-ready&lt;/td&gt;
&lt;td&gt;Meta&lt;/td&gt;
&lt;td&gt;8B, 70B&lt;/td&gt;
&lt;td&gt;Q4_K_M, F16&lt;/td&gt;
&lt;td&gt;128K&lt;/td&gt;
&lt;td&gt;- Multilingual (EN, DE, FR, IT, PT, HI, ES, TH)&lt;br&gt;- Text/code generation&lt;br&gt;- Chat assistant&lt;br&gt;- NLG&lt;br&gt;- Synthetic data generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/llama3.3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Newest LLama 3 release with improved reasoning and generation quality&lt;/td&gt;
&lt;td&gt;Meta&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Improved reasoning&lt;br&gt;- Better generation quality&lt;br&gt;- Latest LLaMA release&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/smollm2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tiny LLM built for speed, edge devices, and local development&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Optimized for edge&lt;br&gt;- Speed-focused&lt;br&gt;- Local dev&lt;br&gt;- Low resource footprint&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/mxbai-embed-large&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Text embedding model&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Text embedding&lt;br&gt;- Large parameter size&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/qwen2.5&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Versatile Qwen update with better language skills&lt;/td&gt;
&lt;td&gt;Qwen&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Improved language abilities&lt;br&gt;- Versatile usage&lt;br&gt;- Broader application support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/phi4&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Microsoft’s compact model with strong reasoning and coding&lt;/td&gt;
&lt;td&gt;Microsoft&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Compact&lt;br&gt;- Strong reasoning&lt;br&gt;- Code generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/mistral&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Efficient open model with top-tier performance&lt;/td&gt;
&lt;td&gt;Mistral AI&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Fast inference&lt;br&gt;- Top performance&lt;br&gt;- Open model&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/mistral-nemo&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Mistral tuned with NVIDIA NeMo for enterprise&lt;/td&gt;
&lt;td&gt;Mistral AI&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- NVIDIA NeMo-optimized&lt;br&gt;- Enterprise-grade&lt;br&gt;- Smooth ops&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/gemma3&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Google’s small but powerful model for chat and gen&lt;/td&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Compact yet strong&lt;br&gt;- Chat-friendly&lt;br&gt;- High-gen capabilities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/qwq&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Experimental Qwen variant&lt;/td&gt;
&lt;td&gt;Qwen&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Experimental&lt;br&gt;- Lightweight&lt;br&gt;- Fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/llama3.2&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Stable LLama 3 update for chat, Q&amp;amp;A, and coding&lt;/td&gt;
&lt;td&gt;Meta&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Coding-friendly&lt;br&gt;- Chat capable&lt;br&gt;- Reliable Q&amp;amp;A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ai/deepseek-r1-distill-llama&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Distilled LLaMA by DeepSeek for real-world tasks&lt;/td&gt;
&lt;td&gt;DeepSeek&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;- Distilled version&lt;br&gt;- Fast execution&lt;br&gt;- Real-world optimization&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Note: Many models don’t list detailed specs (params, quant, etc.) publicly. Visit the &lt;a href="https://hub.docker.com/u/ai" rel="noopener noreferrer"&gt;Docker AI catalog&lt;/a&gt; and individual repos for the latest info.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  🔁 Integration in Your Dev Lifecycle
&lt;/h1&gt;

&lt;p&gt;Here’s where it gets interesting for teams and orgs.&lt;/p&gt;

&lt;h2&gt;
  
  
  👷 For Devs
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Add model runner commands in makefiles, test scripts, or runbooks.
• Prototype AI features before wiring them into your full app.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  🔄 For CI/CD
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Spin up models in a container during testing.
• Validate AI model outputs in pull requests.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
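&lt;p&gt;As a hedged sketch of the CI idea above (the model name and expected reply are illustrative, and the pipeline runner is assumed to have a Model Runner-enabled Docker engine):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Pull the model once so the run step doesn't time out mid-test
docker model pull ai/smollm2
# Smoke-test: ask for a fixed reply and fail the build if it's missing
docker model run ai/smollm2 "Reply with exactly: OK" | grep -qi "ok" || exit 1
echo "model smoke test passed"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;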
&lt;h2&gt;
  
  
  💼 For Management
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Encourage safe local testing without extra infra cost.
• Help teams build trust in GenAI adoption with repeatable environments.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🤔 Wait, Can This Replace the Cloud?&lt;/p&gt;

&lt;p&gt;Not entirely. But it’s great for:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ Prototyping
&lt;/h3&gt;

&lt;h3&gt;
  
  
  ✅ Demos
&lt;/h3&gt;

&lt;h3&gt;
  
  
  ✅ Offline dev
&lt;/h3&gt;

&lt;h3&gt;
  
  
  ✅ Local evaluation
&lt;/h3&gt;

&lt;h3&gt;
  
  
  ✅ Privacy-sensitive tasks
&lt;/h3&gt;

&lt;p&gt;You’ll still use the cloud for production workloads — but Model Runner is an amazing stepping stone.&lt;/p&gt;

&lt;h1&gt;
  
  
  🧪 Real Use Case Example
&lt;/h1&gt;

&lt;p&gt;Imagine you’re building a customer support assistant. You could:&lt;br&gt;
    1.  Run smollm2 locally via Docker&lt;br&gt;
    2.  Feed it user queries&lt;br&gt;
    3.  Use tool calling to fetch FAQs from your API&lt;br&gt;
    4.  Iterate without pushing a line of code to prod&lt;/p&gt;
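&lt;p&gt;Step 2 might look like this against Model Runner's OpenAI-compatible endpoint — the URL and port here assume you enabled host-side TCP access per Docker's docs, so adapt them to your setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Send a user query to the locally running model (OpenAI-style chat API)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/smollm2",
    "messages": [{"role": "user", "content": "How do I reset my password?"}]
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;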

&lt;p&gt;Dev speed just leveled up. 🚀&lt;/p&gt;
&lt;h1&gt;
  
  
  🗣️ Wrapping Up
&lt;/h1&gt;

&lt;p&gt;Docker Model Runner is a game changer — not just for devs, but for anyone exploring GenAI.&lt;/p&gt;

&lt;p&gt;It’s fast.&lt;br&gt;
It’s local.&lt;br&gt;
It’s powerful.&lt;br&gt;
And best of all… it just works.&lt;/p&gt;

&lt;p&gt;So go ahead — pull a model, ask it something, and blow your own mind.&lt;/p&gt;
&lt;h2&gt;
  
  
  ⚠️ Finally .. a pinch of salt while using Docker Model Runner
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Issue&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Workaround&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No safeguard for oversized models&lt;/td&gt;
&lt;td&gt;Docker Model Runner doesn’t prevent running models too large for your system, which can cause severe slowdowns or make the system unresponsive.&lt;/td&gt;
&lt;td&gt;Make sure your machine has enough RAM/GPU before running large models.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;model run&lt;/code&gt; drops into chat if pull fails&lt;/td&gt;
&lt;td&gt;If a model pull fails (e.g., due to network/disk space), &lt;code&gt;docker model run&lt;/code&gt; still enters chat mode, though the model isn't loaded, leading to confusion.&lt;/td&gt;
&lt;td&gt;Manually retry &lt;code&gt;docker model pull&lt;/code&gt; to confirm successful download before running.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No digest support in Model CLI&lt;/td&gt;
&lt;td&gt;The CLI lacks reliable support for referencing models by digest.&lt;/td&gt;
&lt;td&gt;Use model names (e.g., &lt;code&gt;mistralai/mistral-7b-instruct&lt;/code&gt;) instead of digests for now.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Misleading pull progress after failure&lt;/td&gt;
&lt;td&gt;If an initial &lt;code&gt;docker model pull&lt;/code&gt; fails, a retry might misleadingly show "0 bytes downloaded" even though data is loading.&lt;/td&gt;
&lt;td&gt;Wait—despite incorrect progress, the pull usually completes successfully in the background.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h1&gt;
  
  
  👋 Bonus: Run the Hello GenAI App Locally (In Under 5 Minutes)
&lt;/h1&gt;

&lt;p&gt;If you’ve come this far, you’re probably itching to try a real-world app using Docker Model Runner. Good news: Docker has an awesome example project called hello-genai — and it’s the easiest way to see AI in action locally.&lt;/p&gt;

&lt;p&gt;Here’s how to set it up:&lt;/p&gt;
&lt;h1&gt;
  
  
  🧰 Prerequisites
&lt;/h1&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Docker Desktop with Model Runner enabled ✅
• Git installed (or download the ZIP manually)
• Terminal access
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  🪜 Step-by-Step Setup
&lt;/h1&gt;
&lt;h2&gt;
  
  
  1. Clone the Repo
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/docker/hello-genai.git
cd hello-genai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qjonhbdvlso7kr54czv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qjonhbdvlso7kr54czv.png" alt="Clone repo" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Pull the Required Model
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model pull ai/smollm2:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can swap in another supported model if you want to try one (like &lt;code&gt;ai/llama3.2&lt;/code&gt; or even &lt;code&gt;ai/deepseek-r1-distill-llama&lt;/code&gt;).&lt;/p&gt;
&lt;h3&gt;
  
  
  Some interesting commands
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model list
docker model status
docker model inspect &amp;lt;model name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaor83zbj24vvsa9jqrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaor83zbj24vvsa9jqrl.png" alt="cool commands" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  3. Start the App
&lt;/h2&gt;

&lt;p&gt;Just run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./run.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start the chat frontend and model backend containers in Python, Go, and Node.js, each on a different port.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkjjovr07hrfyjd9nxz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkjjovr07hrfyjd9nxz5.png" alt="Start script" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Containers created for this simple chat app
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm40vmoz619bmjqq33gg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm40vmoz619bmjqq33gg.png" alt="Docker Containers for GenAI Chat App" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Open in Your Browser
&lt;/h2&gt;

&lt;p&gt;Navigate to &lt;a href="http://localhost:8081" rel="noopener noreferrer"&gt;http://localhost:8081&lt;/a&gt; for the Python app and start chatting with your AI model right from the browser!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3vxhysfuix2fpm20hp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3vxhysfuix2fpm20hp5.png" alt="screenshot of the Hello GenAI UI" width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💡 What’s Going On Behind the Scenes?&lt;/p&gt;

&lt;p&gt;The Hello GenAI app connects to your locally running model (via the Docker Model Runner) from each app's web frontend. No cloud, no GPU setup — just local magic.&lt;/p&gt;

&lt;p&gt;This is a great sandbox to:&lt;br&gt;
    • Prototype your own AI app&lt;br&gt;
    • Customize the frontend&lt;br&gt;
    • Try different models&lt;/p&gt;
&lt;h1&gt;
  
  
  🔄 Want to Stop It?
&lt;/h1&gt;

&lt;p&gt;Simply hit Ctrl + C in the terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  🎯 Use Case Ideas With Hello GenAI
&lt;/h1&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Demo to your manager how quickly GenAI features can be spun up
• Test prompt flows before integrating into your real app
• Customize the UI and rebrand it for internal tools
• Hook it to a backend API for a tool-calling proof of concept
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  🏁 Wrapping This Up (For Real Now!)
&lt;/h1&gt;

&lt;p&gt;Docker Model Runner + Hello GenAI = Your AI sandbox on steroids.&lt;br&gt;
Now you’ve got the power to run, test, and innovate with open-source models without cloud costs or platform headaches.&lt;/p&gt;

&lt;p&gt;Want me to create a follow-up walkthrough where we customize Hello GenAI for tool-calling or turn it into a Slack bot? Drop a comment or hit me up!&lt;/p&gt;

&lt;h1&gt;
  
  
  🙋‍♂️ What’s Next?
&lt;/h1&gt;

&lt;p&gt;Have you tried Model Runner? Planning to use it in your product or workflow?&lt;/p&gt;
&lt;h3&gt;
  
  
  👉 Let me know in the comments, or share your experience!
&lt;/h3&gt;
&lt;h3&gt;
  
  
  👉 Love this content and want more like this? Vote below!
&lt;/h3&gt;
&lt;h3&gt;
  
  
  🙂 Yes! It counts a lot
&lt;/h3&gt;
&lt;h2&gt;
  
  
  📢 If You Loved This, Don’t Forget To:
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• ❤️ Like this post
• 🔄 Share it with your team
• 📬 Follow for more dev-friendly AI tips
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;Keywords: Docker Model Runner setup, Docker for AI models, MLOps with Docker, running AI models locally, AI tool calling with Docker, Docker Desktop model integration&lt;/p&gt;

</description>
      <category>docker</category>
      <category>mlops</category>
      <category>ai</category>
      <category>dockerdesktop</category>
    </item>
    <item>
      <title>🚀 Exploring Docker Ask Gordon: Your AI-Powered Dev Companion!</title>
      <dc:creator>Raveendiran RR</dc:creator>
      <pubDate>Sun, 16 Feb 2025 19:59:05 +0000</pubDate>
      <link>https://dev.to/raveendiran/exploring-docker-ask-gordon-your-ai-powered-dev-companion-2a3k</link>
      <guid>https://dev.to/raveendiran/exploring-docker-ask-gordon-your-ai-powered-dev-companion-2a3k</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0q5c0ym8r6jqmmjgoyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0q5c0ym8r6jqmmjgoyo.png" alt="Ai revolution with Docker" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As developers, we love automation, efficiency, and innovation. With Docker Ask Gordon, Docker has stepped into the AI-powered assistant space, making it easier than ever to interact with and optimize containerized workflows. Let’s dive into what it is, how it works, and how you can start using it today!&lt;/p&gt;


&lt;h3&gt;
  
  
  What is Docker and Why Use It?
&lt;/h3&gt;

&lt;p&gt;Docker is a platform that enables developers to build, ship, and run applications in containers, ensuring consistency across different environments. It simplifies deployment, enhances scalability, and boosts productivity.&lt;/p&gt;


&lt;h2&gt;
  
  
  Generative AI &amp;amp; Model Context Protocol: The Evolution of Intelligent Development
&lt;/h2&gt;

&lt;p&gt;Generative AI is revolutionizing how we interact with technology, from code generation to debugging. The Model Context Protocol (MCP) enables AI models to retain and utilize contextual understanding across different interactions, making AI assistants smarter and more efficient for developers.&lt;/p&gt;


&lt;h2&gt;
  
  
  Model Context Protocol: How It Powers Smart AI Assistants
&lt;/h2&gt;

&lt;p&gt;MCP allows AI models to understand project-specific requirements, retain knowledge across sessions, and provide accurate recommendations, making tools like Ask Gordon more intuitive and responsive.&lt;/p&gt;

&lt;p&gt;The MCP adds more context to your ask. It chooses the right MCP servers, and the tools available on those servers, to complete the task you give it.&lt;/p&gt;

&lt;p&gt;Imagine chatting with anything and getting things done: create a new file and write content into it by chatting with your client IDE, or query any DB and get insights without writing a single line of code. The possibilities are limitless. Learn more on &lt;a href="https://modelcontextprotocol.io/introduction" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegldp7qsivxwhufn2idz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegldp7qsivxwhufn2idz.png" alt="MCP Architecture" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Setting Up Docker Desktop
&lt;/h2&gt;

&lt;p&gt;Download Docker Desktop from Docker’s official site.&lt;/p&gt;

&lt;p&gt;Install it following platform-specific instructions.&lt;/p&gt;

&lt;p&gt;Launch Docker Desktop and sign in to your Docker Hub account.&lt;/p&gt;


&lt;h3&gt;
  
  
  Activating Beta Features in Docker and Docker AI
&lt;/h3&gt;

&lt;p&gt;Open Docker Desktop and navigate to Settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0f97sr0fs9wj1sdgtqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0f97sr0fs9wj1sdgtqk.png" alt="Docker setting menu" width="263" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the Features in Development tab, enable Beta Features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wlujslmvcrvtilohcti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wlujslmvcrvtilohcti.png" alt="Enable development features" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Restart Docker Desktop to apply changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbw3bfo5b9x7bvwckfad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbw3bfo5b9x7bvwckfad.png" alt="Accept Terms| Enable features" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w7xlg3u3tgg2d8ukcyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w7xlg3u3tgg2d8ukcyy.png" alt="restart Docker Desktop" width="447" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ask Gordon Activated &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcrt97fbzbxh8f78m3ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcrt97fbzbxh8f78m3ae.png" alt="Ask Gordon Activated" width="447" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now ask Ask Gordon any question, and with the help of Docker AI it answers just like ChatGPT. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu933gc2sohq0fb66ss3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzu933gc2sohq0fb66ss3.png" alt="Ask Gordon answering questions" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzdts2c0nv4l4ruk418v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzdts2c0nv4l4ruk418v.png" alt="Ask gordon command prompt" width="800" height="526"&gt;&lt;/a&gt;&lt;br&gt;
Notice the copy button, which lets you copy commands directly. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8txskaju65yeycubb6et.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8txskaju65yeycubb6et.png" alt="showing commands" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to get MCP working on Ask Gordon
&lt;/h2&gt;

&lt;p&gt;Here is how to use MCP with Docker Desktop and Gordon.&lt;/p&gt;

&lt;p&gt;First, create a YAML file called gordon-mcp.yml in your working&lt;br&gt;
directory, for example from the terminal in Docker Desktop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;gordon-mcp.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8ccr7xkxlkgp3ttuk5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8ccr7xkxlkgp3ttuk5a.png" alt="gordon file created" width="646" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the MCP servers you want to activate for Docker AI. In this example, we list only the filesystem and fetch (web scraping) servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  fetch:
    image: mcp/fetch
  fs:
    image: mcp/filesystem
    &lt;span class="nb"&gt;command&lt;/span&gt;:
      - /rootfs
    volumes:
      - .:/rootfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
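&lt;p&gt;If you want Gordon to reach other tools, you can list additional MCP servers in the same file. Here is an illustrative sketch (the mcp/time image name is an assumption — check the MCP catalog on Docker Hub for the servers that actually exist):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  fetch:
    image: mcp/fetch
  fs:
    image: mcp/filesystem
    command:
      - /rootfs
    volumes:
      - .:/rootfs
  # hypothetical extra server - verify the image exists before using it
  time:
    image: mcp/time
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each entry is a compose-style service, so options such as command and volumes work the same way they do in a regular Compose file.&lt;/p&gt;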



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nszsb32rcpjnebkant0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nszsb32rcpjnebkant0.png" alt="update the gordon file" width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now try a couple of prompts: the first asks the filesystem server to list the contents of the working directory, and the second asks the fetch server to retrieve a web page&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker ai list the contents of this file 
&lt;span class="nv"&gt;$ &lt;/span&gt;docker ai show me the content of https://docs.docker.com/desktop/features/gordon/mcp/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can run these either in the Docker Desktop terminal or in your system terminal; the output is the same.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1fnosibktn2x985do9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1fnosibktn2x985do9z.png" alt="get contents of website" width="670" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker creates the MCP server containers on demand when it executes MCP actions, and stops them once the work is done. &lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Use Cases for Ask Gordon
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Debugging containers: quickly diagnose and fix container-related issues.&lt;/li&gt;
&lt;li&gt;Optimizing Dockerfiles: get AI-driven suggestions to enhance performance.&lt;/li&gt;
&lt;li&gt;Understanding configurations: learn how to set up volumes, networking, and security best practices.&lt;/li&gt;
&lt;li&gt;CI/CD integration: apply best practices when building efficient pipelines.&lt;/li&gt;
&lt;/ul&gt;
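&lt;p&gt;Since Gordon takes free-form prompts, each of these use cases is just a plain-English question. The prompts below are illustrative examples of the kind of question you could ask, not fixed subcommands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker ai why does my container keep restarting
&lt;span class="nv"&gt;$ &lt;/span&gt;docker ai how can I make this Dockerfile smaller and faster to build
&lt;span class="nv"&gt;$ &lt;/span&gt;docker ai what volumes and networks does this compose file set up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;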

&lt;h2&gt;
  
  
  📚 References &amp;amp; Further Reading
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/desktop/use-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.docker.com/desktop/features/gordon/" rel="noopener noreferrer"&gt;Ask Gordon Overview &lt;/a&gt;&lt;br&gt;
&lt;a href="https://modelcontextprotocol.io/introduction" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker Ask Gordon is more than just an AI assistant—it’s a context-aware development companion. Try it out and supercharge your container workflows today! 🚀&lt;/p&gt;

&lt;p&gt;I'm excited to share this information with you. Please Like | Share | Comment | &lt;a href="https://www.linkedin.com/in/raveendiranrr/" rel="noopener noreferrer"&gt;follow me&lt;/a&gt; if this post has helped you.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>askgordon</category>
      <category>dockerdesktop</category>
      <category>mcpchatbot</category>
    </item>
  </channel>
</rss>
