<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gowsiya Syednoor Shek</title>
    <description>The latest articles on DEV Community by Gowsiya Syednoor Shek (@gowsiyashek).</description>
    <link>https://dev.to/gowsiyashek</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3131704%2F8f79194f-9eb9-493d-bb1a-33b371a4fb41.png</url>
      <title>DEV Community: Gowsiya Syednoor Shek</title>
      <link>https://dev.to/gowsiyashek</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gowsiyashek"/>
    <language>en</language>
    <item>
      <title>Set Up Your Company for Success with Docker (Part 5)</title>
      <dc:creator>Gowsiya Syednoor Shek</dc:creator>
      <pubDate>Sat, 14 Jun 2025 22:01:28 +0000</pubDate>
      <link>https://dev.to/gowsiyashek/set-up-your-company-for-success-with-docker-part-5-17a3</link>
      <guid>https://dev.to/gowsiyashek/set-up-your-company-for-success-with-docker-part-5-17a3</guid>
      <description>&lt;p&gt;Docker isn't just for developers on personal projects, it's a powerful platform that, when configured correctly, can accelerate collaboration and strengthen security at the organizational level. Here’s how to set your company up for long-term success with Docker:&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Enforce Sign-In for Docker Desktop
&lt;/h2&gt;

&lt;p&gt;By default, Docker Desktop can be launched without requiring sign-in. This means users might bypass organizational policies and lose access to subscription benefits. Enforcing sign-in ensures tighter control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Sign-in Prompt&lt;/em&gt;: Docker Desktop will block access unless the user signs in with an org-approved Docker ID.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Sign-out Behavior&lt;/em&gt;: Signed-out users are immediately blocked until they re-authenticate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enforcement Methods:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Windows&lt;/td&gt;
&lt;td&gt;Registry Key&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;macOS&lt;/td&gt;
&lt;td&gt;Configuration Profiles or .plist&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cross-platform&lt;/td&gt;
&lt;td&gt;&lt;code&gt;registry.json&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
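
For the cross-platform method, a minimal `registry.json` sketch might look like the following (the organization name `myorg` is a placeholder; the file's exact location varies per OS, so check Docker's documentation for the path on your platform):

```json
{
  "allowedOrgs": ["myorg"]
}
```

With this file in place, Docker Desktop only unlocks once the user signs in with a Docker ID that belongs to the listed organization.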

&lt;blockquote&gt;
&lt;p&gt;Note: Enforcing sign-in &lt;em&gt;does not&lt;/em&gt; affect CLI access unless &lt;em&gt;Single Sign-On (SSO)&lt;/em&gt; is also enforced.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  2. Enforce Single Sign-On (SSO) for Centralized Authentication
&lt;/h2&gt;

&lt;p&gt;Enforcing &lt;em&gt;SSO&lt;/em&gt; ensures that all users authenticate through your company’s identity provider (e.g., Okta, Azure AD). Here is what this enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized access policies (MFA, password rotation)&lt;/li&gt;
&lt;li&gt;Automatic provisioning/de-provisioning (via SCIM)&lt;/li&gt;
&lt;li&gt;Streamlined onboarding and offboarding&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Enforcement Level&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Benefits&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Sign-in Only&lt;/td&gt;
&lt;td&gt;Requires Docker Hub account sign-in&lt;/td&gt;
&lt;td&gt;Enables visibility &amp;amp; subscription usage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSO Only&lt;/td&gt;
&lt;td&gt;Forces sign-in via SSO&lt;/td&gt;
&lt;td&gt;Aligns with enterprise identity governance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Both&lt;/td&gt;
&lt;td&gt;Strongest option&lt;/td&gt;
&lt;td&gt;Combines access control, policy enforcement, &amp;amp; visibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Neither&lt;/td&gt;
&lt;td&gt;Least secure&lt;/td&gt;
&lt;td&gt;Not recommended for orgs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  3. Create Organizations and Teams in Docker Hub
&lt;/h2&gt;

&lt;p&gt;Using &lt;em&gt;Organizations and Teams&lt;/em&gt; in Docker Hub, you can group users and define roles based on job functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Team Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Team&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;frontendeng&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Front-end developers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;backendeng&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Back-end developers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;qaeng&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Quality Assurance testers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;devopseng&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;DevOps / Infra team&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can assign Docker IDs to these teams and manage them via the &lt;em&gt;Organizations &amp;gt; Teams&lt;/em&gt; tab in Docker Hub.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Set Repository-Level Access Permissions
&lt;/h2&gt;

&lt;p&gt;After your teams are set up, configure fine-grained permissions for each Docker repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Access Table:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Repository&lt;/th&gt;
&lt;th&gt;frontendeng&lt;/th&gt;
&lt;th&gt;backendeng&lt;/th&gt;
&lt;th&gt;qaeng&lt;/th&gt;
&lt;th&gt;devopseng&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ui-build&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Admin&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Admin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;api-build&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Admin&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Admin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ui-release&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Read &amp;amp; Write&lt;/td&gt;
&lt;td&gt;Admin&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;api-release&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Read-only&lt;/td&gt;
&lt;td&gt;Read &amp;amp; Write&lt;/td&gt;
&lt;td&gt;Admin&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Permissions are configured through &lt;em&gt;Organizations &amp;gt; Teams &amp;gt; Permissions&lt;/em&gt; in Docker Hub.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Secure Image Delivery with Docker Content Trust &amp;amp; Scout
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Enable Docker Content Trust (DCT)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DCT allows publishers to &lt;em&gt;digitally sign images&lt;/em&gt; and enables consumers to &lt;em&gt;verify image signatures&lt;/em&gt; before pulling or running them. This prevents tampering or unverified images from being used in production.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trust is associated &lt;em&gt;per image tag&lt;/em&gt; (e.g., &lt;code&gt;myimage:latest&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Signed tags can coexist with unsigned ones under the same repo
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable DCT&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DOCKER_CONTENT_TRUST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1

&lt;span class="c"&gt;# Generate key &amp;amp; sign image&lt;/span&gt;
docker trust key generate signer-name
docker trust sign registry.example.com/org/image:tag

&lt;span class="c"&gt;# Inspect trust&lt;/span&gt;
docker trust inspect &lt;span class="nt"&gt;--pretty&lt;/span&gt; registry.example.com/org/image:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Back up your &lt;em&gt;root key&lt;/em&gt; securely - it cannot be recovered if lost.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Activate Docker Scout for CVE Analysis and SBOMs
&lt;/h3&gt;

&lt;p&gt;Docker Scout gives you real-time insights into the security status of your images:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically scans images on Docker Hub (or other integrated registries)&lt;/li&gt;
&lt;li&gt;Tracks vulnerabilities (CVEs) and severity&lt;/li&gt;
&lt;li&gt;Uses SBOM (Software Bill of Materials) for in-depth package-level analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enable &amp;amp; Analyze:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build with provenance + SBOM&lt;/span&gt;
docker build &lt;span class="nt"&gt;--push&lt;/span&gt; &lt;span class="nt"&gt;--tag&lt;/span&gt; myorg/myimage:tag &lt;span class="nt"&gt;--provenance&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nt"&gt;--sbom&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;

&lt;span class="c"&gt;# Analyze locally&lt;/span&gt;
docker scout quickview myorg/myimage:tag
docker scout cves &lt;span class="nt"&gt;--only-severity&lt;/span&gt; critical myorg/myimage:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;CLI Tools:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker scout quickview&lt;/code&gt;: Summary &amp;amp; base image comparison&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker scout cves&lt;/code&gt;: CVEs filtered by severity/type&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker scout compare&lt;/code&gt;: Compare two image versions for risk delta&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Scout results update automatically as new CVEs emerge - no need to re-push.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;By combining account enforcement, team-based access, image signing, and continuous vulnerability scanning, Docker can be transformed from a simple container platform into a &lt;em&gt;secure and scalable foundation&lt;/em&gt; for modern DevOps workflows.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>coding</category>
    </item>
    <item>
      <title>GenAI App with Prompt Templates and Role Switching (Part 4)</title>
      <dc:creator>Gowsiya Syednoor Shek</dc:creator>
      <pubDate>Mon, 02 Jun 2025 01:19:57 +0000</pubDate>
      <link>https://dev.to/gowsiyashek/genai-app-with-prompt-templates-and-role-switching-part-4-2gc3</link>
      <guid>https://dev.to/gowsiyashek/genai-app-with-prompt-templates-and-role-switching-part-4-2gc3</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/gowsiyashek/build-a-local-genai-api-with-docker-model-runner-and-fastapi-part-3-29ka"&gt;Part 3&lt;/a&gt;, we built a FastAPI backend that could talk to a local LLM using Docker Model Runner. In this &lt;strong&gt;Part 4&lt;/strong&gt;, we are moving forward by adding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Templates&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Role Switching&lt;/strong&gt; (system/user roles)
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Simple HTML UI&lt;/strong&gt; - all running on your machine&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;Prompt engineering isn't just about text; it's about &lt;strong&gt;structure&lt;/strong&gt;. This app lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try different system roles like &lt;code&gt;"You are a data science tutor"&lt;/code&gt; or &lt;code&gt;"You are a sarcastic assistant"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Apply templates around your prompt&lt;/li&gt;
&lt;li&gt;Experiment with how prompt structure changes output&lt;/li&gt;
&lt;/ul&gt;
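
The role-switching idea boils down to how the chat `messages` array is assembled: the system message sets the persona, the user message carries the prompt. Here is a minimal sketch (the role names and wording are illustrative, not the app's exact code):

```python
# Map of selectable system roles to their persona instructions.
ROLES = {
    "tutor": "You are a data science tutor.",
    "sarcastic": "You are a sarcastic assistant.",
}

def build_messages(role_key: str, prompt: str) -> list[dict]:
    """Wrap a user prompt with the selected system role."""
    return [
        {"role": "system", "content": ROLES[role_key]},
        {"role": "user", "content": prompt},
    ]

# Example: build_messages("tutor", "Explain overfitting in one paragraph")
```

Swapping `role_key` is all it takes to change the model's persona without touching the user's prompt.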




&lt;h3&gt;
  
  
  Project Layout
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-llm-fastapi-app/
├── app/
│   ├── main.py         
│   ├── templates/
│   │   └── index.html
├── Dockerfile
├── docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  How to Run
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Pull &amp;amp; Run the Model (if not already)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker model pull ai/mistral
docker model run ai/mistral
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure &lt;strong&gt;TCP host access&lt;/strong&gt; is enabled in Docker Desktop’s &lt;strong&gt;Beta &amp;gt; Enable Docker Model Runner&lt;/strong&gt; section.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Start the Backend API
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open in your browser: &lt;code&gt;http://localhost:8000&lt;/code&gt;&lt;br&gt;
Full code is available here &lt;a href="https://github.com/gowsiyabs/docker-llm-fastapi-app" rel="noopener noreferrer"&gt;part4-code&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  UI Preview
&lt;/h3&gt;

&lt;p&gt;The UI has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A dropdown for system roles&lt;/li&gt;
&lt;li&gt;A textbox for prompts&lt;/li&gt;
&lt;li&gt;A Generate button to generate responses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All results appear below your prompt, on the same page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyzbb3g6nwktepys8bzm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyzbb3g6nwktepys8bzm.png" alt="Here is the page" width="800" height="708"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Why Is It So Slow?
&lt;/h3&gt;

&lt;p&gt;If your prompts are consistently taking more than a minute, it’s likely due to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Model Size: Even relatively small LLMs like Mistral can take time to load into memory and start processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cold Starts: Each prompt might be triggering a cold start if the model isn't staying warm between requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;System Resources: Docker Model Runner currently doesn’t optimize for resource constraints, so if your machine lacks sufficient CPU/RAM or an NVIDIA GPU, performance may suffer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Suggestions to improve speed:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Enable host-side TCP support.&lt;/li&gt;
&lt;li&gt;Keep the model warm by sending periodic “ping” prompts every few minutes.&lt;/li&gt;
&lt;li&gt;Try a smaller or quantized model (e.g., ai/smollm2 instead of ai/mistral).&lt;/li&gt;
&lt;li&gt;Upgrade to a system with more memory or GPU support for Docker Desktop.&lt;/li&gt;
&lt;/ul&gt;
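
The keep-warm suggestion can be sketched as a small standard-library script (the endpoint and model name follow earlier parts of this series; the 5-minute interval is arbitrary):

```python
import json
import time
import urllib.request

# Docker Model Runner's host-side TCP endpoint, as used in Parts 2-4.
PING_URL = "http://localhost:12434/engines/v1/chat/completions"

def build_ping() -> dict:
    """A minimal one-word prompt used purely to keep the model resident."""
    return {"model": "ai/mistral",
            "messages": [{"role": "user", "content": "ping"}]}

def keep_warm(interval_s: int = 300) -> None:
    """Re-send a tiny prompt every few minutes so the model stays loaded."""
    while True:
        req = urllib.request.Request(
            PING_URL,
            data=json.dumps(build_ping()).encode(),
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=120).read()
        except OSError:
            pass  # model not up yet; try again next cycle
        time.sleep(interval_s)

# keep_warm()  # runs forever; start it in a background thread or separate process
```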




&lt;h3&gt;
  
  
  What’s Next
&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://dev.to/gowsiyashek/set-up-your-company-for-success-with-docker-part-5-17a3"&gt;Part 5&lt;/a&gt;, I’ll wrap up this mini-series with a broader, practical guide: "How to Set Up Your Company for Success with Docker"&lt;/p&gt;

&lt;p&gt;This won’t be just about local demos; it will cover topics like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Organizing Your Teams with Docker Organizations&lt;/li&gt;
&lt;li&gt;Enforcing Sign-In and Enabling SSO&lt;/li&gt;
&lt;li&gt;Standardizing Docker Desktop Configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After that, I will explore standalone Docker topics in more depth. Stay tuned!&lt;/p&gt;




</description>
    </item>
    <item>
      <title>Build a Local GenAI API with Docker Model Runner and FastAPI (Part 3)</title>
      <dc:creator>Gowsiya Syednoor Shek</dc:creator>
      <pubDate>Sun, 25 May 2025 18:14:06 +0000</pubDate>
      <link>https://dev.to/gowsiyashek/build-a-local-genai-api-with-docker-model-runner-and-fastapi-part-3-29ka</link>
      <guid>https://dev.to/gowsiyashek/build-a-local-genai-api-with-docker-model-runner-and-fastapi-part-3-29ka</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/gowsiyashek/run-llms-locally-with-docker-model-runner-a-real-world-developer-guide-part-2-387c"&gt;Part 2&lt;/a&gt;, I ran an LLM locally using Docker Model Runner and connected to it through a Python script. Now in &lt;strong&gt;Part 3&lt;/strong&gt;, we are wrapping that logic inside a &lt;strong&gt;FastAPI REST API&lt;/strong&gt; - giving us a real, local GenAI backend we can use from Postman, web apps, or CLI tools.&lt;/p&gt;

&lt;p&gt;Let’s dive in.&lt;/p&gt;




&lt;h3&gt;
  
  
  Goal
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Build a FastAPI server that sends prompts to a locally running LLM (ai/mistral)&lt;/li&gt;
&lt;li&gt;Expose a &lt;code&gt;/generate&lt;/code&gt; endpoint&lt;/li&gt;
&lt;li&gt;Run the API container and Docker Model Runner &lt;strong&gt;side-by-side&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  What We Built
&lt;/h3&gt;

&lt;p&gt;A REST API (running in Docker) that talks to Docker Model Runner via an &lt;strong&gt;OpenAI-compatible&lt;/strong&gt; endpoint. You send a prompt like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="err"&gt;Explain&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;what&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;model&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;runner&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;points&lt;/span&gt;&lt;span class="s2"&gt;"
}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…and it responds with an AI-generated answer from a model running 100% on your machine.&lt;/p&gt;




&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-llm-fastapi-app/
├── app/
│   └── main.py         ← FastAPI logic
├── Dockerfile          ← API container
├── docker-compose.yml  ← Orchestration
├── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check out the code here &lt;a href="https://github.com/gowsiyabs/docker-llm-fastapi-app" rel="noopener noreferrer"&gt;part3-code&lt;/a&gt;&lt;/p&gt;
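
The core of the `/generate` endpoint is a single forwarding call to Model Runner's OpenAI-compatible API. Here is a standard-library sketch of that logic (the real app wraps it in FastAPI; see the linked repo for the exact code):

```python
import json
import urllib.request

# Docker Model Runner's host-side TCP endpoint (requires TCP support enabled).
MODEL_URL = "http://localhost:12434/engines/v1/chat/completions"

def build_payload(prompt: str, model: str = "ai/mistral") -> dict:
    """Build an OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def generate(prompt: str) -> str:
    """Send the prompt to the local model and return its reply text."""
    req = urllib.request.Request(
        MODEL_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (with the model running): print(generate("What is MLOps in simple terms?"))
```

In the FastAPI app, `generate` sits behind a POST route, so the container only needs to reach the Model Runner endpoint on the host.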




&lt;h3&gt;
  
  
  How to Run It
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Pull and Start the Model
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker model pull ai/mistral
docker model run ai/mistral
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;If you have already pulled the model in the previous tutorial, running pull again is not necessary.&lt;br&gt;
You may see: &lt;code&gt;Interactive chat mode started. Type '/bye' to exit.&lt;/code&gt;&lt;br&gt;&lt;br&gt;
That’s okay — the API is still active behind the scenes if TCP access is enabled.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  2. Start the FastAPI Server
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Uvicorn running on http://0.0.0.0:8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Call the Endpoint
&lt;/h3&gt;

&lt;p&gt;Send a request from Postman or curl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;POST http://localhost:8000/generate
Content-Type: application/json

{
  "prompt": "What is MLOps in simple terms?"
}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"response"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MLOps, short for Machine Learning Operations, is a practice for collaboration and..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fo14f196j5zmbsujlou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fo14f196j5zmbsujlou.png" alt="From Postman" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Things I Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Interactive Mode Still Enables API
&lt;/h3&gt;

&lt;p&gt;Even though Docker says:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Interactive chat mode started. Type '/bye' to exit.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;…the HTTP API is still available on &lt;code&gt;localhost:12434&lt;/code&gt;. As long as TCP support is enabled in Docker Desktop, it works fine.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. First Call Is Slow
&lt;/h3&gt;

&lt;p&gt;The first request took ~2 minutes. Why?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The model is loaded into memory&lt;/li&gt;
&lt;li&gt;Runtime warmup takes time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But after that, subsequent prompts are a little faster.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://dev.to/gowsiyashek/genai-app-with-prompt-templates-and-role-switching-part-4-2gc3"&gt;Part 4&lt;/a&gt;, I plan to add Prompt Templates and Role Options, which bring a practical layer of prompt engineering to your GenAI app.&lt;/p&gt;

&lt;p&gt;Stay tuned!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>genai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Run LLMs Locally with Docker Model Runner – A Real-World Developer Guide (Part 2)</title>
      <dc:creator>Gowsiya Syednoor Shek</dc:creator>
      <pubDate>Sat, 17 May 2025 19:21:10 +0000</pubDate>
      <link>https://dev.to/gowsiyashek/run-llms-locally-with-docker-model-runner-a-real-world-developer-guide-part-2-387c</link>
      <guid>https://dev.to/gowsiyashek/run-llms-locally-with-docker-model-runner-a-real-world-developer-guide-part-2-387c</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/gowsiyashek/from-save-to-serve-boost-llm-dev-speed-with-docker-compose-watch-3ig3"&gt;Part 1&lt;/a&gt; of this series, I explored how Docker Compose Watch helped accelerate development when working with Python-based AI apps. In this second part, I dive into Docker Model Runner – a powerful new feature that lets developers run large language models (LLMs) locally using OpenAI-compatible APIs, all powered by Docker.&lt;/p&gt;

&lt;p&gt;But getting it to work wasn't as plug-and-play as I had hoped. So this tutorial is both a how-to and a real-world troubleshooting log for anyone trying to follow the same path.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is Docker Model Runner?
&lt;/h3&gt;

&lt;p&gt;Docker Model Runner is a beta feature (from Docker Desktop 4.40+) that allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull open-source LLMs from Docker Hub or Hugging Face&lt;/li&gt;
&lt;li&gt;Run them locally on your machine&lt;/li&gt;
&lt;li&gt;Interact with them via OpenAI-compatible endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It removes the need to use separate tools like Ollama or Hugging Face Transformers manually. And best of all: it plugs into Docker Desktop.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Tried to Build
&lt;/h3&gt;

&lt;p&gt;I wanted a simple Python app that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sends a prompt to a local LLM using the /chat/completions API&lt;/li&gt;
&lt;li&gt;Receives a response, all without touching OpenAI's cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup Troubles I Faced
&lt;/h3&gt;

&lt;p&gt;Here are the real-world challenges I encountered:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Docker Desktop Version Was Too Old
&lt;/h4&gt;

&lt;p&gt;At first, I was running Docker Desktop 4.39. This version showed experimental features like "Docker AI" and "Wasm," but didn't expose Docker Model Runner in the UI.&lt;/p&gt;

&lt;p&gt;🔧 Fix: I upgraded to Docker Desktop 4.41, which finally showed "Enable Docker Model Runner" under the Beta Features tab.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Pulling the Model Fails Unless You Use the Right Name
&lt;/h4&gt;

&lt;p&gt;Running this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker model pull mistral&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;resulted in:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;401 Unauthorized&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;🔧 Fix: I checked Docker’s official documentation and found that model names are namespaced. I changed it to:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker model pull ai/mistral&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;✅ This worked instantly.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. TCP Port Refused Until I Enabled Host-Side Access
&lt;/h4&gt;

&lt;p&gt;My Python script kept failing with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ConnectionRefusedError: [WinError 10061] No connection could be made...&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Even though the model was running in interactive mode.&lt;/p&gt;

&lt;p&gt;🔧 Fix: I opened Docker Desktop → Features in Development → Enable Docker Model Runner&lt;br&gt;
Then I scrolled down to find the checkbox: Enable host-side TCP support&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbv73v6er0nkjs5cbe45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbv73v6er0nkjs5cbe45.png" alt="Docker Destop Settings" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once enabled, the model ran with the HTTP API exposed on &lt;code&gt;http://localhost:12434&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Working Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Pull and Run the Model&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;docker model pull ai/mistral&lt;br&gt;
docker model run ai/mistral&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This launches the model and exposes an OpenAI-compatible endpoint.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Test via Python
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

API_URL = "http://localhost:12434/engines/v1/chat/completions"
payload = {
    "model": "ai/mistral",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why is containerization important for deploying AI models?"}
    ]
}
headers = {"Content-Type": "application/json"}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json()["choices"][0]["message"]["content"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;: You’ll receive a well-formed response from the model — all running locally on your machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Docker Model Runner is incredibly promising — especially for developers who want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build apps using LLMs without relying on cloud APIs&lt;/li&gt;
&lt;li&gt;Save costs and protect sensitive data&lt;/li&gt;
&lt;li&gt;Learn and prototype GenAI apps offline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But like many beta features, it takes some experimentation. If you're running into issues, check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Desktop version (must be 4.40+)&lt;/li&gt;
&lt;li&gt;Correct model name (use the ai/ namespace)&lt;/li&gt;
&lt;li&gt;TCP support enabled in the Model Runner settings&lt;/li&gt;
&lt;/ul&gt;
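
A quick way to check the third item, whether host-side TCP support is actually listening, is a small socket probe (port 12434 as used throughout this guide):

```python
import socket

def model_runner_reachable(host: str = "localhost", port: int = 12434) -> bool:
    """Return True if something is listening on the Model Runner port."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

# Example: model_runner_reachable()  # False until TCP support is enabled
```

If this returns False while the model is running, the "Enable host-side TCP support" checkbox is the first thing to verify.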

&lt;p&gt;In the next article, I’ll explore integrating Docker Model Runner with a front-end GenAI UI.&lt;/p&gt;

&lt;p&gt;👉 Stay tuned for &lt;a href="https://dev.to/gowsiyashek/build-a-local-genai-api-with-docker-model-runner-and-fastapi-part-3-29ka"&gt;Part 3&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>llm</category>
      <category>ai</category>
    </item>
    <item>
      <title>From Save to Serve: Boost LLM Dev Speed with Docker Compose Watch</title>
      <dc:creator>Gowsiya Syednoor Shek</dc:creator>
      <pubDate>Sat, 10 May 2025 22:15:41 +0000</pubDate>
      <link>https://dev.to/gowsiyashek/from-save-to-serve-boost-llm-dev-speed-with-docker-compose-watch-3ig3</link>
      <guid>https://dev.to/gowsiyashek/from-save-to-serve-boost-llm-dev-speed-with-docker-compose-watch-3ig3</guid>
      <description>&lt;h2&gt;
  
  
  Next-Gen Docker for AI Developers: Series Intro
&lt;/h2&gt;

&lt;p&gt;Welcome to my 6-part series exploring how &lt;strong&gt;Docker's latest features&lt;/strong&gt; streamline AI and LLM-based application development.&lt;/p&gt;

&lt;p&gt;Each part will show how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accelerate development&lt;/li&gt;
&lt;li&gt;Reduce manual steps&lt;/li&gt;
&lt;li&gt;Improve containerized workflows for modern AI use cases&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Part 1: Why Docker Compose Watch Is a Game Changer
&lt;/h3&gt;

&lt;p&gt;As an AI developer working with LLM apps (LangChain, Flask APIs, etc.), I often deal with repetitive edit-rebuild-restart cycles. This gets especially frustrating while:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tuning prompts&lt;/li&gt;
&lt;li&gt;Testing temperature settings&lt;/li&gt;
&lt;li&gt;Tweaking model configs&lt;/li&gt;
&lt;li&gt;Updating dependencies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even small changes can become time-consuming in a Dockerized workflow.&lt;/p&gt;




&lt;h3&gt;
  
  
  What Docker Compose Watch Promises
&lt;/h3&gt;

&lt;p&gt;Docker Compose Watch solves this by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Watching files or directories&lt;/strong&gt; for changes&lt;/li&gt;
&lt;li&gt;Syncing changed files live into the container (&lt;code&gt;sync&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rebuilding&lt;/strong&gt; the image when key files change (&lt;code&gt;rebuild&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Supporting &lt;code&gt;sync+restart&lt;/code&gt; for fast config reloading&lt;/li&gt;
&lt;li&gt;Letting you stay in your flow: no manual restarts or rebuilds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It sounded perfect for speeding up my LangChain + Flask API dev loop.&lt;/p&gt;




&lt;h3&gt;
  
  
  My Setup
&lt;/h3&gt;

&lt;p&gt;I containerized a Flask API using LangChain to answer LLM queries.&lt;br&gt;&lt;br&gt;
Instead of manually rebuilding every time I changed a prompt or package, I enabled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;sync&lt;/code&gt; for Python code&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rebuild&lt;/code&gt; for &lt;code&gt;requirements.txt&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything was orchestrated with &lt;code&gt;docker-compose.override.yml&lt;/code&gt; using the new &lt;code&gt;develop.watch&lt;/code&gt; feature.&lt;/p&gt;
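&lt;p&gt;A minimal sketch of what such an override file can look like (the service name, port, and paths here are hypothetical placeholders; the actual file is in the repo below):&lt;/p&gt;

```yaml
# docker-compose.override.yml (sketch; service name and paths are hypothetical)
services:
  api:
    build: .
    ports:
      - "5000:5000"
    develop:
      watch:
        # Copy edited Python source into the running container on save
        - action: sync
          path: ./app
          target: /app
        # Rebuild the image whenever dependencies change
        - action: rebuild
          path: requirements.txt
```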

&lt;p&gt;🔗 GitHub Repo:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/gowsiyabs/docker-llm-compose-watch" rel="noopener noreferrer"&gt;https://github.com/gowsiyabs/docker-llm-compose-watch&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My Real-World Findings
&lt;/h2&gt;

&lt;p&gt;Here’s what I observed after enabling Compose Watch for my LangChain + Flask setup:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command used&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;docker compose -f docker-compose.yml -f docker-compose.override.yml up --build&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Python code changes synced instantly&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Once I saved the file, changes reflected in the running container without needing to restart. This was a big win for fast iteration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rebuild didn't work at first for &lt;code&gt;requirements.txt&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Initially, even after modifying &lt;code&gt;requirements.txt&lt;/code&gt;, no rebuild was triggered. The console just kept showing the logs from the first run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix: Enable watch support explicitly in your terminal&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
On Windows, I had to explicitly enable Docker’s watch support when using &lt;em&gt;Git Bash&lt;/em&gt; or &lt;em&gt;PyCharm's terminal&lt;/em&gt;. Once enabled, rebuilds started working as expected.&lt;/p&gt;

&lt;p&gt;📌 Tip: Use a terminal that supports Docker watch triggers or set up your IDE to allow file system notifications to propagate correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Even though &lt;code&gt;docker compose watch&lt;/code&gt; is powerful, it may require terminal or IDE-level configuration depending on your OS and environment.&lt;/p&gt;
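&lt;p&gt;For reference, watch mode also has to be started explicitly. Depending on your Compose version (flags vary across releases, so check &lt;code&gt;docker compose version&lt;/code&gt;), one of these invocations enables it:&lt;/p&gt;

```shell
# Option 1: start the services and the file watcher together (newer Compose versions)
docker compose -f docker-compose.yml -f docker-compose.override.yml up --build --watch

# Option 2: run the watcher as its own command alongside running services
docker compose -f docker-compose.yml -f docker-compose.override.yml watch
```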




&lt;h3&gt;
  
  
  Why This Matters for AI Developers
&lt;/h3&gt;

&lt;p&gt;Compose Watch is ideal if you are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Experimenting rapidly with LLM APIs&lt;/li&gt;
&lt;li&gt;Testing prompts, chains, or model parameters&lt;/li&gt;
&lt;li&gt;Tired of typing &lt;code&gt;docker compose down &amp;amp;&amp;amp; up&lt;/code&gt; every few minutes&lt;/li&gt;
&lt;li&gt;Looking for a fast inner loop without sacrificing containerization&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Docker Compose Watch takes us closer to &lt;strong&gt;hot-reload for containers&lt;/strong&gt;, especially for interpreted languages like Python.&lt;/p&gt;

&lt;p&gt;While rebuild actions still feel a bit rough around the edges, &lt;strong&gt;sync-based workflows are already a productivity booster&lt;/strong&gt; for LLM, GenAI, and Flask-based AI apps.&lt;/p&gt;




&lt;h3&gt;
  
  
  🔜 Next in the Series
&lt;/h3&gt;

&lt;p&gt;In Part 2, I explore &lt;strong&gt;Docker Model Runner&lt;/strong&gt;, a new way to run LLMs locally with Docker, without relying on cloud APIs or billing. It's available &lt;a href="https://dev.to/gowsiyashek/run-llms-locally-with-docker-model-runner-a-real-world-developer-guide-part-2-387c"&gt;here&lt;/a&gt; now!&lt;/p&gt;

&lt;p&gt;Follow me &lt;a href="https://dev.to/gowsiyashek"&gt;here&lt;/a&gt; on Dev.to to stay updated!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
