<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Urvil Joshi</title>
    <description>The latest articles on DEV Community by Urvil Joshi (@urvvil).</description>
    <link>https://dev.to/urvvil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3508528%2F261bdfdb-5c6b-4857-a9d9-ca8a09b51641.jpg</url>
      <title>DEV Community: Urvil Joshi</title>
      <link>https://dev.to/urvvil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/urvvil"/>
    <language>en</language>
    <item>
      <title>Docker Model Runner: Run AI Models Locally Within Your Docker Ecosystem</title>
      <dc:creator>Urvil Joshi</dc:creator>
      <pubDate>Tue, 07 Oct 2025 13:10:57 +0000</pubDate>
      <link>https://dev.to/urvvil/docker-model-runner-run-ai-models-locally-within-your-docker-ecosystem-n9h</link>
      <guid>https://dev.to/urvvil/docker-model-runner-run-ai-models-locally-within-your-docker-ecosystem-n9h</guid>
      <description>&lt;h1&gt;
  
  
  Docker Model Runner (DMR)
&lt;/h1&gt;

&lt;p&gt;Docker Model Runner (DMR) officially reached General Availability on September 18, 2025, after a beta phase that began in April. This powerful tool enables developers to pull, run, and manage AI models locally within the Docker ecosystem, bringing the convenience of containerization to machine learning workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker Model Runner?
&lt;/h2&gt;

&lt;p&gt;Docker Model Runner allows you to run Large Language Models (LLMs) directly on your local machine while leveraging Docker's robust ecosystem. It combines the best features of local AI inference with Docker's familiar tooling and workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Local LLM Execution
&lt;/h3&gt;

&lt;p&gt;Running LLMs locally provides several critical advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Data Security&lt;/strong&gt;: Your data remains entirely on your local machine, never leaving your control or being sent to external services. This is particularly important for sensitive or proprietary information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accelerated Development Workflows&lt;/strong&gt;: Developers can iterate faster by running AI models alongside their applications without network latency or API rate limits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Seamless Integration&lt;/strong&gt;: If you're already using Docker Compose for your development environment, you can easily add AI models to your stack. When you spin up your containers, your LLM will launch simultaneously, creating a fully integrated local development environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
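&lt;p&gt;For instance, on a recent Docker Compose release that supports the top-level &lt;code&gt;models&lt;/code&gt; element, a stack sketch along these lines (the service layout and model tag are illustrative, not from this article) wires a model into your environment:&lt;/p&gt;

```yaml
# Sketch: attach a Model Runner model to a service.
# Service name and model tag are illustrative.
services:
  app:
    build: .
    models:
      - llm   # connection details are exposed to the service as env vars
models:
  llm:
    model: ai/llama3.2:1b-instruct-q4_K_M
```

&lt;p&gt;With this in place, &lt;code&gt;docker compose up&lt;/code&gt; starts the model alongside the rest of the stack.&lt;/p&gt;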

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi9rs1d97pfmcu8kdg5j.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi9rs1d97pfmcu8kdg5j.webp" alt=" " width="539" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. OpenAI-Compatible APIs
&lt;/h3&gt;

&lt;p&gt;Docker Model Runner provides OpenAI-compatible API endpoints, making integration straightforward. Many applications already use OpenAI's API format, which means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No code changes required in existing applications&lt;/li&gt;
&lt;li&gt;Client applications can switch seamlessly between cloud and local models&lt;/li&gt;
&lt;li&gt;Response formats remain consistent with OpenAI standards&lt;/li&gt;
&lt;li&gt;Your existing parsing logic continues to work without modification&lt;/li&gt;
&lt;/ul&gt;
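&lt;p&gt;As a minimal sketch of what this compatibility buys you, the request below is built with Python's standard library only; the base URL and port are assumptions, so substitute whatever host TCP port you enabled in Docker Desktop:&lt;/p&gt;

```python
import json
from urllib import request

def build_chat_request(model: str, prompt: str,
                       base_url: str = "http://localhost:12434/v1") -> request.Request:
    """Build an OpenAI-style chat completion request.

    The base_url is an assumption -- point it at whatever host TCP
    port you configured for Docker Model Runner.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("ai/llama3.2:1b-instruct-q4_K_M", "What is Docker?")
# To actually send it (requires a running Model Runner):
# body = json.loads(request.urlopen(req).read())
```

&lt;p&gt;Because the payload shape is standard OpenAI chat-completions JSON, the same builder works unchanged against the cloud API.&lt;/p&gt;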

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw9mo6c9btz02bnfdbid.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw9mo6c9btz02bnfdbid.webp" alt=" " width="720" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Integrated Inference Engine
&lt;/h3&gt;

&lt;p&gt;The architecture is designed for optimal performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Models run on your host machine rather than inside Docker containers, maximizing performance&lt;/li&gt;
&lt;li&gt;Utilizes Llama.cpp inference server for efficient model execution&lt;/li&gt;
&lt;li&gt;Automatic NVIDIA GPU support when available&lt;/li&gt;
&lt;li&gt;Combines Docker's ecosystem management capabilities with Ollama-like performance&lt;/li&gt;
&lt;li&gt;Provides Docker commands for pulling, caching, managing, and running models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7bxg9700jq5ujzoic00.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7bxg9700jq5ujzoic00.webp" alt=" " width="657" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. OCI Artifact Distribution
&lt;/h3&gt;

&lt;p&gt;Models are packaged and distributed as Open Container Initiative (OCI) artifacts, the same standardized format used for Docker images. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Models can be pushed to any OCI-compatible registry&lt;/li&gt;
&lt;li&gt;Standardized packaging ensures consistency and portability&lt;/li&gt;
&lt;li&gt;Most models are distributed in GGUF (GPT-Generated Unified Format)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Models in GGUF are typically quantized, which shrinks them enough to run on standard hardware, including CPU-only systems. This makes the format ideal for local deployments where computational resources are limited.&lt;/p&gt;
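&lt;p&gt;A back-of-the-envelope sketch (illustrative numbers, not benchmarks) shows why quantization matters: download size is roughly parameter count times bits per weight.&lt;/p&gt;

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough model size: parameters * bits per weight, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# A 1B-parameter model at 16-bit vs. ~4.5-bit (Q4_K_M-style) quantization:
fp16 = approx_size_gb(1e9, 16)   # 2.0 GB
q4   = approx_size_gb(1e9, 4.5)  # ~0.56 GB
```

&lt;p&gt;Quantizing from 16-bit down to roughly 4.5 bits per weight cuts the footprint by more than two thirds, which is what puts these models within reach of a laptop.&lt;/p&gt;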

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cczd28zk6kathd6f972.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cczd28zk6kathd6f972.webp" alt=" " width="720" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Multiple Interaction Methods
&lt;/h3&gt;

&lt;p&gt;Docker Model Runner offers flexibility in how you interact with models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Command-line interface for terminal-based interactions&lt;/li&gt;
&lt;li&gt;Docker Desktop GUI for visual model management&lt;/li&gt;
&lt;li&gt;OpenAI-compatible REST APIs for programmatic access&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. Parallel Multi-Model Support
&lt;/h3&gt;

&lt;p&gt;Need to run multiple models simultaneously? Docker Model Runner handles this effortlessly. For example, if you're building an AI agent that performs text summarization and image generation, you can run both models in parallel without complex configuration. Models can be accessed through the GUI, CLI, and API endpoints concurrently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5c4wrtxqe4juig8jxgw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5c4wrtxqe4juig8jxgw.webp" alt=" " width="720" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Docker Desktop version 4.41.0 or higher&lt;/li&gt;
&lt;li&gt;(Optional) NVIDIA GPU for accelerated inference&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;Open Docker Desktop settings and enable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPU-backed inference&lt;/strong&gt;: Allows automatic NVIDIA GPU detection and utilization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Host TCP support&lt;/strong&gt;: Enables OpenAI-compatible API access via HTTP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CORS settings&lt;/strong&gt;: Set to "all" if you encounter API access issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8m31qo6dakr2l6gglsp.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8m31qo6dakr2l6gglsp.webp" alt=" " width="720" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Finding and Pulling Models
&lt;/h3&gt;

&lt;h4&gt;
  
  
  From Docker Hub:
&lt;/h4&gt;

&lt;p&gt;Navigate to the AI section in Docker Hub and search for models using the &lt;code&gt;ai/&lt;/code&gt; prefix. Popular models like Llama 3.2, Mistral, and Phi-3 are readily available. Each model listing shows different quantization versions, allowing you to balance performance and resource requirements.&lt;/p&gt;

&lt;p&gt;To pull a model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker model pull ai/llama3.2:1b-instruct-q4_K_M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  From Hugging Face:
&lt;/h4&gt;

&lt;p&gt;Browse to your desired model on Hugging Face, select "Use this model," and choose "Docker Model Runner" as the deployment method. The interface will display the appropriate pull command with your selected quantization level.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Commands
&lt;/h3&gt;

&lt;p&gt;Check Docker Model Runner status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker model status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List downloaded models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker model list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This displays model metadata including name, parameters, quantization level, and architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interacting with Models
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Command-Line Interface
&lt;/h3&gt;

&lt;p&gt;Single query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker model run ai/llama3.2:1b-instruct-q4_K_M &lt;span class="s2"&gt;"What is Docker?"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interactive session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker model run ai/llama3.2:1b-instruct-q4_K_M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens a chat interface where you can have multi-turn conversations. Docker Model Runner maintains context across multiple exchanges. Exit by typing &lt;code&gt;/bye&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Desktop GUI
&lt;/h3&gt;

&lt;p&gt;In the Models tab, navigate to the Local section and click "Run" next to your desired model. This launches an interactive interface where you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chat with the model through a text input field&lt;/li&gt;
&lt;li&gt;View the Inspect tab for model metadata and architecture details&lt;/li&gt;
&lt;li&gt;Check the Requests tab to see your conversation history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The GUI maintains multi-turn conversation context, allowing natural, contextual interactions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflmz8iret90duu0dl2w6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflmz8iret90duu0dl2w6.webp" alt=" " width="720" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenAI-Compatible API
&lt;/h3&gt;

&lt;p&gt;With host TCP support enabled in Docker Desktop settings, you can access models via REST API on the configured port (default varies based on your settings):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:PORT/v1/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "model": "ai/llama3.2:1b-instruct-q4_K_M",
    "messages": [{"role": "user", "content": "What is Docker?"}]
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response format matches OpenAI's API specification, ensuring compatibility with existing tooling and parsers.&lt;/p&gt;
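&lt;p&gt;Concretely, existing OpenAI-style parsing keeps working; here is a sketch against a hand-written sample response in that shape (the content and token counts are invented for illustration):&lt;/p&gt;

```python
# A hand-written sample in OpenAI's chat-completion shape -- a real
# response would come from the endpoint above.
sample_response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Docker is a container platform."}}
    ],
    "usage": {"prompt_tokens": 8, "completion_tokens": 7, "total_tokens": 15},
}

# Existing OpenAI parsing logic needs no changes:
answer = sample_response["choices"][0]["message"]["content"]
```

&lt;p&gt;Any tooling that already reads &lt;code&gt;choices[0].message.content&lt;/code&gt; from the cloud API reads the local response the same way.&lt;/p&gt;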

&lt;h2&gt;
  
  
  Docker Model Runner vs. Ollama
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k18o6qwk962bkvzyvg0.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k18o6qwk962bkvzyvg0.webp" alt=" " width="720" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both tools enable local AI model execution, but they have distinct characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;: Docker Model Runner runs models on the host machine rather than in containers, typically achieving approximately 12% better performance than containerized approaches. Ollama also runs on the host, either as a standalone binary or managed service, providing similar performance benefits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration&lt;/strong&gt;: Docker Model Runner provides seamless integration with Docker Desktop and Docker Compose, making it ideal if you're already using Docker for development. Models can be defined in your compose files and started automatically with your application stack. Ollama operates as a standalone application with its own CLI and basic API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Endpoints&lt;/strong&gt;: Both offer OpenAI-compatible endpoints, but they use different default ports. You can configure these as needed for your environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tips and Resources
&lt;/h2&gt;

&lt;p&gt;The official Docker Model Runner documentation provides comprehensive guidance for various platforms including WSL 2, Linux, and macOS. The "Known Issues" section addresses common problems and their solutions.&lt;/p&gt;

&lt;p&gt;For those interested in the technical details, the Docker team has published an in-depth blog post covering the design philosophy, goals, GPU acceleration strategies, and high-level architecture. This resource is invaluable for understanding the engineering decisions behind Docker Model Runner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Docker Model Runner represents a natural evolution for developers already invested in the Docker ecosystem. By bringing local AI model execution to Docker Desktop, it eliminates the need for separate tools while providing familiar commands and workflows.&lt;/p&gt;

&lt;p&gt;The combination of data privacy, development speed, and seamless integration makes Docker Model Runner particularly attractive for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Development teams building AI-powered applications&lt;/li&gt;
&lt;li&gt;Organizations with data sensitivity requirements&lt;/li&gt;
&lt;li&gt;Developers seeking faster iteration cycles&lt;/li&gt;
&lt;li&gt;Teams already standardized on Docker tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're currently using Docker for development but haven't explored local AI model execution, Docker Model Runner offers a compelling entry point. Its integration with existing Docker workflows means minimal learning curve while unlocking powerful AI capabilities directly in your development environment.&lt;/p&gt;

&lt;p&gt;Whether you're building chatbots, implementing RAG systems, or experimenting with AI agents, Docker Model Runner provides the infrastructure to do so efficiently and securely on your local machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://youtu.be/CV5uBoA78qI" rel="noopener noreferrer"&gt;Docker Model Runner Tutorial 2025: Run AI Models Locally in Minutes | Complete Guide&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>devops</category>
    </item>
    <item>
      <title>10X Your Git Workflow: 7 Pro Tips to Boost Productivity 🚀</title>
      <dc:creator>Urvil Joshi</dc:creator>
      <pubDate>Wed, 17 Sep 2025 08:52:10 +0000</pubDate>
      <link>https://dev.to/urvvil/10x-your-git-workflow-7-pro-tips-to-boost-productivity-28nj</link>
      <guid>https://dev.to/urvvil/10x-your-git-workflow-7-pro-tips-to-boost-productivity-28nj</guid>
      <description>&lt;p&gt;Hey DEV community! Tired of Git stashes or messy commits? My new YouTube video, 10X Your Git Workflow: 7 Pro Tips (Worktree, Hooks &amp;amp; More), shares advanced hacks to save time and streamline version control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highlights&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swap git stash for git worktree to juggle branches smoothly.&lt;/li&gt;
&lt;li&gt;Clean commits with interactive rebase for polished PRs.&lt;/li&gt;
&lt;li&gt;Automate checks with Git hooks to catch errors early.&lt;/li&gt;
&lt;li&gt;Recover lost commits with git reflog—your safety net!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Perfect for devs using GitHub or GitLab. Watch now: &lt;a href="https://youtu.be/d_xZgcRJ--Q" rel="noopener noreferrer"&gt;https://youtu.be/d_xZgcRJ--Q&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What’s your top Git trick or worst Git headache? Share below! 😄&lt;/p&gt;

&lt;p&gt;#git #versioncontrol #developerproductivity #programming #coding&lt;/p&gt;

</description>
      <category>git</category>
      <category>versioncontrol</category>
      <category>developerproductivity</category>
      <category>gittips</category>
    </item>
    <item>
      <title>Hey DEV community! Tired of Git chaos? My new YouTube video, 10X Your Git Workflow: 7 Pro Tips, shares advanced hacks to save time https://youtu.be/d_xZgcRJ--Q #git #versioncontrol #developerproductivity #programming #coding</title>
      <dc:creator>Urvil Joshi</dc:creator>
      <pubDate>Wed, 17 Sep 2025 08:44:00 +0000</pubDate>
      <link>https://dev.to/urvvil/hey-dev-community-tired-of-git-chaos-my-new-youtube-video-10x-your-git-workflow-7-pro-tips-2n5h</link>
      <guid>https://dev.to/urvvil/hey-dev-community-tired-of-git-chaos-my-new-youtube-video-10x-your-git-workflow-7-pro-tips-2n5h</guid>
      <description>&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://youtu.be/d_xZgcRJ--Q" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;youtu.be&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
  </channel>
</rss>
