<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ákos Takács</title>
    <description>The latest articles on DEV Community by Ákos Takács (@rimelek).</description>
    <link>https://dev.to/rimelek</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1043338%2F76bfdae3-7ab2-4397-8493-cc2f1a389a03.png</url>
      <title>DEV Community: Ákos Takács</title>
      <link>https://dev.to/rimelek</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rimelek"/>
    <language>en</language>
    <item>
      <title>History of GPUs in the context of containers until 2025</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Tue, 24 Jun 2025 20:00:37 +0000</pubDate>
      <link>https://dev.to/rimelek/history-of-gpus-in-the-context-of-containers-until-2025-je7</link>
      <guid>https://dev.to/rimelek/history-of-gpus-in-the-context-of-containers-until-2025-je7</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Years ago, before I started to use Docker, I didn't really care about GPUs. When I was about 12, all I heard was that a good graphics card was required to play some games on a Windows PC. That didn't change much over the years, since for a long time I wasn't really among people talking about graphics cards and, later, GPUs. Now, in the age of containers, Docker, and machine learning, everyone is talking about GPUs.&lt;/p&gt;

&lt;p&gt;I have done some research to discover the important steps in the evolution of GPUs, but I did it with containers in mind, so I could also learn about the important steps in the development of Docker and containers in general in the context of GPUs. That's why I share more GPU-related events up to 2017 below, and then speed up, focusing on containers.&lt;/p&gt;

&lt;p&gt;It is important to note that I read about many of the events I share in other articles on the internet, but I tried to find multiple sources for confirmation. You will find links to all of my sources in this post. It could not have been written without the authors of those sources, so a big thanks to them, and please follow the links if you want to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The very beginning&lt;/li&gt;
&lt;li&gt;Main contributors to the early history of GPUs&lt;/li&gt;
&lt;li&gt;GPU not just for graphics&lt;/li&gt;
&lt;li&gt;Containers and GPUs&lt;/li&gt;
&lt;li&gt;Introducing GPU support in WSL and Docker Desktop&lt;/li&gt;
&lt;li&gt;Supporting AI workloads and using AI at Docker, Inc&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;Sources&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The very beginning
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;It is not an accident that I didn't hear about using GPUs when I was 12, and not just because I was a kid. The term "GPU" did not exist at the beginning, when we could only talk about graphics cards. So, if we really wanted to go back to the very beginning, we could start with the first attempts at visualization, and talk about the "&lt;a href="https://en.wikipedia.org/wiki/Manchester_Baby" rel="noopener noreferrer"&gt;Manchester Baby&lt;/a&gt;", which, according to &lt;a href="https://www.britannica.com/technology/graphics-processing-unit" rel="noopener noreferrer"&gt;Britannica&lt;/a&gt;, could display images with a cathode-ray tube in 1948. The same article also mentions "&lt;a href="https://en.wikipedia.org/wiki/Whirlwind_I" rel="noopener noreferrer"&gt;Whirlwind computer&lt;/a&gt;" as "the first computer to display video". Of course, it was not the video that you could watch today.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;According to &lt;a href="https://acecloud.ai/blog/the-evolution-of-gpu/#early" rel="noopener noreferrer"&gt;AceCloud&lt;/a&gt;, the history of GPUs started in the 1970s, mentioning even 1968 when "computer graphics were just in their infancy".&lt;/li&gt;
&lt;li&gt;In the beginning, graphics cards were indeed used for computer graphics, but in a very simple way compared to what we have today. Later, people realized that these devices could serve other purposes where parallel processing is required.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Main contributors to the early history of GPUs
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;There are some companies I probably don't have to introduce to anyone, as they are all well-known, like ATI, AMD, NVIDIA, and Sony, but I personally had never heard about 3DLabs before.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/ATI_Technologies" rel="noopener noreferrer"&gt;ATI Technologies&lt;/a&gt; was founded in 1985&lt;/li&gt;
&lt;li&gt;even before &lt;a href="https://en.wikipedia.org/wiki/Nvidia" rel="noopener noreferrer"&gt;NVIDIA&lt;/a&gt; was founded in 1993&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://en.wikipedia.org/wiki/Graphics_processing_unit#1990s" rel="noopener noreferrer"&gt;term "GPU" was coined by Sony&lt;/a&gt; in 1994&lt;/li&gt;
&lt;li&gt;A company called 3DLabs was founded in 1994, and &lt;a href="https://www.britannica.com/technology/graphics-processing-unit" rel="noopener noreferrer"&gt;according to Britannica&lt;/a&gt;, they had a "3-D add-in card" &lt;a href="https://www.britannica.com/technology/graphics-processing-unit" rel="noopener noreferrer"&gt;as the "first modern GPU"&lt;/a&gt; from 1995. I had never heard about it before, and I couldn't figure out which card exactly the author referred to, as I couldn't find the quoted term anywhere else on Google. However, &lt;a href="https://vintage3d.org/3dlabs.php#sthash.XHcdU0gg.okYrVDAw.dpbs" rel="noopener noreferrer"&gt;3DLabs is mentioned by other websites&lt;/a&gt; as once a "leading company in the field".&lt;/li&gt;
&lt;li&gt;Intel's first dedicated GPU &lt;a href="https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#First_generation" rel="noopener noreferrer"&gt;was made in 1998&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;You can read that &lt;a href="https://en.wikipedia.org/wiki/GeForce_256" rel="noopener noreferrer"&gt;GeForce 256&lt;/a&gt; was marketed by NVIDIA &lt;a href="https://en.wikipedia.org/wiki/Graphics_processing_unit#Terminology" rel="noopener noreferrer"&gt;as the "world's first GPU"&lt;/a&gt; in 1999.&lt;/li&gt;
&lt;li&gt;In 2006, ATI Technologies &lt;a href="https://en.wikipedia.org/wiki/ATI_Technologies#History" rel="noopener noreferrer"&gt;was acquired by AMD&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is interesting how many products were considered basically the beginning of GPUs by different people. I think it just means that the birth of today's GPUs didn't happen overnight; it was a long process with many important steps. Let's continue with more.&lt;/p&gt;

&lt;h2&gt;
  
  
  GPU not just for graphics
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In 2009, &lt;a href="https://medium.com/neuralmagic/a-brief-history-of-gpus-27122d8fd45" rel="noopener noreferrer"&gt;a paper was written&lt;/a&gt; "discussing the technology's promise in machine learning applications". The link to the paper in the linked article doesn't work anymore, but I think I found it in two places, both at Stanford University, uploaded by the authors:

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://robotics.stanford.edu/~ang/papers/icml09-LargeScaleUnsupervisedDeepLearningGPU.pdf" rel="noopener noreferrer"&gt;Uploaded by Andrew Y. Ng&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ai.stanford.edu/~rajatr/papers/icml09_gpu.pdf" rel="noopener noreferrer"&gt;Uploaded by Rajat Raina&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;So it seems that things started to change and more and more people and companies started to use GPUs for machine learning.&lt;/li&gt;

&lt;li&gt;Also in 2009, a paper was published with the title "&lt;a href="https://www.nvidia.com/content/pdf/fermi_white_papers/p.glaskowsky_nvidia%27s_fermi-the_first_complete_gpu_architecture.pdf" rel="noopener noreferrer"&gt;NVIDIA's Fermi: The First Complete GPU Computing Architecture&lt;/a&gt;".&lt;/li&gt;

&lt;li&gt;In 2010, NVIDIA released the &lt;a href="https://en.wikipedia.org/wiki/Fermi_(microarchitecture)" rel="noopener noreferrer"&gt;Fermi Architecture&lt;/a&gt; as a successor to the Tesla Architecture.&lt;/li&gt;

&lt;li&gt;In 2012, AlexNet won and even "dominated" the &lt;a href="https://en.wikipedia.org/wiki/ImageNet" rel="noopener noreferrer"&gt;ImageNet challenge&lt;/a&gt; using CUDA, which was a significant step in deep learning.&lt;/li&gt;

&lt;li&gt;In 2017, NVIDIA released the &lt;a href="https://en.wikipedia.org/wiki/Volta_(microarchitecture)" rel="noopener noreferrer"&gt;Volta architecture&lt;/a&gt; that introduced Tensor cores to "speed up the training of neural networks"&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Containers and GPUs
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;GPUs might be very important tools in parallel processing and machine learning, but containers are also everywhere, both in development and in production, so naturally we want to be able to use GPUs in containers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;a href="https://github.com/lxc/lxc/releases/tag/lxc_0_1_0" rel="noopener noreferrer"&gt;first version of LXC&lt;/a&gt; (Linux Containers) was released in 2008&lt;/li&gt;
&lt;li&gt;Docker's first version &lt;a href="https://en.wikipedia.org/wiki/Docker_(software)#History" rel="noopener noreferrer"&gt;was released in 2013&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;If my interpretation of the sources is correct, NVIDIA started to support GPUs in containers in 2016. There was a &lt;a href="https://github.com/NVIDIA/nvidia-docker/releases/tag/v0.0.0-poc" rel="noopener noreferrer"&gt;proof of concept release of NVIDIA Docker&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The latest patch of "nvidia-docker" v1 &lt;a href="https://github.com/NVIDIA/nvidia-docker/tree/v1.0.1" rel="noopener noreferrer"&gt;was released in March 2017&lt;/a&gt;. This was the last package that actually contained the source code, and it supported Docker 17.03.&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://github.com/NVIDIA/libnvidia-container/commit/e1a4a1101578726c76f7b1d8092bf2dd3b4a2cc4" rel="noopener noreferrer"&gt;first commit of libnvidia-container&lt;/a&gt; was made in April 2017.
Interestingly, to confuse us, the source code was pushed to the "nvidia-container-runtime" repository and the "libnvidia-container" repository as well, and I assume the original source was actually the "nvidia-container-runtime" repository. This assumption is based on the following:

&lt;ul&gt;
&lt;li&gt;The title of the &lt;a href="https://github.com/NVIDIA/nvidia-container-runtime/tree/v1.0.0" rel="noopener noreferrer"&gt;readme of v1.0.0&lt;/a&gt; was "libnvidia-container"&lt;/li&gt;
&lt;li&gt;All the v1 tags (v1.0.0 - v1.0.3) in the "nvidia-container-runtime" repository were created at the same time as the commits they point to. According to GitHub, v1.0.0 in "nvidia-container-runtime" &lt;a href="https://github.com/NVIDIA/nvidia-container-runtime/releases/tag/v1.0.0" rel="noopener noreferrer"&gt;was tagged in September 2018&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Only one day later, the same commit in the "libnvidia-container" repository &lt;a href="https://github.com/NVIDIA/libnvidia-container/releases/tag/v1.0.0" rel="noopener noreferrer"&gt;was tagged as v1.0.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The rest of the v1 tags in "nvidia-container-runtime" appeared in "libnvidia-container" with an even bigger delay.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;I also found a Hungarian name as the author of &lt;a href="https://github.com/docker/cli/commit/1ba368a5ac06766c6ba43f54a3ffd4f5452af760" rel="noopener noreferrer"&gt;the commit that added the "gpus" option to the Docker CLI&lt;/a&gt;. To me, as a Hungarian, it was probably more interesting than to others.&lt;/li&gt;

&lt;li&gt;After v1, "nvidia-docker" eventually became the &lt;a href="https://github.com/NVIDIA/nvidia-docker/issues/1268" rel="noopener noreferrer"&gt;name of a collection of packages&lt;/a&gt; including "&lt;a href="https://github.com/NVIDIA/nvidia-container-runtime" rel="noopener noreferrer"&gt;nvidia-container-runtime&lt;/a&gt;" and the "&lt;a href="https://github.com/NVIDIA/libnvidia-container" rel="noopener noreferrer"&gt;libnvidia-container&lt;/a&gt;".&lt;/li&gt;

&lt;li&gt;Initially, the runtime was a patched version of "runc", which is the default container runtime in Docker today. Although some sources indicate that "nvidia-docker" was a fork of "runc", it was not.&lt;/li&gt;

&lt;li&gt;The latest release of NVIDIA Docker came out on August 30, 2023, and it was superseded by the &lt;a href="https://github.com/NVIDIA/nvidia-container-toolkit" rel="noopener noreferrer"&gt;NVIDIA Container Toolkit&lt;/a&gt;.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introducing GPU support in WSL and Docker Desktop
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Although GPU support existed on Linux and in containers before, &lt;a href="https://docs.docker.com/desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; runs the Docker daemon and the containers in a virtual machine even on Linux, as I mentioned in a previous &lt;a href="https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn#docker-in-a-vm-using-docker-desktop"&gt;blog post&lt;/a&gt;. That means the GPU has to be available inside the virtual machine, even when you are using that GPU on the physical host.&lt;/p&gt;

&lt;p&gt;On June 17, 2020, &lt;a href="https://devblogs.microsoft.com/commandline/gpu-compute-wsl-install-and-wsl-update-arrive-in-the-windows-insiders-fast-ring-for-the-windows-subsystem-for-linux/" rel="noopener noreferrer"&gt;Microsoft announced GPU compute support for WSL2&lt;/a&gt;, which was also &lt;a href="https://developer.nvidia.com/blog/announcing-cuda-on-windows-subsystem-for-linux-2/" rel="noopener noreferrer"&gt;mentioned on the NVIDIA tech blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On December 21, 2020, Docker announced the "&lt;a href="https://www.docker.com/blog/wsl-2-gpu-support-is-here/" rel="noopener noreferrer"&gt;general preview of Docker Desktop support for GPU with Docker in WSL2&lt;/a&gt;". That basically meant that since WSL2 already supported NVIDIA GPUs, and the NVIDIA container runtime existed, it could be automatically configured in Docker Desktop when using WSL2 as a backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supporting AI workloads and using AI at Docker, Inc
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"&lt;a href="https://www.notion.so/English-1f7bbedb6cd780f8b486f56a60201b5e?pvs=21" rel="noopener noreferrer"&gt;Docker, Inc&lt;/a&gt;" didn't just continue to support workloads using GPUs, which often meant some AI-related processes, but released their "Docker Docs AI" and &lt;a href="https://www.docker.com/blog/docker-documentation-ai-powered-assistant/" rel="noopener noreferrer"&gt;wrote a blog post about it pn May 22, 2024&lt;/a&gt;. That was the first step towards a Docker AI assistant, but it was for helping with Docker-related questions based on the documentation.&lt;/li&gt;
&lt;li&gt;On the official Docker Forums, &lt;a href="https://forums.docker.com/t/good-news-for-docker-enthusiasts-join-the-ask-gordon-beta-program/145827" rel="noopener noreferrer"&gt;I could announce the "Ask Gordon" Beta&lt;/a&gt; on January 2, 2025. It is an AI assistant integrated into Docker Desktop.&lt;/li&gt;
&lt;li&gt;Docker Desktop is a tool to support container-based development, which currently also includes working with AI models, so on April 9, 2025, the &lt;a href="https://www.docker.com/blog/docker-documentation-ai-powered-assistant/" rel="noopener noreferrer"&gt;Docker Model Runner was announced&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Not long after the announcement of the Model Runner, &lt;a href="https://www.docker.com/blog/announcing-docker-mcp-catalog-and-toolkit-beta/" rel="noopener noreferrer"&gt;the MCP Catalog and Toolkit were announced&lt;/a&gt; as well on May 5, 2025.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;So this is where we are now. I'm sure I could have mentioned many other details, but my goal was to give you an overview before I jump to other GPU- or AI-related posts. At least that's the goal.&lt;/p&gt;

&lt;p&gt;Until then, you can check the Excel sheet in which I collected some GPU- and container-related events in chronological order:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://1drv.ms/x/c/9d670019d6697cb6/EbZ8adYZAGcggJ2RjAAAAAABhaNs-vt8Yj4QIeqzhs4Rxg" rel="noopener noreferrer"&gt;https://1drv.ms/x/c/9d670019d6697cb6/EbZ8adYZAGcggJ2RjAAAAAABhaNs-vt8Yj4QIeqzhs4Rxg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are looking for links, you can move the mouse pointer over the top right corner of the cells where you see a little red arrow (at least this is how it works at the time of writing this post).&lt;/p&gt;

&lt;p&gt;In the next section, you can find links to the websites I used as sources. Links that only point to a product or to one of my previous posts are not repeated there.&lt;/p&gt;

&lt;p&gt;If you find incorrect information, please let me know in the comments. You can also share your opinion or add an important event that you would have mentioned.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wikipedia

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Manchester_Baby" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Manchester_Baby&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Whirlwind_I" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Whirlwind_I&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/ATI_Technologies" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/ATI_Technologies&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Nvidia" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Nvidia&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Graphics_processing_unit#1990s" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Graphics_processing_unit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#First_generation" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#First_generation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/GeForce_256" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/GeForce_256&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Fermi_(microarchitecture)" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Fermi_(microarchitecture)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/ImageNet" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/ImageNet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Volta_(microarchitecture)" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Volta_(microarchitecture)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Docker_(software)#History" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Docker_(software)#History&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;GitHub repositories

&lt;ul&gt;
&lt;li&gt;lxc

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/lxc/lxc/releases/tag/lxc_0_1_0" rel="noopener noreferrer"&gt;https://github.com/lxc/lxc/releases/tag/lxc_0_1_0&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;NVIDIA

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/nvidia-docker/releases/tag/v0.0.0-poc" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/nvidia-docker/releases/tag/v0.0.0-poc&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/nvidia-docker/tree/v1.0.1" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/nvidia-docker/tree/v1.0.1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/libnvidia-container/commit/e1a4a1101578726c76f7b1d8092bf2dd3b4a2cc4" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/libnvidia-container/commit/e1a4a1101578726c76f7b1d8092bf2dd3b4a2cc4&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/nvidia-container-runtime/tree/v1.0.0" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/nvidia-container-runtime/tree/v1.0.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/nvidia-container-runtime/releases/tag/v1.0.0" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/nvidia-container-runtime/releases/tag/v1.0.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/libnvidia-container/releases/tag/v1.0.0" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/libnvidia-container/releases/tag/v1.0.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/nvidia-docker/issues/1268" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/nvidia-docker/issues/1268&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/nvidia-container-runtime" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/nvidia-container-runtime&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/libnvidia-container" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/libnvidia-container&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/nvidia-container-toolkit" rel="noopener noreferrer"&gt;https://github.com/NVIDIA/nvidia-container-toolkit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Docker

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docker/cli/commit/1ba368a5ac06766c6ba43f54a3ffd4f5452af760" rel="noopener noreferrer"&gt;https://github.com/docker/cli/commit/1ba368a5ac06766c6ba43f54a3ffd4f5452af760&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Docker

&lt;ul&gt;
&lt;li&gt;Blog

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/wsl-2-gpu-support-is-here/" rel="noopener noreferrer"&gt;https://www.docker.com/blog/wsl-2-gpu-support-is-here/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/docker-documentation-ai-powered-assistant/" rel="noopener noreferrer"&gt;https://www.docker.com/blog/docker-documentation-ai-powered-assistant/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/docker-documentation-ai-powered-assistant/" rel="noopener noreferrer"&gt;https://www.docker.com/blog/docker-documentation-ai-powered-assistant/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/blog/announcing-docker-mcp-catalog-and-toolkit-beta/" rel="noopener noreferrer"&gt;https://www.docker.com/blog/announcing-docker-mcp-catalog-and-toolkit-beta/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Forum

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://forums.docker.com/t/good-news-for-docker-enthusiasts-join-the-ask-gordon-beta-program/145827" rel="noopener noreferrer"&gt;https://forums.docker.com/t/good-news-for-docker-enthusiasts-join-the-ask-gordon-beta-program/145827&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;NVIDIA

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.nvidia.com/blog/announcing-cuda-on-windows-subsystem-for-linux-2/" rel="noopener noreferrer"&gt;https://developer.nvidia.com/blog/announcing-cuda-on-windows-subsystem-for-linux-2/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nvidia.com/content/pdf/fermi_white_papers/p.glaskowsky_nvidia%27s_fermi-the_first_complete_gpu_architecture.pdf" rel="noopener noreferrer"&gt;https://www.nvidia.com/content/pdf/fermi_white_papers/p.glaskowsky_nvidia's_fermi-the_first_complete_gpu_architecture.pdf&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Stanford

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://robotics.stanford.edu/~ang/papers/icml09-LargeScaleUnsupervisedDeepLearningGPU.pdf" rel="noopener noreferrer"&gt;https://robotics.stanford.edu/~ang/papers/icml09-LargeScaleUnsupervisedDeepLearningGPU.pdf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ai.stanford.edu/~rajatr/papers/icml09_gpu.pdf" rel="noopener noreferrer"&gt;https://ai.stanford.edu/~rajatr/papers/icml09_gpu.pdf&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Microsoft

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://devblogs.microsoft.com/commandline/gpu-compute-wsl-install-and-wsl-update-arrive-in-the-windows-insiders-fast-ring-for-the-windows-subsystem-for-linux/" rel="noopener noreferrer"&gt;https://devblogs.microsoft.com/commandline/gpu-compute-wsl-install-and-wsl-update-arrive-in-the-windows-insiders-fast-ring-for-the-windows-subsystem-for-linux/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;AceCloud

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://acecloud.ai/blog/the-evolution-of-gpu/#early" rel="noopener noreferrer"&gt;https://acecloud.ai/blog/the-evolution-of-gpu/#early&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Britannica

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.britannica.com/technology/graphics-processing-unit" rel="noopener noreferrer"&gt;https://www.britannica.com/technology/graphics-processing-unit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Vintage3d

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://vintage3d.org/3dlabs.php#sthash.XHcdU0gg.okYrVDAw.dpbs" rel="noopener noreferrer"&gt;https://vintage3d.org/3dlabs.php#sthash.XHcdU0gg.okYrVDAw.dpbs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Medium

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/neuralmagic/a-brief-history-of-gpus-27122d8fd45" rel="noopener noreferrer"&gt;https://medium.com/neuralmagic/a-brief-history-of-gpus-27122d8fd45&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>container</category>
      <category>history</category>
      <category>gpu</category>
    </item>
    <item>
      <title>Error message: Is the Docker daemon running?</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Sun, 05 Jan 2025 16:06:43 +0000</pubDate>
      <link>https://dev.to/rimelek/error-message-is-the-docker-daemon-running-3l7c</link>
      <guid>https://dev.to/rimelek/error-message-is-the-docker-daemon-running-3l7c</guid>
<description>&lt;p&gt;I shared this post on the official &lt;a href="https://forums.docker.com/t/tutorial-solve-the-error-message-is-the-docker-daemon-running/145891" rel="noopener noreferrer"&gt;Docker forum&lt;/a&gt; as well, in case you prefer reading there. I will try to keep these posts in sync, but leave a comment if you think they are not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;There is a common error message that Docker users get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes you get an IP address instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cannot connect to the Docker daemon at tcp://127.0.0.1:2375. Is the docker daemon running?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or sometimes a default domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cannot connect to the Docker daemon at http://docker.example.com. Is the docker daemon running?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although the solution is often quite simple, people tend to jump to the wrong conclusion that the Docker client knows exactly whether the daemon is running, and they don't understand what else could be the problem when they can see the Docker daemon running. So in this post I will explain what the error message is, what it isn't, and how a single error message can mean so many things. Of course, I will also share the solutions I know about, but what I really want you to learn here is how to interpret the error message in the context of your environment and how to find a solution for your case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The meaning of the error message&lt;/li&gt;
&lt;li&gt;Quick guide for the impatient&lt;/li&gt;
&lt;li&gt;
Problems and solutions in details

&lt;ul&gt;
&lt;li&gt;Find out which Docker variant you have&lt;/li&gt;
&lt;li&gt;Docker runtime contexts&lt;/li&gt;
&lt;li&gt;Running remote Docker daemons&lt;/li&gt;
&lt;li&gt;
Check if the Docker daemon is actually running

&lt;ul&gt;
&lt;li&gt;Docker CE status on Linux running under Systemd&lt;/li&gt;
&lt;li&gt;Docker as Snap package status&lt;/li&gt;
&lt;li&gt;Docker CE status in Docker Desktop&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Check if the Docker unix domain socket exists&lt;/li&gt;

&lt;li&gt;Check if you have permission to the socket&lt;/li&gt;

&lt;li&gt;Make sure you are using the right socket&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;I got "is the Docker daemon running", but I followed the official installation guide&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  The meaning of the error message
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Notice that it ends with a question, not a statement. Either the dockerd process is not running, or the client simply could not tell whether it is running. The Docker client does not check the running processes on the host machine. The client could run on your workstation while you are trying to connect to a Docker daemon running on a remote machine or in a virtual machine, maybe even one created by Docker Desktop. So the client attempts to connect to the daemon, and if it fails to do so, it tells you that one possible reason for the failure is that the daemon is not even running.&lt;/p&gt;

&lt;p&gt;The second detail you need to notice is the unix socket in the error message. That is a file on the filesystem of the machine on which the client is running, which is almost always the same machine the daemon runs on, or at least the same physical machine when the daemon runs in a VM (like with Docker Desktop). When there is only one machine, this file is used by both the client and the daemon, so the client can communicate with the Docker daemon through it. It is not a regular file, but a way for processes to communicate with each other through kernel memory. Fortunately, you don't have to know much about that, but it is probably the reason why you can't mount this socket file (or any other unix domain socket) into a virtual machine.&lt;/p&gt;

&lt;p&gt;Oh... you think you could do that with Docker Desktop? No, you couldn't, but there is a Docker socket inside the VM as well. So when you mount the docker socket into a container, you mount the one from the virtual machine.&lt;/p&gt;

&lt;p&gt;Even though this file is not a regular file, you can still set permissions on it, which means you can allow or deny access to it for different users. If you don't have access to the unix domain socket configured for the client, the client will not be able to use it for communication. It will not be able to connect to the Docker daemon even if the daemon is running and listening on this socket for requests. Note that &lt;code&gt;unix://&lt;/code&gt; is just a protocol, similar to &lt;code&gt;http://&lt;/code&gt;. The difference is that after an HTTP protocol the next part is a domain name or IP address, while after the unix domain socket protocol the next part is a file path, which starts with a slash character on Linux. That is why you see three slashes: one is part of the file path, and the other two are part of the protocol reference.&lt;/p&gt;
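&lt;p&gt;To make the three slashes concrete, here is a minimal shell sketch that splits the address from the error message the same way a client would: everything after the &lt;code&gt;unix://&lt;/code&gt; prefix is an absolute file path. The address used here is just the default one from the error message above.&lt;/p&gt;

```shell
# The daemon address as it appears in the error message.
addr='unix:///var/run/docker.sock'

# Strip the protocol prefix; what remains is an absolute file path,
# whose leading slash is the third slash you see in the address.
path="${addr#unix://}"

echo "protocol: unix://"
echo "socket file: $path"   # -> /var/run/docker.sock
```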

&lt;h2&gt;
  
  
  Quick guide for the impatient
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Here is a summary of what you should consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There are multiple Docker variants, and knowing which one you are using can help you find out whether it is running, because you know where to look for the process. On the other hand, if you know how to list processes on the operating system, looking for the Docker daemon in the list can help you figure out which Docker variant you have. Be careful, because you could have more than one.&lt;/li&gt;
&lt;li&gt;Are you using a daemon running on a remote server, or a local daemon?&lt;/li&gt;
&lt;li&gt;Does the socket file exist?&lt;/li&gt;
&lt;li&gt;Does the Docker client have access to the file?&lt;/li&gt;
&lt;li&gt;If you have access to the file, are you sure that this socket is the one that is used by the Docker daemon?&lt;/li&gt;
&lt;/ul&gt;
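&lt;p&gt;The socket-related items above can be sketched as a small shell function; the messages and the default path are my own choices for illustration, not something the Docker client prints:&lt;/p&gt;

```shell
# Rough diagnostic sketch for a unix domain socket endpoint
check_socket() {
  sock="$1"
  if [ ! -e "$sock" ]; then
    echo "missing: $sock"          # the daemon is probably not running here
  elif [ ! -S "$sock" ]; then
    echo "not a socket: $sock"     # something replaced or broke the file
  elif [ ! -w "$sock" ]; then
    echo "no write access: $sock"  # permission problem for the current user
  else
    echo "looks usable: $sock"     # the client should be able to connect
  fi
}

check_socket /var/run/docker.sock
```

&lt;p&gt;Running it against the endpoint reported in your error message narrows the problem down to one of the cases discussed below.&lt;/p&gt;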

&lt;h2&gt;
  
  
  Problems and solutions in detail
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Find out which Docker variant you have
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;First you need to know which Docker variant you have. People often install Docker CE and Docker Desktop when they only want to have the Desktop. Then they actually try to connect to the wrong daemon. I wrote about the different kinds of Docker installations in "&lt;a href="https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn"&gt;You run containers, not dockers - Discussing Docker variants, components and versioning&lt;/a&gt;". If you want to learn more about it in practice, you can also read "&lt;a href="https://dev.to/rimelek/which-docker-variant-am-i-using-and-where-is-the-daemon-running-44bc"&gt;Which Docker variant am I using and where is the daemon running?&lt;/a&gt;"&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker runtime contexts
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We have to talk about &lt;a href="https://docs.docker.com/engine/manage-resources/contexts/" rel="noopener noreferrer"&gt;docker contexts&lt;/a&gt;. If I'm not specific enough, it could be confused with the docker build context, which is a completely different topic. What I'm talking about now is what we could call the "Docker runtime context": the context definition that tells the client how to connect to the daemon on the server where the containers will run. The following command lists the existing contexts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sometimes the easiest solution is switching to another context, or creating one first if none of the existing contexts are correct. If you don't want to create a context, you can also set an environment variable temporarily to change the host. It is mentioned in the linked documentation, but there is another page with an example: &lt;a href="https://docs.docker.com/engine/security/protect-access/#secure-by-default" rel="noopener noreferrer"&gt;Protect the Docker daemon socket / Secure by default&lt;/a&gt;. I will also use this later in this post.&lt;/p&gt;

&lt;p&gt;If you want to get the actual endpoint from the current context, you can also run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context inspect &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .Endpoints.docker.Host }}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can show you something like &lt;code&gt;unix:///var/run/docker.sock&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running remote Docker daemons
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Even if you are using Docker CE, the Docker client can connect to a remote server. If you know that the daemon is running on a remote server, because you have a VPS to which you want to connect using SSH, you could get a slightly different error from what I shared in the intro.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cannot connect to the Docker daemon at http://docker.example.com. Is the docker daemon running?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is still the same kind of error, but with an HTTP endpoint instead of the unix socket. This is special, since even when you don't use a unix domain socket on Linux, you use a TCP socket, not HTTP. When you actually try to use an HTTP URL as an endpoint, you will get "invalid bind address format".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://docker.example.com"&lt;/span&gt; docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Failed to initialize: unable to resolve docker endpoint: invalid bind address format: http://docker.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the time of writing this post, this URL is still the default &lt;a href="https://github.com/docker/cli/blob/667ece32cff54d8deeafdc54c9d86d808155d8c3/cli/connhelper/connhelper.go#L59" rel="noopener noreferrer"&gt;shown when the unix socket cannot be found through the SSH connection&lt;/a&gt;. You can reproduce it with the following command, assuming the hostname of your VPS is &lt;code&gt;myvps&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ssh://myvps/var/run/docke.sock"&lt;/span&gt; docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above command I intentionally set a wrong socket path using an SSH connection, but I can try TCP as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"tcp://127.0.0.1"&lt;/span&gt; docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we get the IP address in the error message, not the default domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cannot connect to the Docker daemon at tcp://127.0.0.1:2375. Is the docker daemon running?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if you try a domain name, you see the domain name in the error message, except when the domain name cannot be resolved to an IP address, in which case you have a DNS issue with a completely different error message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DOCKER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"tcp://myvips"&lt;/span&gt; docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error during connect: Get "http://myvips:2375/v1.47/containers/json": dial tcp: lookup myvips: no such host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is out of the scope of this post, but I still have to mention that the environment variable is not the only way to get this error. A Docker runtime context pointing at the wrong endpoint could cause it as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check if the Docker daemon is actually running
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;First of all, the "Find out which Docker variant you have" section could already help you, since finding out which Docker variant you have also requires being able to check the running processes, but there is more.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker CE status on Linux running under Systemd
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you followed the official documentation to install Docker CE, the Docker daemon is running under systemd. In that case the following command should tell you if it is running or not.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It actually works for non-official packages if Systemd was used to run the daemon. Since most of the Linux distributions today use Systemd, it usually works. On older Windows Subsystem for Linux versions, Systemd was not available, so you had to run a different command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;service docker status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This actually works even if you have systemd, since the &lt;code&gt;service&lt;/code&gt; script can recognize it and use systemctl commands behind the scenes.&lt;/p&gt;

&lt;p&gt;The output would be something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: enabled)
     Active: active (running) since Wed 2025-01-01 17:32:24 CET; 3 days ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is only the beginning of the output, but the important part is "&lt;code&gt;Active: active (running)&lt;/code&gt;". Depending on your terminal configuration, the circle next to the service name could be green when the service is running.&lt;/p&gt;

&lt;p&gt;If you have &lt;a href="https://docs.docker.com/engine/security/rootless/" rel="noopener noreferrer"&gt;Rootless Docker&lt;/a&gt;, the daemon is running as your non-root user, and the systemctl command is different:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl --user status docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that there is no &lt;code&gt;sudo&lt;/code&gt; in the command and we use the &lt;code&gt;--user&lt;/code&gt; flag.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker as a Snap package status
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you installed Docker as a Snap package, the following command will list the running services&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;snap services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Service                          Startup  Current   Notes
docker.dockerd                   enabled  active    -
docker.nvidia-container-toolkit  enabled  inactive  -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only services I have in this example are the Docker daemon and the Nvidia Container Toolkit, which is inactive. The "docker.dockerd" service is active, so it is running. If you have many Snap services, you can of course get the status of one service directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;snap services docker.dockerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Service         Startup  Current  Notes
docker.dockerd  enabled  active
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Docker CE status in Docker Desktop
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you have Docker Desktop, the Docker daemon is running in a virtual machine. If the host operating system is Linux, there is a systemctl command that shows the status of the &lt;code&gt;docker-desktop&lt;/code&gt; service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl --user status docker-desktop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is again the status of Docker Desktop, not of the daemon running in the virtual machine of Docker Desktop. If you want to know the status of the engine, open the GUI of Docker Desktop and look for the "Engine running" message, currently in the bottom left corner of the window.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check if the Docker unix domain socket exists
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;It is usually at &lt;code&gt;/var/run/docker.sock&lt;/code&gt;, but it can be changed, so just look for what the error message says, or run the command I already mentioned before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context inspect &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .Endpoints.docker.Host }}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it is a unix domain socket file, you should be able to check it by running the following command in which I will assume the path is the default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;file /var/run/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/run/docker.sock: socket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output also tells you whether the file is an actual socket. If it is not, you have to fix it, for example by removing the broken file and restarting the daemon so it can recreate the socket.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check if you have permission to the socket
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Maybe it is an existing socket, but you don't have permission to access it. The following command can reveal that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/run/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;srw-rw---- 1 root docker 0 Jan  3 12:49 /var/run/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It shows that the socket is owned by root and assigned to the "docker" group. &lt;code&gt;rw-rw----&lt;/code&gt; means that only the owner (root) and users in the group (docker) can read and write the file. So you are either using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to list containers, or you add your user to the "docker" group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker USERNAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the new group membership takes effect only after you log out and log in again. There is actually a third option that I sort of "invented", since I have never seen it anywhere else, but you can read about it in "&lt;a href="https://dev.to/rimelek/install-docker-and-portainer-in-a-vm-using-ansible-21ib#allow-nonroot-users-to-use-the-docker-commands"&gt;Allow non-root users to use the docker commands&lt;/a&gt;".&lt;/p&gt;
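&lt;p&gt;Whether your user is already in the group can be checked with &lt;code&gt;id&lt;/code&gt;; the group name "docker" comes from the default packaging and is an assumption here:&lt;/p&gt;

```shell
# List the current user's groups and look for "docker" as an exact match
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "user is in the docker group"
else
  echo "user is NOT in the docker group"
fi
```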

&lt;h3&gt;
  
  
  Make sure you are using the right socket
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As I already pointed out, some users install Docker CE when they actually want to use Docker Desktop. When the documentation says that adding the official package repository to the system is required and links to the configuration guide, they continue with the rest of the steps there, which are not just unnecessary, but also not recommended. The only package you need from the official repository is the Docker client, called "&lt;code&gt;docker-ce-cli&lt;/code&gt;", which will install "&lt;code&gt;docker-buildx-plugin&lt;/code&gt;" and "&lt;code&gt;docker-compose-plugin&lt;/code&gt;" as well.&lt;/p&gt;

&lt;p&gt;If you install "&lt;code&gt;docker-ce&lt;/code&gt;", you will have a second daemon outside of Docker Desktop. When Docker Desktop starts, it changes the runtime context to "desktop-linux", so even if you checked the context previously, it will have changed, and you will connect to another socket. This is one more time when the &lt;code&gt;docker context&lt;/code&gt; commands can be useful to see which context you are in. Also know that when you use the docker commands with &lt;code&gt;sudo&lt;/code&gt;, you run the Docker client as root, which has no configured access to the Docker Desktop socket in your home directory, so you connect to the daemon of Docker CE on the host, not to Docker Desktop. So even if the socket you check is accessible by your user, it is possible that it is not the one you wanted to use, and you can get an error message when trying to connect to the wrong daemon.&lt;/p&gt;

&lt;p&gt;If you have Rootless Docker, the situation is similar, since it has a different socket too, configured only for your non-root user, so using sudo would result in using another daemon.&lt;/p&gt;

&lt;p&gt;Don't forget to check the endpoint in the context using&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context inspect &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .Endpoints.docker.Host }}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and find out if this is the right context. It is also possible that something added an environment variable to your shell. Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_HOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to find out if this is the case. If it is, figure out where the variable was set (possibly in the &lt;code&gt;.bashrc&lt;/code&gt; file in your home directory) and remove or change it. This can also happen when you are running Docker in a CI/CD pipeline, although in that case it is not likely to be wrong. You should still check it to make sure.&lt;/p&gt;
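&lt;p&gt;A quick way to inspect the variable and clear it for the current shell session only; the &lt;code&gt;tcp://&lt;/code&gt; value below simulates a stale setting and is not a real endpoint:&lt;/p&gt;

```shell
# Simulate a stale value such as one left behind by another tool
export DOCKER_HOST="tcp://127.0.0.1:2375"
echo "${DOCKER_HOST:-DOCKER_HOST is not set}"

# Remove it for this shell session only; a permanent fix means editing
# the file that exports it, e.g. ~/.bashrc
unset DOCKER_HOST
echo "${DOCKER_HOST:-DOCKER_HOST is not set}"   # → DOCKER_HOST is not set
```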

&lt;p&gt;If you have multiple Docker installations on the same machine, or you changed the daemon configuration or used a non-official installer for the daemon part, it is possible that the socket in the error message is not the one that the daemon is using, so you need to reconfigure the client either by setting a Docker runtime context or the environment variable.&lt;/p&gt;

&lt;h2&gt;
  
  
  I got "is the docker daemon running", but I followed the official installation guide
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We had multiple cases on the &lt;a href="https://forums.docker.com/" rel="noopener noreferrer"&gt;Docker forum&lt;/a&gt; in which users stated they had followed the official documentation, &lt;a href="https://docs.docker.com/engine/install/ubuntu/" rel="noopener noreferrer"&gt;for example on Ubuntu&lt;/a&gt;, and they still got the error message.&lt;/p&gt;

&lt;p&gt;The problem is that people often ignore the "Next steps" section, which is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Continue to &lt;a href="https://docs.docker.com/engine/install/linux-postinstall/" rel="noopener noreferrer"&gt;Post-installation steps for Linux&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is where the group membership is set up. Even if you didn't ignore it and configured the user properly, it is possible that you have one of the previously mentioned issues caused by multiple Docker daemons on your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;By now you have learned that the error message is not a statement, but a question or an assumption. There is a general form of this error message, but the endpoint can take multiple forms. I hope I could help you understand the meaning of each, so you can make your Docker work again.&lt;/p&gt;

&lt;p&gt;You have also learned some of the risks of running multiple Docker daemons on the same machine, but there are other risks not included here, since those are not related to the discussed error messages.&lt;/p&gt;

&lt;p&gt;As always, I tried to include all the related problems and solutions that I know of, but obviously I can't always think of everything. If you are sure that you have none of the issues mentioned in this post, leave a comment here or on the &lt;a href="https://forums.docker.com/" rel="noopener noreferrer"&gt;official Docker forums&lt;/a&gt; so we can figure out what happened to your system. Please share what you have actually tried, and not just that you tried everything mentioned in this or any other post, since it is easy to miss something!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Which Docker variant am I using and where is the daemon running?</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Thu, 26 Dec 2024 16:40:39 +0000</pubDate>
      <link>https://dev.to/rimelek/which-docker-variant-am-i-using-and-where-is-the-daemon-running-44bc</link>
      <guid>https://dev.to/rimelek/which-docker-variant-am-i-using-and-where-is-the-daemon-running-44bc</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;People often install Docker CE and Docker Desktop when they only want to have the Desktop. Then they actually try to connect to the wrong daemon. I wrote about the different kinds of Docker installations in "&lt;a href="https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn"&gt;You run containers, not dockers - Discussing Docker variants, components and versioning&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;So sometimes you want to know which Docker variant you are running. Maybe you don't remember, or accidentally enabled an option while installing the host operating system, or someone else installed Docker and you had to take over the project. When you ask a question on the official &lt;a href="https://forums.docker.com/" rel="noopener noreferrer"&gt;Docker Community Forums&lt;/a&gt;, we also need to know which one you are using, since the cause of an issue and the solution could be completely different.&lt;/p&gt;

&lt;p&gt;You can find out which Docker you installed, but first you need to know what operating system you are using and what package managers it supports. Then you need to know how to list the installed packages using the supported package managers; listing processes can help too.&lt;/p&gt;

&lt;p&gt;In this post I try to summarize the different kinds of Docker installations and how you can tell which variant you have.&lt;br&gt;
You can also have one that I don't write about in this post, but I hope you can use the methods to discover what you or anyone else installed before.&lt;/p&gt;
&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Docker Engine on Linux

&lt;ul&gt;
&lt;li&gt;The number of dockerd processes running directly on Linux&lt;/li&gt;
&lt;li&gt;Docker as a Snap package&lt;/li&gt;
&lt;li&gt;
Docker installed using the default package manager of the Linux distribution

&lt;ul&gt;
&lt;li&gt;Docker on Debian-based Linux distributions&lt;/li&gt;
&lt;li&gt;Docker on Red Hat-based Linux distributions&lt;/li&gt;
&lt;li&gt;Find non-snap dockerd processes on Linux&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Docker Desktop

&lt;ul&gt;
&lt;li&gt;Docker Desktop on Linux&lt;/li&gt;
&lt;li&gt;Docker Desktop on macOS&lt;/li&gt;
&lt;li&gt;Docker Desktop on Windows&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Docker Contexts&lt;/li&gt;
&lt;li&gt;Remote Docker daemon&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Docker Engine on Linux
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The number of dockerd processes running directly on Linux
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;When using the Docker Engine on Linux directly, based on the &lt;a href="https://mobyproject.org/" rel="noopener noreferrer"&gt;Moby project&lt;/a&gt;, you can run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pidof dockerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That should show a single number, the process ID of the Docker daemon. If you get none, the daemon is either not running, or you most likely run it on a remote machine or in a virtual machine. It is less likely, but also possible, that you found another non-official way to install Docker, and the daemon executable has a different name. Then you need to find the maintainer of that Docker daemon and ask for their help.&lt;/p&gt;

&lt;p&gt;If you get multiple process IDs, that means you have multiple Docker daemons running, which most likely happened by accident. You should run only one Docker daemon, unless you are really experienced and you know how to make sure that these daemons are using different sockets, data directories and iptables rules (or iptables management is enabled only for one daemon).&lt;/p&gt;
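&lt;p&gt;If you just want the number of daemons, counting the matches is enough; this assumes &lt;code&gt;pgrep&lt;/code&gt; from procps is available:&lt;/p&gt;

```shell
# Count running dockerd processes; 0 means no daemon is running here,
# more than 1 usually means an accidental second installation
COUNT=$( (pgrep -x dockerd || true) | wc -l )
echo "dockerd processes: $COUNT"
```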

&lt;h3&gt;
  
  
  Docker as a Snap package
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;When Docker is installed as a &lt;a href="https://snapcraft.io/docker" rel="noopener noreferrer"&gt;Snap package&lt;/a&gt; on Linux, the following command can be used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;snap list docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you get an error message, you either have no snap package manager at all, or Docker is not installed as a snap package. Otherwise, you would get something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name    Version  Rev   Tracking       Publisher   Notes
docker  27.2.0   2964  latest/stable  canonical✓  -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The exact output can be different depending on the version you have. Then you can find the process in the process list using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ps aux | &lt;span class="nb"&gt;grep &lt;/span&gt;dockerd | &lt;span class="nb"&gt;grep &lt;/span&gt;snap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the Docker daemon installed as a Snap package is also running, you will get an output like below:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;root         916  0.0  1.6 2037476 67312 ?       Ssl  15:51   0:02 dockerd --group docker --exec-root=/run/snap.docker --data-root=/var/snap/docker/common/var-lib-docker --pidfile=/run/snap.docker/docker.pid --config-file=/var/snap/docker/2964/config/daemon.json&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker installed using the default package manager of the Linux distribution
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Docker on Debian-based Linux distributions
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;On Debian-based Linux distributions, you can use &lt;code&gt;dpkg&lt;/code&gt; to find out if Docker was installed as an APT package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dpkg &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s1"&gt;'docker*'&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s1"&gt;'^ii'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output would be something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ii  docker-buildx-plugin      0.19.2-1~ubuntu.24.04~noble   arm64        Docker Buildx cli plugin.
ii  docker-ce                 5:27.4.0-1~ubuntu.24.04~noble arm64        Docker: the open-source application container engine
ii  docker-ce-cli             5:27.4.0-1~ubuntu.24.04~noble arm64        Docker CLI: the open-source application container engine
ii  docker-ce-rootless-extras 5:27.4.0-1~ubuntu.24.04~noble arm64        Rootless support for Docker.
ii  docker-compose-plugin     2.31.0-1~ubuntu.24.04~noble   arm64        Docker Compose (V2) plugin for the Docker CLI.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Recent APT versions also support the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt list &lt;span class="nt"&gt;--installed&lt;/span&gt; &lt;span class="s1"&gt;'docker*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the output would be similar to the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Listing... Done
docker-buildx-plugin/noble,now 0.19.2-1~ubuntu.24.04~noble arm64 [installed]
docker-ce-cli/noble,now 5:27.4.0-1~ubuntu.24.04~noble arm64 [installed]
docker-ce-rootless-extras/noble,now 5:27.4.0-1~ubuntu.24.04~noble arm64 [installed,automatic]
docker-ce/noble,now 5:27.4.0-1~ubuntu.24.04~noble arm64 [installed]
docker-compose-plugin/noble,now 2.31.0-1~ubuntu.24.04~noble arm64 [installed]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above outputs show that I installed the Docker CE package. If you installed &lt;code&gt;docker.io&lt;/code&gt; instead, you would get one of the outputs below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ii  docker.io         26.1.3-0ubuntu1~24.04.1 arm64        Linux container runtime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Listing... Done
docker.io/noble-updates,now 26.1.3-0ubuntu1~24.04.1 arm64 [installed]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Docker on Red Hat-based Linux distributions
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Red Hat-based Linux distributions might use &lt;code&gt;dnf&lt;/code&gt; or &lt;code&gt;yum&lt;/code&gt;. Then you can search for packages by running the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dnf list &lt;span class="nt"&gt;--installed&lt;/span&gt; &lt;span class="s1"&gt;'docker*'&lt;/span&gt;
&lt;span class="c"&gt;# or&lt;/span&gt;
yum list &lt;span class="nt"&gt;--installed&lt;/span&gt; &lt;span class="s1"&gt;'docker*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Find non-snap dockerd processes on Linux
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Regardless of the package manager and the exact package, you can find the Docker daemon process from the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ps auxf | &lt;span class="nb"&gt;grep &lt;/span&gt;dockerd | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s1"&gt;'snap\|grep'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you would get something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;root        5584  0.0  1.7 1966352 68348 ?       Ssl  20:08   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock&lt;/code&gt;&lt;/p&gt;
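&lt;p&gt;The two &lt;code&gt;grep&lt;/code&gt; stages in the pipeline deserve a note: the first selects lines containing "dockerd", and the second (&lt;code&gt;grep -v 'snap\|grep'&lt;/code&gt;) drops snap-packaged daemons and the grep process itself. The sketch below demonstrates the same filtering on hypothetical sample lines instead of a live process list:&lt;/p&gt;

```shell
# Sample process-list lines (paths are illustrative, not from a real system)
sample='root  5584  /usr/bin/dockerd -H fd://
root  7001  /snap/docker/current/bin/dockerd
user  7100  grep dockerd'

# Keep dockerd lines, then drop snap daemons and the grep process itself
result=$(printf '%s\n' "$sample" | grep dockerd | grep -v 'snap\|grep')
echo "$result"
```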

&lt;p&gt;If you see "rootlesskit" in the output like below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu      1767  0.0  0.2 1826120 11392 ?       Ssl  16:25   0:00  \_ rootlesskit --state-dir=/run/user/1000/dockerd-rootless --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /usr/bin/dockerd-rootless.sh
ubuntu      1778  0.0  0.2 1899656 9984 ?        Sl   16:25   0:00      \_ /proc/self/exe --state-dir=/run/user/1000/dockerd-rootless --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /usr/bin/dockerd-rootless.sh
ubuntu      1808  0.0  1.6 2039632 66504 ?       Sl   16:25   0:00      |   \_ dockerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You have &lt;a href="https://docs.docker.com/engine/security/rootless/" rel="noopener noreferrer"&gt;Rootless Docker&lt;/a&gt;, which means the daemon is running as your non-root user and is using a different socket and Docker data directory.&lt;/p&gt;
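&lt;p&gt;This check can be scripted as well; the sketch below classifies a sample process-list line (modeled on the output above) by looking for "rootlesskit":&lt;/p&gt;

```shell
# A line like the ones above; rootless setups show a rootlesskit parent process
line='ubuntu  1767  rootlesskit --state-dir=/run/user/1000/dockerd-rootless'

case "$line" in
  *rootlesskit*) mode="rootless" ;;  # daemon runs as a non-root user
  *)             mode="rootful"  ;;  # classic root-owned daemon
esac
echo "$mode"
```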

&lt;h2&gt;
  
  
  Docker Desktop
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker Desktop on Linux
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Docker Desktop can be installed on Linux, macOS and Windows (except Windows Server), so the way to find the process in a process list differs by platform. In this post I focus on Linux, because that is where the multiple ways to install Docker are most likely to cause confusion.&lt;/p&gt;

&lt;p&gt;If you have Docker Desktop installed on Linux, the following command would reveal it to you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ps aux | &lt;span class="nb"&gt;grep &lt;/span&gt;docker-desktop | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;takacsa+    9804  0.0  0.5 1335920 81420 ?       Ssl  21:43   0:00 /opt/docker-desktop/bin/com.docker.backend
takacsa+    9818  0.2  0.6 1402484 110804 ?      Sl   21:43   0:03 /opt/docker-desktop/bin/com.docker.backend run
takacsa+    9863  0.3  1.2 1187000148 198528 ?   Sl   21:43   0:05 /opt/docker-desktop/Docker Desktop --reason=open-tray --analytics-enabled=false --name=dashboard
takacsa+    9912  0.0  0.3 33806596 52224 ?      S    21:43   0:00 /opt/docker-desktop/Docker Desktop --type=zygote --no-zygote-sandbox
takacsa+    9913  0.0  0.3 33806588 52352 ?      S    21:43   0:00 /opt/docker-desktop/Docker Desktop --type=zygote
takacsa+    9915  0.0  0.0 33806616 12876 ?      S    21:43   0:00 /opt/docker-desktop/Docker Desktop --type=zygote
takacsa+    9972  0.1  0.8 34220684 144844 ?     Sl   21:43   0:02 /opt/docker-desktop/Docker Desktop --type=gpu-process --enable-crash-reporter=978d60a8-7422-4f87-adcc-19c08da28830,no_channel --user-data-dir=/home/takacsakos/.config/Docker Desktop --gpu-preferences=WAAAAAAAAAAgAAAEAAAAAAAAAAAAAAAAAABgAAEAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAGAAAAAAAAAAYAAAAAAAAAAgAAAAAAAAACAAAAAAAAAAIAAAAAAAAAA== --shared-files --field-trial-handle=3,i,9326157886237964838,13802503777149121506,262144 --disable-features=SpareRendererForSitePerProcess --variations-seed-version
takacsa+    9977  0.0  0.4 33872616 68480 ?      Sl   21:43   0:00 /opt/docker-desktop/Docker Desktop --type=utility --utility-sub-type=network.mojom.NetworkService --lang=en-US --service-sandbox-type=none --enable-crash-reporter=978d60a8-7422-4f87-adcc-19c08da28830,no_channel --user-data-dir=/home/takacsakos/.config/Docker Desktop --standard-schemes=app --secure-schemes=app --fetch-schemes=scout-graphql,scout-rest,docker-hub,docker-extensions-be,project-api --shared-files=v8_context_snapshot_data:100 --field-trial-handle=3,i,9326157886237964838,13802503777149121506,262144 --disable-features=SpareRendererForSitePerProcess --variations-seed-version
takacsa+   10050  0.8  0.9 1186775380 159476 ?   Sl   21:43   0:11 /opt/docker-desktop/Docker Desktop --type=renderer --enable-crash-reporter=978d60a8-7422-4f87-adcc-19c08da28830,no_channel --user-data-dir=/home/takacsakos/.config/Docker Desktop --standard-schemes=app --secure-schemes=app --fetch-schemes=scout-graphql,scout-rest,docker-hub,docker-extensions-be,project-api --app-path=/opt/docker-desktop/resources/app.asar --enable-sandbox --lang=en-US --num-raster-threads=4 --enable-main-frame-before-activation --renderer-client-id=4 --time-ticks-at-unix-epoch=-1734808931784927 --launch-time-ticks=4855412489 --shared-files=v8_context_snapshot_data:100 --field-trial-handle=3,i,9326157886237964838,13802503777149121506,262144 --disable-features=SpareRendererForSitePerProcess --variations-seed-version --desktop-ui-launch-options={"isPackaged":true,"isMainWindow":true,"isE2eTest":false,"needsPrimaryIpcClient":true,"needsBackendErrorsIpcClient":true}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Linux, where &lt;code&gt;ps&lt;/code&gt; supports the &lt;code&gt;f&lt;/code&gt; option to show a process tree, you can try the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ps auxf | &lt;span class="nb"&gt;grep &lt;/span&gt;docker-desktop | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;takacsa+    9804  0.0  0.5 1335920 81420 ?       Ssl  21:43   0:00  \_ /opt/docker-desktop/bin/com.docker.backend
takacsa+    9818  0.2  0.6 1402484 110804 ?      Sl   21:43   0:03  |   \_ /opt/docker-desktop/bin/com.docker.backend run
takacsa+    9863  0.3  1.2 1187000148 198588 ?   Sl   21:43   0:05  |       \_ /opt/docker-desktop/Docker Desktop --reason=open-tray --analytics-enabled=false --name=dashboard
takacsa+    9912  0.0  0.3 33806596 52224 ?      S    21:43   0:00  |           \_ /opt/docker-desktop/Docker Desktop --type=zygote --no-zygote-sandbox
takacsa+    9972  0.1  0.8 34220684 144844 ?     Sl   21:43   0:02  |           |   \_ /opt/docker-desktop/Docker Desktop --type=gpu-process --enable-crash-reporter=978d60a8-7422-4f87-adcc-19c08da28830,no_channel --user-data-dir=/home/takacsakos/.config/Docker Desktop --gpu-preferences=WAAAAAAAAAAgAAAEAAAAAAAAAAAAAAAAAABgAAEAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAGAAAAAAAAAAYAAAAAAAAAAgAAAAAAAAACAAAAAAAAAAIAAAAAAAAAA== --shared-files --field-trial-handle=3,i,9326157886237964838,13802503777149121506,262144 --disable-features=SpareRendererForSitePerProcess --variations-seed-version
takacsa+    9913  0.0  0.3 33806588 52352 ?      S    21:43   0:00  |           \_ /opt/docker-desktop/Docker Desktop --type=zygote
takacsa+    9915  0.0  0.0 33806616 12876 ?      S    21:43   0:00  |           |   \_ /opt/docker-desktop/Docker Desktop --type=zygote
takacsa+   10050  0.7  0.9 1186775380 159476 ?   Sl   21:43   0:11  |           |       \_ /opt/docker-desktop/Docker Desktop --type=renderer --enable-crash-reporter=978d60a8-7422-4f87-adcc-19c08da28830,no_channel --user-data-dir=/home/takacsakos/.config/Docker Desktop --standard-schemes=app --secure-schemes=app --fetch-schemes=scout-graphql,scout-rest,docker-hub,docker-extensions-be,project-api --app-path=/opt/docker-desktop/resources/app.asar --enable-sandbox --lang=en-US --num-raster-threads=4 --enable-main-frame-before-activation --renderer-client-id=4 --time-ticks-at-unix-epoch=-1734808931784927 --launch-time-ticks=4855412489 --shared-files=v8_context_snapshot_data:100 --field-trial-handle=3,i,9326157886237964838,13802503777149121506,262144 --disable-features=SpareRendererForSitePerProcess --variations-seed-version --desktop-ui-launch-options={"isPackaged":true,"isMainWindow":true,"isE2eTest":false,"needsPrimaryIpcClient":true,"needsBackendErrorsIpcClient":true}
takacsa+    9977  0.0  0.4 33872616 68480 ?      Sl   21:43   0:00  |           \_ /opt/docker-desktop/Docker Desktop --type=utility --utility-sub-type=network.mojom.NetworkService --lang=en-US --service-sandbox-type=none --enable-crash-reporter=978d60a8-7422-4f87-adcc-19c08da28830,no_channel --user-data-dir=/home/takacsakos/.config/Docker Desktop --standard-schemes=app --secure-schemes=app --fetch-schemes=scout-graphql,scout-rest,docker-hub,docker-extensions-be,project-api --shared-files=v8_context_snapshot_data:100 --field-trial-handle=3,i,9326157886237964838,13802503777149121506,262144 --disable-features=SpareRendererForSitePerProcess --variations-seed-version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the virtual machine is running too, you will see the following as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;takacsa+   14622  0.0  0.0 4060012 2432 ?        Sl   22:34   0:00  |       \_ /opt/docker-desktop/bin/virtiofsd --socket-path=/home/takacsakos/.docker/desktop/virtiofs.sock0 -o cache=auto --shared-dir=/home --sandbox=none --announce-submounts --xattr --xattrmap=:prefix:all::user.docker.desktop.::bad:all::: --translate-uid squash-guest:0:1000:4294967295 --translate-gid squash-guest:0:1000:4294967295
takacsa+   14670 97.7  4.4 5389724 728184 ?      Sl   22:34   0:08  |       \_ qemu-system-x86_64 -accel kvm -cpu host -machine q35 -m 3958 -smp 8 -kernel /opt/docker-desktop/linuxkit/kernel -append init=/init loglevel=1 root=/dev/vdb rootfstype=erofs ro vsyscall=emulate panic=0 eth0.dhcp eth1.dhcp linuxkit.unified_cgroup_hierarchy=1     vpnkit.connect=tcp+connect://192.168.65.2:40409 console=ttyS0 -serial pipe:/tmp/qemu-console888894299/fifo -netdev user,id=net0,ipv6=off,net=192.168.65.0/24,dhcpstart=192.168.65.9 -device virtio-net-pci,netdev=net0 -vga none -nographic -monitor none -drive if=none,file=/home/takacsakos/.docker/desktop/vms/0/data/Docker.raw,format=raw,id=hd0 -device virtio-blk-pci,drive=hd0,serial=dummyserial -drive if=none,file=/opt/docker-desktop/linuxkit/boot.img,format=raw,id=hd1,readonly=on -device virtio-blk-pci,drive=hd1,serial=dummyserial -object memory-backend-memfd,id=mem,size=3958M,share=on -numa node,memdev=mem -chardev socket,id=char0,path=/home/takacsakos/.docker/desktop/virtiofs.sock0 -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=virtiofs0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Docker Desktop on macOS
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;You don't have as many ways to install Docker on macOS as you have on Linux, but you could still have Docker Desktop or Rancher Desktop, or you could have only the client while managing a remote Docker daemon on a remote server, so it can still be useful to find out whether Docker Desktop is installed. Of course, on macOS, you would also have the whale icon at the top of the screen, but for those who prefer the terminal, or in case the icon is not shown, the command below can tell you if Docker Desktop is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ps aux | &lt;span class="nb"&gt;grep &lt;/span&gt;Docker.app | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is quite long, but you would find the below part as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/Applications/Docker.app/Contents/MacOS/Docker Desktop.app/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Docker.app&lt;/code&gt; is also the name of the application that you can look for on your Mac if Docker Desktop is not running, but you want to know whether it is installed. Since the app name could change in a future release, searching for &lt;code&gt;docker&lt;/code&gt; in the terminal instead of &lt;code&gt;Docker.app&lt;/code&gt; can also be enough, but then a virtual machine or any path containing "docker" may appear in the process list even if it has nothing to do with Docker Desktop.&lt;/p&gt;
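&lt;p&gt;If you only want to know whether the app bundle is present, for example when Docker Desktop is not running, a simple path check is enough. The sketch below assumes the default install location:&lt;/p&gt;

```shell
# Check for the Docker Desktop app bundle at its default macOS location
if [ -d "/Applications/Docker.app" ]; then
  status="installed"
else
  status="not found"
fi
echo "Docker Desktop: $status"
```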

&lt;h3&gt;
  
  
  Docker Desktop on Windows
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;On Windows, you can use the "Task Manager" desktop app or run the following command in PowerShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Get-Process | Select-String docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application on Windows is called "Docker Desktop".&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Contexts
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;You can also check what contexts are configured for the Docker client. Regardless of which Docker you are using, the client is most likely using the same &lt;code&gt;$HOME/.docker&lt;/code&gt; folder which contains information about the contexts. Although this directory can be changed, I don't remember any client changing it, except when &lt;a href="https://www.youtube.com/watch?v=jaj5OCFQHxU" rel="noopener noreferrer"&gt;I made a video&lt;/a&gt; for which &lt;a href="https://gist.github.com/rimelek/f10d9e301f7686cd82938c5128ea7595" rel="noopener noreferrer"&gt;I wrote a script&lt;/a&gt; to do it. The following command can list all the contexts you have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output could show something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME              DESCRIPTION                               DOCKER ENDPOINT                                       ERROR
default           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                           
desktop-linux *   Docker Desktop                            unix:///home/takacsakos/.docker/desktop/docker.sock   
rootless          Rootless mode                             unix:///run/user/1000/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The active context (which you are using when running docker commands) is the one with the "*" character on the right side of the context name.&lt;/p&gt;
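&lt;p&gt;Picking out the active context can also be scripted. The sketch below parses sample &lt;code&gt;docker context ls&lt;/code&gt; output (like the table above) with awk instead of a live call, looking for the "*" marker in the second column:&lt;/p&gt;

```shell
# Sample `docker context ls` output; a real run would pipe the command instead
sample='default           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
desktop-linux *   Docker Desktop                            unix:///home/takacsakos/.docker/desktop/docker.sock
rootless          Rootless mode                             unix:///run/user/1000/docker.sock'

# The active context has "*" as its second whitespace-separated field
active=$(printf '%s\n' "$sample" | awk '$2 == "*" { print $1 }')
echo "$active"
```

&lt;p&gt;On a real system you could also simply run &lt;code&gt;docker context inspect&lt;/code&gt;, which prints the details of the active context, including its endpoint.&lt;/p&gt;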

&lt;p&gt;The endpoint of the default context is usually the following Unix domain socket (on Linux and macOS): &lt;code&gt;unix:///var/run/docker.sock&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Just because you see a local Unix socket, it doesn't necessarily mean you are using a local daemon directly on the host, although usually that is the case.&lt;br&gt;
So the default context is usually for the Docker Engine running directly on the host.&lt;/p&gt;

&lt;p&gt;Now notice the "desktop-linux" context, which shows another Unix domain socket, even though we know that the Docker Engine is running in the virtual machine of Docker Desktop. In fact, Docker Desktop can also use the default context with the default endpoint depending on how you installed it.&lt;br&gt;
You always need to check the context name and the description as well. If you don't know what they mean, you can still ask the community, but make sure you search for it first.&lt;/p&gt;

&lt;p&gt;You can also notice the "rootless" context. Anyone could name a context "rootless", but this is usually the rootless version of Docker CE. That means you most likely have Docker CE, or at least a Docker daemon that runs directly on your Linux host.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remote Docker daemon
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Sometimes you have the client on your local machine, but the daemon is not just in a virtual machine, but in a remote machine over which you have no control at all. If you control that remote machine, you most likely know about it, but some &lt;a href="https://circleci.com/docs/using-docker/" rel="noopener noreferrer"&gt;CI/CD tools like CircleCI&lt;/a&gt; can support a Docker Engine which is not directly running in your environment. People often notice it when they try to access containers using their IP addresses, and it doesn't work. The same problem occurs when you use Docker Desktop since Docker Desktop runs the Docker daemon in a virtual machine, but another case can be when your processes run in a container while the Docker daemon is running either on the host or more likely in another container or a remote Docker host. Whether it is a virtual machine or physical machine is irrelevant.&lt;/p&gt;

&lt;p&gt;If you are not the one who installed Docker and you don't even control the environment, but you paid for an online service, always read the documentation and ask their support or community whenever possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Maybe you have Rancher Desktop, which can use Docker as its container engine, or you could have Podman or Podman Desktop, which are not Docker at all, but the method is the same.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You use the package manager to list installed packages filtered by the name of the software,&lt;/li&gt;
&lt;li&gt;or try to look for it in the process list,&lt;/li&gt;
&lt;li&gt;or use &lt;code&gt;docker context ls&lt;/code&gt; to learn about Docker contexts which can help you find out which Docker daemon you are trying to connect to.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the operating system is not Linux, you can still use the tools supported by the operating system to list running processes and installed apps. I would not recommend that beginners run multiple Docker installations on the same machine, but if you have any of the mentioned apps, or even something I haven't mentioned, be aware of which one you are running at the moment and share that with people when you need help from them.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using gVisor's container runtime in Docker Desktop</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Tue, 26 Nov 2024 22:41:49 +0000</pubDate>
      <link>https://dev.to/rimelek/using-gvisors-container-runtime-in-docker-desktop-374m</link>
      <guid>https://dev.to/rimelek/using-gvisors-container-runtime-in-docker-desktop-374m</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I &lt;a href="https://dev.to/rimelek/comparing-3-docker-container-runtimes-runc-gvisor-and-kata-containers-16j"&gt;compared 3 container runtimes&lt;/a&gt; previously, which included runsc from gVisor. Since the default is runc, that is the default in Docker Desktop as well. The Kata runtime could not be tested in Docker Desktop, since that would require nested virtualization in the VM of Docker Desktop, so runsc is the only runtime from that comparison we could try in Docker Desktop which isn't already in it.&lt;/p&gt;

&lt;p&gt;The only question is how we could copy the runtime binary into Docker Desktop. The same question could be asked about any binary in general, since the root filesystem of Docker Desktop is read-only, but the runtime is executed by the Docker daemon, so copying it actually makes sense.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/h6_VvlOdAuY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Download runsc into Docker Desktop&lt;/li&gt;
&lt;li&gt;Configure the Docker daemon to use the runtime&lt;/li&gt;
&lt;li&gt;Download and install runsc in one step&lt;/li&gt;
&lt;li&gt;Testing runsc after the installation&lt;/li&gt;
&lt;li&gt;Reloading the internal daemon config to support Windows&lt;/li&gt;
&lt;li&gt;Possible error messages&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Download runsc into Docker Desktop
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If a runtime can be downloaded directly without using any package manager, you can download it to a location which can be read by the Docker daemon. There is one location that we know the Docker daemon can read and we can write: a local volume. So we will need a container that downloads runsc to a local volume.&lt;/p&gt;

&lt;p&gt;First we will need to check the official documentation for an &lt;a href="https://gvisor.dev/docs/user_guide/install/#install-latest" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt;. Although the documentation mentions installing specific releases as well, I couldn't figure out how, and the link they provide to the releases on GitHub shows that there are no releases at all, so we will install the latest version.&lt;/p&gt;

&lt;p&gt;I used the script from the documentation, except that I removed the parentheses, and I also needed to remove &lt;code&gt;sudo&lt;/code&gt; from the beginning of the last line. We don't need it, since we will use the root user in the container. I will use the script directly in a compose file, so I had to escape all the dollar characters with a second dollar character. This is the compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;runsc-installer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpine:3.20&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/bin&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;set -e&lt;/span&gt;
        &lt;span class="s"&gt;ARCH=$(uname -m)&lt;/span&gt;
        &lt;span class="s"&gt;URL=https://storage.googleapis.com/gvisor/releases/release/latest/$${ARCH}&lt;/span&gt;
        &lt;span class="s"&gt;wget $${URL}/runsc $${URL}/runsc.sha512 \&lt;/span&gt;
          &lt;span class="s"&gt;$${URL}/containerd-shim-runsc-v1 $${URL}/containerd-shim-runsc-v1.sha512&lt;/span&gt;
        &lt;span class="s"&gt;sha512sum -c runsc.sha512 \&lt;/span&gt;
          &lt;span class="s"&gt;-c containerd-shim-runsc-v1.sha512&lt;/span&gt;
        &lt;span class="s"&gt;rm -f *.sha512&lt;/span&gt;
        &lt;span class="s"&gt;chmod a+rx runsc containerd-shim-runsc-v1&lt;/span&gt;
        &lt;span class="s"&gt;mv runsc containerd-shim-runsc-v1 /usr/local/bin&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runsc-runtime-binaries&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The chance of you using &lt;code&gt;runsc-runtime-binaries&lt;/code&gt; as a volume for something else is very low. But if you do have it, make sure you use a different volume name.&lt;/p&gt;

&lt;p&gt;Place the file anywhere as &lt;code&gt;compose.yml&lt;/code&gt; and run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;docker compose run --rm runsc-installer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see the download progress, which looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Connecting to storage.googleapis.com (172.217.19.123:443)
saving to 'runsc'
runsc                100% |****************************************************************| 60.2M  0:00:00 ETA
'runsc' saved
Connecting to storage.googleapis.com (172.217.19.123:443)
saving to 'runsc.sha512'
runsc.sha512         100% |****************************************************************|   136  0:00:00 ETA
'runsc.sha512' saved
Connecting to storage.googleapis.com (172.217.19.123:443)
saving to 'containerd-shim-runsc-v1'
containerd-shim-runs 100% |****************************************************************| 27.7M  0:00:00 ETA
'containerd-shim-runsc-v1' saved
Connecting to storage.googleapis.com (172.217.19.123:443)
saving to 'containerd-shim-runsc-v1.sha512'
containerd-shim-runs 100% |****************************************************************|   155  0:00:00 ETA
'containerd-shim-runsc-v1.sha512' saved
runsc: OK
containerd-shim-runsc-v1: OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whenever a new version comes out, you can run the command again, and it will replace the old files.&lt;/p&gt;
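&lt;p&gt;The &lt;code&gt;sha512sum -c&lt;/code&gt; step in the script above is what verifies the downloaded binaries against their published checksums. The same mechanism can be demonstrated with a throwaway file (the file name here is just an example, not a real binary):&lt;/p&gt;

```shell
# Create a demo file and its checksum file, then verify it the same way
# the installer verifies runsc
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo 'example binary contents' > runsc-demo
sha512sum runsc-demo > runsc-demo.sha512

# Verification prints "<file>: OK" on success and fails otherwise
check=$(sha512sum -c runsc-demo.sha512)
echo "$check"
```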

&lt;h2&gt;
  
  
  Configure the Docker daemon to use the runtime
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As I also mentioned in &lt;a href="https://dev.to/rimelek/everything-about-docker-volumes-1ib0"&gt;Everything about Docker volumes&lt;/a&gt;, you can inspect a volume and see the mount point&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;docker volume inspect runsc-runtime-binaries&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is my output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CreatedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-10-28T17:21:24Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Labels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"com.docker.compose.project"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"03-gvisor-in-docker-desktop-data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"com.docker.compose.version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2.29.7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"com.docker.compose.volume"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"data"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Mountpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/var/lib/docker/volumes/runsc-runtime-binaries/_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"runsc-runtime-binaries"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Scope"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"local"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Normally a mount point is the location you mount something to, not the location you mount it from, but I guess it is called a mount point because of what I described in the linked tutorial: unless you are using the default local volumes, this is where the data could be mounted to. Now that we know the path, the following script can be used to add the runtime to the daemon config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nv"&gt;volume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"runsc-runtime-binaries"&lt;/span&gt;
&lt;span class="nv"&gt;volume_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/var/lib/docker/volumes/&lt;/span&gt;&lt;span class="nv"&gt;$volume&lt;/span&gt;&lt;span class="s2"&gt;/_data"&lt;/span&gt;

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="s2"&gt;"type=bind,source=&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.docker,target=/etc/docker"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--mount&lt;/span&gt; &lt;span class="s2"&gt;"type=volume,source=&lt;/span&gt;&lt;span class="nv"&gt;$volume&lt;/span&gt;&lt;span class="s2"&gt;,target=&lt;/span&gt;&lt;span class="nv"&gt;$volume_path&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  ubuntu &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$volume_path&lt;/span&gt;&lt;span class="s2"&gt;/runsc"&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script also mounts &lt;code&gt;$HOME/.docker&lt;/code&gt; to &lt;code&gt;/etc/docker&lt;/code&gt; in the container, since it contains the &lt;code&gt;daemon.json&lt;/code&gt; and &lt;code&gt;runsc install&lt;/code&gt; will add the parameters to &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; by default and back up the original file as &lt;code&gt;daemon.json~&lt;/code&gt;. If you save the script as &lt;code&gt;runsc-install.sh&lt;/code&gt;, this is how you could run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./runsc-install.sh
./runsc-install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script would work only on Linux and macOS, and we would have other problems as well, so before doing too much this way, let's jump to the next section and introduce Docker Compose.&lt;/p&gt;

&lt;h2&gt;
  
  
  Download and install runsc in one step
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If we want to make the installer simpler, we can add the "install" command to the compose file. This means that we have to mount &lt;code&gt;$HOME/.docker&lt;/code&gt; in the compose file. We should also remove the last line of the original script, because we don't want to move the binaries to &lt;code&gt;/usr/local/bin&lt;/code&gt; anymore: &lt;code&gt;runsc install&lt;/code&gt; will need to run from a different folder, and we need only one mount for the volume. So we change the mount definition of the volume too. We also want a platform-independent way to refer to our home folder, so we replace &lt;code&gt;$HOME&lt;/code&gt; with the tilde character (&lt;code&gt;~&lt;/code&gt;), which, to my surprise, works in the compose file even on Windows. And finally, we run the installer. These are the last three lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;dest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/var/lib/docker/volumes/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RUNSC_VOLUME_NAME&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;runsc&lt;/span&gt;&lt;span class="p"&gt;-runtime-binaries&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/_data/"&lt;/span&gt;
&lt;span class="nb"&gt;mv &lt;/span&gt;runsc containerd-shim-runsc-v1 &lt;span class="nv"&gt;$$&lt;/span&gt;dest
&lt;span class="nv"&gt;$$&lt;/span&gt;dest/runsc &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that &lt;code&gt;$dest&lt;/code&gt; is escaped with a second dollar character, but &lt;code&gt;$RUNSC_VOLUME_NAME&lt;/code&gt; isn't. This is because the volume name will be a compose parameter with a default value, so it has to be interpolated by Compose. This is the full compose file:&lt;br&gt;
&lt;/p&gt;
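The `${VAR:-default}` expansion that Compose interpolates is also plain POSIX shell syntax, so the resulting path can be previewed locally. A minimal illustration (not part of the original setup):

```shell
# Compose resolves ${RUNSC_VOLUME_NAME:-runsc-runtime-binaries} itself;
# the same syntax exists in POSIX shell, so we can preview the result.
unset RUNSC_VOLUME_NAME
dest_default="/var/lib/docker/volumes/${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}/_data/"
echo "$dest_default"   # → /var/lib/docker/volumes/runsc-runtime-binaries/_data/

RUNSC_VOLUME_NAME="runsc-bin"
dest_custom="/var/lib/docker/volumes/${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}/_data/"
echo "$dest_custom"    # → /var/lib/docker/volumes/runsc-bin/_data/
```

The escaped `$$dest`, on the other hand, reaches the container's shell as a literal `$dest`, so it is expanded only inside the container.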

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;runsc-installer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpine:3.20&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/volumes/${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}/_data/&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/.docker&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/docker&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;set -e&lt;/span&gt;
        &lt;span class="s"&gt;ARCH=$(uname -m)&lt;/span&gt;
        &lt;span class="s"&gt;URL=https://storage.googleapis.com/gvisor/releases/release/latest/$${ARCH}&lt;/span&gt;
        &lt;span class="s"&gt;wget $${URL}/runsc $${URL}/runsc.sha512 \&lt;/span&gt;
          &lt;span class="s"&gt;$${URL}/containerd-shim-runsc-v1 $${URL}/containerd-shim-runsc-v1.sha512&lt;/span&gt;
        &lt;span class="s"&gt;sha512sum -c runsc.sha512 \&lt;/span&gt;
          &lt;span class="s"&gt;-c containerd-shim-runsc-v1.sha512&lt;/span&gt;
        &lt;span class="s"&gt;rm -f *.sha512&lt;/span&gt;
        &lt;span class="s"&gt;chmod a+rx runsc containerd-shim-runsc-v1&lt;/span&gt;

        &lt;span class="s"&gt;dest="/var/lib/docker/volumes/${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}/_data/"&lt;/span&gt;
        &lt;span class="s"&gt;mv runsc containerd-shim-runsc-v1 $$dest&lt;/span&gt;
        &lt;span class="s"&gt;$$dest/runsc install&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way you can even change the volume name, and run the compose project like this on Linux and macOS (or on Windows, with WSL2 integration enabled, running the docker commands in your WSL distribution):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;RUNSC_VOLUME_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"runsc-bin"&lt;/span&gt; docker compose run &lt;span class="nt"&gt;--rm&lt;/span&gt; runsc-installer 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My previous examples used &lt;code&gt;docker compose run&lt;/code&gt;, but you can use &lt;code&gt;docker compose up&lt;/code&gt; as well. In that case the log lines will be prefixed with the container name, and the container will be kept after it has finished.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing runsc after the installation
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Before you test the runtime, don't change any Docker Desktop configuration, because that will reset your client config on the host, since the VM still doesn't know about it. In order to avoid resetting the config file, "Quit Docker Desktop" and then start it again. DO NOT just "Restart" it in the menu, because that will just restart services in the VM without updating the config.&lt;/p&gt;

&lt;p&gt;When the config is ready and Docker Desktop is started again, you can run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runsc ubuntu &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything works, you will see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Linux 73309f6efa2c 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 aarch64 aarch64 aarch64 GNU/Linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more testing ideas, check out my blog post about the &lt;a href="https://dev.to/rimelek/comparing-3-docker-container-runtimes-runc-gvisor-and-kata-containers-4g0-temp-slug-7791054"&gt;comparison of runtimes&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reloading the internal daemon config to support Windows
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;I already mentioned some ways to support Windows, but there is one more problem. When using the WSL2 backend, Docker Desktop on Windows supports Nvidia GPUs, because WSL2 supports them too. Unfortunately, it seems that the Nvidia runtime configuration currently completely overrides the "runtimes" section in the daemon config instead of adding itself to the existing "runtimes" section.&lt;/p&gt;

&lt;p&gt;That means that even if we add runsc to the daemon config on the host, it will not be usable. You will see it even in Docker Desktop's graphical interface, but it will not actually be the one used by the Docker daemon. So I will show a workaround I came up with, to use until it is fixed in Docker Desktop, if it is ever considered a bug to be fixed. Let's not forget that nothing indicates that using a custom runtime is supported in Docker Desktop. Just because we can do something, it doesn't mean we will always be able to do it if it is not supported. So maybe it will be fixed in the future, but it is also possible that nothing I described in this blog post will work in the future. With that in mind, here is the workaround:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We have to find the config file used by the docker daemon in the virtual machine&lt;/li&gt;
&lt;li&gt;We have to mount it from the VM into our installer container&lt;/li&gt;
&lt;li&gt;We have to find a way to pass a custom config path to the &lt;code&gt;runsc install&lt;/code&gt; command and automatically update the internal config file&lt;/li&gt;
&lt;li&gt;After updating the internal config file, we have to reload it, which means we have to send a HUP signal to the dockerd process.&lt;/li&gt;
&lt;li&gt;In order to be able to send signals to the dockerd process, we need to use the process namespace of the host.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first step can be done by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--pid&lt;/span&gt; host alpine:3.20 sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'ps aux | grep dockerd | grep -v grep'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should show something like the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;308 root      0:45 /usr/local/bin/dockerd --config-file /run/config/docker/daemon.json --containerd /run/containerd/containerd.sock --pidfile /run/desktop/docker.pid --swarm-default-advertise-addr=192.168.65.3 --host-gateway-ip 192.168.65.254&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So we know the config is at &lt;code&gt;/run/config/docker/daemon.json&lt;/code&gt;. Let's check the content by running the following command on Windows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-v&lt;/span&gt; /run/config/docker/daemon.json:/daemon.json alpine:3.20 &lt;span class="nb"&gt;cat&lt;/span&gt; /daemon.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is something like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{"builder":{"gc":{"defaultKeepStorage":"20GB","enabled":true}},"experimental":true,"features":{"cone-registries":["hubproxy.docker.internal:5555"],"mtu":1500,"runtimes":{"nvidia":{"path":"nvidia-contain-driver":"overlayfs","userland-proxy":false}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This indeed shows the nvidia container runtime.&lt;/p&gt;
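Since the config is a single long line, picking out just the "runtimes" key can be easier with a small helper. This is only an illustrative sketch with a sample file standing in for the one in the VM, using python3 because jq may not be installed in a minimal container:

```shell
# Illustrative only: a sample daemon.json standing in for the real one.
cfg=$(mktemp)
printf '%s' '{"experimental":true,"runtimes":{"nvidia":{"path":"nvidia-container-runtime"}},"userland-proxy":false}' > "$cfg"

# Pretty-print only the "runtimes" section of the config file.
runtimes=$(python3 -c "import json; print(json.dumps(json.load(open('$cfg'))['runtimes'], indent=2))")
echo "$runtimes"
```

The same one-liner could be pointed at the mounted `/daemon.json` from the previous command.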

&lt;p&gt;We could download "runsc" somewhere, or use the one on the already created volume, to run &lt;code&gt;runsc install -help&lt;/code&gt;, or we can check the source code: &lt;a href="https://github.com/google/gvisor/blob/745828301c936ddd1dac49b2611bdf1a4477f9ab/runsc/cmd/install.go#L60" rel="noopener noreferrer"&gt;https://github.com/google/gvisor/blob/745828301c936ddd1dac49b2611bdf1a4477f9ab/runsc/cmd/install.go#L60&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This shows we have a &lt;code&gt;config_file&lt;/code&gt; option. Great. Now we need to notify the dockerd process that it should reload the daemon config, which requires some Linux experience, but the pkill command can help us like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pkill &lt;span class="nt"&gt;-HUP&lt;/span&gt; dockerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will work only if we add &lt;code&gt;pid: host&lt;/code&gt; to the installer service.&lt;/p&gt;
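The signal mechanics can be sketched with a dummy process standing in for dockerd. This is purely illustrative, no Docker involved: a background shell installs a SIGHUP handler the same way dockerd triggers a config reload on SIGHUP.

```shell
# A background shell stands in for dockerd: it traps SIGHUP and reports it.
# ("sleep 30 & wait" instead of a plain sleep so the trap fires immediately.)
tmp=$(mktemp)
bash -c "trap 'echo config-reloaded >> $tmp; exit 0' HUP; sleep 30 & wait" &
pid=$!
sleep 1                  # give the child time to install the trap
kill -HUP "$pid"         # pkill -HUP dockerd does the same, matching by name
wait "$pid" 2>/dev/null || true
cat "$tmp"               # → config-reloaded
```

`pkill` simply finds the PID(s) by process name and sends the same signal, which is why it needs to see the host's process namespace.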

&lt;p&gt;Note that this will notify all dockerd processes, so if you have Docker in Docker, it will reload every Docker daemon's configuration. That shouldn't be a problem unless you broke the config of one of those daemons. The new compose file is the following now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;runsc-installer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpine:3.20&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/volumes/${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}/_data/&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/.docker&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/docker&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/run/config/docker/daemon.json&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/run/config/docker/daemon.json&lt;/span&gt;
    &lt;span class="na"&gt;pid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;host&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sh&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;set -e&lt;/span&gt;
        &lt;span class="s"&gt;ARCH=$(uname -m)&lt;/span&gt;
        &lt;span class="s"&gt;URL=https://storage.googleapis.com/gvisor/releases/release/latest/$${ARCH}&lt;/span&gt;
        &lt;span class="s"&gt;wget $${URL}/runsc $${URL}/runsc.sha512 \&lt;/span&gt;
          &lt;span class="s"&gt;$${URL}/containerd-shim-runsc-v1 $${URL}/containerd-shim-runsc-v1.sha512&lt;/span&gt;
        &lt;span class="s"&gt;sha512sum -c runsc.sha512 \&lt;/span&gt;
          &lt;span class="s"&gt;-c containerd-shim-runsc-v1.sha512&lt;/span&gt;
        &lt;span class="s"&gt;rm -f *.sha512&lt;/span&gt;
        &lt;span class="s"&gt;chmod a+rx runsc containerd-shim-runsc-v1&lt;/span&gt;

        &lt;span class="s"&gt;dest="/var/lib/docker/volumes/${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}/_data/"&lt;/span&gt;
        &lt;span class="s"&gt;mv runsc containerd-shim-runsc-v1 $$dest&lt;/span&gt;
        &lt;span class="s"&gt;$$dest/runsc install&lt;/span&gt;

        &lt;span class="s"&gt;$$dest/runsc install -config_file /run/config/docker/daemon.json&lt;/span&gt;
        &lt;span class="s"&gt;pkill -HUP dockerd&lt;/span&gt;
&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${RUNSC_VOLUME_NAME:-runsc-runtime-binaries}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using this compose file, you can even test runsc on Windows, but you will have to run the installer every single time you restart Docker Desktop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Possible error messages
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If something goes wrong and runsc cannot be found, you can see an error message like the one below, which I got when I accidentally wrote "volume" instead of "volumes" in the script, so Docker tried to find the runtime using an incorrect path.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /var/run/desktop-containerd/daemon/io.containerd.runtime.v2.task/moby/3a01e11f99939f9fd85151e2ae97f15d5f39ff94e835187357b82807147dcc52/log.json: no such file or directory): fork/exec /var/lib/docker/volume/runsc-runtime-binaries/_data/runsc: no such file or directory: &amp;lt;nil&amp;gt;: unknown.&lt;/code&gt;&lt;/p&gt;
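Since a broken daemon.json is one of the easiest ways to end up with errors like these, it can be worth validating the file before letting the daemon reload it. A minimal sketch using python3's json module (jq would work as well, if installed; the file content here is only an example):

```shell
# Validate a daemon.json candidate before reloading the daemon with it.
# Illustrative content; point "config" at your real config file instead.
config=$(mktemp)
printf '%s' '{"runtimes":{"runsc":{"path":"/var/lib/docker/volumes/runsc-runtime-binaries/_data/runsc"}}}' > "$config"

if python3 -m json.tool "$config" > /dev/null 2>&1; then
  result="valid JSON"
else
  result="invalid JSON"
fi
echo "$result"           # → valid JSON
```

This catches syntax errors only; a structurally valid file can still point the runtime at a wrong path, as in the example above.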

&lt;p&gt;Without the automatic reloading of the internal daemon config, if you try the runtime before activating it in the VM, for example because your config was reset to the previous one after you changed something in Docker Desktop before stopping it, you could get an error about an invalid runtime name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker: Error response from daemon: unknown or invalid runtime name: runsc.
See 'docker run --help'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Changing the runtime in Docker Desktop is not the first thing Docker Desktop users think of. It is probably not even in the top 10, but it can be useful if Docker Desktop is all you have at the moment. So this post was basically a way to learn more about Docker Desktop, Docker Compose, and how we can change the daemon configuration manually or automatically.&lt;/p&gt;

&lt;p&gt;If you break the daemon configuration, you can fix it, but you have to quit Docker Desktop and start it again.&lt;/p&gt;

&lt;p&gt;You could also learn a little bit about volumes, but if you need more, you can read the already mentioned &lt;a href="https://dev.to/rimelek/everything-about-docker-volumes-1ib0"&gt;Everything about Docker volumes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And finally, it is always good to understand general Linux commands, even when we use Docker Desktop on Windows but want to run Linux containers. It helps us troubleshoot and do things that are not officially supported yet but can still be done at the moment.&lt;/p&gt;

&lt;p&gt;Docker Desktop is not Docker, as I pointed out in "&lt;a href="https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn"&gt;You run containers, not dockers - Discussing Docker variants, components and versioning&lt;/a&gt;", which means there are some things you can do with Docker Desktop but not with Docker CE, and vice versa. I still like to find the limits of both so we can enjoy as many features of both as possible.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Comparing 3 Docker container runtimes - Runc, gVisor and Kata Containers</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Tue, 29 Oct 2024 19:04:16 +0000</pubDate>
      <link>https://dev.to/rimelek/comparing-3-docker-container-runtimes-runc-gvisor-and-kata-containers-16j</link>
      <guid>https://dev.to/rimelek/comparing-3-docker-container-runtimes-runc-gvisor-and-kata-containers-16j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Previously &lt;a href="https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn#docker-daemon-dependencies"&gt;I wrote about the multiple variants of Docker and also the dependencies&lt;/a&gt; behind the Docker daemon. One of the dependencies was the container runtime called &lt;a href="https://github.com/opencontainers/runc" rel="noopener noreferrer"&gt;runc&lt;/a&gt;. That is what creates the usual containers we are all familiar with. When you use Docker, this is the default runtime, which is understandable since it was started by Docker, Inc.&lt;/p&gt;

&lt;p&gt;We can change it and use another runtime to create containers differently. We can also choose a runtime that doesn't even create containers, but virtual machines. You could also create your own runtime that adds something to the definition of the container or increases the level of the isolation.&lt;/p&gt;

&lt;p&gt;The Docker documentation mentions &lt;a href="https://docs.docker.com/engine/daemon/alternative-runtimes/" rel="noopener noreferrer"&gt;alternative container runtimes&lt;/a&gt;, but I will not write about all of them. Obviously, I'm not trying to copy the documentation. I want to compare 3 different kinds of runtimes which you could use for a simple Linux container like &lt;a href="https://hub.docker.com/_/ubuntu" rel="noopener noreferrer"&gt;Ubuntu&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I will not explain how you can install these runtimes, but if you think the documentation is not good enough, please let me know in the comments and I will see what I can do.&lt;/p&gt;

&lt;p&gt;You can also watch a video of this topic on YouTube.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Ht6XHBqZoIE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The chosen 3 runtimes&lt;/li&gt;
&lt;li&gt;
Introduce the runtimes

&lt;ul&gt;
&lt;li&gt;runc&lt;/li&gt;
&lt;li&gt;kata-runtime&lt;/li&gt;
&lt;li&gt;runsc&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Comparing the runtimes&lt;/li&gt;

&lt;li&gt;Checking differences in practice&lt;/li&gt;

&lt;li&gt;Resource handling and limitations of the Kata runtime&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  The chosen 3 runtimes
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Although the documentation also mentions "youki", it is described as a "drop-in replacement" for the default runtime that basically does the same thing, so let's stick with runc. The second runtime will be the Kata runtime from &lt;a href="https://katacontainers.io/" rel="noopener noreferrer"&gt;Kata Containers&lt;/a&gt;, since it runs small virtual machines, which is good for showing how differently it uses the CPU and memory. This also adds a higher level of isolation, with some downsides as well. And the third runtime will be runsc from &lt;a href="https://gvisor.dev/" rel="noopener noreferrer"&gt;gVisor&lt;/a&gt;, which is a perfect third runtime to see how we can run containers and still have a somewhat more secure isolation. I will show how we can recognize the differences by running commands from the isolated environments and from the host.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduce the runtimes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  runc
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;It probably doesn't even require much explanation, since this is the default, and we could just compare everything to it. What runc does is what we are already used to. It creates and runs containers using Linux kernel namespaces, based on a config file which we don't have to create manually if we use Docker. Every process is running on the host; the kernel just doesn't let the processes see everything on the host. That doesn't change the fact that from the host we can see everything that runs in the containers.&lt;/p&gt;

&lt;p&gt;Since everything runs on the host, nothing is required in the container except the process which we want to isolate. That means there is no visible kernel in the container on the filesystem. Just because you don't see the wheels inside a car, it doesn't mean the car is floating. But when you run &lt;code&gt;uname -a&lt;/code&gt; in the container to get information about the kernel, the output will be the same as you would see on the host, except the hostname, which is different in a container unless you use the host network or share the &lt;a href="https://en.wikipedia.org/wiki/Linux_namespaces#UTS" rel="noopener noreferrer"&gt;UTS namespace&lt;/a&gt; of the host with the container.&lt;/p&gt;

&lt;p&gt;There is no hardware virtualization, the processes in a container can see all the resources on the host, including the CPU and memory. Of course, you can set a CPU or memory limit for the containers, but that only restricts the amount of memory and CPU that the kernel allows the processes in the container to use. The hardware remains the same regardless of where the process is.&lt;/p&gt;

&lt;p&gt;It can also lead to problems, since some applications like databases could use a default resource limit based on the available hardware. So if you set a lower memory limit for the container, the application will try to use more memory and the operating system will kill it. So setting the resource limits on application level can be important in addition to the container level resource limits. But what if the application doesn't support it or there is a bug, and it ignores the parameter?&lt;/p&gt;
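The mismatch is easy to demonstrate: an application that sizes itself automatically reads values like the ones below, and in a runc container these still report the host's hardware regardless of any container limit. A Linux-only sketch of such a "detect the hardware" code path:

```shell
# What a "detect the hardware" code path typically sees. In a runc container
# with a memory limit, these still report the HOST's memory and CPU count,
# which is why application-level limits matter. (Linux-only: reads /proc.)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
cpus=$(nproc)
echo "visible memory: ${mem_kb} kB"
echo "visible CPUs: ${cpus}"
```

Run the same two commands inside a Kata container and the numbers change, because there the "hardware" is the VM's hardware.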

&lt;h3&gt;
  
  
  kata-runtime
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As I mentioned before, the Kata runtime runs containers in their own very small virtual machines. If you still remember how runc works, let's see how the Kata runtime changes everything.&lt;/p&gt;

&lt;p&gt;The key is the fact that it runs a virtual machine below the container. Since we have a virtual machine, we need a kernel in the VM. That means, when you run &lt;code&gt;uname -a&lt;/code&gt; you will get information about a different kernel. The one in the virtual machine created by the Kata runtime. Even though there is a virtual machine with another kernel, we will still not see the kernel on the filesystem, since the container is not replaced, but extended with a virtual machine layer.&lt;/p&gt;

&lt;p&gt;The CPU and memory seen from the container will not be the same either. In this case, we have hardware virtualization, so we see the hardware available to the virtual machine. It also means we will not be able to use all the CPUs and all the memory. A VM created by the Kata runtime will get 2 gigabytes of memory and 1 vCPU. So even if the application in the container does not support resource limits, it can only detect the hardware in the VM. There are some important details regarding resource limits with the Kata runtime, but we will discuss them later.&lt;/p&gt;

&lt;p&gt;It is still important to know that the Docker daemon will still be on the host, and not in the VM created by the Kata runtime. It should be obvious, since the virtual machine is created by the runtime, which is executed by a shim process instructed by containerd, which is instructed by dockerd, the Docker daemon. So it couldn't possibly be in the virtual machine that it will create. This fact will be important when we talk about mounting files from the "host machine". The host machine is where the Docker daemon is running.&lt;/p&gt;

&lt;h3&gt;
  
  
  runsc
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As mentioned before, runsc from gVisor creates containers. The difference is that it tries to make the container more secure by intercepting system calls sent by processes in the container before the host kernel could handle them. This interception makes some requests a little slower. In fact, &lt;a href="https://gvisor.dev/docs/user_guide/production/" rel="noopener noreferrer"&gt;the documentation mentions&lt;/a&gt; that you should use runsc only for "user-facing containers" like reverse proxy containers that the users are directly interacting with. Of course, if you are also serving a web application with a security hole and someone can execute a command through that application, using runsc only for the reverse proxy will not help a lot. But the point is that you probably don't want to use this runtime for all your containers due to the impact on performance.&lt;/p&gt;

&lt;p&gt;Let's assume you use it. Then the process in the container will see almost everything that it would see from a container created by runc, the default runtime. The difference is that runsc will have an &lt;a href="https://gvisor.dev/docs/" rel="noopener noreferrer"&gt;application kernel&lt;/a&gt; to handle the intercepted system calls so when you run &lt;code&gt;uname -a&lt;/code&gt;, you will see that kernel.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing the runtimes
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The table below compares the three runtimes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;runc (default)&lt;/th&gt;
&lt;th&gt;runsc&lt;/th&gt;
&lt;th&gt;kata-runtime&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;developer&lt;/td&gt;
&lt;td&gt;opencontainers&lt;/td&gt;
&lt;td&gt;gVisor (Google)&lt;/td&gt;
&lt;td&gt;Kata Containers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;isolation type&lt;/td&gt;
&lt;td&gt;container&lt;/td&gt;
&lt;td&gt;container&lt;/td&gt;
&lt;td&gt;virtual machine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;available resources&lt;/td&gt;
&lt;td&gt;all resources&lt;/td&gt;
&lt;td&gt;all resources&lt;/td&gt;
&lt;td&gt;limited resources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;used kernel&lt;/td&gt;
&lt;td&gt;host kernel&lt;/td&gt;
&lt;td&gt;application kernel&lt;/td&gt;
&lt;td&gt;kernel in the VM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;installable on&lt;/td&gt;
&lt;td&gt;VM / Physical&lt;/td&gt;
&lt;td&gt;VM / Physical&lt;/td&gt;
&lt;td&gt;VM (with nested virt.) / Physical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As you can see, most of the differences are the consequences of the isolation type. If you know what the runtimes create and the differences between a VM and a container, you have a pretty good idea what to expect. The one additional difference is the application kernel used by runsc. That's why you see a different kernel, but in every case you still have a container either on the host or in a VM, so you will never see the kernel files. Even if you mount &lt;code&gt;/boot&lt;/code&gt; into the container, you will mount it from the host where the Docker daemon is running, so you can't mount that folder from the VM created by the Kata runtime.&lt;/p&gt;

&lt;p&gt;The second difference worth mentioning is that the runtimes that create containers can be used inside virtual machines, but when the runtime creates a virtual machine itself, the host VM needs nested virtualization enabled. This means you cannot test Kata Containers in Docker Desktop.&lt;/p&gt;
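&lt;p&gt;If you are not sure whether nested virtualization is enabled on your host, you can check the KVM module parameters. The sysfs paths below are the standard locations on Linux; this is only a quick sketch, not a full capability check:&lt;/p&gt;

```shell
# Quick check for nested virtualization support on a Linux host.
# The two sysfs paths are the standard locations for the Intel and AMD
# KVM modules; if neither file exists, KVM is not loaded at all.
nested="not available"
for f in /sys/module/kvm_intel/parameters/nested \
         /sys/module/kvm_amd/parameters/nested; do
  if [ -r "$f" ]; then
    nested=$(cat "$f")   # "Y" or "1" means nested virtualization is enabled
  fi
done
echo "nested virtualization: $nested"
```

&lt;p&gt;If the value is &lt;code&gt;Y&lt;/code&gt; or &lt;code&gt;1&lt;/code&gt;, the Kata runtime has a chance to work inside that virtual machine.&lt;/p&gt;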

&lt;h2&gt;
  
  
  Checking differences in practice
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;I wrote a script that can execute the same test command in different environments, including the host machine. I &lt;a href="https://gist.github.com/rimelek/05241c26a3b10ff8c9cfe1035b787996" rel="noopener noreferrer"&gt;uploaded it to GitHub as a gist&lt;/a&gt;, but I share it here too so that you don't depend on the availability of GitHub and the gist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-eu&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; pipefail

&lt;span class="nv"&gt;runtimes&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;Host runc runsc kata&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;YELLOW_START&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'\033[1;33m'&lt;/span&gt;
&lt;span class="nv"&gt;YELLOW_END&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'\033[0m'&lt;/span&gt;

&lt;span class="nb"&gt;declare&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nv"&gt;COMMANDS&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;cpus]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'nproc'&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;memory]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'free | grep "Mem" | awk "{print \$2}"'&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;kernel]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'uname -nrv'&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;filesystem]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ls /boot | awk "/vmlinuz-/" | sort -r | head -n1'&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;runtime_run&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;runtime&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;local command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$runtime&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s2"&gt;"Host"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"docker run --rm --runtime &lt;/span&gt;&lt;span class="nv"&gt;$runtime&lt;/span&gt;&lt;span class="s2"&gt; ubuntu &lt;/span&gt;&lt;span class="nv"&gt;$command&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;fi

  case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$mode&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$command&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
    yellow&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;YELLOW_START&lt;/span&gt;&lt;span class="k"&gt;}${&lt;/span&gt;&lt;span class="nv"&gt;command&lt;/span&gt;&lt;span class="k"&gt;}${&lt;/span&gt;&lt;span class="nv"&gt;YELLOW_END&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
    &lt;span class="nb"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;   &lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$command&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
    &lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Invalid mode: &lt;/span&gt;&lt;span class="nv"&gt;$mode&lt;/span&gt;&lt;span class="s2"&gt;. Valid modes: echo, exec"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;return &lt;/span&gt;1
  &lt;span class="k"&gt;esac&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;function &lt;/span&gt;showresult&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;runtime&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;local command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  runtime_run yellow &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$runtime&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$command&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$label&lt;/span&gt;&lt;span class="s2"&gt;: "&lt;/span&gt;
  runtime_run &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$runtime&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$command&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nv"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;((&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${#&lt;/span&gt;&lt;span class="nv"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; 0 &lt;span class="o"&gt;))&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  for &lt;/span&gt;label &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;!COMMANDS[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;labels+&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$label&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;done
fi

for &lt;/span&gt;label &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;labels&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  for &lt;/span&gt;runtime &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;runtimes&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;showresult &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$runtime&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$label&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;COMMANDS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;$label&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo
  &lt;/span&gt;&lt;span class="k"&gt;done
done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since downloading the gist is still easier than copying the script manually, let's try that first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; runtime-test.sh https://gist.githubusercontent.com/rimelek/05241c26a3b10ff8c9cfe1035b787996/raw/5ef46b3cbdeda5e5bd1f094724929ed9514e2f85/docker-container-runtime-test.sh
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x runtime-test.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can either run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./runtime-test.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;without parameters, or test the availability of the CPU, memory and kernel one by one. The beginning of the script shows the categories you can test and what commands will be executed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;declare&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nv"&gt;COMMANDS&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;cpus]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'nproc'&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;memory]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'free | grep "Mem" | awk "{print \$2}"'&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;kernel]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'uname -nrv'&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;filesystem]&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'ls /boot | awk "/vmlinuz-/" | sort -r | head -n1'&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even before that part, you can find the list of runtimes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;runtimes&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;Host runc runsc kata&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your runtime names are different, you can change the list of runtimes in the script. "Host" here is not a runtime, but the host machine. I intentionally wrote it with an uppercase "H" to make it different from the runtime names.&lt;/p&gt;
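&lt;p&gt;For reference, the runtime names come from the Docker daemon configuration. A minimal &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; registering the two extra runtimes could look like the snippet below; the binary paths are assumptions and depend on how you installed gVisor and Kata Containers:&lt;/p&gt;

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    },
    "kata": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
```

&lt;p&gt;The keys under &lt;code&gt;runtimes&lt;/code&gt; are the names you pass to &lt;code&gt;docker run --runtime&lt;/code&gt;, and they are the names the script iterates over.&lt;/p&gt;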

&lt;p&gt;If you run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./runtime-test.sh cpus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;the following commands will be generated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;nproc
&lt;/span&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runc ubuntu &lt;span class="nb"&gt;nproc
&lt;/span&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runsc ubuntu &lt;span class="nb"&gt;nproc
&lt;/span&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; kata ubuntu &lt;span class="nb"&gt;nproc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also test the memory availability&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./runtime-test.sh memory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The generated commands would be&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;free | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Mem"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"{print &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;2}"&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runc ubuntu free | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Mem"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"{print &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;2}"&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runsc ubuntu free | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Mem"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"{print &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;2}"&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; kata ubuntu free | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Mem"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"{print &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;2}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To test the kernel version, you can run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./runtime-tets.sh kernel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which generates and executes these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-nrv&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runc ubuntu &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-nrv&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runsc ubuntu &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-nrv&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; kata ubuntu &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-nrv&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, we are using &lt;code&gt;uname -nrv&lt;/code&gt; here instead of &lt;code&gt;uname -a&lt;/code&gt; to make the lines shorter by not showing redundant information like the CPU architecture multiple times in the output.&lt;/p&gt;

&lt;p&gt;And finally, you can also try to list the files under &lt;code&gt;/boot&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./runtime-tets.sh filesystem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which generates and executes the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /boot | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"/vmlinuz-/"&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n1&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runc ubuntu &lt;span class="nb"&gt;ls&lt;/span&gt; /boot | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"/vmlinuz-/"&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n1&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; runsc ubuntu &lt;span class="nb"&gt;ls&lt;/span&gt; /boot | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"/vmlinuz-/"&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n1&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; kata ubuntu &lt;span class="nb"&gt;ls&lt;/span&gt; /boot | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"/vmlinuz-/"&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-n1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;awk&lt;/code&gt; I list only files starting with &lt;code&gt;vmlinuz-&lt;/code&gt;, and I keep only the latest one, because otherwise we could get too many files that are irrelevant for the test. If there is one, that proves we can see the kernel files. The test assumes the kernel with which the operating system was booted is the latest.&lt;/p&gt;

&lt;p&gt;Here is the output of the script running on my machine without arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /boot | awk "/vmlinuz-/" | sort -r | head -n1
filesystem: vmlinuz-5.15.0-122-generic

docker run --rm --runtime runc ubuntu ls /boot | awk "/vmlinuz-/" | sort -r | head -n1
filesystem:
docker run --rm --runtime runsc ubuntu ls /boot | awk "/vmlinuz-/" | sort -r | head -n1
filesystem:
docker run --rm --runtime kata ubuntu ls /boot | awk "/vmlinuz-/" | sort -r | head -n1
filesystem:
nproc
cpus: 12

docker run --rm --runtime runc ubuntu nproc
cpus: 12

docker run --rm --runtime runsc ubuntu nproc
cpus: 12

docker run --rm --runtime kata ubuntu nproc
cpus: 1

free | grep "Mem" | awk "{print \$2}"
memory: 16241924

docker run --rm --runtime runc ubuntu free | grep "Mem" | awk "{print \$2}"
memory: 16241924

docker run --rm --runtime runsc ubuntu free | grep "Mem" | awk "{print \$2}"
memory: 16241924

docker run --rm --runtime kata ubuntu free | grep "Mem" | awk "{print \$2}"
memory: 2038464

uname -nrv
kernel: ta-lxlt 5.15.0-122-generic #132-Ubuntu SMP Thu Aug 29 13:45:52 UTC 2024

docker run --rm --runtime runc ubuntu uname -nrv
kernel: aea867200366 5.15.0-122-generic #132-Ubuntu SMP Thu Aug 29 13:45:52 UTC 2024

docker run --rm --runtime runsc ubuntu uname -nrv
kernel: 91d3bb0285b8 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016

docker run --rm --runtime kata ubuntu uname -nrv
kernel: 9ca9368d8781 6.1.62 #1 SMP Mon Sep  9 09:44:34 UTC 2024
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to see me executing these commands, consider watching the video linked at the beginning of this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resource handling and limitations of the Kata runtime
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Since the Kata runtime runs a virtual machine, you cannot assign half a CPU to it. The docker command still accepts &lt;code&gt;--cpus 0.5&lt;/code&gt;, but that only limits how much CPU time the qemu process can use. By default, as mentioned before, the VM has 1 vCPU, and if you set the limit to half a CPU, you get 1.5 rounded up to an integer, so 2 vCPUs. The limit increases the default amount instead of replacing it.&lt;/p&gt;

&lt;p&gt;The same happens with the memory, except it doesn't have to be rounded up, so if you set &lt;code&gt;--memory 500M&lt;/code&gt;, you get two and a half gigabytes of memory. If you want to test it, you can use the commands generated by the script and add the limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; kata &lt;span class="nt"&gt;--cpus&lt;/span&gt; 0.5 ubuntu &lt;span class="nb"&gt;nproc
&lt;/span&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--runtime&lt;/span&gt; kata &lt;span class="nt"&gt;--memory&lt;/span&gt; 500M ubuntu free | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"Mem"&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s2"&gt;"{print &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;2}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2
2562752
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second line is the memory in kilobytes.&lt;/p&gt;

&lt;p&gt;I bet the first thing you think is that it is a bug. There is an &lt;a href="https://github.com/kata-containers/kata-containers/issues/10093" rel="noopener noreferrer"&gt;issue on GitHub where someone thought the same&lt;/a&gt;. The fact is that Kata containers are different, and &lt;a href="https://github.com/kata-containers/kata-containers/blob/main/docs/Limitations.md" rel="noopener noreferrer"&gt;there are limitations&lt;/a&gt;. The first one I noticed too: there is no way to share process or network namespaces between Docker containers. That you cannot use the process or network namespace of the host is easy to understand, because a VM, not just the host kernel, isolates our processes.&lt;/p&gt;
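&lt;p&gt;As a quick illustration of the namespace limitation, the command below is only echoed, not executed; &lt;code&gt;web&lt;/code&gt; is a hypothetical container name, and the same option would work with runc but is expected to fail with the Kata runtime:&lt;/p&gt;

```shell
# Build (but do not run) a command that tries to share the network
# namespace of another container; with the kata runtime this is a
# documented limitation, while runc supports it.
cmd="docker run --rm --runtime kata --network container:web ubuntu ip addr"
echo "$cmd"
```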

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Originally, Docker created only containers. In fact, it used LXC as an "exec driver", which is basically what we call a runtime today, or at least the closest thing to it. LXC support was deprecated in &lt;a href="https://github.com/moby/moby/releases/tag/v1.8.0" rel="noopener noreferrer"&gt;Docker 1.8.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now you can even choose a runtime that creates a virtual machine, or a container with more secure isolation. There was once a runtime for using an NVIDIA GPU called &lt;a href="https://github.com/NVIDIA/nvidia-container-runtime" rel="noopener noreferrer"&gt;nvidia-container-runtime&lt;/a&gt;. That project is now deprecated, and Docker has the "&lt;code&gt;--gpus&lt;/code&gt;" option instead. GPUs are beyond the scope of this blogpost, but they are a good example of a special runtime that gave additional capabilities to containers.&lt;/p&gt;
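&lt;p&gt;For completeness, the replacement for the deprecated NVIDIA runtime looks like the command below. It requires the NVIDIA Container Toolkit on the host, so the command is only stored and echoed here instead of being executed:&lt;/p&gt;

```shell
# The "--gpus" flag replaced the dedicated NVIDIA runtime; "all" exposes
# every GPU, and nvidia-smi is injected into the container by the toolkit.
cmd="docker run --rm --gpus all ubuntu nvidia-smi"
echo "$cmd"
```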

&lt;p&gt;Each runtime has benefits and downsides. Which one is best for you depends on what you need it for. I recommend testing the runtimes before making a decision. Running a small VM may seem like a good idea, but you can discover downsides that change your mind.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>You run containers, not dockers - Discussing Docker variants, components and versioning</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Sun, 27 Oct 2024 10:23:26 +0000</pubDate>
      <link>https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn</link>
      <guid>https://dev.to/rimelek/you-run-containers-not-dockers-discussing-docker-variants-components-and-versioning-4lpn</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Usually I start with a blogpost and then I create a video. In this case I made a video in two languages first, and now I'm here for you who prefer reading. As the title suggests, I want to clear up a misunderstanding, and along the way explain how many kinds of Docker exist and what components Docker has. We will also cover some of the history of Docker.&lt;/p&gt;

&lt;p&gt;I often see that people think they are running "dockers". Keep in mind that "Dockers" is a garment company and has absolutely nothing to do with "Docker", the software.&lt;/p&gt;

&lt;p&gt;If you prefer watching videos, most, but not all, of what I explain here can be watched on YouTube:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ItSuWaxdHhA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;For more videos, you can subscribe to my YouTube channel: &lt;a href="https://www.youtube.com/@akos.takacs" rel="noopener noreferrer"&gt;@akos.takacs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What I call Docker&lt;/li&gt;
&lt;li&gt;Docker as a company&lt;/li&gt;
&lt;li&gt;
Docker as a software

&lt;ul&gt;
&lt;li&gt;The beginning of Docker as a software and where we are now&lt;/li&gt;
&lt;li&gt;Two main ways of running the Docker daemon in terms of isolation&lt;/li&gt;
&lt;li&gt;
Docker running in a virtual machine

&lt;ul&gt;
&lt;li&gt;Docker in a VM in general&lt;/li&gt;
&lt;li&gt;Docker in a VM using Docker Desktop&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Docker running directly on a physical machine

&lt;ul&gt;
&lt;li&gt;The source code of Moby&lt;/li&gt;
&lt;li&gt;Docker Enterprise Edition&lt;/li&gt;
&lt;li&gt;Docker IO&lt;/li&gt;
&lt;li&gt;Snap package&lt;/li&gt;
&lt;li&gt;Docker CE&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Docker in Docker&lt;/li&gt;

&lt;li&gt;

Docker CE components

&lt;ul&gt;
&lt;li&gt;Two main parts of Docker CE&lt;/li&gt;
&lt;li&gt;
Docker client

&lt;ul&gt;
&lt;li&gt;The docker command&lt;/li&gt;
&lt;li&gt;Docker SDK&lt;/li&gt;
&lt;li&gt;Docker Compose&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Docker daemon

&lt;ul&gt;
&lt;li&gt;The history of dockerd&lt;/li&gt;
&lt;li&gt;Rootless vs Rootful Docker&lt;/li&gt;
&lt;li&gt;Docker daemon dependencies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Podman pretending to be Docker&lt;/li&gt;

&lt;li&gt;Packages and versioning&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I call Docker
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;In short, the Docker daemon. I could stop this post here, since when I say "I run Docker on a server", I mean I run the Docker daemon, even if I don't have the client on that server. The daemon is required for running containers, but a container is not a docker; Docker just creates a container, similarly to how bread is not a baker.&lt;/p&gt;

&lt;p&gt;Unfortunately, it is more complicated, so let's not stop this post here just yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker as a company
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The company that made Docker, the software, was "dotCloud, Inc". The website doesn't exist anymore, but you can still find the &lt;a href="https://github.com/dotcloud" rel="noopener noreferrer"&gt;GitHub organization&lt;/a&gt;. Later the company was renamed to "Docker, Inc", which was big news back then, and you can find some articles linked on &lt;a href="https://www.docker.com/press-release/dotcloud-inc-now-docker-inc/" rel="noopener noreferrer"&gt;Docker's website&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gigaom: The link takes you to an error page, so I don't even quote it.&lt;/li&gt;
&lt;li&gt;InfoQ: &lt;a href="https://www.infoq.com/news/2013/10/dotcloud-renamed-docker/" rel="noopener noreferrer"&gt;https://www.infoq.com/news/2013/10/dotcloud-renamed-docker/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Forbes: &lt;a href="https://www.forbes.com/sites/benkepes/2013/10/29/docker-and-the-timely-pivot/" rel="noopener noreferrer"&gt;https://www.forbes.com/sites/benkepes/2013/10/29/docker-and-the-timely-pivot/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, in a conversation we just say "Docker". For example "Docker Desktop is Docker's product." or "I know someone working at Docker". In the rest of this blog post we will talk about Docker as a software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker as a software
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The beginning of Docker as a software and where we are now
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The first commit of Docker happened on January 19, 2013. You can still find it on GitHub: &lt;a href="https://github.com/moby/moby/commit/a27b4b8cb8e838d03a99b6d2b30f76bdaf2f9e5d" rel="noopener noreferrer"&gt;https://github.com/moby/moby/commit/a27b4b8cb8e838d03a99b6d2b30f76bdaf2f9e5d&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nowadays, Docker has different variants and multiple components, so when we say Docker, we could mean the entire software family: a collection of different products. That is what causes the confusion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Two main ways of running the Docker daemon in terms of isolation
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;You can run Docker (meaning the Docker daemon) directly on a server or even on your machine, but there are two kinds of containers: Linux containers require a Linux host, and Windows containers require a Windows host. This is because a container is not a virtual machine. A container is just a process running on your host in an isolated way, so the process can't see everything on the host, only what it is allowed to see.&lt;/p&gt;

&lt;p&gt;This means that if you have a Windows machine and you want to try Linux containers, you are in trouble. No... I'm kidding. But it is true that in this case you cannot run Linux containers without the help of virtual machines.&lt;/p&gt;

&lt;p&gt;I haven't mentioned macOS yet, because there is no such thing as "macOS container" at the moment, but you can install Docker in a virtual machine even on macOS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker running in a virtual machine
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Docker in a VM in general
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you want to run Linux containers on Windows or macOS, you need a virtual machine with a Linux operating system inside. If you want to run Windows containers on Linux or macOS, you could probably try a Windows VM, but I have never tried it, and Windows containers are different, so you would also need to deal with &lt;a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container" rel="noopener noreferrer"&gt;two different isolation modes&lt;/a&gt;, and the Hyper-V isolation mode would require nested virtualization enabled for your virtual machine.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker in a VM using Docker Desktop
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;A special way of using Docker in a virtual machine is when you have the client on your host and the Docker daemon is in the virtual machine. You could configure it manually similarly to how you would &lt;a href="https://docs.docker.com/engine/daemon/remote-access/" rel="noopener noreferrer"&gt;configure a remote Docker daemon&lt;/a&gt; for your local client, but Docker Desktop solves it for you.&lt;/p&gt;
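&lt;p&gt;As a rough sketch of the manual approach (the TCP port and addresses here are illustrative examples, not something Docker Desktop requires), making a daemon reachable from a remote client usually means adding a &lt;code&gt;hosts&lt;/code&gt; entry to &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt; on the machine running the daemon:&lt;/p&gt;

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
}
```

&lt;p&gt;The client could then select that daemon through the &lt;code&gt;DOCKER_HOST&lt;/code&gt; environment variable. Exposing the TCP socket without TLS is insecure, so treat this only as a sketch; the linked documentation covers how to protect the socket.&lt;/p&gt;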

&lt;p&gt;Docker Desktop can run on Windows, macOS and also on Linux. Yes, it runs on Linux, but it still creates a virtual machine instead of accessing the Docker daemon on your host.&lt;/p&gt;

&lt;p&gt;When you run Docker Desktop on Windows, you can also switch between Windows containers and Linux containers, but Windows containers are supported by Docker Desktop only on Windows. You could try running &lt;a href="https://docs.docker.com/desktop/vm-vdi/" rel="noopener noreferrer"&gt;Docker Desktop in a Windows VM&lt;/a&gt;, but that would also require nested virtualization if you want to use Docker Desktop for running Linux containers, or Windows containers with Hyper-V isolation. Not to mention that, as far as I remember, you cannot even install Docker Desktop without Hyper-V or WSL2 enabled, which requires nested virtualization in a VM.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker is actually Docker Desktop on macOS
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;One problem on macOS that adds more confusion is that the application which you can install on macOS is called "Docker". When you open the Launchpad, the label below the Docker icon is "Docker". Don't forget that you are still using Docker Desktop, and the Docker daemon, as well as all your containers, is running inside the virtual machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker running directly on a physical machine
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The source code of Moby
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;I shared the first commit of Docker's source code at the beginning of the "Docker as a software" section. The source code is on GitHub in the &lt;a href="https://github.com/moby/moby" rel="noopener noreferrer"&gt;moby/moby&lt;/a&gt; repository. Over the years since Docker was created, the repository went through several names. "moby/moby" was originally called "dotcloud/docker" and the URL still works (&lt;a href="https://github.com/dotcloud/docker" rel="noopener noreferrer"&gt;https://github.com/dotcloud/docker&lt;/a&gt;). When the company was renamed, the repository was renamed to "docker/docker" (&lt;a href="https://github.com/docker/docker" rel="noopener noreferrer"&gt;https://github.com/docker/docker&lt;/a&gt;), but both redirect to Moby today. Moby is the base source code of the Docker variants we have today.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker Enterprise Edition
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Years ago there was also a Docker EE (Enterprise Edition), but it was bought by &lt;a href="https://www.mirantis.com/" rel="noopener noreferrer"&gt;Mirantis&lt;/a&gt;, who turned it into the &lt;a href="https://www.mirantis.com/software/mirantis-container-runtime/" rel="noopener noreferrer"&gt;Mirantis Container Runtime&lt;/a&gt;. You can read it on their website:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Mirantis Container Runtime (formerly Docker Engine - Enterprise)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since I'm focusing on Docker, this is all I wanted to say about an edition that does not exist today.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker IO
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker.io&lt;/code&gt; is based on Moby, but it is not supported by Docker, Inc. You can find this package on Debian and Ubuntu Linux distributions. On Ubuntu, running &lt;code&gt;apt info docker.io&lt;/code&gt; reveals that it is maintained by "Ubuntu Developers". The output of the command contains exactly&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Maintainer: Ubuntu Developers &amp;lt;ubuntu-devel-discuss@lists.ubuntu.com&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and the following on Debian&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Maintainer: Debian Go Packaging Team &amp;lt;team+pkg-go@tracker.debian.org&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://web.archive.org/web/20140718091133/http://docs.docker.com/installation/ubuntulinux/" rel="noopener noreferrer"&gt;Internet Archive's "Wayback machine" reveals&lt;/a&gt; that the earlier installation guide for Ubuntu started with installing &lt;code&gt;docker.io&lt;/code&gt;, but recent guides start with removing &lt;code&gt;docker.io&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;I don't recommend using &lt;code&gt;docker.io&lt;/code&gt; today, as it will install an older version (at the time of writing this post, it is 24.0.7 on Ubuntu 22.04), and most tutorials will show you features based on the official version and the &lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Snap package
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;There is a &lt;a href="https://snapcraft.io/docker" rel="noopener noreferrer"&gt;snap package&lt;/a&gt; which is also based on Moby, but built by Canonical. I quote:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Authors&lt;/p&gt;

&lt;p&gt;This snap is built by Canonical based on source code published by Docker, Inc. It is not endorsed or published by Docker, Inc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So I don't recommend this variant either, unless the installation guide of the official version doesn't work on your distribution but Snap does. A snap is basically a kind of container with possible restrictions, such as only being able to mount files from your home directory.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker CE
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Docker CE is the Docker Community Edition, based on Moby and &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;officially supported by Docker, Inc&lt;/a&gt;. So when you want to install Docker on Linux without a virtual machine, this is what I recommend. This is also what you can install on a Windows server, but that installation is &lt;a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1" rel="noopener noreferrer"&gt;described by Microsoft&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker in Docker
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The term "Docker in Docker" is misleading. You are not running Docker in a Docker. As I explained, the word Docker should be used for the Docker daemon, but you are not running a Docker daemon in a Docker daemon. &lt;/p&gt;

&lt;p&gt;Then what is it that you are doing when using Docker in Docker? You are running a Docker daemon inside a Docker container. Since calling it "Docker daemon in Docker container" would be long and wouldn't sound as good, we call it "Docker in Docker" for short.&lt;/p&gt;

&lt;p&gt;If you want to try it, you can find &lt;a href="https://hub.docker.com/_/docker" rel="noopener noreferrer"&gt;Docker images on Docker Hub&lt;/a&gt;. You can use an image downloaded (pulled) from Docker Hub to run a Docker container in which you run a Docker daemon, which in turn runs isolated processes (containers) in an already isolated environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker CE components
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Two main parts of Docker CE
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As I pointed out earlier, you can run a Docker daemon in a virtual machine or on a remote machine and still keep the "docker" command on your local machine. This is possible because Docker CE has two main parts: the client and the daemon. The client can be anywhere, but the daemon has to be where the containers will be running.&lt;/p&gt;

&lt;p&gt;The client can instruct the Docker daemon through a TCP or Unix socket &lt;a href="https://docs.docker.com/reference/api/engine/" rel="noopener noreferrer"&gt;using the API service&lt;/a&gt;, but a "client" can mean multiple things.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Client
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The docker command
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Often when we talk about the Docker client, we think of the "docker" command used in a terminal. For a beginner who sees the name of the command, it could be confusing, but this is not the entire Docker, and it runs Docker containers, not "dockers".&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker SDK
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The client &lt;a href="https://docs.docker.com/reference/api/engine/sdk/" rel="noopener noreferrer"&gt;can also be an SDK&lt;/a&gt;, although the SDKs are for the core Docker and not for plugins like &lt;a href="https://github.com/docker/buildx" rel="noopener noreferrer"&gt;Buildx&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker Compose
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;There is another plugin called &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;. There was a Compose v1 originally written in Python for which we used the &lt;code&gt;docker-compose&lt;/code&gt; command, but now we need to use &lt;code&gt;docker compose&lt;/code&gt; without the dash.&lt;/p&gt;

&lt;p&gt;Both versions require a so-called "compose file", but v1 required &lt;code&gt;docker-compose.yml&lt;/code&gt; while the new version accepts &lt;code&gt;compose.yml&lt;/code&gt; as well. The other difference is that v1 required&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;XY&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;at the beginning of the file to specify which version of the compose syntax we wanted to use, where XY was a number like 3.8. So this was never the version of Docker Compose, but the version of the compose schema in the YAML file. For more information about the history of Docker Compose, you can visit the &lt;a href="https://docs.docker.com/compose/intro/history/" rel="noopener noreferrer"&gt;description in the official documentation&lt;/a&gt;.&lt;/p&gt;
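&lt;p&gt;To illustrate the difference, here is a minimal compose file in the modern schema (the service name and image are just examples). With Compose v2 the top-level &lt;code&gt;version&lt;/code&gt; key is no longer needed:&lt;/p&gt;

```yaml
# compose.yml - with Compose v1 you would have started with: version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
```

&lt;p&gt;You would start it with &lt;code&gt;docker compose up -d&lt;/code&gt; (or &lt;code&gt;docker-compose up -d&lt;/code&gt; with v1).&lt;/p&gt;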

&lt;h3&gt;
  
  
  Docker Daemon
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The history of dockerd
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The Docker daemon can be started by running &lt;code&gt;dockerd&lt;/code&gt;. Until &lt;a href="https://docs.docker.com/engine/release-notes/prior-releases/#180-2015-08-11" rel="noopener noreferrer"&gt;Docker 1.8.0&lt;/a&gt;, the daemon could be started with the docker command using the &lt;code&gt;-d&lt;/code&gt; flag, but it was changed to &lt;code&gt;docker daemon&lt;/code&gt; replacing the flag with a subcommand. Then &lt;a href="https://docs.docker.com/engine/release-notes/prior-releases/#deprecation" rel="noopener noreferrer"&gt;that became deprecated too&lt;/a&gt; in Docker 1.13 after the dockerd binary was introduced in &lt;a href="https://docs.docker.com/engine/release-notes/prior-releases/#1120-2016-07-28" rel="noopener noreferrer"&gt;Docker 1.12&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Rootless vs rootful Docker
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Originally, the Docker daemon required a root user. Running it as a non-root user &lt;a href="https://docs.docker.com/engine/release-notes/19.03/#runtime-11" rel="noopener noreferrer"&gt;became possible in Docker CE 19.03&lt;/a&gt; as an experimental feature; it had already been one of the core features of Podman (we will talk about it later) from the beginning. This is what we call "rootless mode", so the original mode became the "rootful mode".&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker daemon dependencies
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Originally, Docker was a single binary. Before the "dockerd" command for the daemon was introduced, &lt;code&gt;docker-containerd&lt;/code&gt;, &lt;code&gt;docker-containerd-shim&lt;/code&gt; and &lt;code&gt;docker-runc&lt;/code&gt; were introduced in &lt;a href="https://docs.docker.com/engine/release-notes/prior-releases/#1110-2016-04-13" rel="noopener noreferrer"&gt;Docker 1.11&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So we once had a single binary, and then Docker, Inc. started separating the functionality into multiple binaries on Linux. That was the beginning of the dependencies and components we have today, except that these dependencies are now not limited to Docker. &lt;a href="https://containerd.io/" rel="noopener noreferrer"&gt;containerd&lt;/a&gt; can also be the &lt;a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd" rel="noopener noreferrer"&gt;container runtime of Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As the "docker" command is the client for "dockerd", &lt;a href="https://github.com/containerd/nerdctl" rel="noopener noreferrer"&gt;nerdctl&lt;/a&gt; is the client for containerd, although the project was started only &lt;a href="https://github.com/containerd/nerdctl/commit/f0d302cac40fbdbfcfe74a3ba5cbefdf2f5b3741" rel="noopener noreferrer"&gt;at the end of 2020&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now we have dockerd which uses containerd, but containerd will not create containers directly. It needs a runtime and the &lt;a href="https://github.com/opencontainers/runc" rel="noopener noreferrer"&gt;default runtime is runc&lt;/a&gt;, but that can be changed. containerd actually doesn't have to know the parameters of the runtime. There is a shim process between containerd and runc, so containerd knows the parameters of the shim, and the shim knows the parameters of runc or other runtimes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Podman pretending to be Docker
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;I admit the title of this section sounds worse than it is, but the fact is that sometimes when you install &lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;Podman&lt;/a&gt;, you can also get an alias called "docker" pointing to "podman". That can make you believe that you are running Docker and come to the Docker forum asking about an issue which is actually not related to Docker. The alias exists because Podman tries to keep its command line interface similar to that of Docker, so people who rely on an existing docker command don't have to rewrite their scripts, if they are lucky.&lt;/p&gt;
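&lt;p&gt;The alias itself is nothing magical. A minimal sketch of what such a package effectively does in your shell (assuming Bash; the exact mechanism varies by distribution):&lt;/p&gt;

```shell
# Define the alias the way a Podman package might
alias docker=podman

# Print the definition: the "docker" you type would really run podman
alias docker
```

&lt;p&gt;Printing the definition with the &lt;code&gt;alias&lt;/code&gt; builtin is a quick way to check whether your "docker" command is an alias at all.&lt;/p&gt;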

&lt;p&gt;But again, Podman is not Docker and Podman Desktop is not Docker Desktop!&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;docker version&lt;/code&gt; and &lt;code&gt;docker info&lt;/code&gt; commands can give you an idea of what you are actually using.&lt;/p&gt;

&lt;h2&gt;
  
  
  Packages and versioning
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;In my mind, the core components of Docker CE are the Docker daemon and the Docker client. When you install Docker CE on Linux, after adding the official APT repository provided by Docker, Inc., the following packages can be installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;docker-ce&lt;/strong&gt;: The daemon&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;docker-ce-rootless-extras&lt;/strong&gt;: Scripts to run rootless Docker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;docker-ce-cli&lt;/strong&gt;: The docker command&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These packages have the same version. You can use a different version of the command line interface (cli), but if you want to make sure that the docker command is compatible with the API of the Docker daemon, use the same versions.&lt;/p&gt;

&lt;p&gt;Everything else listed below has completely independent version numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;docker-compose (Compose v1)&lt;/li&gt;
&lt;li&gt;Plugins

&lt;ul&gt;
&lt;li&gt;docker-compose-plugin (Compose v2)&lt;/li&gt;
&lt;li&gt;docker-buildx&lt;/li&gt;
&lt;li&gt;docker-init (only in Docker Desktop)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Docker Desktop&lt;/li&gt;

&lt;li&gt;Docker daemon dependencies

&lt;ul&gt;
&lt;li&gt;containerd&lt;/li&gt;
&lt;li&gt;containerd-shim*&lt;/li&gt;
&lt;li&gt;runc&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Docker has a long history, and it changed a lot during the years, but one thing was always true. You always ran containers and not dockers. If you say "I want to run multiple dockers" on the Docker forum, we will think that you want to run multiple Docker daemons. Well... since we are kind of trained by our users, we will eventually realize what you mean, but knowing the right terms helps you communicate your issue better and get an answer faster.&lt;/p&gt;

&lt;p&gt;Before you ask a question, always think about what you are using and communicate that to the potential helpers.&lt;/p&gt;

&lt;p&gt;And finally, you can find an important screenshot from the video below that shows all the variants and components I described in this post. Click on the image to download it in full size.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://1drv.ms/i/s!ArZ8adYZAGedgpkyXlN_K1EnV4T72A?embed=1&amp;amp;width=1920&amp;amp;height=1080" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2F1drv.ms%2Fi%2Fs%21ArZ8adYZAGedgpkyXlN_K1EnV4T72A%3Fembed%3D1%26width%3D660" alt="Docker variants and components diagram" width="660" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Install Docker and Portainer in a VM using Ansible</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Sun, 02 Jun 2024 14:05:28 +0000</pubDate>
      <link>https://dev.to/rimelek/install-docker-and-portainer-in-a-vm-using-ansible-21ib</link>
      <guid>https://dev.to/rimelek/install-docker-and-portainer-in-a-vm-using-ansible-21ib</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This episode is actually why I started this series in the first place. I am an active Docker user and Docker fan, but I like containers and DevOps topics in general. I am a &lt;a href="https://forums.docker.com/u/rimelek" rel="noopener noreferrer"&gt;moderator on the official Docker forums&lt;/a&gt;, and I see that people often struggle with the installation process of Docker CE or Docker Desktop. Docker Desktop starts a virtual machine, and its GUI manages the Docker CE instance inside the virtual machine, even on Linux. Even though I prefer not to use a GUI for creating containers, I admit it can be useful in some situations, but you always need to be ready to use the command line, where all the commands are available. In this episode, I will use Ansible to install Docker CE in the previously created virtual machine, and I will also install a web-based graphical interface, &lt;a href="https://www.portainer.io/" rel="noopener noreferrer"&gt;Portainer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/tfv_C1uMLqI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you want to be notified about other videos as well, you can subscribe to my YouTube channel: &lt;a href="https://www.youtube.com/@akos.takacs" rel="noopener noreferrer"&gt;https://www.youtube.com/@akos.takacs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Before you begin

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;Download the already written code of the previous episode&lt;/li&gt;
&lt;li&gt;Have an inventory file&lt;/li&gt;
&lt;li&gt;Activate the Python virtual environment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Small improvements before we start today's main topic

&lt;ul&gt;
&lt;li&gt;Disable gathering facts automatically&lt;/li&gt;
&lt;li&gt;Create an inventory group for LXD playbooks&lt;/li&gt;
&lt;li&gt;Reload ZFS pools after removing LXD to make the playbook more stable&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Add a host to a dynamically created inventory group&lt;/li&gt;

&lt;li&gt;Use a dynamically created inventory group&lt;/li&gt;

&lt;li&gt;

Install Docker CE in a VM using Ansible

&lt;ul&gt;
&lt;li&gt;Using 3rd-party roles to install Docker&lt;/li&gt;
&lt;li&gt;Default variables for the docker role&lt;/li&gt;
&lt;li&gt;Add docker to the Ansible roles&lt;/li&gt;
&lt;li&gt;Install the dependencies of Docker&lt;/li&gt;
&lt;li&gt;Configure the official APT repository&lt;/li&gt;
&lt;li&gt;Install a specific version of Docker CE&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Allow non-root users to use the docker commands&lt;/li&gt;

&lt;li&gt;Install Portainer CE, the web-based GUI for containers&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The project requires Nix which we discussed in &lt;a href="https://dev.to/rimelek/install-ansible-8-on-ubuntu-2004-lts-using-nix-46hm"&gt;Install Ansible 8 on Ubuntu 20.04 LTS using Nix&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You will also need an Ubuntu remote server. I recommend an Ubuntu 22.04 virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Download the already written code of the previous episode
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you started the tutorial with this episode, clone the project from GitHub:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git clone https://github.com/rimelek/homelab.git
&lt;span class="nb"&gt;cd &lt;/span&gt;homelab


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If you cloned the project now, or you want to make sure you are using the exact same code I did, switch to the previous episode in a new branch&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; tutorial.episode.8b tutorial.episode.9.1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Have the inventory file
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Copy the inventory template&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cp &lt;/span&gt;inventory-example.yml inventory.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Change &lt;code&gt;ansible_host&lt;/code&gt; to the IP address of your Ubuntu server that you use for this tutorial,&lt;/li&gt;
&lt;li&gt;and change &lt;code&gt;ansible_user&lt;/code&gt; to the username on the remote server that Ansible can use to log in.&lt;/li&gt;
&lt;li&gt;If you still don't have an SSH private key, read the &lt;a href="https://dev.to/rimelek/ansible-playbook-and-ssh-keys-33bo#generate-an-ssh-key"&gt;Generate an SSH key part of Ansible playbook and SSH keys&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;If you want to run the playbook called &lt;code&gt;playbook-lxd-install.yml&lt;/code&gt;, you will need to configure a physical or virtual disk, which I wrote about in &lt;a href="https://dev.to/rimelek/the-simplest-way-to-install-lxd-using-ansible-h5o#install-zfs-utils-and-create-a-zfs-pool"&gt;The simplest way to install LXD using Ansible&lt;/a&gt;. If you don't have a usable physical disk, look for &lt;code&gt;truncate -s 50G &amp;lt;PATH&amp;gt;/lxd-default.img&lt;/code&gt; to create a virtual disk.&lt;/li&gt;
&lt;li&gt;You will need an encrypted secret file, which I wrote about in the &lt;a href="https://dev.to/rimelek/use-sops-in-ansible-to-read-your-secrets-2gfa#encrypt-a-file"&gt;Encrypt a file section of "Use SOPS in Ansible to read your secrets"&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
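&lt;p&gt;For orientation, the relevant part of the edited &lt;code&gt;inventory.yml&lt;/code&gt; could look like this (the hostname, IP address and username are placeholders for your own values):&lt;/p&gt;

```yaml
all:
  hosts:
    YOURHOSTNAME:
      ansible_host: 192.168.1.50
      ansible_user: YOURUSER
```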
&lt;h3&gt;
  
  
  Activate the Python virtual environment
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;How you activate the virtual environment depends on how you created it. The episode &lt;a href="https://dev.to/rimelek/the-first-ansible-playbook-579h#install-ansible"&gt;The first Ansible playbook&lt;/a&gt; describes how to create and activate the virtual environment using the "venv" Python module, and in the episode &lt;a href="https://dev.to/rimelek/the-first-ansible-role-paf"&gt;The first Ansible role&lt;/a&gt; we created helper scripts as well, so if you haven't created it yet, you can create the environment by running&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./create-nix-env.sh venv


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Optionally start an ssh agent:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ssh-agent &lt;span class="nv"&gt;$SHELL&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;and activate the environment with&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;homelab-env.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Small improvements before we start today's main topic
&lt;/h2&gt;

&lt;p&gt;You can skip this part too if you joined the tutorial at this episode and you don't want to improve the other playbooks.&lt;/p&gt;
&lt;h3&gt;
  
  
  Disable gathering facts automatically
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We discussed facts before in the "&lt;a href="https://dev.to/rimelek/using-facts-and-the-github-api-in-ansible-4i00"&gt;Using facts and the GitHub API in Ansible&lt;/a&gt;" episode, but we left this setting on default in other playbooks. Let's quickly add &lt;code&gt;gather_facts: false&lt;/code&gt; to all playbooks, except &lt;code&gt;playbook-hello.yml&lt;/code&gt; as that was to demonstrate how a playbook runs.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create an inventory group for LXD playbooks
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Now that we have a separate group for the virtual machines that run Docker, we can also create a new group for LXD, so when we add more machines, we will not install LXD on every single machine, and we will not remove it from a machine on which it was not installed. Let's add the following to the &lt;code&gt;inventory.yml&lt;/code&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;lxd_host_machines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;YOURHOSTNAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Replace &lt;code&gt;YOURHOSTNAME&lt;/code&gt; with your actual hostname which you used in the inventory under the special group called "&lt;code&gt;all&lt;/code&gt;".&lt;/p&gt;

&lt;p&gt;In my case, it is the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;lxd_host_machines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ta-lxlt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;And now, replace &lt;code&gt;hosts: all&lt;/code&gt; in &lt;code&gt;playbook-lxd-install.yml&lt;/code&gt; and &lt;code&gt;playbook-lxd-remove.yml&lt;/code&gt; with &lt;code&gt;hosts: lxd_host_machines&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Reload ZFS pools after removing LXD to make the playbook more stable
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;When I wrote the "&lt;a href="https://dev.to/rimelek/remove-lxd-using-ansible-3i7i"&gt;Remove LXD using Ansible&lt;/a&gt;" episode, the playbook worked for me every time. Since then, I have noticed that sometimes it cannot delete the ZFS pool, because the pool appears to be missing. I couldn't figure out why that happens, but a workaround can be implemented to make the playbook more stable. We have to restart the &lt;code&gt;zfs-import-cache&lt;/code&gt; systemd service, which reloads the ZFS pools so that the next task can delete the pool and the disks can be wiped as well.&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;roles/zfs_destroy_pool/tasks/main.yml&lt;/code&gt; and look for the following task:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get zpool facts&lt;/span&gt;
  &lt;span class="na"&gt;ignore_errors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;community.general.zpool_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;zfs_destroy_pool_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_zpool_facts_task&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;All we have to do is add a new task before it:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# To fix the issue of missing ZFS pool after uninstalling LXD&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Restart ZFS import cache&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.systemd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restarted&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;zfs-import-cache&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/systemd_module.html" rel="noopener noreferrer"&gt;built-in systemd module&lt;/a&gt; can restart a Systemd service if the "state" is "restarted".&lt;/p&gt;
&lt;h2&gt;
  
  
  Add a host to a dynamically created inventory group
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Since we already configured our SSH client for the newly created virtual machine last time, we could create a new inventory group and add the virtual machine to that group. But sometimes you don't want to do that, or you can't. That's why I wanted to show you a way to create a new inventory group without changing the inventory file. Then we can add a host to this group. Since last time we also used Ansible to get the IP address of the new virtual machine, we can add that IP to the inventory group. The next task will show you how you can do that.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

    &lt;span class="c1"&gt;# region task: Add Docker VM to Ansible inventory&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add Docker VM to Ansible inventory&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.add_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_lxd_docker_vm&lt;/span&gt;
        &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_inventory_hostname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;ansible_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;ansible_become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="c1"&gt;# ansible_host is not necessary, inventory hostname will be used&lt;/span&gt;
        &lt;span class="na"&gt;ansible_ssh_private_key_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_ssh_priv_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="c1"&gt;#ansible_ssh_common_args: "-o StrictHostKeyChecking=no"&lt;/span&gt;
        &lt;span class="na"&gt;ansible_ssh_host_key_checking&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You need to add the above task to the "Create the VM" play in the &lt;code&gt;playbook-lxd-docker-vm.yml&lt;/code&gt; playbook. The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/add_host_module.html" rel="noopener noreferrer"&gt;builtin add_host module&lt;/a&gt; is what we need. Despite what you see in the task, it has only two parameters. Everything else is a variable that you could use in the inventory file as well. The &lt;code&gt;groups&lt;/code&gt; and &lt;code&gt;name&lt;/code&gt; parameters have aliases, and I thought that using the &lt;code&gt;hostname&lt;/code&gt; alias of &lt;code&gt;name&lt;/code&gt; would be better, as we indeed add a hostname or an IP as its value. &lt;code&gt;groups&lt;/code&gt; can be a list or a string. I defined it as a string since I have only one group to which I will add the VM.&lt;/p&gt;

&lt;p&gt;I mentioned before that I like to start the name of helper variables with an underscore. The name of the group is not a variable, but I start it with an underscore, so I will know it is a temporary, dynamically created inventory group. We have some variables that we defined in the previous episode.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;vm_inventory_hostname&lt;/code&gt;: The inventory hostname of the VM, which is also used in the SSH client configuration. It actually comes from the value of &lt;code&gt;config_lxd_docker_vm_inventory_hostname&lt;/code&gt; with a default in case it is not defined.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;config_lxd_docker_vm_user&lt;/code&gt;: The user that we created using cloud init and with which we can SSH into the VM.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;config_lxd_docker_vm_pass&lt;/code&gt;: The sudo password of the user. It comes from a secret.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;vm_ssh_priv_key&lt;/code&gt;: The path of the SSH private key used for the SSH connection. It is just a short alias for &lt;code&gt;config_lxd_docker_vm_ssh_priv_key&lt;/code&gt; which can be defined in the inventory file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using these variables, we could configure the SSH connection parameters like &lt;code&gt;ansible_user&lt;/code&gt;, &lt;code&gt;ansible_become_pass&lt;/code&gt; and &lt;code&gt;ansible_ssh_private_key_file&lt;/code&gt;. We also have a new variable, &lt;code&gt;ansible_ssh_host_key_checking&lt;/code&gt;. When we first SSH to a remote server, we need to accept the fingerprint of the server's SSH host key, which means we know and trust the server. Since we dynamically created this virtual machine and detected its IP address, we would have to accept the fingerprint every time we recreate the VM, so I simply disable host key checking by setting the boolean value &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;
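&lt;p&gt;If you are curious, disabling the check through this variable has roughly the same effect as the classic OpenSSH client option. A minimal sketch of an equivalent &lt;code&gt;~/.ssh/config&lt;/code&gt; entry (the &lt;code&gt;Host&lt;/code&gt; alias below is a made-up example, not something this playbook creates):&lt;/p&gt;

```
# Equivalent effect for a single host (illustrative sketch)
Host my-docker-vm
    StrictHostKeyChecking no
```

The Ansible variable is more convenient here, because it only affects this dynamically created host and leaves your SSH client configuration untouched.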
&lt;h2&gt;
  
  
  Use a dynamically created inventory group
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We have a new inventory group, but we still don't use it. Now we need a new play in the playbook which will actually use this group. Add the following play skeleton to the end of &lt;code&gt;playbook-lxd-docker-vm.yml&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# play: Configure the OS in the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure the OS in the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_lxd_docker_vm&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# endregion&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Even though we forced Ansible to wait until the virtual machine gets an IP address, having an IP address doesn't mean the SSH daemon is ready in the VM. So we need the following pre task:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Waiting for SSH connection&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.wait_for_connection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
        &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
        &lt;span class="na"&gt;sleep&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
        &lt;span class="na"&gt;connect_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/wait_for_connection_module.html" rel="noopener noreferrer"&gt;built-in wait_for_connection module&lt;/a&gt; can be used to retry connecting to the servers. We start checking it immediately, so we set the delay to 0. If we are lucky, it will be ready right away. If it does not connect in 2 seconds (&lt;code&gt;connect_timeout&lt;/code&gt;), Ansible will "sleep" for 3 seconds and try again. If the connection is not made in 20 seconds (&lt;code&gt;timeout&lt;/code&gt;), the task will fail.&lt;/p&gt;

&lt;p&gt;While I was testing the almost finished playbook, I realized that the installation of some packages sometimes failed as if they were not in the APT cache yet, so I added a new pre task to update the APT cache before we start configuring the VM. We already discussed this module in &lt;a href="https://dev.to/rimelek/using-facts-and-the-github-api-in-ansible-4i00"&gt;Using facts and the GitHub API in Ansible&lt;/a&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Update APT cache&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If you don't want to add it, you can just rerun the playbook and it will probably work. We also have an already written Ansible role, &lt;code&gt;cli_tools&lt;/code&gt;, which we want to use here too. So this is what our second play looks like in &lt;code&gt;playbook-lxd-docker-vm.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# play: Configure the OS in the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure the OS in the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_lxd_docker_vm&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Waiting for SSH connection&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.wait_for_connection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;
        &lt;span class="na"&gt;delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
        &lt;span class="na"&gt;sleep&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
        &lt;span class="na"&gt;connect_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Update APT cache&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools&lt;/span&gt;
&lt;span class="c1"&gt;# endregion&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now you can delete the virtual machine if you already created it last time, and run the playbook with the new play to create the virtual machine again and immediately install the command line tools in it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Install Docker CE in a VM using Ansible
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Using 3rd-party roles to install Docker
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;I admit that there are already great existing roles we could use to install Docker. For example, the one &lt;a href="https://galaxy.ansible.com/ui/standalone/roles/geerlingguy/docker/" rel="noopener noreferrer"&gt;made by Jeff Geerling&lt;/a&gt; supports multiple Linux distributions, so feel free to use it. But we are still practicing writing our own roles, so I made a simple one for you, although it works only on Ubuntu. On the other hand, I will add something that even Jeff Geerling didn't do.&lt;/p&gt;
&lt;h3&gt;
  
  
  Default variables for the docker role
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We will create some default variables in &lt;code&gt;roles/docker/defaults/main.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;docker_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*.*.*"&lt;/span&gt;
&lt;span class="na"&gt;docker_sudo_users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The first, &lt;code&gt;docker_version&lt;/code&gt;, defines which version we want to install. When you are just starting to play with Docker but don't want to use &lt;a href="https://labs.play-with-docker.com/" rel="noopener noreferrer"&gt;Play with Docker&lt;/a&gt;, you probably want to install the latest version. That's why the default value is "&lt;code&gt;*.*.*&lt;/code&gt;", which means the latest major, minor and patch version of Docker CE. You will see the implementation soon. The second variable is &lt;code&gt;docker_sudo_users&lt;/code&gt;, which is an empty list by default. We will be able to add users to the list who should be able to use Docker. We will discuss it later in more detail.&lt;/p&gt;
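&lt;p&gt;To make the version pattern concrete, here are a few example values you could set for the role (pick one; the exact version numbers are placeholders for illustration):&lt;/p&gt;

```
docker_version: "*.*.*"   # the latest available version (the default)
docker_version: "26.0.*"  # the latest patch release of Docker CE 26.0
docker_version: "26.1.3"  # one exact version
```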
&lt;h3&gt;
  
  
  Add docker to the Ansible roles
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Before we continue, let's add "docker" as a new role to our second play in &lt;code&gt;playbook-lxd-docker-vm.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

  &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
      &lt;span class="na"&gt;docker_sudo_users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_docker_sudo_users&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default([])&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You know very well by now that this way we can add the new config variable to &lt;code&gt;inventory.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_docker_vm_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manager&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_docker_vm_docker_sudo_users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Note that &lt;code&gt;config_lxd_docker_vm_user&lt;/code&gt; is probably already defined if you followed the previous episodes as well.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install the dependencies of Docker
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As I say very often, we should always start with the &lt;a href="https://docs.docker.com/engine/install/ubuntu/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;. It starts with uninstalling old and unofficial packages. Our role will not include that step, so you can do it manually or write your own role as homework. Then it updates the APT cache, which we just did as a pre task, so we will install the dependencies first. The official documentation says:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;ca-certificates curl


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Which looks like this in Ansible in &lt;code&gt;roles/docker/tasks/main.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ca-certificates&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;curl&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: We know that the "cli_tools" role already installed curl, but we don't care, because when we create a role, we try to make it without depending on other roles. So even if we decide later not to use the "cli_tools" role, our "docker" role will still work perfectly.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configure the official APT repository
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The official documentation continues with creating a folder, &lt;code&gt;/etc/apt/keyrings&lt;/code&gt;. It uses the&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;command, but it really just creates a folder this time, which looks like this in Ansible:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;


&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Make sure the folder of the keyrings exists&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;directory&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0755&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/apt/keyrings&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The next step is downloading the APT key for the repository. Previously, the documentation used the &lt;code&gt;apt-key&lt;/code&gt; command which was deprecated on Ubuntu, so it was replaced with the following:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.asc
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.asc


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Which looks like this in Ansible:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install the APT key of Docker's APT repo&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.get_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://download.docker.com/linux/ubuntu/gpg&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/apt/keyrings/docker.asc&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;a+r&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then the official documentation shows how you can add the repository to APT depending on the CPU architecture and Ubuntu release code name, like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Add the repository to Apt sources:&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; /etc/os-release &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_CODENAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In the "&lt;a href="https://dev.to/rimelek/using-facts-and-the-github-api-in-ansible-4i00"&gt;Using facts and the GitHub API in Ansible&lt;/a&gt;" episode we already learned to get the architecture. We also need the release code name, so we gather the &lt;code&gt;distribution_release&lt;/code&gt; subset of Ansible facts as well.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get distribution release fact&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.setup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;gather_subset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;distribution_release&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;architecture&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Before we continue with the next task, we will add some variables to &lt;code&gt;roles/docker/vars/main.yml&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;docker_archs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;x86_64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;amd64&lt;/span&gt;
  &lt;span class="na"&gt;amd64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;amd64&lt;/span&gt;
  &lt;span class="na"&gt;aarch64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
  &lt;span class="na"&gt;arm64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;arm64&lt;/span&gt;
&lt;span class="na"&gt;docker_arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;docker_archs[ansible_facts.architecture]&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;span class="na"&gt;docker_distribution_release&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible_facts.distribution_release&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Again, this is very similar to what we have done before to separate our helper variables from the tasks, and now we can generate &lt;code&gt;docker.list&lt;/code&gt; under &lt;code&gt;/etc/apt/sources.list.d/&lt;/code&gt;. To do that, we add the new task in &lt;code&gt;roles/docker/tasks/main.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add Docker APT repository&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.apt_repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;filename&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
    &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;deb&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;[arch={{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;docker_arch&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;signed-by=/etc/apt/keyrings/docker.asc]&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;https://download.docker.com/linux/ubuntu&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;docker_distribution_release&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;stable"&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
    &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html" rel="noopener noreferrer"&gt;built-in apt_repository module&lt;/a&gt; can also update APT cache after adding the new repo. The filename is automatically generated if we don't set it, but the "filename" parameter is actually a name without the extension, so do not add &lt;code&gt;.list&lt;/code&gt; at the end of the name.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install a specific version of Docker CE
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The official documentation recommends using the following command to list the available versions of Docker CE on Ubuntu.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apt-cache madison docker-ce | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{ print $3 }'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;It shows the full package version string, which includes more than just the version of Docker CE. Fortunately, the actual version number can be parsed like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apt-cache madison docker-ce &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'$3 ~ /^([0-9]+:)([0-9]+\.[0-9]+\.[0-9]+)(-[0-9]+)?(~.*)$/ {print $3}'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Where the version number is the second expression in parentheses.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

[0-9]+\.[0-9]+\.[0-9]+


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We can replace it with an actual version number, keeping the backslashes:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

26\.1\.3


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Let's search for only that version:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apt-cache madison docker-ce &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'$3 ~ /^([0-9]+:)26\.1\.3(-[0-9]+)?(~.*)$/ {print $3}'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

5:26.1.3-1~ubuntu.22.04~jammy


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
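&lt;p&gt;You can sanity-check the pattern without a configured Docker repository by feeding a fabricated madison-style line through the same &lt;code&gt;awk&lt;/code&gt; filter (the line below is a sample for illustration, not real repository output):&lt;/p&gt;

```shell
# A fabricated line in the format printed by "apt-cache madison docker-ce";
# awk splits fields on whitespace, so the version string is field $3
line='docker-ce | 5:26.1.3-1~ubuntu.22.04~jammy | https://download.docker.com/linux/ubuntu jammy/stable amd64 Packages'
echo "$line" | awk '$3 ~ /^([0-9]+:)26\.1\.3(-[0-9]+)?(~.*)$/ {print $3}'
# prints: 5:26.1.3-1~ubuntu.22.04~jammy
```

If the version does not match the pattern, awk prints nothing, which is exactly how the Ansible task ends up with an empty result for a non-existent version.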
&lt;p&gt;This is what we will implement in Ansible:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get full package version for {{ docker_version }}&lt;/span&gt;
  &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;apt-cache madison docker-ce \&lt;/span&gt;
    &lt;span class="s"&gt;| awk '$3 ~ /^([0-9]+:){{ docker_version | replace('*', '[0-9]+') | replace('.', '\.') }}(-[0-9]+)?(~.*)$/ {print $3}'&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_docker_versions_command&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I hope it now starts to make sense why the default value of &lt;code&gt;docker_version&lt;/code&gt; was &lt;code&gt;*.*.*&lt;/code&gt;: we replace each asterisk with a regular expression, and we escape all dots, since an unescaped dot would mean "any character" in the regular expression. This solution installs the latest version unless we override the default value with an actual version number. Even when we override it, we can use a version like &lt;code&gt;26.0.*&lt;/code&gt; to get the list of available patch versions of Docker CE 26.0 instead of the latest major version. Of course, this still yields a list of versions unless we set a specific version number, but we can take the first line in the next task. According to the official documentation, we would install Docker CE and related packages like this:&lt;/p&gt;
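To see what the Jinja2 filters in the task above actually produce, here is a rough, hypothetical shell sketch of the same transformation (the `version_pattern_to_regex` function name is mine, not part of the role); it turns a version pattern like `*.*.*` into the anchored regular expression the awk program matches against:

```shell
# Hypothetical helper (not part of the role) mirroring the Jinja2 chain:
#   docker_version | replace('*', '[0-9]+') | replace('.', '\.')
version_pattern_to_regex() {
  pattern=$(printf '%s' "$1" | sed -e 's/\*/[0-9]+/g' -e 's/\./\\./g')
  printf '^([0-9]+:)%s(-[0-9]+)?(~.*)$\n' "$pattern"
}

version_pattern_to_regex '*.*.*'    # the default: any version
version_pattern_to_regex '26.1.3'   # a pinned version

# The generated pattern matches full apt version strings:
printf '%s\n' '5:26.1.3-1~ubuntu.22.04~jammy' \
  | grep -E "$(version_pattern_to_regex '26.1.3')"
```

Note that the order of the two replacements matters only in the sense that escaping the dots must not touch the `[0-9]+` snippets, which contain no dots, so both orders happen to work here.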
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;VERSION_STRING&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5:26.1.0-1~ubuntu.24.04~noble
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_STRING&lt;/span&gt; docker-ce-cli&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_STRING&lt;/span&gt; containerd.io docker-buildx-plugin docker-compose-plugin


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Let's do it in Ansible:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Docker CE&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;_full_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_docker_versions_command.stdout_lines[0]&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-ce={{ _full_version }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-ce-cli={{ _full_version }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-ce-rootless-extras={{ _full_version }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;containerd.io&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-buildx-plugin&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-compose-plugin&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;What the documentation does not mention is marking the Docker CE packages as held. In the terminal it would look like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras containerd.io docker-compose-plugin


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;It is almost the same in Ansible, since we need to use the built-in command module:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Hold Docker CE packages&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras containerd.io docker-compose-plugin&lt;/span&gt;
  &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You can check the list of held packages:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apt-mark showheld


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Output:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

containerd.io
docker-ce
docker-ce-cli
docker-ce-rootless-extras
docker-compose-plugin


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Note: I have never actually seen an upgraded containerd cause problems, but it is a very important component of Docker CE, so I decided to hold it too. If it causes any problem, you can "unhold" it at any time by running the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

apt-mark unhold containerd.io


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Allow non-root users to use the docker commands
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;This is the part where I don't follow the documentation. The official documentation mentions this on the &lt;a href="https://docs.docker.com/engine/install/linux-postinstall/" rel="noopener noreferrer"&gt;Linux post-installation steps for Docker Engine&lt;/a&gt; page:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Docker daemon binds to a Unix socket, not a TCP port. By default it's the root user that owns the Unix socket, and other users can only access it using sudo. The Docker daemon always runs as the root user.&lt;/p&gt;

&lt;p&gt;If you don't want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group. On some Linux distributions, the system automatically creates this group when installing Docker Engine using a package manager. In that case, there is no need for you to manually create the group.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Of course, using the &lt;code&gt;docker&lt;/code&gt; group is not really secure, which is also mentioned right after the previous quote in the documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The docker group grants root-level privileges to the user. For details on how this impacts security in your system, see &lt;a href="https://docs.docker.com/engine/security/#docker-daemon-attack-surface" rel="noopener noreferrer"&gt;Docker Daemon Attack Surface&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If we want a slightly more secure solution, we don't use the &lt;code&gt;docker&lt;/code&gt; group, which can directly access the Docker socket, but instead create another group like &lt;code&gt;docker-sudo&lt;/code&gt; and allow all users in this group to run the docker command as root by using &lt;code&gt;sudo docker&lt;/code&gt; without a password. It involves creating a new rule in &lt;code&gt;/etc/sudoers.d/docker&lt;/code&gt; like:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%docker-sudo ALL=(root) NOPASSWD: /usr/bin/docker


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This would force users to always run &lt;code&gt;sudo docker&lt;/code&gt;, not just &lt;code&gt;docker&lt;/code&gt;, and they would often forget it and get an error message. We could add an alias like&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'sudo \docker'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;to &lt;code&gt;~/.bash_aliases&lt;/code&gt;, but that would work only when the user uses the bash shell. Instead, we can add a new script at &lt;code&gt;/usr/local/bin/docker&lt;/code&gt;,&lt;br&gt;
which usually takes precedence over &lt;code&gt;/usr/bin/docker&lt;/code&gt; on the &lt;code&gt;PATH&lt;/code&gt;, and put this command in the script:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;#!/usr/bin/env sh&lt;/span&gt;

&lt;span class="nb"&gt;exec sudo&lt;/span&gt; /usr/bin/docker &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This is what we will do with Ansible: our new script will be executed, and it will call the original docker command as an argument of sudo. Now even when we use Visual Studio Code's Remote Explorer to connect to the remote virtual machine and use Docker in the VM from VSCode, &lt;code&gt;/var/log/auth.log&lt;/code&gt; on Debian-based systems will contain exactly which docker commands were executed. If you don't find this file, it may be called &lt;code&gt;/var/log/secure&lt;/code&gt; on your system. This is, for example, what browsing files in containers from VSCode looks like:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

May 19 20:14:36 docker sudo:  manager : PWD=/home/manager ; USER=root ; COMMAND=/usr/bin/docker container exec --interactive d71cae80db867ee79ba66fa947ab126ac6f7b0e482ebb8b3320d9f3bfa3fb3e6 /bin/sh -c 'stat -c \'%f %h %g %u %s %X %Y %Z %n\' "/"* || true &amp;amp;&amp;amp; stat -c \'%f %h %g %u %s %X %Y %Z %n\' "/".*'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This is useful when you want to investigate accidental damage caused by a Docker user who executed a command that they should not have, and who doesn't even know what they executed. It will not protect you from intentional harm, as an actual attacker could also delete the logs. On the other hand, if you have a remote logging server where you collect logs from all machines, you will probably still have the logs to figure out what happened.&lt;/p&gt;

&lt;p&gt;Now let's configure this in Ansible.&lt;/p&gt;

&lt;p&gt;First we will create the &lt;code&gt;docker-sudo&lt;/code&gt; group:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ensure group "docker-sudo" exists&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker-sudo&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now we can finally use our &lt;code&gt;docker_sudo_users&lt;/code&gt; variable, which we defined in &lt;code&gt;roles/docker/defaults/main.yml&lt;/code&gt;, and check whether any of those users doesn't exist.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check if docker sudo users are existing users&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.getent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;passwd&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;item&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;loop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;docker_sudo_users&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/getent_module.html" rel="noopener noreferrer"&gt;built-in getent module&lt;/a&gt; basically calls the &lt;code&gt;getent&lt;/code&gt; command on Linux:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

getent passwd manager


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
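As a quick aside, you can see the exit-code behavior this check relies on directly in a shell (the `user_exists` wrapper is just an illustration, not part of the role); `getent` exits with status 0 when the key is found and with a non-zero status otherwise:

```shell
# getent exits with status 0 if the key exists in the database,
# and with a non-zero status otherwise -- which is what makes
# the Ansible task fail for unknown users.
user_exists() {
  getent passwd "$1" > /dev/null
}

user_exists root && echo "root exists"
user_exists no-such-user-xyz || echo "no-such-user-xyz does not exist"
```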
&lt;p&gt;If the user exists, it returns the passwd record of the user, and it fails otherwise. Now let's add the groups to the users:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add users to the docker-sudo group&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;item&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="c1"&gt;# users must be added to docker-sudo group without removing them from other groups&lt;/span&gt;
    &lt;span class="na"&gt;append&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker-sudo&lt;/span&gt;
  &lt;span class="na"&gt;loop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;docker_sudo_users&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We used the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/user_module.html" rel="noopener noreferrer"&gt;built-in user module&lt;/a&gt; to add the &lt;code&gt;docker-sudo&lt;/code&gt; group to the users defined in &lt;code&gt;docker_sudo_users&lt;/code&gt;. We are close to the end. The next step is creating the script, using a solution similar to the one we used in the &lt;code&gt;hello_world&lt;/code&gt; role at the beginning of the series.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create a sudo wrapper for Docker&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;#!/usr/bin/env sh&lt;/span&gt;

      &lt;span class="s"&gt;exec sudo /usr/bin/docker "$@"&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/bin/docker&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0755&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;And finally, using the same method, we create the sudoers rule:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow run execute /usr/bin/docker as root without password&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;%docker-sudo ALL=(root) NOPASSWD: /usr/bin/docker&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/sudoers.d/docker&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We could now run the playbook to install Docker in the virtual machine:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./run.sh playbook-lxd-docker-vm.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Install Portainer CE, the web-based GUI for containers
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Installing &lt;a href="https://docs.portainer.io/start/install-ce" rel="noopener noreferrer"&gt;Portainer CE&lt;/a&gt; is the easy part, actually. We will need the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html" rel="noopener noreferrer"&gt;built-in pip module&lt;/a&gt; to install a Python dependency that Ansible needs to manage Docker, and then we will also use a &lt;a href="https://docs.ansible.com/ansible/2.9/modules/docker_container_module.html" rel="noopener noreferrer"&gt;community module called docker_container&lt;/a&gt;. Let's create the tasks file first at &lt;code&gt;roles/portainer/tasks/main.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Python requirements&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.pip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install portainer&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;community.docker.docker_container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;portainer_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
    &lt;span class="na"&gt;container_default_behavior&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;no_defaults&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;portainer_image&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;restart_policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;portainer_external_port&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:9443"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;portainer_volume_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:/data"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After defining the variables, this will be basically equivalent to running the following in a shell:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 9443:9443 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; portainer &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--restart&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;always &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /var/run/docker.sock:/var/run/docker.sock &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; portainer_data:/data &lt;span class="se"&gt;\&lt;/span&gt;
  portainer/portainer-ce:2.20.2-alpine


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Let's add our defaults at &lt;code&gt;roles/portainer/defaults/main.yml&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;portainer_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;portainer&lt;/span&gt;
&lt;span class="na"&gt;portainer_image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;portainer/portainer-ce:2.20.2-alpine&lt;/span&gt;
&lt;span class="na"&gt;portainer_external_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9443&lt;/span&gt;
&lt;span class="na"&gt;portainer_volume_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;portainer_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}_data"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;That's it, and now we add the role to the playbook:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

  &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
      &lt;span class="na"&gt;docker_sudo_users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_docker_sudo_users&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default([])&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;portainer&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;When you have finished installing Portainer, you need to quickly open it in a web browser on port 9443. If you do it on a machine that is directly reachable on your LAN, you can simply open it like &lt;code&gt;https://192.168.4.58:9443&lt;/code&gt;. In this tutorial, our virtual machine needs an SSH tunnel like the one below, so you can use &lt;code&gt;https://127.0.0.1:9443&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="s"&gt;ssh -L 9443:127.0.0.1:9443 -N docker.lxd.ta-lxlt&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Your hostname will be different. If you already have other containers with a forwarded port, you can add more ports to the tunnel:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="s"&gt;ssh \&lt;/span&gt;
  &lt;span class="s"&gt;-L 9443:127.0.0.1:9443 \&lt;/span&gt;
  &lt;span class="s"&gt;-L 32768:127.0.0.1:32768 \&lt;/span&gt;
  &lt;span class="s"&gt;-N \&lt;/span&gt;
  &lt;span class="s"&gt;docker.lxd.ta-lxlt&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If you have containers without forwarded ports from the host, you can forward your local port directly to the container IP.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="s"&gt;ssh \&lt;/span&gt;
  &lt;span class="s"&gt;-L 9443:127.0.0.1:9443 \&lt;/span&gt;
  &lt;span class="s"&gt;-L 32768:127.0.0.1:32768 \&lt;/span&gt;
  &lt;span class="s"&gt;-L 8080:172.17.0.4:80 \&lt;/span&gt;
  &lt;span class="s"&gt;-N \&lt;/span&gt;
  &lt;span class="s"&gt;docker.lxd.ta-lxlt&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;When you can finally open Portainer in your browser, create your first user and configure the connection to the local Docker environment. If you wait too long, the web interface will show an error message, and you will need to go to the terminal in the virtual machine and restart Portainer:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

docker restart portainer


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After that, you can start the configuration. Thanks to this timeout, if you install Portainer on a publicly available server, there is less time for others to log in before you do, and once you have initialized Portainer, it is no longer possible to log in without a password.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;I hope this episode helped you install Docker and allow non-root users to use the docker commands in a slightly more secure way than the documentation suggests. Now you can have a web-based graphical interface for containers; however, Portainer is definitely not Docker Desktop, so you will not have extra features like Docker Desktop extensions. Ansible can help you deploy your entire dev environment, destroy it, and recreate it at any time. Using containers, you can have pre-built and pre-configured applications that you can try out and learn from in order to customize the configuration for your needs. A production environment requires a much stronger focus on security, but now that you have the tools to begin with a dev environment, you can take that next step more easily.&lt;/p&gt;

&lt;p&gt;The final source code of this episode can be found on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rimelek/homelab/tree/tutorial.episode.10" rel="noopener noreferrer"&gt;https://github.com/rimelek/homelab/tree/tutorial.episode.10&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/rimelek" rel="noopener noreferrer"&gt;
        rimelek
      &lt;/a&gt; / &lt;a href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;
        homelab
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Source code to create a home lab. Part of a video tutorial
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;README&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;This project was created to help you build your own home lab where you can test
your applications and configurations without breaking your workstation, so you can
learn on cheap devices without paying for more expensive cloud services.&lt;/p&gt;
&lt;p&gt;The project contains code written for the tutorial, but you can also use parts of it
if you refer to this repository.&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in English: &lt;a href="https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in Hungarian: &lt;a href="https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note: The inventory.yml file is not shared since that depends on the actual environment
so it will be different for everyone. If you want to learn more about the inventory file
watch the videos on YouTube or read the written version on &lt;a href="https://dev.to" rel="nofollow"&gt;https://dev.to&lt;/a&gt;. Links in
the video descriptions on YouTube.&lt;/p&gt;
&lt;p&gt;You can also find an example inventory file in the project root. You can copy that and change
the content, so you will use your IP…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>docker</category>
      <category>lxd</category>
      <category>portainer</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Use Ansible to create and start LXD virtual machines</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Mon, 11 Mar 2024 22:46:31 +0000</pubDate>
      <link>https://dev.to/rimelek/use-ansible-to-create-and-start-lxd-virtual-machines-4nme</link>
      <guid>https://dev.to/rimelek/use-ansible-to-create-and-start-lxd-virtual-machines-4nme</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The next step in our home lab is to finally have an Ansible playbook to create and start a virtual machine. It also means that eventually we will have different kinds of dynamically created servers into which we will probably want to SSH. In this chapter I will show you a way to create an LXD virtual machine on Ubuntu and also automatically configure your host to be able to SSH into the new virtual machine. In this chapter we will assume that we still don't want to manage these virtual machines from Ansible, but that will be the final goal.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In the original post and video I used a different LXD remote image repository. I had to change it, because that repository was surprisingly replaced in an active LTS LXD release. That's because the original developers of LXD &lt;a href="https://linuxcontainers.org/incus/docs/main/" rel="noopener noreferrer"&gt;started Incus&lt;/a&gt; and stopped supporting LXD in their image repository. There is a &lt;a href="https://images.lxd.canonical.com/" rel="noopener noreferrer"&gt;new repo&lt;/a&gt;, which currently does not include Ubuntu server images, only desktop images.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/HMqK3ThhLGc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Before you begin

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;Download the already written code of the previous episode&lt;/li&gt;
&lt;li&gt;Have an inventory file&lt;/li&gt;
&lt;li&gt;Activate the Python virtual environment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Preparing the new inventory&lt;/li&gt;

&lt;li&gt;

Check how Ansible interprets the inventory file

&lt;ul&gt;
&lt;li&gt;Wrapper script to run any command in the Nix environment&lt;/li&gt;
&lt;li&gt;Use the ansible-inventory command&lt;/li&gt;
&lt;li&gt;Get the value of a single variable&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;New playbook for creating a VM&lt;/li&gt;

&lt;li&gt;

Start a virtual machine from Ansible

&lt;ul&gt;
&lt;li&gt;Common parameters without variables for starting a VM&lt;/li&gt;
&lt;li&gt;
Dynamically set parameters for starting a VM

&lt;ul&gt;
&lt;li&gt;VM name and resource limits&lt;/li&gt;
&lt;li&gt;Cloud-init config for SSH access and sudo password&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Run the playbook to start the VM&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Get the IP address of a virtual machine&lt;/li&gt;

&lt;li&gt;

Accessing the virtual machine from the Ansible controller

&lt;ul&gt;
&lt;li&gt;Why don't we use LAN network&lt;/li&gt;
&lt;li&gt;LXD proxy&lt;/li&gt;
&lt;li&gt;SSH tunnel&lt;/li&gt;
&lt;li&gt;SSH jump servers&lt;/li&gt;
&lt;li&gt;Using separate config files for different kind-of-servers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Use Ansible to configure the SSH client

&lt;ul&gt;
&lt;li&gt;Make sure config.d exists&lt;/li&gt;
&lt;li&gt;Get information about the VM&lt;/li&gt;
&lt;li&gt;Get the IP addresses used by the VM&lt;/li&gt;
&lt;li&gt;Get information about the LXD network&lt;/li&gt;
&lt;li&gt;Get the usable IP address assigned to eth0&lt;/li&gt;
&lt;li&gt;Generate the new SSH client config for the VM&lt;/li&gt;
&lt;li&gt;Run the playbook to generate the SSH client config&lt;/li&gt;
&lt;li&gt;Access a web service on the created VM&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The project requires Nix, which we discussed in &lt;a href="https://dev.to/rimelek/install-ansible-8-on-ubuntu-2004-lts-using-nix-46hm"&gt;Install Ansible 8 on Ubuntu 20.04 LTS using Nix&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You will also need an Ubuntu remote server. I recommend an Ubuntu 22.04 virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Download the already written code of the previous episode
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you started the tutorial with this episode, clone the project from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/rimelek/homelab.git
&lt;span class="nb"&gt;cd &lt;/span&gt;homelab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you cloned the project now, or you want to make sure you are using the exact same code I did, switch to the previous episode in a new branch:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; tutorial.episode.8b tutorial.episode.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Have an inventory file
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Copy the inventory template:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp &lt;/span&gt;inventory-example.yml inventory.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Change &lt;code&gt;ansible_host&lt;/code&gt; to the IP address of your Ubuntu server that you use for this tutorial,&lt;/li&gt;
&lt;li&gt;and change &lt;code&gt;ansible_user&lt;/code&gt; to the username on the remote server that Ansible can use to log in.&lt;/li&gt;
&lt;li&gt;If you still don't have an SSH private key, read the &lt;a href="https://dev.to/rimelek/ansible-playbook-and-ssh-keys-33bo#generate-an-ssh-key"&gt;Generate an SSH key part of Ansible playbook and SSH keys&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;If you want to run the playbook called &lt;code&gt;playbook-lxd-install.yml&lt;/code&gt;, you will need to configure a physical or virtual disk, which I wrote about in &lt;a href="https://dev.to/rimelek/the-simplest-way-to-install-lxd-using-ansible-h5o#install-zfs-utils-and-create-a-zfs-pool"&gt;The simplest way to install LXD using Ansible&lt;/a&gt;. If you don't have a usable physical disk, look for &lt;code&gt;truncate -s 50G &amp;lt;PATH&amp;gt;/lxd-default.img&lt;/code&gt; in that post to create a virtual disk.&lt;/li&gt;
&lt;li&gt;You will need an encrypted secret file, which I wrote about in the &lt;a href="https://dev.to/rimelek/use-sops-in-ansible-to-read-your-secrets-2gfa#encrypt-a-file"&gt;Encrypt a file section of "Use SOPS in Ansible to read your secrets"&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
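&lt;p&gt;The &lt;code&gt;truncate&lt;/code&gt; command mentioned above creates a sparse file, so the image does not immediately consume 50&amp;nbsp;GB of real disk space. A minimal sketch (the path below is only an example, not from the original post):&lt;/p&gt;

```shell
# Sketch: create a sparse image file that LXD can use as a ZFS pool disk.
# The path is an example; choose a filesystem with enough free space.
img="$(mktemp -d)/lxd-default.img"
truncate -s 50G "$img"

# A sparse file has a large apparent size but barely any real disk usage:
ls -lh "$img"   # apparent size: 50G
du -h "$img"    # actual usage: close to zero
```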
&lt;h3&gt;
  
  
  Activate the Python virtual environment
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;How you activate the virtual environment depends on how you created it. The episode &lt;a href="https://dev.to/rimelek/the-first-ansible-playbook-579h#install-ansible"&gt;The first Ansible playbook&lt;/a&gt; describes the way to create and activate the virtual environment using the "venv" Python module, and in the episode &lt;a href="https://dev.to/rimelek/the-first-ansible-role-paf"&gt;The first Ansible role&lt;/a&gt; we created helper scripts as well, so if you haven't created it yet, you can create the environment by running&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./create-nix-env.sh venv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Optionally start an ssh agent:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-agent &lt;span class="nv"&gt;$SHELL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and activate the environment with&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source &lt;/span&gt;homelab-env.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Preparing the new inventory
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Although in this tutorial I used only one server, you could add multiple servers to the inventory file, but you probably don't want to create a specific VM on each server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You could have a playbook in which you define a specific host instead of an inventory group. That is not a very good approach in terms of redundancy.&lt;/li&gt;
&lt;li&gt;You could define a list under the &lt;code&gt;vars:&lt;/code&gt; section of a specific host in the inventory file, in which each item would be a mapping of key-value pairs with the configuration of a virtual machine to create on that host, and handle that list in a loop in the Ansible role responsible for creating the VM. That's much better, but too complicated for this tutorial.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For now, I will create a playbook for creating a general virtual machine which allows using Docker, so we could install it on multiple hosts. Just imagine that you want to be able to use Docker on all of your Linux hosts in a VM to test network connections between them. I know, it is hard to imagine that, so we will need a list of servers which could include a single server, in which case you will run Docker in a VM on only one machine.&lt;/p&gt;

&lt;p&gt;It's time to change our inventory file to use new inventory groups in addition to the special group called "all". As a reminder, see my old inventory file:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ansible_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-homelab&lt;/span&gt;
    &lt;span class="na"&gt;config_apt_update&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_zfs_pool_disks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6&lt;/span&gt;
    &lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('community.sops.sops',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'secrets.yml')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.from_yaml&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ta-lxlt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ansible_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.4.58&lt;/span&gt;
      &lt;span class="na"&gt;ansible_ssh_private_key_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/.ssh/ansible&lt;/span&gt;
      &lt;span class="na"&gt;ansible_become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sops.become_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the new inventory file, we will have an additional group:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;docker_vm_host_machines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ta-lxlt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That's it, added at the end of the previous inventory file. Since we have all the parameters for the host in the group "all", we don't have to define any in the new group, but we still have to add the name of the host machine without values.&lt;/p&gt;

&lt;p&gt;When we create our new playbook, in the "&lt;code&gt;hosts:&lt;/code&gt;" section we will refer to "&lt;code&gt;docker_vm_host_machines&lt;/code&gt;" instead of "&lt;code&gt;all&lt;/code&gt;".&lt;/p&gt;
&lt;h2&gt;
  
  
  Check how Ansible interprets the inventory file
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Wrapper script to run any command in the Nix environment
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;When I started this tutorial series, I thought we would need&lt;br&gt;
only the &lt;code&gt;ansible-playbook&lt;/code&gt; command. I was wrong. Now we should try the &lt;code&gt;ansible-inventory&lt;/code&gt; command and also run an ad-hoc Ansible command. Instead of creating two more wrapper scripts only for the new commands, I will create a script called &lt;code&gt;nix.sh&lt;/code&gt; which can run any command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;nix.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env nix-shell&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -i bash&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -p sops&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -I https://github.com/NixOS/nixpkgs/archive/refs/tags/23.05.tar.gz&lt;/span&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;config.sh

&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I could use this script from all the other wrapper scripts to reduce redundancy, but I don't want to change many files in this chapter, so let's keep it for another day.&lt;/p&gt;
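&lt;p&gt;The &lt;code&gt;"$@"&lt;/code&gt; at the end of the script is what lets it forward any command: quoted, it expands to all of the script's arguments while keeping each one as a separate word. A minimal illustration of why the quoting matters (the &lt;code&gt;run&lt;/code&gt; function is just a stand-in for the wrapper script):&lt;/p&gt;

```shell
# "$@" (quoted) preserves each argument as one word, even with spaces,
# so a wrapper can forward an arbitrary command unchanged.
run() { "$@"; }

run printf '%s\n' "one arg" "another arg"
# prints:
# one arg
# another arg
```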

&lt;p&gt;Make the script executable:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x nix.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And run the following command to get the version of Ansible:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./nix.sh ansible &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible [core 2.15.0]
  config file = None
  configured module search path = ['/Users/ta/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/ta/Data/projects/myprojects/tutorials/iac/venv/lib/python3.11/site-packages/ansible
  ansible collection location = /Users/ta/.ansible/collections:/usr/share/ansible/collections
  executable location = /Users/ta/Data/projects/myprojects/tutorials/iac/venv/bin/ansible
  python version = 3.11.5 (main, Oct 15 2023, 20:58:46) [Clang 11.1.0 ] (/Users/ta/Data/projects/myprojects/tutorials/iac/venv/bin/python3.11)
  jinja version = 3.1.2
  libyaml = True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Use the ansible-inventory command
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The following command would normally show you the structure of the inventory file as Ansible sees it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ansible-inventory &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.yml &lt;span class="nt"&gt;--graph&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Since we installed Ansible in a Nix environment, we will need to use the previously created wrapper script:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./nix.sh ansible-inventory &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.yml &lt;span class="nt"&gt;--graph&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;My output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@all:
  |--@ungrouped:
  |--@docker_vm_host_machines:
  |  |--ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Since "all" includes the new group, any variables we define in that group, will be available in the others.&lt;/p&gt;
&lt;h3&gt;
  
  
  Get the value of a single variable
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Using the "ansible" command, we can use the "debug" module to get the value of &lt;code&gt;ansible_host&lt;/code&gt;. We will tell Ansible to run task in the group called "&lt;code&gt;docker_vm_host_machines&lt;/code&gt;", but it will get the parameter defined in "&lt;code&gt;all&lt;/code&gt;".&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./nix.sh ansible docker_vm_host_machines &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.yml &lt;span class="nt"&gt;-m&lt;/span&gt; debug &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nv"&gt;var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ansible_host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;My output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ta-lxlt | SUCCESS =&amp;gt; {
    "ansible_host": "192.168.4.58"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  New playbook for creating a VM
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Let's create the skeleton of the playbook and call the file "&lt;code&gt;playbook-lxd-docker-vm.yml&lt;/code&gt;" in the project root.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# region play: Create the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_vm_host_machines&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I used a special comment syntax which is supported in JetBrains IDEs and also in Visual Studio Code. These are what I use most of the time. If it doesn't work for you in VSCode by default, you can try the extension called &lt;a href="https://marketplace.visualstudio.com/items?itemName=maptz.regionfolder" rel="noopener noreferrer"&gt;region folding for VSCode&lt;/a&gt;. This syntax allows us to collapse different regions of the code and assign a name to each region, which will be shown in the collapsed state. It will be useful when we have multiple plays and we want to see the name of a collapsed play instead of something like &lt;code&gt;&amp;lt;4 keys&amp;gt;&lt;/code&gt;. I will use this for tasks as well.&lt;/p&gt;

&lt;p&gt;You can see the new inventory group set for the play.&lt;/p&gt;
&lt;h2&gt;
  
  
  Start a virtual machine from Ansible
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Now we will need an Ansible role to create and start a virtual machine. Since LXD was originally made for containers, the module is still called &lt;a href="https://docs.ansible.com/ansible/latest/collections/community/general/lxd_container_module.html" rel="noopener noreferrer"&gt;community.general.lxd_container&lt;/a&gt;. Let's see the skeleton of the task in the tasks section of the playbook, containing only the static parameters.&lt;/p&gt;
&lt;h3&gt;
  
  
  Common parameters without variables for starting a VM
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Start Docker VM&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start Docker VM&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;community.general.lxd_container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unix:/var/snap/lxd/common/lxd/unix.socket&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;virtual-machine&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
        &lt;span class="na"&gt;wait_for_ipv4_addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;boot.autostart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;We need to be able to communicate with the LXD daemon, so we define the path of the LXD unix socket file in the &lt;code&gt;url&lt;/code&gt; parameter.&lt;/li&gt;
&lt;li&gt;We want a virtual machine, so we set the type to &lt;code&gt;virtual-machine&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We want the task to return only when the virtual machine already has an IPv4 address, so we can continue with other tasks that require that IP address.&lt;/li&gt;
&lt;li&gt;This is a test virtual machine, so we don't want it to start automatically when the host starts; that's why &lt;code&gt;boot.autostart&lt;/code&gt; is set to &lt;code&gt;false&lt;/code&gt; in the config section.&lt;/li&gt;
&lt;li&gt;When we run the playbook, we don't want to start the VM manually, so we set the state to &lt;code&gt;started&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, we want to set a source. The source is a collection of parameters describing how and from where LXD should download the base image for the virtual machine.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start Docker VM&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;community.general.lxd_container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# ...&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image&lt;/span&gt;
          &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://cloud-images.ubuntu.com/releases&lt;/span&gt;
          &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;simplestreams&lt;/span&gt;
          &lt;span class="na"&gt;alias&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;22.04"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;So as you see above, we will use the cloud variant of the Ubuntu 22.04 image from the specified server using the simplestreams protocol.&lt;/p&gt;
&lt;h3&gt;
  
  
  Dynamically set parameters for starting a VM
&lt;/h3&gt;
&lt;h4&gt;
  
  
  VM name and resource limits
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Our VM will need a name, and we want to limit the number of CPUs and the amount of memory.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start Docker VM&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;community.general.lxd_container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# ...&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="c1"&gt;# ...&lt;/span&gt;
        &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;# ...&lt;/span&gt;
          &lt;span class="na"&gt;limits.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_cpu&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;limits.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The values of these parameters will come from the inventory file, but I wanted to keep this task short, so we will set the variables in the "vars" section of the play:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# region play: Create the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_vm_host_machines&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;vm_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('docker',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('4GiB',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_cpus&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(4,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This way we have default values even if the config parameters are not defined in the inventory file or are defined with empty values (that is what the "true" second parameter is for). I defined only the name of the virtual machine in the inventory and let Ansible use the default CPU and memory limits. Now we get to another interesting part.&lt;/p&gt;
&lt;h4&gt;
  
  
  Cloud-init config for SSH access and sudo password
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We can create a virtual machine with the already described parameters, but without cloud-init configs we have no users in the virtual machine, not to mention the SSH configuration for that user. We want the following cloud-init config:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#cloud-config&lt;/span&gt;
&lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;vm_user&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;lock_passwd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;groups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sudo&lt;/span&gt;
    &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
    &lt;span class="na"&gt;passwd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;ssh_authorized_keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;lookup('file'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;vm_ssh_pub_key)&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;

&lt;span class="na"&gt;packages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;openssh-server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this config we define a list of users and install the openssh-server package. There will be only one user by default, and that user will be in the "sudo" group. The user's default shell will be bash, and we define a password and a list of SSH public keys. The password comes from the &lt;code&gt;vm_pass&lt;/code&gt; variable, and we will need the "file" lookup plugin to read the public key defined in the &lt;code&gt;vm_ssh_pub_key&lt;/code&gt; variable. Let's add it to the task:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start Docker VM&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;community.general.lxd_container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# ...&lt;/span&gt;
        &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;# ...&lt;/span&gt;
          &lt;span class="na"&gt;cloud-init.user-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;#cloud-config&lt;/span&gt;
            &lt;span class="s"&gt;users:&lt;/span&gt;
              &lt;span class="s"&gt;- name: {{ vm_user }}&lt;/span&gt;
                &lt;span class="s"&gt;lock_passwd: false&lt;/span&gt;
                &lt;span class="s"&gt;groups: sudo&lt;/span&gt;
                &lt;span class="s"&gt;shell: /bin/bash&lt;/span&gt;
                &lt;span class="s"&gt;passwd: "{{ vm_pass }}"&lt;/span&gt;
                &lt;span class="s"&gt;ssh_authorized_keys:&lt;/span&gt;
                  &lt;span class="s"&gt;- {{ lookup('file', vm_ssh_pub_key) }}&lt;/span&gt;

            &lt;span class="s"&gt;packages:&lt;/span&gt;
              &lt;span class="s"&gt;- openssh-server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And set the variables:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# region play: Create the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_vm_host_machines&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;vm_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('manager',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.password_hash(salt=vm_pass_salt)&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_ssh_pub_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_ssh_pub_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(vm_ssh_priv_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'.pub',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The default user will be "manager", and there will be no default password, but we need &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/password_hash_filter.html" rel="noopener noreferrer"&gt;ansible.builtin.password_hash&lt;/a&gt; to convert the plain-text password to a hash. To use this filter, you need to add the following line to &lt;code&gt;requirements.txt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;passlib==1.7.4 # for using ansible.builtin.password_hash()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Don't forget to run &lt;code&gt;pip install -r requirements.txt&lt;/code&gt; to install the new library.&lt;/p&gt;
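&lt;p&gt;If you are curious what the filter produces, you can generate the same kind of SHA-512 crypt hash on the command line. This is only an illustration (it assumes &lt;code&gt;openssl&lt;/code&gt; 1.1.1 or newer, and reuses the example password and salt from this article):&lt;/p&gt;

```shell
# Generate a SHA-512 crypt hash, the same scheme password_hash uses by default,
# with the example password and salt from this article.
openssl passwd -6 -salt 3rehbvjfbr password
# prints a hash starting with $6$3rehbvjfbr$
```

Because the salt is fixed, the command is deterministic, so you can compare its output with what Ansible puts into the cloud-init user data.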

&lt;p&gt;Alternatively, you could remove the filter and pass a password hash directly. The path of the SSH public key will be derived from the path of the private key, with &lt;code&gt;.pub&lt;/code&gt; appended as the extension, but we still let the user override it. You might have noticed that we pass an argument called "salt" to &lt;code&gt;password_hash&lt;/code&gt;, whose value comes from the &lt;code&gt;vm_pass_salt&lt;/code&gt; variable. That means we will need to set two more variables.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# region play: Create the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_vm_host_machines&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;vm_pass_salt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_pass_salt&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_ssh_priv_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_ssh_priv_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note that the order of the variables does not matter. If you prefer to define first the variables that other variables depend on, that's fine. If you want to start with the variables that you will eventually refer to in the tasks, that's okay too. For me, it felt easier to explain the variables in this order. You can also notice that there are no default values here, so we add pre_tasks to check the required variables:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fail if config_lxd_docker_vm_ssh_priv_key is not defined&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_lxd_docker_vm_ssh_priv_key | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.fail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;msg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;You must set an SSH private key to log in to the VM&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fail if config_lxd_docker_vm_pass_salt is not defined&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_lxd_docker_vm_pass_salt | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.fail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;msg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;You must set a password salt for the sudo password to log in to the VM&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now our whole play looks like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# region play: Create the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_vm_host_machines&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;vm_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('docker',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('4GiB',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_cpus&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(4,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('manager',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.password_hash(salt=vm_pass_salt)&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_ssh_pub_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_ssh_pub_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(vm_ssh_priv_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'.pub',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

    &lt;span class="na"&gt;vm_pass_salt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_pass_salt&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_ssh_priv_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_ssh_priv_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fail if config_lxd_docker_vm_ssh_priv_key is not defined&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_lxd_docker_vm_ssh_priv_key | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.fail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;msg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;You must set an SSH private key to log in to the VM&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fail if config_lxd_docker_vm_pass_salt is not defined&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_lxd_docker_vm_pass_salt | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.fail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;msg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;You must set a password salt for the sudo password to log in to the VM&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Start Docker VM&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start Docker VM&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;community.general.lxd_container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unix:/var/snap/lxd/common/lxd/unix.socket&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;virtual-machine&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
        &lt;span class="na"&gt;wait_for_ipv4_addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image&lt;/span&gt;
          &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://cloud-images.ubuntu.com/releases&lt;/span&gt;
          &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;simplestreams&lt;/span&gt;
          &lt;span class="na"&gt;alias&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;22.04"&lt;/span&gt;
        &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;boot.autostart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
          &lt;span class="na"&gt;limits.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_cpu&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;limits.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;cloud-init.user-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;#cloud-config&lt;/span&gt;
            &lt;span class="s"&gt;users:&lt;/span&gt;
              &lt;span class="s"&gt;- name: {{ vm_user }}&lt;/span&gt;
                &lt;span class="s"&gt;lock_passwd: false&lt;/span&gt;
                &lt;span class="s"&gt;groups: sudo&lt;/span&gt;
                &lt;span class="s"&gt;shell: /bin/bash&lt;/span&gt;
                &lt;span class="s"&gt;passwd: "{{ vm_pass }}"&lt;/span&gt;
                &lt;span class="s"&gt;ssh_authorized_keys:&lt;/span&gt;
                  &lt;span class="s"&gt;- {{ lookup('file', vm_ssh_pub_key) }}&lt;/span&gt;

            &lt;span class="s"&gt;packages:&lt;/span&gt;
              &lt;span class="s"&gt;- openssh-server&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That is a completely fine play in a playbook, but it still contains undefined variables that we need to set, and other variables that we may want to override. Let's add the following required variables to our inventory file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ...&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_docker_vm_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sops.config_lxd_docker_vm_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_docker_vm_pass_salt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sops.config_lxd_docker_vm_pass_salt&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_docker_vm_ssh_priv_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/.ssh/ansible&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In my case, the SSH private key is the same as the one I used for the host. I could use a template here too to get the value from the other variable, but I don't recommend reading values from Ansible's built-in variables, because it could lead to confusion if you later change how you access the host and forget that the variable was used somewhere else too. You could instead introduce a third variable like &lt;code&gt;config_global_ssh_priv_key&lt;/code&gt; and read its value in the other two variables. The password and the salt parameter come &lt;a href="https://dev.to/rimelek/use-sops-in-ansible-to-read-your-secrets-2gfa"&gt;from a secret&lt;/a&gt;. To add the variables, if your secret is already encrypted, use the helper script created for sops.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./sops.sh secret.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is the content of my encrypted &lt;code&gt;secrets.yml&lt;/code&gt; in the project root:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HomeLab2023&lt;/span&gt;
&lt;span class="na"&gt;config_lxd_docker_vm_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
&lt;span class="na"&gt;config_lxd_docker_vm_pass_salt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3rehbvjfbr&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Run the playbook to start the VM
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We can now run the playbook which will create a new virtual machine on the remote server.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./run.sh playbook-lxd-docker-vm.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can now SSH to the remote server and see that the virtual machine is running:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;lxc list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;My output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+--------+---------+------------------------+------+-----------------+-----------+
|  NAME  |  STATE  |          IPV4          | IPV6 |      TYPE       | SNAPSHOTS |
+--------+---------+------------------------+------+-----------------+-----------+
| docker | RUNNING | 10.17.181.143 (enp5s0) |      | VIRTUAL-MACHINE | 0         |
+--------+---------+------------------------+------+-----------------+-----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you want to get a shell as root in the virtual machine, just run&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;lxc shell docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You could also use SSH. You already know the IP address, which was in the output of &lt;code&gt;lxc list&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh manager@10.17.181.143
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We haven't installed anything in the VM yet, but we can finally play with it and delete it when we are done.&lt;/p&gt;
&lt;h2&gt;
  
  
  Get the IP address of a virtual machine
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The IP address will be different on your machine, and I always like to share commands that will run the same way for everyone, so let's get the IP address automatically.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;vm_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;docker
&lt;span class="nv"&gt;vm_info&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lxc list &lt;span class="s2"&gt;"^&lt;/span&gt;&lt;span class="nv"&gt;$vm_name&lt;/span&gt;&lt;span class="s2"&gt;$"&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt; json | jq &lt;span class="s1"&gt;'.[0]'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;ip_addresses&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vm_info&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'
      .state.network
      | to_entries
      | .[].value.addresses[]
      | select(.family == "inet" and .scope == "global")
      | .address
    '&lt;/span&gt;
&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
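&lt;p&gt;To see the filter itself in action, you can feed it a trimmed sample of the JSON that &lt;code&gt;lxc list --format json&lt;/code&gt; returns. The sample below is reduced to just the fields the filter reads (the surrounding structure is an assumption based on those fields), and it requires &lt;code&gt;jq&lt;/code&gt; to be installed:&lt;/p&gt;

```shell
# A trimmed sample of one VM entry: two addresses on one interface,
# of which only the global IPv4 address should be selected.
vm_info='{"state":{"network":{"enp5s0":{"addresses":[
  {"family":"inet","address":"10.17.181.143","scope":"global"},
  {"family":"inet6","address":"fe80::216:3eff:fe00:1","scope":"link"}
]}}}}'

echo "$vm_info" \
  | jq -r '
      .state.network
      | to_entries
      | .[].value.addresses[]
      | select(.family == "inet" and .scope == "global")
      | .address
    '
# prints: 10.17.181.143
```

The link-local IPv6 address is filtered out by the `select`, so only the routable IPv4 address remains.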


&lt;p&gt;Now the &lt;code&gt;$ip_addresses&lt;/code&gt; variable probably contains a single IP address, unless the virtual machine has multiple networks. In this tutorial we know that we added only one, but we could change that later, so why not make it work with multiple networks as well?&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;network&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$vm_info&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.expanded_devices.eth0.network'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;ip_range&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lxc network show &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$network&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | yq &lt;span class="s1"&gt;'.config["ipv4.address"]'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;With the above two lines we will get the CIDR notation of the network address. This is what I have in &lt;code&gt;$ip_range&lt;/code&gt;: &lt;code&gt;10.17.181.1/24&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The IP address in the value is the IP of the gateway, but the gateway and the mask size (24) together describe an IP range which has to include the IP we are looking for. With this specific mask size we could simply check that the IP address starts with &lt;code&gt;10.17.181.&lt;/code&gt;, but later we could define a different network with another mask size, so this is where &lt;code&gt;grepcidr&lt;/code&gt; comes in handy. On Ubuntu, we can install it this way:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;grepcidr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Or download it from the website: &lt;a href="https://www.pc-tools.net/unix/grepcidr/" rel="noopener noreferrer"&gt;https://www.pc-tools.net/unix/grepcidr/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, we can run the following command to make sure that the final list contains IP addresses only in the specified IP range:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ip_addresses&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | grepcidr &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ip_range&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That's nice. We could detect the IP address, which is not that useful interactively, but it will help us continue with the next step.&lt;/p&gt;
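&lt;p&gt;If you can't install &lt;code&gt;grepcidr&lt;/code&gt;, the containment check it performs can be approximated for IPv4 in a few lines of plain bash. This is only a sketch, and the function names here are made up for the example:&lt;/p&gt;

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1  # unquoted on purpose: split the address on the dots
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed if the IP address in $1 falls into the CIDR range in $2.
in_cidr() {
  local ip=$1 cidr=$2
  local net=${cidr%/*} bits=${cidr#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  (( ( $(ip_to_int "$ip") & mask ) == ( $(ip_to_int "$net") & mask ) ))
}

in_cidr 10.17.181.143 10.17.181.1/24 && echo "in range"      # prints "in range"
in_cidr 10.17.182.5   10.17.181.1/24 || echo "not in range"  # prints "not in range"
```

It masks both the candidate address and the gateway address with the network mask and compares the results, which is exactly why the gateway IP from &lt;code&gt;$ip_range&lt;/code&gt; is good enough to describe the range.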
&lt;h2&gt;
  
  
  Accessing the virtual machine from the Ansible controller
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Why we don't use the LAN network
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;In the previous sections we learned how to SSH to the virtual machine from the host on which the VM is running, and how to detect the IP address of the VM automatically. It would be easier if we could use the SSH client and the SSH keys on the Ansible controller to log in to the virtual machine. One way would be changing the network settings of the virtual machine so that it gets an IP address on the LAN network instead of on the local LXD bridge (lxdbr0). That would work if you have a local network and you can manage the IP addresses to assign one to a virtual machine. That is not always the case, so I have chosen a different approach. We will keep our current IP address and learn about proxies and how SSH can help us again.&lt;/p&gt;
&lt;h3&gt;
  
  
  LXD proxy
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;You could use an &lt;a href="https://documentation.ubuntu.com/lxd/en/latest/reference/devices_proxy/" rel="noopener noreferrer"&gt;LXD proxy device&lt;/a&gt;, but there are multiple requirements.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The configuration is different for virtual machines and containers.&lt;/li&gt;
&lt;li&gt;You would need a static IP address which is recommended anyway, so it won't be a problem in the future, but for now, we are experimenting with dynamically assigned IP addresses.&lt;/li&gt;
&lt;li&gt;Without firewall settings, you would make the port available from all machines. In a local homelab that is probably fine, but I prefer another solution which is coming in the following sections.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  SSH tunnel
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;My first idea was simply using an SSH tunnel. I do it frequently. It is basically SSH-ing to a host and using that SSH connection to forward requests from a specific port on the client through the host to an endpoint that is accessible from the host. Note that my remote server's hostname is still &lt;code&gt;ta-lxlt&lt;/code&gt; and the IP address of the virtual machine is &lt;code&gt;10.17.181.143&lt;/code&gt;. I open a terminal and run the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-L&lt;/span&gt; 127.0.0.1:2000:10.17.181.143:22 &lt;span class="nt"&gt;-N&lt;/span&gt; ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After &lt;code&gt;-L&lt;/code&gt; I have the port mapping. The format is&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;CLIENT_IP&amp;gt;:&amp;lt;CLIENT_PORT&amp;gt;:&amp;lt;ENDPOINT_IP&amp;gt;:&amp;lt;ENDPOINT_PORT&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;CLIENT_IP&lt;/code&gt; is optional, but I want to make sure the port is accessible only from localhost. If you know how port forwarding works with Docker, this is very similar, except Docker doesn't need the endpoint IP, since it is always the IP of the container. &lt;code&gt;-N&lt;/code&gt; lets me keep the SSH connection open without executing any command on the remote server. I open a new terminal and run the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-p&lt;/span&gt; 2000 manager@127.0.0.1 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you don't want it to ask for the password, use the SSH key:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-p&lt;/span&gt; 2000 &lt;span class="nt"&gt;-i&lt;/span&gt; ~/.ssh/ansible manager@127.0.0.1 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;or use the SSH agent as we discussed it in &lt;a href="https://dev.to/rimelek/ansible-playbook-and-ssh-keys-33bo"&gt;Ansible playbook and SSH keys&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Although this is not what we will use for the SSH connection, SSH tunnels can help you with accessing other ports as well, like a web application running inside the virtual machine.&lt;/p&gt;
&lt;h3&gt;
  
  
  SSH jump servers
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;You know that you can SSH to the remote server and from that you can SSH to the virtual machine. You also know that you can pass a command to SSH to execute on the remote server, so you can also run another SSH command.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-t&lt;/span&gt; ta-lxlt &lt;span class="nt"&gt;--&lt;/span&gt; ssh manager@10.17.181.143
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;-t&lt;/code&gt; was required because the SSH command running on the remote server required a pseudo-terminal. We can also SSH to the virtual machine directly and pass the address of a jump server. A jump server is a server to which we have access, and from which we have access to another server; in our case, the virtual machine.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-J&lt;/span&gt; ta-lxlt manager@10.17.181.143
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This jump server works only because I configured that host in my SSH client config (&lt;code&gt;$HOME/.ssh/config&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host ta-lxlt ta-lxlt.lan 192.168.4.58
  HostName 192.168.4.58
  Port 22
  User ta
  IdentityFile ~/.ssh/ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We can add our virtual machine too. We will need a name that we can pass to the SSH client. It could be the internal fully qualified hostname of the VM with the remote server's hostname as a suffix. Something like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host docker.lxd.ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The internal fully qualified hostname is the name of the VM ending with ".lxd". You can confirm it by running the following command on the remote server:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;lxc &lt;span class="nb"&gt;exec &lt;/span&gt;docker &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;hostname&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Knowing the name and the IP address of the VM, we can define the new host in the SSH client config:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Host docker.lxd.ta-lxlt
  Hostname 10.17.181.143
  User manager
  IdentityFile ~/.ssh/ansible
  ProxyJump ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now that we have the ProxyJump parameter defined in the SSH client config, the following command on the Ansible controller is enough to log in to the virtual machine:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh docker.lxd.ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That way you don't have to remember the IP address every time. We will automate the SSH client configuration of the virtual machine, so even if the IP address changes, you can still log in using the same hostname.&lt;/p&gt;
&lt;h3&gt;
  
  
  Using separate config files for different kinds of servers
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;$HOME/.ssh/config&lt;/code&gt; for all your SSH client configs works, but I had so many servers (physical, virtual, behind VPN connections) that I found it hard to maintain the config file. Fortunately, with recent SSH versions, you can include other config files in the main SSH config. For example, this is what my main config looks like:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Include config.d/homelab
Include config.d/external-services
Include config.d/home
Include config.d/multipass

Include config.d/docker.lxd.ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The configuration of &lt;code&gt;ta-lxlt&lt;/code&gt; is in &lt;code&gt;config.d/home&lt;/code&gt;, and I also have an old Swarm cluster client config in &lt;code&gt;config.d/homelab&lt;/code&gt;, but I found it better to include the dynamically generated client config for the virtual machine directly in the main config.&lt;/p&gt;
&lt;h2&gt;
  
  
  Use Ansible to configure the SSH client
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Make sure config.d exists
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Since &lt;code&gt;config.d&lt;/code&gt; is not created by default, we need to make sure it exists. This is the task in our playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="c1"&gt;# region task: Create base dir for the new SSH client config file&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create base dir for the new SSH client config file&lt;/span&gt;
      &lt;span class="na"&gt;delegate_to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;directory&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('env',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'HOME')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/.ssh/config.d/"&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;delegate_to&lt;/code&gt; is used to run the task on a specific host regardless of where the rest of the tasks were running, and we can use the &lt;code&gt;env&lt;/code&gt; lookup plugin to get the home of our local user. You know everything else from the previous episodes.&lt;/p&gt;
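&lt;p&gt;On the controller, the task above behaves roughly like the following shell command (a sketch of the effect, not what Ansible literally runs):&lt;br&gt;
&lt;/p&gt;

```shell
# Rough shell equivalent of the "Create base dir" task: create the
# directory (and any missing parents) only if it does not exist yet.
mkdir -p "$HOME/.ssh/config.d"
ls -ld "$HOME/.ssh/config.d"
```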
&lt;h3&gt;
  
  
  Get information about the VM
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As we did in the section Get the IP address of a virtual machine, we need to get information about the virtual machine, but now we use Ansible.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="c1"&gt;# region task: Get VM info&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get VM info&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lxc list "^{{ vm_name }}$" --format json&lt;/span&gt;
      &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_vm_info_command&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We save the info in a registered variable. Here comes the fun part. We already learned that the order of the variables doesn't matter. In order to avoid using blocks and indenting our playbook deeper, we can add our helper variables to the &lt;code&gt;vars&lt;/code&gt; section of the playbook. I will keep using the underscore character as a prefix.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;_vm_info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_vm_info_command.stdout&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from_json&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;first&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This way we convert the JSON text in the standard output to a list object in Ansible and get the first item from the list.&lt;/p&gt;
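&lt;p&gt;You can sketch the same transformation in the terminal. The JSON array below is a made-up sample standing in for the output of &lt;code&gt;lxc list --format json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

```shell
# Stand-alone sketch of `stdout | from_json | first`: parse a JSON
# array and take its first element. The sample array is inlined
# here instead of calling lxc.
echo '[{"name": "docker", "status": "Running"}]' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)[0]["name"])'
# → docker
```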
&lt;h3&gt;
  
  
  Get the IP addresses used by the VM
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;To get the IP addresses in Ansible, we can use a very similar approach to what we used in the terminal. The difference is that we used &lt;code&gt;jq&lt;/code&gt; in the terminal (we installed it in &lt;a href="https://dev.to/rimelek/using-facts-and-the-github-api-in-ansible-4i00#install-the-latest-yq-from-github"&gt;Using facts and the GitHub API in Ansible&lt;/a&gt;), and we will use Jinja filters in Ansible. Let's add the following variable to the &lt;code&gt;vars&lt;/code&gt; section of the playbook.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;_ip_addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt; 
        &lt;span class="s"&gt;_vm_info.state.network&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;dict2items&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;map(attribute='value.addresses')&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;flatten&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;selectattr('family',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'equalto',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'inet')&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;selectattr('scope',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'equalto',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'global')&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;map(attribute='address')&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
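&lt;p&gt;To see what the filter chain does step by step, here is a rough stand-alone sketch of the same logic. The network structure below is a minimal made-up sample, not real &lt;code&gt;lxc&lt;/code&gt; output:&lt;br&gt;
&lt;/p&gt;

```shell
# Sketch of the filter chain: collect all addresses of all
# interfaces, keep the IPv4 ("inet") addresses with global scope,
# and print the remaining address.
echo '{"eth0": {"addresses": [
  {"family": "inet6", "scope": "link", "address": "fe80::1"},
  {"family": "inet", "scope": "global", "address": "10.17.181.143"}]}}' \
  | python3 -c '
import json, sys
network = json.load(sys.stdin)
addresses = [a for iface in network.values() for a in iface["addresses"]]
ips = [a["address"] for a in addresses
       if a["family"] == "inet" and a["scope"] == "global"]
print(ips[0])
'
# → 10.17.181.143
```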

&lt;h3&gt;
  
  
  Get information about the LXD network
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Getting the name of the network is the easy part. Add the following variable to the &lt;code&gt;vars&lt;/code&gt; section of the playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;_network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_vm_info.expanded_devices.eth0.network&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now that we know the name of the network, we will need to find out what subnet the network is using. Let's add the following task to the playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="c1"&gt;# region task: Get network info&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get network info&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lxc network show "{{ _network }}"&lt;/span&gt;
      &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_network_info_command&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Remember, &lt;code&gt;lxc network show&lt;/code&gt; returns YAML output, not JSON, so we need to add the following variable to the &lt;code&gt;vars&lt;/code&gt; section of the playbook.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;_network_info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_network_info_command.stdout&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from_yaml&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Get the usable IP address assigned to eth0
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Now that we have everything about the LXD network, we can get the CIDR notation of the subnet and the actual IP address of the VM. We will use a new filter in Ansible, called &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/utils/reduce_on_network_filter.html" rel="noopener noreferrer"&gt;ansible.utils.reduce_on_network&lt;/a&gt;. It is basically the alternative to &lt;code&gt;grepcidr&lt;/code&gt; which we used in the terminal. Let's add the following variables to the &lt;code&gt;vars&lt;/code&gt; section of the playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;_ip_range&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_network_info.config['ipv4.address']&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
      &lt;span class="s"&gt;_ip_addresses&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.utils.reduce_on_network(_ip_range)&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;first&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Because I'm super careful, I also filter to the first item in the list, so even if for some reason there are multiple addresses, I will deal with only one in the next steps. This filter requires the following line in &lt;code&gt;requirements.txt&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;netaddr==0.9.0 # for using ansible.utils.reduce_on_network()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Don't forget to run &lt;code&gt;pip install -r requirements.txt&lt;/code&gt; to install the new library.&lt;/p&gt;
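&lt;p&gt;If you are curious what &lt;code&gt;reduce_on_network&lt;/code&gt; does conceptually, here is a sketch using Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module instead of netaddr. The subnet value mimics the &lt;code&gt;ipv4.address&lt;/code&gt; config of &lt;code&gt;lxdbr0&lt;/code&gt;, and the addresses are the sample values from this article:&lt;br&gt;
&lt;/p&gt;

```shell
# Conceptual sketch of reduce_on_network: keep only the addresses
# that fall inside the given subnet, then take the first one.
python3 -c '
import ipaddress
# "10.17.181.1/24" mimics the ipv4.address value of the LXD bridge
network = ipaddress.ip_network("10.17.181.1/24", strict=False)
addresses = ["192.168.4.58", "10.17.181.143"]
kept = [a for a in addresses if ipaddress.ip_address(a) in network]
print(kept[0])
'
# → 10.17.181.143
```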
&lt;h3&gt;
  
  
  Generate the new SSH client config for the VM
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Since we put our helper variables to the &lt;code&gt;vars&lt;/code&gt; section of the playbook, our task to generate and save the client config will be very short:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="c1"&gt;# region task: Add SSH client config&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add SSH client config&lt;/span&gt;
      &lt;span class="na"&gt;delegate_to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('env',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'HOME')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/.ssh/config.d/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_inventory_hostname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;Host {{ vm_inventory_hostname }}&lt;/span&gt;
            &lt;span class="s"&gt;Hostname {{ _ip }}&lt;/span&gt;
            &lt;span class="s"&gt;User {{ vm_user }}&lt;/span&gt;
            &lt;span class="s"&gt;IdentityFile {{ vm_ssh_priv_key }}&lt;/span&gt;

            &lt;span class="s"&gt;ProxyJump {{ inventory_hostname }}&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We had to delegate it to localhost again. We used the good old copy module to save the content defined in the task to the destination file. There is one variable missing: the name of the file, which will also be the host name in the client config. Let's add the following variable to the &lt;code&gt;vars&lt;/code&gt; section of the playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;vm_inventory_hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
        &lt;span class="s"&gt;config_lxd_docker_vm_inventory_hostname&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'.lxd.'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;inventory_hostname,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Note that the default suffix is the inventory hostname of the remote server. This is the name which you use in the inventory file under the hosts section. For me, it is the same as the hostname of the remote server, but it could be different. If you want to override the generated name, you can set &lt;code&gt;config_lxd_docker_vm_inventory_hostname&lt;/code&gt; in the inventory file.&lt;/p&gt;
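&lt;p&gt;The default expression boils down to this simple fallback logic, sketched below with made-up sample values (the override variable is unset, so the generated name wins):&lt;br&gt;
&lt;/p&gt;

```shell
# Sketch of the Jinja default() fallback: use the override when it
# is set and non-empty, otherwise build vm_name + ".lxd." +
# inventory_hostname.
python3 -c '
config_override = ""      # config_lxd_docker_vm_inventory_hostname unset
vm_name = "docker"
inventory_hostname = "ta-lxlt"
print(config_override or vm_name + ".lxd." + inventory_hostname)
'
# → docker.lxd.ta-lxlt
```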

&lt;p&gt;As a final step, we have to include the generated config file in the main config:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="c1"&gt;# region task: Include the new SSH client config in the main config&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Include the new SSH client config in the main config&lt;/span&gt;
      &lt;span class="na"&gt;delegate_to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.lineinfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
        &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('env',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'HOME')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/.ssh/config"&lt;/span&gt;
        &lt;span class="na"&gt;line&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Include&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config.d/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_inventory_hostname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We use the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/lineinfile_module.html" rel="noopener noreferrer"&gt;lineinfile module&lt;/a&gt; to add a new line to the main config. Since it is also possible that you don't have that main config either (although at this point it is unlikely), &lt;code&gt;create: true&lt;/code&gt; makes sure the file is created if necessary. We also set the filepath in &lt;code&gt;path&lt;/code&gt; and the line in the &lt;code&gt;line&lt;/code&gt; parameter.&lt;/p&gt;
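&lt;p&gt;The task behaves roughly like this idempotent shell snippet: the line is appended only when it is not already present. The host name below is the generated one from my setup; yours may differ:&lt;br&gt;
&lt;/p&gt;

```shell
# Rough equivalent of the lineinfile task: create the file if
# needed, then append the Include line only when it is missing.
CONFIG="$HOME/.ssh/config"
LINE="Include config.d/docker.lxd.ta-lxlt"
mkdir -p "$(dirname "$CONFIG")"
touch "$CONFIG"                                          # create: true
grep -qxF "$LINE" "$CONFIG" || echo "$LINE" >> "$CONFIG" # state: present
```

Running the snippet a second time changes nothing, which is exactly the idempotent behavior the Ansible module gives you.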
&lt;h3&gt;
  
  
  Run the playbook to generate the SSH client config
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Now you can finally run the playbook. In case you couldn't follow the steps, this is the full playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# region play: Create the VM&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create the VM&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_vm_host_machines&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;vm_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('docker',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('4GiB',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_cpus&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(4,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_user&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default('manager',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.password_hash(salt=vm_pass_salt)&lt;/span&gt;&lt;span class="nv"&gt;  &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_ssh_pub_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_ssh_pub_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(vm_ssh_priv_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'.pub',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

    &lt;span class="na"&gt;vm_pass_salt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_pass_salt&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_ssh_priv_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config_lxd_docker_vm_ssh_priv_key&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;vm_inventory_hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
        &lt;span class="s"&gt;config_lxd_docker_vm_inventory_hostname&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'.lxd.'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;inventory_hostname,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;

    &lt;span class="na"&gt;_vm_info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_vm_info_command.stdout&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from_json&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;first&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_ip_addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt; 
        &lt;span class="s"&gt;_vm_info.state.network&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;dict2items&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;map(attribute='value.addresses')&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;flatten&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;selectattr('family',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'equalto',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'inet')&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;selectattr('scope',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'equalto',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'global')&lt;/span&gt;
          &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;map(attribute='address')&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_vm_info.expanded_devices.eth0.network&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_network_info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_network_info_command.stdout&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;from_yaml&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_ip_range&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_network_info.config['ipv4.address']&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
      &lt;span class="s"&gt;_ip_addresses&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.utils.reduce_on_network(_ip_range)&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;first&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fail if config_lxd_docker_vm_ssh_priv_key is not defined&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_lxd_docker_vm_ssh_priv_key | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.fail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;msg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;You must set an SSH private key to log in to the VM&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Fail if config_lxd_docker_vm_pass_salt is not defined&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_lxd_docker_vm_pass_salt | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.fail&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;msg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;You must set a password salt for the sudo password to log in to the VM&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Start Docker VM&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start Docker VM&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;community.general.lxd_container&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unix:/var/snap/lxd/common/lxd/unix.socket&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;virtual-machine&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
        &lt;span class="na"&gt;wait_for_ipv4_addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image&lt;/span&gt;
          &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://cloud-images.ubuntu.com/releases&lt;/span&gt;
          &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;simplestreams&lt;/span&gt;
          &lt;span class="na"&gt;alias&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;22.04"&lt;/span&gt;
        &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;boot.autostart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
          &lt;span class="na"&gt;limits.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_cpu&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;limits.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_memory&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;cloud-init.user-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;#cloud-config&lt;/span&gt;
            &lt;span class="s"&gt;users:&lt;/span&gt;
              &lt;span class="s"&gt;- name: {{ vm_user }}&lt;/span&gt;
                &lt;span class="s"&gt;lock_passwd: false&lt;/span&gt;
                &lt;span class="s"&gt;groups: sudo&lt;/span&gt;
                &lt;span class="s"&gt;shell: /bin/bash&lt;/span&gt;
                &lt;span class="s"&gt;passwd: "{{ vm_pass }}"&lt;/span&gt;
                &lt;span class="s"&gt;ssh_authorized_keys:&lt;/span&gt;
                  &lt;span class="s"&gt;- {{ lookup('file', vm_ssh_pub_key) }}&lt;/span&gt;

            &lt;span class="s"&gt;packages:&lt;/span&gt;
              &lt;span class="s"&gt;- openssh-server&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Create base dir for the new SSH client config file&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create base dir for the new SSH client config file&lt;/span&gt;
      &lt;span class="na"&gt;delegate_to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;directory&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('env',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'HOME')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/.ssh/config.d/"&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Get VM info&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get VM info&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lxc list "^{{ vm_name }}$" --format json&lt;/span&gt;
      &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_vm_info_command&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Get network info&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get network info&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lxc network show "{{ _network }}"&lt;/span&gt;
      &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_network_info_command&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Add SSH client config&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add SSH client config&lt;/span&gt;
      &lt;span class="na"&gt;delegate_to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.copy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('env',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'HOME')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/.ssh/config.d/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_inventory_hostname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;Host {{ vm_inventory_hostname }}&lt;/span&gt;
            &lt;span class="s"&gt;Hostname {{ _ip }}&lt;/span&gt;
            &lt;span class="s"&gt;User {{ vm_user }}&lt;/span&gt;
            &lt;span class="s"&gt;IdentityFile {{ vm_ssh_priv_key }}&lt;/span&gt;

            &lt;span class="s"&gt;ProxyJump {{ inventory_hostname }}&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;

    &lt;span class="c1"&gt;# region task: Include the new SSH client config in the main config&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Include the new SSH client config in the main config&lt;/span&gt;
      &lt;span class="na"&gt;delegate_to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.lineinfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;present&lt;/span&gt;
        &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('env',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'HOME')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/.ssh/config"&lt;/span&gt;
        &lt;span class="na"&gt;line&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Include&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;config.d/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vm_inventory_hostname&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="c1"&gt;# endregion&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And the command to run it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./run.sh playbook-lxd-docker-vm.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After that, you can log in to the VM with the following command from the Ansible controller:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh docker.lxd.&amp;lt;server_hostname&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In my case, it is:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh docker.lxd.ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
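&lt;p&gt;This works because the playbook's "Add SSH client config" task generated a &lt;code&gt;ProxyJump&lt;/code&gt; entry on the Ansible controller. As a sketch, the generated file could look something like this (the IP address, user, and key path are illustrative values, not taken from my setup):&lt;br&gt;
&lt;/p&gt;

```
# ~/.ssh/config.d/docker.lxd.ta-lxlt (illustrative example)
Host docker.lxd.ta-lxlt
  Hostname 10.166.222.5          # dynamically assigned VM address (example)
  User ubuntu                    # example value of vm_user
  IdentityFile ~/.ssh/id_ed25519 # example value of vm_ssh_priv_key

  ProxyJump ta-lxlt              # SSH to the LXD host first, then to the VM
```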

&lt;h3&gt;
  
  
  Access a webservice on the created VM
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;As an example, run a Python HTTP server in the virtual machine:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; http.server 8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Open a tunnel on the Ansible controller:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-L&lt;/span&gt; 127.0.0.1:8888:127.0.0.1:8888 &lt;span class="nt"&gt;-N&lt;/span&gt; docker.lxd.ta-lxlt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And open &lt;code&gt;127.0.0.1:8888&lt;/code&gt; from a web browser or run the following curl command on the Ansible controller:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl 127.0.0.1:8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;My output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;http-equiv=&lt;/span&gt;&lt;span class="s"&gt;"Content-Type"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"text/html; charset=utf-8"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Directory listing for /&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Directory listing for /&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;hr&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;ul&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;li&amp;gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;".bash_history"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;.bash_history&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;li&amp;gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;".bash_logout"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;.bash_logout&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;li&amp;gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;".bashrc"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;.bashrc&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;li&amp;gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;".cache/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;.cache/&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;li&amp;gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;".profile"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;.profile&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;li&amp;gt;&amp;lt;a&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;".ssh/"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;.ssh/&lt;span class="nt"&gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/ul&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;hr&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Just think about what happened here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You SSH'd to a remote server.&lt;/li&gt;
&lt;li&gt;Through that remote server, you SSH'd to the virtual machine directly from the Ansible controller.&lt;/li&gt;
&lt;li&gt;You opened a tunnel for a web application from the Ansible controller to the localhost of the virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yes, you are right, configuring a LAN IP would have been easier, but that is not an option for everyone, and if you ask me, it is less fun as well.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;I think it's important to mention again that our current solution is not ideal. We use a dynamically assigned IP address for the virtual machine, so even though the IP address will not change every time you restart the virtual machine, it is not dedicated to this VM. That means that when you can't access the virtual machine from the Ansible controller, you need to run the playbook again, even if the virtual machine already exists. For servers, we usually use static IP addresses, but choosing the right IP address could be another challenge, so for now, we used a dynamically assigned IP. That's how we learn step by step.&lt;/p&gt;

&lt;p&gt;The final source code of this episode can be found on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rimelek/homelab/tree/tutorial.episode.9.1" rel="noopener noreferrer"&gt;https://github.com/rimelek/homelab/tree/tutorial.episode.9.1&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/rimelek" rel="noopener noreferrer"&gt;
        rimelek
      &lt;/a&gt; / &lt;a href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;
        homelab
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Source code to create a home lab. Part of a video tutorial
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;README&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;This project was created to help you build your own home lab where you can test
your applications and configurations without breaking your workstation, so you can
learn on cheap devices without paying for more expensive cloud services.&lt;/p&gt;
&lt;p&gt;The project contains code written for the tutorial, but you can also use parts of it
if you refer to this repository.&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in English: &lt;a href="https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in Hungarian: &lt;a href="https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note: The inventory.yml file is not shared since that depends on the actual environment
so it will be different for everyone. If you want to learn more about the inventory file
watch the videos on YouTube or read the written version on &lt;a href="https://dev.to" rel="nofollow"&gt;https://dev.to&lt;/a&gt;. Links in
the video descriptions on YouTube.&lt;/p&gt;
&lt;p&gt;You can also find an example inventory file in the project root. You can copy that and change
the content, so you will use your IP…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>virtualmachine</category>
      <category>infrastructureascode</category>
      <category>ansible</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Everything about Docker volumes</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Sun, 07 Jan 2024 20:54:45 +0000</pubDate>
      <link>https://dev.to/rimelek/everything-about-docker-volumes-1ib0</link>
      <guid>https://dev.to/rimelek/everything-about-docker-volumes-1ib0</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;"Where are the Docker volumes?" This question comes up a lot on the &lt;a href="https://forums.docker.com" rel="noopener noreferrer"&gt;Docker Forum&lt;/a&gt;. There is no problem with curiosity, but this is usually asked when someone wants to edit or at least read files directly on the volume from a terminal or an IDE, but not through a container. So I must start with a statement:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; You should never handle files on a volume directly without entering a container, unless there is an emergency, and even then, only at your own risk.&lt;/p&gt;

&lt;p&gt;You will understand why I am saying this if you read the next sections. Although the original goal of this tutorial was to explain where the volumes are, it is hard to talk about that without understanding what volumes are and what options you have when using them. As a result, by reading this tutorial you can learn basically everything about local volumes, and you can also search for volume plugins on &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Where does Docker store data?&lt;/li&gt;
&lt;li&gt;What is a Docker volume&lt;/li&gt;
&lt;li&gt;
Custom volume path

&lt;ul&gt;
&lt;li&gt;Custom volume path overview&lt;/li&gt;
&lt;li&gt;Avoid accidental data loss on volumes&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Docker CE volumes on Linux&lt;/li&gt;

&lt;li&gt;

Docker Desktop volumes

&lt;ul&gt;
&lt;li&gt;Docker Desktop volumes on macOS&lt;/li&gt;
&lt;li&gt;
Docker Desktop volumes on Windows

&lt;ul&gt;
&lt;li&gt;Switching between Linux and Windows containers&lt;/li&gt;
&lt;li&gt;Linux containers&lt;/li&gt;
&lt;li&gt;Windows containers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Docker Desktop volumes on Linux&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

Editing files on volumes

&lt;ul&gt;
&lt;li&gt;The danger of editing volume contents outside a container&lt;/li&gt;
&lt;li&gt;View and edit files through Docker Desktop&lt;/li&gt;
&lt;li&gt;
Container based dev environments

&lt;ul&gt;
&lt;li&gt;Docker Desktop Dev environment&lt;/li&gt;
&lt;li&gt;Visual Studio Code remote development&lt;/li&gt;
&lt;li&gt;Visual Studio Code dev containers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where does Docker store data?
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Before we talk about the location of volumes, we first have to talk about the location of all data that Docker handles. When I say "Docker", I usually mean "Docker CE".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learn-docker.it-sziget.hu/en/latest/pages/intro/getting-started.html#concept-docker-ce" rel="noopener noreferrer"&gt;Docker CE&lt;/a&gt; is the community edition of Docker and can run directly on Linux. It has a data root directory, which is the following by default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/lib/docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can change it in the daemon configuration, so if it is changed on your system, you will need to replace this folder in the examples I show. To find out what the data root is, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker info &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .DockerRootDir }}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
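&lt;p&gt;For reference, the data root is changed with the &lt;code&gt;data-root&lt;/code&gt; option in the daemon configuration, usually &lt;code&gt;/etc/docker/daemon.json&lt;/code&gt;. A minimal sketch, where the target path is just an example:&lt;br&gt;
&lt;/p&gt;

```json
{
  "data-root": "/mnt/docker-data"
}
```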



&lt;p&gt;In the case of Docker Desktop, of course, you will always have a virtual machine, so the path you get from the above command will be inside the virtual machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Docker volume?
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;For historical reasons, the concept of volumes can be confusing. There is a &lt;a href="https://docs.docker.com/storage/volumes/" rel="noopener noreferrer"&gt;page in the documentation&lt;/a&gt; which describes what volumes are, but when you look at a Compose file or a docker run command, you see two types of "volumes", and only one of them is actually a volume.&lt;/p&gt;

&lt;p&gt;Example Compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd:2.4&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./docroot:/usr/local/apache2/htdocs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Did I just define a volume? No. It is a &lt;a href="https://docs.docker.com/storage/bind-mounts/" rel="noopener noreferrer"&gt;bind mount&lt;/a&gt;. Let’s just use the long syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd:2.4&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./docroot&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/apache2/htdocs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "volumes" section should have been "storage" or "mounts" to be more clear. In fact, the “docker run” command supports the &lt;code&gt;--mount&lt;/code&gt; option in addition to &lt;code&gt;-v&lt;/code&gt; and &lt;code&gt;--volume&lt;/code&gt;, and only &lt;code&gt;--mount&lt;/code&gt; supports the type parameter to directly choose between volume and bind mount.&lt;/p&gt;

&lt;p&gt;Then what do we call a volume? Let’s start by answering another question: what do we not call a volume? A file can never be a volume. A volume is always a directory, one that is created by Docker and managed by Docker throughout its entire lifetime. The main purpose of a volume is to be populated with the content of the directory to which you mount it in the container. That’s not the case with bind mounts: a bind mount simply overrides the content of the mount point in the container, but at least you can choose where you mount it from.&lt;/p&gt;
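&lt;p&gt;A minimal demonstration of this difference, assuming a running Docker daemon and the &lt;code&gt;httpd:2.4&lt;/code&gt; image (the &lt;code&gt;demo-docroot&lt;/code&gt; and &lt;code&gt;empty-dir&lt;/code&gt; names are made up for the example):&lt;br&gt;
&lt;/p&gt;

```shell
# A new named volume is populated with the image content at the mount point,
# so the listing shows the index.html shipped in the image:
docker volume create demo-docroot
docker run --rm -v demo-docroot:/usr/local/apache2/htdocs httpd:2.4 \
  ls /usr/local/apache2/htdocs

# An empty bind-mounted host directory overrides the image content,
# so the same listing prints nothing:
mkdir -p empty-dir
docker run --rm -v "$PWD/empty-dir":/usr/local/apache2/htdocs httpd:2.4 \
  ls /usr/local/apache2/htdocs
```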

&lt;p&gt;You should also know that you can disable copying data from the container to your volume and use it like a simple bind mount, except that Docker creates it in the Docker data root, and when you delete the volume after you have written something to it, you will lose the data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docroot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd:2.4&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docroot&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/apache2/htdocs&lt;/span&gt;
        &lt;span class="na"&gt;volume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;nocopy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find this and other parameters in the &lt;a href="https://docs.docker.com/compose/compose-file/05-services/#volumes" rel="noopener noreferrer"&gt;documentation of volumes in a compose file&lt;/a&gt;. Scroll down to the "Long syntax" to read about "nocopy".&lt;/p&gt;
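&lt;p&gt;The same behavior is available on the command line as well; at the time of writing, &lt;code&gt;--mount&lt;/code&gt; accepts a &lt;code&gt;volume-nocopy&lt;/code&gt; option for volume mounts. A sketch of the equivalent &lt;code&gt;docker run&lt;/code&gt; command:&lt;/p&gt;

```shell
# Mount the "docroot" volume without copying the image's content into it
docker run -d --name server \
  --mount type=volume,source=docroot,target=/usr/local/apache2/htdocs,volume-nocopy \
  httpd:2.4
```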

&lt;h2&gt;
  
  
  Custom volume path
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Custom volume path overview
&lt;/h3&gt;


&lt;p&gt;There is indeed a special kind of volume which seems to mix bind mounts and volumes. The following example will assume you are using Docker CE on Linux.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;volume_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"test-volume"&lt;/span&gt;
&lt;span class="nb"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$volume_name&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$volume_name&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
docker volume create &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$volume_name&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--driver&lt;/span&gt; &lt;span class="s2"&gt;"local"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--opt&lt;/span&gt; &lt;span class="s2"&gt;"type=none"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--opt&lt;/span&gt; &lt;span class="s2"&gt;"device=&lt;/span&gt;&lt;span class="nv"&gt;$source&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--opt&lt;/span&gt; &lt;span class="s2"&gt;"o=bind"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, so you created a volume, specified where the source directory is (device), and specified that it is a bind mount. Don’t worry: you find it confusing because it is confusing. o=bind doesn’t mean that a directory will be bind mounted into the container (that always happens); it means that the source directory will be bind mounted to the path where Docker would have created the volume’s data directory if you hadn’t defined a source.&lt;/p&gt;

&lt;p&gt;This is basically the same as what you would do on Linux with the mount command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mount &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nb"&gt;bind source&lt;/span&gt;/ target/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without &lt;code&gt;-o bind&lt;/code&gt;, the first argument must be a block device. This is why we use the “device” parameter, even though we mount a folder.&lt;/p&gt;

&lt;p&gt;This is one way to always know where a Docker volume’s data is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;Even though the above example assumed Linux, a custom volume path would work on other operating systems as well, since Docker Desktop would mount the required path into the virtual machine.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s just test if it works and inspect the volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume inspect test-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get JSON output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CreatedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-01-05T00:55:15Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Labels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Mountpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/var/lib/docker/volumes/test-volume/_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"test-volume"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"device"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/home/ta/test-volume"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"o"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"none"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Scope"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"local"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The “Mountpoint” field in the JSON is not the path in a container, but the path on the host where the specified device is mounted. In our case, the device is actually a directory. So let’s see the content of the mount point:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker volume inspect test-volume &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .Mountpoint }}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also check the content of the source directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt; test-volume/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, both are empty, as we have no container yet. How would Docker know what the content should be? As we already learned, we need to mount the volume into a container to populate the volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; test-container &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; test-volume:/usr/local/apache2/htdocs &lt;span class="se"&gt;\&lt;/span&gt;
  httpd:2.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the content in the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec &lt;/span&gt;test-container &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-lai&lt;/span&gt; /usr/local/apache2/htdocs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;total 16
 256115 drwxr-xr-x 2 root     root     4096 Jan  5 00:33 .
5112515 drwxr-xr-x 1 www-data www-data 4096 Apr 12  2023 ..
 256139 -rw-r--r-- 1      501 staff      45 Jun 11  2007 index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we added the flag "i" to the "ls" command so we can see the inode number in the first column, which identifies the files and directories on the filesystem.&lt;/p&gt;
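&lt;p&gt;If you prefer, you can also read the inode directly with stat (GNU coreutils syntax assumed here):&lt;/p&gt;

```shell
# %i = inode number, %n = file name
stat -c '%i %n' test-volume/index.html
```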

&lt;p&gt;Check the directory created by Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-lai&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker volume inspect test-volume &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .Mountpoint }}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;256115 drwxr-xr-x 2 root root  4096 Jan  5 00:33 .
392833 drwx-----x 3 root root  4096 Jan  5 00:55 ..
256139 -rw-r--r-- 1  501 staff   45 Jun 11  2007 index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, only the parent directory is different, so we indeed see the same files in the container and in the directory created by Docker. Now let’s check our source directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-lai&lt;/span&gt; test-volume/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;total 12
256115 drwxr-xr-x  2 root root  4096 Jan  5 00:33 .
255512 drwxr-xr-x 11 ta   ta    4096 Jan  5 00:32 ..
256139 -rw-r--r--  1  501 staff   45 Jun 11  2007 index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, the same files, except the parent. We confirmed that we could create an empty volume directory, populate it by starting a container with the volume mounted, and see the files appear where Docker creates volumes. Now let’s check one more thing. Since this is a special volume for which we defined some parameters, there is an &lt;code&gt;opts.json&lt;/code&gt; right next to "_data":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cat&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker volume inspect test-volume &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .Mountpoint }}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/opts.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"MountType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"none"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"MountOpts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"bind"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"MountDevice"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"/home/ta/test-volume"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"Quota"&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="nl"&gt;"Size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now remove the test container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker container &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; test-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the directory created by Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-lai&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker volume inspect test-volume &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .Mountpoint }}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is empty now.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;392834 drwxr-xr-x 2 root root 4096 Jan  5 00:55 .
392833 drwx-----x 3 root root 4096 Jan  5 00:55 ..
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And notice that even the inode has changed; it is not just that the content disappeared. On the other hand, the directory we created is untouched, and you can still find the index.html there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Avoid accidental data loss on volumes
&lt;/h3&gt;


&lt;p&gt;Let me show you an example using Docker Compose. The compose file would be the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docroot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;driver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
    &lt;span class="na"&gt;driver_opts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
      &lt;span class="na"&gt;device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./docroot&lt;/span&gt;
      &lt;span class="na"&gt;o&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bind&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httpd:2.4&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;volume&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docroot&lt;/span&gt;
        &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/apache2/htdocs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can populate &lt;code&gt;./docroot&lt;/code&gt; in the project folder by running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will then find &lt;code&gt;index.html&lt;/code&gt; in the docroot folder. You probably know that you can delete a compose project by running &lt;code&gt;docker compose down&lt;/code&gt;, and delete the volumes too by passing the flag &lt;code&gt;-v&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can run it, and the volume will be destroyed, but not the content of the already populated "docroot" folder. That is because the folder managed by Docker in the Docker data root does not physically hold the content, so the part managed by Docker could be safely removed without deleting your data.&lt;/p&gt;
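&lt;p&gt;You can verify this yourself from the project folder; a quick sketch of the full cycle:&lt;/p&gt;

```shell
docker compose up -d     # populates ./docroot from the image
docker compose down -v   # removes the containers and the named volume
ls -la docroot/          # index.html is still there; only the volume
                         # object managed by Docker was removed
```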

&lt;h2&gt;
  
  
  Docker CE volumes on Linux
&lt;/h2&gt;


&lt;p&gt;This question seems to be already answered in the previous sections, but let’s recap what we learned and add some more details.&lt;/p&gt;

&lt;p&gt;So you can find the local default volumes under &lt;code&gt;/var/lib/docker/volumes&lt;/code&gt; if you didn’t change the data root. For the sake of simplicity of the commands, I will keep using the default path.&lt;/p&gt;

&lt;p&gt;The Docker data root is not accessible by normal users, only by administrators. Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt; /var/lib/docker/volumes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;total 140
drwx-----x 23 root root  4096 Jan  5 00:55 .
drwx--x--- 13 root root  4096 Dec 10 14:27 ..
drwx-----x  3 root root  4096 Jan 25  2023 0c5f9867e761f6df0d3ea9411434d607bb414a69a14b3f240f7bb0ffb85f0543
drwx-----x  3 root root  4096 Sep 19 13:15 1c963fb485fbbd5ce64c6513186f2bc30169322a63154c06600dd3037ba1749a
...
drwx-----x  3 root root  4096 Jan  5  2023 apps_cache
brw-------  1 root root  8, 1 Dec 10 14:27 backingFsBlockDev
-rw-------  1 root root 65536 Jan  5 00:55 metadata.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These are the names of the volumes and two additional special files.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backingFsBlockDev&lt;/li&gt;
&lt;li&gt;metadata.db&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are not going to discuss these in more detail. All you need to know at this point is that this is where the volume folders are. Each folder has a sub-folder called "_data" where the actual data is, and there could be an opts.json file with metadata next to the "_data" folder.&lt;/p&gt;
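&lt;p&gt;As a sketch, the layout of a single volume in the data root looks roughly like this (using the test-volume from earlier):&lt;/p&gt;

```shell
sudo ls -la /var/lib/docker/volumes/test-volume
# _data/     <- the actual content of the volume
# opts.json  <- only present for volumes created with driver options
```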

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: When you use rootless Docker, the Docker data root will be in your user’s home: &lt;code&gt;$HOME/.local/share/docker&lt;/code&gt;.&lt;/p&gt;
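&lt;p&gt;Instead of guessing, you can also ask the daemon where its data root is, which works for rootful and rootless Docker alike:&lt;/p&gt;

```shell
# Prints the active data root, e.g. /var/lib/docker,
# or $HOME/.local/share/docker for rootless Docker
docker info --format '{{ .DockerRootDir }}'
```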

&lt;h2&gt;
  
  
  Docker Desktop volumes
&lt;/h2&gt;


&lt;p&gt;Docker Desktop volumes are different depending on the operating system and whether you want to run Linux containers or Windows containers.&lt;/p&gt;

&lt;p&gt;Docker Desktop always runs a virtual machine for Linux containers and runs Docker CE in it in a quite complicated way, so your volumes will be in the virtual machine too. Because of that, when you want to access the volumes, you either have to find a way to run a shell in the virtual machine, or find a way to share the filesystem on the network and use your file browser, IDE, or terminal on the host.&lt;/p&gt;

&lt;p&gt;Parts of what I show here, and more, can be found in the presentation I gave at the 6th Docker Community All-Hands. Tyler Charboneau wrote a &lt;a href="https://www.docker.com/blog/how-to-fix-and-debug-docker-containers-like-a-superhero/" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; about it, and you can also find &lt;a href="https://www.youtube.com/watch?v=8zVOCnfkycY" rel="noopener noreferrer"&gt;the video&lt;/a&gt; in the blog post.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Desktop volumes on macOS
&lt;/h3&gt;


&lt;p&gt;On macOS, you can only run Linux containers; there is no such thing as a macOS container yet (January 2024).&lt;/p&gt;

&lt;p&gt;You can get to the volumes folder by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--privileged&lt;/span&gt; &lt;span class="nt"&gt;--pid&lt;/span&gt; host ubuntu:22.04 &lt;span class="se"&gt;\&lt;/span&gt;
  nsenter &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
    sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'cd /var/lib/docker/volumes &amp;amp;&amp;amp; sh'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or just simply mount that folder to a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -it \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  --workdir /var/lib/docker/volumes \
  ubuntu:22.04 \
  bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also run an NFS server in a container that mounts the volumes so you can mount the remote fileshare on the host. The following compose.yml file can be used to run the NFS server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nfs-server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openebs/nfs-server-alpine:0.11.0&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/volumes:/mnt/nfs&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;SHARED_DIRECTORY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/mnt/nfs&lt;/span&gt;
      &lt;span class="na"&gt;SYNC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sync&lt;/span&gt;
      &lt;span class="na"&gt;FILEPERMISSIONS_UID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;FILEPERMISSIONS_GID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;FILEPERMISSIONS_MODE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0755"&lt;/span&gt;
    &lt;span class="na"&gt;privileged&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1:2049:2049/tcp&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1:2049:2049/udp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the mount point on the host:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/docker/volumes
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;0700 /var/lib/docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mount the base directory of volumes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;mount &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;vers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4 &lt;span class="nt"&gt;-t&lt;/span&gt; nfs 127.0.0.1:/ /var/lib/docker/volumes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And list the content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/lib/docker/volumes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Docker Desktop volumes on Windows
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Switching between Linux and Windows containers
&lt;/h4&gt;


&lt;p&gt;Docker Desktop on Windows allows you to switch between Linux containers and Windows containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58pq57r9430ix57fmd7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58pq57r9430ix57fmd7o.png" alt="Click on the Docker icon" width="762" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnm8bcxzkisqy618tih5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnm8bcxzkisqy618tih5.png" alt="Switch to Linux containers" width="687" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To find out which one you are using, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker info &lt;span class="nt"&gt;--format&lt;/span&gt; &lt;span class="s1"&gt;'{{ .OSType }}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it returns "windows", you are using Windows containers, and if it returns "linux", you are using Linux containers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Linux containers
&lt;/h4&gt;


&lt;p&gt;Since Linux containers always require a virtual machine, your volumes will be in the virtual machine the same way as on macOS. The difference is how you can access them. A common way is through a Docker container. Usually I would run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--rm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--privileged&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--pid&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ubuntu:22.04&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;nsenter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--all&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-c&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'cd /var/lib/docker/volumes &amp;amp;&amp;amp; sh'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But if you have an older kernel in WSL2 which doesn’t support the time namespace, you can get an error message like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nsenter: cannot open /proc/1/ns/time: No such file or directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If that happens, make sure you have the latest kernel in WSL2. If you built a custom kernel, you may need to rebuild it from a new version.&lt;/p&gt;

&lt;p&gt;If you can’t update the kernel yet, exclude the time namespace and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--rm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--privileged&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--pid&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;ubuntu:22.04&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;nsenter&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-m&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-n&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-c&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'cd /var/lib/docker/volumes &amp;amp;&amp;amp; sh'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can simply mount the base directory in a container the same way we could on macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;run&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--rm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-it&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;-v&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/var/lib/docker/volumes:/var/lib/docker/volumes&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;--workdir&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/var/lib/docker/volumes&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;ubuntu:22.04&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;bash&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We don’t need to run a server in a container to share the volumes, since it works out of the box in WSL2. You can just open Windows Explorer and go to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\\wsl.localhost\docker-desktop-data\data\docker\volumes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrd1z02n527nr5j3h8ts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrd1z02n527nr5j3h8ts.png" alt="wsl.localhost path" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; &lt;em&gt;WSL2 lets you edit files more easily even if the files are owned by root on the volume, so do it at your own risk. I recommend using it only for debugging.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Windows containers
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Windows containers can mount their volumes from the host. Let's create a volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;docker volume create windows-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Inspect the volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;docker volume inspect windows-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will get something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CreatedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-01-06T16:27:03+01:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Driver"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"local"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Labels"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Mountpoint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"C:&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;ProgramData&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;Docker&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;volumes&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;windows-volume&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"windows-volume"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Options"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Scope"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"local"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So now you have the volume path on Windows in the “Mountpoint” field, but you don’t have access to it unless you are an Administrator. The following command works only in PowerShell run as Administrator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;cd&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;volume&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;inspect&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;windows-volume&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'{{ .Mountpoint }}'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to access it from Windows Explorer, you can first go to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;C:\ProgramData
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This folder is hidden by default, so if you want to open it, type the path manually in the navigation bar, or enable hidden folders on Windows 11 (it works differently on older Windows versions):&lt;/p&gt;

&lt;p&gt;Menu bar » View » Show » Hidden Items&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtiul2vp8l46snwm94a1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtiul2vp8l46snwm94a1.png" alt="Unhide hidden folders" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then try to open the folder called “Docker”, which gives you a prompt asking for permission to access the folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfn78oi94xyplionk4rb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfn78oi94xyplionk4rb.png" alt="Get access to the Docker folder" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and then try to open the folder called “volumes” which will do the same.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cqzr4swdh45wy1quogs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cqzr4swdh45wy1quogs.png" alt="Get access to the volumes folder" width="800" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, you can open any Windows container volume from Windows Explorer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Desktop volumes on Linux
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;On Windows, you could have Linux containers and Windows containers, so you had to switch between them. On Linux, you can install Docker CE in rootful and rootless mode, and you can also install Docker Desktop. These are three different and separate Docker installations, and you can switch between them by changing context or logging in as a different user.&lt;/p&gt;

&lt;p&gt;You can check the existing contexts by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have Docker CE installed on your Linux machine, you are logged in as the user who installed rootless Docker, and you also have Docker Desktop installed, you will see at least the following three contexts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT                                       KUBERNETES ENDPOINT   ORCHESTRATOR
default             moby                Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
desktop-linux *     moby                Docker Desktop                            unix:///home/ta/.docker/desktop/docker.sock
rootless            moby                Rootless mode                             unix:///run/user/1000/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In order to use Docker Desktop, you need to switch to the context called "desktop-linux".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker context use desktop-linux
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: The default context is usually the rootful Docker CE, and the other two are self-explanatory. Only the rootful Docker CE needs to run as root, so if you want to interact with Docker Desktop, don’t make the mistake of running the docker commands with sudo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker context &lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *           moby                Current DOCKER_HOST based configuration   unix:///var/run/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In terms of accessing volumes, Docker Desktop works similarly on macOS and Linux, so you have the following options:&lt;/p&gt;

&lt;p&gt;Run a shell in the virtual machine using nsenter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--privileged&lt;/span&gt; &lt;span class="nt"&gt;--pid&lt;/span&gt; host ubuntu:22.04 &lt;span class="se"&gt;\&lt;/span&gt;
  nsenter &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
    sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'cd /var/lib/docker/volumes &amp;amp;&amp;amp; sh'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or just simply mount that folder to a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /var/lib/docker/volumes:/var/lib/docker/volumes &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--workdir&lt;/span&gt; /var/lib/docker/volumes &lt;span class="se"&gt;\&lt;/span&gt;
  ubuntu:22.04 &lt;span class="se"&gt;\&lt;/span&gt;
  bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And of course, you can use the NFS server Compose project with the following &lt;code&gt;compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nfs-server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;openebs/nfs-server-alpine:0.11.0&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/lib/docker/volumes:/mnt/nfs&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;SHARED_DIRECTORY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/mnt/nfs&lt;/span&gt;
      &lt;span class="na"&gt;SYNC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sync&lt;/span&gt;
      &lt;span class="na"&gt;FILEPERMISSIONS_UID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;FILEPERMISSIONS_GID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
      &lt;span class="na"&gt;FILEPERMISSIONS_MODE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0755"&lt;/span&gt;
    &lt;span class="na"&gt;privileged&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1:2049:2049/tcp&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1:2049:2049/udp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and prepare the mount point. Remember, you may already have Docker CE running as root, which means &lt;code&gt;/var/lib/docker&lt;/code&gt; probably exists, so let's create the mount point as &lt;code&gt;/var/lib/docker-desktop/volumes&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/lib/docker-desktop/volumes
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;0700 /var/lib/docker-desktop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And mount it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;mount &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;vers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4 &lt;span class="nt"&gt;-t&lt;/span&gt; nfs 127.0.0.1:/ /var/lib/docker-desktop/volumes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And check the content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /var/lib/docker-desktop/volumes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might ask why we mount the volumes into a folder on the host, which requires sudo, when the docker commands don't. The reason is that you need sudo for the mount command anyway, so it shouldn't be a problem to access the volumes as root.&lt;/p&gt;

&lt;h2&gt;
  
  
  Editing files on volumes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The danger of editing volume contents outside a container
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Now you know how you can find out where the volumes are. You also know how you can create a volume with a custom path, even if you are using Docker Desktop, which creates the default volumes inside a virtual machine.&lt;/p&gt;

&lt;p&gt;But most of you wanted to know where the volumes were so you could edit the files. &lt;strong&gt;Note that I quote the following paragraphs only for better visibility, not because they were written by someone else.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Any operation inside the Docker data root is dangerous, and can break your Docker completely, or cause problems that you don’t immediately recognize, so you should never edit files without mounting the volume into a container, except if you defined a &lt;a href="https://learn-docker.it-sziget.hu/en/latest/pages/advanced/volumes.html#custom-volume-path" rel="noopener noreferrer"&gt;custom volume path&lt;/a&gt; so you don’t have to go into the Docker data root.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Even if you defined a custom path, we are still talking about a volume, which will be mounted into a container, in which the files can be accessed by a process which requires specific ownership and permissions. By editing the files from the host, you can accidentally change the permission or the owner making it inaccessible for the process in the container.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Even though I don’t recommend it, I understand that sometimes we want to play with our environment to learn more about it, but we should still find a less risky way to do it.&lt;/p&gt;

&lt;p&gt;You know where the volumes are, and you can edit the files with a text editor from the command line or even from the graphical interface. One problem on Linux and macOS could be setting the proper permissions so that you can edit the files even if you are not root. Discussing permissions could be another tutorial, but this is one reason why we have to try to separate the data managed by a process in a Docker container from the source code or any files that require an interactive user. Just think of an application that is not running in a container, but whose files still have to be owned by another user. An example could be a web server, where the files have to be owned by a user or group so that the web server has access to them, while you should still be able to upload files.&lt;/p&gt;

&lt;h3&gt;
  
  
  View and Edit files through Docker Desktop
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Docker Desktop lets you browse files from the GUI, which is great for debugging, but I don’t recommend it for editing files, even though Docker Desktop makes that possible too. Let me show you why.&lt;/p&gt;

&lt;p&gt;Open the Containers tab of Docker Desktop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1z5taj1eh7ydyl2fcdv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1z5taj1eh7ydyl2fcdv.png" alt="Containers tab" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the three dots in the line of the container in which you want to browse files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0127av51gtokztppjki.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0127av51gtokztppjki.png" alt="Container menu" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to a file that you want to edit&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8cyv667b01jppa3dr51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8cyv667b01jppa3dr51.png" alt="File browser" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;Notice that Docker Desktop shows you whether a file was modified on the container’s filesystem or whether it is on a volume.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Right click on the file and select "Edit file".&lt;/p&gt;

&lt;p&gt;Before you do anything, run a test container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; httpd &lt;span class="nt"&gt;-v&lt;/span&gt; httpd_docroot:/usr/local/apache2/htdocs httpd:2.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And check the permissions of the index file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; httpd &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /usr/local/apache2/htdocs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-rw-r--r-- 1 504 staff 45 Jun 11  2007 index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then edit the file and click on the floppy icon on the right side or just press CTRL+S (Command+S on macOS) to save the modification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy43na8ynv4g4b7rf2vt4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy43na8ynv4g4b7rf2vt4.png" alt="File menu" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then run the following command from a terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; httpd &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /usr/local/apache2/htdocs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you will see that the owner of the file was changed to root.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;total 4
-rw-r--r-- 1 root root 69 Jan  7 12:21 index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One day it might work better, but for now I don't recommend editing files in containers from the graphical interface.&lt;/p&gt;

&lt;p&gt;Edit only source code that you mount into the container during development, or use Compose watch to update the files when you edit them, but let the data be handled only by the processes in the containers.&lt;/p&gt;
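
&lt;p&gt;As a sketch of the Compose watch approach (the service name, image, and paths below are illustrative assumptions, not taken from this article), a watch rule in &lt;code&gt;compose.yml&lt;/code&gt; can sync your edited sources into the running container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  httpd:
    image: httpd:2.4
    develop:
      watch:
        # copy changed files from the host into the container
        - action: sync
          path: ./htdocs
          target: /usr/local/apache2/htdocs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Starting the project with &lt;code&gt;docker compose watch&lt;/code&gt; then copies your edits into the container, so you never have to touch the volume contents directly.&lt;/p&gt;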

&lt;p&gt;Some applications are not optimized for running in containers, and they keep different folders and files in the same place as the code, so it is hard to work with volumes and mounts while letting the process in the container change a config file that you also want to edit occasionally. In that case, you need to learn how permissions are handled on Linux using the &lt;code&gt;chmod&lt;/code&gt; and &lt;code&gt;chown&lt;/code&gt; commands so that both you and the process in the container have permission to access the files.&lt;/p&gt;
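
&lt;p&gt;To make that concrete, here is a minimal sketch (the &lt;code&gt;/tmp/docroot&lt;/code&gt; path and the &lt;code&gt;0640&lt;/code&gt; mode are illustrative assumptions, not from the article) of inspecting and adjusting permissions with &lt;code&gt;stat&lt;/code&gt; and &lt;code&gt;chmod&lt;/code&gt;:&lt;/p&gt;

```shell
# Illustrative only: /tmp/docroot stands in for a real docroot on a volume.
mkdir -p /tmp/docroot
echo 'hello' > /tmp/docroot/index.html

# Show the current owner, group and permission bits.
stat -c '%U %G %a' /tmp/docroot/index.html

# Owner gets read/write, the group (e.g. the web server's group) read-only,
# everyone else nothing. chown/chgrp would set the owner and group too,
# but those usually require root, so only chmod is shown here.
chmod 0640 /tmp/docroot/index.html
stat -c '%a' /tmp/docroot/index.html
# prints: 640
```

&lt;p&gt;The same idea applies on a volume: pick an owner and group that both you and the containerized process belong to, and set the mode so both sides keep access.&lt;/p&gt;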

&lt;h2&gt;
  
  
  Container based dev environments
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker Desktop Dev environment
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;One of the features of Docker Desktop is that you can run a development environment in a container. In this tutorial we will not discuss it in detail, but it is good to know that it exists, and you can basically work inside a container into which you can mount volumes.&lt;/p&gt;

&lt;p&gt;More information is available in the &lt;a href="https://docs.docker.com/desktop/dev-environments/" rel="noopener noreferrer"&gt;documentation of the Dev environment&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visual Studio Code remote development
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The dev environment of Docker Desktop can be opened from Visual Studio Code, as it supports opening projects in containers similarly to how it supports remote development through an SSH connection or in the Windows Subsystem for Linux. You can use it without Docker Desktop to simply open a shell in a container or even open a project in a container.&lt;/p&gt;

&lt;p&gt;More information is in the &lt;a href="https://code.visualstudio.com/docs/containers/overview" rel="noopener noreferrer"&gt;documentation of VSCode about containers&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visual Studio Code dev containers
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Microsoft also created container images for creating a dev container, which is similar to what Docker Desktop supports, but the process of creating a dev container is different.&lt;/p&gt;

&lt;p&gt;More information in the &lt;a href="https://code.visualstudio.com/docs/devcontainers/containers" rel="noopener noreferrer"&gt;documentation of VSCode about dev containers&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;There are multiple ways to browse the content of Docker volumes, but it is not recommended to edit the files on the volumes. If you know enough about how containers work and which folders and files you can edit without harming your system, you probably know enough not to edit the files that way in the first place.&lt;/p&gt;

&lt;p&gt;For debugging reasons or to learn about Docker by changing things in the environment, you can still edit the files at your own risk.&lt;/p&gt;

&lt;p&gt;Everything I described in this tutorial is true even if the user is not an interactive user but an external user from the container’s point of view, trying to manage files directly in the Docker data root.&lt;/p&gt;

&lt;p&gt;So with that in mind, if you ever think of doing something like that, stop for a moment, grab a piece of paper, and write the following sentence on it 20 times:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I do not touch the Docker data root directly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you enjoyed this tutorial, I also recommend reading about &lt;a href="https://dev.to/rimelek/docker-compose-volumes-volume-only-projects-and-init-containers-5468"&gt;Volume-only Compose projects&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can also subscribe to my &lt;a href="https://www.youtube.com/@akos.takacs" rel="noopener noreferrer"&gt;YouTube channel&lt;/a&gt; if you want to be notified about my videos as well. If you are interested in Hungarian contents, I have a &lt;a href="https://www.youtube.com/@itsziget" rel="noopener noreferrer"&gt;channel for you too&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This tutorial was originally posted on a website where I write about Docker-related topics to help beginners and even more advanced users. Check it out and see if you can find something interesting:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learn-docker.it-sziget.hu/en/latest/pages/advanced/volumes.html" rel="noopener noreferrer"&gt;https://learn-docker.it-sziget.hu/en/latest/pages/advanced/volumes.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Using facts and the GitHub API in Ansible</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Sat, 30 Dec 2023 21:48:18 +0000</pubDate>
      <link>https://dev.to/rimelek/using-facts-and-the-github-api-in-ansible-4i00</link>
      <guid>https://dev.to/rimelek/using-facts-and-the-github-api-in-ansible-4i00</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We will create a new Ansible role and a playbook to automate the installation of the &lt;a href="https://dev.to/rimelek/command-line-tools-i-always-install-on-ubuntu-servers-4828"&gt;Command line tools I always install on Ubuntu servers&lt;/a&gt;. Having the installer and Ansible role is not enough. It is always a good practice to document the role, what it is for and how people can use it, so we will discuss that too.&lt;/p&gt;

&lt;p&gt;The new features we will learn about today are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using multiple tasks files and including a tasks file in the &lt;code&gt;main.yml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Using Ansible facts, disabling them and gathering a subset of the available facts.&lt;/li&gt;
&lt;li&gt;Creating a symbolic link&lt;/li&gt;
&lt;li&gt;Updating the apt repository cache&lt;/li&gt;
&lt;li&gt;Using the folder "vars" in addition to "defaults"&lt;/li&gt;
&lt;li&gt;Using regular expressions in Ansible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will start from the source code of the 7th episode:&lt;br&gt;
&lt;a href="https://github.com/rimelek/homelab/tree/tutorial.episode.7" rel="noopener noreferrer"&gt;https://github.com/rimelek/homelab/tree/tutorial.episode.7&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/n8bqAg5qtSE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Before you begin

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;Download the already written code of the previous episode&lt;/li&gt;
&lt;li&gt;Have an inventory file&lt;/li&gt;
&lt;li&gt;Activate the Python virtual environment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Ansible playbook and optional APT cache update&lt;/li&gt;

&lt;li&gt;

Ansible role

&lt;ul&gt;
&lt;li&gt;Ansible role overview&lt;/li&gt;
&lt;li&gt;Creating a symbolic link&lt;/li&gt;
&lt;li&gt;Using multiple tasks files&lt;/li&gt;
&lt;li&gt;
Install the latest yq from GitHub

&lt;ul&gt;
&lt;li&gt;Using the GitHub API to get the latest release&lt;/li&gt;
&lt;li&gt;Get the version number of the latest release&lt;/li&gt;
&lt;li&gt;Get the architecture and operating system of the server&lt;/li&gt;
&lt;li&gt;Saving helper variables in addition to defaults&lt;/li&gt;
&lt;li&gt;Installing the desired version of yq&lt;/li&gt;
&lt;li&gt;Skip downloading when the existing version is the desired one&lt;/li&gt;
&lt;li&gt;Full yq tasks file&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Documenting Ansible roles&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The project requires Nix, which we discussed in &lt;a href="https://dev.to/rimelek/install-ansible-8-on-ubuntu-2004-lts-using-nix-46hm"&gt;Install Ansible 8 on Ubuntu 20.04 LTS using Nix&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You will also need an Ubuntu remote server. I recommend an Ubuntu 22.04 virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Download the already written code of the previous episode
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you started the tutorial with this episode, clone the project from GitHub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/rimelek/homelab.git
&lt;span class="nb"&gt;cd &lt;/span&gt;homelab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you cloned the project now, or you want to make sure you are using the exact same code I did, switch to the previous episode in a new branch:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; tutorial.episode.7b tutorial.episode.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Have an inventory file
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Copy the inventory template&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp &lt;/span&gt;inventory-example.yml inventory.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Change &lt;code&gt;ansible_host&lt;/code&gt; to the IP address of your Ubuntu server that you use for this tutorial,&lt;/li&gt;
&lt;li&gt;and change &lt;code&gt;ansible_user&lt;/code&gt; to the username on the remote server that Ansible can use to log in.&lt;/li&gt;
&lt;li&gt;If you still don't have an SSH private key, read the &lt;a href="https://dev.to/rimelek/ansible-playbook-and-ssh-keys-33bo#generate-an-ssh-key"&gt;Generate an SSH key part of Ansible playbook and SSH keys&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;If you want to run the playbook called &lt;code&gt;playbook-lxd-install.yml&lt;/code&gt;, you will need to configure a physical or virtual disk, which I wrote about in &lt;a href="https://dev.to/rimelek/the-simplest-way-to-install-lxd-using-ansible-h5o#install-zfs-utils-and-create-a-zfs-pool"&gt;The simplest way to install LXD using Ansible&lt;/a&gt;. If you don't have a usable physical disk, look for &lt;code&gt;truncate -s 50G &amp;lt;PATH&amp;gt;/lxd-default.img&lt;/code&gt; to create a virtual disk.&lt;/li&gt;
&lt;li&gt;You will need an encrypted secret file, which I wrote about in the &lt;a href="https://dev.to/rimelek/use-sops-in-ansible-to-read-your-secrets-2gfa#encrypt-a-file"&gt;Encrypt a file section of "Use SOPS in Ansible to read your secrets"&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Activate the Python virtual environment
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;How you activate the virtual environment depends on how you created it. The episode &lt;a href="https://dev.to/rimelek/the-first-ansible-playbook-579h#install-ansible"&gt;The first Ansible playbook&lt;/a&gt; describes how to create and activate the virtual environment using the "venv" Python module, and in the episode &lt;a href="https://dev.to/rimelek/the-first-ansible-role-paf"&gt;The first Ansible role&lt;/a&gt; we created helper scripts as well, so if you haven't created the environment yet, you can create it by running&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./create-nix-env.sh venv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Optionally start an ssh agent:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-agent &lt;span class="nv"&gt;$SHELL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;and activate the environment with&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;source &lt;/span&gt;homelab-env.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Ansible playbook and optional APT cache update
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Before we can talk about the role, we have to start with a playbook. Previously, we only had playbooks for specific tasks like installing and removing LXD. The goal is a playbook that installs the common dependencies, so you can experiment on the remote servers even without Ansible, and when you try something new, you don't have to start with YAML files before even knowing what you want to do in the end. Let's call this playbook file "&lt;code&gt;playbook-system-base.yml&lt;/code&gt;", and for now, add only the role that we will create soon.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We still assume that every machine we configure in the inventory file is a target. That will change, but not in this post.&lt;/p&gt;

&lt;p&gt;This Ansible role will install lots of APT packages. Other roles could also want to install APT packages, so we want to make sure the APT cache is up to date. It would be a waste of time to update the cache in every role, so we will update it in a pre-task:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;APT update&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_apt_update | default(false, &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) | bool&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In this case we use the built-in "apt" module to update the cache without installing anything, but updating the cache also means the task would always report a change. To suppress that, we add &lt;code&gt;changed_when: false&lt;/code&gt; to the task. We also want a way to skip the update pre-task. When you have to run a playbook 10 times in two minutes while developing it, updating the cache every time is simply unnecessary. We add a condition using the new &lt;code&gt;config_apt_update&lt;/code&gt; variable. If it is not defined in the inventory file, we use "false" as the default value, but you can always override it from the command line.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./run.sh playbook-system-base.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;config_apt_update&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I will define it in my inventory, so this is what the global vars section looks like now:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ansible_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-homelab&lt;/span&gt;
    &lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('community.sops.sops',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'secrets.yml')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.from_yaml&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;config_apt_update&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_zfs_pool_disks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Ansible facts
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;There is one more line we need to add to the playbook.&lt;/p&gt;

&lt;p&gt;When you run a playbook, as the very first step, Ansible detects devices and collects information about the host, for example the network configuration and the version of the Linux distribution. The collected information is available through variables; these are the facts. Sometimes you don't need these facts and you want to speed up the execution of the playbook, especially when you run it on 100 servers, or on just a couple but very often during development. In that case, you can set &lt;code&gt;gather_facts: false&lt;/code&gt; in the playbook like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;pre_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;APT update&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_apt_update | default(false, &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) | bool&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;roles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you use roles you didn't write, and you don't want to find out what facts they need, just leave the facts gathering enabled.&lt;/p&gt;
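&lt;p&gt;You don't always have to choose between gathering everything and nothing: the built-in "setup" module can also gather only a subset of the facts, which is one of the features listed at the beginning of this post. A minimal sketch, assuming a recent ansible-core where fine-grained subset names like "architecture" are supported (check the setup module documentation for the exact list in your version):&lt;br&gt;
&lt;/p&gt;

```yaml
- hosts: all
  gather_facts: false
  pre_tasks:
    # Gather only the architecture-related facts instead of all of them
    - name: Gather a subset of facts
      ansible.builtin.setup:
        gather_subset:
          - "!all"
          - "!min"
          - architecture
```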

&lt;p&gt;Now you may think you understand the difference between the variables we used before and the facts, but in fact, you can also define facts using &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/set_fact_module.html" rel="noopener noreferrer"&gt;the set_fact builtin module&lt;/a&gt;. So one more important difference is the scope. You can define a variable in a task, but that variable will not be available in the next task. Facts are available everywhere, and you can also cache them, so when you define a fact, run the playbook, remove the definition and rerun the playbook, you can still read the fact from the cache. Of course it depends on the &lt;a href="https://docs.ansible.com/ansible/latest/plugins/cache.html" rel="noopener noreferrer"&gt;cache plugin in use&lt;/a&gt;, and the default is memory, so by default the facts are not available when you run a playbook the second time. If you want to see how persistent fact caching works, the following example shows it.&lt;/p&gt;

&lt;p&gt;Run the following commands in a terminal:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANSIBLE_CACHE_PLUGIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;jsonfile 
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANSIBLE_CACHE_PLUGIN_CONNECTION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;/var/cache"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Use the following playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ansible.builtin.set_fact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cacheable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;mytest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ansible.builtin.debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;var&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible_facts.mytest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After running the playbook, you will find a file named "localhost" in the folder you specified in the plugin connection.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mytest"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hello"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then run the following playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ansible.builtin.debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;var&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible_facts.mytest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And Ansible will still remember the value of "mytest":&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ok: [localhost] =&amp;gt; {
    "ansible_facts.mytest": "hello"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Ansible role
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Ansible role overview
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The new Ansible role will be called "cli_tools". The structure of the role will be the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;defaults/&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;main.yml&lt;/strong&gt;: The place for default parameter values.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vars/&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;main.yml&lt;/strong&gt;: A file to store helper variables which are not intended to be changed by the user. You can use this file if the alternative is storing the variables in the tasks file, which requires creating a block only for those variables.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tasks/&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;main.yml&lt;/strong&gt;: The default tasks file that we always used&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;yq.yml&lt;/strong&gt;: An additional tasks file which we can refer to and load in the &lt;code&gt;main.yml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;README.md&lt;/strong&gt;: This is basically the documentation of the role, containing everything that helps the user understand how the role can be used, what it expects to be already installed, and so on. We will discuss it in more detail later.&lt;/li&gt;
&lt;/ul&gt;
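&lt;p&gt;The same structure as a directory tree (derived from the list above):&lt;br&gt;
&lt;/p&gt;

```
roles/cli_tools/
├── README.md
├── defaults/
│   └── main.yml
├── tasks/
│   ├── main.yml
│   └── yq.yml
└── vars/
    └── main.yml
```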
&lt;h3&gt;
  
  
  Creating a symbolic link
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Most of the packages I install on a Debian-based Linux can be installed from an APT repository, but using the built-in &lt;code&gt;apt&lt;/code&gt; module that we already used before is not really interesting, so let's jump to the interesting part. Sometimes I just want an alias for a command, and that's when I create a symbolic link, in this case pointing to the pygmentize command. The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/file_module.html" rel="noopener noreferrer"&gt;built-in file module&lt;/a&gt; can create a symbolic link if the state field is "link".&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create "highlight" as a symbolic link to "pygmentize" | Install formatting tools for scripting and user-friendly outputs&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;link&lt;/span&gt;
        &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/bin/pygmentize&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_highlight_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The destination could have been static, but I wanted to make it changeable, so I will have a default value for that in &lt;code&gt;defaults/main.yml&lt;/code&gt;.&lt;/p&gt;
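&lt;p&gt;A minimal sketch of the related entry in &lt;code&gt;roles/cli_tools/defaults/main.yml&lt;/code&gt; (the path here is an assumed example value, not necessarily the one used in the repository):&lt;br&gt;
&lt;/p&gt;

```yaml
# Assumed example value; users of the role can override it in their inventory
cli_tools_highlight_dest: /usr/local/bin/highlight
```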
&lt;h3&gt;
  
  
  Using multiple tasks files
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Installing &lt;code&gt;yq&lt;/code&gt; will be complicated, but I don't want to complicate my main tasks file. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_tasks_module.html" rel="noopener noreferrer"&gt;built-in include_tasks module&lt;/a&gt; can load another tasks file and expects the name of the file and executes the tasks in it.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Include tasks from another file&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.include_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;file.yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It can be useful in different situations, but in this case, I didn't want to keep the most complicated installation process in the main file. The &lt;code&gt;main.yml&lt;/code&gt; can also be shorter this way. For more details about how this module can be used, don't forget to check the documentation I linked above.&lt;/p&gt;

&lt;p&gt;The following code is the part of &lt;code&gt;main.yml&lt;/code&gt; in the &lt;code&gt;cli_tools&lt;/code&gt; role which shows all three modules I used in the &lt;code&gt;main.yml&lt;/code&gt;, and it also includes a block. Most of the tasks will be familiar, since we used the APT module before and I also shared the symlink part, but the last task is an include.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;roles/cli_tools/tasks/main.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install formatting tools for scripting and user friendly outputs&lt;/span&gt;
  &lt;span class="na"&gt;block&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;APT packages | Install formatting tools for scripting and user-friendly outputs&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;jq&lt;/span&gt; &lt;span class="c1"&gt;# to handle json files&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;python3-pygments&lt;/span&gt; &lt;span class="c1"&gt;# to highlight codes with "pygmentize"&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create "highlight" as a symbolic link to "pygmentize" | Install formatting tools for scripting and user-friendly outputs&lt;/span&gt;
      &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;ansible.builtin.file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;link&lt;/span&gt;
        &lt;span class="na"&gt;src&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/bin/pygmentize&lt;/span&gt;
        &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_highlight_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ansible.builtin.include_tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yq.yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I didn't use the "name" parameter in the last task, because the tasks in the included file have names of their own, so a name here wouldn't really help in understanding the role and wouldn't add more value to the logs either. It was not my idea; I named every single task until I read about this point of view and agreed. Unfortunately, I don't have a link to the source.&lt;/p&gt;
&lt;h3&gt;
  
  
  Install the latest yq from GitHub
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Using the GitHub API to get the latest release
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We can finally discuss the most interesting part. I want to install "yq" from GitHub, which will require two more default variables in &lt;code&gt;defaults/main.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;cli_tools_yq_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/bin/yq&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The version number is empty, which means I want to install the latest version. I tried to find a link pointing directly to the latest release, but it turned out there was no such link. However, the GitHub API can tell us which one is the latest. If you just want to get the URL to download the latest version, you can try the following in the terminal:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://api.github.com/repos/mikefarah/yq/releases/latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It will return a JSON document which is too long to show in full, but let's see the relevant part:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"html_uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/mikefarah/yq/releases/tag/v4.40.5"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"assets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"browser_download_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/mikefarah/yq/releases/download/v4.40.5/yq_linux_amd64"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"browser_download_url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://github.com/mikefarah/yq/releases/download/v4.40.5/yq_darwin_arm64"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This response is really important, because it contains all the information we need, and it contains it more than once.&lt;/p&gt;

&lt;p&gt;Let's see how you can call the API endpoint from Ansible:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get latest version info as json&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools_yq_version | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.uri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://api.github.com/repos/mikefarah/yq/releases/latest&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/uri_module.html" rel="noopener noreferrer"&gt;built-in uri module&lt;/a&gt; allows us to call the endpoint and save the json response into a variable. Of course we want to do that only if the requested version number is empty, that's why we compare the version number to an empty string.&lt;/p&gt;
&lt;h4&gt;
  
  
  Get the version number of the latest release
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;In the previous section, you could see that we could get the download URL from the JSON response, and that it contains the version number, the architecture and also the operating system. The response also shows that these are the only differences between the download URLs. The download URL is the only thing we need, but sometimes we want to specify the version number instead of getting the latest version. So instead of filtering the asset list for a URL whose format we already know, we can just build the URL from scratch. The first important part of that URL is the version number, which can also be found in the &lt;code&gt;html_url&lt;/code&gt; field, so we don't even need to list the release files.&lt;/p&gt;

&lt;p&gt;Assuming you already have jq on the server, you can run the following:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s1"&gt;'https://api.github.com/repos/mikefarah/yq/releases/latest'&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;  &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.html_url'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | xargs &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/^v//'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;4.50.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We need the version number in Ansible. We registered the json response in &lt;code&gt;_yq_latest&lt;/code&gt;. It has a property called "json", which is not a string, but the decoded version of the json string, since the "uri" module recognized json in the HTTP response headers. The above bash command can be replaced with the following Jinja template in Ansible:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;_yq_latest_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
      &lt;span class="s"&gt;_yq_latest.json.html_url&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;basename&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;regex_replace('^v(.*)',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;1')&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We also used a very simple regular expression telling Ansible to remove the leading "v" from the version number. Removing the "v" is not really important. It was just my preference to work with only the numbers.&lt;/p&gt;

&lt;p&gt;We now have the latest version number, and we know that we want to use that as the default value and also be able to override it. This is how you do it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_version&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(_yq_latest_version_number,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Get the architecture and operating system of the server
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The next important thing after the version number is the release name. The release name always starts with "yq_" followed by the operating system and the architecture. We will need the &lt;code&gt;uname&lt;/code&gt; command to get the name of the operating system (darwin on macOS and linux on Linux) and the &lt;code&gt;arch&lt;/code&gt; command to get the CPU architecture. Unfortunately, amd64 can also be called x86_64 and arm64 can also be called aarch64, so let's use &lt;code&gt;sed&lt;/code&gt; to normalize that.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;uname
arch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Linux
x86_64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;While we could use the &lt;code&gt;uname&lt;/code&gt; and the &lt;code&gt;arch&lt;/code&gt; commands to get the operating system and the CPU architecture in the terminal, we can use facts in Ansible. Since we disabled the fact gathering, we have to use the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/setup_module.html" rel="noopener noreferrer"&gt;built-in setup module&lt;/a&gt; to get the architecture and the operating system.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Collect architecture facts&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.setup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;gather_subset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;architecture&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After that, you can get the operating system and the architecture from the &lt;code&gt;ansible_facts&lt;/code&gt; variable.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;os&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible_facts.system&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
      &lt;span class="na"&gt;arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible_facts.architecture&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;var&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Although I prefer using &lt;code&gt;ansible_facts&lt;/code&gt;, so I can easily search for where I'm using facts, you could also use the variables prefixed with &lt;code&gt;ansible_&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;os&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible_system&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
      &lt;span class="na"&gt;arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible_architecture&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;var&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Saving helper variables in addition to defaults
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Variables in Ansible can be defined in many places. In a role, we can have defaults, but we can also have variables that are not meant to be changed (although we could change them too), only to organize our templates, so we don't have to define all the variables in the tasks files.&lt;/p&gt;

&lt;p&gt;The architecture and the operating system are the two most important pieces of information to build the final URL. We have to convert them to the format used in the download URL.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="s1"&gt;'[:upper:]'&lt;/span&gt; &lt;span class="s1"&gt;'[:lower:]'&lt;/span&gt;
&lt;span class="nb"&gt;arch&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/x86_64/amd64/'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="s1"&gt;'s/aarch64/arm64/'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We converted the name of the operating system to lowercase, and replaced the architecture names with their alternatives. Yes, in this project we support only these two architectures.&lt;/p&gt;

&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;linux
amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In Ansible, we will save these templates in &lt;code&gt;vars/main.yml&lt;/code&gt;, which lives in another folder called "vars/" at the same level as "defaults/".&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;cli_tools_yq_archs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;x86_64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;amd64&lt;/span&gt;
  &lt;span class="na"&gt;amd64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;amd64&lt;/span&gt;
  &lt;span class="na"&gt;aarch64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arm64&lt;/span&gt;
  &lt;span class="na"&gt;arm64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;arm64&lt;/span&gt;

&lt;span class="na"&gt;cli_tools_yq_os&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible_facts.system&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lower&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;span class="na"&gt;cli_tools_yq_arch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_archs[ansible_facts.architecture]&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;span class="na"&gt;cli_tools_yq_release_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'yq_'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_os&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'_'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;+&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_arch&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is how we will always get arm64 or amd64. Since we get the OS name from a fact, it would also work on macOS. That doesn't mean the whole role would work, since we also use the APT package manager, but you could try moving the yq installation into a separate role. Whether you use multiple tasks files or a new role is up to you.&lt;/p&gt;

&lt;p&gt;The last thing we did was defining the full release name.&lt;/p&gt;
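The fact-to-release-name mapping above can be reproduced in a plain shell session. This is only a sketch with hard-coded sample values standing in for `ansible_facts.system` and `ansible_facts.architecture`:

```shell
# Sample fact values (assumptions for illustration)
os="Linux"
cpu="x86_64"

# Lowercase the OS name, since the release files use "linux" and "darwin"
os="$(printf '%s' "$os" | tr '[:upper:]' '[:lower:]')"

# Map both architecture spellings to the one used in the release names
cpu="$(printf '%s' "$cpu" | sed 's/^x86_64$/amd64/;s/^aarch64$/arm64/')"

echo "yq_${os}_${cpu}"
```

On an x86_64 Linux host this yields `yq_linux_amd64`, matching the asset names in the yq releases.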
&lt;h4&gt;
  
  
  Installing the desired version of yq
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We finally have all the information to build the download URL:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;_url_base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/mikefarah/yq/releases/download/&lt;/span&gt;
&lt;span class="na"&gt;_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_url_base&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}v{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_release_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We can use the following task, but in this case, we choose the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/get_url_module.html" rel="noopener noreferrer"&gt;built-in get_url module&lt;/a&gt; instead of uri.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install yq&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;failed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_install.status_code not in [200, 304]&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;_yq_latest_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
      &lt;span class="s"&gt;_yq_latest.json.html_url&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;basename&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;regex_replace('^v(.*)',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;1')&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_version&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(_yq_latest_version_number,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_url_base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/mikefarah/yq/releases/download/&lt;/span&gt;
    &lt;span class="na"&gt;_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_url_base&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}v{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_release_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.get_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_url&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0775&lt;/span&gt;
    &lt;span class="na"&gt;force&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;There is one parameter I have to explain.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;force&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Without this parameter we couldn't update an already installed yq. It tells Ansible to overwrite the downloaded file.&lt;/p&gt;
&lt;h4&gt;
  
  
  Skip downloading when the existing version is the desired one
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Previously, we always overwrote the installed version, which required downloading the file every time. To avoid that, we need the version of the already installed yq, and we can only get it if yq is actually installed.&lt;/p&gt;

&lt;p&gt;To find out if the file is already downloaded, we can use the &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/stat_module.html" rel="noopener noreferrer"&gt;built-in stat module&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check if {{ cli_tools_yq_dest }} exists&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.stat&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_existing_dest_check&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now the boolean &lt;code&gt;_yq_existing_dest_check.stat.exists&lt;/code&gt; variable tells you whether it exists or not. In the terminal, you would get the version number like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yq &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yq (https://github.com/mikefarah/yq/) version v4.40.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;It's not just a version number, so we will use a regular expression again, but first we get the version info in Ansible:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get the version information of the existing yq command&lt;/span&gt;
  &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_existing_dest_check.stat.exists&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--version"&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_existing_version_info&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I used the &lt;code&gt;cli_tools_yq_dest&lt;/code&gt; variable so the task will work even if the folder of the binary is missing from the PATH environment variable.&lt;/p&gt;

&lt;p&gt;We need to apply the following filter on the version info:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;regex_replace('.*version v(\\d+\\.\\d+\\.\\d+).*', '\\1')&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;As a template variable:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;_yq_existing_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_yq_existing_version_info&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;regex_replace('.*version&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;v(&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+).*',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;1')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
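The same extraction can be sanity-checked in the terminal. A quick `sed` sketch, using a sample version string rather than the actual module output:

```shell
# Extract the numeric version from a yq --version line (sample input)
printf '%s\n' 'yq (https://github.com/mikefarah/yq/) version v4.40.5' \
  | sed -E 's/.*version v([0-9]+\.[0-9]+\.[0-9]+).*/\1/'
```

This prints `4.40.5`, the same value the Jinja filter above produces.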


&lt;p&gt;We will also need to add the following condition to the task:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;not _yq_existing_dest_check.stat.exists or _yq_existing_version_number != _yq_desired_version_number&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The final task is below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install yq&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;not _yq_existing_dest_check.stat.exists or _yq_existing_version_number != _yq_desired_version_number&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;_yq_existing_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_yq_existing_version_info&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;regex_replace('.*version&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;v(&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+).*',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;1')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_yq_latest_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
      &lt;span class="s"&gt;_yq_latest.json.html_url&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;basename&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;regex_replace('^v(.*)',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;1')&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_version&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(_yq_latest_version_number,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_url_base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/mikefarah/yq/releases/download/&lt;/span&gt;
    &lt;span class="na"&gt;_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_url_base&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}v{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_release_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.get_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_url&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0775&lt;/span&gt;
    &lt;span class="na"&gt;force&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Full yq tasks file
&lt;/h4&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Now let's see what &lt;code&gt;yq.yml&lt;/code&gt; looks like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;roles/cli_tools/tasks/yq.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Collect architecture facts&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.setup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;gather_subset&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;architecture&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get latest version info as json&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cli_tools_yq_version | default('', &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="s"&gt;) == ''&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.uri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://api.github.com/repos/mikefarah/yq/releases/latest&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_latest&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check if {{ cli_tools_yq_dest }} exists&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.stat&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_existing_dest_check&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Get the version information of the existing yq command&lt;/span&gt;
  &lt;span class="na"&gt;changed_when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_existing_dest_check.stat.exists&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--version"&lt;/span&gt;
  &lt;span class="na"&gt;register&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_yq_existing_version_info&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install yq&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;not _yq_existing_dest_check.stat.exists or _yq_existing_version_number != _yq_desired_version_number&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;_yq_existing_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_yq_existing_version_info&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;regex_replace('.*version&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;v(&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;d+).*',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;1')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_yq_latest_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
      &lt;span class="s"&gt;_yq_latest.json.html_url&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;basename&lt;/span&gt;
        &lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;regex_replace('^v(.*)',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s"&gt;1')&lt;/span&gt;
      &lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_version&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default(_yq_latest_version_number,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;true)&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;_url_base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/mikefarah/yq/releases/download/&lt;/span&gt;
    &lt;span class="na"&gt;_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_url_base&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}v{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_yq_desired_version_number&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}/{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_release_name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;ansible.builtin.get_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;_url&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;dest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;cli_tools_yq_dest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;root&lt;/span&gt;
    &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0775&lt;/span&gt;
    &lt;span class="na"&gt;force&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
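&lt;p&gt;The &lt;code&gt;regex_replace&lt;/code&gt; filters above reduce the output of &lt;code&gt;yq --version&lt;/code&gt; and the release URL to a bare version number. The same extraction can be sketched in plain shell with sed; the sample version string below is an assumption about the usual output format, not captured from a real run:&lt;/p&gt;

```shell
# Assumed sample of what "yq --version" typically prints
version_output='yq (https://github.com/mikefarah/yq/) version v4.40.5'

# Extract the bare semantic version, mirroring the regex_replace filter in the task
echo "$version_output" | sed -E 's/.*version v([0-9]+\.[0-9]+\.[0-9]+).*/\1/'
# prints: 4.40.5
```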


&lt;p&gt;Run the final playbook&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./run.sh playbook-system-base.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;config_apt_update&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Documenting Ansible roles
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;When you write an Ansible role, you can forget about the parameters and how you can use them. You can forget about some requirements which are needed before you use the role. It is a good practice to have a README file in the root folder of the role. If you want to share the role, it is even more important.&lt;/p&gt;

&lt;p&gt;The README file could have any structure, but the recommended one is the following markdown structure:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
role_name
=========

Description

Requirements
------------

List of requirements like the supported operating systems

Role variables
--------------



```yaml
role_variable: value
```



Description of the above variable

Dependencies
------------

List of dependencies like other roles

Example playbook
----------------



```yaml
- hosts: all
  roles:
    - role: role_name
      role_variable: value
```



License
-------

The name of the license

Author information
------------------

Your name or the name of your team and optional email address.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The description part is usually short, but I thought it would be a good idea to describe all the tools that the role would install, so mine is really long. I don't want to share the whole documentation, but you can find it on GitHub.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;This is how a very simple task becomes a very complicated one. I wanted to show you which command line tools I usually install on my Linux servers, which became a separate article. In Ansible, it required talking about Ansible facts and organizing our variables better. In my original role, I never overwrote the existing yq binary, and when I needed a new version, I could just remove the binary on the server and rerun the playbook. If you have many servers, it is better to automatically check whether you have the desired version or not. It also demonstrated what it means to detect the existing state when there is no module to do that for you.&lt;/p&gt;

&lt;p&gt;Now that we have a role to install the most important command line tools, we can reuse it later, for example when we use Ansible to create new virtual machines in which we also want these tools and more. Coming soon in a following tutorial.&lt;/p&gt;

&lt;p&gt;The final source code of this episode can be found on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rimelek/homelab/tree/tutorial.episode.8" rel="noopener noreferrer"&gt;https://github.com/rimelek/homelab/tree/tutorial.episode.8&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/rimelek" rel="noopener noreferrer"&gt;
        rimelek
      &lt;/a&gt; / &lt;a href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;
        homelab
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Source code to create a home lab. Part of a video tutorial
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;README&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;This project was created to help you build your own home lab where you can test
your applications and configurations without breaking your workstation, so you can
learn on cheap devices without paying for more expensive cloud services.&lt;/p&gt;
&lt;p&gt;The project contains code written for the tutorial, but you can also use parts of it
if you refer to this repository.&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in English: &lt;a href="https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in Hungarian: &lt;a href="https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note: The inventory.yml file is not shared since that depends on the actual environment
so it will be different for everyone. If you want to learn more about the inventory file
watch the videos on YouTube or read the written version on &lt;a href="https://dev.to" rel="nofollow"&gt;https://dev.to&lt;/a&gt;. Links in
the video descriptions on YouTube.&lt;/p&gt;
&lt;p&gt;You can also find an example inventory file in the project root. You can copy that and change
the content, so you will use your IP…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>ansible</category>
      <category>infrastructureascode</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Command line tools I always install on Ubuntu servers</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Mon, 25 Dec 2023 17:34:13 +0000</pubDate>
      <link>https://dev.to/rimelek/command-line-tools-i-always-install-on-ubuntu-servers-4828</link>
      <guid>https://dev.to/rimelek/command-line-tools-i-always-install-on-ubuntu-servers-4828</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;There are some tools that we almost always install on our servers. Sometimes we install them immediately, and sometimes we realize later what is missing and install it then. In this post we will discuss the tools I always install on Ubuntu servers. Although the following package names are for Ubuntu Linux, most of the tools can be installed on other distributions as well, and some of them, like jq and yq, can be installed on macOS too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Common network tools&lt;/li&gt;
&lt;li&gt;Python 3 support&lt;/li&gt;
&lt;li&gt;Formatting tools for scripting and user-friendly outputs&lt;/li&gt;
&lt;li&gt;Packages for downloading files and webpages&lt;/li&gt;
&lt;li&gt;Compress and decompress files&lt;/li&gt;
&lt;li&gt;Monitoring tools in command line&lt;/li&gt;
&lt;li&gt;User manual related packages&lt;/li&gt;
&lt;li&gt;Text editors&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common network tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  bridge-utils
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;It contains the brctl command, which can manage bridge interfaces. The brctl command can be replaced with the "ip" command, but it is useful to have it in case you copy and paste commands from tutorials where brctl is used.&lt;/p&gt;

&lt;p&gt;To list bridge interfaces, you could use the following command for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brctl show &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="nt"&gt;-F&lt;/span&gt; &lt;span class="s1"&gt;$'&lt;/span&gt;&lt;span class="se"&gt;\t&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt; &lt;span class="s1"&gt;'$1 != "" {print $1}'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; +2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;br-35a7f7573daa
br-40c75c6946c5
br-415f86d29b58
br-85776ec34ce3
br-a39b1276c115
br-be88ce1dc3d8
br-c506658ca645
docker0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  net-tools
&lt;/h3&gt;

&lt;p&gt;It contains "netstat", which can be replaced with another command called "ss", part of the "iproute2" package on Ubuntu. A common netstat invocation to list the ports on which processes are listening is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;netstat &lt;span class="nt"&gt;-tulpn&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:8952          0.0.0.0:*               LISTEN      -
tcp        0      0 192.168.4.58:53         0.0.0.0:*               LISTEN      -
tcp        0      0 10.17.181.1:53          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:61209         0.0.0.0:*               LISTEN      -
tcp        0      0 192.168.4.58:5432       0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp6       0      0 ::1:8952                :::*                    LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
udp        0      0 10.17.181.1:53          0.0.0.0:*                           -
udp        0      0 192.168.4.58:53         0.0.0.0:*                           -
udp        0      0 0.0.0.0:67              0.0.0.0:*                           -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  iproute2
&lt;/h3&gt;

&lt;p&gt;Usually installed by default on recent Ubuntu distros. It gives you the "ip" command to manage IP addresses and network interfaces. It also contains the "ss" command which can be used instead of "netstat".&lt;/p&gt;

&lt;p&gt;The alternative command to the previously mentioned brctl command is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ip &lt;span class="nt"&gt;-json&lt;/span&gt; &lt;span class="nb"&gt;link &lt;/span&gt;show &lt;span class="nb"&gt;type &lt;/span&gt;bridge &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; .[].ifname &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;sort&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This specific command requires jq, which I will write about later.&lt;/p&gt;

&lt;p&gt;If you want to replace the netstat command with the newer "ss" command, you can run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ss &lt;span class="nt"&gt;-tulpn&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see the parameters are the same, but the output will be slightly different:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Netid             State              Recv-Q             Send-Q                          Local Address:Port                           Peer Address:Port             Process
udp               UNCONN             0                  0                                 10.17.181.1:53                                  0.0.0.0:*
udp               UNCONN             0                  0                                192.168.4.58:53                                  0.0.0.0:*
udp               UNCONN             0                  0                              0.0.0.0%lxdbr0:67                                  0.0.0.0:*
tcp               LISTEN             0                  5                                   127.0.0.1:61209                               0.0.0.0:*
tcp               LISTEN             0                  32                                10.17.181.1:53                                  0.0.0.0:*
tcp               LISTEN             0                  256                              192.168.4.58:53                                  0.0.0.0:*
tcp               LISTEN             0                  128                                   0.0.0.0:22                                  0.0.0.0:*
tcp               LISTEN             0                  244                              192.168.4.58:5432                                0.0.0.0:*
tcp               LISTEN             0                  16                                  127.0.0.1:8952                                0.0.0.0:*
tcp               LISTEN             0                  128                                      [::]:22                                     [::]:*
tcp               LISTEN             0                  16                                      [::1]:8952                                   [::]:*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Python 3 support
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Python 3 is usually recommended instead of Python 2. This role also installs common tools that we use in a Python environment:&lt;/p&gt;

&lt;h3&gt;
  
  
  python3
&lt;/h3&gt;

&lt;p&gt;The main package for Python 3. Sometimes you want a specific version if the APT repository offers multiple versions; then you can specify it, like &lt;code&gt;python3.11&lt;/code&gt;, if the exact version number matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  python3-pip
&lt;/h3&gt;

&lt;p&gt;Pip package manager for Python 3. If you install &lt;code&gt;python3.11&lt;/code&gt;, the package name will be &lt;code&gt;python3.11-pip&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  python3-venv
&lt;/h3&gt;

&lt;p&gt;The venv module is one of the modules that can create a virtual environment.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  python3-virtualenv
&lt;/h3&gt;

&lt;p&gt;The virtualenv module is also one that can create a virtual environment. It is probably better known than venv, and you can use it very similarly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; virtualenv venv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Formatting tools for scripting and user-friendly outputs
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;h3&gt;
  
  
  jq
&lt;/h3&gt;

&lt;p&gt;To handle JSON files and JSON outputs in a script, or to format and highlight them, jq can be very handy. Many command line tools provide a JSON output, so you don't have to write a custom parser for a table or a list in a terminal. Instead, you can use jq to get a specific value from the output or even modify the output. For more information, you can visit &lt;a href="https://jqlang.github.io/jq/" rel="noopener noreferrer"&gt;https://jqlang.github.io/jq/&lt;/a&gt;&lt;/p&gt;
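&lt;p&gt;A minimal self-contained illustration (assuming jq is installed) of pulling a single field out of a JSON document:&lt;/p&gt;

```shell
# Extract one field from an inline JSON document with jq
# -r prints the raw string instead of a quoted JSON value
echo '{"name": "yq", "tags": ["cli", "yaml"]}' | jq -r .name
# prints: yq
```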

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;lxc list &lt;span class="nt"&gt;--format&lt;/span&gt; json docker &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; .[].created_at
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command is similar to what we will discuss in the near future in another post. Docker also supports JSON outputs, so jq can be useful in case you are not comfortable with Go templates.&lt;/p&gt;

&lt;h3&gt;
  
  
  python3-pygments
&lt;/h3&gt;

&lt;p&gt;It installs the "pygmentize" command to highlight the contents of files, like source code, in the terminal. It isn't required for JSON, thanks to jq, but it can be useful for other kinds of output.&lt;/p&gt;

&lt;p&gt;For example, instead of using &lt;code&gt;cat lxd-init.yml&lt;/code&gt; to print the content of a YAML file (which we did in previous chapters), you can use the pygmentize command and get a highlighted, colorized output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pygmentize lxd-init.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to read about the installation of LXD for running virtual machines and containers on Linux, read &lt;a href="https://dev.to/rimelek/creating-virtual-machines-with-lxd-581k"&gt;Creating virtual machines with LXD&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of remembering the name "pygmentize" you would probably prefer "highlight", so you can also add a symbolic link to "pygmentize".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/bin/pygmentize /usr/local/bin/highlight
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  yq
&lt;/h3&gt;

&lt;p&gt;yq is similar to jq, except it is for YAML files. Let's say you want to get the profile definitions from an lxd init yaml. You can run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yq .profiles lxd-init.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;devices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;eth0&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eth0&lt;/span&gt;
      &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lxdbr0&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nic&lt;/span&gt;
    &lt;span class="na"&gt;root&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
      &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disk&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more information about this command visit &lt;a href="https://github.com/mikefarah/yq" rel="noopener noreferrer"&gt;https://github.com/mikefarah/yq&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Packages for downloading files and webpages
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;curl&lt;/strong&gt; and &lt;strong&gt;wget&lt;/strong&gt; are two popular commands to download files from the internet or just make API calls. I usually install both of them, so whatever commands I find in documentation, I can copy and paste them. However, I prefer using curl. Curl also has bindings in programming languages like PHP.&lt;/p&gt;

&lt;p&gt;Example commands with curl and wget to follow redirections and inspect the response (add curl's &lt;code&gt;-I&lt;/code&gt; option if you only want the response headers):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; http://google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget &lt;span class="nt"&gt;-O-&lt;/span&gt; &lt;span class="nt"&gt;-S&lt;/span&gt; &lt;span class="nt"&gt;--spider&lt;/span&gt; http://google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Compress and decompress files
&lt;/h2&gt;

&lt;p&gt;On Linux I usually use "tar" and "gzip" to compress files. Sometimes I need to download a release from GitHub or some other file compressed with zip, so I also install "zip" and "unzip".&lt;/p&gt;
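&lt;p&gt;The typical invocations look like this; the file and directory names below are only placeholders:&lt;/p&gt;

```shell
# Create a sample directory and compress it into a gzip-compressed tarball
mkdir -p demo && echo 'hello' > demo/file.txt
tar -czf demo.tar.gz demo

# Extract the tarball into another directory and check the content
mkdir -p extracted
tar -xzf demo.tar.gz -C extracted
cat extracted/demo/file.txt   # prints: hello
```

&lt;p&gt;zip and unzip are used similarly: &lt;code&gt;zip -r demo.zip demo&lt;/code&gt; compresses the directory and &lt;code&gt;unzip demo.zip&lt;/code&gt; extracts it.&lt;/p&gt;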

&lt;h2&gt;
  
  
  Monitoring tools in command line
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;h3&gt;
  
  
  htop
&lt;/h3&gt;

&lt;p&gt;Probably everyone knows the "top" command. Htop is similar, but gives us a more user-friendly output. It shows which processes use the most resources, how much of each resource is still available, and who runs those processes. For more information, visit &lt;a href="https://htop.dev/" rel="noopener noreferrer"&gt;https://htop.dev/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cxbl88uj0tp9qr5plfun.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxbl88uj0tp9qr5plfun.png" alt="htop screenshot" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  btop
&lt;/h3&gt;

&lt;p&gt;Btop is even more advanced than htop. It is almost like a GUI in the terminal, and it feels like an airplane dashboard. I like it, but when I want to see which processes are using most of my resources, it is usually &lt;code&gt;htop&lt;/code&gt; that comes to mind rather than &lt;code&gt;btop&lt;/code&gt;, since btop shows more by default than I usually want. For more information see &lt;a href="https://github.com/aristocratos/btop" rel="noopener noreferrer"&gt;https://github.com/aristocratos/btop&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y6gbkpjq5saos3z33kdy.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6gbkpjq5saos3z33kdy.png" alt="btop screenshot" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  glances
&lt;/h3&gt;

&lt;p&gt;It has a web-based interface too, but you can use it from the terminal. Glances shows information about Docker containers by default, so when it comes to Docker containers, it is really useful. For more information see &lt;a href="https://glances.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;https://glances.readthedocs.io/en/latest/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l1rou8aggbxnc5gulwh2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1rou8aggbxnc5gulwh2.png" alt="glances htop" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created the screenshot in a virtual machine (which we will create together in a following tutorial) where I have Docker installed. As you can probably see (click on the image if it is too small), there is a "containers" section as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  User manual related packages
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The "man" command is almost always available on Linux, but sometimes it is missing. Some virtual machine images and container images don't come with this package, so I install it to make sure I always have it and can read the user manual of curl, for example.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;man curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Text editors
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;h3&gt;
  
  
  nano
&lt;/h3&gt;

&lt;p&gt;A simple terminal text editor with limited capabilities, but it is very easy to use. I usually prefer this one, and I don't care how many people tell me that vim is much better. When I want to edit a simple file quickly, like &lt;code&gt;/etc/hosts&lt;/code&gt;, I don't need an advanced editor, and when I need an advanced editor, I almost always have a GUI, so I can use VSCode for example.&lt;/p&gt;

&lt;h3&gt;
  
  
  vim
&lt;/h3&gt;

&lt;p&gt;An advanced and very popular terminal text editor. It has many keyboard shortcuts, for example to delete multiple lines, and for other tasks that are not as simple in a terminal as they would be with a GUI. Even though I usually don't need it, I install it because it can be useful sometimes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;The list I shared in this post shows what I install on my servers. Those servers are usually Debian-based, like Ubuntu, but whatever Linux distribution I run, I like to use the same tools. For example, I use nano, jq, and curl almost every day on macOS.&lt;/p&gt;

&lt;p&gt;I also use Docker, and although it has a command-line part as well, it is much more than that, so I didn't include it in this list.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>Use SOPS in Ansible to read your secrets</title>
      <dc:creator>Ákos Takács</dc:creator>
      <pubDate>Tue, 14 Nov 2023 21:08:52 +0000</pubDate>
      <link>https://dev.to/rimelek/use-sops-in-ansible-to-read-your-secrets-2gfa</link>
      <guid>https://dev.to/rimelek/use-sops-in-ansible-to-read-your-secrets-2gfa</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When we are using Ansible, we often need passwords, tokens, or other secrets that we don't want to store as plain text and definitely don't want to commit to a git repository. At least not as plain text. Personally, I don't like to commit secrets even if they are encrypted, unless it is required.&lt;/p&gt;

&lt;p&gt;I used Ansible Vault for years. It was suggested that I replace it with GPG, since we used that too anyway, but somehow I just didn't like GPG as much, so I kept using Ansible Vault. My problem with Ansible Vault was that when I encrypted a YAML file containing secrets, the whole file was encrypted, including the parameter names. Of course, there were workarounds: keeping an unencrypted file without the values next to a completely encrypted one, so you had to manage two files instead of one, or encrypting individual variables manually.&lt;/p&gt;

&lt;p&gt;About a year ago, I started to use Secrets Operations (SOPS), and it made everything easier. Not to mention that I can use SOPS without Ansible. In this post I will use SOPS to finally encrypt the sudo password (aka become pass) for the user on the remote server, and I will use the Nix package manager to install the required packages, &lt;a href="https://github.com/getsops/sops" rel="noopener noreferrer"&gt;sops&lt;/a&gt; and &lt;a href="https://github.com/FiloSottile/age" rel="noopener noreferrer"&gt;age&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/aconnyghHBw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If you want to be notified about new videos, you can subscribe to my YouTube channel: &lt;a href="https://www.youtube.com/@akos.takacs" rel="noopener noreferrer"&gt;https://www.youtube.com/@akos.takacs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Before you begin

&lt;ul&gt;
&lt;li&gt;Requirements&lt;/li&gt;
&lt;li&gt;Download the already written code of the previous episode&lt;/li&gt;
&lt;li&gt;Have an inventory file&lt;/li&gt;
&lt;li&gt;Activate the Python virtual environment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Why Nix again?&lt;/li&gt;

&lt;li&gt;Create a new config file&lt;/li&gt;

&lt;li&gt;Generate the age key pair&lt;/li&gt;

&lt;li&gt;Create a wrapper script for sops and detect the public key automatically&lt;/li&gt;

&lt;li&gt;Encrypt a file&lt;/li&gt;

&lt;li&gt;Edit an already encrypted file&lt;/li&gt;

&lt;li&gt;Use the secret in the inventory file&lt;/li&gt;

&lt;li&gt;Use a dedicated user for Ansible&lt;/li&gt;

&lt;li&gt;Run Ansible in a Nix shell&lt;/li&gt;

&lt;li&gt;Test if Ansible could read the secret&lt;/li&gt;

&lt;li&gt;Don't commit secrets to a git repository&lt;/li&gt;

&lt;li&gt;What more you should know&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Before you begin
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The project requires Nix which we discussed in &lt;a href="https://dev.to/rimelek/install-ansible-8-on-ubuntu-2004-lts-using-nix-46hm"&gt;Install Ansible 8 on Ubuntu 20.04 LTS using Nix&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You will also need an Ubuntu remote server. I recommend an Ubuntu 22.04 virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Download the already written code of the previous episode
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;If you started the tutorial with this episode, clone the project from GitHub:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git clone https://github.com/rimelek/homelab.git
&lt;span class="nb"&gt;cd &lt;/span&gt;homelab


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If you cloned the project now, or you want to make sure you are using the exact same code I did, switch to the previous episode in a new branch:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; tutorial.episode.6b tutorial.episode.6


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Have an inventory file
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Copy the inventory template&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;cp &lt;/span&gt;inventory-example.yml inventory.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Change &lt;code&gt;ansible_host&lt;/code&gt; to the IP address of your Ubuntu server that you use for this tutorial,&lt;/li&gt;
&lt;li&gt;and change &lt;code&gt;ansible_user&lt;/code&gt; to the username on the remote server that Ansible can use to log in.&lt;/li&gt;
&lt;li&gt;If you still don't have an SSH private key, read the &lt;a href="https://dev.to/rimelek/ansible-playbook-and-ssh-keys-33bo#generate-an-ssh-key"&gt;Generate an SSH key part of Ansible playbook and SSH keys&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;If you want to run the playbook called &lt;code&gt;playbook-lxd-install.yml&lt;/code&gt;, you will need to configure a physical or virtual disk, which I wrote about in &lt;a href="https://dev.to/rimelek/the-simplest-way-to-install-lxd-using-ansible-h5o#install-zfs-utils-and-create-a-zfs-pool"&gt;The simplest way to install LXD using Ansible&lt;/a&gt;. If you don't have a usable physical disk, look for &lt;code&gt;truncate -s 50G &amp;lt;PATH&amp;gt;/lxd-default.img&lt;/code&gt; in that post to create a virtual disk.&lt;/li&gt;
&lt;/ul&gt;
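&lt;p&gt;If you are wondering what that truncate command does: it creates a sparse file, so the image gets its full apparent size immediately while using almost no real disk space. A quick sketch with a hypothetical path under /tmp instead of the real location:&lt;/p&gt;

```shell
# Create a sparse 50G image file at a hypothetical path: the file reports
# an apparent size of 50G but allocates almost no real disk space.
truncate -s 50G /tmp/lxd-default.img

ls -lh /tmp/lxd-default.img   # apparent size: 50G
du -h  /tmp/lxd-default.img   # actual usage: close to 0
```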
&lt;h3&gt;
  
  
  Activate the Python virtual environment
&lt;/h3&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;How you activate the virtual environment depends on how you created it. The episode &lt;a href="https://dev.to/rimelek/the-first-ansible-playbook-579h#install-ansible"&gt;The first Ansible playbook&lt;/a&gt; describes how to create and activate the virtual environment using the "venv" Python module, and in &lt;a href="https://dev.to/rimelek/the-first-ansible-role-paf"&gt;The first Ansible role&lt;/a&gt; we created helper scripts as well, so if you haven't created the environment yet, you can create it by running&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./create-nix-env.sh venv


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Optionally start an ssh agent:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ssh-agent &lt;span class="nv"&gt;$SHELL&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;and activate the environment with&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;homelab-env.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Why Nix again?
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;You can find multiple ways to install sops in its &lt;a href="https://github.com/getsops/sops" rel="noopener noreferrer"&gt;git repository&lt;/a&gt;, but I will show you one that isn't there. The reason is that I want to support this project on Linux and macOS, and I also want you to use the same version that I use. If I followed the guide on GitHub, I would need to either download a specific binary from GitHub or use different package managers on different platforms. Both would require automatically detecting the platform, and downloading binaries would also mean manually verifying checksums and installing dependencies. Since I like general solutions, I chose Nix instead. We already use Nix to create a Python virtual environment, so why not?&lt;/p&gt;
&lt;h2&gt;
  
  
  Create a new config file
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;SOPS supports multiple tools for encryption, like PGP or &lt;a href="https://github.com/FiloSottile/age" rel="noopener noreferrer"&gt;age&lt;/a&gt;, and we will use age. The first step is to open a Nix shell with &lt;code&gt;age&lt;/code&gt; pre-installed so we can use the &lt;code&gt;age-keygen&lt;/code&gt; command to generate a key pair. Eventually, we need to run a command like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

age-keygen &lt;span class="nt"&gt;-o&lt;/span&gt; age/private-key


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;SOPS will look for this file in a specific folder, which is different on macOS and on Linux, so we will override the location. This way you can run the same commands regardless of the OS, and I can use the same project on macOS and in a Linux virtual machine where I mount the project from the host. In order to use the same key in the terminal and in Ansible, we will need two environment variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;SOPS_AGE_KEY_FILE&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ANSIBLE_SOPS_AGE_KEYFILE&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since I will have multiple scripts using the same config, I will create a config file which will be a simple shell script. Let's save it in the project root and call it &lt;code&gt;config.sh&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# HOMELAB variables&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HOMELAB_VAR_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HOMELAB_PROJECT_ROOT&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/var"&lt;/span&gt;

&lt;span class="c"&gt;# SOPS variables&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SOPS_AGE_KEY_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOMELAB_VAR_DIR&lt;/span&gt;&lt;span class="s2"&gt;/age/key"&lt;/span&gt;

&lt;span class="c"&gt;# Ansible variables&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANSIBLE_SOPS_AGE_KEYFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SOPS_AGE_KEY_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We have a "HOMELAB variables" section for general variables, an "SOPS variables" section for variables that are used by the "sops" executable directly and an "Ansible variables" section for Ansible of course. We also use the &lt;code&gt;HOMELAB_PROJECT_ROOT&lt;/code&gt; variable, which is just a simple dot by default, but can be set in each script. We do this only because this way we don't have to support sourcing the config file in different shells and use different ways to determine the project root.&lt;/p&gt;
&lt;h2&gt;
  
  
  Generate the age key pair
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;Now we need a script that runs &lt;code&gt;age-keygen&lt;/code&gt; and saves the private key to the location defined in the config file. Let's call it &lt;code&gt;sops-keygen.sh&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;#!/usr/bin/env nix-shell&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -i bash&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -p age&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -I https://github.com/NixOS/nixpkgs/archive/refs/tags/23.05.tar.gz&lt;/span&gt;

&lt;span class="nv"&gt;current_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HOMELAB_PROJECT_ROOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$current_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOMELAB_PROJECT_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/config.sh"&lt;/span&gt;

&lt;span class="nv"&gt;parent_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SOPS_AGE_KEY_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$parent_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$parent_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi

if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SOPS_AGE_KEY_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;age-keygen &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SOPS_AGE_KEY_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"The private key file is at &lt;/span&gt;&lt;span class="nv"&gt;$SOPS_AGE_KEY_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
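&lt;p&gt;The &lt;code&gt;current_dir&lt;/code&gt; line at the top is a common bash idiom for finding the directory that contains the script itself, no matter where you run it from. A standalone sketch of the same expansion, using &lt;code&gt;bash -c&lt;/code&gt; to simulate a script at a hypothetical path (the real script chains the two inner commands with &amp;amp;&amp;amp; so a failed cd stops it, but the idea is the same):&lt;/p&gt;

```shell
# bash -c sets $0 to its second argument, just as executing the script
# file directly would, so the dirname/cd/pwd chain resolves the same way.
mkdir -p /tmp/homelab-demo
bash -c 'echo "$(cd "$(dirname "$0")" ; pwd)"' /tmp/homelab-demo/whereami.sh
# → /tmp/homelab-demo
```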
&lt;p&gt;It should be executable of course:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;chmod&lt;/span&gt; +x sops-keygen.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;And when you execute it,&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./sops-keygen.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;the script will generate the private key, save it and show the location. If you are wondering where the public key is, it is in the same file.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;config.sh
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="nv"&gt;$SOPS_AGE_KEY_FILE&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The output will be something like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# created: 2023-11-12T11:36:39+01:00
# public key: age1lh5qpf04dq0xcvgg63wf3qha32d8mxfslm97nh0utfl9rv784dts5zpl8e
AGE-SECRET-KEY-1YCKA5MJ44V2YDA8RZ9MKV0YWTDLG8MXFSDEFT9AQ76GQ7005JFQQN0WFN4


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Create a wrapper script for sops and detect the public key automatically
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;You can copy the public key manually when you need it, but sometimes you want to get it in a script, like now. We need a wrapper script for the &lt;code&gt;sops&lt;/code&gt; command in which we can read the public key and load it into a variable. We don't set this variable in the config file, because we need it only when we run sops, and even then only when we encrypt a file, not when we decrypt one. I won't overcomplicate the script, so I will let it read the public key every time. The filename will be &lt;code&gt;sops.sh&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;#!/usr/bin/env nix-shell&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -i bash&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -p sops&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -p age&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -I https://github.com/NixOS/nixpkgs/archive/refs/tags/23.05.tar.gz&lt;/span&gt;

&lt;span class="nv"&gt;current_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;dirname&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$0&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HOMELAB_PROJECT_ROOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$current_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOMELAB_PROJECT_ROOT&lt;/span&gt;&lt;span class="s2"&gt;/config.sh"&lt;/span&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SOPS_AGE_RECIPIENTS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;age-keygen &lt;span class="nt"&gt;-y&lt;/span&gt; &amp;lt; &lt;span class="nv"&gt;$SOPS_AGE_KEY_FILE&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

sops &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
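&lt;p&gt;The &lt;code&gt;age-keygen -y&lt;/code&gt; command derives the public key from the private key. Since age-keygen also stores the public key as a comment in the key file, you could extract it with sed as well when age is not available. A sketch using the sample key shown earlier in this post (not a real secret; the /tmp path is just for the demo):&lt;/p&gt;

```shell
# Recreate the sample key file from earlier in this post (throwaway path,
# not a real secret). tee writes the lines to the file.
mkdir -p /tmp/homelab-demo
printf '%s\n' \
  '# created: 2023-11-12T11:36:39+01:00' \
  '# public key: age1lh5qpf04dq0xcvgg63wf3qha32d8mxfslm97nh0utfl9rv784dts5zpl8e' \
  'AGE-SECRET-KEY-1YCKA5MJ44V2YDA8RZ9MKV0YWTDLG8MXFSDEFT9AQ76GQ7005JFQQN0WFN4' \
  | tee /tmp/homelab-demo/age-key

# Print only the public key by stripping the comment prefix.
sed -n 's/^# public key: //p' /tmp/homelab-demo/age-key
# → age1lh5qpf04dq0xcvgg63wf3qha32d8mxfslm97nh0utfl9rv784dts5zpl8e
```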
&lt;p&gt;And of course we make it executable:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;chmod&lt;/span&gt; +x ./sops.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You can test it with a simple command that shows its version number:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./sops.sh &lt;span class="nt"&gt;--version&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
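&lt;p&gt;One detail worth noting in the wrapper is the quoted &lt;code&gt;"$@"&lt;/code&gt; on the last line: it forwards every argument to sops exactly as received, even arguments that contain spaces. A small sketch, using &lt;code&gt;bash -c&lt;/code&gt; with a made-up script name in place of a real wrapper:&lt;/p&gt;

```shell
# "$@" expands to the original arguments, one word each, even with spaces;
# printf then prints each argument on its own line in brackets.
bash -c 'printf "[%s]\n" "$@"' forward.sh --encrypt "file with spaces.yml"
# → [--encrypt]
# → [file with spaces.yml]
```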
&lt;h2&gt;
  
  
  Encrypt a file
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;What we need now is a file to encrypt. Let's create &lt;code&gt;secrets.plain.yml&lt;/code&gt; in the project root.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;And encrypt it:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./sops.sh &lt;span class="nt"&gt;--encrypt&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; secrets.yml secrets.plain.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We will get a &lt;code&gt;secrets.yml&lt;/code&gt; file like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
&lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
    &lt;span class="na"&gt;gcp_kms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
    &lt;span class="na"&gt;azure_kv&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
    &lt;span class="na"&gt;hc_vault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
    &lt;span class="na"&gt;age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;recipient&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;age1lh5qpf04dq0xcvgg63wf3qha32d8mxfslm97nh0utfl9rv784dts5zpl8e&lt;/span&gt;
          &lt;span class="na"&gt;enc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;-----BEGIN AGE ENCRYPTED FILE-----&lt;/span&gt;
            &lt;span class="s"&gt;YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBNZjVEOTJJZWkwQytML1F6&lt;/span&gt;
            &lt;span class="s"&gt;QVFJNHNmcVhHbThMZWI2SGJWalArd2c0bkVvCng3QjgvQms4SE9LT1o3Z21DN1Yw&lt;/span&gt;
            &lt;span class="s"&gt;MVBWcFdnNVV3UGVKNFB4YS85UU9ScWsKLS0tIFNOTkdodkt6aEZMT1pvdGxxVjNm&lt;/span&gt;
            &lt;span class="s"&gt;M29mL2JmMDI4N1FmbUMxVmpsRW1vMWcKRIxPc0dZv9JcEn2NyZ9OJ6QDerh9VIcw&lt;/span&gt;
            &lt;span class="s"&gt;rvD0Tyvrbzoc32cxUZMEUGH+tCwFi5eQ212Fehw1jlLh/YYmYPYBEA==&lt;/span&gt;
            &lt;span class="s"&gt;-----END AGE ENCRYPTED FILE-----&lt;/span&gt;
    &lt;span class="na"&gt;lastmodified&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2023-11-12T20:21:57Z"&lt;/span&gt;
    &lt;span class="na"&gt;mac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENC[AES256_GCM,data:QsdpRH36FdY3Gq01U/4NvuiXt/OAkxQv+GURksOmlJCpEzyjKDXrqJOk6D/ZiuGrwPn48803MFSvojNtvdrLr0k8+sJ58/Kv9u1vf7/fxbZczvW5ohVhH1bfou6ZgqsJyMLu8yp3EbXukQ8hJe9N159HgqIdCkyycSFwOGko5O4=,iv:0Y4cP2mzq9wedRLxxYSBM4GAS/VvvbohKWGICO+IWw0=,tag:dfoEDyC0736bL+aXHEmAcA==,type:str]&lt;/span&gt;
    &lt;span class="na"&gt;pgp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
    &lt;span class="na"&gt;unencrypted_suffix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;_unencrypted&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3.8.1&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The important part for us now is the first line, where we can see &lt;code&gt;become_pass&lt;/code&gt; with a null value. I like to edit the values from a terminal, which takes a single command.&lt;/p&gt;
&lt;h2&gt;
  
  
  Edit an already encrypted file
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./sops.sh secrets.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;It will open the file in the default text editor, like &lt;code&gt;nano&lt;/code&gt; or &lt;code&gt;vim&lt;/code&gt;, and show you only the decrypted content without the rest of the YAML keys. My default editor is currently vim, but I'm not afraid to tell you that I usually prefer nano; I just don't care enough to change the default. If I want, I can change it just for this specific command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;EDITOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nano ./sops.sh secrets.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I will use the following new value in the encrypted file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HomeLab2023&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;After saving the file, the first line in the encrypted view will be like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENC[AES256_GCM,data:degiXqRIpLEVdTY=,iv:Loyf/XbEcTq6Z9enDq1ccCFzOYurL4kUa1Js9dpQzgo=,tag:xmhhExQREc6wkY373eW5+g==,type:str]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We can remove the plaintext file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;unlink &lt;/span&gt;secrets.plain.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Use the secret in the inventory file
&lt;/h2&gt;

&lt;p&gt;» Back to table of contents «&lt;/p&gt;

&lt;p&gt;We need to pass this file to Ansible somehow. We could do it in multiple ways, but what I prefer is loading the content in the inventory file. For that we need a lookup plugin called &lt;code&gt;community.sops.sops&lt;/code&gt;, which is part of the collection called &lt;code&gt;community.sops&lt;/code&gt;. We can set a variable called &lt;code&gt;sops&lt;/code&gt; and load all the secrets from the YAML file as an Ansible "dict".&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('community.sops.sops',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'secrets.yml')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.from_yaml&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We also need to use the values somehow like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('community.sops.sops',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'secrets.yml')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.from_yaml&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ta-lxlt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ansible_become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sops.become_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Now as a reminder, this is my current full &lt;code&gt;inventory.yml&lt;/code&gt; file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;vars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ansible_user&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ansible-homelab&lt;/span&gt;
    &lt;span class="na"&gt;config_lxd_zfs_pool_disks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6&lt;/span&gt;
    &lt;span class="na"&gt;sops&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;lookup('community.sops.sops',&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'secrets.yml')&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ansible.builtin.from_yaml&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ta-lxlt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ansible_host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.4.58&lt;/span&gt;
      &lt;span class="na"&gt;ansible_ssh_private_key_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/.ssh/ansible&lt;/span&gt;
      &lt;span class="na"&gt;ansible_become_pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sops.become_pass&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Use a dedicated user for Ansible
&lt;/h2&gt;


&lt;p&gt;You might have noticed that I changed the value of &lt;code&gt;ansible_user&lt;/code&gt; from &lt;code&gt;ta&lt;/code&gt; to &lt;code&gt;ansible-homelab&lt;/code&gt;. I did this so I don't have to change my own user's password and can still show you the secrets. A dedicated user for Ansible is also useful when you have a CI/CD pipeline and don't want to share your password with others who can read the secrets. I created the user with the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;pass&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"HomeLab2023"&lt;/span&gt;
&lt;span class="nv"&gt;salt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"sugar"&lt;/span&gt;
useradd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--shell&lt;/span&gt; /bin/bash &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--create-home&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--groups&lt;/span&gt; &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--password&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;openssl passwd &lt;span class="nt"&gt;-6&lt;/span&gt; &lt;span class="nt"&gt;-salt&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$salt&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$pass&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  ansible-homelab


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;If you don't have openssl, or you simply prefer to create the user manually, that's fine too; just make sure you add the user to the &lt;code&gt;sudo&lt;/code&gt; group on Ubuntu so it can become root. After all, this is why we store the become pass (the sudo password) in a secret.&lt;/p&gt;
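&lt;p&gt;If you want to verify the password later, note that &lt;code&gt;openssl passwd&lt;/code&gt; is deterministic once the salt is fixed, so regenerating the hash and comparing it with the second field of the user's line in &lt;code&gt;/etc/shadow&lt;/code&gt; is a quick sanity check (a sketch, using the throwaway password and salt from above):&lt;/p&gt;

```shell
# Regenerate the SHA-512 crypt hash. With the same salt, the output is
# identical on every run, so it can be compared against /etc/shadow.
pass="HomeLab2023"
salt="sugar"
hash="$(openssl passwd -6 -salt "$salt" "$pass")"
# The result always starts with $6$sugar$ ($6$ marks SHA-512 crypt)
echo "$hash"
```

&lt;p&gt;On the server, &lt;code&gt;sudo grep '^ansible-homelab:' /etc/shadow&lt;/code&gt; shows the stored hash to compare against.&lt;/p&gt;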
&lt;h2&gt;
  
  
  Run Ansible in a Nix shell
&lt;/h2&gt;


&lt;p&gt;Now there is one script that we already have but that needs to be changed. We used Nix to download &lt;code&gt;sops&lt;/code&gt;, which means Ansible needs to run in a Nix shell. We had this in our original &lt;code&gt;run.sh&lt;/code&gt; in the project root:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

ansible-playbook &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ask-become-pass&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;We don't want that script to force Ansible to ask for the become pass, so that line must be removed, and we need the usual shebang lines for Nix as well as our config parameters. The new &lt;code&gt;run.sh&lt;/code&gt; will be this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;#!/usr/bin/env nix-shell&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -i bash&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -p sops&lt;/span&gt;
&lt;span class="c"&gt;#! nix-shell -I https://github.com/NixOS/nixpkgs/archive/refs/tags/23.05.tar.gz&lt;/span&gt;

&lt;span class="nb"&gt;source &lt;/span&gt;config.sh

ansible-playbook &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-i&lt;/span&gt; inventory.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Test whether Ansible can read the secret
&lt;/h2&gt;


&lt;p&gt;Now let's run the hello playbook, which requires root privileges, but first we need to change the original destination path of the &lt;code&gt;hello-world.txt&lt;/code&gt;. Otherwise Ansible would see nothing to copy, and the playbook could succeed even with an incorrect sudo password.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

./run.sh playbook-hello.yml &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;hello_world_dest&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/opt/hello-world-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%s&lt;span class="si"&gt;)&lt;/span&gt;.txt


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;As you can see, we can override a role parameter from the terminal. If this command succeeded, our secret worked.&lt;/p&gt;
&lt;h2&gt;
  
  
  Don't commit secrets to a git repository
&lt;/h2&gt;


&lt;p&gt;We have one more job before we say goodbye. As I stated at the beginning of this post, I prefer not to commit even the encrypted secrets, so I add &lt;code&gt;secrets.yml&lt;/code&gt; to the gitignore. The other file I definitely don't want to commit is the private key, so let's add the following two lines to the gitignore:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

/var
/secrets.yml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;My full gitignore looks like this now:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

/venv
/venv-linux
/inventory.yml
/var
/secrets.yml

# for macOS
/.DS_Store


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
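&lt;p&gt;Before committing, it is worth checking that git really ignores these files. &lt;code&gt;git check-ignore&lt;/code&gt; prints the paths that match an ignore rule. The snippet below simulates the layout in a scratch directory so it is safe to try anywhere (the &lt;code&gt;var/age.txt&lt;/code&gt; file name is just a stand-in for the private key); in the real repository you would simply run &lt;code&gt;git check-ignore -v secrets.yml&lt;/code&gt;:&lt;/p&gt;

```shell
# Simulate the repository layout in a throwaway directory and ask git
# which of the files would be ignored.
tmp="$(mktemp -d)"
cd "$tmp"
git init -q .
printf '/var\n/secrets.yml\n' > .gitignore
mkdir var
touch secrets.yml var/age.txt
# Prints both paths, because both match an ignore rule
git check-ignore secrets.yml var/age.txt
```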
&lt;h2&gt;
  
  
  What else you should know
&lt;/h2&gt;


&lt;p&gt;In this tutorial I did things a bit differently from what you can find in the documentation. That made some commands simpler, but others more complicated. &lt;code&gt;sops&lt;/code&gt; supports passing the public key as an argument, like &lt;code&gt;sops --encrypt --age age1lh5qpf04dq0xcvgg63wf3qha32d8mxfslm97nh0utfl9rv784dts5zpl8e&lt;/code&gt;, but I used the environment variable instead. Why? Because that way you keep the option to override it from the terminal. This tutorial is not for production systems but for a private home lab, a playground, so you will probably not need to change the key or add multiple public keys. Multiple recipients would mean multiple private keys that can decrypt the secret file. You can find everything in the documentation, and I recommend reading it and playing with sops and age so you can discover more, for example how to use &lt;code&gt;age&lt;/code&gt; with a hardware key like a YubiKey to decrypt files. For YubiKey support you need a plugin called "&lt;a href="https://github.com/str4d/age-plugin-yubikey" rel="noopener noreferrer"&gt;age-plugin-yubikey&lt;/a&gt;".&lt;/p&gt;
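&lt;p&gt;If you ever do want multiple recipients without typing the keys every time, &lt;code&gt;sops&lt;/code&gt; can also read them from a &lt;code&gt;.sops.yaml&lt;/code&gt; file in the project root. A minimal sketch, using the example public key from this post as the only recipient:&lt;/p&gt;

```yaml
# .sops.yaml — sops picks this up automatically from the working directory
creation_rules:
  # Apply this rule to files whose path matches the regex
  - path_regex: secrets\.yml$
    # Comma-separated list of age public keys (recipients); every listed
    # recipient's private key can decrypt the file
    age: age1lh5qpf04dq0xcvgg63wf3qha32d8mxfslm97nh0utfl9rv784dts5zpl8e
```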
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;


&lt;p&gt;We can now use secrets, but doing so is completely optional. If you want to pass test passwords like "abc" or "password" as plain text on a machine where you have nothing to protect, like a virtual machine you just created to play with Ansible or to test some commands, that's fine. In that case you don't need the lookup plugin in your inventory file.&lt;/p&gt;

&lt;p&gt;I have to note again that this series is about creating a home lab, a local environment to develop and learn in, and not a production environment. In a production environment you should always encrypt secrets. Don't forget that SOPS is just one tool, so be prepared for other options too, for example when you have to use a key server in a team.&lt;/p&gt;

&lt;p&gt;The final source code of this episode can be found on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rimelek/homelab/tree/tutorial.episode.7" rel="noopener noreferrer"&gt;https://github.com/rimelek/homelab/tree/tutorial.episode.7&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/rimelek" rel="noopener noreferrer"&gt;
        rimelek
      &lt;/a&gt; / &lt;a href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;
        homelab
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Source code to create a home lab. Part of a video tutorial
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;README&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;This project was created to help you build your own home lab where you can test
your applications and configurations without breaking your workstation, so you can
learn on cheap devices without paying for more expensive cloud services.&lt;/p&gt;
&lt;p&gt;The project contains code written for the tutorial, but you can also use parts of it
if you refer to this repository.&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in English: &lt;a href="https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=K9grKS335Mo&amp;amp;list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Tutorial on YouTube in Hungarian: &lt;a href="https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421" rel="nofollow noopener noreferrer"&gt;https://www.youtube.com/watch?v=dmg7lYsj374&amp;amp;list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note: The inventory.yml file is not shared since that depends on the actual environment
so it will be different for everyone. If you want to learn more about the inventory file
watch the videos on YouTube or read the written version on &lt;a href="https://dev.to" rel="nofollow"&gt;https://dev.to&lt;/a&gt;. Links in
the video descriptions on YouTube.&lt;/p&gt;
&lt;p&gt;You can also find an example inventory file in the project root. You can copy that and change
the content, so you will use your IP…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/rimelek/homelab" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>security</category>
      <category>ansible</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
