<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Flavius Dinu</title>
    <description>The latest articles on DEV Community by Flavius Dinu (@flaviuscdinu).</description>
    <link>https://dev.to/flaviuscdinu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1088630%2F682fb45b-8aa1-490f-a8a7-7c431bd8636b.jpeg</url>
      <title>DEV Community: Flavius Dinu</title>
      <link>https://dev.to/flaviuscdinu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/flaviuscdinu"/>
    <language>en</language>
    <item>
      <title>Are you still searching for a Git and GitHub tutorial in 2025?</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Sun, 03 Aug 2025 07:43:45 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/are-you-still-searching-for-a-git-and-github-tutorial-in-2025-2ffh</link>
      <guid>https://dev.to/flaviuscdinu/are-you-still-searching-for-a-git-and-github-tutorial-in-2025-2ffh</guid>
      <description>&lt;p&gt;Search no more.&lt;/p&gt;

&lt;p&gt;TL;DR?&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/85Hhpn1-i0E"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Also, if you like the video, don’t forget to like, subscribe, and ring the bell for more videos like this. It helps more than you think!&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are Git and GitHub?
&lt;/h3&gt;

&lt;p&gt;Before touching a single command, clear up the most common beginner confusion: &lt;strong&gt;Git&lt;/strong&gt; and &lt;strong&gt;GitHub are not the same thing.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Git&lt;/strong&gt; is a version control system installed on your computer. It tracks every change you make to your project, maintains a full history, and lets you experiment with confidence. Accidentally deleted something? Git lets you revert. Introduced a bug? Git helps you identify and undo the exact change. It’s your local safety net and experiment playground.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt; is a cloud-based hosting service for Git repositories. If Git is the tool that manages history locally, GitHub is where you push that history to collaborate, back up, review code, manage issues, and coordinate work across a team.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Simple tip:&lt;/strong&gt; Git lives on your machine; GitHub lives on the web. Throughout this tutorial, you’ll see how they work together in the professional workflow developers use every day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup: Your First Steps
&lt;/h3&gt;

&lt;p&gt;Enough theory. Let’s get everything installed and ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a GitHub Account
&lt;/h3&gt;

&lt;p&gt;Your GitHub profile is your identity in the developer world. Go to GitHub’s signup page, enter your email, choose a username (pick something professional if you intend to show it to employers), set a password, verify your email, and skip optional “personalization” steps for now. You’ll land on your dashboard, your new home base.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Git
&lt;/h3&gt;

&lt;p&gt;Git isn’t usually preinstalled, so you need to add it to your system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windows:&lt;/strong&gt; Download the installer from the official Git site. Run it and accept the defaults. If prompted to choose a default editor, pick something like VS Code if you have it. Git Bash will be installed — it’s a great shell for running Git.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mac:&lt;/strong&gt; Open Terminal and type &lt;code&gt;git&lt;/code&gt;. If it’s not installed, macOS will prompt you to install the Xcode Command Line Tools. Say yes. Alternatively, if you use Homebrew: &lt;code&gt;brew install git&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux:&lt;/strong&gt; Use your distribution’s package manager, e.g., &lt;code&gt;sudo apt install git&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After installation, verify it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a version number. Congrats, Git is installed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Git
&lt;/h3&gt;

&lt;p&gt;Tell Git who you are so commits are properly attributed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--global&lt;/code&gt; flag makes this the default for all repositories on your machine.&lt;/p&gt;
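To double-check what Git recorded, you can read the values back. A minimal sketch in a throwaway repository (the name and email are placeholders; it uses per-repo config so your real `--global` settings stay untouched):

```shell
# Throwaway repo just to demonstrate reading configuration back.
rm -rf /tmp/git-config-demo && mkdir /tmp/git-config-demo && cd /tmp/git-config-demo
git init -q
# Local (per-repo) config, so your real --global settings are not modified.
git config user.name "Your Name"
git config user.email "your.email@example.com"
# Read the effective values back.
git config user.name
git config user.email
```

Running `git config --list` instead shows every setting Git will apply in the current repository.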

&lt;h3&gt;
  
  
  Your First Repository
&lt;/h3&gt;

&lt;p&gt;Now that tools are ready, let’s create a project space.&lt;/p&gt;

&lt;h3&gt;
  
  
  On GitHub
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log into GitHub.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;+&lt;/strong&gt; in the upper-right and choose &lt;strong&gt;New repository&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Fill in:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repository name:&lt;/strong&gt; e.g., hello-world&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Description:&lt;/strong&gt; A short sentence explaining the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visibility:&lt;/strong&gt; Choose &lt;strong&gt;Public&lt;/strong&gt; for now.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Initialize with README:&lt;/strong&gt; Check this; every project should have a README.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skip .gitignore and license for today; you can add them later. Click &lt;strong&gt;Create repository&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You now have your first repository, a hosted project with an initial README.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46212gtvxlxnjju4pfx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46212gtvxlxnjju4pfx3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Core Workflow: Clone → Add → Commit → Push
&lt;/h3&gt;

&lt;p&gt;This is the daily loop of Git.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clone the Repo Locally
&lt;/h3&gt;

&lt;p&gt;On your repo page, click the green &lt;strong&gt;Code&lt;/strong&gt; button, copy the HTTPS URL, then in terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/your-username/hello-world.git
cd hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have a full local copy, including history.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make a Change
&lt;/h3&gt;

&lt;p&gt;Edit README.md in your editor, and add something like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This is my first local change.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Save the file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stage the Change
&lt;/h3&gt;

&lt;p&gt;Staging tells Git what you intend to include in the next snapshot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or stage everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
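Between editing and committing, `git status` shows exactly what is staged. A small sketch in a throwaway repository (the file names are made up for illustration):

```shell
# Throwaway repo: one file staged, one not, to show how Git reports them.
rm -rf /tmp/git-status-demo && mkdir /tmp/git-status-demo && cd /tmp/git-status-demo
git init -q
echo "one" > staged.txt
echo "two" > unstaged.txt
git add staged.txt    # only staged.txt will go into the next snapshot
git status --short    # 'A  staged.txt' is staged; '?? unstaged.txt' is not
```

The short format marks staged additions with `A` and untracked files with `??`.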



&lt;h3&gt;
  
  
  Commit the Change
&lt;/h3&gt;

&lt;p&gt;A commit records the staged changes with a message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git commit -m "Update README file with a new line"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Push to GitHub
&lt;/h3&gt;

&lt;p&gt;Send your local commit to the remote repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;origin is the default remote.&lt;/li&gt;
&lt;li&gt;main is the primary branch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After pushing, refresh your GitHub repo page and the change should appear. You’ve completed the full cycle: clone → edit → stage → commit → push. This is the foundation.&lt;/p&gt;
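The whole loop can also be rehearsed offline by letting a local bare repository stand in for GitHub. A sketch (all paths are placeholders, and the bare repo plays the role of the remote):

```shell
# A local bare repository plays the role of GitHub so this runs offline.
rm -rf /tmp/remote.git /tmp/hello-world
git init -q --bare /tmp/remote.git
git clone -q /tmp/remote.git /tmp/hello-world         # clone (empty-repo warning is fine)
cd /tmp/hello-world
git config user.name "Your Name"
git config user.email "your.email@example.com"
echo "This is my first local change." >> README.md    # edit
git add README.md                                     # stage
git commit -q -m "Update README file with a new line" # commit
git branch -M main                                    # make sure the branch is named 'main'
git push -q origin main                               # push to the stand-in remote
```

After the push, the bare repository holds the same commit as your working clone, just like GitHub would.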

&lt;h3&gt;
  
  
  Understanding Branches: Parallel Universes
&lt;/h3&gt;

&lt;p&gt;Branching is what makes Git transformative.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;branch&lt;/strong&gt; is a separate line of development, a “parallel universe” of your project if you will. The main branch holds stable, production-ready code. To build a new feature safely, you branch off and experiment in isolation.&lt;/p&gt;

&lt;p&gt;Let’s take a look at how to create a new branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout -b new-feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ve created and switched to new-feature. Create a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;touch new-feature-file.txt
git add .
git commit -m "Add file for the new feature"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switch back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new file disappears; it lives only on new-feature. Switch back again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout new-feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file returns. That isolation is what makes branching safe and powerful.&lt;/p&gt;
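The back-and-forth above can be replayed end to end in a throwaway repository (all names are placeholders):

```shell
# Throwaway repo demonstrating that a branch's files exist only on that branch.
rm -rf /tmp/branch-demo && mkdir /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git config user.name "Your Name" && git config user.email "your.email@example.com"
git commit -q --allow-empty -m "Initial commit"
git branch -M main
git checkout -q -b new-feature                 # create and switch
touch new-feature-file.txt
git add . && git commit -q -m "Add file for the new feature"
git checkout -q main
ls                                             # new-feature-file.txt is gone
git checkout -q new-feature
ls                                             # ...and it's back
```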

&lt;h3&gt;
  
  
  Pull Requests
&lt;/h3&gt;

&lt;p&gt;Once your feature is ready, you want to merge it into main, but in team environments you do this through a &lt;strong&gt;Pull Request (PR)&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;p&gt;Push your feature branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push origin new-feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On GitHub, you’ll see a prompt to &lt;strong&gt;Compare &amp;amp; pull request&lt;/strong&gt;. Click it.&lt;/p&gt;

&lt;p&gt;On the PR page:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm you’re merging new-feature into main.&lt;/li&gt;
&lt;li&gt;Give it a clear title (e.g., “Add New Feature”).&lt;/li&gt;
&lt;li&gt;Write a descriptive body: what the change does, why it was made, and how to test it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, create the PR and invite teammates to review. They can comment on specific lines, suggest changes, and approve.&lt;/p&gt;

&lt;p&gt;Once approved, click &lt;strong&gt;Merge pull request&lt;/strong&gt;. Optionally delete the old feature branch to keep things clean.&lt;/p&gt;

&lt;p&gt;You’ve just followed a professional collaboration workflow: develop in isolation, request review, then merge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team Collaboration in Action
&lt;/h3&gt;

&lt;p&gt;On a team, the repository is the single source of truth. Everyone clones it, works on their own branches, and integrates via PRs. Key practices:&lt;/p&gt;

&lt;h4&gt;
  
  
  Collaborators
&lt;/h4&gt;

&lt;p&gt;Repository owners invite others under &lt;strong&gt;Settings → Collaborators&lt;/strong&gt;, granting push access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Branching Strategy (Feature Branch Workflow)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Never commit directly to main.&lt;/li&gt;
&lt;li&gt;Create a new branch per task, e.g., feature/user-login or bugfix/fix-typo.&lt;/li&gt;
&lt;li&gt;Work on that branch; commit as needed.&lt;/li&gt;
&lt;li&gt;Open a PR to bring changes into main.&lt;/li&gt;
&lt;li&gt;Require at least one reviewer’s approval.&lt;/li&gt;
&lt;li&gt;Merge and delete the feature branch.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Stay Up to Date
&lt;/h3&gt;

&lt;p&gt;Before starting new work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git checkout main
git pull origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;git pull&lt;/code&gt; fetches and integrates others’ changes so your base is current.&lt;/p&gt;
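This can be rehearsed offline with a local bare repository standing in for the shared GitHub repo; two clones play the roles of two teammates (all paths and names here are made up):

```shell
# A bare repo acts as the shared remote; 'alice' and 'bob' are two clones.
rm -rf /tmp/team.git /tmp/alice /tmp/bob
git init -q --bare /tmp/team.git
git -C /tmp/team.git symbolic-ref HEAD refs/heads/main
git clone -q /tmp/team.git /tmp/alice
git -C /tmp/alice config user.name "Alice"
git -C /tmp/alice config user.email "alice@example.com"
git -C /tmp/alice commit -q --allow-empty -m "Initial commit"
git -C /tmp/alice branch -M main
git -C /tmp/alice push -q origin main
git clone -q /tmp/team.git /tmp/bob       # Bob starts from the shared state
git -C /tmp/alice commit -q --allow-empty -m "Teammate change"
git -C /tmp/alice push -q origin main     # Alice pushes new work
git -C /tmp/bob checkout -q main
git -C /tmp/bob pull -q origin main       # Bob's base is now current
```

After the pull, Bob's main contains Alice's latest commit, so any branch he creates next starts from up-to-date code.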

&lt;h3&gt;
  
  
  Going Deeper: Advanced Concepts
&lt;/h3&gt;

&lt;p&gt;Now that you’ve mastered the essentials, here are a few things that separate beginners from effective collaborators.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structured Branching (e.g., Git Flow)
&lt;/h3&gt;

&lt;p&gt;Large teams sometimes use models like &lt;strong&gt;Git Flow&lt;/strong&gt;, which introduces branches like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;develop for integration of completed features,&lt;/li&gt;
&lt;li&gt;release branches for stabilization,&lt;/li&gt;
&lt;li&gt;main for production releases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don’t need this immediately, but it’s useful to know such conventions exist.&lt;/p&gt;

&lt;h3&gt;
  
  
  Merge Conflicts
&lt;/h3&gt;

&lt;p&gt;Conflicts arise when two branches edit the same lines of the same file. Git will pause the merge and mark the conflict in the file with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt; HEAD
Your changes
=======
Their changes
&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt; branch-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Resolve by editing the file to the desired state, then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add &amp;lt;file&amp;gt;
git commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Modern editors often have visual conflict resolution tools built-in to simplify this.&lt;/p&gt;
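You can manufacture and resolve a conflict safely in a throwaway repository; a sketch (file names and commit messages are placeholders):

```shell
# Create a conflict on purpose: both branches rewrite the same line of the same file.
rm -rf /tmp/conflict-demo && mkdir /tmp/conflict-demo && cd /tmp/conflict-demo
git init -q
git config user.name "Your Name" && git config user.email "your.email@example.com"
echo "original line" > notes.txt
git add . && git commit -q -m "Base"
git branch -M main
git checkout -q -b their-branch
echo "Their changes" > notes.txt
git commit -q -am "Their edit"
git checkout -q main
echo "Your changes" > notes.txt
git commit -q -am "Your edit"
git merge their-branch || true        # merge stops; conflict markers land in notes.txt
grep -c "<<<<<<<" notes.txt           # confirm the markers are there
echo "Merged changes" > notes.txt     # edit the file to the desired state
git add notes.txt
git commit -q -m "Resolve merge conflict"
```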

&lt;h3&gt;
  
  
  GitHub for Project Management
&lt;/h3&gt;

&lt;p&gt;GitHub isn’t just for code; it’s a project hub.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Issues:&lt;/strong&gt; Track tasks, bugs, ideas. Assign them, label them, and discuss context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project Boards:&lt;/strong&gt; Visualize progress (like a Kanban). Drag PRs and Issues across columns like “To Do,” “In Progress,” and “Done.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mastering these lets you manage the code &lt;em&gt;and&lt;/em&gt; the project around it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;You started by asking: &lt;em&gt;What even is Git?&lt;/em&gt;&lt;br&gt;&lt;br&gt;
Now you’ve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed and configured Git.&lt;/li&gt;
&lt;li&gt;Created a GitHub repo.&lt;/li&gt;
&lt;li&gt;Done the core workflow: clone, add, commit, push.&lt;/li&gt;
&lt;li&gt;Used branching to isolate work safely.&lt;/li&gt;
&lt;li&gt;Opened and merged a pull request.&lt;/li&gt;
&lt;li&gt;Seen how teams coordinate through collaborators, reviews, and synchronization.&lt;/li&gt;
&lt;li&gt;Touched on advanced ideas like structured branching, merge conflicts, and using GitHub as a project platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ve gone from confused beginner to someone with a real, professional workflow. The next step is practice: build your own projects, contribute to open-source, and keep using this loop until it becomes second nature.&lt;/p&gt;

&lt;p&gt;If this guide helped, save it, share it, or subscribe to updates for more practical deep dives. You’re no longer just learning — you’re building. Now go make something awesome.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>versioncontrol</category>
      <category>gittutorial</category>
      <category>git</category>
    </item>
    <item>
      <title>Self-Host n8n with Docker: Should You Do It?</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Tue, 08 Jul 2025 06:52:08 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/self-host-n8n-with-docker-should-you-do-it-o4a</link>
      <guid>https://dev.to/flaviuscdinu/self-host-n8n-with-docker-should-you-do-it-o4a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nzk7gvg3rvitgkwc0ye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8nzk7gvg3rvitgkwc0ye.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s be honest: everyone is using &lt;a href="https://n8n.io/" rel="noopener noreferrer"&gt;n8n&lt;/a&gt; for automating their workflows. And I’m not going to tease you: absolutely YES, you should self-host n8n with Docker.&lt;/p&gt;

&lt;p&gt;However, if you’re unfamiliar with n8n, I've got you covered.&lt;/p&gt;

&lt;p&gt;Think of n8n as the Swiss Army knife of workflow automation. Instead of wrestling with complex CI/CD tools for every little automation task, you get a visual, node-based interface that actually makes sense. Your team can build workflows without needing a PhD in DevOps. No more fighting with overcomplicated tools or reinventing the wheel every time you need to automate something.&lt;/p&gt;

&lt;p&gt;In this guide, I’ll walk you through everything you need to know about getting n8n up and running with Docker. We’ll start simple with a local Docker installation and work our way up to a setup that deploys n8n on an EC2 instance in AWS using Terraform (you can use OpenTofu as well).&lt;/p&gt;

&lt;p&gt;TL;DR? Check the video instead:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/y3_zwh-gB-w"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;For this tutorial you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker &amp;amp; Docker Compose&lt;/li&gt;
&lt;li&gt;Basic knowledge of the CLI and Git&lt;/li&gt;
&lt;li&gt;An AWS account with permissions to create EC2 instances, security groups, and Route53 records&lt;/li&gt;
&lt;li&gt;Terraform or OpenTofu&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Local n8n Installation with Docker Compose
&lt;/h3&gt;

&lt;p&gt;Let’s start simple with a local installation using Docker Compose. For that, you can use the &lt;a href="https://github.com/n8n-io/self-hosted-ai-starter-kit/tree/main" rel="noopener noreferrer"&gt;Self-Hosted AI Starter Kit&lt;/a&gt;, built by the n8n team.&lt;/p&gt;

&lt;p&gt;First, clone the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone git@github.com:n8n-io/self-hosted-ai-starter-kit.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, go to the directory containing the &lt;em&gt;self-hosted-ai-starter-kit&lt;/em&gt; repository and copy .env.example to a new file called .env. You will then need to modify this .env file with values that make sense for your environment.&lt;/p&gt;

&lt;p&gt;Depending on your operating system and your GPU, you can deploy n8n in multiple ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nvidia GPU&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose --profile gpu-nvidia up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AMD GPU&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose --profile gpu-amd up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Everyone else:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose --profile cpu up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since I’m on a Mac with Apple Silicon, I’ll use the last command. In the end, this is what the output looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Attaching to n8n, n8n-import, ollama, ollama-pull-llama, qdrant, postgres-1
postgres-1 | 
postgres-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres-1 | 
qdrant | _ _    
qdrant | _____ | |_ ____ _ _ __ | |_  
qdrant | / _` |/ _` | ' __/ _` | '_ \|__ | 
qdrant | | (_| | (_| | | | (_| | | | | |_  
qdrant | \ __, |\__ ,_|_| \ __,_|_| |_|\__ | 
qdrant | |_|                           
qdrant | 
qdrant | Version: 1.14.1, build: 530430fa
qdrant | Access web UI at http://localhost:6333/dashboard
qdrant | 
qdrant | 2025-07-07T18:56:16.373057Z INFO storage::content_manager::consensus::persistent: Loading raft state from ./storage/raft_state.json    
ollama | time=2025-07-07T18:56:16.383Z level=INFO source=routes.go:1235 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
ollama | time=2025-07-07T18:56:16.386Z level=INFO source=images.go:476 msg="total blobs: 6"
ollama | time=2025-07-07T18:56:16.386Z level=INFO source=images.go:483 msg="total unused blobs removed: 0"
ollama | time=2025-07-07T18:56:16.388Z level=INFO source=routes.go:1288 msg="Listening on [::]:11434 (version 0.9.5)"
ollama | time=2025-07-07T18:56:16.388Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
ollama | time=2025-07-07T18:56:16.394Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
ollama | time=2025-07-07T18:56:16.395Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="7.7 GiB" available="4.5 GiB"
postgres-1 | 2025-07-07 18:56:16.415 UTC [1] LOG: starting PostgreSQL 16.9 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 14.2.0) 14.2.0, 64-bit
postgres-1 | 2025-07-07 18:56:16.415 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres-1 | 2025-07-07 18:56:16.415 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres-1 | 2025-07-07 18:56:16.416 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres-1 | 2025-07-07 18:56:16.421 UTC [29] LOG: database system was shut down at 2025-07-06 11:59:00 UTC
postgres-1 | 2025-07-07 18:56:16.425 UTC [1] LOG: database system is ready to accept connections
qdrant | 2025-07-07T18:56:16.431182Z INFO qdrant: Distributed mode disabled    
qdrant | 2025-07-07T18:56:16.431604Z INFO qdrant: Telemetry reporting enabled, id: d33bc0ef-1661-4d58-8b03-f1a751fc0c2f    
qdrant | 2025-07-07T18:56:16.432002Z INFO qdrant: Inference service is not configured.    
qdrant | 2025-07-07T18:56:16.577110Z INFO qdrant::actix: TLS disabled for REST API    
qdrant | 2025-07-07T18:56:16.577167Z INFO qdrant::actix: Qdrant HTTP listening on 6333    
qdrant | 2025-07-07T18:56:16.577376Z INFO actix_server::builder: starting 5 workers
qdrant | 2025-07-07T18:56:16.577395Z INFO actix_server::server: Actix runtime found; starting in Actix runtime
qdrant | 2025-07-07T18:56:16.577407Z INFO actix_server::server: starting service: "actix-web-service-0.0.0.0:6333", workers: 5, listening on: 0.0.0.0:6333
qdrant | 2025-07-07T18:56:16.607417Z INFO qdrant::tonic: Qdrant gRPC listening on 6334    
qdrant | 2025-07-07T18:56:16.607430Z INFO qdrant::tonic: TLS disabled for gRPC API    
ollama | [GIN] 2025/07/07 - 18:56:19 | 200 | 802.291µs | 172.24.0.5 | HEAD "/"
ollama | [GIN] 2025/07/07 - 18:56:20 | 200 | 690.502584ms | 172.24.0.5 | POST "/api/pull"
pulling manifest 
ollama-pull-llama | pulling dde5aa3fc5ff: 100% ▕██████████████████▏ 2.0 GB                         
ollama-pull-llama | pulling 966de95ca8a6: 100% ▕██████████████████▏ 1.4 KB                         
ollama-pull-llama | pulling fcc5a6bec9da: 100% ▕██████████████████▏ 7.7 KB                         
ollama-pull-llama | pulling a70ff7e570d9: 100% ▕██████████████████▏ 6.0 KB                         
ollama-pull-llama | pulling 56bb8bd477a5: 100% ▕██████████████████▏ 96 B                         
ollama-pull-llama | pulling 34bb5ab01051: 100% ▕██████████████████▏ 561 B                         
ollama-pull-llama | verifying sha256 digest 
ollama-pull-llama | writing manifest 
ollama-pull-llama | success 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can access n8n directly at &lt;a href="http://localhost:5678" rel="noopener noreferrer"&gt;http://localhost:5678&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lph7jhwzr07ras0oij8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lph7jhwzr07ras0oij8.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you can run the demo workflow that just chats with Ollama:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p1rp0b1gplpapw8x1d2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p1rp0b1gplpapw8x1d2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;n8n makes it easy to import workflows directly from JSON files, so head over to &lt;a href="https://n8n.io/workflows/" rel="noopener noreferrer"&gt;this link&lt;/a&gt; and choose one of the workflows.&lt;/p&gt;

&lt;p&gt;I’ve chosen the first one in the list and then clicked on &lt;strong&gt;Use for Free&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsa3hbqsnx43tkliysemb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsa3hbqsnx43tkliysemb.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you get a couple of different options:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3yuyg3jdrr4oflt437b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3yuyg3jdrr4oflt437b.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;em&gt;Copy template to clipboard [JSON]&lt;/em&gt; and save it to a file on your computer.&lt;/p&gt;

&lt;p&gt;Next, go to your n8n instance and click on create workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6gwowdbfkvwpsqoni2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6gwowdbfkvwpsqoni2j.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can click the three dots in the top-right corner and select import from file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswjku0lfhes7kgyfki8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswjku0lfhes7kgyfki8p.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the file you’ve created before and the workflow gets imported automatically:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99z84n2eempjdjxl5lc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99z84n2eempjdjxl5lc5.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Terraform to deploy n8n on AWS EC2 with Docker
&lt;/h3&gt;

&lt;p&gt;I’ve put together a simple Terraform configuration that lets you deploy n8n on AWS without much hassle, making it easy to keep the server running nonstop for all the workflows you need.&lt;/p&gt;

&lt;p&gt;You can find the code &lt;a href="https://github.com/flavius-dinu/n8n-terraform" rel="noopener noreferrer"&gt;here&lt;/a&gt;; make sure you clone or fork it.&lt;/p&gt;

&lt;p&gt;The configuration is pretty straightforward to use. You will just need to provide a couple of variables via terraform.tfvars, environment variables, or a secrets management solution to ensure everything goes smoothly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public_key_path                
create_dns_record              
domain_name                    
key_name                       
basic_auth_password            
n8n_encryption_key             
n8n_user_management_jwt_secret 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By setting &lt;em&gt;create_dns_record&lt;/em&gt; to &lt;em&gt;true&lt;/em&gt;, you will be able to access n8n directly from your domain (right now only through HTTP).&lt;/p&gt;

&lt;p&gt;Also, the &lt;em&gt;public_key_path&lt;/em&gt; should be the path to your SSH public key, so make sure you also have access to the matching private key in order to connect to the instance via SSH (you don’t really need to connect via SSH, but it helps if you want to track what’s happening on the instance).&lt;/p&gt;

&lt;p&gt;By default, I’m using a t2.micro instance, so the AI capabilities won’t work due to RAM limitations, but you can still import other workflows into n8n as I’ve shown above.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;NOTE:&lt;/em&gt;&lt;/strong&gt; The solution can be extended on demand to get closer to production readiness. You can use an infrastructure orchestration platform or a generic CI/CD pipeline to help with the deployment. You should also configure remote state if you want to use it in production.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As soon as you populate the variables, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v6.2.0...
- Installed hashicorp/aws v6.2.0 (signed by HashiCorp)

...

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you initialize the configuration you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + n8n_dns_record = (known after apply)
  + n8n_instance_public_ip = (known after apply)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four resources will be created: an EC2 instance, a security group, a key pair, and, optionally, a DNS record. The n8n installation is handled in the cloud-init script that can be found &lt;a href="https://github.com/flavius-dinu/n8n-terraform/blob/main/templates/user_data.sh.tmpl" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As soon as you’re happy with the plan you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a couple of minutes, your resources will be up and running, but you will still have to wait a couple more minutes for the cloud-init script to finish deploying.&lt;/p&gt;

&lt;p&gt;If you are impatient, as I certainly am, you can watch the cloud-init script in real time by SSH-ing into the instance and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tail -f /var/log/cloud-init-output.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As soon as the script finishes you will be able to access n8n via:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ec2_public_ip:5678&lt;/li&gt;
&lt;li&gt;your_record_name:5678 (if you set up a DNS record)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can run the terraform output command to see the defined outputs.&lt;/p&gt;
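&lt;p&gt;For example (the values below are illustrative; yours will differ):&lt;/p&gt;

```shell
terraform output

n8n_dns_record = "n8n.example.com"
n8n_instance_public_ip = "203.0.113.10"
```

&lt;p&gt;A quick curl against port 5678 of that IP is an easy smoke test once cloud-init has finished.&lt;/p&gt;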

&lt;h3&gt;
  
  
  Key points
&lt;/h3&gt;

&lt;p&gt;If you’re serious about automating your workflows without any limitations, self-hosting n8n with Docker is the way to go.&lt;/p&gt;

&lt;p&gt;The process is smooth, scalable, and entirely under your control.&lt;/p&gt;

&lt;p&gt;With this setup, you’re not only increasing flexibility, but you’re also building automation on your terms, which, let’s be honest, is fun.&lt;/p&gt;

&lt;p&gt;So yes, you &lt;em&gt;should&lt;/em&gt; self-host n8n on Docker, and now, you know exactly how.&lt;/p&gt;

</description>
      <category>n8n</category>
      <category>docker</category>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Lens Kubernetes IDE: How to Simplify K8s Management in 2025</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Mon, 23 Jun 2025 09:36:12 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/lens-kubernetes-ide-how-to-simplify-k8s-management-in-2025-3cgl</link>
      <guid>https://dev.to/flaviuscdinu/lens-kubernetes-ide-how-to-simplify-k8s-management-in-2025-3cgl</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mbypdke1fwpuhsnpk8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mbypdke1fwpuhsnpk8w.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes (K8s) has improved how we deploy and manage containers, but with this power comes a lot of complexity that can easily overwhelm even experienced engineers. If you are managing five production clusters across different environments, juggling kubectl commands across multiple terminal windows, how fast could you actually debug an issue?&lt;/p&gt;

&lt;p&gt;Visual cluster management has become essential for DevOps workflows, even though kubectl remains the backbone of Kubernetes management.&lt;/p&gt;

&lt;p&gt;You need to remove the friction of memorizing debugging commands across multiple clusters, switching between them, and keeping track of resource relationships. This is where Lens comes in: a K8s IDE that bridges the gap between the CLI and visual clarity.&lt;/p&gt;

&lt;p&gt;In this post, we will walk through what Lens is, how to install it, how to connect it to a cluster (it’s way easier than you would expect), and we will also deploy a bunch of K8s resources inside it to view and troubleshoot them with Lens.&lt;/p&gt;

&lt;p&gt;Check out my YouTube channel:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://youtube.com/@devopswithflavius/" rel="noopener noreferrer"&gt;DevOps with Flavius&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Lens?
&lt;/h3&gt;

&lt;p&gt;Lens is your unified Kubernetes IDE that offers visual cluster management capabilities that complement kubectl. You should think of it as your mission control center for Kubernetes that gives you a single interface where you can monitor, debug, and manage multiple clusters without drowning in terminal commands.&lt;/p&gt;

&lt;p&gt;One of the core values that Lens offers is multi-cluster visibility. You can stop switching between terminal contexts and memorizing cluster configurations. With Lens, you get real-time dashboards, integrated log streaming, and visual resource management across all your environments.&lt;/p&gt;

&lt;p&gt;On top of that, you can view logs, connect easily to different pods, view configmaps and secrets information easily, port-forward to your services to see if they are working properly, real-time resource monitoring, and much more.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Install Lens
&lt;/h3&gt;

&lt;p&gt;Installing Lens is an easy process. Just head out to &lt;a href="https://k8slens.dev/download" rel="noopener noreferrer"&gt;https://k8slens.dev/download&lt;/a&gt;, choose your operating system, and then follow the process described for each OS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk88oe4yc5w8ltqovxf49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk88oe4yc5w8ltqovxf49.png" width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installing it, launch Lens, complete the initial setup, and create your own LensID. Lens will scan for existing kubeconfig files in your ~/.kube directory and give you the option to import discovered clusters. This automatic discovery streamlines onboarding if you already have kubectl configured.&lt;/p&gt;
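&lt;p&gt;If you want to see up front what Lens will discover, you can list the contexts in your kubeconfig yourself:&lt;/p&gt;

```shell
# Lists the contexts Lens will find in ~/.kube/config
kubectl config get-contexts
```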

&lt;h3&gt;
  
  
  K0s clusters
&lt;/h3&gt;

&lt;p&gt;I will assume that you don’t have any K8s clusters created, so let’s jump into creating one. If you already have a K8s cluster up and running, you don’t need to follow this, but at the same time, I will give you another option of creating lightweight clusters apart from minikube and kind by leveraging &lt;a href="https://k0sproject.io/" rel="noopener noreferrer"&gt;K0s&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;K0s is a lightweight Kubernetes distribution that delivers a production-like cluster from a single binary, making it ideal for learning, testing, and development workflows. Right now, it runs on Linux, with experimental Windows support.&lt;/p&gt;

&lt;p&gt;Here are the steps you have to follow if you want to install it on your Linux machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -sSf https://get.k0s.sh | sudo sh
$ sudo k0s install controller --single
$ sudo k0s start # wait a minute
$ sudo k0s kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connect Lens to your cluster
&lt;/h3&gt;

&lt;p&gt;As I’m on macOS, I can’t use K0s right now, so I will stick to kind.&lt;/p&gt;

&lt;p&gt;To create a kind cluster, you can simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kind create cluster --name lens-cluster

Creating cluster "lens-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.30.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-lens-cluster"

You can now use your cluster with:

kubectl cluster-info --context kind-lens-cluster

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/

$ kubectl cluster-info --context kind-lens-cluster
Kubernetes control plane is running at https://127.0.0.1:63490
CoreDNS is running at https://127.0.0.1:63490/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As soon as I created my cluster, opening Lens shows that it has already connected to it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrnp5w5qdy0lo631iqpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrnp5w5qdy0lo631iqpw.png" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Populate your cluster with different K8s Resources
&lt;/h3&gt;

&lt;p&gt;To really explore Lens, you need to have K8s resources inside your cluster. So I’ve put together a couple of resources that you can deploy to observe what is happening:&lt;/p&gt;

&lt;p&gt;First, I will start with a basic nginx deployment that shows pod lifecycle management:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this deployment using kubectl. I will use Lens’ built-in terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx_deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lxzjxgogcck2cxebi7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lxzjxgogcck2cxebi7l.png" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let’s create a service to expose the nginx deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx_service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let’s create a config map that will change the nginx.conf file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
        listen 80;
        location / {
            return 200 'Hello from Lens!\n';
            add_header Content-Type text/plain;
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now apply it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx_conf.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we’ve created a couple of resources, we are ready to explore Lens.&lt;/p&gt;

&lt;h3&gt;
  
  
  First Glance at Lens
&lt;/h3&gt;

&lt;p&gt;Open your Lens application, and select the K8s cluster in which you’ve deployed the above resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zywpy18m7hbc5wbdhcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6zywpy18m7hbc5wbdhcn.png" width="590" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see many options from which you can choose, and all of them are really important to understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Overview&lt;/strong&gt; shows your cluster health assessment. This is where you get visibility into node status, resource utilization, and workload distribution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9118zeri1xf1qm46jjfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9118zeri1xf1qm46jjfg.png" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nodes&lt;/strong&gt; shows you data about your cluster nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhk6y7lpfunw9qzuherj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhk6y7lpfunw9qzuherj.png" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workloads&lt;/strong&gt; will let you explore your deployed resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpznz0ua8jl4qk4cd60dl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpznz0ua8jl4qk4cd60dl.png" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Config&lt;/strong&gt; will show you data about your configmaps, secrets, resource quotas, limit ranges and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0o6tl6nzuew45nzyhp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0o6tl6nzuew45nzyhp2.png" width="562" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;Network&lt;/strong&gt; section, you will see information about your services, ingresses, and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2h4ko8h0rhie8g84rwh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2h4ko8h0rhie8g84rwh1.png" width="538" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And as you can see, there are other options present, so this would be a great time to spend a few minutes in the app and explore everything you can do.&lt;/p&gt;

&lt;p&gt;Whenever changes happen in your cluster, Lens picks them up and reflects them immediately in the interface. Pod restarts, scaling operations, and configuration changes appear without a manual refresh, providing live insight into cluster operations that static kubectl output simply cannot match.&lt;/p&gt;

&lt;p&gt;Let’s now explore the resources we have deployed. Here are all the pods running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rl07hxm63s33po749rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rl07hxm63s33po749rq.png" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By clicking the three dots on the right side, you get a couple of options:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu4e1i65r795e02elcd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu4e1i65r795e02elcd6.png" width="328" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can easily attach to a pod, open a shell, evict it, view the logs, edit it, and even delete it.&lt;/p&gt;

&lt;p&gt;I will now click Shell to connect to one of my nginx pods:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8m7hmqxa3opjdct67bv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8m7hmqxa3opjdct67bv.png" width="800" height="1035"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the ConfigMap:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flilssuruqtq8j5mddugi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flilssuruqtq8j5mddugi.png" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this is the service:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojkknr20x7n3px8i12cb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojkknr20x7n3px8i12cb.png" width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Check Out Logs with Lens
&lt;/h3&gt;

&lt;p&gt;Log management through Lens makes K8s log management a breeze. Instead of juggling multiple terminal windows with kubectl logs commands, you get unified log streaming with advanced filtering and search capabilities.&lt;/p&gt;

&lt;p&gt;As you saw, when we connected to the pod’s shell, we had an option to access logs. The interface automatically handles multi-container pods by providing separate log streams with clear container identification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgdte977kvrk1yvz2fan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgdte977kvrk1yvz2fan.png" width="800" height="780"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The search functionality helps a lot with debugging specific issues. You can use the search box to filter logs for error keywords, specific HTTP status codes, and others. Lens highlights matching terms and provides context around each match, making it easier to understand the sequence of events leading to problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnesd1fylkj5s5wkz5hf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnesd1fylkj5s5wkz5hf1.png" width="800" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You also have the ability to export logs. Select a time range or search filter, then export the results to a file for offline analysis or integration with external logging systems. This feature bridges the gap between interactive debugging and formal incident documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Port-Forward to Nginx
&lt;/h3&gt;

&lt;p&gt;Apart from everything that I’ve shown you until now, you also get an easy way to enable port forwarding through Lens.&lt;/p&gt;

&lt;p&gt;Just go to your Network tab, select Services, and then choose your service:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht5ctginaz7lmllbekqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fht5ctginaz7lmllbekqo.png" width="800" height="974"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see an option to Forward it, so let’s click on it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0lyq27hm8hp0suv15ko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0lyq27hm8hp0suv15ko.png" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can choose a local port to forward to or leave it as Random, optionally open it directly in your browser, and even choose HTTPS. I will leave everything at the defaults and click Start.&lt;/p&gt;

&lt;p&gt;As soon as I click Start, I get redirected to my browser, where I can see my web application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwyomxql7c8t0s2rvnk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwyomxql7c8t0s2rvnk0.png" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, what I added in my ConfigMap is not rendering in my web page yet, and that’s to be expected: I didn’t specify anywhere in my deployment that I want to use the ConfigMap.&lt;/p&gt;

&lt;p&gt;So let’s update the deployment to use the configmap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: nginx-config-volume
        configMap:
          name: nginx-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
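&lt;p&gt;Reapplying and watching the rollout can be done from Lens’ built-in terminal as well:&lt;/p&gt;

```shell
kubectl apply -f nginx_deployment.yaml
# Blocks until the new pods are rolled out (or reports a failed rollout)
kubectl rollout status deployment/nginx-deployment
```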



&lt;p&gt;As soon as I reapplied the configuration, Lens showed a yellow warning triangle with an exclamation point for my pod, which means something is wrong there. Let’s check the logs and understand what is happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wjf7woh2jidy698bozy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wjf7woh2jidy698bozy.png" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems the “server” directive is not allowed at the top level of /etc/nginx/nginx.conf (it must live inside an http block), so there is an issue with my ConfigMap. Let’s fix it and come back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {}
    http {
        server {
            listen 80;
            location / {
                return 200 'Hello from Lens!\n';
                add_header Content-Type text/plain;
            }
        }
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After fixing it, we now correctly see the intended message from the configmap:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh8b0a3n5u97ugqq8d40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxh8b0a3n5u97ugqq8d40.png" width="790" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security considerations remain important even with port forwarding. The tunnel only provides local access and doesn’t expose services to other machines on your network. Still, be mindful of sensitive services, such as databases containing production data, and terminate port forwards when they’re no longer needed.&lt;/p&gt;

&lt;p&gt;You can always see a list of all the port forwards in the Network -&amp;gt; Port Forwarding:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg7btwv209jyf8zjc4dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg7btwv209jyf8zjc4dw.png" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If connection issues arise, Lens provides diagnostic information about tunnel status, including error messages and retry attempts. Common issues include local port conflicts and service endpoint problems.&lt;/p&gt;
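&lt;p&gt;Under the hood, this is equivalent to the kubectl port-forward command, which is handy when you want the same tunnel from a plain terminal:&lt;/p&gt;

```shell
# Forward local port 8080 to port 80 of nginx-service
kubectl port-forward svc/nginx-service 8080:80

# In another terminal:
curl http://localhost:8080
```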

&lt;h3&gt;
  
  
  Deploy ArgoCD and connect to It
&lt;/h3&gt;

&lt;p&gt;ArgoCD implements GitOps principles that align perfectly with modern DevOps workflows. Deploying ArgoCD inside your K8s cluster and managing it with Lens demonstrates how visual cluster management integrates with continuous deployment tools.&lt;/p&gt;

&lt;p&gt;If you want to learn more about ArgoCD check out this &lt;a href="https://medium.com/spacelift/using-argocd-terraform-to-manage-kubernetes-cluster-a70e9d852d89" rel="noopener noreferrer"&gt;post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s install ArgoCD, but first let’s create a namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can easily monitor the deployment progress through Lens by navigating to the argocd namespace in the Workloads section. You’ll see multiple ArgoCD components initializing, such as the API server, repository server, application controller, and Redis cache. The visual status indicators show pod startup progress and resource allocation for each component. This saves you a lot of time and frustration, as you don’t need to spam kubectl get commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk43pnktxij5prmgn191t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk43pnktxij5prmgn191t.png" width="800" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ArgoCD generates an initial password for the admin user. You can easily retrieve it in Lens by going to Config-&amp;gt;Secrets and selecting the &lt;em&gt;argocd-initial-admin-secret&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg46i8fhs86ncljo8i13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyg46i8fhs86ncljo8i13.png" width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Secrets are Base64-encoded, but Lens can decode and display them when you click the Show button. Save the password, and now let’s verify that all components were deployed successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejqun2u08ceqdye01a2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejqun2u08ceqdye01a2r.png" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems like everything is running properly, so we are now ready to access the ArgoCD UI. We will use port forwarding once more: navigate to your services, look for the argocd-server service, and start the port forward:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9bma2b1eh6s8bp2dsdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9bma2b1eh6s8bp2dsdu.png" width="800" height="1082"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is what you will initially see in your browser, so make sure you use the &lt;em&gt;visit this website&lt;/em&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn4o55s3obs0u6xk5lo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqn4o55s3obs0u6xk5lo4.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ArgoCD login page appears, requesting username and password credentials. Use admin as the username and the password you extracted from the secret:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevsskkkh8ecfj4akxemg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevsskkkh8ecfj4akxemg.png" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ArgoCD dashboard provides a clean interface for application management, showing deployed applications, sync status, and health indicators.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykoaunlg5uy2huxnvlgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykoaunlg5uy2huxnvlgt.png" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Lens, you will see that new Custom Resource Definitions have appeared for the argoproj.io group, making it easy to track all the Argo-related resources in a single view:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zw8f1npehluex9ldjm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zw8f1npehluex9ldjm0.png" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will now deploy a dummy ArgoCD application just to show you how you will see it in Lens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the above to a file and use kubectl apply to deploy it. This is how the app appears in Lens after deploying:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmicz1f0lj735b5b13w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmicz1f0lj735b5b13w2.png" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And in ArgoCD:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhacbbixltl2fjk070vfq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhacbbixltl2fjk070vfq.png" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We won’t go through ArgoCD’s capabilities in this tutorial; I just wanted to show you how much Lens simplifies the overall deployment and debugging process. If any of your ArgoCD apps fail, you can use Lens to examine logs, resource status, and configuration details.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy a Helm Chart
&lt;/h3&gt;

&lt;p&gt;Let’s add a popular Helm repository to our local configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As soon as I’ve done this, I can go to Helm in Lens and select Charts, and I will see all the charts available from the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf2j1eo5nrp83vunl1te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqf2j1eo5nrp83vunl1te.png" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can easily search for a chart at the top of the Charts page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvia7mocr7aatf8r8gbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvia7mocr7aatf8r8gbd.png" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s deploy a WordPress application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-wordpress bitnami/wordpress \
  --namespace wordpress \
  --create-namespace \
  --set wordpressUsername=admin \
  --set wordpressPassword=secretpassword \
  --set service.type=ClusterIP \
  --set persistence.enabled=false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, we can monitor the deployment process by navigating to the wordpress namespace under Workloads-&amp;gt;Pods:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw076cptxo074b6etgqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw076cptxo074b6etgqw.png" width="800" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Helm release appears in Lens under the Releases section, showing release status, chart version, and installation notes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrcj8vxeh8cujbok0vby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrcj8vxeh8cujbok0vby.png" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can access the WordPress application through port forwarding, and similar to the ArgoCD deployment, we should get the login credentials from the &lt;em&gt;my-wordpress&lt;/em&gt; secret (or rather, this time we’ve actually hardcoded the password when installing the Helm chart; don’t do this).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhwol9wy1ghs7gtczij9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhwol9wy1ghs7gtczij9.png" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Points
&lt;/h3&gt;

&lt;p&gt;Lens is my go-to product when it comes to choosing a K8s IDE. You get all the debugging capabilities in one place, a built-in terminal, and the observability you need to understand what is happening in your K8s cluster.&lt;/p&gt;

&lt;p&gt;Using Lens saves you a lot of time, because you don’t need to remember complicated kubectl commands for everything your workflow requires. It’s important to understand that Lens is not a 100% replacement for kubectl, and that is not what it promises. It’s a complement to kubectl that aims to equip you with the tools you need to make your cloud-native life easier.&lt;/p&gt;

&lt;p&gt;Want to learn Terraform? Check out my Terraform series:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-0-i-like-to-start-counting-from-0-maybe-i-enjoy-lists-too-much-72cd0b86ebcd" rel="noopener noreferrer"&gt;Terraform from 0 to Hero — 0. I like to start counting from 0, maybe I enjoy lists too much&lt;/a&gt;&lt;/p&gt;

</description>
      <category>platform</category>
      <category>devops</category>
      <category>observability</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>IaCConf 2025: Why Infrastructure as Code deserves its own conference</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Tue, 06 May 2025 09:50:34 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/iacconf-2025-why-infrastructure-as-code-deserves-its-own-conference-400f</link>
      <guid>https://dev.to/flaviuscdinu/iacconf-2025-why-infrastructure-as-code-deserves-its-own-conference-400f</guid>
      <description>&lt;p&gt;Infrastructure as Code (IaC) has come a long way since its beginnings. What started as spinning up cloud resources has now grown into a full-blown ecosystem with a very engaged community. IaC on its own can add a lot of complexity to your processes and cause more issues, but combined with governance, compliance, security, deployment strategies, self-service, it easily becomes the fuel your delivery engine needs.&lt;/p&gt;

&lt;p&gt;This year, &lt;a href="https://www.iacconf.com/" rel="noopener noreferrer"&gt;IaCConf&lt;/a&gt; debuts as a virtual event by bringing together this community, and it’s shaping up to be one of the most relevant and practical events in the space.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes IaCConf stand out
&lt;/h2&gt;

&lt;p&gt;Unlike broader DevOps/SRE or cloud-native events, IaCConf is focused on the challenges and innovations around defining infrastructure as code. That means the content goes deeper, the conversations are more relevant, and the takeaways are directly applicable to your processes.&lt;/p&gt;

&lt;p&gt;There will be sessions on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Getting started with IaC&lt;/li&gt;
&lt;li&gt;Managing IaC at scale across teams and environments (using OpenTofu, Terraform, Ansible, Crossplane, and others)&lt;/li&gt;
&lt;li&gt;Platform Engineering and IaC&lt;/li&gt;
&lt;li&gt;How IaC impacts your maturity&lt;/li&gt;
&lt;li&gt;AI in IaC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s just scratching the surface. Check out the full agenda here.&lt;/p&gt;

&lt;p&gt;One of the things I appreciate most about this conference is that it’s not about product pitches or high-level fluff. Practitioners share what worked, what broke, and how they improved. Whether you are just starting out, are in the trenches, or shaping platform strategy, there’s something for you here.&lt;/p&gt;

&lt;p&gt;On another note, I’m also excited to contribute to this year’s lineup with a session on Getting Started with IaC, together with my colleague Emin Alemdar. But honestly, I’m just as excited to learn and to connect with others working through the same challenges.&lt;/p&gt;

&lt;p&gt;The event will be on Thu, May 15, 2025 @ 11:00am EDT, so don’t forget to register &lt;a href="https://www.iacconf.com/#registration" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This post was originally posted on &lt;a href="https://medium.com/@flaviuscdinu93/iacconf-2025-why-infrastructure-as-code-deserves-its-own-conference-f80ed39cccd4" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>programming</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Deploying a static Website with Pulumi</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Thu, 03 Apr 2025 11:45:30 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/deploying-a-static-website-with-pulumi-2dbj</link>
      <guid>https://dev.to/flaviuscdinu/deploying-a-static-website-with-pulumi-2dbj</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/pulumi"&gt;Pulumi Deploy and Document Challenge&lt;/a&gt;: Fast Static Website Deployment&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;For this challenge, I've built a simple static website based on &lt;a href="https://docusaurus.io/" rel="noopener noreferrer"&gt;Docusaurus&lt;/a&gt; for tutorials and blog posts. As I'm not too seasoned with frontend development, I only made small changes to the template and added some very simple blog posts and tutorials there.&lt;/p&gt;

&lt;p&gt;To make this more interesting, I've decided to build the infrastructure in both Python and JavaScript using Pulumi, and deploy the websites to different S3 buckets and subdomains.&lt;/p&gt;

&lt;p&gt;The Python Pulumi code is deployed with &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;. This leverages static credentials for AWS embedded as repository secrets. I have implemented two workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One for the actual deployment of the Pulumi Python code and the application called &lt;code&gt;pulumi-deploy&lt;/code&gt;. This one runs on push to the main branch, but it first does a &lt;code&gt;pulumi preview&lt;/code&gt; and waits for manual approval to do a &lt;code&gt;pulumi up&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Another one for removing the application from S3 and destroying the infrastructure called &lt;code&gt;pulumi-destroy&lt;/code&gt;. This one will run manually&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kpkeysjq7kl6r2yn9qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kpkeysjq7kl6r2yn9qd.png" alt="GitHub Actions deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The JavaScript Pulumi code is deployed with &lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt; and leverages dynamic credentials based on &lt;a href="https://docs.spacelift.io/integrations/cloud-providers" rel="noopener noreferrer"&gt;Spacelift's cloud integration&lt;/a&gt; for AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6aaf4ysxcfj2za8pe31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6aaf4ysxcfj2za8pe31.png" alt="Spacelift Resources Deployed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Live Demo Link
&lt;/h2&gt;

&lt;p&gt;Even though it is the same website, it is deployed differently, as mentioned above, and thus uses different subdomains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.flaviusdinu.com" rel="noopener noreferrer"&gt;Pulumi Python with GitHub Actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docsjs.flaviusdinu.com" rel="noopener noreferrer"&gt;Pulumi JavaScript with Spacelift&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj74k1wwufejp8ckbnca8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj74k1wwufejp8ckbnca8.png" alt="Two different deployments"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Repo
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/flavius-dinu" rel="noopener noreferrer"&gt;
        flavius-dinu
      &lt;/a&gt; / &lt;a href="https://github.com/flavius-dinu/pulumi_dev_challenge" rel="noopener noreferrer"&gt;
        pulumi_dev_challenge
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Deploying a static website with Pulumi&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;The aim of this project is to showcase how to deploy a static website built using Docusaurus by using:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Pulumi (python) and GitHub Actions&lt;/li&gt;
&lt;li&gt;Pulumi (javascript) and Spacelift&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Repo structure explained:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;.github --  GitHub Actions workflows for deploying the Pulumi python code&lt;/li&gt;
&lt;li&gt;docs -- Documentation website built with Docusaurus, that can easily host your blogs and tutorials&lt;/li&gt;
&lt;li&gt;javascript-infrastructure -- Pulumi javascript infrastructure that will be used by Spacelift to deploy the infrastructure and the application&lt;/li&gt;
&lt;li&gt;python-infrastructure -- Pulumi python infrastructure that will be used by GitHub Actions to deploy the infrastructure and the application&lt;/li&gt;
&lt;li&gt;images -- Images used throughout the documentation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To use this project, feel free to fork the repository and follow the instructions below.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Prerequisites&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;An AWS Route53 hosted zone for your domain&lt;/li&gt;
&lt;li&gt;An AWS S3 bucket for Pulumi state hosting&lt;/li&gt;
&lt;li&gt;An AWS role for building dynamic credentials for Spacelift&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;How to create&lt;/h3&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/flavius-dinu/pulumi_dev_challenge" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Ensure you check the &lt;a href="https://github.com/flavius-dinu/pulumi_dev_challenge/blob/main/README.md" rel="noopener noreferrer"&gt;README.md&lt;/a&gt; to understand how to use the automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey
&lt;/h2&gt;

&lt;p&gt;Initially, I started this challenge by thinking about what kind of static website I would implement. I was debating between making a newsletter and creating a blog/tutorials website, with the latter ending up being the right choice for me.&lt;/p&gt;

&lt;p&gt;As I'm by no means a frontend developer, I've started researching what would be the best framework to achieve this based on my limited experience, and it seemed like Docusaurus was the easiest choice for me. Again, this is very subjective, so I would encourage everyone to do their own research before building something like this, especially if this is not your day-to-day.&lt;/p&gt;

&lt;p&gt;I followed the Docusaurus documentation on how to initialize the project, and then I deployed the website locally to see how it looks. After diving into the repository template, I found some of the things I wanted to change pretty easily:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to add an author to the website&lt;/li&gt;
&lt;li&gt;How to add social links&lt;/li&gt;
&lt;li&gt;Where to change the footer &lt;/li&gt;
&lt;li&gt;How to add new articles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For some of the other things, I ran a lot of "grep -ri &lt;strong&gt;insert_whatever&lt;/strong&gt;" inside the repository to find the locations where something is used, and made changes to reflect my view.&lt;/p&gt;

&lt;p&gt;Now that I was done with the website, I decided what I would do for the static hosting. I debated using cloud providers other than the ones I'm accustomed to (AWS, Azure, Oracle Cloud), but as I already had my domain registered in AWS Route53, I ended up choosing AWS.&lt;/p&gt;

&lt;p&gt;Next, I had to decide what services I would use, so I chose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 bucket for static content storage &lt;/li&gt;
&lt;li&gt;CloudFront distribution for content delivery&lt;/li&gt;
&lt;li&gt;ACM certificate with DNS validation for HTTPS&lt;/li&gt;
&lt;li&gt;Route53 DNS configuration: 1 record for ACM certificate validation and an A record pointing to the CloudFront distribution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I built the initial version of the infrastructure using Pulumi with Python, and I initially decided to deploy it via Spacelift. Then I thought to myself: wouldn't it be more interesting to explore a different way to build and deploy the infrastructure? So I ended up with two versions: one implemented in Python that I ultimately deployed with GitHub Actions, and another implemented in JavaScript that I deployed with Spacelift.&lt;/p&gt;

&lt;p&gt;By having two deployment options, I could easily see Pulumi in action and how the process differs between a generic CI/CD and an infrastructure orchestration platform. It took me a couple of hours to build the GitHub Actions CI/CD pipelines, but with Spacelift I was done in less than an hour.&lt;/p&gt;

&lt;p&gt;I could've easily kept a single version of the infrastructure, but I wanted the work to be more challenging, and that's why I went for multiple versions.&lt;/p&gt;

&lt;p&gt;During this process, I faced a couple of issues, but the most important thing that I've learned is the fact that &lt;code&gt;CloudFront&lt;/code&gt; only accepts &lt;code&gt;ACM certificates&lt;/code&gt; from the &lt;code&gt;us-east-1&lt;/code&gt; region. &lt;/p&gt;

&lt;p&gt;This made me waste some time, as I couldn't really understand why it was happening. So to make this clear for everyone: even though &lt;code&gt;CloudFront&lt;/code&gt; is a global service, its configuration and metadata are managed centrally in &lt;code&gt;us-east-1&lt;/code&gt;. By requiring certificates in &lt;code&gt;us-east-1&lt;/code&gt;, AWS avoids the complexity of syncing certificates across regions. I'm sure this is in the study material for multiple AWS certifications, but as I don't use &lt;code&gt;CloudFront&lt;/code&gt; much, it really slipped my mind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Pulumi
&lt;/h2&gt;

&lt;p&gt;I used Pulumi to define and deploy the necessary infrastructure for a static website in AWS. As Pulumi is versatile, it made it easy for me to build my infrastructure resources in the programming languages I like to use.&lt;/p&gt;

&lt;p&gt;It was very easy to set up the boilerplates for both the Python and Javascript examples by running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pulumi new python&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pulumi new javascript&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After following the prompts, I started the actual development of the Python project.&lt;/p&gt;
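&lt;p&gt;Day to day, the Pulumi workflow boils down to a few CLI commands (a sketch; stack selection and backend login are assumed to be configured already):&lt;/p&gt;

```shell
# Preview the planned changes without applying anything
pulumi preview

# Apply the changes; --yes skips the interactive confirmation
pulumi up --yes

# Tear everything down when you are done
pulumi destroy --yes
```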

&lt;p&gt;Pulumi helped me easily:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create an S3 bucket with website hosting enabled and tags&lt;/li&gt;
&lt;li&gt;use a CloudFront distribution to serve content securely, ensure only CloudFront can access the S3 bucket using Origin Access Identity, and enable HTTPS using ACM-issued certificates&lt;/li&gt;
&lt;li&gt;implement DNS Configuration with Route53 to retrieve the existing hosted zone and create the necessary DNS records&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why was Pulumi Beneficial
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Multi-Language Support - I implemented the same infrastructure twice using Python and JavaScript, which can be useful for teams with different backgrounds&lt;/li&gt;
&lt;li&gt;Resource Management with code - Pulumi lets you take advantage of programming capabilities (loops, conditionals) to dynamically configure resources; Using functions such as the &lt;code&gt;apply&lt;/code&gt; function makes it easy to handle dependencies&lt;/li&gt;
&lt;li&gt;Easy to use by developers - My background is in DevOps engineering, so I've seen firsthand the issues developers have with DevOps processes. Without self-service, developer velocity is affected, but by leveraging something that developers know, the impact is lower.&lt;/li&gt;
&lt;li&gt;Stateful approach - Pulumi's stateful nature makes it easy to detect changes, manage dependencies, and ensure idempotency&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Future developments
&lt;/h2&gt;

&lt;p&gt;For this small project, I believe the infrastructure code is in good shape, but it would be beneficial to take advantage of some of the capabilities infrastructure orchestration platforms offer out of the box, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Policy as code - Policies could be implemented to restrict the CloudFront distribution's price class, require multiple approvals for runs, and ensure you are notified in your Slack/MS Teams about failed runs&lt;/li&gt;
&lt;li&gt;Drift detection - Drift happens, so why let it ruin your application? Implementing drift detection and remediation can be a lifesaver when it comes to ensuring infrastructure reliability and resilience&lt;/li&gt;
&lt;li&gt;Self-service - Developers should be able to deploy resources on demand while staying safe&lt;/li&gt;
&lt;li&gt;Secrets management - Use a specialized secrets management solution such as HashiCorp Vault, OpenBao, or Pulumi ESC to ensure secrets are securely stored and encrypted&lt;/li&gt;
&lt;li&gt;Write unit tests - Capture issues before applying the code&lt;/li&gt;
&lt;li&gt;Integrate security vulnerability scanning - Ensure you are not deploying code with vulnerabilities: use Checkov, Kics, or other specialized solutions for that&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to this, we can explore other deployment options, such as Pulumi Cloud, use other programming languages to build the infrastructure, and even choose other cloud providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key points
&lt;/h2&gt;

&lt;p&gt;This was a very fun challenge that had real-world applicability. I believe Pulumi's flexibility makes it a powerful choice for managing cloud infrastructure, especially where teams prefer using programming languages.&lt;/p&gt;

&lt;p&gt;Pulumi integrates easily with many CI/CD and infrastructure orchestration platforms to automate deployments, minimize human error, and implement governance.&lt;/p&gt;

&lt;p&gt;It was a great opportunity to use Pulumi in different contexts and leverage modern DevOps technologies to streamline cloud infrastructure management.&lt;/p&gt;

&lt;p&gt;Keep building!&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>pulumichallenge</category>
      <category>webdev</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Running AI Models in Docker</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Fri, 28 Mar 2025 10:52:51 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/running-ai-models-in-docker-1nom</link>
      <guid>https://dev.to/flaviuscdinu/running-ai-models-in-docker-1nom</guid>
      <description>&lt;p&gt;Do you know what is the easiest way to run AI models? It’s of course, using Docker.&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/4dC2VdXYlMc"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Docker has added support for running AI models in Docker Desktop 4.40.0, and it’s sooo easy to get started. Right now, this is only available on Mac with Apple Silicon, but it will soon be available on Windows too.&lt;/p&gt;

&lt;p&gt;Note: I got early access to the feature, and it will soon be generally available.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does the Model Runner work?
&lt;/h2&gt;

&lt;p&gt;The Docker Model Runner doesn’t run in a container; instead, it uses a host-installed inference server (llama.cpp) that runs locally on your computer. In the future, Docker will support additional inference servers.&lt;/p&gt;

&lt;p&gt;To break down how this works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A host-level process provides direct access to the hardware for GPU acceleration&lt;/li&gt;
&lt;li&gt;GPU acceleration is enabled via Apple’s Metal API during query processing&lt;/li&gt;
&lt;li&gt;Models are cached locally in your host machine’s storage and are dynamically loaded into memory by llama.cpp when needed. This means that your data never leaves your infrastructure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Models are stored as OCI artifacts in Docker Hub, ensuring compatibility with other registries, including internal ones. This approach enables faster deployments, reduces disk usage, and improves UX by avoiding unnecessary compression.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can you use the Model Runner?
&lt;/h2&gt;

&lt;p&gt;To get started, open Docker Desktop, select Features in Development under Settings, and make sure Enable Docker Model Runner is toggled on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6v0j6s6kuzcqfxly9ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6v0j6s6kuzcqfxly9ev.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you enable the feature, make sure you restart Docker Desktop.&lt;/p&gt;

&lt;p&gt;To see if the model runner is up, you can simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model status
Docker Model Runner is running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to see which models are available locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model list
{"object":"list","data":[]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, if you haven’t pulled any models yet, the list will be empty, as it is in my case.&lt;/p&gt;

&lt;p&gt;Right now, you can find a list of models &lt;a href="https://hub.docker.com/u/ai" rel="noopener noreferrer"&gt;here&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi65ydzazmdptihhy4ndy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi65ydzazmdptihhy4ndy.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To pull a model, you can simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model pull model_name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this example, I will use ai/llama3.2.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model pull ai/llama3.2
Model ai/llama3.2 pulled successfully
We are now ready to run the model. You can do it either interactively, or just do a one-off command.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model run ai/llama3.2                                
Interactive chat mode started. Type '/bye' to exit.
&amp;gt; Hey how are you?
I'm just a language model, so I don't have feelings or emotions like 
humans do, but I'm functioning properly and ready to help you 
with any questions or tasks you may have! 
How about you? How's your day going?
docker model run ai/llama3.2 "Hello"
Hello! How are you today? Is there something I can help you with, or would you like to chat?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Well, this is cool, but you want to take it to the next level and use it in your apps, right?&lt;/p&gt;

&lt;p&gt;You can connect to the model in three ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;From the container by using the internal DNS name: &lt;a href="http://model-runner.docker.internal/" rel="noopener noreferrer"&gt;http://model-runner.docker.internal/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;From the host using the Docker Socket&lt;/li&gt;
&lt;li&gt;From the host using TCP&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Another great thing is that the API offers OpenAI-compatible endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /engines/{backend}/v1/models
GET /engines/{backend}/v1/models/{namespace}/{name}
POST /engines/{backend}/v1/chat/completions
POST /engines/{backend}/v1/completions
POST /engines/{backend}/v1/embeddings
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
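
&lt;p&gt;For example, a chat completion request could look like this (a sketch: it assumes llama.cpp as the backend in the path, the in-container hostname from the list above, and the model we pulled earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/llama3.2",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;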



&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;Docker Model Runner makes running AI models ridiculously simple. With just a few commands, you can pull, run, and integrate models into your applications — all while keeping everything local and secure.&lt;/p&gt;

&lt;p&gt;This is just the beginning — as Docker expands support to Windows and additional inference servers, the experience will only get better.&lt;/p&gt;

&lt;p&gt;Stay tuned, and keep building 🐳&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>docker</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How to effectively use YAML variables in your Terraform code</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Fri, 21 Mar 2025 14:59:34 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/how-to-effectively-use-yaml-variables-in-your-terraform-code-55pa</link>
      <guid>https://dev.to/flaviuscdinu/how-to-effectively-use-yaml-variables-in-your-terraform-code-55pa</guid>
      <description>&lt;p&gt;TL/DR:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/UlZN18aijRM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In this post, we will explore how to easily use YAML variables inside your Terraform code.&lt;/p&gt;

&lt;p&gt;The best part? You don't need any external tools for this, and everything also applies to OpenTofu.&lt;/p&gt;

&lt;p&gt;First, let's take a look at a very simple example that will create one EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami_id"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's build a very straightforward YAML configuration that will replace the hardcoded values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;eu-west-1"&lt;/span&gt;
&lt;span class="na"&gt;instance_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;t2.micro"&lt;/span&gt;
&lt;span class="na"&gt;ami&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ami_id"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's suppose we've saved this YAML configuration in a file called &lt;strong&gt;vars.yaml&lt;/strong&gt;, in the same directory where the Terraform configuration is kept.&lt;/p&gt;

&lt;p&gt;To make use of this file, we will read it in a locals block. Terraform has a built-in &lt;strong&gt;file&lt;/strong&gt; function that lets you read the contents of a file. Because this is a YAML file, we also need a way for Terraform to parse it properly. Luckily, there is a function for that too, called &lt;strong&gt;yamldecode&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The logic is to first read the file, then decode it, and then reference its fields through the resulting local value.&lt;/p&gt;

&lt;p&gt;This is how it will look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;locals&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;yaml_vars&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;yamldecode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"./vars.yaml"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml_vars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml_vars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml_vars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ami&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty straightforward, right? Now, you can take this up a notch and use it in a for_each as well.&lt;/p&gt;

&lt;p&gt;Let's suppose you build this YAML file for your instances and VPCs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;instances&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;instance1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;instance_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t2.micro&lt;/span&gt;
    &lt;span class="na"&gt;ami&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ami1&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;env"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev"&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;instance2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;instance_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t3.micro&lt;/span&gt;
    &lt;span class="na"&gt;ami&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ami2&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;env"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev"&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;instance3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;instance_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t3.micro&lt;/span&gt;
    &lt;span class="na"&gt;ami&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ami2&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;env"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dev"&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;

&lt;span class="na"&gt;vpcs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;vpc1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cidr_block&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.0.1.0/24&lt;/span&gt;
  &lt;span class="na"&gt;vpc2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cidr_block&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.0.2.0/24&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how your Terraform code will look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;locals&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;yaml_vars&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;yamldecode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"./vars.yaml"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eu-west-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;for_each&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml_vars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instances&lt;/span&gt;
  &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
  &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ami&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;merge&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;for_each&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml_vars&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpcs&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cidr_block&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can quickly generate as many EC2 instances and VPCs as you'd like, simply by modifying the YAML file.&lt;/p&gt;
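
&lt;p&gt;As a small bonus sketch, you can also expose what was created with an output block (the &lt;strong&gt;id&lt;/strong&gt; attribute follows the AWS provider's schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_ids" {
  # Map each YAML key (instance1, instance2, ...) to the created instance ID
  value = { for name, instance in aws_instance.this : name =&gt; instance.id }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;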

</description>
      <category>programming</category>
      <category>terraform</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Docker from 0 to Hero</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Sat, 15 Mar 2025 12:25:42 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/-docker-from-0-to-hero-bpk</link>
      <guid>https://dev.to/flaviuscdinu/-docker-from-0-to-hero-bpk</guid>
      <description>&lt;p&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; has grown a lot in popularity in recent years due to the way it simplifies application packaging and deployment. In my view, understanding Docker and its core concepts is essential for software and DevOps engineers alike.&lt;/p&gt;

&lt;p&gt;In this article we will cover:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Docker Intro&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Images and Containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Networks and Volumes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Registries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Compose&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Security&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Swarm and Kubernetes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker CI/CD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker Optimization and Beyond Docker&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/@devopswithflavius" rel="noopener noreferrer"&gt;Check out my YouTube Channel!&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. What is Docker?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Docker is an open-source platform that allows you to automate your application lifecycle using container technologies. To put it simply, containers are lightweight, isolated environments that run your applications.&lt;/p&gt;

&lt;p&gt;Docker provides an abstraction layer on top of the host operating system, allowing applications to run consistently regardless of differences in the underlying infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why use Docker?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are several reasons why Docker has become so widely adopted across IT.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;With Docker you can enable your developers to package their applications and all their dependencies into a single element (container) — ensuring consistency and reusability across different environments. This eliminates (or at least tries to) the infamous “It works on my machine” problem, which, let’s face it, is so annoying.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker provides a lightweight alternative to virtualization. The difference between containers and virtual machines is that containers share the host operating system kernel, resulting in reduced overhead and faster startup times compared to virtual machines (VMs include a full guest operating system, meaning they come with their own kernel, which runs on top of a hypervisor).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker allows for the easy scaling and deployment of applications across different environments, making it a great solution for cloud-native architectures.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note: Interviewers will always ask you what’s the difference between containers and virtual machines, so make sure you understand and remember it.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker Key benefits&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here are some of the key benefits Docker offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistency and reusability —&lt;/strong&gt; Ensures your application will run according to plan, independent of the environment it runs in&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Portability —&lt;/strong&gt; By keeping your Docker images as small as possible, you can easily share them across different environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Efficiency&lt;/strong&gt; — Being lightweight, Docker containers reduce the overhead when compared to virtual machines. It makes them faster to start and more efficient in terms of resource consumption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability —&lt;/strong&gt; Scaling applications up or down is something Docker does easily. Combining it with Docker Compose, Docker Swarm, or Kubernetes takes scalability to the next level.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Isolation —&lt;/strong&gt; Docker ensures that different applications that run on the same host don’t interfere with each other&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version control and rollbacks&lt;/strong&gt; — By pairing your Docker images and Docker configurations with version control, you can easily have different versions of your Docker images, making rollbacks a piece of cake&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Getting Started with Docker&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we have a fundamental understanding of what Docker is and the benefits of using it, let’s get started with the installation and setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Installing Docker&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Installing Docker is simple, as Docker provides installation packages for different operating systems, including macOS, Linux distributions and Windows. Simply download the appropriate package for your system and follow the installation instructions provided by Docker.&lt;/p&gt;

&lt;p&gt;For most operating systems, the easiest thing to do is install &lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e50d8c4-13f8-4b7a-811e-e9b80fbdc5d8_1400x874.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpfc28332prkfgu9w3lkv.png" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Docker Architecture Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To effectively work with Docker, you need to first understand the Docker architecture. At a high level, Docker consists of three main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.docker.com/engine/daemon/" rel="noopener noreferrer"&gt;Docker daemon&lt;/a&gt;&lt;/strong&gt; (dockerd)— The Docker daemon is responsible for building, running, and monitoring containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker client&lt;/strong&gt; — The Docker client is a command-line tool that allows you to interact with the Docker daemon. It sends commands to the daemon and receives information from it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker registry&lt;/strong&gt; — The Docker registry is a centralized repository for Docker images. It allows you to pull images from the registry and push your own images.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker’s Core Components&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In addition to the architecture components above, Docker relies on several core components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image —&lt;/strong&gt; read-only template that contains everything needed to run an application (code, runtime, libraries, and system tools). You can leverage images created by others, or you have the ability to create your own.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container —&lt;/strong&gt; instance of an image that you can run, start, stop, and delete. They are isolated from each other and the host system, ensuring that applications run in a consistent environment. You can easily create multiple containers from the same image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Registry&lt;/strong&gt; — centralized place for storing and distributing Docker images. These images can be easily pulled on the host system, and based on them, you can create Docker containers. Docker Hub is the default public registry, but you can also set up private registries or use other public registries as well. Don’t worry, we will cover this in detail in another post.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker CLI&lt;/strong&gt; — primary way to interact with Docker. It provides a set of commands that allow you to manage containers, images, networks, and volumes. With the Docker CLI, you can create, start, stop, and remove containers. You can also build, tag, and push images, as well as manage networks and volumes. We will explore some of the most useful commands below.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dockerfile —&lt;/strong&gt; file that contains a set of instructions for building a Docker image. It specifies the base image, the configurations you want applied, and the commands to be executed when the image is built. The Dockerfile allows you to automate the process of creating Docker images, making it easy to reproduce and share them across different environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Compose —&lt;/strong&gt; tool for defining and running multi-container Docker applications. It allows you to define a group of containers as a single service and configure their dependencies and network connections. With Docker Compose, you can easily define complex application architectures and manage their deployment and scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker Swarm —&lt;/strong&gt; native clustering and orchestration solution provided by Docker. It allows you to create and manage a swarm of Docker nodes, enabling high availability and scalability for your applications. With Docker Swarm, you can deploy and scale your applications across multiple Docker hosts, ensuring that they are always available and can handle increased workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
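
&lt;p&gt;To make these components concrete, here is a minimal sketch of a Dockerfile (the base image and file names are placeholders for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start from a small base image
FROM nginx:alpine
# Copy a static page into the location nginx serves from
COPY index.html /usr/share/nginx/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You would then build an image from it with &lt;code&gt;docker build -t my-site .&lt;/code&gt; and start a container with &lt;code&gt;docker run -d -p 8080:80 my-site&lt;/code&gt;.&lt;/p&gt;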

&lt;p&gt;These core components work together to provide a powerful and flexible platform for building, deploying, and managing containerized applications with Docker.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker Commands&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we have covered the basics of Docker, let’s explore a list of Docker commands that will help you work with Docker more effectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Basic Docker Commands&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here are some of the basic Docker commands you’ll frequently use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker run&lt;/code&gt;: Run a command in a new container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker build&lt;/code&gt;: Builds a Docker image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker tag&lt;/code&gt;: Tags an image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker start&lt;/code&gt;: Start one or more stopped containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker stop&lt;/code&gt;: Stop one or more running containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker rm&lt;/code&gt;: Remove one or more containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker ps&lt;/code&gt;: Gets details of all running Docker containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker ps -a&lt;/code&gt;: Gets details of all your Docker containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker cp&lt;/code&gt;: Copies files or folders between your local filesystem and your containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker logs&lt;/code&gt;: Fetches the logs of a container&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Docker Compose and Swarm commands&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We will dive deep into Docker Compose and Swarm in the next part of this article. In the meantime, here are some commonly used Docker Compose and Swarm commands, just as a little teaser of what we will do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker-compose up&lt;/code&gt;: Create and start containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker-compose down&lt;/code&gt;: Stop and remove containers, networks, and volumes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker-compose build&lt;/code&gt;: Build or rebuild services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker swarm init&lt;/code&gt;: Initialize a swarm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker swarm join&lt;/code&gt;: Join a swarm as a worker or manager&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker service create&lt;/code&gt;: Create a new service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker node ls&lt;/code&gt;: List nodes in a swarm&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
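
&lt;p&gt;As a quick taste of what those commands operate on, here is a minimal sketch of a &lt;strong&gt;docker-compose.yml&lt;/strong&gt; (the image names and port are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;docker-compose up&lt;/code&gt; in the same directory starts both containers.&lt;/p&gt;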

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/1-introduction-to-docker-and-the-nautical-leadership-journey-29ff08772912" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  2. Docker images and Docker containers
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is a Docker image?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A Docker image is a template used by your containers, into which you install all the packages required to run your applications.&lt;/p&gt;

&lt;p&gt;Acting as blueprints for your Docker containers, Docker images are composed of multiple &lt;em&gt;&lt;strong&gt;read-only&lt;/strong&gt;&lt;/em&gt; layers stacked on top of each other. We will build a Docker image later on, and I will show you the layers, so don’t worry.&lt;/p&gt;

&lt;p&gt;Each of these layers corresponds to an instruction in your Dockerfile, and layers can carry different kinds of information: one layer may specify the base image you are building from, another may install dependencies required by your application, and another may simply copy files from your local filesystem into the image.&lt;/p&gt;

&lt;p&gt;Regardless of the underlying infrastructure or operating system, you can be sure your image will run on any host that has Docker installed (with one small caveat: the image architecture must match the host system’s architecture).&lt;/p&gt;

&lt;p&gt;Building images in layers means Docker can reuse cached layers to speed up the build of the current image, and even share those layers across different images you may be building. To keep the cache effective, avoid changing the earlier layers in your Dockerfile: a change to any layer invalidates the cache for every layer after it.&lt;/p&gt;

&lt;p&gt;Caching is enabled by default, but you can still build an image without reusing the cached layers. In some cases you may want to do exactly that, and the &lt;code&gt;--no-cache&lt;/code&gt; flag lets you do so.&lt;/p&gt;
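
&lt;p&gt;As a hedged illustration (this assumes a hypothetical Python application with a &lt;code&gt;requirements.txt&lt;/code&gt;), ordering the rarely changing dependency step before the frequently changing source copy keeps the cache effective on most rebuilds:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.12-alpine

# This layer changes rarely, so it stays cached across most rebuilds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Source code changes often; only the layers from here down are rebuilt
COPY . .
CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;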

&lt;h2&gt;
  
  
  &lt;strong&gt;What is a Docker container?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A Docker container is a runnable instance created from a Docker image, serving the purpose you defined inside that particular image. Because Docker images are just blueprints, they don’t do anything on their own; they need to be run as containers to accomplish the task at hand.&lt;/p&gt;

&lt;p&gt;Docker containers run in their own &lt;em&gt;&lt;strong&gt;isolated&lt;/strong&gt;&lt;/em&gt; environments, meaning they are separated from each other and even from the host system. Each container has its own filesystem, processes, and network, while still sharing the host operating system’s kernel. As mentioned in the previous article, this is one of the main differences between containers and virtual machines, so don’t forget to take note.&lt;/p&gt;

&lt;p&gt;Containers are more lightweight than their virtual machine counterparts, and they are designed to be ephemeral. You can easily start, stop, and destroy containers, and all of their data will be lost unless it is explicitly stored outside of them (in a Docker volume, or pushed to external storage such as Amazon S3 or Azure Blob Storage).&lt;/p&gt;
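
&lt;p&gt;A short sketch of the Docker volume approach (the volume and image names here are placeholders, and this assumes Docker is installed locally):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a named volume and mount it into a container;
# data written under /app/data survives container removal
docker volume create app_data
docker run -v app_data:/app/data my_image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;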

&lt;p&gt;This flexibility makes Docker containers an ideal choice for microservices applications, because you can scale them together or independently, including on container orchestration platforms such as Kubernetes or Docker Swarm.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Working with existing Docker images&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As I’ve mentioned before, you can easily use existing Docker images and create containers from them.&lt;/p&gt;

&lt;p&gt;I will show you an easy example of how to do it, but we will talk more about this when we reach the part dedicated to registries. Docker has a neat way of checking whether you have an image locally; if you don’t, it will pull it from the Docker Hub registry.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Pulling an image&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To pull an image, we can use the &lt;code&gt;docker pull image_name&lt;/code&gt; command. For the sake of this example, let’s pull the &lt;a href="https://hub.docker.com/_/hello-world" rel="noopener noreferrer"&gt;hello-world image&lt;/a&gt; and create a container from it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
478afc919002: Pull complete 
Digest: sha256:53cc4d415d839c98be39331c948609b659ed725170ad2ca8eb36951288f81b75
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s create the container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run hello-world                                                        

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm64v8)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Pushing an image&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To push an image to a registry, you can run the &lt;code&gt;docker push image_name&lt;/code&gt; command. As I’ve mentioned before, we will dive deeper into this in a later part.&lt;/p&gt;
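
&lt;p&gt;As a quick, hedged preview (the Docker Hub username below is a placeholder), pushing typically means logging in, tagging the image with your registry namespace, and then pushing it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login
docker tag dev_image:1.0.0 your_username/dev_image:1.0.0
docker push your_username/dev_image:1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;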
&lt;h3&gt;
  
  
  &lt;strong&gt;Listing your local images&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To list your images, you can simply run the &lt;code&gt;docker images&lt;/code&gt; command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images         
REPOSITORY     TAG       IMAGE ID       CREATED          SIZE
kindest/node   &amp;lt;none&amp;gt;    c67167dbf296   3 months ago     980MB
hello-world    latest    ee301c921b8a   16 months ago    9.14kB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Removing a local image&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To remove a local image, you can simply run &lt;code&gt;docker image rm image_name&lt;/code&gt; or &lt;code&gt;docker image rm first_few_letters_of_id&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image rm ee
Untagged: hello-world:latest
Untagged: hello-world@sha256:53cc4d415d839c98be39331c948609b659ed725170ad2ca8eb36951288f81b75
Deleted: sha256:ee301c921b8aadc002973b2e0c3da17d701dcd994b606769a7e6eaa100b81d44
Deleted: sha256:12660636fe55438cc3ae7424da7ac56e845cdb52493ff9cf949c47a7f57f8b43
➜  episode2 docker images     
REPOSITORY     TAG       IMAGE ID       CREATED          SIZE
kindest/node   &amp;lt;none&amp;gt;    c67167dbf296   3 months ago     980MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;What is a Dockerfile?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A Dockerfile is a special document that contains instructions for building a Docker image. It is essentially a script of commands that Docker uses to automatically create a container image.&lt;/p&gt;

&lt;p&gt;The Dockerfile supports the following instructions, and I’ll list them in the order of their importance (from my point of view, of course):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;FROM&lt;/strong&gt; — creates a new build stage, starting from a base image. If you have two FROM instructions in the same Dockerfile, you will have two build stages&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RUN&lt;/strong&gt; — executes commands inside the image you are building. You can chain multiple commands in a single RUN instruction by using “&amp;amp;&amp;amp;”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CMD —&lt;/strong&gt; the default command the docker container will run&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ENTRYPOINT —&lt;/strong&gt; specifies the default executable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EXPOSE —&lt;/strong&gt; ports your app is listening on&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ADD —&lt;/strong&gt; adds files from the host system or from URLs, and can also extract local archives automatically&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;COPY&lt;/strong&gt; — adds files from the host system only&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ARG —&lt;/strong&gt; build-time variable that can be used in other instructions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ENV —&lt;/strong&gt; sets environment variables inside the Docker container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;WORKDIR —&lt;/strong&gt; sets the working directory for any other commands that may run after it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VOLUME —&lt;/strong&gt; creates a mount point with the specified path&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SHELL —&lt;/strong&gt; sets the shell&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;USER —&lt;/strong&gt; sets the user&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HEALTHCHECK —&lt;/strong&gt; defines a command to test the health of the container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;STOPSIGNAL —&lt;/strong&gt; sets the system call signal for exiting a container&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ONBUILD —&lt;/strong&gt; sets an instruction that triggers when the image is used as a base image for another build&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LABEL —&lt;/strong&gt; adds metadata to the image in a key-value format&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MAINTAINER —&lt;/strong&gt; deprecated in favor of LABEL, was used to specify the maintainer of the image&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note 1:&lt;/strong&gt; In interviews, you will often be asked about the difference between &lt;strong&gt;ADD&lt;/strong&gt; and &lt;strong&gt;COPY&lt;/strong&gt;. Keep in mind that &lt;strong&gt;COPY&lt;/strong&gt; can only copy files from the host filesystem, while &lt;strong&gt;ADD&lt;/strong&gt; can also fetch files from a URL or extract archives.&lt;/p&gt;
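
&lt;p&gt;A small Dockerfile sketch of that difference (the file names and URL here are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# COPY: host filesystem only
COPY config.yaml /app/config.yaml

# ADD: can also fetch from a URL
ADD https://example.com/tools.txt /app/tools.txt

# ADD: extracts a local tar archive automatically
ADD app.tar.gz /app/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;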

&lt;p&gt;&lt;strong&gt;Note 2:&lt;/strong&gt; Another common interview question concerns the difference between &lt;strong&gt;CMD&lt;/strong&gt; and &lt;strong&gt;ENTRYPOINT&lt;/strong&gt;. &lt;strong&gt;CMD&lt;/strong&gt; provides default arguments for the container’s executable, while &lt;strong&gt;ENTRYPOINT&lt;/strong&gt; defines the executable itself. If you set &lt;strong&gt;CMD&lt;/strong&gt; without setting &lt;strong&gt;ENTRYPOINT&lt;/strong&gt;, &lt;strong&gt;CMD&lt;/strong&gt; acts as the full command the container runs.&lt;/p&gt;
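
&lt;p&gt;A minimal sketch of how the two interact (the executable and script names are just examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENTRYPOINT ["python"]
CMD ["app.py"]

# "docker run image" runs: python app.py
# "docker run image other.py" runs: python other.py (CMD is overridden)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;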
&lt;h2&gt;
  
  
  &lt;strong&gt;Dockerfile tutorial&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Nothing goes better in a tutorial than a real-life example.&lt;/p&gt;

&lt;p&gt;If you want to watch a video instead, I have you covered:&lt;/p&gt;

&lt;p&gt;To give you some chills from your agile process, let’s suppose you receive the following ticket:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Title&lt;/strong&gt; : Create a Docker image that standardizes our team’s development process&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As a&lt;/strong&gt; : DevOps Engineer&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I want&lt;/strong&gt; : To build a standard development environment for my team&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So that&lt;/strong&gt; : We can ensure that everyone develops their code with the same versions&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt; : Our image should have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;start from an Alpine image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;latest open-source version of Terraform installed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;OpenTofu 1.8.1&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Python 3.12 and pip&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubectl 1.28&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ansible 2.15&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Golang 1.21&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Acceptance criteria:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Dockerfile is created that builds the image with all the tools specified and their versions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The image is built successfully and tested to ensure all tools function properly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Documentation on how to use the image is provided&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ok, now that we’ve received the ticket, let’s start building the Dockerfile that solves it.&lt;/p&gt;

&lt;p&gt;We will start with a specific version of Alpine, and you may wonder why we are not using the &lt;strong&gt;“latest”&lt;/strong&gt; tag. That’s because a new version may introduce dependency issues or unexpected changes that break your image build. It is a best practice to avoid “latest” for anything you are building, because pinning versions ensures consistency.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine:3.20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So, in the above example, I’m specifying the base image as alpine:3.20.&lt;/p&gt;

&lt;p&gt;Now, we haven’t received exact instructions about which Terraform version to use, but we know we have to use the latest open-source version. After some research in their repository, we found that the latest open-source version is 1.5.7:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e4327a7-79b1-4b6d-9b23-f39322b37663_1334x502.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsdq0ut0uqh3zjasu3sn.png" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe8876a72-f303-4a41-a448-81bd51349aaa_1400x449.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7fiixtlhdy1z0eiwqll.png" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to define our environment with all the versions we want to use. You may wonder why we are defining them inside an ENV block: that’s because we want to be able to update the versions easily when required in the future.&lt;/p&gt;

&lt;p&gt;Also, I have set the pipx bin directory to a location on our PATH. This will be required to install Ansible easily.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENV TERRAFORM_VERSION=1.5.7 \
    OPENTOFU_VERSION=1.8.1 \
    PYTHON_VERSION=3.12.0 \
    KUBECTL_VERSION=1.28.0 \
    ANSIBLE_VERSION=2.15.0 \
    GOLANG_VERSION=1.21.0 \
    PIPX_BIN_DIR=/usr/local/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let’s install the dependencies and some of the helper tools you may need for a productive development environment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN apk add --no-cache \
    curl \
    bash \
    git \
    wget \
    unzip \
    make \
    build-base \
    py3-pip \
    pipx \
    openssh-client \
    gnupg \
    libc6-compat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s add the instructions that install Terraform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip terraform.zip &amp;amp;&amp;amp; \
    mv terraform /usr/local/bin/ &amp;amp;&amp;amp; \
    rm terraform.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;First we download the Terraform archive, then unzip it, move the executable into our PATH, and finally remove the archive.&lt;/p&gt;

&lt;p&gt;We will do the same process for OpenTofu:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN wget -O tofu.zip https://github.com/opentofu/opentofu/releases/download/v${OPENTOFU_VERSION}/tofu_${OPENTOFU_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip tofu.zip &amp;amp;&amp;amp; \
    mv tofu /usr/local/bin/ &amp;amp;&amp;amp; \
    rm tofu.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let’s install kubectl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN curl -LO "https://dl.k8s.io/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl" &amp;amp;&amp;amp; \
    chmod +x kubectl &amp;amp;&amp;amp; \
    mv kubectl /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will download the kubectl binary, make it executable, and then move it to the path.&lt;/p&gt;

&lt;p&gt;Now, let’s install Ansible:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN pipx install ansible-core==${ANSIBLE_VERSION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are finally ready to install the last tool, golang:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN wget https://golang.org/dl/go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    tar -C /usr/local -xzf go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    rm go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    ln -s /usr/local/go/bin/go /usr/local/bin/go &amp;amp;&amp;amp; \
    ln -s /usr/local/go/bin/gofmt /usr/local/bin/gofmt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the same way, we download the archive and extract it, but this time we create symlinks to ensure go is in our PATH.&lt;/p&gt;

&lt;p&gt;Let’s also set a workdir. When we run our container, this will be our starting directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WORKDIR /workspace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We should also add the default command we want our container to run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CMD ["bash"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the end, your Dockerfile should look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine:3.20

ENV TERRAFORM_VERSION=1.5.7 \
    OPENTOFU_VERSION=1.8.1 \
    PYTHON_VERSION=3.12.0 \
    KUBECTL_VERSION=1.28.0 \
    ANSIBLE_VERSION=2.15.0 \
    GOLANG_VERSION=1.21.0 \
    PIPX_BIN_DIR=/usr/local/bin

RUN apk add --no-cache \
    curl \
    bash \
    git \
    wget \
    unzip \
    make \
    build-base \
    py3-pip \
    pipx \
    openssh-client \
    gnupg \
    libc6-compat

RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip terraform.zip &amp;amp;&amp;amp; \
    mv terraform /usr/local/bin/ &amp;amp;&amp;amp; \
    rm terraform.zip

RUN wget -O tofu.zip https://github.com/opentofu/opentofu/releases/download/v${OPENTOFU_VERSION}/tofu_${OPENTOFU_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip tofu.zip &amp;amp;&amp;amp; \
    mv tofu /usr/local/bin/ &amp;amp;&amp;amp; \
    rm tofu.zip

RUN curl -LO "https://dl.k8s.io/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl" &amp;amp;&amp;amp; \
    chmod +x kubectl &amp;amp;&amp;amp; \
    mv kubectl /usr/local/bin/

RUN pipx install ansible-core==${ANSIBLE_VERSION}

RUN wget https://golang.org/dl/go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    tar -C /usr/local -xzf go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    rm go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    ln -s /usr/local/go/bin/go /usr/local/bin/go &amp;amp;&amp;amp; \
    ln -s /usr/local/go/bin/gofmt /usr/local/bin/gofmt

WORKDIR /workspace

CMD ["bash"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s go to the directory that contains our Dockerfile and build the image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t dev_image:1.0.0 .
[+] Building 38.2s (8/11)                                                                                                                          docker:desktop-linux
[+] Building 48.1s (12/12) FINISHED                                                                                                                docker:desktop-linux
 =&amp;gt; [internal] load build definition from Dockerfile                                                                                                               0.0s
 =&amp;gt; =&amp;gt; transferring dockerfile: 1.46kB                                                                                                                             0.0s
 =&amp;gt; [internal] load metadata for docker.io/library/alpine:3.20                                                                                                     0.5s
 =&amp;gt; [internal] load .dockerignore                                                                                                                                  0.0s
 =&amp;gt; =&amp;gt; transferring context: 2B                                                                                                                                    0.0s
 =&amp;gt; CACHED [1/8] FROM docker.io/library/alpine:3.20@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5                                        0.0s
 =&amp;gt; [2/8] RUN apk add --no-cache     curl     bash     git     wget     unzip     make     build-base     py3-pip     pipx     openssh-client     gnupg     libc  17.3s 
 =&amp;gt; [3/8] RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip &amp;amp;&amp;amp;     unzip terraform.zip &amp;amp;&amp;amp;     mv terraform  2.7s 
 =&amp;gt; [4/8] RUN wget -O tofu.zip https://github.com/opentofu/opentofu/releases/download/v1.8.1/tofu_1.8.1_linux_amd64.zip &amp;amp;&amp;amp;     unzip tofu.zip &amp;amp;&amp;amp;     mv tofu /usr  4.1s 
 =&amp;gt; [5/8] RUN curl -LO "https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl" &amp;amp;&amp;amp;     chmod +x kubectl &amp;amp;&amp;amp;     mv kubectl /usr/local/bin/                       5.8s 
 =&amp;gt; [6/8] RUN pipx install ansible-core==2.15.0                                                                                                                    7.8s 
 =&amp;gt; [7/8] RUN wget https://golang.org/dl/go1.21.0.linux-amd64.tar.gz &amp;amp;&amp;amp;     tar -C /usr/local -xzf go1.21.0.linux-amd64.tar.gz &amp;amp;&amp;amp;     rm go1.21.0.linux-amd64.tar  9.2s 
 =&amp;gt; [8/8] WORKDIR /workspace                                                                                                                                       0.0s 
 =&amp;gt; exporting to image                                                                                                                                             0.5s 
 =&amp;gt; =&amp;gt; exporting layers                                                                                                                                            0.5s 
 =&amp;gt; =&amp;gt; writing image sha256:23fe925c0eb2e0931bc86f592373bcd13916e6dbbb4ce74b18fff846fb8f2f4d                                                                       0.0s 
 =&amp;gt; =&amp;gt; naming to docker.io/library/dev_image:1.0.0                                                                                                                 0.0s 

What's next:
    View a summary of image vulnerabilities and recommendations → docker scout quickview
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;-t&lt;/code&gt; flag of the docker build command lets us specify the image name and tag in the &lt;code&gt;name:tag&lt;/code&gt; format.&lt;/p&gt;

&lt;p&gt;Let’s see our newly created image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
REPOSITORY     TAG       IMAGE ID       CREATED         SIZE
dev_image      1.0.0     23fe925c0eb2   7 minutes ago   783MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s create a container from our image and check all the tool versions to see if we meet the acceptance criteria in our ticket:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -ti dev_image:1.0.0
062c8343eef7:/workspace#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These two options combined (&lt;code&gt;-ti&lt;/code&gt;) allow you to run a container interactively with a terminal attached. This is especially useful for running a shell inside the container, so you can execute commands directly, as we want to do here.&lt;/p&gt;

&lt;p&gt;Let’s check out our tools versions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F232e2c7f-b2cd-4030-8331-b1814858c08b_1400x419.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h5paejjl90pdbza7c3o.png" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07e299ca-942c-4fa3-8f90-08407f23a611_762x162.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xtwakxtdh7kc8t34cfb.png" width="762" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we have met two of the three acceptance criteria, and the documentation can easily be written from all the details provided above, so we can say this ticket can be moved into review 😁&lt;/p&gt;

&lt;p&gt;In the real world, you will most likely want to use an editor to edit your code and run it from the Docker container. You will also want to give your container a name. The easiest way to do both is when you create the container, like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -ti --name dev_container2 -v ~/Workspace:/workspace dev_image:1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command bind-mounts the Workspace directory from my user’s home to /workspace inside the container. If I create a container this way, I land directly in my workspace where I have all my code. Pretty neat, right?&lt;/p&gt;

&lt;p&gt;You may ask yourself why I deleted the archives while building the image. The reason is pretty simple: I wanted to make the image as small as possible to keep it portable.&lt;/p&gt;

&lt;p&gt;Everything seems simple, right? Well, it’s not. Until I got all the dependencies right, I messed it up a million times, so don’t worry if you also mess it up; it’s just part of the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Managing Docker containers&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Listing containers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We can list containers with the &lt;code&gt;docker ps&lt;/code&gt; command, but this alone will only show the containers that are running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps                                                                     
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                       NAMES
8315e81fd8e6   kindest/node:v1.30.0   "/usr/local/bin/entr…"   2 weeks ago   Up 9 hours   127.0.0.1:53925-&amp;gt;6443/tcp   my-cluster-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you want to see all your existing containers, you can run &lt;code&gt;docker ps -a&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                     PORTS                       NAMES
d6ae4b01f9de   dev_image:1.0.0        "bash"                   5 minutes ago    Exited (0) 5 minutes ago                               dev_container2
ad38f4f944e8   dev_image:1.0.0        "bash"                   5 minutes ago    Exited (0) 5 minutes ago                               dev_container
062c8343eef7   dev_image:1.0.0        "bash"                   16 minutes ago   Exited (0) 7 minutes ago                               mystifying_bhaskara
8315e81fd8e6   kindest/node:v1.30.0   "/usr/local/bin/entr…"   2 weeks ago      Up 9 hours                 127.0.0.1:53925-&amp;gt;6443/tcp   my-cluster-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Starting a container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is a very simple process: you can run &lt;code&gt;docker start container_name&lt;/code&gt; or &lt;code&gt;docker start first_few_letters_of_id&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;episode2 docker start d6
d6
➜  episode2 docker ps      
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                       NAMES
d6ae4b01f9de   dev_image:1.0.0        "bash"                   7 minutes ago   Up 2 seconds                               dev_container2
8315e81fd8e6   kindest/node:v1.30.0   "/usr/local/bin/entr…"   2 weeks ago     Up 9 hours     127.0.0.1:53925-&amp;gt;6443/tcp   my-cluster-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Attaching to a container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You’ve started your dev container, but how can you use it? Well, you need to attach to it. The command is &lt;code&gt;docker attach container_name&lt;/code&gt; or &lt;code&gt;docker attach first_few_letters_of_id&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;episode2 docker attach d6
d6ae4b01f9de:/workspace#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Stopping a container&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Stopping is just as simple as starting: run &lt;code&gt;docker stop container_name&lt;/code&gt; or &lt;code&gt;docker stop first_few_letters_of_id&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;episode2 docker start d6 
d6
➜  episode2 docker ps       
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS         PORTS                       NAMES
d6ae4b01f9de   dev_image:1.0.0        "bash"                   11 minutes ago   Up 5 seconds                               dev_container2
8315e81fd8e6   kindest/node:v1.30.0   "/usr/local/bin/entr…"   2 weeks ago      Up 9 hours     127.0.0.1:53925-&amp;gt;6443/tcp   my-cluster-control-plane
➜  episode2 docker stop d6 
d6
➜  episode2 docker ps     
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                       NAMES
8315e81fd8e6   kindest/node:v1.30.0   "/usr/local/bin/entr…"   2 weeks ago   Up 9 hours   127.0.0.1:53925-&amp;gt;6443/tcp   my-cluster-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Removing containers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To remove a container, run &lt;code&gt;docker rm container_name&lt;/code&gt; or &lt;code&gt;docker rm first_few_letters_of_id&lt;/code&gt;. Note that the container must be stopped first, or you can force removal with &lt;code&gt;docker rm -f&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rm d6                                                                  
d6
➜  episode2 docker ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS                      PORTS                       NAMES
ad38f4f944e8   dev_image:1.0.0        "bash"                   14 minutes ago   Exited (0) 14 minutes ago                               dev_container
062c8343eef7   dev_image:1.0.0        "bash"                   25 minutes ago   Exited (0) 16 minutes ago                               mystifying_bhaskara
8315e81fd8e6   kindest/node:v1.30.0   "/usr/local/bin/entr…"   2 weeks ago      Up 9 hours                  127.0.0.1:53925-&amp;gt;6443/tcp   my-cluster-control-plane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This was originally posted on &lt;a href="https://techblog.flaviusdinu.com/2-setting-sail-with-docker-understanding-containers-and-images-e3c3a30046f4" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  3. Docker Networks and Volumes
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker network types&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Docker offers several network types, each serving different purposes:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Bridge Network&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The bridge network is the default network type in Docker. When you start a container without specifying a network, it’s automatically attached to the bridge network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Containers on the same bridge can communicate with each other&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses Network Address Translation (NAT) for external communication&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Host Network&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The host network removes network isolation between the container and the Docker host.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Container uses the host’s network stack directly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Useful for optimizing performance in specific scenarios&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Overlay Network&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Overlay networks are used in Docker Swarm mode to enable communication between containers across multiple Docker hosts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enables multi-host networking&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Essential for deploying swarm services&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Macvlan Network&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Ipvlan Network&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ipvlan is similar to Macvlan but works at the network layer instead of the data link layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;6. None Network&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The none network disables networking for a container. It’s useful when you want to completely isolate a container from the network.&lt;/p&gt;

&lt;p&gt;To be completely honest, I haven’t needed #4 or #5 in my own work, but I have used all the others. Of course, bridge is the one I’ve used the most.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker network examples&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s list the existing networks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network list
NETWORK ID     NAME      DRIVER    SCOPE
9a571665758c   bridge    bridge    local
ff5a515b4fd3   host      host      local
cdf9ca0775b3   kind      bridge    local
a62f6451cad1   none      null      local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Bridge network example&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now, let’s create a new bridge network:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create --driver bridge my_new_bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will see output similar to this, showing the full ID of the bridge network you have just created.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;06ca1c432576d5e865da9a0bf4d...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we list the existing networks again, we will see the new bridge in the list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network list                                
NETWORK ID     NAME            DRIVER    SCOPE
9a571665758c   bridge          bridge    local
ff5a515b4fd3   host            host      local
cdf9ca0775b3   kind            bridge    local
06ca1c432576   my_new_bridge   bridge    local
a62f6451cad1   none            null      local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s create two alpine containers in this bridge, and ping one from the other:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -dit --name alpine1 --network my_new_bridge alpine
docker run -dit --name alpine2 --network my_new_bridge alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are using the &lt;code&gt;-d&lt;/code&gt; option because we don’t want to stay attached to the containers, and the &lt;code&gt;-it&lt;/code&gt; options so the shell inside each container keeps running instead of exiting immediately.&lt;/p&gt;

&lt;p&gt;If we run &lt;code&gt;docker ps&lt;/code&gt; we can see that both containers are up and running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps                                                    
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                       NAMES
a51f7134e7ea   alpine                 "/bin/sh"                28 seconds ago   Up 28 seconds                               alpine2
cacfa2c11cf1   alpine                 "/bin/sh"                34 seconds ago   Up 33 seconds                               alpine1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s ping one from the other:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it alpine1 ping -c 2 alpine2

PING alpine2 (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.079 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.136 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.079/0.107/0.136 ms


docker exec -it alpine2 ping -c 2 alpine1

PING alpine1 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.063 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.141 ms

--- alpine1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.063/0.102/0.141 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, ping works just fine between the two containers.&lt;/p&gt;

&lt;p&gt;Bonus: if you want to find the IP address of a container, you can use the &lt;code&gt;docker inspect&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect alpine1 | grep -i IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "172.19.0.2",
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And of course, ping works on the IP address as well, not only on the container name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it alpine2 ping -c 2 172.19.0.2 

PING 172.19.0.2 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.104 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.159 ms

--- 172.19.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.104/0.131/0.159 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Host network example&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A good candidate for a host network example would be a network monitoring tool that needs direct access to the host system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This example only works on Linux.&lt;/p&gt;

&lt;p&gt;For this example, I will use the &lt;a href="https://prometheus.io/docs/guides/node-exporter/" rel="noopener noreferrer"&gt;Prometheus node exporter&lt;/a&gt;, which is a tool that exposes hardware and OS metrics to Prometheus:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name node_exporter --network host prom/node-exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s see the container port:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect --format='{{.Config.ExposedPorts}}' node_exporter

map[9100/tcp:{}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: &lt;code&gt;docker inspect&lt;/code&gt; returns JSON, so you can leverage the &lt;code&gt;--format&lt;/code&gt; option (a Go template) to extract only the relevant information about your containers. In the case above, I’m looking under &lt;code&gt;Config&lt;/code&gt;, and because one of its keys is &lt;code&gt;ExposedPorts&lt;/code&gt;, I can easily read the value under it as well.&lt;/p&gt;
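&lt;p&gt;If Go templates feel unfamiliar, you can get the same information by parsing the inspect JSON yourself. A minimal Python sketch, using sample data shaped like the output above (in practice you would capture the output of &lt;code&gt;docker inspect node_exporter&lt;/code&gt;):&lt;/p&gt;

```python
import json

# `docker inspect` prints a JSON array with one object per container.
# This sample is trimmed to the fields we care about.
inspect_output = """
[
  {
    "Config": {
      "ExposedPorts": {"9100/tcp": {}}
    }
  }
]
"""

containers = json.loads(inspect_output)
# ExposedPorts is a map whose keys are the "port/protocol" strings
ports = list(containers[0]["Config"]["ExposedPorts"])
print(ports)  # ['9100/tcp']
```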

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d577b45-aac2-4732-9ad8-2a1e9fbd620a_1400x595.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foul45846g3o3ylj6c6rw.png" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc8cca0c-a332-4618-824e-1a884a97746d_1400x911.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8j91uvjhb347g3qbn1no.png" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Docker Desktop for Windows and macOS runs containers inside a lightweight virtual machine. This means the &lt;code&gt;--network host&lt;/code&gt; option doesn't work as it does on Linux systems. If you want to run this example on those operating systems, you should use port mapping instead, like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -d --name node_exporter -p 9100:9100 prom/node-exporter&lt;/code&gt;&lt;/p&gt;
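&lt;p&gt;Whichever way the port ends up published, a plain TCP connect is enough to verify that it is reachable from your machine. A small Python sketch (the host and port below are just the values from this example):&lt;/p&gt;

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After starting node_exporter with -p 9100:9100, this would print True:
# print(port_open("localhost", 9100))
```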

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker volumes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Docker volumes allow you to persist data generated by containers. Data inside a container is ephemeral (it is deleted together with the container), so if your workloads need to keep data around, volumes are the mechanism that saves it.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why use Docker volumes?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are a few reasons to use Docker volumes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Persistence —&lt;/strong&gt; ensures that data isn’t lost when a container stops or is deleted&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Sharing&lt;/strong&gt; — you can share data between containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Isolation —&lt;/strong&gt; data is kept outside the container’s writable layer, meaning that there is a better separation of concerns&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Docker volume types&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are three types of Docker volumes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Named volumes&lt;/strong&gt; — volumes managed by Docker and stored in Docker’s default location on the host.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Anonymous volumes&lt;/strong&gt; — similar to named volumes, but they are assigned a random name when they are created. This name is unique across the Docker host. Just like named volumes, anonymous volumes persist even if you remove the container that uses them, unless you created the container with the &lt;code&gt;--rm&lt;/code&gt; flag, in which case the anonymous volume is destroyed along with it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bind mounts&lt;/strong&gt; — while bind mounts are not exactly volumes, I like to think of them as volumes. You’ve already seen this in the last part, when I mounted an existing directory from the host system into a Docker container. In that example, I was passing my workspace directory (which had my source code) to a container to ensure that I ran my code with the specific versions I used in my developer environment image.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are also some &lt;a href="https://semaphoreci.com/blog/docker-volumes" rel="noopener noreferrer"&gt;docker volume classes&lt;/a&gt;, but I’ll let you discover them yourselves if you are interested.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Named volume example&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s create a named volume:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create nginx                                        
nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s create an nginx container that uses this volume:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name nginx_named -v nginx:/usr/share/nginx/html -p 8080:80 nginx:latest 
d96aea6e5cd85ca238150606b0555ead92274a8561c69c6364f45322276ad063
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve mounted the volume I created at &lt;code&gt;/usr/share/nginx/html&lt;/code&gt; and also mapped port 8080 on my machine to port 80 in the container so I can access nginx locally.&lt;/p&gt;

&lt;p&gt;Now, let’s modify the content of the index.html file from our volume:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec nginx_named sh -c "echo 'Hello from the named volume!' &amp;gt; /usr/share/nginx/html/index.html"
curl http://localhost:8080                                                                            
Hello from the named volume!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;OK, now let’s delete the container and create a new one to easily see whether the content is still there:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop nginx_named                                                                               
nginx_named

docker rm nginx_named                                                                  
nginx_named

docker run -d --name nginx_named -v nginx:/usr/share/nginx/html -p 8080:80 nginx:latest           
e1204ca5dd9aec68bbefb97e8b39c5acbca284f569edf44420e79a2b6b8b6cf7

curl http://localhost:8080                                                                            
Hello from the named volume!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, the content is still there because our data persisted.&lt;/p&gt;

&lt;p&gt;Let’s clean up before going to the next example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop nginx_named                                                                               
nginx_named

docker rm nginx_named                                                                  
nginx_named

docker volume rm nginx    
nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Anonymous volume example&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;An anonymous volume doesn’t have to be created before the container. It is created on the fly when you pass only a container path to the &lt;code&gt;-v&lt;/code&gt; option while creating the container:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name nginx_anonymous_volume -v /usr/share/nginx/html -p 8081:80 nginx:latest 
e739ee4fe20f8cd7f253105b45aa46a311979f56aa6cfce5c858617fddaec800

docker exec nginx_anonymous_volume sh -c "echo 'Hello from Anonymous Volume!' &amp;gt; /usr/share/nginx/html/index.html"

curl http://localhost:8081
Hello from Anonymous Volume!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s mount this volume to another nginx container. To do that, we will first need to get the unique identifier associated with it, which we can do with &lt;code&gt;docker inspect&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker inspect --format='{{.Mounts}}' nginx_anonymous_volume

[{volume 6895fe7f79404ec8a2b337f3e74de6291d03c869c89d18bc866a11ae64b66e18
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Based on that long unique identifier, we can mount this volume to another container like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name nginx_anonymous_volume2 -v 6895fe7f79404ec8a2b337f3e74de6291d03c869c89d18bc866a11ae64b66e18:/usr/share/nginx/html -p 8082:80 nginx:latest
c55a85c4dca0fa34d00b9d4010b44f02d17e303f7602c5618d32fd5c6b8f62ca
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s check whether this container returns the same content when we access it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:8082
Hello from Anonymous Volume!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-3-docker-networks-and-volumes-32410557f7af" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  4. Docker registries
&lt;/h1&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Publishing an image to a Docker registry&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For this example, we will use Docker Hub, as it is Docker’s own container registry. Head over to &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;https://hub.docker.com/&lt;/a&gt; and create an account.&lt;br&gt;&lt;br&gt;
After you create your account, you should see something similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6dd2f676-aa4d-455a-9ae7-5fb034b980e8_1400x812.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem2f2jb253szsey7y4ny.png" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I already have two images in this account, but since you’ve just created your account from scratch, yours should be empty. Now, let’s create a repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0cc454b0-ee37-4ca9-9747-44d6bacbca91_1400x511.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdgdk6a8kzxiggrpzcfai.png" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will need to give your repository a name and a short description, and you can also choose whether it is public or private. Keep in mind that you can only have one private repository on a free account. Also take note of the suggested commands for pushing an image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag local-image:tagname new-repo:tagname


docker push new-repo:tagname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For this example, I will use the development environment image that I built in the second part. Don’t worry if this is the first part you’re reading: I’ll show you the image again, and we will build it, tag it, and push it.&lt;/p&gt;

&lt;p&gt;Before showing the Dockerfile and going through the process, this is the info I’ll use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fadad3e-fb86-49dc-b1f7-7c31342449db_1400x368.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7oind1h9n46ytfb79eph.png" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I clicked on create and this is the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F931d8fa5-bf5d-4616-9f9e-3e9ce1c37fdd_1400x656.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi82vrhu82hp6xognso3e.png" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s go through the process and push an image.&lt;/p&gt;

&lt;p&gt;This is the Dockerfile I will use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine:3.20

ENV TERRAFORM_VERSION=1.5.7 \
    OPENTOFU_VERSION=1.8.1 \
    PYTHON_VERSION=3.12.0 \
    KUBECTL_VERSION=1.28.0 \
    ANSIBLE_VERSION=2.15.0 \
    GOLANG_VERSION=1.21.0 \ 
    PIPX_BIN_DIR=/usr/local/bin

RUN apk add --no-cache \
    curl \
    bash \
    git \
    wget \
    unzip \
    make \
    build-base \
    py3-pip \
    pipx \
    openssh-client \
    gnupg \
    libc6-compat

RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip terraform.zip &amp;amp;&amp;amp; \
    mv terraform /usr/local/bin/ &amp;amp;&amp;amp; \
    rm terraform.zip

RUN wget -O tofu.zip https://github.com/opentofu/opentofu/releases/download/v${OPENTOFU_VERSION}/tofu_${OPENTOFU_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip tofu.zip &amp;amp;&amp;amp; \
    mv tofu /usr/local/bin/ &amp;amp;&amp;amp; \
    rm tofu.zip

RUN curl -LO "https://dl.k8s.io/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl" &amp;amp;&amp;amp; \
    chmod +x kubectl &amp;amp;&amp;amp; \
    mv kubectl /usr/local/bin/

RUN pipx install ansible-core==${ANSIBLE_VERSION}

RUN wget https://golang.org/dl/go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    tar -C /usr/local -xzf go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    rm go$GOLANG_VERSION.linux-amd64.tar.gz &amp;amp;&amp;amp; \
    ln -s /usr/local/go/bin/go /usr/local/bin/go &amp;amp;&amp;amp; \
    ln -s /usr/local/go/bin/gofmt /usr/local/bin/gofmt

WORKDIR /workspace

CMD ["bash"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let’s build it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t devops-dev-env:1.0.0 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s check if it’s available locally:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images                         
REPOSITORY           TAG       IMAGE ID       CREATED        SIZE
devops-dev-env       1.0.0     c3069a674574   2 days ago     351MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are now ready to tag it with our repository name (I could’ve done this at build time, but I wanted to show you that you don’t need to worry about it). Your repository name will be different from mine, so adjust the commands accordingly.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag devops-dev-env:1.0.0 flaviuscdinu93/devops-dev-env:1.0.0


docker images                                                      
REPOSITORY                      TAG       IMAGE ID       CREATED        SIZE
flaviuscdinu93/devops-dev-env   1.0.0     c3069a674574   2 days ago     351MB
devops-dev-env                  1.0.0     c3069a674574   2 days ago     351MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before pushing the image, ensure you are signed in with your user in Docker Desktop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F11ad7663-ddcd-4d39-b4d2-8d1d4d62f0da_488x770.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2mnzfx5aaxh0u482u3l.png" width="488" height="770"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s push the image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push flaviuscdinu93/devops-dev-env:1.0.0
The push refers to repository [docker.io/flaviuscdinu93/devops-dev-env]
e46b168236e7: Pushed 
9b08b92b54a9: Pushed 
f3e42b4e3f66: Pushed 
4b49e924d63f: Pushed 
190aeee88612: Pushed 
8fc9c443438f: Pushed 
9110f7b5208f: Pushed 
1.0.0: digest: sha256:ffa6b.... size: 1794
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, if we head back to Docker Hub, we will see our image there:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07bb1d70-4660-4943-a85f-3e875915792d_1400x314.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6r4zcov8ph2d8sfax9ze.png" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can click on public view, and see what others will see when they find our image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb58a97b-e2d8-45a2-8753-2db16cd97473_1400x424.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdr33j7p2969xckj6no87.png" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we want to pull our image from the registry, we can simply run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull flaviuscdinu93/devops-dev-env:1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Top 10 Docker Registries&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, I will present what I consider the most popular registries, but I won’t show you how to push an image to each of them. Nevertheless, the process is very similar to the one I presented above.&lt;/p&gt;

&lt;p&gt;Without further ado, these are the most popular registries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Docker Hub&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Container Registry&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Elastic Container Registry (ECR)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Azure Container Registry (ACR)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google Artifact Registry&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitLab Container Registry&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Oracle Container Registry&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JFrog Artifactory&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Harbor&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Red Hat Quay&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;1. Docker Hub&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We’ve already seen how Docker Hub works and how it looks so I don’t think it needs much presentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2721249b-e9f9-4b8e-919a-1f547007e508_1400x812.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftoocnp3ory8jpsoaqxj7.png" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It offers a massive repository of publicly available images, including official images for popular open-source projects, databases, and programming languages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Free tier with access to public images&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automated builds from your VCS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrates easily with Docker Desktop&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Offers private repositories (1 for free, but there are paid plans available)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://github.com/features/packages" rel="noopener noreferrer"&gt;2. GitHub Container Registry&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you are already using GitHub for version control, the GitHub Container Registry seems like a very viable choice for storing and managing your Docker or even OCI images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee913b1a-60f1-4308-9c8b-0ddd5514622f_1400x779.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynsaem24m07vcjfowz26.png" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GitHub Container Registry is part of the GitHub Packages offering, which also hosts other popular package types such as npm, RubyGems, Apache Maven, and Gradle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrated with GitHub Actions for CI/CD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Free tier available with public repositories&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Private repositories included in GitHub Pro and GitHub Teams plans&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fine-grained permissions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
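&lt;p&gt;As a hedged sketch of what pushing here looks like (the username, image name, and tag below are placeholders, and the &lt;code&gt;docker&lt;/code&gt; commands are left commented out since they need a running Docker daemon and a token), the flow mirrors the Docker Hub one:&lt;/p&gt;

```shell
# Illustrative sketch: tagging an image for GitHub Container Registry.
# your-github-username, my-app, and 1.0.0 are placeholders, not real values.
REGISTRY=ghcr.io
USERNAME=your-github-username
IMAGE=my-app
TAG=1.0.0
REMOTE="$REGISTRY/$USERNAME/$IMAGE:$TAG"
echo "$REMOTE"

# echo "$GITHUB_TOKEN" | docker login ghcr.io -u "$USERNAME" --password-stdin
# docker tag "$IMAGE:$TAG" "$REMOTE"
# docker push "$REMOTE"
```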
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://aws.amazon.com/ecr/" rel="noopener noreferrer"&gt;3. AWS Elastic Container Registry (ECR)&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon ECR is a Docker container registry provided by AWS. It integrates seamlessly with other AWS services, making it an excellent choice for users already in the AWS ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff444f8c1-aaa2-4b79-896f-d0362ca8e77d_1400x811.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqmmx7z0utg5twshqcgl.png" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrates seamlessly with Amazon ECS, EKS, Fargate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is highly scalable and secure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports both private and public repositories&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It has a Pay-as-you-go pricing model&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
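&lt;p&gt;Authentication is the main difference from Docker Hub. As a rough sketch (the region and account ID are placeholders; the &lt;code&gt;aws&lt;/code&gt; and &lt;code&gt;docker&lt;/code&gt; commands are commented out since they need credentials and a daemon), logging in and pushing looks like this:&lt;/p&gt;

```shell
# Illustrative sketch: logging Docker in to a private ECR registry.
# REGION and ACCOUNT are placeholders; replace them with your own values.
REGION=us-east-1
ACCOUNT=123456789012
ECR="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"
echo "$ECR"

# aws ecr get-login-password --region "$REGION" |
#   docker login --username AWS --password-stdin "$ECR"
# docker tag my-app:1.0.0 "$ECR/my-app:1.0.0"
# docker push "$ECR/my-app:1.0.0"
```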
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://azure.microsoft.com/en-us/products/container-registry" rel="noopener noreferrer"&gt;4. Azure Container Registry (ACR)&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Azure Container Registry is a Docker container registry provided by Microsoft. If you are already in the Microsoft ecosystem and your workflows depend heavily on a container registry, it is a no-brainer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf59611f-b55f-42c0-944d-d8c2cc3ee6c6_820x536.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpecsbt66voss3skxitpa.png" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrated with Azure Kubernetes Service (AKS)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Geo-replication for high availability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports Helm charts and OCI artifacts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Built-in tasks for image building and patching&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://cloud.google.com/artifact-registry" rel="noopener noreferrer"&gt;5. Google Artifact Registry (GAR)&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Google Artifact Registry is Google’s answer to Docker image management. It’s pretty similar to the GitHub Packages offering and integrates tightly with Google Cloud services. Of course, if you are using Google Cloud, using Google’s registry makes a lot of sense.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F64459929-0e4c-4957-82ef-e472ef5a125d_1400x778.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknw8kq2vaeniul737rs3.png" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrated with Google Kubernetes Engine (GKE)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Global availability and high security&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrated with Cloud IAM for fine-grained access control&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatic vulnerability scanning&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://docs.gitlab.com/ee/user/packages/container_registry/" rel="noopener noreferrer"&gt;6. GitLab Container Registry&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;GitLab Container Registry is a part of the GitLab CI/CD platform. It allows users to build, test, and deploy Docker images directly from their GitLab repositories, and if you keep your code on GitLab, you can easily use it for hosting your Docker images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F984eec14-e982-4ca1-8745-4a1ea9e8d1dd_1400x808.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ab6h9ulrxwh8y4lj6h9.png" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrated with GitLab CI/CD for seamless workflows&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Free for both public and private repositories&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fine-grained access control&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Built-in monitoring and logging&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://container-registry.oracle.com/ords/f?p=113%3A10%3A%3A%3A%3A%3A%3A" rel="noopener noreferrer"&gt;7. Oracle Container Registry&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Oracle Container Registry is Oracle Cloud Infrastructure’s offering for managing your Docker images. It is tailored for Oracle products and services, making it ideal for organizations using Oracle Cloud Infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99e1afe8-611b-487c-b8c3-d6c8fe5a9f9a_1400x723.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9vrbinb6n9d0pwmspdu.png" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Offers pre-built images for Oracle products&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrated with Oracle Cloud Infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Secure and compliant with Oracle standards&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports both public and private repositories&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://jfrog.com/artifactory/" rel="noopener noreferrer"&gt;8. JFrog Artifactory&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;JFrog Artifactory is a universal artifact repository manager that supports Docker images along with other build artifacts. Artifactory is known for its robust security features and extensive integration options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac8f8b1d-83fa-45a3-ad34-0ba91c0a4f6b_1400x404.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56avfcy2omopn6pjfg17.png" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Supports Docker, Helm, and other package formats&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Advanced security features such as Xray for vulnerability scanning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;High availability and scalability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integration with various CI/CD tools&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://goharbor.io/" rel="noopener noreferrer"&gt;9. Harbor&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Harbor is a CNCF-graduated open-source container registry that emphasizes security and compliance. It has a range of features to manage Docker images, including RBAC, vulnerability scanning, and image signing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe8cd7379-2407-4b3f-90fc-845d74ad3b77_1400x631.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frha8j1p3gfk7kkw2mklh.png" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Role-based access control (RBAC)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrated with Clair for vulnerability scanning&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Image replication across multiple registries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Content trust and image signing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;&lt;a href="https://quay.io/" rel="noopener noreferrer"&gt;10. RedHat Quay&lt;/a&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you are in the RedHat ecosystem, Quay is a viable choice for hosting your Docker images. It offers both public and private image repositories, along with automated security scanning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c346c0-d120-47ec-bc4c-0691876f9a4c_1400x779.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd6b1pc8i581969br2bp.png" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Continuous security monitoring&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrated with OpenShift for seamless Kubernetes deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supports multiple image storage backends&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Customizable notifications and webhooks&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-4-top-10-registries-for-your-docker-images-0e34dac5ba92" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;


&lt;h1&gt;
  
  
  5. Docker Compose
&lt;/h1&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;What is Docker Compose?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Docker Compose is a Docker-native tool that helps you manage multi-container applications by defining their configuration in a &lt;code&gt;yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;You can add all details related to your containers in this file and also define their networks and volumes.&lt;/p&gt;

&lt;p&gt;It offers the following benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Keeps it simple — you can define your entire application in a single &lt;code&gt;yaml&lt;/code&gt; file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reusability — you can easily share and leverage version control for your configurations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consistency — ensures your environments are the same, regardless of the operating system you are using&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability — you can easily scale services up or down with simple Docker Compose commands&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Docker Compose commands&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here are some of the most important arguments that can be used with the &lt;code&gt;docker-compose&lt;/code&gt; command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;up&lt;/code&gt; — creates and starts containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;down&lt;/code&gt; — stops and removes containers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;start&lt;/code&gt; — starts existing containers for a service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;stop&lt;/code&gt; — stops running containers without removing them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;restart&lt;/code&gt; — restarts all stopped and running services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ps&lt;/code&gt; — lists containers and their statuses&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;logs&lt;/code&gt; — displays the log outputs of the services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;run&lt;/code&gt; — runs a one-time command on a service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;cp&lt;/code&gt; — copies files or folders between a container and the machine that runs Docker Compose&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;pull&lt;/code&gt; — pulls images for services defined in the compose file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;build&lt;/code&gt; — builds or rebuilds services defined in the compose file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;exec&lt;/code&gt; — executes a command in a running container&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are many other commands as well that you can use, but these are the most used ones. You can check what other commands are available by running &lt;code&gt;docker-compose --help&lt;/code&gt;.&lt;/p&gt;
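&lt;p&gt;To see how these fit together, here is a typical day-to-day flow, assuming a compose file in the current directory (the service name &lt;code&gt;backend&lt;/code&gt; is just an example; the commands are commented out since they need a running Docker daemon):&lt;/p&gt;

```shell
# Typical compose workflow; commands are commented out because they
# need a Docker daemon and a compose file in the current directory.
# docker-compose up -d            # create and start everything, detached
# docker-compose ps               # list containers and their statuses
# docker-compose logs -f backend  # follow the logs of one service
# docker-compose exec backend sh  # run a shell inside a running container
# docker-compose down             # stop and remove the containers
FLOW="up ps logs exec down"
echo "$FLOW"
```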
&lt;h2&gt;
  
  
  &lt;strong&gt;Docker Compose example&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before jumping into an example, let’s clarify some things first. If you write your Docker Compose configuration in a file named &lt;code&gt;docker-compose.yml&lt;/code&gt;, you won’t need to do anything special when you run a command.&lt;/p&gt;

&lt;p&gt;However, you don’t have to use that name; you can choose any name you want. The only difference is that you will need to point to your compose file with the &lt;code&gt;-f&lt;/code&gt; option, e.g. &lt;code&gt;docker-compose -f my-stack.yml up&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;OK, now we are ready to build an example. To demonstrate the power of Docker Compose, I will combine multiple technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Redis&lt;/strong&gt; — in-memory data store to persist the API hits of our application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flask&lt;/strong&gt; — processes requests and interacts with our Redis DB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NodeJS&lt;/strong&gt; — serves the frontend, making API requests to the Flask application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Nginx&lt;/strong&gt; — reverse proxy routing incoming requests to the Node.js frontend and managing traffic between the frontend and backend&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First things first, I will create a &lt;code&gt;docker-compose.yml&lt;/code&gt; file in which I will define a network:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  my_bridge_network:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I will use this bridge network for all of the containers I will define. Next, I will define a volume that Redis will use, as I want to have this data saved, regardless of running &lt;code&gt;docker-compose down&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
  redis_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s start defining our services:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  redis:
    image: redis:6.2.6
    container_name: redis_server
    networks:
      - my_bridge_network
    volumes:
      - redis_data:/data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above example, I’m using the &lt;code&gt;redis:6.2.6&lt;/code&gt; image, and I’ve named my container &lt;code&gt;redis_server&lt;/code&gt;. I’m connecting it to the existing network and mounting my named volume at the &lt;code&gt;/data&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Next, let’s define the Flask application:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;backend:
    image: python:3.9.7-slim
    container_name: flask_backend
    working_dir: /app
    volumes:
      - ./backend:/app
    command: &amp;gt;
      sh -c "pip install --no-cache-dir -r requirements.txt &amp;amp;&amp;amp;
        python app.py"
    networks:
      - my_bridge_network
    depends_on:
      - redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, in this case, I’ve defined a working directory called &lt;code&gt;/app&lt;/code&gt;, and I’m mounting an existing directory on my local machine called &lt;code&gt;backend&lt;/code&gt; at this mount point.&lt;/p&gt;

&lt;p&gt;Next, I’m running two commands: one that installs the prerequisites from &lt;code&gt;requirements.txt&lt;/code&gt; and one that runs my application. I’m also selecting the network, and I’ve defined a &lt;code&gt;depends_on&lt;/code&gt; entry for redis, so the backend waits for the redis container before starting. Keep in mind that &lt;code&gt;depends_on&lt;/code&gt; only waits for the container to start, not for Redis to be fully ready to accept connections.&lt;/p&gt;

&lt;p&gt;By now, you are probably wondering what I have in the &lt;code&gt;backend&lt;/code&gt; directory, so let me show you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;app.py&lt;/strong&gt; — this connects to Redis using the container name and saves the number of API hits in Redis&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from flask import Flask, jsonify
import redis

app = Flask(__name__)
r = redis.Redis(host='redis_server', port=6379, decode_responses=True)

@app.route('/')
def hello():
    count = r.incr('hits')
    return jsonify(message="Hello from Flask!", hits=count)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;requirements.txt&lt;/strong&gt; — this contains the requirements of our application&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Flask
redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, let’s define the frontend service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; frontend:
    image: node:14.17.6-alpine
    container_name: node_frontend
    working_dir: /app
    volumes:
      - ./frontend:/app
    command: npm start
    ports:
      - "3000:3000"
    networks:
      - my_bridge_network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’m using a &lt;code&gt;node:14&lt;/code&gt; image, and I’m also mapping an existing directory called &lt;code&gt;frontend&lt;/code&gt; to &lt;code&gt;/app&lt;/code&gt;, which is set as the working directory. I’m also specifying a command that starts the &lt;code&gt;frontend&lt;/code&gt; application, the port mapping, and the same network.&lt;/p&gt;

&lt;p&gt;In the frontend directory, there are multiple folders and files, but I’ve created only one manually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;index.js&lt;/strong&gt; — I’m connecting here to my backend using the container name and Flask’s default port, which is 5000, and I’m using a GET request to find out how many hits my backend has&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const axios = require('axios');
const app = express();
const port = 3000;

app.get('/', async (req, res) =&amp;gt; {
    try {
        const response = await axios.get('http://flask_backend:5000/');
        res.send(`&amp;lt;h1&amp;gt;${response.data.message}&amp;lt;/h1&amp;gt;&amp;lt;p&amp;gt;API hits: ${response.data.hits}&amp;lt;/p&amp;gt;`);
    } catch (error) {
        res.send(`&amp;lt;h1&amp;gt;Error connecting to the backend&amp;lt;/h1&amp;gt;`);
    }
});

app.listen(port, () =&amp;gt; {
    console.log(`Frontend listening at http://localhost:${port}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To generate the other files, we need to go to our frontend directory and run the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init -y
npm install axios
npm install express
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will generate a &lt;code&gt;node_modules&lt;/code&gt; folder, a &lt;code&gt;package.json&lt;/code&gt; file and a &lt;code&gt;package-lock.json&lt;/code&gt; file. Go to the &lt;code&gt;package.json&lt;/code&gt; file and add a start command to the &lt;code&gt;scripts&lt;/code&gt;. In the end, it should look similar to this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "frontend",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" &amp;amp;&amp;amp; exit 1",
    "start": "node index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "dependencies": {
    "axios": "^1.7.7",
    "express": "^4.19.2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, we are ready to add the last container in our docker-compose configuration: nginx.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx:
    image: nginx:1.21.3
    container_name: nginx_proxy
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
    networks:
      - my_bridge_network
    depends_on:
      - frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Its setup is pretty similar to what we’ve seen before, and we are mounting an &lt;code&gt;nginx.conf&lt;/code&gt; file to the default configuration path in nginx.&lt;/p&gt;

&lt;p&gt;This is the content of &lt;code&gt;nginx.conf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;events { }

http {
    server {
        listen 80;

        location / {
            proxy_pass http://node_frontend:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, we’ve set &lt;code&gt;proxy_pass&lt;/code&gt; to our frontend.&lt;/p&gt;

&lt;p&gt;This is how our &lt;code&gt;docker-compose.yml&lt;/code&gt; file will look in the end:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  my_bridge_network:
    driver: bridge

volumes:
  redis_data:

services:
  redis:
    image: redis:6.2.6
    container_name: redis_server
    networks:
      - my_bridge_network
    volumes:
      - redis_data:/data

  backend:
    image: python:3.9.7-slim
    container_name: flask_backend
    working_dir: /app
    volumes:
      - ./backend:/app
    command: &amp;gt;
      sh -c "pip install --no-cache-dir -r requirements.txt &amp;amp;&amp;amp;
        python app.py"
    networks:
      - my_bridge_network
    depends_on:
      - redis

  frontend:
    image: node:14.17.6-alpine
    container_name: node_frontend
    working_dir: /app
    volumes:
      - ./frontend:/app
    command: npm start
    ports:
      - "3000:3000"
    networks:
      - my_bridge_network

  nginx:
    image: nginx:1.21.3
    container_name: nginx_proxy
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
    networks:
      - my_bridge_network
    depends_on:
      - frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And this is our directory structure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── backend
│   ├── app.py
│   └── requirements.txt
├── docker-compose.yml
├── frontend
│   ├── index.js
│   ├── package-lock.json
│   └── package.json
│   └── node_modules (a lot of files and folders here, omitted)
└── nginx.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, if we run &lt;code&gt;docker-compose up&lt;/code&gt;, the whole stack starts, and visiting &lt;code&gt;http://localhost&lt;/code&gt; shows the message from Flask together with the number of API hits.&lt;/p&gt;

&lt;p&gt;And if you run &lt;code&gt;docker-compose down&lt;/code&gt; and then &lt;code&gt;docker-compose up&lt;/code&gt; again, you will see that the count stays the same, because the data is saved in the named volume. Keep in mind that if you want to delete the volume, you can do so by running &lt;code&gt;docker-compose down -v&lt;/code&gt;.&lt;/p&gt;
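&lt;p&gt;If you want to verify this yourself, the named volume can be inspected directly. A sketch (commands commented out since they need a daemon; note that Compose prefixes volume names with the project name, which defaults to the directory name, so &lt;code&gt;myproject_redis_data&lt;/code&gt; below is a placeholder for the actual prefixed name):&lt;/p&gt;

```shell
# Sketch: checking that the redis_data volume survives `down`.
VOLUME_SUFFIX=redis_data
echo "$VOLUME_SUFFIX"

# docker-compose down                        # containers gone, volume kept
# docker volume ls                           # ..._redis_data still listed
# docker volume inspect myproject_redis_data # placeholder prefixed name
# docker-compose down -v                     # now the volume is removed too
```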

&lt;p&gt;I know this was a fairly complicated example, but these are the examples you will face in real life.&lt;/p&gt;

&lt;p&gt;As a DevOps engineer, you don’t need to know how to build these applications, as your developers will build them, but you should be able to follow their Python, JavaScript, Golang, or Java code and help them if there are issues making these containers communicate.&lt;/p&gt;

&lt;p&gt;At the same time, as a developer, you don’t need to master Docker Compose and every instruction it supports, but you do need to know how to containerize your application in a way that makes sense. In this example, we could’ve built custom images that had, for example, the requirements already installed.&lt;/p&gt;

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-5-docker-compose-9803506f8a1f" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  6. Securing your Docker Environment
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What are the most common Docker security risks?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before understanding how to improve security-related issues, we must first understand what the most common ones are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vulnerable base images&lt;/strong&gt; — if you are using outdated base images, they might contain known vulnerabilities. To solve this, you need to scan for vulnerabilities constantly and update your images as soon as issues appear.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Excessive privileges&lt;/strong&gt; — you might find it easier to run containers as root, but even though your job might be easier initially, running a container with root privileges increases the risk to your host machine if the container is compromised.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container breakout —&lt;/strong&gt; if an attacker gets access to one of your containers, they may be able to gain control over the host system as well. This usually happens because of vulnerable images, misconfigurations, and excessive privileges.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Permissive networks —&lt;/strong&gt; as you already know, by default Docker uses a bridge network that allows containers to communicate with each other and with external networks. To ensure better isolation and reduce the chance of breaches, make sure you do not expose services that should be kept internal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Insecure image registries —&lt;/strong&gt; there are many registries available online from which you can pull your container images. Using untrusted registries can lead to pulling compromised images that contain malware or other malicious software.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secrets management —&lt;/strong&gt; if you are not handling your secrets well, they can be easily exploited in an attack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vulnerabilities in orchestrators (K8s, Swarm)&lt;/strong&gt; — you need to ensure proper RBAC and network policies for your orchestrators to avoid the issues described above.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
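&lt;p&gt;For point 2 in particular, a few standard &lt;code&gt;docker run&lt;/code&gt; flags go a long way. A hedged sketch (the image name is a placeholder; the flags themselves are real Docker options, and the command is commented out since it needs a daemon):&lt;/p&gt;

```shell
# Sketch: running a container with reduced privileges.
# my-app:1.0.0 is a placeholder image name.
# docker run --user 1000:1000 \
#            --read-only \
#            --cap-drop ALL \
#            --security-opt no-new-privileges \
#            my-app:1.0.0
OPTS="--user --read-only --cap-drop --security-opt"
echo "$OPTS"
```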

&lt;h2&gt;
  
  
  &lt;strong&gt;Finding vulnerabilities in your Docker images&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are many tools you can use to find vulnerabilities in the Docker images you are using; they report each vulnerability with severity levels, descriptions, and references to CVE databases.&lt;/p&gt;

&lt;p&gt;Here are my top three:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://trivy.dev/" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/quay/clair" rel="noopener noreferrer"&gt;Clair&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.snyk.io/scan-using-snyk/snyk-container" rel="noopener noreferrer"&gt;Snyk Container&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will not compare them in this post, but I will show you how to use Trivy to scan your Docker images.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Using Trivy to scan Docker images&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://github.com/aquasecurity/trivy" rel="noopener noreferrer"&gt;Trivy’s GitHub repository&lt;/a&gt; and follow the instructions on how to install it on your operating system.&lt;/p&gt;

&lt;p&gt;Now, let’s run the image scanner and see which vulnerabilities exist in the image we built in the second part:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trivy image flaviuscdinu93/devops-dev-env:1.0.0

2024-09-08T15:43:58+03:00 INFO [vuln] Vulnerability scanning is enabled
2024-09-08T15:43:58+03:00 INFO [secret] Secret scanning is enabled
2024-09-08T15:43:58+03:00 INFO [secret] If your scanning is slow, please try '--scanners vuln' to disable secret scanning
2024-09-08T15:43:58+03:00 INFO [secret] Please see also https://aquasecurity.github.io/trivy/v0.55/docs/scanner/secret#recommendation for faster secret detection
2024-09-08T15:43:58+03:00 INFO Detected OS family="alpine" version="3.20.2"
2024-09-08T15:43:58+03:00 INFO [alpine] Detecting vulnerabilities... os_version="3.20" repository="3.20" pkg_num=71
2024-09-08T15:43:58+03:00 INFO Number of language-specific files num=4
2024-09-08T15:43:58+03:00 INFO [gobinary] Detecting vulnerabilities...
2024-09-08T15:43:58+03:00 INFO [python-pkg] Detecting vulnerabilities...
2024-09-08T15:43:58+03:00 WARN Using severities from other vendors for some vulnerabilities. Read https://aquasecurity.github.io/trivy/v0.55/docs/scanner/vulnerability#severity-selection for details.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output will be a breakdown based on the different components you have installed in the image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb95d35d6-cecc-43d6-9914-acc131d22d81_1400x528.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7vjfshd7ckofgr8mlmb.png" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because my image starts from alpine:3.20, it has a couple of issues, as you can see above. Every vulnerability has a CVE identifier; to learn more about one, just paste it into Google or whatever other search engine you are using.&lt;/p&gt;

&lt;p&gt;Check out this link if you want to learn more about &lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2024-45490" rel="noopener noreferrer"&gt;CVE-2024–45490&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, those are just the vulnerabilities related to Alpine, but in my image, I have vulnerabilities related to other tools as well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0a75b2a-d01a-4cf8-99e4-e4ad2424a789_1400x428.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39kj7kjt0wd0v28qiohv.png" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I won’t show you all the vulnerabilities because there are a ton. But as Trivy’s output indicates, to fix these issues you need to upgrade each affected package to a version in which the problem is no longer present.&lt;/p&gt;

&lt;p&gt;You can also have Trivy output a JSON report with all of the data related to a scan, like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;trivy image -f json -o results.json flaviuscdinu93/devops-dev-env:1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;My scan produced over 4,000 lines of JSON, so I can’t include it here, but you get the point: I have some work to do to fix these issues.&lt;/p&gt;
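&lt;p&gt;To get a quick overview out of that JSON report, you can count findings per severity with jq. This is just a sketch: it assumes Trivy’s default report layout (Results[].Vulnerabilities[].Severity) and that results.json was produced by the command above:&lt;/p&gt;

```shell
# Count vulnerabilities per severity in a Trivy JSON report
jq -c '[.Results[].Vulnerabilities // [] | .[].Severity]
       | group_by(.) | map({(.[0]): length}) | add' results.json
```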

&lt;h2&gt;
  
  
  &lt;strong&gt;Building a GitHub Actions pipeline that scans for vulnerabilities&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will build a very simple repository that contains a Dockerfile (the same one that builds the development environment for your DevOps engineers, used above) and a GitHub Actions workflow that builds the image and checks for vulnerabilities using Trivy.&lt;/p&gt;

&lt;p&gt;This workflow will run only when there are changes to the Dockerfile, because it doesn’t make much sense to run it otherwise.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Docker Image Scan with Trivy

on:
  push:
    paths:
      - 'Dockerfile'
  pull_request:
    paths:
      - 'Dockerfile'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Above, we’ve added a rule so that this workflow runs only on pushes or pull requests that change the Dockerfile.&lt;/p&gt;

&lt;p&gt;Next, we define our job and check out the code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  scan:
    name: Scan Docker Image
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before we can check the image for vulnerabilities, we need to set up Docker Buildx and build the image:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

- name: Build image
  run: |
    docker build -t devimage:latest .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, we need to install Trivy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Install Trivy
  run: |
    sudo apt-get install wget apt-transport-https gnupg lsb-release
    wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
    echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
    sudo apt-get update
    sudo apt-get install trivy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we are ready to do the scan. In this example, I will make Trivy exit with an error when a critical-severity vulnerability is found; otherwise, the exit code will be 0:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run Trivy vulnerability scanner
  run: |
    echo "Scanning docker-image"
    trivy image --exit-code 1 --severity CRITICAL devimage:latest
    trivy image --exit-code 0 --severity HIGH,MEDIUM devimage:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We already know that our image has some critical vulnerabilities, so our pipeline will exit with an error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6c127f-deb6-46af-adf0-8dc27edf3446_1224x844.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2rglvfuh9bx49rqgkbw.png" width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s change this behavior to make it return a 0 exit code in all cases, just to see if it is working properly:&lt;/p&gt;
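&lt;p&gt;A sketch of that modified step (the flags are inferred from the screenshot below, so treat them as illustrative):&lt;/p&gt;

```shell
# Report findings but always return exit code 0, so the pipeline never fails
trivy image --exit-code 0 --severity CRITICAL devimage:latest
trivy image --exit-code 0 --severity HIGH,MEDIUM devimage:latest
```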

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F790ea475-4aff-441f-ae6b-4b8205fe8e16_1073x366.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0femaesxsjkud8nkm5m.png" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By making this change, we can see that our pipeline finishes successfully, and if we check the logs, we can see all of our scan results.&lt;/p&gt;

&lt;p&gt;If you want to use the code directly, you can get it from &lt;a href="https://github.com/flavius-dinu/docker_scan" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker security best practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To keep your Docker environment secure, implement at least the following best practices:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Improve your image security&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Constantly scan for vulnerabilities, and use minimal base images such as Alpine or distroless to reduce the attack surface. In addition, implement &lt;strong&gt;multi-stage builds&lt;/strong&gt; (we will talk about this in detail in the next parts) to minimize the final image size and reduce vulnerabilities. Leverage CI/CD pipelines to continuously check for issues in your images.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Implement least privilege access&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Whenever you can, run your containers as a non-root user. If a container is compromised, this limits the damage an attacker can do to the host machine.&lt;/p&gt;
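&lt;p&gt;A minimal Dockerfile sketch of this practice (the user name and application files are illustrative):&lt;/p&gt;

```dockerfile
FROM alpine:3.20

# Create an unprivileged system user (name is illustrative)
RUN adduser -S appuser

WORKDIR /app
COPY --chown=appuser . .

# Everything from here on, including the container process, runs as appuser
USER appuser
CMD ["./app"]
```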

&lt;h3&gt;
  
  
  &lt;strong&gt;Set resource limits&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Ensure you enforce CPU and memory limits, as this helps prevent denial-of-service attacks caused by resource exhaustion.&lt;/p&gt;
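&lt;p&gt;In a compose file or Swarm stack, limits can be sketched like this (the values are illustrative; the deploy.resources section is honored by Swarm and recent Compose versions):&lt;/p&gt;

```yaml
services:
  backend:
    image: flaviuscdinu93/be:1.0.3
    deploy:
      resources:
        limits:
          cpus: "0.50"   # at most half a CPU core
          memory: 256M   # at most 256 MiB of RAM
```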

&lt;h3&gt;
  
  
  &lt;strong&gt;Use read-only filesystems&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In some cases, your Docker container won’t need to write anything to its filesystem; in those cases, run it with a read-only filesystem. This makes it much harder for attackers to install malware in your containers.&lt;/p&gt;
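&lt;p&gt;As a compose sketch, here is a read-only Nginx service. Nginx still needs a few writable paths, which are provided as in-memory tmpfs mounts (the paths are the usual defaults, but verify them for your image):&lt;/p&gt;

```yaml
services:
  nginx:
    image: nginx:1.21.3
    read_only: true
    # Writable scratch space that lives only in memory
    tmpfs:
      - /var/cache/nginx
      - /var/run
```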

&lt;h3&gt;
  
  
  &lt;strong&gt;Create and use different networks&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;By creating and using separate networks, you keep services isolated as much as possible, and only the services that really need to be exposed are exposed.&lt;/p&gt;
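&lt;p&gt;As a sketch, a compose file can declare separate networks so that only the frontend is reachable from outside (service and network names are illustrative):&lt;/p&gt;

```yaml
services:
  frontend:
    image: flaviuscdinu93/fe:1.0.1
    ports:
      - "3000:3000"
    networks:
      - public
      - internal
  redis-server:
    image: redis:6.2.6
    # Only reachable by services that share the internal network
    networks:
      - internal

networks:
  public:
  internal:
    internal: true   # not routable from outside Docker
```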

&lt;h3&gt;
  
  
  &lt;strong&gt;Enable logging and monitoring&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Monitor your container logs to detect any anomalies or even malicious activity. This will help in identifying strange behavior in real time.&lt;/p&gt;

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-6-securing-your-docker-environment-149a08e148ea" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  7. Docker Swarm vs Kubernetes
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Docker Swarm?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Docker Swarm is Docker’s native orchestration tool. It works seamlessly with Docker, offering a straightforward approach to container orchestration.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Features:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy Setup&lt;/strong&gt;: Swarm mode is built into the Docker engine, making it simple to set up and use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Familiar Docker CLI&lt;/strong&gt;: If you’re already using Docker, the learning curve for Swarm is minimal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service Discovery&lt;/strong&gt;: Automatic service discovery and load balancing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rolling Updates&lt;/strong&gt;: Built-in support for rolling updates and rollbacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scaling&lt;/strong&gt;: Easy horizontal scaling of services.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s deploy the same application we built in part 5 using Docker Compose. The repository is &lt;a href="https://github.com/flavius-dinu/docker_tutorials" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For Swarm, I’ve created a folder in the repository called swarm, which contains a slightly modified docker-compose file.&lt;/p&gt;

&lt;p&gt;The first difference you will see is that I’m using an overlay network:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;networks:
  my_overlay_network:
    driver: overlay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As mentioned in previous parts, overlay networks span multiple Docker hosts and, combined with Swarm, create a real orchestration platform.&lt;/p&gt;

&lt;p&gt;Also, now you can easily place your services across multiple nodes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy:
    replicas: 1
    placement:
      constraints:
        - node.role == manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before deploying the configuration, we first need to initialize or join a swarm. In my case, I will initialize a new swarm:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker swarm init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To deploy the configuration, navigate to the swarm directory and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stack deploy -c docker-compose.yml my_stack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you want to see the services deployed inside the stack you’ve created, you can easily run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker service ls                                                 

ID             NAME                    MODE         REPLICAS   IMAGE                 PORTS
xp3djy1t0d21   my_stack_backend        replicated   1/1        python:3.9.7-slim     
kyrr4hlsk8bq   my_stack_frontend       replicated   1/1        node:14.17.6-alpine   *:3000-&amp;gt;3000/tcp
xkr5we3ht4bl   my_stack_nginx          replicated   1/1        nginx:1.21.3          *:80-&amp;gt;80/tcp
xeofhdsz7fn1   my_stack_redis-server   replicated   1/1        redis:6.2.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If everything is working well, you can access the application at localhost:80:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59a71910-47de-4f82-aa52-5301f298a59d_646x392.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xkkjlshfvv24tr17hpc.png" width="646" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To view a service’s logs, you can easily run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker service logs my_stack_serviceName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker service logs my_stack_backend

my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    |  * Running on all addresses (0.0.0.0)
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    |  * Running on http://127.0.0.1:5000
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    |  * Running on http://172.21.0.3:5000
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | Press CTRL+C to quit
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:05] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:06] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:06] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
my_stack_backend.1.t8h4m4pllk2h@docker-desktop    | 10.0.1.4 - - [10/Sep/2024 10:50:07] "GET / HTTP/1.1" 200 -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Pros of using Swarm:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Simple to learn and use&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tight integration with Docker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lightweight and fast&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Cons of using Swarm:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Limited advanced features compared to Kubernetes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not as widely adopted in enterprise environments&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;What is Kubernetes?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform originally developed by Google, and now under the CNCF umbrella. It offers a more comprehensive set of features for complex, large-scale deployments.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Key Features:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto-scaling&lt;/strong&gt;: Horizontal pod autoscaling based on CPU utilization or custom metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-healing&lt;/strong&gt;: Automatic replacement of failed containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Networking&lt;/strong&gt;: Powerful networking capabilities with support for various plugins.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Storage Orchestration&lt;/strong&gt;: Dynamic provisioning of storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Declarative Configuration&lt;/strong&gt;: Define the desired state of your system using YAML files.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s reuse the above example, but now define the Kubernetes configuration for it. Again, the repository is &lt;a href="https://github.com/flavius-dinu/docker_tutorials" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For K8s, I have defined two Docker images, one for the backend and the other one for the frontend:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dockerfile-BE
FROM python:3.9.7-slim

WORKDIR /app

COPY backend/* ./
RUN pip install --no-cache-dir -r requirements.txt

CMD ["python", "app.py"]


# Dockerfile-FE
FROM node:14.17.6-alpine

WORKDIR /app

COPY frontend/* .

RUN npm install

CMD ["node", "index.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The first step was to build the images, tag them appropriately, and push them to the Docker registry. If you want to build your images and host them in your own registries, make sure you are in the root of the repository and then run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t image_name:image_tag -f kubernetes/docker_images/Dockerfile_BE .
docker build -t image_name:image_tag -f kubernetes/docker_images/Dockerfile_FE .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then, you can push the images to the registry of your choice using &lt;code&gt;docker push&lt;/code&gt;.&lt;/p&gt;
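&lt;p&gt;For example (the registry user below is a placeholder; substitute your own):&lt;/p&gt;

```shell
# Tag the locally built image for your registry, then push it
docker tag image_name:image_tag registry_user/image_name:image_tag
docker push registry_user/image_name:image_tag
```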

&lt;p&gt;Now, if you want to use my public images they are available here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;flaviuscdinu93/be:1.0.3&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;flaviuscdinu93/fe:1.0.1&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ok, so for every service I’m using, I will define at least a Deployment and a Service; in addition, Redis needs a volume, and Nginx needs a ConfigMap to hold its configuration.&lt;/p&gt;

&lt;p&gt;All the files are defined in the repository under the kubernetes folder, but let’s walk through a couple of them:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Namespace
metadata:
  name: dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve defined a namespace for all the different components I will be deploying into K8s, to better isolate my resources.&lt;/p&gt;

&lt;p&gt;For the backend, I’ve defined the following deployment file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: dev
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: flaviuscdinu93/be:1.0.3
        ports:
        - containerPort: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This creates a pod running a container based on the backend image I previously built, exposing port 5000.&lt;/p&gt;

&lt;p&gt;Next, I’ve defined a Service for the backend to expose this container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: dev
spec:
  selector:
    app: backend
  ports:
    - port: 5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Service routes traffic to the backend pods because its selector matches the app: backend label.&lt;/p&gt;

&lt;p&gt;The same pattern applies to the other services; the only differences are that Redis mounts a volume and Nginx reads its configuration from a ConfigMap:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# nginx configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: dev
data:
  nginx.conf: |
    events { }

    http {
      upstream frontend {
          server frontend:3000;
      }

      server {
          listen 80;

          location / {
              proxy_pass http://frontend;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      }
    }


# redis deployment using the volume
spec:
    containers:
    - name: redis
      image: redis:6.2.6
      volumeMounts:
      - name: redis-data
        mountPath: /data
    volumes:
    - name: redis-data
      persistentVolumeClaim:
        claimName: redis-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Ok, now that everything is defined, we can apply the manifests. To do that, go to the kubernetes directory and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f .

# To see everything that was created you can easily run:

kubectl get all -n dev   
NAME                            READY   STATUS    RESTARTS      AGE
pod/backend-5459857cff-w2z8f    1/1     Running   0             37m
pod/frontend-679cdc8dc7-ts6lz   1/1     Running   0             37m
pod/nginx-6d6b99d9ff-pvzr4      1/1     Running   3 (37m ago)   37m
pod/redis-8545d5dbbf-dfmwz      1/1     Running   0             37m

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/backend        ClusterIP   10.96.185.238   &amp;lt;none&amp;gt;        5000/TCP       37m
service/frontend       ClusterIP   10.96.234.210   &amp;lt;none&amp;gt;        3000/TCP       37m
service/nginx          NodePort    10.96.254.83    &amp;lt;none&amp;gt;        80:30542/TCP   37m
service/redis-server   ClusterIP   10.96.169.17    &amp;lt;none&amp;gt;        6379/TCP       37m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/backend    1/1     1            1           37m
deployment.apps/frontend   1/1     1            1           37m
deployment.apps/nginx      1/1     1            1           37m
deployment.apps/redis      1/1     1            1           37m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/backend-5459857cff    1         1         1       37m
replicaset.apps/frontend-679cdc8dc7   1         1         1       37m
replicaset.apps/nginx-6d6b99d9ff      1         1         1       37m
replicaset.apps/redis-8545d5dbbf      1         1         1       37m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To access the application through our Nginx reverse proxy, we can port-forward it for now, as we haven’t defined any load balancers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/nginx 8080:80 --namespace dev

Forwarding from 127.0.0.1:8080 -&amp;gt; 80
Forwarding from [::1]:8080 -&amp;gt; 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now go to &lt;code&gt;localhost:8080&lt;/code&gt; and see if you can access the application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa634766c-9b4d-45ef-8904-7ffccad65328_444x154.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F036u0uothrc8bpxdyqo0.png" width="444" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that the application is working properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Pros of using K8s:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Highly scalable and flexible&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Large ecosystem and community support&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Backed by major cloud providers (AWS, Azure, Google Cloud)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cons of using K8s:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Steeper learning curve&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;More complex setup and management&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Head to Head&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad7e2f86-6353-4b58-94b3-3548a2585214_1400x788.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwkcdw51t6xgnb78gzky.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are heavily invested in the Docker ecosystem, your team is already familiar with its tooling, and you don’t want a steep learning curve, Docker Swarm is the better choice, especially for small to mid-sized deployments.&lt;/p&gt;

&lt;p&gt;On the other hand, if you’re working on complex applications and need a solution that’s widely adopted in enterprise environments and has dedicated services in major cloud providers, Kubernetes is the way to go.&lt;/p&gt;

&lt;p&gt;Even though Kubernetes is more complex, it offers unparalleled power and flexibility for large-scale deployments.&lt;/p&gt;

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-7-kubernetes-vs-docker-swarm-219daf8226ca" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  8. CI/CD Pipeline for your Docker images
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Repository structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can find the repository &lt;a href="https://github.com/flavius-dinu/docker_pipeline" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This is what the repository structure looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39051bfb-4dbb-402f-a89f-1013f075c76e_568x708.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpp77vou37pb8h14xmo5.png" width="568" height="708"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have a GitHub Actions pipeline called docker-ci-cd.yml and three images (yes, the ones I’ve been playing around with since part 5), each with its own Dockerfile.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pipeline walkthrough&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The workflow will run only when there are changes to the images, on both pull requests and merges to the main branch. We’ve set an environment variable called REGISTRY that points to the GitHub Container Registry (ghcr.io).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Docker CI/CD Pipeline example

on:
  push:
    branches: ["main"]
    paths:
      - 'images/**'
  pull_request:
    branches: ["main"]
    paths:
      - 'images/**'
env:
  REGISTRY: ghcr.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the pipeline, we have three jobs (the last one is just a placeholder that you can build however you’d like):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Detect Changes — This will detect if there are any changes to the images, their code, or even their tests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build and Push — Builds the image, tests it, runs vulnerability scans, and pushes the images if the branch is main&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy — You can modify this job to deploy the containers using whatever orchestrator you want&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s start with the Detect Changes job:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;detect-changes:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This job runs on an Ubuntu machine and declares an output that other jobs will use. The output is produced inside the job by a step called set-matrix.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;First things first: we check out the repository with a fetch-depth of 0, which fetches the entire git history. This is needed for comparing changes across commits and branches.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- id: set-matrix
  run: |
    if [ "${{ github.event_name }}" == "pull_request" ]; then
      # For pull requests, compare against the base branch
      BASE_SHA=$(git merge-base ${{ github.event.pull_request.base.sha }} ${{ github.sha }})
      CHANGED_DIRS=$(git diff --name-only $BASE_SHA ${{ github.sha }} | grep '^images/' | cut -d/ -f2 | sort -u | jq -R -s -c 'split("\n")[:-1]')
    elif [ "${{ github.event_name }}" == "push" ]; then
      # For pushes, compare against the previous commit
      CHANGED_DIRS=$(git diff --name-only ${{ github.event.before }} ${{ github.sha }} | grep '^images/' | cut -d/ -f2 | sort -u | jq -R -s -c 'split("\n")[:-1]')
    fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are checking if the event is either a pull_request or a push, and we will do different things based on the event type.&lt;/p&gt;

&lt;p&gt;If the event is a pull_request, we find the common ancestor of the pull request’s base branch and the current commit. From there, we run git diff to list all files that have changed, filter the results to keep only paths inside the images directory, extract the subdirectory names with the cut command, and finally use jq to format the result as a JSON array.&lt;/p&gt;

&lt;p&gt;If the event is a push, we compare the previous commit with the current one, and everything else stays pretty much the same.&lt;/p&gt;
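&lt;p&gt;To see what that filter pipeline produces, here is a small sketch you can run locally (the file paths are made up for illustration, and jq must be installed):&lt;/p&gt;

```shell
# Simulate `git diff --name-only` output with hypothetical paths,
# then apply the same grep/cut/sort/jq pipeline as the workflow
CHANGED_DIRS=$(printf 'images/python/Dockerfile\nimages/go/main.go\nREADME.md\n' \
  | grep '^images/' | cut -d/ -f2 | sort -u | jq -R -s -c 'split("\n")[:-1]')
echo "$CHANGED_DIRS"
# → ["go","python"]
```

&lt;p&gt;Only the two changed image directories survive; the README change is filtered out.&lt;/p&gt;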

&lt;p&gt;If there are no changes, we skip the build by exiting with code 0. Otherwise, at the end, we put the changed directories into the matrix variable and write it to the $GITHUB_OUTPUT file. This ensures that all the other jobs can access the matrix.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [ "$CHANGED_DIRS" == "[]" ] || [ -z "$CHANGED_DIRS" ]; then
  echo "No changes detected, skipping build."
  exit 0  # Exit without setting the matrix
fi

echo "matrix=${CHANGED_DIRS}" &amp;gt;&amp;gt; $GITHUB_OUTPUT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Build and Push job only executes if the &lt;em&gt;detect-changes&lt;/em&gt; job detects changes (the matrix output is not empty).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build-and-push:
    needs: detect-changes
    runs-on: ubuntu-latest
    if: ${{ needs.detect-changes.outputs.matrix != '[]' &amp;amp;&amp;amp; needs.detect-changes.outputs.matrix != '' }}
    strategy:
      matrix:
        image: ${{fromJson(needs.detect-changes.outputs.matrix)}}
    permissions:
      contents: read
      packages: write
      security-events: write
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the first two steps, we check out the repository and set up Docker Buildx, which provides advanced features for building Docker images.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - name: Checkout repository
    uses: actions/checkout@v3

  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the next step, we log into the container registry, but only for push events. It uses the GitHub actor as the username and the GitHub token for authentication:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Log in to the Container registry
  if: github.event_name == 'push'
  uses: docker/login-action@v2
  with:
    registry: ${{ env.REGISTRY }}
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we extract the metadata for the Docker image, including tags and labels — we are generating a tag based on the Git SHA and branch name.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Extract metadata (tags, labels) for Docker
  id: meta
  uses: docker/metadata-action@v4
  with:
    images: ${{ env.REGISTRY }}/${{ github.repository }}/${{ matrix.image }}
    tags: |
      type=sha,prefix={{branch}}-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
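&lt;p&gt;With this configuration, the resulting tag is the branch name followed by the short commit SHA. A quick sketch of the format (the image path and SHA below are made up):&lt;/p&gt;

```shell
# Hypothetical illustration of the tag produced by `type=sha,prefix={{branch}}-`:
# the branch name as prefix, then the short commit SHA
BRANCH=main
SHORT_SHA=abc1234
TAG="ghcr.io/myorg/myrepo/python_app:${BRANCH}-${SHORT_SHA}"
echo "$TAG"
# → ghcr.io/myorg/myrepo/python_app:main-abc1234
```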

&lt;p&gt;In the next step, we build the Docker image using the context from the specific image directory. It doesn’t push the image yet but loads it into the Docker daemon:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Build Docker image
  uses: docker/build-push-action@v4
  with:
    context: ./images/${{ matrix.image }}
    push: false
    tags: ${{ steps.meta.outputs.tags }}
    labels: ${{ steps.meta.outputs.labels }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
    load: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’ve also included a security vulnerability scanning step based on Trivy, and we upload the scan results to the repository’s Security tab.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: ${{ steps.meta.outputs.tags }}
    format: 'sarif'
    output: 'trivy-results.sarif'
    ignore-unfixed: true
  continue-on-error: true


- name: Upload Trivy scan results to GitHub Security tab
  uses: github/codeql-action/upload-sarif@v2
  if: always()
  with:
    sarif_file: 'trivy-results.sarif'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As we’ve created a couple of tests for our images, we want to run them before pushing the images, so we are using Google’s container structure tests for that:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run container structure tests
  run: |
    # Install container-structure-test
    curl -LO https://storage.googleapis.com/container-structure-test/latest/container-structure-test-linux-amd64
    chmod +x container-structure-test-linux-amd64
    sudo mv container-structure-test-linux-amd64 /usr/local/bin/container-structure-test

    # Set the image reference
    IMAGE_REF="${{ steps.meta.outputs.tags }}"

    # Run the tests
    container-structure-test test --image "$IMAGE_REF" --config ./images/${{ matrix.image }}/structure-tests.yaml

  shell: bash

- name: Push Docker image
  if: github.event_name == 'push'
  run: |
    echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
    docker push ${{ steps.meta.outputs.tags }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In case we have a push event, we are pushing the image to the registry.&lt;/p&gt;
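&lt;p&gt;For reference, a minimal structure-tests.yaml consumed by the container-structure-test command above might look like the following (the checks are illustrative and depend on what your image actually contains):&lt;/p&gt;

```yaml
schemaVersion: 2.0.0
commandTests:
  - name: "python is available"
    command: "python"
    args: ["--version"]
    exitCode: 0
fileExistenceTests:
  - name: "application file is present"
    path: "/app/app.py"
    shouldExist: true
```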

&lt;p&gt;The last job, as mentioned before, doesn’t do anything at this moment, but it’s a placeholder for your orchestrator deployment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    if: github.event_name == 'push' &amp;amp;&amp;amp; github.ref == 'refs/heads/main'

    steps:
      - name: Deploy to production
        run: |
          echo "Deploying to production..."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Example Pipeline Run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6cf8199-0bfe-4826-83aa-60f794efd486_1400x454.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8vo3jrdz0ythddv9bew.png" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are the images pushed to the registry:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff30a2b00-a0e2-48eb-96c5-3fbab131beb7_436x194.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu58i50d9xhdh1pcjssoh.png" width="436" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, you can see all the vulnerabilities uploaded under the Security tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84906a11-08d0-4dc0-abde-288d06a13e80_1400x1268.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6utfj9misi2kdc740mgx.png" width="800" height="724"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are many vulnerabilities in these images, and I will fix a couple of them in the Python one, just to show that the number decreases:&lt;/p&gt;

&lt;p&gt;This is how the Python image looked before:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9.7-slim
WORKDIR /app
COPY src/* ./
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is how it looks now:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.11-slim-bullseye
RUN useradd -m -s /bin/bash appuser
WORKDIR /app
COPY src/* ./
RUN pip install --no-cache-dir -r requirements.txt
USER appuser
CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, I’ve updated the image to use a newer Python version and made it run as a non-root user:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2809a84a-526a-4c7b-a73e-b251b347ab11_1400x475.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcn9mp224bzqjcwbjrdpn.png" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pipeline is running for it, and at the end, we will see that the number of vulnerabilities has decreased considerably:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3f7d594-0621-4c0b-bd26-1f61de5ac106_1400x408.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg658s2041chysip1oxzk.png" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-8-how-to-build-a-ci-cd-pipeline-for-your-docker-images-ac675e5904c9" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  9. Docker Optimization and Beyond Docker
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Dockerfile Optimization&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To create lightweight containers, there are a few strategies that you should know:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use multi-stage builds&lt;/strong&gt; — Separate build and runtime environments, and only copy necessary artifacts to the final image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Choose lightweight base images —&lt;/strong&gt; Alpine Linux or Distroless images for even smaller sizes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minimize the number of layers&lt;/strong&gt; — Combine RUN commands using &lt;em&gt;&amp;amp;&amp;amp;&lt;/em&gt;, and use .dockerignore to exclude unnecessary files&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cleanup step in your layer&lt;/strong&gt; — Remove package manager caches and temporary files&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leverage build-cache&lt;/strong&gt; — Order layers from least to most frequently changing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use specific tags for base images&lt;/strong&gt; — Avoid the &lt;em&gt;latest&lt;/em&gt; tag when you are building containers&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
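&lt;p&gt;As a quick illustration of the third point, a hypothetical .dockerignore might exclude files the build never needs:&lt;/p&gt;

```text
# Hypothetical .dockerignore: keep the build context small
.git
node_modules
*.log
tests/
README.md
```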

&lt;p&gt;Let’s see this in action for two images:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The DevOps development environment image that we’ve built in the second part:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;flaviuscdinu93/devops-dev-env 1.0.0 c3069a674574 3 weeks ago 351MB&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Python image that runs an application:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;flaviuscdinu93/python_app 1.0.0 7306a8fe9c7c 50 seconds ago 399MB&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is how the first image looks initially:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine:3.20

ENV TERRAFORM_VERSION=1.5.7 \
    OPENTOFU_VERSION=1.8.1 \
    PYTHON_VERSION=3.12.0 \
    KUBECTL_VERSION=1.28.0 \
    ANSIBLE_VERSION=2.15.0 \
    GOLANG_VERSION=1.21.0 \
    PIPX_BIN_DIR=/usr/local/bin

RUN apk add --no-cache \
    curl \
    bash \
    git \
    wget \
    unzip \
    make \
    build-base \
    py3-pip \
    pipx \
    openssh-client \
    gnupg \
    libc6-compat

RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip terraform.zip &amp;amp;&amp;amp; \
    mv terraform /usr/local/bin/ &amp;amp;&amp;amp; \
    rm terraform.zip

RUN wget -O tofu.zip https://github.com/opentofu/opentofu/releases/download/v${OPENTOFU_VERSION}/tofu_${OPENTOFU_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip tofu.zip &amp;amp;&amp;amp; \
    mv tofu /usr/local/bin/ &amp;amp;&amp;amp; \
    rm tofu.zip

RUN curl -LO "https://dl.k8s.io/release/v$KUBECTL_VERSION/bin/linux/amd64/kubectl" &amp;amp;&amp;amp; \
    chmod +x kubectl &amp;amp;&amp;amp; \
    mv kubectl /usr/local/bin/

RUN pipx install ansible-core==${ANSIBLE_VERSION}

WORKDIR /workspace

CMD ["bash"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For this example, I won’t use multi-stage builds, as I’ll leverage them for the next image. Now, let’s combine all RUN instructions in a single RUN instruction, remove wget (use only curl), and remove the root cache.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine:3.20

ENV TERRAFORM_VERSION=1.5.7 \
    OPENTOFU_VERSION=1.8.1 \
    KUBECTL_VERSION=1.28.0 \
    ANSIBLE_VERSION=2.15.0 \
    PIPX_BIN_DIR=/usr/local/bin

WORKDIR /workspace

RUN apk add --no-cache \
    bash \
    curl \
    git \
    unzip \
    openssh-client \
    py3-pip \
    pipx &amp;amp;&amp;amp; \
    curl -L -o terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip terraform.zip &amp;amp;&amp;amp; \
    mv terraform /usr/local/bin/ &amp;amp;&amp;amp; \
    rm terraform.zip &amp;amp;&amp;amp; \
    curl -L -o opentofu.zip https://github.com/opentofu/opentofu/releases/download/v${OPENTOFU_VERSION}/tofu_${OPENTOFU_VERSION}_linux_amd64.zip &amp;amp;&amp;amp; \
    unzip opentofu.zip &amp;amp;&amp;amp; \
    mv tofu /usr/local/bin/ &amp;amp;&amp;amp; \
    rm opentofu.zip &amp;amp;&amp;amp; \
    curl -LO "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl" &amp;amp;&amp;amp; \
    chmod +x kubectl &amp;amp;&amp;amp; \
    mv kubectl /usr/local/bin/ &amp;amp;&amp;amp; \
    pipx install ansible-core==${ANSIBLE_VERSION} &amp;amp;&amp;amp; \
    rm -rf /root/.cache

CMD ["bash"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve built the image and used a &lt;em&gt;1.0.1&lt;/em&gt; tag:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;flaviuscdinu93/devops-dev-env 1.0.1 8d1344dd324c 5 seconds ago 340MB&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the optimization isn’t huge; we’ve shaved off just 11MB.&lt;/p&gt;

&lt;p&gt;Now, let’s leverage a multi-stage build for the second image. The image initially looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:20.04

RUN apt-get update &amp;amp;&amp;amp; \
    apt-get install -y python3 python3-pip

WORKDIR /app

COPY backend/ /app

RUN pip3 install -r requirements.txt

EXPOSE 5000

CMD ["python3", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I will switch the base image from Ubuntu to Alpine and leverage a multi-stage build:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9-alpine as builder

WORKDIR /app

RUN apk add --no-cache build-base

COPY backend/requirements.txt .

RUN pip install --no-cache-dir --user -r requirements.txt

COPY backend/app.py .

FROM python:3.9-alpine

WORKDIR /app

COPY --from=builder /app/app.py .
COPY --from=builder /root/.local /root/.local

ENV PATH=/root/.local/bin:$PATH

EXPOSE 5000

CMD ["python", "app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After building the image, this is the result:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;flaviuscdinu93/python_app 1.0.0-alpine 41243d14a523 1 second ago 59.6MB&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We’ve reduced the initial image from almost 400MB to 60MB.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Docker deployment strategies&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are multiple deployment strategies that can be used to deploy your containers, and it all depends on what other tools you are using in your ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Single-container deployment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In some special cases, deploying a single Docker container might be enough. What you’ll need to do is package your application and its dependencies into a Docker image and run it as a container on a machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use official base images —&lt;/strong&gt; Start with minimal and secure base images from trusted sources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize the Dockerfile —&lt;/strong&gt; You should minimize the number of layers and remove unnecessary files to reduce the image size&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Environment variables —&lt;/strong&gt; Leverage environment variables for configuration to maintain portability across environments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Multi-Container deployment with Docker Compose&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Docker Compose can be easily used for defining and running multi-container Docker applications. As you already know, you can easily manage multiple services using a single &lt;code&gt;docker-compose.yml&lt;/code&gt; file. If you haven’t seen the docker-compose article, take a look &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-5-docker-compose-9803506f8a1f?sk=163d1232998d61b91bfa28c755fa1584" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Service isolation —&lt;/strong&gt; You should define each service in its own container to improve modularity&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Networking —&lt;/strong&gt; Use Docker network features to enable communication between containers (create the networks to fit your needs). Don’t remember how? Take a look &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-3-docker-networks-and-volumes-32410557f7af?sk=cce00333bdda76ad352ac18f63fd2767" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Volumes —&lt;/strong&gt; Remember the Nginx + NodeJS + Flask + Redis example built &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-5-docker-compose-9803506f8a1f?sk=163d1232998d61b91bfa28c755fa1584" rel="noopener noreferrer"&gt;here&lt;/a&gt;? We’ve used volumes for data persistence to prevent data loss when recreating containers&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
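&lt;p&gt;A minimal sketch of such a setup (the service names, images, and ports below are illustrative):&lt;/p&gt;

```yaml
# Two isolated services on a dedicated network, with a named volume
services:
  web:
    image: nginx:1.27-alpine
    ports:
      - "8080:80"
    depends_on:
      - api
    networks:
      - appnet
  api:
    build: ./backend
    volumes:
      - api-data:/data
    networks:
      - appnet
networks:
  appnet:
volumes:
  api-data:
```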

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Container orchestration with Kubernetes or Docker Swarm&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;For complex applications requiring scalability and high availability, leverage Kubernetes or Docker Swarm.&lt;/p&gt;

&lt;p&gt;If you don’t remember what you can do with Kubernetes or Docker Swarm, take a look at the K8s and Swarm part.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Use CI/CD pipeline&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Integrating Docker into your CI/CD pipeline automates the building, testing, and deployment of applications. They will make your life easier. Check out an example of a GitHub Actions CI/CD pipeline &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-8-how-to-build-a-ci-cd-pipeline-for-your-docker-images-ac675e5904c9?sk=b69b2bd3cb81efe53532cbc257458142" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated builds —&lt;/strong&gt; Use CI/CD tools like Jenkins, GitLab CI/CD, or GitHub Actions to automate Docker image builds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing —&lt;/strong&gt; Implement automated testing at each stage to catch issues early&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Versioning and tagging —&lt;/strong&gt; Tag Docker images with meaningful version numbers or commit hashes for traceability&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;5. Blue-Green Deployments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Blue-green deployment involves running two identical production environments. One of them (blue) is live and serves all production traffic, while the other one (green) is idle. When deploying a new version, you switch the traffic from blue to green after ensuring that everything works as planned. Then you can decide what you want to do with the blue environment (decommission or repurpose)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Zero downtime —&lt;/strong&gt; Users experience no downtime during deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy rollbacks —&lt;/strong&gt; You can revert traffic to the previous version quickly if there are any issues with your deployments&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
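&lt;p&gt;The switch itself boils down to repointing traffic from one environment to the other, which can be sketched as follows (a toy shell illustration, not a real load-balancer update):&lt;/p&gt;

```shell
# Toy sketch of the blue-green swap: deploy to the idle color,
# verify it, then flip which environment receives traffic
LIVE=blue
IDLE=green
# ...deploy the new version to "$IDLE" and run health checks here...
TMP=$LIVE
LIVE=$IDLE
IDLE=$TMP
echo "live=$LIVE idle=$IDLE"
# → live=green idle=blue
```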

&lt;h3&gt;
  
  
  &lt;strong&gt;6. Canary Deployments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Canary deployment is a strategy to roll out the new version to a small subset of users before a full release. This minimizes risk by limiting exposure to potential issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring —&lt;/strong&gt; Closely monitor the performance and errors in the canary deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gradual rollout —&lt;/strong&gt; Increase the percentage of traffic to the new version incrementally&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
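&lt;p&gt;The gradual rollout can be expressed as a simple loop over traffic percentages (an illustrative sketch; the actual load-balancer update is left as a comment):&lt;/p&gt;

```shell
# Illustrative canary rollout: shift traffic in increasing steps,
# monitoring error rates before each increase
for PERCENT in 5 25 50 100; do
  echo "routing ${PERCENT}% of traffic to the canary"
  # ...update load-balancer weights and check metrics here...
done
```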

&lt;h3&gt;
  
  
  &lt;strong&gt;7. Rolling Updates&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Rolling updates replace instances of your application one at a time with the new version. This ensures that some instances are always serving requests, providing continuous availability.&lt;/p&gt;
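&lt;p&gt;In Kubernetes, for instance, this behavior is configured through the Deployment strategy (a hypothetical fragment):&lt;/p&gt;

```yaml
# Deployment strategy fragment: replace pods gradually,
# keeping most instances available at all times
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1
```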

&lt;p&gt;Depending on what tools you are using in your ecosystem to deploy your Docker containers, you will most likely have to check out the best practices on how to implement the best deployment strategies.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Beyond Docker&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Podman&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f531311-7075-4f7c-b96b-d7972430cf8f_884x293.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjarxkg4gbj5h3b1buuug.png" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Podman is a daemon-less container engine developed by Red Hat that allows you to run and manage containers without the need for a central daemon (unlike the Docker engine). It adheres to the Open Container Initiative (OCI) standards, ensuring compatibility with Docker images and container registries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Daemon-less architecture —&lt;/strong&gt; No root-running daemon is needed, enhancing security&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rootless containers —&lt;/strong&gt; Enables running containers without root privileges&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker-compatible CLI —&lt;/strong&gt; Supports many Docker commands, making it easy to transition&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Building Images with Podman&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Building images with Podman is nearly identical to Docker, using the same &lt;code&gt;Dockerfile&lt;/code&gt; syntax.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podman build -t myimage:latest -f Dockerfile .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;LXC&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F397787b5-fe1a-40c5-8178-3cc0ba09b9da_650x350.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bx90t76gntn81xepcjk.png" width="650" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LXC is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single host. It provides a lightweight alternative to full-machine virtualization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;System containers —&lt;/strong&gt; Offers containers that behave like complete Linux systems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low overhead —&lt;/strong&gt; Utilizes kernel features for isolation, ensuring minimal performance overhead&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customization —&lt;/strong&gt; You can easily customize the container environment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Building Images with LXC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LXC doesn’t use images in the same way Docker does; instead, it relies on templates to create containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating an LXC Container&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo lxc-create -n mycontainer -t ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;CRI-O&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2830b29c-8f18-464c-96de-769d36e277ce_280x280.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb9sje2djosg0ctki3ux.png" width="280" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CRI-O is an open-source container runtime for Kubernetes that implements the Kubernetes Container Runtime Interface (CRI) using OCI-compliant runtimes. It aims to provide a lightweight alternative to Docker for Kubernetes environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubernetes integration —&lt;/strong&gt; It is specifically designed for K8s&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lightweight runtime —&lt;/strong&gt; Reduces overhead by removing unnecessary components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security enhancements —&lt;/strong&gt; Supports advanced security features like SELinux.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Building Images with CRI-O&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CRI-O focuses solely on running containers and does not include image-building capabilities. To build images, you can use &lt;strong&gt;Buildah&lt;/strong&gt; , another tool from the same ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building Images with Buildah&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildah bud -t myimage:latest .

buildah push myimage:latest docker://registry.example.com/myimage:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This part was originally posted on &lt;a href="https://techblog.flaviusdinu.com/docker-nautical-leader-9-from-optimization-to-deployment-and-beyond-e09a18d1f0aa" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Hope it helps! &lt;/p&gt;

&lt;p&gt;Keep up building!&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Terraform from 0 to Hero</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Tue, 04 Mar 2025 18:28:49 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/terraform-from-0-to-hero-52bm</link>
      <guid>https://dev.to/flaviuscdinu/terraform-from-0-to-hero-52bm</guid>
      <description>&lt;h1&gt;
  
  
  Terraform from 0 to Hero
&lt;/h1&gt;

&lt;p&gt;This entire series was originally posted on &lt;a href="https://medium.com/@flaviuscdinu93/terraform-from-0-to-hero-0-i-like-to-start-counting-from-0-maybe-i-enjoy-lists-too-much-72cd0b86ebcd" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Table of contents:&lt;/p&gt;

&lt;p&gt;1. What is Terraform?&lt;br&gt;
  2. Terraform Provider&lt;br&gt;
  3. Terraform Resources&lt;br&gt;
  4. Terraform Data Sources and Outputs&lt;br&gt;
  5. Terraform Variables and Locals&lt;br&gt;
  6. Terraform Provisioners and Null resources&lt;br&gt;
  7. Terraform Loops and Conditionals&lt;br&gt;
  8. Terraform CLI commands&lt;br&gt;
  9. Terraform functions&lt;br&gt;
  10. Working with files&lt;br&gt;
  11. Understanding Terraform state&lt;br&gt;
  12. Terraform depends_on and lifecycle block&lt;br&gt;
  13. Terraform Dynamic blocks&lt;br&gt;
  14. Terraform Modules&lt;br&gt;
  15. Best practices for modules I&lt;br&gt;
  16. Best practices for modules II&lt;br&gt;
  17. Bonus1: OpenTofu differences&lt;br&gt;
  18. Bonus2: Specialized Infrastructure Orchestration platforms&lt;/p&gt;
&lt;h1&gt;
  
  
  1. What is Terraform?
&lt;/h1&gt;

&lt;p&gt;Once upon a time, if you worked in the IT industry, there was a good chance you faced all sorts of challenges when provisioning or managing infrastructure resources. It often felt like spinning plates: trying to keep everything running smoothly and making sure that all the resources were properly configured.&lt;/p&gt;

&lt;p&gt;Then, Terraform came to the rescue and saved us from this daunting task that took a lot of time.&lt;/p&gt;

&lt;p&gt;So what is Terraform? Terraform started as an open-source infrastructure as code (IaC) tool, developed by HashiCorp, that &lt;strong&gt;makes it easy to create and take care of your infrastructure resources&lt;/strong&gt;. It has since changed its license to the BSL (Business Source License). If you want to learn more about the license change and how Terraform compares to OpenTofu, check out this &lt;a href="https://spacelift.io/blog/opentofu-vs-terraform" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It’s built in &lt;strong&gt;Golang&lt;/strong&gt; (Go), which gives it a lot of power to create different infrastructure pieces in parallel, making it reliable by taking advantage of Go’s strong type-checking and error-handling capabilities.&lt;/p&gt;

&lt;p&gt;Terraform uses &lt;strong&gt;HCL&lt;/strong&gt; (HashiCorp Configuration Language) code to define its resources, but even JSON can be used for this, if you, of course, hate your life for whatever reason. Let’s get back to HCL. It is a &lt;strong&gt;human-readable&lt;/strong&gt;, high-level configuration language that is designed to be easy to use and understand:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "resource_type" "resource_name" {
  param1 = "value1"
  param2 = "value2"
  param3 = "value3"
  param4 = "value4"
}

resource "cat" "british" {
  color           = "orange"
  name            = "Garfield"
  age             = 5
  food_preference = ["Tuna", "Chicken", "Beef"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I don’t want to get into too much detail about what a resource is in this article, as I plan to build a series around this, but the above code, with a pseudo real-life example, is pretty much self-explanatory.&lt;/p&gt;

&lt;p&gt;To keep things as simple as possible for now: when you are using HCL and declaring something, it will have a type (let’s suppose there is a resource type cat, for example) and a name on the first line. Inside the curly brackets, you specify how you want to configure that type of “something”.&lt;/p&gt;

&lt;p&gt;In our example, we create a “cat” that will be known inside Terraform as “british”. After that, we configure the cat’s real name, the one everyone will know it by, its color, its age, and what it likes to eat.&lt;/p&gt;

&lt;p&gt;As you can see, at first glance the language seems pretty close to English. There is more to it, of course, but you will see that in the next articles.&lt;/p&gt;

&lt;p&gt;One of the main benefits of using Terraform is that it is &lt;strong&gt;platform-agnostic&lt;/strong&gt;. This means that people who write Terraform &lt;strong&gt;don’t need to learn different programming languages to provision infrastructure resources in different cloud providers&lt;/strong&gt;. However, &lt;strong&gt;this doesn’t mean&lt;/strong&gt; that if you develop the code to provision a VM instance in AWS, you can reuse the same code for Azure or GCP.&lt;/p&gt;

&lt;p&gt;Nevertheless, this can save a lot of time and effort, as engineers won’t need to constantly switch between tools (like going from CloudFormation for AWS to ARM templates for Azure).&lt;/p&gt;

&lt;p&gt;A consistent experience is offered across all platforms.&lt;/p&gt;

&lt;p&gt;Terraform is stateful. Is this a strength or a weakness? This topic is highly subjective, and it depends on your use case.&lt;/p&gt;

&lt;p&gt;One of the main benefits of statefulness in Terraform is that it allows it to make decisions about how to manage resources based on the current state of the infrastructure. This ensures that Terraform does not create unnecessary resources and helps to prevent errors and conflicts. This can save time and resources, make the provisioning process more efficient and also encourage collaboration between different teams.&lt;/p&gt;

&lt;p&gt;Terraform keeps its state in a &lt;strong&gt;state file&lt;/strong&gt; and uses it, together with your configuration, to determine the actions needed to reach the desired state of your resources.&lt;/p&gt;
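&lt;p&gt;To make this concrete, here is a minimal sketch of configuring where that state file lives. By default, Terraform writes it locally as &lt;code&gt;terraform.tfstate&lt;/code&gt;; a remote backend such as S3 is a common alternative for teams. The bucket name and key below are hypothetical placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  # Store the state file in an S3 bucket instead of locally.
  # "my-terraform-state" and the key are illustrative values.
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;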

&lt;p&gt;Even though I’ve presented only strengths until now, being stateful has one relatively big weakness: managing the state file. This adds complexity to the system, and if the state file gets corrupted or deleted, it can lead to conflicts and errors in the provisioning process.&lt;/p&gt;

&lt;p&gt;We will tackle this subject in detail in the next articles.&lt;/p&gt;

&lt;p&gt;You may now ask, how can I easily install it? Well, &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;Hashicorp’s guide&lt;/a&gt; does a great job of helping you install it, just select your platform and you are good to go.&lt;/p&gt;

&lt;p&gt;Ok, now you have a high-level idea about what Terraform is and how to install it. It’s ok if you still have a lot of questions, as all the answers will come shortly.&lt;/p&gt;

&lt;p&gt;Originally posted on &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-1-what-the-heck-is-terraform-ecc1d9ad175e?sk=8426dfe6656c6ebdce970499dffbda83" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  2. What is a Terraform provider?
&lt;/h1&gt;

&lt;p&gt;In short, Terraform providers are plugins that allow Terraform to interact with specific infrastructure resources. They act as an interface between Terraform and the underlying infrastructure, translating the Terraform configuration into the appropriate API calls and allowing Terraform to manage resources across a wide variety of environments. Each provider has its own set of resources and data sources that can be managed and provisioned using Terraform.&lt;/p&gt;

&lt;p&gt;One of the most common mistakes people make when thinking about Terraform providers is assuming that providers exist only for cloud vendors such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), or Oracle Cloud Infrastructure (OCI). There are plenty of providers that don’t belong to a cloud vendor, such as Template, Kubernetes, Helm, Spacelift, Artifactory, vSphere, and Aviatrix, to name a few.&lt;/p&gt;

&lt;p&gt;For example, the AWS provider has resources for managing EC2 instances, EBS volumes, and ELB load balancers.&lt;/p&gt;

&lt;p&gt;Another great thing is that you can build your own Terraform provider: as long as a service exposes an API, you can wrap it in a provider. However, there are thousands of providers already available in the &lt;a href="https://registry.terraform.io/browse/providers" rel="noopener noreferrer"&gt;registry&lt;/a&gt;, so you usually don’t need to reinvent the wheel.&lt;/p&gt;
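&lt;p&gt;As a quick illustration of how a provider from the registry gets wired into your configuration, here is a minimal sketch of declaring and version-pinning the AWS provider; the version constraint is just an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"  # provider address in the registry
      version = "~&gt; 4.0"         # example version constraint
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;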

&lt;p&gt;Before jumping in and showing you examples of how to use providers, let’s discuss how to use the documentation.&lt;/p&gt;

&lt;p&gt;After you select your provider from the registry, you will be redirected to the provider’s page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde0073cf-2796-4bf6-a8b1-d819d70f9376_1400x641.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xg1w4glmbgsy1zmb92q.png" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above view, click on documentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F18a77e17-e2b1-4c77-b2be-06ead03b4616_1400x757.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz55qky3sw6adqznafmon.png" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Usually, the first tab you are directed to explains how to configure and use the provider, along with some simple examples of creating basic resources. This view differs from one provider to another, but usually you will see these examples.&lt;/p&gt;

&lt;p&gt;We will get back to how to use the documentation when we talk about resources in the next article.&lt;/p&gt;

&lt;p&gt;Even though AWS is the biggest player in the market when it comes to cloud vendors, in this article I will show an example provider with OCI and one with Azure.&lt;/p&gt;

&lt;p&gt;You can choose your own, of course, by going to the &lt;a href="https://registry.terraform.io/browse/providers" rel="noopener noreferrer"&gt;registry&lt;/a&gt; and selecting whatever suits you.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;OCI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You have multiple ways to connect to the OCI provider, as stated in the &lt;a href="https://registry.terraform.io/providers/oracle/oci/latest/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, but I will only discuss the default one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm#APIKeyAuth" rel="noopener noreferrer"&gt;API Key Authentication&lt;/a&gt; (default)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm#instancePrincipalAuth" rel="noopener noreferrer"&gt;Instance Principal Authorization&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm#resourcePrincipalAuth" rel="noopener noreferrer"&gt;Resource Principal Authorization&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformproviderconfiguration.htm#securityTokenAuth" rel="noopener noreferrer"&gt;Security Token Authentication&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on your tenancy and user, you will have to specify the following details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;tenancy_ocid&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;user_ocid&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;private_key or private_key_path&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;private_key_password (optional, only required if your private key is encrypted)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;fingerprint&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;region&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  provider "oci" {
      tenancy_ocid     = "tenancy_ocid"
      user_ocid        = "user_ocid"
      fingerprint      = "fingerprint"
      private_key_path = "private_key_path"
      region           = "region"
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After you specify the correct values for all of the fields mentioned above, you can interact with your Oracle Cloud Infrastructure account.&lt;/p&gt;

&lt;p&gt;You will be able to define resources and data sources to create and configure different pieces of infrastructure.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;AZURE&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Similar to the OCI provider, the Azure provider can be configured in multiple ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/oracle/oci/latest/docs/guides/docs/guides/azure_cli" rel="noopener noreferrer"&gt;Authenticating to Azure using the Azure CLI&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/oracle/oci/latest/docs/guides/docs/guides/managed_service_identity" rel="noopener noreferrer"&gt;Authenticating to Azure using Managed Service Identity&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/oracle/oci/latest/docs/guides/docs/guides/service_principal_client_certificate" rel="noopener noreferrer"&gt;Authenticating to Azure using a Service Principal and a Client Certificate&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/oracle/oci/latest/docs/guides/docs/guides/service_principal_client_secret" rel="noopener noreferrer"&gt;Authenticating to Azure using a Service Principal and a Client Secret&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/oracle/oci/latest/docs/guides/docs/guides/service_principal_oidc" rel="noopener noreferrer"&gt;Authenticating to Azure using OpenID Connect&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will discuss the Azure CLI option which is the easiest way to authenticate by leveraging the &lt;code&gt;az login&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    provider "azurerm" {
      features {}
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the only configuration you have to do in the Terraform code; after running &lt;code&gt;az login&lt;/code&gt; and following the prompt, you are good to go.&lt;/p&gt;

&lt;p&gt;Originally posted on &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-2-providers-8d2c97ef6cf1?sk=11b38c8ba2e65817a0844b9371a64d29" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. What is a Terraform Resource?
&lt;/h3&gt;

&lt;p&gt;Resources in Terraform refer to the components of infrastructure that Terraform is able to manage, such as virtual machines, virtual networks, DNS entries, pods, and others. Each resource is defined by a type, such as “aws_instance”, “google_dns_record”, “kubernetes_pod”, or “oci_core_vcn”, and has a set of configurable properties, such as the instance size, the VCN CIDR, etc. Remember the cat example from the first article in this series.&lt;/p&gt;

&lt;p&gt;Terraform can be used to create, update, and delete resources, managing dependencies between them and ensuring they are created in the correct order. You can also create explicit dependencies between resources, if you would like to, by using &lt;code&gt;depends_on&lt;/code&gt;.&lt;/p&gt;
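&lt;p&gt;As a small sketch of what such an explicit dependency looks like (the resources and values here are hypothetical, just to show the shape):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "logs" {
  bucket = "my-example-logs-bucket"  # hypothetical name
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"  # hypothetical image id
  instance_type = "t3.micro"

  # Force the bucket to be created before this instance,
  # even though none of the arguments reference it.
  depends_on = [aws_s3_bucket.logs]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;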

&lt;p&gt;Let’s go in-depth and try to understand how to create these resources and how we can leverage the documentation.&lt;/p&gt;

&lt;p&gt;I will start with something simple: an &lt;code&gt;aws_vpc&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First things first: whenever you are creating a resource, you will need to go to the documentation. It is really important to understand what you can do with a particular resource, and I believe you should try to build a habit around this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53024224-dae3-466f-a755-7b4238d72eee_1400x429.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2kgpievwhqy5q8q7j5j.png" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the right-hand side, you have the &lt;code&gt;On this page&lt;/code&gt; section, with 4 elements that you should know as well as you know how to count:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Example Usage&lt;/strong&gt; → this will show you a couple of examples of how to use the resource&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Argument Reference&lt;/strong&gt; → in this section you are going to see all the parameters that can be configured for a resource. Some parameters are mandatory and others are optional (optional ones are marked with &lt;code&gt;Optional&lt;/code&gt; in parentheses)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Attributes Reference&lt;/strong&gt; → here you will find out what a resource exposes and I will talk about this in more detail when I get to the Outputs article&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Import&lt;/strong&gt; → Even if you knew nothing about Terraform before reading my articles, you may already suspect that it’s possible to import existing resources into the state, and that is correct. In this section, you will find out how to import that particular resource type&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So let’s create a &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/vpc" rel="noopener noreferrer"&gt;VPC&lt;/a&gt;. First, we need to define the provider as specified in the last article; the only caveat now is that we are doing it for a different cloud provider. If you forgot how to do it, just reference the previous article, as it has examples for Azure and OCI, and you can easily do the same for AWS.&lt;/p&gt;

&lt;p&gt;Nevertheless, I will show you an option for AWS too, using the credentials file that the AWS CLI also leverages. The provider will automatically read the &lt;strong&gt;aws_access_key_id&lt;/strong&gt; and &lt;strong&gt;aws_secret_access_key&lt;/strong&gt; from &lt;code&gt;~/.aws/credentials&lt;/code&gt;, so make sure you have that configured. An example can be found &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
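&lt;p&gt;For reference, the credentials file is a plain INI-style file; the values below are placeholders, not real keys:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey1234567890
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;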

&lt;p&gt;We are then going to add the vpc configuration. Everything should be saved in a &lt;code&gt;.tf&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_vpc" "example" {
      cidr_block = "10.0.0.0/16"
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we see in the documentation, all the parameters for the VPC are optional; AWS will assign defaults for everything you don’t specify.&lt;/p&gt;

&lt;p&gt;Now it is time to run the code. For this, we are going to learn some essential Terraform commands; I’m just going to touch upon the basics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt; → Initializes a working directory with Terraform configuration files. This should be the first command executed after creating a new Terraform configuration or cloning an existing one from version control. It also downloads the specified providers and modules, if you are using any, and saves them in a generated &lt;code&gt;.terraform&lt;/code&gt; directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt; → generates an execution plan, allowing you to preview the changes Terraform intends to make to your infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt; → executes the actions proposed in a Terraform plan. If you don’t provide it a plan file, it will generate an execution plan when you run the command, as if &lt;code&gt;terraform plan&lt;/code&gt; had run. It prompts for your input, so don’t be scared to run it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform destroy&lt;/code&gt; → is a simple way to delete all remote objects managed by a specific Terraform configuration. This prompts for your input so don’t be scared to run it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will get into more details when we tackle all the Terraform commands later in this series.&lt;/p&gt;

&lt;p&gt;Ok, now that you know the basics, let’s run the code.&lt;/p&gt;

&lt;p&gt;Go to the directory where you’ve created your terraform file with the above configuration and run &lt;code&gt;terraform init&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_Initializing the backend...  

Initializing provider plugins...  
\- Finding latest version of hashicorp/aws...  
\- Installing hashicorp/aws v4.50.0...  
\- Installed hashicorp/aws v4.50.0 (signed by HashiCorp)  

Terraform has created a lock file .terraform.lock.hcl to record the provider  
selections it made above. Include this file in your version control repository  
so that Terraform can guarantee to make the same selections by default when  
you run "terraform init" in the future.  

Terraform has been successfully initialized!  

You may now begin working with Terraform. Try running "terraform plan" to see  
any changes that are required for your infrastructure. All Terraform commands  
should now work.  

If you ever set or change modules or backend configuration for Terraform,  
rerun this command to reinitialize your working directory. If you forget, other  
commands will detect it and remind you to do so if necessary._
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, let’s run &lt;code&gt;terraform plan&lt;/code&gt; to see what is going to happen:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:  
\+ create  

Terraform will perform the following actions:  
_  
# aws __vpc.example will be created  
\+ resource "aws__vpc" "example" {  
\+ arn = (known after apply)  
\+ cidr __block = "10.0.0.0/16"  
\+ default__network __acl__ id = (known after apply)  
\+ default __route__ table __id = (known after apply)  
\+ default__security __group__ id = (known after apply)  
\+ dhcp __options__ id = (known after apply)  
\+ enable __classiclink = (known after apply)  
\+ enable__classiclink __dns__ support = (known after apply)  
\+ enable __dns__ hostnames = (known after apply)  
\+ enable __dns__ support = true  
\+ enable __network__ address __usage__ metrics = (known after apply)  
\+ id = (known after apply)  
\+ instance __tenancy = "default"  
\+ ipv6__association __id = (known after apply)  
\+ ipv6__cidr __block = (known after apply)  
\+ ipv6__cidr __block__ network __border__ group = (known after apply)  
\+ main __route__ table __id = (known after apply)  
\+ owner__id = (known after apply)  
\+ tags __all = (known after apply)  
}  

Plan: 1 to add, 0 to change, 0 to destroy.  

───────────────────────────────────────────────────────────────────  

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now._

In the plan, we see that we are going to create one vpc with the above configuration. You can observe that the majority of the parameters will be known after apply, but the cidr block is the one that we’ve specified.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s apply the code and create the VPC with &lt;code&gt;terraform apply&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; _Terraform will perform the following actions:  

# aws_vpc.example will be created  
\+ resource "aws_vpc" "example" {  
\+ arn = (known after apply)  
\+ cidr_block = "10.0.0.0/16"  
\+ default_network_acl_id = (known after apply)  
\+ default_route_table_id = (known after apply)  
\+ default_security_group_id = (known after apply)  
\+ dhcp_options_id = (known after apply)  
\+ enable_classiclink = (known after apply)  
\+ enable_classiclink_dns_support = (known after apply)  
\+ enable_dns_hostnames = (known after apply)  
\+ enable_dns_support = true  
\+ enable_network_address_usage_metrics = (known after apply)  
\+ id = (known after apply)  
\+ instance_tenancy = "default"  
\+ ipv6_association_id = (known after apply)  
\+ ipv6_cidr_block = (known after apply)  
\+ ipv6_cidr_block_network_border_group = (known after apply)  
\+ main_route_table_id = (known after apply)  
\+ owner_id = (known after apply)  
\+ tags_all = (known after apply)  
}  

Plan: 1 to add, 0 to change, 0 to destroy.  

Do you want to perform these actions?  
Terraform will perform the actions described above.  
Only 'yes' will be accepted to approve.  

Enter a value:_

Enter yes at the prompt and you will be good to go.

 _Enter a value: yes  

aws_vpc.example: Creating...  
aws_vpc.example: Creation complete after 3s [id=vpc-some_id]  

Apply complete! Resources: 1 added, 0 changed, 0 destroyed._

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Woohoo, we have created a VPC using Terraform. You can then go into the AWS console and see it in the region you’ve specified for your provider.&lt;/p&gt;

&lt;p&gt;Very nice and easy, I would say, but before destroying the VPC, let’s see how we can create a resource that references this existing VPC.&lt;/p&gt;

&lt;p&gt;An AWS internet gateway exists only inside a VPC. So let’s go and check the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/internet_gateway" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for the internet gateway.&lt;/p&gt;

&lt;p&gt;From the documentation, we see directly in the example that we can reference a VPC id when creating the internet gateway:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
resource "aws_internet_gateway" "gw" {
  vpc_id = ""
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But the 100-point question is: how can we reference the VPC created above? Well, that is not very hard if you remember the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;type.name.attribute&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All resources have a type, a name, and some attributes they expose. The exposed attributes are listed in the Attributes Reference section of the provider documentation. Some providers explicitly mention that they expose everything from the Argument Reference plus the Attributes Reference.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s take our VPC as an example: its &lt;strong&gt;type&lt;/strong&gt; is &lt;strong&gt;aws_vpc&lt;/strong&gt;, its name is &lt;strong&gt;example&lt;/strong&gt;, and it exposes a bunch of things (remember, the documentation is your best friend).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8494382f-a7cf-4b2d-95d8-081d5532088a_1297x393.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foymxxii8gxz88n2cqpxe.png" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, as the internet gateway requires a vpc_id and we want to reference our existing one, our code will look like this in the end:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.example.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then reapply the code with &lt;code&gt;terraform apply&lt;/code&gt; and Terraform will simply compare what we have already created with what we have in our configuration, creating only the internet gateway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws __vpc.example: Refreshing state... [id=vpc-some__ id]  

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:  
\+ create  

Terraform will perform the following actions:  

_# aws_internet_gateway.gw will be created  
\+ resource "aws_internet_gateway" "gw" {  
\+ arn = (known after apply)  
\+ id = (known after apply)  
\+ owner_id = (known after apply)  
\+ tags_all = (known after apply)  
\+ vpc_id = "vpc-some_id"  
}  

Plan: 1 to add, 0 to change, 0 to destroy.  

Do you want to perform these actions?  
Terraform will perform the actions described above.  
Only 'yes' will be accepted to approve.  

Enter a value: yes  

aws_internet_gateway.gw: Creating...  
aws_internet_gateway.gw: Creation complete after 2s [id=igw-some_igw]  

Apply complete! Resources: 1 added, 0 changed, 0 destroyed._

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty neat, right?&lt;/p&gt;

&lt;p&gt;Once we are done with our infrastructure, we can destroy it, using &lt;code&gt;terraform destroy&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_aws_vpc.example: Refreshing state... [id=vpc-some_vpc]  
aws_internet_gateway.gw: Refreshing state... [id=igw-some_igw]  

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:  
\- destroy  

Terraform will perform the following actions:  

# aws_internet_gateway.gw will be destroyed  
\- resource "aws_internet_gateway" "gw" {  
\- all parameters are specified here  
}  

# aws_vpc.example will be destroyed  
\- resource "aws_vpc" "example" {  
\- all parameters are specified here  
}  

Plan: 0 to add, 0 to change, 2 to destroy.  

Do you really want to destroy all resources?  
Terraform will destroy all your managed infrastructure, as shown above.  
There is no undo. Only 'yes' will be accepted to confirm.  

Enter a value: yes  

aws_internet_gateway.gw: Destroying... [id=igw-some_igw]  
aws_internet_gateway.gw: Destruction complete after 2s  
aws_vpc.example: Destroying... [id=vpc-some_vpc]  
aws_vpc.example: Destruction complete after 0s  

Destroy complete! Resources: 2 destroyed._

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can be a little overwhelming, but bear with me and understand the key points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A resource is a component that can be managed with Terraform (a VM, a Kubernetes Pod, etc)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Documentation is your best friend; understand how to use those 4 sections from it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are 4 essential commands that help you provision and destroy your infrastructure: init/plan/apply/destroy&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When referencing a resource from a configuration we are using &lt;code&gt;type.name.attribute&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Originally posted on &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-3-resources-b9cb97c5a30c?sk=49f3b0e3062da6370ef9343c306670da" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  4. Data Sources and Outputs
&lt;/h1&gt;

&lt;p&gt;Terraform resources are great and you can do a bunch of stuff with them. But did I tell you that you can use Data Sources and Outputs in conjunction with them to better implement your use case? Let’s jump into it.&lt;/p&gt;

&lt;p&gt;To put it as simply as possible, a data source is a configuration object that retrieves data from an external source; that data can then be used as arguments when resources are created or updated. When I say external source, I mean absolutely anything: manually created infrastructure, resources created from other Terraform configurations, and so on.&lt;/p&gt;

&lt;p&gt;Data sources are defined in their respective providers, and you use them with a special block called &lt;strong&gt;data&lt;/strong&gt;. The documentation of a data source is pretty similar to that of a resource, so if you’ve mastered how to read the latter, this will be a piece of cake.&lt;/p&gt;

&lt;p&gt;Let’s take an example of a data source that returns the most recent ami_id (image id) of an Ubuntu image in AWS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    provider "aws" {
      region = "us-east-1"
    }

    data "aws_ami" "ubuntu" {
      filter {
        name   = "name"
        values = ["ubuntu-*"]
      }
      most_recent = true
    }

    output "ubuntu" {
      value = data.aws_ami.ubuntu.id
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I am putting a filter on the image name, matching all image names that start with &lt;code&gt;ubuntu-&lt;/code&gt;. I’m adding &lt;code&gt;most_recent = true&lt;/code&gt; to get only one image, as the &lt;em&gt;aws_ami&lt;/em&gt; data source doesn’t support returning multiple image ids. So this data source returns only the most recent Ubuntu AMI.&lt;/p&gt;

&lt;p&gt;In the code example, there is also a reference to an output, but I haven’t exactly told you what an output is, have I?&lt;/p&gt;

&lt;p&gt;An output is a way to easily view the value of a specific data source, resource, local, or variable after Terraform has finished applying changes to the infrastructure. It is defined in the Terraform configuration file and can be viewed using the &lt;code&gt;terraform output&lt;/code&gt; command, but just to reiterate, only after a &lt;code&gt;terraform apply&lt;/code&gt; happens. Outputs can also be used to expose values from a module, but we will discuss this in another post.&lt;/p&gt;

&lt;p&gt;Outputs don’t depend on a provider at all; they are a special kind of block that works independently of providers.&lt;/p&gt;

&lt;p&gt;The most important parameters an output supports are &lt;strong&gt;value&lt;/strong&gt;, in which you specify what you want to see, and the optional &lt;strong&gt;description&lt;/strong&gt;, in which you explain what that output is meant to show.&lt;/p&gt;
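&lt;p&gt;As a minimal sketch, an output using both parameters for the data source above might look like this (the description text is just an illustration):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "ubuntu" {
  description = "The id of the most recent Ubuntu AMI"
  value       = data.aws_ami.ubuntu.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;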

&lt;p&gt;Let’s go back to our example.&lt;/p&gt;

&lt;p&gt;In the output, I have specified a reference to the above data source. When we reference a resource, we use &lt;code&gt;type.name.attribute&lt;/code&gt;; for data sources, it’s pretty much the same, but we have to prefix it with &lt;strong&gt;data&lt;/strong&gt;, so &lt;code&gt;data.type.name.attribute&lt;/code&gt; will do the trick.&lt;/p&gt;

&lt;p&gt;As I mentioned above, in order to see what this returns, you will first have to apply the code. You are not going to see the contents of a data source without an output, so I encourage you to use them at the beginning when you are trying to get familiar with them.&lt;/p&gt;

&lt;p&gt;This is the output of a &lt;code&gt;terraform apply&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data.aws_ami.ubuntu: Reading...
data.aws_ami.ubuntu: Read complete after 3s [id=ami-0f388924d43083179]

Changes to Outputs:
  + ubuntu = "ami-0f388924d43083179"

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

ubuntu = "ami-0f388924d43083179"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;During an apply, the first &lt;strong&gt;state&lt;/strong&gt; a resource goes through is &lt;strong&gt;creating&lt;/strong&gt;, while a data source goes through a &lt;strong&gt;reading&lt;/strong&gt; state; this should make the difference between them clearer.&lt;/p&gt;

&lt;p&gt;Now, let’s use this image to create an EC2 instance:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}

data "aws_ami" "ubuntu" {
  filter {
    name   = "name"
    values = ["ubuntu-*"]
  }
  most_recent = true
}

output "ubuntu" {
  value = data.aws_ami.ubuntu.id
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Just by referencing the AMI id from our data source and specifying an instance type, we are able to create an EC2 instance with a &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Don’t forget to delete your infrastructure if you are practicing, as everything you create will incur costs. Do that with a &lt;code&gt;terraform destroy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Originally posted on &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-4-data-sources-and-outputs-5d044d69f499" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  5. Terraform Variables and Locals
&lt;/h1&gt;

&lt;p&gt;Terraform variables and locals are used to better organize your code, easily change it, and make your configuration reusable.&lt;/p&gt;

&lt;p&gt;Before jumping into variables and locals in Terraform, let’s first discuss their supported types.&lt;/p&gt;

&lt;p&gt;Usually, in any programming language, when we define a variable or a constant, we either assign it a type explicitly or the type is inferred.&lt;/p&gt;

&lt;p&gt;Supported types in Terraform:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primitive:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;String&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Number&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bool&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Complex — these types are created from other types:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;List&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Map&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Object&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Null — Usually represents absence, really useful in conditional expressions.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is also the &lt;code&gt;any&lt;/code&gt; type, in which you basically add whatever you want without caring about the type, but I really don’t recommend it as it will make your code harder to maintain.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Variables&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Every variable will be declared with a &lt;code&gt;variable&lt;/code&gt; block and we will always use it with &lt;code&gt;var.variable_name&lt;/code&gt;. Let’s see this in action:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type
}

variable "instance_type" {
  description = "Instance Type of the variable"
  type        = string
  default     = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;I have declared a variable called &lt;code&gt;instance_type&lt;/code&gt; and added 3 fields to it, all of which are optional, but it is usually a best practice to add them, or at least the &lt;strong&gt;type&lt;/strong&gt; and &lt;strong&gt;description&lt;/strong&gt;. There are three other possible arguments (sensitive, validation, and nullable), but let’s not get overwhelmed by those just yet.&lt;/p&gt;

&lt;p&gt;In the resource block above, I’m referencing the variable, with &lt;code&gt;var.instance_type&lt;/code&gt; and due to the fact I’ve set the &lt;strong&gt;default&lt;/strong&gt; value to t2.micro, my variable will get that particular value and I don’t need to do anything else. Cool, right?&lt;/p&gt;

&lt;p&gt;Well, let’s suppose we don’t provide any &lt;strong&gt;default&lt;/strong&gt; value, do nothing else, and run a &lt;code&gt;terraform apply&lt;/code&gt;. As Terraform does not know the value of the variable, it will ask you to provide one. Pretty neat: even if you forget to assign a value, Terraform will prompt you for it. Relying on this is not a best practice, though.&lt;/p&gt;

&lt;p&gt;There are a couple of other ways you can assign values to variables. If you specify a value for the same variable in multiple ways, Terraform will use the last value it finds, following the precedence order. Let me walk you through them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;using a &lt;strong&gt;default&lt;/strong&gt; → as in the example above, this will be overwritten by any other option&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;using a &lt;code&gt;terraform.tfvars&lt;/code&gt; file → this is a special file in which you can add values to your variables&lt;/p&gt;

&lt;p&gt;instance_type = "t2.micro"&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;using a &lt;code&gt;*.auto.tfvars&lt;/code&gt; file → similar to the terraform.tfvars file, but will take precedence over it. The variables' values will be declared in the same way. The “*” is a placeholder for any name you want to use&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;using &lt;code&gt;-var&lt;/code&gt; or &lt;code&gt;-var-file&lt;/code&gt; when running terraform plan/apply/destroy. When you are using both of them in the same command, the value will be taken from the last option.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;terraform apply -var="instance_type=t3.micro" -var-file="terraform.tfvars"&lt;/code&gt;→ This will take the value from the var file, but if we specify the &lt;code&gt;-var&lt;/code&gt; option last, it will get the value from there.&lt;/p&gt;

&lt;p&gt;Some other variable examples:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "my_number" {
  description = "Number example"
  type        = number
  default     = 10
}

variable "my_bool" {
  description = "Bool example"
  type        = bool
  default     = false
}

variable "my_list_of_strings" {
  description = "List of strings example"
  type        = list(string)
  default     = ["string1", "string2", "string3"]
}

variable "my_map_of_strings" {
  description = "Map of strings example"
  type        = map(string)
  default = {
    key1 = "value1"
    key2 = "value2"
    key3 = "value"
  }
}

variable "my_object" {
  description = "Object example"
  type = object({
    parameter1 = string
    parameter2 = number
    parameter3 = list(number)
  })
  default = {
    parameter1 = "value"
    parameter2 = 1
    parameter3 = [1, 2, 3]
  }
}

variable "my_map_of_objects" {
  description = "Map(object) example"
  type = map(object({
    parameter1 = string
    parameter2 = bool
    parameter3 = map(string)
  }))
  default = {
    elem1 = {
      parameter1 = "value"
      parameter2 = false
      parameter3 = {
        key1 = "value1"
      }
    }
    elem2 = {
      parameter1 = "another_value"
      parameter2 = true
      parameter3 = {
        key2 = "value2"
      }
    }
  }
}

variable "my_list_of_objects" {
  description = "List(object) example"
  type = list(object({
    parameter1 = string
    parameter2 = bool
  }))
  default = [
    {
      parameter1 = "value"
      parameter2 = false
    },
    {
      parameter1 = "value2"
      parameter2 = true
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In the above example, there are two variables of a simple type (number and bool), and a couple of complex types. As the simple ones are pretty easy to understand, let’s jump into the others.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;list(string)&lt;/code&gt; — in this variable, you can declare as many strings as you want inside the list. You access an element of the list by using &lt;code&gt;var.my_list_of_strings[index]&lt;/code&gt;. Keep in mind that list indices start at 0, so &lt;code&gt;var.my_list_of_strings[1]&lt;/code&gt; will return &lt;code&gt;string2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;map(string)&lt;/code&gt; — in this variable, you can declare as many key:value pairs as you want. You access an entry of the map by using &lt;code&gt;var.my_map_of_strings[key]&lt;/code&gt;, where key is what appears on the left-hand side of the equals sign. &lt;code&gt;var.my_map_of_strings["key3"]&lt;/code&gt; will return &lt;code&gt;value&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;object({})&lt;/code&gt; — inside an object, you declare parameters as you see fit. You can have both simple and complex types inside it, and you can declare as many as you want. You can consider an object to be a map with more explicit types defined for its keys. You access the members of an object using the same logic as you would for a map.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;map(object({}))&lt;/code&gt; — I’ve singled out this complex type because I use it a lot in my code; it works well with &lt;code&gt;for_each&lt;/code&gt; (don’t worry, we will talk about this in another post). You access a property of an object in the map by using &lt;code&gt;var.my_map_of_objects["key"]["parameter"]&lt;/code&gt;, and if any other complex parameters are defined, you will have to go deeper. &lt;code&gt;var.my_map_of_objects["elem1"]["parameter1"]&lt;/code&gt; will return &lt;code&gt;value&lt;/code&gt;. &lt;code&gt;var.my_map_of_objects["elem1"]["parameter3"]["key1"]&lt;/code&gt; will return &lt;code&gt;value1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;list(object({}))&lt;/code&gt; — this is something I use in &lt;code&gt;dynamic blocks&lt;/code&gt; (again, we will discuss these in detail in another post). You access a property of an object in the list by using &lt;code&gt;var.my_list_of_objects[index]["parameter"]&lt;/code&gt;. Again, if any parameters are complex, you will have to go deeper. &lt;code&gt;var.my_list_of_objects[0]["parameter1"]&lt;/code&gt; will return &lt;code&gt;value&lt;/code&gt;.&lt;/p&gt;
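&lt;p&gt;To make these access expressions concrete, here is a sketch of a few outputs based on the example variables above (the values in the comments follow from their defaults):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "list_element" {
  value = var.my_list_of_strings[1] # returns "string2"
}

output "map_value" {
  value = var.my_map_of_strings["key3"] # returns "value"
}

output "nested_object_value" {
  value = var.my_map_of_objects["elem1"]["parameter3"]["key1"] # returns "value1"
}

output "list_object_value" {
  value = var.my_list_of_objects[0]["parameter1"] # returns "value"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;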

&lt;p&gt;One important thing to note is that you &lt;strong&gt;cannot&lt;/strong&gt; reference other resources or data sources inside a variable, so you cannot set a variable equal to a resource attribute using the &lt;code&gt;type.name.attribute&lt;/code&gt; syntax.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Locals&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;On the other hand, a &lt;code&gt;local variable&lt;/code&gt; assigns a name to an expression, making it easier for you to reference it without having to write that expression a gazillion times. Locals are defined in a locals block, and you can have multiple local variables defined in a single block. Let’s take a look:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  instance_type = "t2.micro"
  most_recent   = true
}

data "aws_ami" "ubuntu" {
  filter {
    name   = "name"
    values = ["ubuntu-*"]
  }
  most_recent = local.most_recent
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = local.instance_type
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;As you can see, we define two local variables inside the locals block and reference them throughout our configuration with &lt;code&gt;local.local_variable_name&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As opposed to variables, inside a local you can reference whatever resource or data source attribute you want. We can even define more complex operations inside them, but for now, let’s just let this sink in, as we are going to experiment with these some more in the future.&lt;/p&gt;

&lt;p&gt;Originally posted on &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-5-variables-and-locals-d8ce17ee11a5" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  6. Terraform Provisioners and Null Resource
&lt;/h1&gt;

&lt;p&gt;Terraform provisioners have nothing in common with providers. You can use provisioners to run commands or scripts on your local machine or a remote machine, and also to copy files from your local machine to a remote one. Provisioners exist inside a resource, so in order to use one, you simply add a provisioner block to that particular resource.&lt;/p&gt;

&lt;p&gt;One thing worth mentioning is that a provisioner cannot reference its parent resource by name, but it can use the self object, which represents that resource.&lt;/p&gt;

&lt;p&gt;They are considered a last resort, as they are not a part of the Terraform declarative model.&lt;/p&gt;

&lt;p&gt;There are 3 types of provisioners:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;local-exec&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;file&lt;/strong&gt; (should be used in conjunction with a connection block)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;remote-exec&lt;/strong&gt; (should be used in conjunction with a connection block)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All provisioners support two interesting options: &lt;strong&gt;when&lt;/strong&gt; and &lt;strong&gt;on_failure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can run provisioners either when the resource is created (which is, of course, the default) or, if your use case asks for it, when the resource is destroyed.&lt;/p&gt;

&lt;p&gt;By default, &lt;strong&gt;on_failure&lt;/strong&gt; is set to fail, which fails the apply if the provisioner fails (the expected Terraform behaviour), but you can tell Terraform to ignore a failure by setting it to continue.&lt;/p&gt;
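&lt;p&gt;As a quick sketch of these two options used together (the echo command is just an illustration):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "cleanup" {
  provisioner "local-exec" {
    # Runs only when the resource is destroyed
    when       = destroy
    # A failure here will not fail the destroy
    on_failure = continue
    command    = "echo 'cleaning up'"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;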

&lt;p&gt;From experience, I can tell you that sometimes provisioners fail for no apparent reason, or they can even appear to be working while not doing what they are expected to. Still, I believe it is very important to know how to use them, because in some of your use cases you may not have any alternatives.&lt;/p&gt;

&lt;p&gt;Before jumping into each of the provisioners, let’s talk about null resources. A null resource is basically a resource that doesn’t create anything on its own, but you can use it to define provisioner blocks. Null resources also have a &lt;strong&gt;triggers&lt;/strong&gt; argument, which can be used to recreate the resource, and hence rerun its provisioner blocks, whenever the trigger values change.&lt;/p&gt;
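&lt;p&gt;A minimal sketch of the trigger idea, assuming a hypothetical local setup.sh script: whenever the hash of the script changes, the null resource is replaced and its provisioner runs again.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "run_script" {
  # Changing any value in triggers replaces the resource,
  # which reruns its provisioners on the next apply
  triggers = {
    script_hash = filemd5("${path.module}/setup.sh")
  }

  provisioner "local-exec" {
    command = "sh ${path.module}/setup.sh"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;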
&lt;h2&gt;
  
  
  &lt;strong&gt;Local-Exec&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As its name suggests, a local-exec block is going to run a script on your local machine. Nothing too fancy about it. Apart from the when and on_failure options, there are a couple of other options you can specify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;command&lt;/strong&gt; — what to run; this is the only required argument.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;working_dir&lt;/strong&gt; — where to run it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;interpreter&lt;/strong&gt; — what interpreter to use (e.g. /bin/bash); by default, Terraform will decide based on your operating system&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;environment&lt;/strong&gt; — key/value pairs that represent the environment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s see this in action in a null resource and observe the output of a &lt;code&gt;terraform apply&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "this" {
  provisioner "local-exec" {
    command = "echo Hello World!"
  }
}

# null_resource.this: Creating...
# null_resource.this: Provisioning with 'local-exec'...
# null_resource.this (local-exec): Executing: ["/bin/sh" "-c" "echo Hello World!"]
# null_resource.this (local-exec): Hello World!
# null_resource.this: Creation complete after 0s [id=someid]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You can use this to run different scripts before or after an apply of a specific resource by using &lt;strong&gt;depends_on&lt;/strong&gt; (we will talk about this in another article in more detail).&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Connection Block&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before going into the other two provisioners, remote-exec and file, let’s take some time to understand the connection block. In order to run or copy something on a remote VM, you first have to connect to it, right?&lt;/p&gt;

&lt;p&gt;Connection blocks support both SSH and WinRM, so you can easily connect to both your Linux and Windows VMs.&lt;/p&gt;

&lt;p&gt;You even have the option to connect via a bastion host or a proxy, but I will just show you a simple connection block for a Linux VM.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;connection {
    type        = "ssh"
    user        = "root"
    private_key = "private_key_contents"
    host        = "host"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;File&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The file provisioner is used to copy a file from your local machine to a remote VM. Three arguments are supported:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;source (what file to copy)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;content (the direct content to copy on the destination)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;destination (where to put the file)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned before, file needs a connection block to work properly. Let’s see an example on an EC2 instance.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}

locals {
  instance_type = "t2.micro"
  most_recent   = true
}

data "aws_ami" "ubuntu" {
  filter {
    name   = "name"
    values = ["ubuntu-*"]
  }
  most_recent = local.most_recent
}

resource "aws_key_pair" "this" {
  key_name   = "key"
  public_key = file("~/.ssh/id_rsa.pub")
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = local.instance_type
  key_name      = aws_key_pair.this.key_name
}

resource "null_resource" "copy_file_on_vm" {
  depends_on = [
    aws_instance.web
  ]
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = aws_instance.web.public_dns
  }
  provisioner "file" {
    source      = "./file.yaml"
    destination = "./file.yaml"
  }
}

# null_resource.copy_file_on_vm: Creating...
# null_resource.copy_file_on_vm: Provisioning with 'file'...
# null_resource.copy_file_on_vm: Creation complete after 2s [id=someid]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Remote-Exec&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Remote-exec is used to run a command or a script on a remote VM.&lt;/p&gt;

&lt;p&gt;It supports the following arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;inline → list of commands that should run on the vm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;script → a script that runs on the vm&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;scripts → multiple scripts to run on the vm&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You have to provide exactly one of the above arguments, as they are not going to work together.&lt;/p&gt;

&lt;p&gt;Similar to file, you will need to add a connection block.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "null_resource" "remote_exec" {
  depends_on = [
    aws_instance.web
  ]
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = aws_instance.web.public_dns
  }
  provisioner "remote-exec" {
    inline = [
      "mkdir dir1"
    ]
  }
}

# null_resource.remote_exec: Creating...
# null_resource.remote_exec: Provisioning with 'remote-exec'...
# null_resource.remote_exec (remote-exec): Connecting to remote host via SSH...
# null_resource.remote_exec (remote-exec):   Host: somehost
# null_resource.remote_exec (remote-exec):   User: ubuntu
# null_resource.remote_exec (remote-exec):   Password: false
# null_resource.remote_exec (remote-exec):   Private key: true
# null_resource.remote_exec (remote-exec):   Certificate: false
# null_resource.remote_exec (remote-exec):   SSH Agent: true
# null_resource.remote_exec (remote-exec):   Checking Host Key: false
# null_resource.remote_exec (remote-exec):   Target Platform: unix
# null_resource.remote_exec (remote-exec): Connected!
# null_resource.remote_exec: Creation complete after 3s [id=someid]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;In order to use the above code, take the file example, replace the copy_file_on_vm null resource with this one, and you are good to go.&lt;/p&gt;

&lt;p&gt;You also have to be able to connect to your VM, so make sure your security group has a rule that permits SSH access.&lt;/p&gt;

&lt;p&gt;Even though I don’t recommend provisioners, keep in mind they may be a necessary evil.&lt;/p&gt;

&lt;p&gt;Originally posted on &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-6-provisioners-and-null-resources-adcaa2ca0ba8" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  7. Terraform Loops and Conditionals
&lt;/h1&gt;

&lt;p&gt;In this post, we will talk about how to use Count, for_each, for loops, ifs, and ternary operators inside of Terraform. It will be a long journey, but this will help a lot when writing better Terraform code.&lt;/p&gt;

&lt;p&gt;As I am planning to use the Kubernetes provider in this lesson, you can easily create your own Kubernetes cluster using Kind. More details &lt;a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/MrL-QeIjK60"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;👉&lt;a href="https://www.instagram.com/reel/CoQGDC8Ju3l/?utm_source=ig_web_copy_link" rel="noopener noreferrer"&gt;COUNT&lt;/a&gt; 👈&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If I say I hate count, that would be an understatement. I get why people use it, but in my book, since for_each was released, I never looked back. &lt;strong&gt;This is my really unpopular opinion, so don’t take my word for it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s see what we can do with count. Using count we can, you guessed it, create multiple resources of the same type. Every Terraform resource supports the count meta-argument. Count exposes a &lt;code&gt;count.index&lt;/code&gt; object, which can be used in the same way you would use an iterator in any programming language.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "this" {
  count = 5
  metadata {
    name = format("ns%d", count.index)
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above block will create 5 namespaces in Kubernetes with the following names: ns0, ns1, ns2, ns3, ns4 (remember, count.index starts at 0). All fun and games until now. If you change the count to 4, the last namespace, ns4, will be deleted when you re-apply the code.&lt;/p&gt;

&lt;p&gt;In any case, when you are using count, you can address a particular index of your resource by using &lt;code&gt;type.name[index]&lt;/code&gt;. In our case that means individual resources can be accessed with &lt;code&gt;kubernetes_namespace.this[0]&lt;/code&gt; to &lt;code&gt;kubernetes_namespace.this[4]&lt;/code&gt;.&lt;/p&gt;
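&lt;p&gt;For instance, a hypothetical output referencing the first namespace created by count could look like this (metadata is a block, so it is accessed as a list):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "first_namespace" {
  # Individual count instances are addressed as type.name[index]
  value = kubernetes_namespace.this[0].metadata[0].name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;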

&lt;p&gt;Let’s suppose you want to customize the namespace names a little bit. For that, we can use a local or a variable in conjunction with a function. Don’t worry if you see a couple of functions now; we will discuss functions in detail in a separate article.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  namespaces = ["frontend", "backend", "database"]
}

resource "kubernetes_namespace" "this" {
  count = length(local.namespaces)
  metadata {
    name = local.namespaces[count.index]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above will create three namespaces called frontend, backend, and database. Let’s suppose for whatever reason, you want to remove the backend namespace and keep only the other two.&lt;/p&gt;

&lt;p&gt;What is going to happen when you reapply the code?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 1 to add, 0 to change, 2 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s break this down. We are using a list with 3 elements, which means that our list has 3 indexes: 0, 1, and 2. If we remove the element at index 1, backend, the element at index 2 becomes the element at index 1.&lt;/p&gt;

&lt;p&gt;To make it even more clear, initially, we have the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;kubernetes_namespace.this[0] → frontend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubernetes_namespace.this[1] → backend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubernetes_namespace.this[2] → database&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After we remove backend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;kubernetes_namespace.this[0] → frontend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubernetes_namespace.this[1] → database&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because database changes its Terraform identity by moving from index 2 to index 1, it will be recreated, and imagine all the problems you will have when recreating a namespace with a ton of things inside.&lt;/p&gt;

&lt;p&gt;Also, let’s suppose you are creating a hundred EC2 instances and you are using a list to configure them. For some reason, you want to remove the instance at index 13 (see what I did there?). What’s going to happen with the ones from index 14 to 99? They will all be recreated, because each of them will shift down one index.&lt;/p&gt;

&lt;p&gt;And that’s why I hate count. It doesn’t give you the flexibility to create very generic resources and in my book, that’s a hard pass.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;👉&lt;a href="https://www.instagram.com/p/CofS84YorUA/" rel="noopener noreferrer"&gt;FOR_EACH&lt;/a&gt; 👈&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I am a big fan of using for_each on all resources, as you never know when you will want to create multiple resources of the same kind. for_each can be used with map and set variables, but I don’t remember a use case in which I used a set. So what I always do is use for_each on maps, more specifically on &lt;code&gt;map(object)&lt;/code&gt;. I’ll show you what that looks like in a bit.&lt;/p&gt;

&lt;p&gt;for_each exposes one object called &lt;code&gt;each&lt;/code&gt;, which contains a key and a value accessible through &lt;code&gt;each.key&lt;/code&gt; and &lt;code&gt;each.value&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With for_each, you will reference an instance of your resource with &lt;code&gt;type.name[key]&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s use a variable this time to create the namespaces, and let’s configure some more information for them in order to see why for_each is superior from my point of view.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "this" {
  for_each = var.namespaces
  metadata {
    name        = each.key
    annotations = each.value.annotations
    labels      = each.value.labels
  }
}

variable "namespaces" {
  type = map(object({
    annotations = optional(map(string), {})
    labels      = optional(map(string), {})
  }))
  default = {
    namespace1 = {}
    namespace2 = {
      labels = {
        color = "green"
      }
    }
    namespace3 = {
      annotations = {
        imageregistry = "https://hub.docker.com/"
      }
    }
    namespace4 = {
      labels = {
        color = "blue"
      }
      annotations = {
        imageregistry = "my_awesome_registry"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We have defined a variable called &lt;code&gt;namespaces&lt;/code&gt; and we are going to iterate through it on the &lt;code&gt;kubernetes_namespace&lt;/code&gt; resource. This variable has a &lt;code&gt;map(object)&lt;/code&gt; type and inside of it, we’ve defined two optional properties: annotations and labels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional&lt;/strong&gt; can be used on parameters inside object variables to make it possible to omit that particular parameter and to provide a default value for it instead. As this feature has been available since Terraform 1.3.0, I believe it will soon be embraced by the community as a best practice (for me, it already is). Inside this variable, we have added a default value just for demo purposes. In the real world, you are going to separate resources and variables into their own files anyway, and you will provide default values as empty maps if you are using the above approach with optionals on your parameters, but that’s a completely different story.&lt;/p&gt;

&lt;p&gt;Let’s go a little bit through our default value:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each top-level key (&lt;code&gt;namespace1&lt;/code&gt; to &lt;code&gt;namespace4&lt;/code&gt;) will be our &lt;code&gt;each.key&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Everything after &lt;code&gt;namespaceX =&lt;/code&gt; will be our &lt;code&gt;each.value&lt;/code&gt; → meaning that if we want to reference labels or annotations, we use &lt;code&gt;each.value.labels&lt;/code&gt; or &lt;code&gt;each.value.annotations&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because both parameters inside our variable have been defined as optional, we can omit them, meaning that &lt;code&gt;namespace1 = {}&lt;/code&gt; is a valid configuration. No explicit name parameter is needed either, because in our resource we’ve assigned the name in the metadata block to &lt;code&gt;each.key&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The entire configuration translates into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;namespace1&lt;/code&gt; → will have no labels and no annotations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;namespace2&lt;/code&gt; → will have only labels&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;namespace3&lt;/code&gt; → will have only annotations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;namespace4&lt;/code&gt; → will have both labels and annotations&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I want to remove &lt;code&gt;namespace2&lt;/code&gt; for whatever reason, what is going to happen to &lt;code&gt;namespace3&lt;/code&gt; and &lt;code&gt;namespace4&lt;/code&gt;? Absolutely &lt;strong&gt;nothing.&lt;/strong&gt; Due to the fact that we are not using a list anymore and we are using a map, by removing an element of the map, there is going to be no change to the others (remember, Terraform identifies our resources with &lt;code&gt;kubernetes_namespace.this["namespace1"]&lt;/code&gt; to &lt;code&gt;kubernetes_namespace.this["namespace4"]&lt;/code&gt;).&lt;/p&gt;
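&lt;p&gt;As a sketch of what key-based addressing looks like in practice (the output name is illustrative, and the metadata attribute path follows the kubernetes provider’s convention of exposing blocks as lists):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "namespace2_labels" {
  # Instances created with for_each are addressed by key, not by index
  value = kubernetes_namespace.this["namespace2"].metadata[0].labels
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;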

&lt;p&gt;And this is why I will always vouch for &lt;code&gt;for_each&lt;/code&gt; instead of count.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;👉&lt;a href="https://www.instagram.com/reel/CopyNIWOJTm/?utm_source=ig_web_copy_link" rel="noopener noreferrer"&gt;Ternary Operators&lt;/a&gt; 👈&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I’ve seen plenty of people arguing that Terraform doesn’t have an &lt;strong&gt;if&lt;/strong&gt; instruction. Well, it does, but you can use it only when you are building complex expressions in &lt;strong&gt;for&lt;/strong&gt; loops (not for_each), which I will discuss soon. For any other type of condition, you have ternary operators, and the syntax is:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;condition ? val1 : val2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above means if the condition is true, use val1, if the condition is false, use val2. Let’s see it in action:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  use_local_name = false
  name           = "namespace1"
}

resource "kubernetes_namespace" "this" {
  metadata {
    name = local.use_local_name ? local.name : "namespace2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above example, I’m checking if &lt;code&gt;local.use_local_name&lt;/code&gt; is equal to true, and if it is, I’m going to provide to my namespace the name that is in &lt;code&gt;local.name&lt;/code&gt; otherwise, I am going to provide it &lt;code&gt;namespace2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Of course, due to the fact that I’ve set &lt;code&gt;use_local_name&lt;/code&gt; to false, this means that the name of my namespace will be &lt;code&gt;namespace2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The most beautiful, or the ugliest, part of ternary operators (depending on who reads the code) is that you can nest them.&lt;/p&gt;

&lt;p&gt;Let’s build a local variable that uses nested conditionals:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  val1               = 1
  val2               = 2
  val3               = 3
  nested_conditional = local.val2 &amp;gt; local.val1 ? local.val3 &amp;gt; local.val2 ? local.val3 : local.val2 : local.val1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above operation, we are checking initially if &lt;code&gt;val2&lt;/code&gt; is greater than &lt;code&gt;val1&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;if it’s not, &lt;code&gt;nested_conditional&lt;/code&gt; takes the value after the last “:”, which is &lt;code&gt;local.val1&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;if it is, then we check whether &lt;code&gt;val3&lt;/code&gt; is greater than &lt;code&gt;val2&lt;/code&gt; and:&lt;br&gt;&lt;br&gt;
- if &lt;code&gt;val3&lt;/code&gt; is greater than &lt;code&gt;val2&lt;/code&gt;, the value of &lt;code&gt;nested_conditional&lt;/code&gt; will be &lt;code&gt;val3&lt;/code&gt;&lt;br&gt;&lt;br&gt;
- if &lt;code&gt;val3&lt;/code&gt; is less than or equal to &lt;code&gt;val2&lt;/code&gt;, the value of &lt;code&gt;nested_conditional&lt;/code&gt; will be &lt;code&gt;val2&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These nested conditionals can get pretty hard to understand. If you see one that goes more than 3 or 4 levels deep, there is almost always an error in judgment somewhere, or you should rework the variable or expression you are building, because it will become almost impossible to maintain in the long run.&lt;/p&gt;
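&lt;p&gt;One way to keep such expressions readable is to split the nested conditional into intermediate locals. This sketch is equivalent to the expression above (the local name &lt;code&gt;inner_pick&lt;/code&gt; is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  # Inner conditional: the larger of val3 and val2
  inner_pick         = local.val3 &amp;gt; local.val2 ? local.val3 : local.val2
  # The outer conditional now reads in a single, flat line
  nested_conditional = local.val2 &amp;gt; local.val1 ? local.inner_pick : local.val1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;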

&lt;h2&gt;
  
  
  &lt;strong&gt;👉&lt;a href="https://www.instagram.com/reel/Co2luBxo84o/?utm_source=ig_web_copy_link" rel="noopener noreferrer"&gt;For loops and Ifs&lt;/a&gt; 👈&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you are familiar with Python, you are going to notice pretty easily that for loops and ifs in Terraform are pretty similar to Python’s list comprehensions.&lt;/p&gt;

&lt;p&gt;Let me show you what I’m talking about:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  list_var = range(5)
  map_var = {
    cat1 = {
      color = "orange",
      name  = "Garfield"
    },
    cat2 = {
      color = "blue",
      name  = "Tom"
    }
  }
  for_list_list = [for i in local.list_var : i * 2]
  for_list_map  = { for i in local.list_var : format("Number_%s", i) =&amp;gt; i }

  for_map_list = [for k, v in local.map_var : k]
  for_map_map  = { for k, v in local.map_var : format("Cat_%s", k) =&amp;gt; v }

  for_list_list_if = [for i in local.list_var : i if i &amp;gt; 2]
  for_map_map_if   = { for k, v in local.map_var : k =&amp;gt; v if v.color == "orange" }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’ve defined a list variable that generates the numbers from 0 to 4 (&lt;code&gt;list_var&lt;/code&gt;) and a map variable with two elements (&lt;code&gt;map_var&lt;/code&gt;). As you can see from the other 6 locals defined in the code snippet, you can build both lists and maps using this type of loop. By starting the value with &lt;code&gt;[&lt;/code&gt; you are creating a list, and by starting the value with &lt;code&gt;{&lt;/code&gt; you are creating a map.&lt;/p&gt;

&lt;p&gt;The difference from a syntax standpoint is that when you are building a map, you have to provide the &lt;code&gt;=&amp;gt;&lt;/code&gt; symbol between the key and the value. The sky is the limit when it comes to these expressions: you can nest them on as many levels as you want, depending on the structure you are iterating through, but this will, again, become very hard to maintain.&lt;/p&gt;

&lt;p&gt;If you are cycling through a map variable and you are using a single iterator, you will actually cycle only through the values of the map. By using two iterators, you will cycle through both keys and values (the first iterator will be the key, the second one the value).&lt;/p&gt;
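&lt;p&gt;For example, with the &lt;code&gt;map_var&lt;/code&gt; defined above, these two loops illustrate the difference (the local names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  # Single iterator: v is the value, so this collects just the colors
  cat_colors = [for v in local.map_var : v.color]
  # Two iterators: k is the key, v is the value
  cat_names  = { for k, v in local.map_var : k =&amp;gt; v.name }
}

# Result (maps are iterated in lexical key order):
cat_colors = ["orange", "blue"]
cat_names = {
  "cat1" = "Garfield"
  "cat2" = "Tom"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;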

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for_list_list = [
  0,
  2,
  4,
  6,
  8,
]
for_list_map = {
  "Number_0" = 0
  "Number_1" = 1
  "Number_2" = 2
  "Number_3" = 3
  "Number_4" = 4
}
for_map_list = [
  "cat1",
  "cat2",
]
for_map_map = {
  "Cat_cat1" = {
    "color" = "orange"
    "name" = "Garfield"
  }
  "Cat_cat2" = {
    "color" = "blue"
    "name" = "Tom"
  }
}
for_list_list_if = [
  3,
  4,
]
for_map_map_if = {
  "cat1" = {
    "color" = "orange"
    "name" = "Garfield"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Above are all the values of the locals defined with for loops and ifs. Let’s discuss the last two:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;for_list_list_if = [for i in local.list_var : i if i &amp;gt; 2]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is cycling through our initial list that holds the numbers from 0 to 4 and is creating a new list with only the elements that are greater than 2.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;for_map_map_if = { for k, v in local.map_var : k =&amp;gt; v if v.color == "orange" }&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This one is cycling through our initial map variable and will recreate a new map with all the elements that have the color equal to orange.&lt;/p&gt;

&lt;p&gt;There is another operator, the splat operator (&lt;code&gt;*&lt;/code&gt;), that provides a more concise way to express some common operations you would usually do with a &lt;code&gt;for&lt;/code&gt; loop. This operator works only on lists, sets, and tuples.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;splat_list = [
    {
      name = "Mike"
      age  = 25
    },
    {
      name = "Bob"
      age  = 29
    }
  ]
  splat_list_names = local.splat_list[*].name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If, for example, you had a list of maps in the above format, you could easily build a list of all the names or ages from it by using the splat operator.  &lt;/p&gt;
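&lt;p&gt;The splat expression above is shorthand for an equivalent &lt;code&gt;for&lt;/code&gt; expression (the local name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  # Equivalent to local.splat_list[*].name
  splat_list_names_for = [for person in local.splat_list : person.name]
}

# Result:
splat_list_names_for = ["Mike", "Bob"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;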

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-7-count-for-each-and-ternary-operators-dc1295c18a60" rel="noopener noreferrer"&gt;here&lt;/a&gt;.  &lt;/p&gt;

&lt;h1&gt;
  
  
  8. Terraform CLI Commands
&lt;/h1&gt;

&lt;p&gt;Throughout my posts, I said I don’t want to repeat myself or reinvent the wheel, right?&lt;/p&gt;

&lt;p&gt;Well, I am keeping my promise, so if you want to see all commands that you can use and even download an awesome cheatsheet with them, you can follow this article which really nails it: &lt;a href="https://spacelift.io/blog/terraform-commands-cheat-sheet" rel="noopener noreferrer"&gt;https://spacelift.io/blog/terraform-commands-cheat-sheet&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  9. Terraform Functions
&lt;/h1&gt;

&lt;p&gt;Terraform functions are built-in, reusable code blocks that perform specific tasks within Terraform configurations. They make your code more dynamic and ensure your configuration is DRY. Functions allow you to perform various operations, such as converting expressions to different data types, calculating lengths, and building complex variables.&lt;/p&gt;

&lt;p&gt;These functions are split into multiple categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;String&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Numeric&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Date and Time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Crypto and Hash&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Filesystem&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IP Network&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Encoding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Type Conversion&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This split, however, can become overwhelming to someone who doesn’t have much experience with Terraform. For example, the &lt;code&gt;formatlist&lt;/code&gt; function is considered a string function even though it modifies elements of a list. Since a list is a collection, some may argue that this function should be considered a collection function, but at its core, it makes changes to strings.&lt;/p&gt;

&lt;p&gt;For that particular reason, I won’t specify the function type when describing them; I’ll just go with what you can do with them. Of course, I will not go through all of the available functions, only through the ones I use throughout my configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;ToType Functions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ToType is not an actual function; rather, it is a family of functions that help you convert a variable from one type to another.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tonumber(argument)&lt;/code&gt; → With this function you can change a string to a number, anything else apart from another number and null will result in an error&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tostring(argument)&lt;/code&gt; → Changes a number/bool/string/null to a string&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tobool(argument)&lt;/code&gt; → Changes a string (only “true” or “false”)/bool/null to a bool&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tolist(argument)&lt;/code&gt; → Changes a set to a list&lt;/p&gt;

&lt;p&gt;&lt;code&gt;toset(argument)&lt;/code&gt; → Changes a list to a set&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tomap(argument)&lt;/code&gt; → Converts its argument to a map&lt;/p&gt;

&lt;p&gt;In Terraform, you are rarely going to need these types of functions, but I still thought they were worth mentioning.&lt;/p&gt;
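&lt;p&gt;Here is a quick sketch of a few of these conversions (the local names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  a_number = tonumber("3")          # 3
  a_string = tostring(3)            # "3"
  a_bool   = tobool("true")         # true
  a_set    = toset(["a", "b", "a"]) # duplicates are removed: ["a", "b"]
  a_list   = tolist(local.a_set)    # back to a list
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;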

&lt;h2&gt;
  
  
  &lt;strong&gt;format(string_format, unformatted_string)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The format function is similar to the printf function in C and works by formatting a number of values according to a specification string. It can be used to build different strings that may be used in conjunction with other variables. Here is an example of how to use this function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 string1       = "str1"
 string2       = "str2"
 int1          = 3
 apply_format  = format("This is %s", local.string1)
 apply_format2 = format("%s_%s_%d", local.string1, local.string2, local.int1)
}

output "apply_format" {
 value = local.apply_format
}
output "apply_format2" {
 value = local.apply_format2
}

# Result in:
apply_format  = "This is str1"
apply_format2 = "str1_str2_3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;formatlist(string_format, unformatted_list)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The formatlist function uses the same syntax as the format function but changes the elements in a list. Here is an example of how to use this function:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 format_list = formatlist("Hello, %s!", ["A", "B", "C"])
}

output "format_list" {
 value = local.format_list
}

# Result in:
format_list = tolist(["Hello, A!", "Hello, B!", "Hello, C!"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;length(list / string / map)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Returns the length of a string, list, or map.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 list_length   = length([10, 20, 30])
 string_length = length("abcdefghij")
}

output "lengths" {
 value = format("List length is %d. String length is %d", local.list_length, local.string_length)
}

# Result in:
lengths = "List length is 3. String length is 10"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
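&lt;p&gt;&lt;code&gt;length&lt;/code&gt; works on maps as well, returning the number of elements:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 map_length = length({ "a" : 1, "b" : 2 })
}

# Result in:
map_length = 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;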
&lt;h2&gt;
  
  
  &lt;strong&gt;join(separator, list)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Another useful function in Terraform is “join”. This function creates a string by concatenating together all elements of a list and a separator. For example, consider the following code:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 join_string = join(",", ["a", "b", "c"])
}

output "join_string" {
 value = local.join_string
}

# Result in:
join_string = "a,b,c"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;try(value, fallback)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Sometimes, you may want to use a value if it is usable, but fall back to another value if the first one is unusable. This can be achieved using the “try” function. For example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 map_var = {
   test = "this"
 }
 try1 = try(local.map_var.test2, "fallback")
}

output "try1" {
 value = local.try1
}

# Result (local.map_var.test2 is unusable, so the fallback is used):
try1 = "fallback"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;can(expression)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A useful function for validating variables is “can”. It evaluates an expression and returns a boolean indicating if there is a problem with the expression. For example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "a" {
 type = any
 validation {
   condition     = can(tonumber(var.a))
   error_message = format("This is not a number: %v", var.a)
 }
 default = "1"
}

# Result:
The validation passes, because "1" can be converted to a number. A default such as "one" would fail with: “This is not a number: one”.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;flatten(list)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Terraform, you may work with complex data types to manage your infrastructure. In these cases, you may want to flatten a list of lists into a single list. This can be achieved using the “flatten” function, as in this example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 unflatten_list = [[1, 2, 3], [4, 5], [6]]
 flatten_list   = flatten(local.unflatten_list)
}

output "flatten_list" {
 value = local.flatten_list
}

# Result:
The output of this code will be [1, 2, 3, 4, 5, 6].
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;keys(map) &amp;amp; values(map)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It may be useful to extract the keys or values from a map as a list. This can be achieved using the “keys” or “values” functions, respectively. For example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 key_value_map = {
   "key1" : "value1",
   "key2" : "value2"
 }
 key_list   = keys(local.key_value_map)
 value_list = values(local.key_value_map)
}

output "key_list" {
 value = local.key_list
}

output "value_list" {
 value = local.value_list
}

# Result: 
key_list = ["key1", "key2"]
value_list = ["value1", "value2"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;slice(list, startindex, endindex)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Slice returns consecutive elements from a list from a startindex (inclusive) to an endindex (exclusive).&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 slice_list = slice([1, 2, 3, 4], 2, 4)
}


output "slice_list" {
 value = local.slice_list
}

# Result:
slice_list = [3, 4]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;range&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Creates a range of numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;one argument(limit)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;two arguments(initial_value, limit)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;three arguments(initial_value, limit, step)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 range_one_arg    = range(3)
 range_two_args   = range(1, 3)
 range_three_args = range(1, 13, 3)
}

output "ranges" {
 value = format("Range one arg: %v. Range two args: %v. Range three args: %v", local.range_one_arg, local.range_two_args, local.range_three_args)
}

# Result:
ranges = "Range one arg: [0, 1, 2]. Range two args: [1, 2]. Range three args: [1, 4, 7, 10]"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;lookup(map, key, fallback_value)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Retrieves a value from a map using its key. If the key is not found, it returns the fallback value instead.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 a_map = {
   "key1" : "value1",
   "key2" : "value2"
 }
 lookup_in_a_map = lookup(local.a_map, "key1", "test")
}


output "lookup_in_a_map" {
 value = local.lookup_in_a_map
}

# Result:
lookup_in_a_map = "value1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;concat(lists)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Takes two or more lists and combines them into a single one.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 concat_list = concat([1, 2, 3], [4, 5, 6])
}


output "concat_list" {
 value = local.concat_list
}


# Result:
concat_list = [1, 2, 3, 4, 5, 6]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;merge(maps)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;merge&lt;/code&gt; function takes one or more maps and returns a single map that contains all of the elements from the input maps. The function can also take objects as input, but the output will always be a map.&lt;/p&gt;

&lt;p&gt;Let’s take a look at an example:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 b_map = {
   "key1" : "value1",
   "key2" : "value2"
 }
 c_map = {
   "key3" : "value3",
   "key4" : "value4"
 }
 final_map = merge(local.b_map, local.c_map)
}


output "final_map" {
 value = local.final_map
}

# Result:
final_map = {
  "key1" = "value1"
  "key2" = "value2"
  "key3" = "value3"
  "key4" = "value4"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;zipmap(key_list, value_list)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Constructs a map from a list of keys and a list of values&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 key_zip    = ["a", "b", "c"]
 values_zip = [1, 2, 3]
 zip_map    = zipmap(local.key_zip, local.values_zip)
}

output "zip_map" {
 value = local.zip_map
}

# Result
zip_map = {
  "a" = 1
  "b" = 2
  "c" = 3
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;expanding function argument …&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This special argument works only in function calls and expands a list into separate arguments. It is useful when you want to merge all the maps from a list of maps.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 list_of_maps = [
   {
     "a" : "a"
     "d" : "d"
   },
   {
     "b" : "b"
     "e" : "e"
   },
   {
     "c" : "c"
     "f" : "f"
   },
 ]
 expanding_map = merge(local.list_of_maps...)
}

output "expanding_map" {
 value = local.expanding_map
}

# Result
expanding_map = {
  "a" = "a"
  "b" = "b"
  "c" = "c"
  "d" = "d"
  "e" = "e"
  "f" = "f"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;file(path_to_file)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Reads the content of a file as a string and can be used in conjunction with other functions like jsondecode / yamldecode.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  a_file = file("./a_file.txt")
}

output "a_file" {
  value = local.a_file
}

# Result
The output would be the content of the file called a_file as a string.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;templatefile(path, vars)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Reads the file from the specified path and replaces the variables referenced inside it with the interpolation syntax ${ … } with the corresponding values from the vars map.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 a_template_file = templatefile("./file.yaml", { "change_me" : "awesome_value" })
}


output "a_template_file" {
 value = local.a_template_file
}

# Result
This will change the ${change_me} variable to awesome_value.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
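&lt;p&gt;For completeness, the template file assumed in this example would look something like this (the key name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# file.yaml (before rendering)
some_key: ${change_me}

# After templatefile renders it:
some_key: awesome_value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;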
&lt;h2&gt;
  
  
  &lt;strong&gt;jsondecode(string)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Interprets a string as JSON.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 a_jsondecode = jsondecode("{\"hello\": \"world\"}")
}


output "a_jsondecode" {
 value = local.a_jsondecode
}

# Result
jsondecode = {
 "hello" = "world"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;jsonencode(string)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Encodes a value to a string using JSON syntax.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 a_jsonencode = jsonencode({ "hello" = "world" })
}


output "a_jsonencode" {
 value = local.a_jsonencode
}

# Result
a_jsonencode = "{\"hello\":\"world\"}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;yamldecode(string)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Parses a string as a subset of YAML, and produces a representation of its value.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 a_yamldecode = yamldecode("hello: world")
}


output "a_yamldecode" {
 value = local.a_yamldecode
}

# Result:
a_yamldecode = {
 "hello" = "world"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;yamlencode(value)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Encodes a given value to a string using YAML.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 a_yamlencode = yamlencode({ "a" : "b", "c" : "d" })
}


output "a_yamlencode" {
 value = local.a_yamlencode
}

# Result:
a_yamlencode = &amp;lt;&amp;lt;EOT
"a": "b"
"c": "d"

EOT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can use Terraform functions to make your life easier and to write better code. Keeping your configuration as DRY as possible improves readability and makes updating it easier, so functions are a must-have in your configurations.&lt;/p&gt;

&lt;p&gt;These are just the functions that I am using the most, but some honorable mentions are &lt;code&gt;element&lt;/code&gt;, &lt;code&gt;base64encode&lt;/code&gt;, &lt;code&gt;base64decode&lt;/code&gt;, &lt;code&gt;formatdate&lt;/code&gt;, &lt;code&gt;uuid&lt;/code&gt;, and &lt;code&gt;distinct&lt;/code&gt;.&lt;/p&gt;
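&lt;p&gt;As a quick taste of two of those honorable mentions (the local names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
 unique_items = distinct(["a", "b", "a", "c"]) # removes duplicates, keeps first-seen order
 second_item  = element(["a", "b", "c"], 1)    # retrieves the element at the given index
}

# Result:
unique_items = ["a", "b", "c"]
second_item = "b"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;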

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-9-functions-a62108826021" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  10. Working with files
&lt;/h1&gt;

&lt;p&gt;When it comes to Terraform, working with files is pretty straightforward. Apart from the .tf and .tfvars files, you can use JSON, YAML, or whatever other file type supports your needs.&lt;/p&gt;

&lt;p&gt;In the last article, I mentioned a couple of functions that interact with files, like &lt;code&gt;file&lt;/code&gt; and &lt;code&gt;templatefile&lt;/code&gt;, and right now I want to show you how easy it is to interact with many files and use some of their content as variables for your configuration.&lt;/p&gt;

&lt;p&gt;For the following examples, let’s suppose we are using this yaml file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespaces:
  ns1:
    annotations:
      imageregistry: "https://hub.docker.com/"
    labels:
      color: "green"
      size: "big"
  ns2:
    labels:
      color: "red"
      size: "small"
  ns3:
    annotations:
      imageregistry: "https://hub.docker.com/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Reading a file only if it exists&lt;/strong&gt;
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  my_file = fileexists("./my_file.yaml") ? file("./my_file.yaml") : null
}

output "my_file" {
  value = local.my_file
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are using two functions and a ternary operator for this. The &lt;code&gt;fileexists&lt;/code&gt; function checks if the file exists and returns true or false accordingly. In the example above, if the file exists, &lt;code&gt;my_file&lt;/code&gt; will be the content of &lt;code&gt;my_file.yaml&lt;/code&gt; as a string; otherwise, it will be null.&lt;/p&gt;

&lt;p&gt;Let’s take this up a notch and transform this into a real use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Reading the yaml file and using it in a configuration if it exists&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  namespaces = fileexists("./my_file.yaml") ? yamldecode(file("./my_file.yaml")).namespaces : var.namespaces
}

output "my_file" {
  value = local.namespaces
}

resource "kubernetes_namespace" "this" {
  for_each = local.namespaces
  metadata {
    name        = each.key
    annotations = lookup(each.value, "annotations", {})
    labels      = lookup(each.value, "labels", {})
  }
}

variable "namespaces" {
  type = map(object({
    annotations = optional(map(string), {})
    labels      = optional(map(string), {})
  }))
  default = {
    ns1 = {}
    ns2 = {}
    ns3 = {}
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the example above, we check whether the file exists: if it does, we load the namespaces from the YAML file; otherwise we fall back to a variable. When the file exists, &lt;code&gt;yamldecode&lt;/code&gt; parses the loaded string into a map, which we then pass to &lt;code&gt;for_each&lt;/code&gt;. This pattern can be extremely useful in some use cases.&lt;/p&gt;

&lt;p&gt;Example Terraform plan:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubernetes_namespace.this["ns1"] will be created
  + resource "kubernetes_namespace" "this" {
      + id = (known after apply)

      + metadata {
          + annotations      = {
              + "imageregistry" = "https://hub.docker.com/"
            }
          + generation       = (known after apply)
          + labels           = {
              + "color" = "green"
              + "size"  = "big"
            }
          + name             = "ns1"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # kubernetes_namespace.this["ns2"] will be created
  + resource "kubernetes_namespace" "this" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "color" = "red"
              + "size"  = "small"
            }
          + name             = "ns2"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }

  # kubernetes_namespace.this["ns3"] will be created
  + resource "kubernetes_namespace" "this" {
      + id = (known after apply)

      + metadata {
          + annotations      = {
              + "imageregistry" = "https://hub.docker.com/"
            }
          + generation       = (known after apply)
          + name             = "ns3"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Using templatefile on a YAML file and using its configuration if it exists&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We can take this example further and add some placeholders whose values we will substitute with &lt;code&gt;templatefile&lt;/code&gt;. To do that, we must first make some changes to our initial YAML file.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespaces:
  ns1:
    annotations:
      imageregistry: ${image_registry_ns1}
    labels:
      color: "green"
      size: "big"
  ns2:
    labels:
      color: ${color_ns2}
      size: "small"
  ns3:
    annotations:
      imageregistry: "https://hub.docker.com/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The only change required in the Terraform code is to the locals block:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  namespaces = fileexists("./my_file.yaml") ? yamldecode(templatefile("./my_file.yaml", { image_registry_ns1 = "ghcr.io", color_ns2 = "black" })).namespaces : var.namespaces
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The two variables defined inside the YAML file using the &lt;code&gt;${}&lt;/code&gt; syntax will be replaced with the values passed to the &lt;code&gt;templatefile&lt;/code&gt; function.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Fileset&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;fileset&lt;/code&gt; function returns all the files inside a directory that match a pattern.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  yaml_files = fileset(".", "*.yaml")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above will return all the YAML files inside the current directory, as a set of file names. Not very useful on its own, right?&lt;/p&gt;

&lt;p&gt;Well, you can use the &lt;code&gt;file&lt;/code&gt; function to load the content of these files as strings. You can take them one by one using indexes, but if you want to take it up a notch, you can use a for expression and group them together in something that makes sense.&lt;/p&gt;
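
&lt;p&gt;As a minimal sketch (the local name is just an illustration), you can load every matched file into a map keyed by its file name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  # map of file name =&amp;gt; raw file content
  file_contents = { for f in fileset(".", "*.yaml") : f =&amp;gt; file(f) }
}

output "file_contents" {
  value = local.file_contents
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;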

&lt;p&gt;Let’s suppose you have two other YAML files that are similar to the one we used at the beginning of the post. We can group them all together in a single local value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  namespaces = merge([for my_file in fileset(".", "*.yaml") : yamldecode(file(my_file))["namespaces"]]...)
}

output "namespaces" {
  value = local.namespaces
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As the files have exactly the same structure and only the namespace names differ, this will result in the merged map below. Note the &lt;code&gt;...&lt;/code&gt; symbol, which expands the list produced by the for expression into separate arguments for &lt;code&gt;merge&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespaces = {
  "ns1" = {
    "annotations" = {
      "imageregistry" = "https://hub.docker.com/"
    }
    "labels" = {
      "color" = "green"
      "size" = "big"
    }
  }
  "ns2" = {
    "labels" = {
      "color" = "red"
      "size" = "small"
    }
  }
  "ns3" = {
    "annotations" = {
      "imageregistry" = "https://hub.docker.com/"
    }
  }
  "ns4" = {
    "annotations" = {
      "imageregistry" = "https://hub.docker.com/"
    }
    "labels" = {
      "color" = "green"
      "size" = "big"
    }
  }
  "ns5" = {
    "labels" = {
      "color" = "red"
      "size" = "small"
    }
  }
  "ns6" = {
    "annotations" = {
      "imageregistry" = "https://hub.docker.com/"
    }
  }
  "ns7" = {
    "annotations" = {
      "imageregistry" = "https://hub.docker.com/"
    }
    "labels" = {
      "color" = "green"
      "size" = "big"
    }
  }
  "ns8" = {
    "labels" = {
      "color" = "red"
      "size" = "small"
    }
  }
  "ns9" = {
    "annotations" = {
      "imageregistry" = "https://hub.docker.com/"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can check another cool example &lt;a href="https://www.reddit.com/r/Terraform/comments/10cq94r/terraform_with_multiple_yaml_files/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Other Examples&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can get filesystem-related information using these key expressions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;path.module — This expression returns the path of the module where it is used. This is useful for accessing files or directories relative to that module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;path.root — This expression returns the root module directory of the current Terraform configuration. This is useful for accessing files or directories located at the project's root.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;path.cwd — This expression returns the working directory Terraform was started from, before any chdir operations happened. This is useful for accessing files or directories relative to where Terraform is running.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
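
&lt;p&gt;As a quick illustration, you can print all three from any configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "paths" {
  value = {
    module = path.module # directory of the module this code lives in
    root   = path.root   # directory of the root module
    cwd    = path.cwd    # directory Terraform was started from
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;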

&lt;p&gt;There are some other file functions that can be leveraged to accommodate certain use cases, but to be honest I’ve used them only once or twice.&lt;/p&gt;

&lt;p&gt;Still, I believe mentioning them will bring some value.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;basename&lt;/code&gt; — takes a path and returns only its last portion&lt;/p&gt;

&lt;p&gt;E.G: &lt;code&gt;basename("/Users/user1/hello.txt")&lt;/code&gt; will return hello.txt.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;dirname&lt;/code&gt; — behaves exactly opposite to &lt;code&gt;basename&lt;/code&gt;: it returns everything up to, but not including, the last portion of the path&lt;/p&gt;

&lt;p&gt;E.G: &lt;code&gt;dirname("/Users/user1/hello.txt")&lt;/code&gt; will return /Users/user1&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pathexpand&lt;/code&gt; — takes a path that starts with a &lt;code&gt;~&lt;/code&gt; and expands it by substituting the home directory of the current user. If the path doesn’t start with a &lt;code&gt;~&lt;/code&gt;, this function returns it unchanged&lt;/p&gt;

&lt;p&gt;E.G: You are logged in as user1 on a Mac: &lt;code&gt;pathexpand("~/hello.txt")&lt;/code&gt; will return /Users/user1/hello.txt&lt;/p&gt;

&lt;p&gt;&lt;code&gt;filebase64&lt;/code&gt; — reads the content of a file and returns it as base64 encoded text.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;abspath&lt;/code&gt; — takes a string containing a filesystem path and converts it to an absolute path&lt;/p&gt;
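
&lt;p&gt;A small sketch putting these together (the paths are just illustrations; &lt;code&gt;filebase64&lt;/code&gt; assumes the file from the earlier examples exists):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  full_path = "/Users/user1/hello.txt"
  file_name = basename(local.full_path) # "hello.txt"
  dir_name  = dirname(local.full_path)  # "/Users/user1"
  absolute  = abspath("./my_file.yaml") # absolute path to my_file.yaml
  encoded   = filebase64("${path.module}/my_file.yaml")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;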

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-10-files-e5eefee68ef6" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  11. Understanding Terraform State
&lt;/h1&gt;

&lt;p&gt;Terraform state is a critical component of Terraform that enables users to define, provision, and manage infrastructure resources using declarative code. In this blog post, we will explore the importance of Terraform state, how it works, and best practices for managing it.&lt;/p&gt;

&lt;p&gt;It is a JSON file that tracks the state of the infrastructure resources managed by Terraform. By default, the file is named &lt;code&gt;terraform.tfstate&lt;/code&gt;, and whenever the state is updated, a backup of the previous version is generated called &lt;code&gt;terraform.tfstate.backup&lt;/code&gt;.&lt;/p&gt;
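
&lt;p&gt;A freshly created state file looks roughly like this (trimmed and illustrative; the exact fields vary with the Terraform version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "version": 4,
  "terraform_version": "1.5.0",
  "serial": 1,
  "lineage": "3f8a2c1d-...",
  "outputs": {},
  "resources": []
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;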

&lt;p&gt;This state file is stored locally by default, but can also be stored remotely using a remote backend such as Amazon S3, Azure Blob Storage, Google Cloud Storage, or HashiCorp Consul. The Terraform state file includes the current configuration of resources, their dependencies, and metadata such as resource IDs and resource types. There are a couple of products that help with managing state and provide a sophisticated workflow around Terraform like &lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt; or Terraform Cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How does it work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When Terraform is executed, it reads the configuration files and the current state file to determine the changes required to bring the infrastructure to the desired state. Terraform then creates an execution plan that outlines the changes to be made to the infrastructure. If the plan is accepted, Terraform applies the changes to the infrastructure and updates the state file with the new state of the resources.&lt;/p&gt;

&lt;p&gt;You can use the &lt;code&gt;terraform state&lt;/code&gt; command to manage your state.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform state list&lt;/code&gt;: This command lists all the resources that are currently tracked by Terraform state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform state show&lt;/code&gt;: This command displays the details of a specific resource in the Terraform state. The output includes all the attributes of the resource.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform state pull&lt;/code&gt;: This command retrieves the current Terraform state from a remote backend and saves it to a local file. This command is useful when you want to make manual operations in a remote state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform state push&lt;/code&gt;: This command uploads the local Terraform state file to the remote backend. This command is useful after you made manual changes to your remote state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform state rm&lt;/code&gt;: This command removes a resource from the Terraform state. This doesn’t mean the resource will be destroyed, it won’t be managed by Terraform after you’ve removed it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform state mv&lt;/code&gt;: This command renames a resource in the Terraform state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform state replace-provider&lt;/code&gt;: This command replaces the provider configuration for a specific resource in the Terraform state. This command is useful when switching from one provider to another or upgrading to a new provider version.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
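
&lt;p&gt;A few of these in action (the resource addresses are just illustrations taken from the namespace examples earlier):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List everything tracked in state, then inspect one resource
terraform state list
terraform state show 'kubernetes_namespace.this["ns1"]'

# Rename a resource address without touching the real infrastructure
terraform state mv kubernetes_namespace.this kubernetes_namespace.main

# Stop managing a resource without destroying it
terraform state rm kubernetes_namespace.this
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;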

&lt;h2&gt;
  
  
  &lt;strong&gt;Supported backends&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Amazon S3 Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Amazon S3 backend is a popular choice for remote state storage. To configure the Amazon S3 backend, you will need to create an S3 bucket and an IAM user with permissions to access the bucket. Here is an example of how to configure the Amazon S3 backend in Terraform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-west-2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The S3 backend supports locking, but for that you will also need a DynamoDB table. The table must have a partition key named &lt;code&gt;LockID&lt;/code&gt; of type string. If this is not configured, state locking will be disabled.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-state-lock"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Azure Blob Storage Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Azure Blob Storage backend is similar to the Amazon S3 backend and provides a remote storage solution for Terraform state files. To configure the Azure Blob Storage backend, you will need to create an Azure Storage Account and a container to store the Terraform state file. This backend supports locking by default, so you won’t need to configure anything else for locking.&lt;/p&gt;

&lt;p&gt;Here is an example of how to configure the Azure Blob Storage backend in Terraform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "azurerm" {
    storage_account_name = "mytfstateaccount"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Google Cloud Storage Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Google Cloud Storage backend is a popular choice for remote state storage for users of Google Cloud Platform. To configure the Google Cloud Storage backend, you will need to create a bucket and a service account with access to the bucket. Again there is no need to do anything else to enable locking. Here is an example of how to configure the Google Cloud Storage backend in Terraform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "gcs" {
    bucket       = "my-terraform-state-bucket"
    prefix       = "terraform/state"
    credentials  = "path/to/credentials.json"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;HTTP Backend&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is a simple backend that can be useful for development or testing, but it is not recommended for production use because it does not provide the same level of security and reliability as other backends.&lt;/p&gt;

&lt;p&gt;To configure the HTTP backend, you will need to have access to a web server that can serve files over HTTP or HTTPS. Here is an example of how to configure the HTTP backend in Terraform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "http" {
    address = "https://my-terraform-state-server.com/terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this example, the &lt;code&gt;address&lt;/code&gt; parameter specifies the URL of the Terraform state file on the web server. You can use any web server that supports HTTP or HTTPS to host the state file, including popular web servers like Apache or Nginx.&lt;/p&gt;

&lt;p&gt;When using the HTTP backend, it is important to ensure that the state file is protected with appropriate access controls and authentication mechanisms. Without proper security measures, the state file could be accessed or modified by unauthorized users, which could lead to security breaches or data loss.&lt;/p&gt;
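
&lt;p&gt;For reference, the http backend also supports locking and basic authentication. A sketch (the URLs and username are placeholders, and in practice you would supply the password through the &lt;code&gt;TF_HTTP_PASSWORD&lt;/code&gt; environment variable rather than in code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "http" {
    address        = "https://my-terraform-state-server.com/terraform.tfstate"
    lock_address   = "https://my-terraform-state-server.com/terraform.tfstate"
    unlock_address = "https://my-terraform-state-server.com/terraform.tfstate"
    username       = "terraform"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;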

&lt;h2&gt;
  
  
  &lt;strong&gt;Remote State Data Source&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Terraform Remote State Data Source, like any other data source, retrieves existing information. This special data source doesn’t depend on any provider; it allows you to retrieve state data from a previously deployed Terraform configuration. This can be useful if you need to reference information from one configuration in another.&lt;/p&gt;

&lt;p&gt;To use a remote state data source in Terraform, you first need to configure the remote state backend for the infrastructure you want to retrieve data from. This is done in the &lt;code&gt;backend&lt;/code&gt; block in your Terraform configuration as specified above. After you create your infrastructure for the first configuration, you can reference it in the second one using the remote data source.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "terraform_remote_state" "config1" {
  backend = "local"

  config = {
    path = "../config1/terraform.tfstate"
  }
}

resource "null_resource" "this" {
  provisioner "local-exec" {
    command = format("echo %s", data.terraform_remote_state.config1.outputs.var1)
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above example, we assume there is a configuration in the directory “../config1” with some Terraform code already applied. In that code, we have declared a “var1” output, which we reference in our null resource.&lt;/p&gt;
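
&lt;p&gt;For completeness, the first configuration would declare that output along these lines (the value is just an illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ../config1/outputs.tf
output "var1" {
  value = "hello from config1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;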

&lt;p&gt;As mentioned in some of the previous articles, Terraform documentation is your best friend when it comes to understanding what you can do with a resource or data source, or what it exposes.&lt;/p&gt;

&lt;p&gt;This data source exposes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1342047e-165d-4854-84f7-a026cba23998_1394x458.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frztlz59brw27cpuzx10n.png" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Best Practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are several best practices for managing Terraform state, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use a remote backend&lt;/strong&gt; : Storing the Terraform state file remotely provides several benefits, including better collaboration, easier access control, and improved resilience. Remote backends such as Amazon S3 or HashiCorp Consul can be used to store the state file securely and reliably.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use locking&lt;/strong&gt; : When multiple users are working on the same Terraform project, locking is necessary to prevent conflicts. Locking ensures that only one user can modify the state file at a time, preventing conflicts and ensuring changes are applied correctly. As shown before, there are many backends that support locking.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use versioning:&lt;/strong&gt; Your configuration should always be versioned, as this will make it easier to revert to an older version of the infrastructure if something goes wrong with the changes you are making.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Isolate state:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don’t add hundreds of resources in the same state&lt;/strong&gt; : Making a mistake with one resource can potentially hurt all your infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Have a single state file per environment&lt;/strong&gt; : When making changes, it is usually a best practice to first make the change on a lower environment and after that promote it to the higher one&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Terraform workspaces&lt;/strong&gt; : Terraform workspaces allow users to manage multiple environments, such as development, staging, and production, with a single Terraform configuration file. Each workspace has its own state file, allowing changes to be made to each of them&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5. &lt;strong&gt;Use modules&lt;/strong&gt; : Versioned modules make it easier to change your code, hence changes will be easier to promote across environments, making operations on the state less painful.&lt;/p&gt;
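
&lt;p&gt;The workspaces mentioned above are managed with a handful of CLI commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform workspace new staging    # create a workspace (with its own state)
terraform workspace list           # show all workspaces, * marks the current one
terraform workspace select staging # switch to it before plan/apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;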

&lt;p&gt;Terraform state is a critical component that enables users to define, provision, and manage infrastructure resources using declarative code. Terraform state ensures that resources are created, updated, or destroyed only when necessary and in the correct order. Remote backends such as Amazon S3, Azure Blob Storage, Google Cloud Storage, or HashiCorp Consul can be used to store the state file securely and reliably, while state file locks can prevent conflicts when multiple users are working with the same Terraform configuration. Consistent naming conventions and the &lt;code&gt;terraform state&lt;/code&gt; command can help to ensure that Terraform state files are easy to manage and understand.&lt;/p&gt;

&lt;p&gt;By following these best practices for managing Terraform state, users can ensure that their infrastructure resources are managed effectively and efficiently using Terraform.&lt;/p&gt;

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-11-state-ada02632508a" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  12. Terraform Depends On and Lifecycle Block
&lt;/h1&gt;

&lt;p&gt;Understanding how to use &lt;code&gt;depends_on&lt;/code&gt; and the &lt;code&gt;lifecycle&lt;/code&gt; block can help you better manage complex infrastructure dependencies and handle resource updates and replacements. In this post, I will provide an overview of what these features are, how they work, and best practices for using them effectively in your Terraform code.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Depends_on&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;depends_on&lt;/code&gt; meta-argument in Terraform is used to specify dependencies between resources. When Terraform creates your infrastructure, it automatically determines the order in which to create resources based on their dependencies. However, in some cases, you may need to manually specify the order in which resources are created, and that's where &lt;code&gt;depends_on&lt;/code&gt; comes in.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;depends_on&lt;/code&gt; works on all resources, data sources, and also modules. You are most likely going to use &lt;code&gt;depends_on&lt;/code&gt; whenever you are using &lt;code&gt;null_resource&lt;/code&gt; with &lt;code&gt;provisioners&lt;/code&gt; to accomplish some use cases.&lt;/p&gt;

&lt;p&gt;Let’s look into one simple example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "first" {
  metadata {
    name = "first"
  }
}

resource "kubernetes_namespace" "second" {
  depends_on = [
    kubernetes_namespace.first
  ]
  metadata {
    name = "second"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Even though these namespace resources are identical, the one named “second” will get created after the one named “first”.&lt;/p&gt;

&lt;p&gt;As the &lt;code&gt;depends_on&lt;/code&gt; argument takes a list, a resource can depend on multiple other resources before it gets created.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "first" {
  metadata {
    name = "first"
  }
}

resource "kubernetes_namespace" "second" {
  depends_on = [
    kubernetes_namespace.first,
    kubernetes_namespace.third
  ]
  metadata {
    name = "second"
  }
}

resource "kubernetes_namespace" "third" {
  metadata {
    name = "third"
  }
}

# kubernetes_namespace.first: Creating...
# kubernetes_namespace.third: Creating...
# kubernetes_namespace.third: Creation complete after 0s [id=third]
# kubernetes_namespace.first: Creation complete after 0s [id=first]
# kubernetes_namespace.second: Creating...
# kubernetes_namespace.second: Creation complete after 0s [id=second]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As we’ve now made the second namespace also depend on the third, you can see from the above apply output that &lt;code&gt;first&lt;/code&gt; and &lt;code&gt;third&lt;/code&gt; race to get created, and only after both finished did the second one start its creation process.&lt;/p&gt;

&lt;p&gt;What about &lt;code&gt;depends_on&lt;/code&gt; on modules? It works exactly the same:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "null" {
  depends_on = [
    resource.null_resource.this
  ]
  source = "./null"
}

resource "null_resource" "this" {
  provisioner "local-exec" {
    command = "echo this resource"
  }
}

# null_resource.this: Creating...
# null_resource.this: Provisioning with 'local-exec'...
# null_resource.this (local-exec): Executing: ["/bin/sh" "-c" "echo this resource"]
# null_resource.this (local-exec): this resource
# null_resource.this: Creation complete after 0s [id=1259986187217330742]
# module.null.null_resource.this: Creating...
# module.null.null_resource.this: Provisioning with 'local-exec'...
# module.null.null_resource.this (local-exec): Executing: ["/bin/sh" "-c" "echo this module"]
# module.null.null_resource.this (local-exec): this module
# module.null.null_resource.this: Creation complete after 0s [id=3893065594330030689]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, the null resource from the module gets created after the other one. And of course, modules can even depend on other modules, but you get the drill.&lt;/p&gt;

&lt;p&gt;Do I use &lt;code&gt;depends_on&lt;/code&gt; much nowadays? Not really, but back in the day, on Terraform 0.10, whenever there were dependency-related problems, this was the way to fix them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Lifecycle Block&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Terraform, a &lt;code&gt;lifecycle&lt;/code&gt; block is used to define specific behaviors for a resource during its lifecycle. This block is used to manage the lifecycle of a resource in Terraform, including creating, updating, and deleting resources.&lt;/p&gt;

&lt;p&gt;The lifecycle block can be added to a resource block and includes the following arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;create_before_destroy&lt;/strong&gt; : When set to true, this argument ensures that a new resource is created before the old one is destroyed. This can help avoid downtime during a resource update.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;prevent_destroy&lt;/strong&gt; : When set to true, this argument prevents a resource from being destroyed. This can be useful when you want to protect important resources from being accidentally deleted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ignore_changes&lt;/strong&gt; : This argument specifies certain attributes of a resource that Terraform should ignore when checking for changes. This can be useful when you want to prevent Terraform from unnecessarily updating a resource.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;replace_triggered_by:&lt;/strong&gt; This is relatively new, introduced in Terraform 1.2. It replaces the resource whenever any of the referenced resources or attributes change. Also, if you use count or for_each on the resource, you can retrigger the recreation on a change to a single instance of that resource (using count.index or each.key).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To be honest, I’ve used &lt;code&gt;prevent_destroy&lt;/code&gt; only once or twice, &lt;code&gt;ignore_changes&lt;/code&gt; and &lt;code&gt;create_before_destroy&lt;/code&gt; a couple of times, and I only found out about &lt;code&gt;replace_triggered_by&lt;/code&gt; as I was writing this article.&lt;/p&gt;

&lt;p&gt;Let’s see a lifecycle block in action:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "first" {
  metadata {
    name = "first"
    labels = {
      color = "green"
    }
  }
  lifecycle {
    ignore_changes = [
      metadata[0].labels
    ]
  }
}

output "namespace_labels" {
  value = kubernetes_namespace.first.metadata[0].labels
}


# kubernetes_namespace.first: Creating...
# kubernetes_namespace.first: Creation complete after 0s [id=first]

# Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

# Outputs:

# namespace_labels = tomap({
#   "color" = "green"
# })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve applied the above code for the first time and created the namespace with the above labels. In the lifecycle block, I’ve added the necessary configuration to ignore changes related to labels.&lt;/p&gt;

&lt;p&gt;Now, let’s suppose someone comes in and tries to make a change to the labels:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_namespace" "first" {
  metadata {
    name = "first"
    labels = {
      color = "blue"
    }
  }
  lifecycle {
    ignore_changes = [
      metadata[0].labels
    ]
  }
}

output "namespace_labels" {
  value = kubernetes_namespace.first.metadata[0].labels
}


# kubernetes_namespace.first: Refreshing state... [id=first]

# No changes. Your infrastructure matches the configuration.

# Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

# Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

# Outputs:

# namespace_labels = tomap({
#   "color" = "green"
# })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve tried to reapply the code, but Terraform detects that no change is required, because the lifecycle block ignores changes to the labels.&lt;/p&gt;

&lt;p&gt;Let’s add a &lt;code&gt;create_before_destroy&lt;/code&gt; option to the lifecycle block. Enabling this option doesn’t mean you will never be able to destroy the resource; it means that whenever a change requires the resource to be recreated, Terraform will first create the new resource and only after that delete the existing one.&lt;/p&gt;

&lt;p&gt;With the namespace resource already created, I’ve changed its name to induce a breaking change:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubernetes_namespace.first must be replaced
+/- resource "kubernetes_namespace" "first" {
      ~ id = "first" -&amp;gt; (known after apply)

      ~ metadata {
          - annotations      = {} -&amp;gt; null
          ~ generation       = 0 -&amp;gt; (known after apply)
          ~ name             = "first" -&amp;gt; "second" # forces replacement
          ~ resource_version = "816398" -&amp;gt; (known after apply)
          ~ uid              = "684f7401-1554-46fb-b21f-8e49329e76fa" -&amp;gt; (known after apply)
            # (1 unchanged attribute hidden)
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_namespace.first: Creating...
kubernetes_namespace.first: Creation complete after 0s [id=second]
kubernetes_namespace.first (deposed object 725799ef): Destroying... [id=first]
kubernetes_namespace.first: Destruction complete after 6s

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

Outputs:

namespace_labels = tomap({
  "color" = "blue"
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see from the Terraform apply output, the new resource was created first, and only afterward was the old one (the deposed object) destroyed.&lt;/p&gt;

&lt;p&gt;Overall, &lt;code&gt;depends_on&lt;/code&gt; and the &lt;code&gt;lifecycle&lt;/code&gt; block can help you in some edge cases you may run into with your IaC. I don’t use them frequently, but sometimes they are a necessary evil.&lt;/p&gt;

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-12-depends-on-and-lifecycle-block-f9a4fd82d924" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  13. Dynamic Blocks
&lt;/h1&gt;

&lt;p&gt;Dynamic blocks in Terraform let you repeat a configuration block inside a resource based on a variable, local, or expression. They help keep your configuration DRY (Don’t Repeat Yourself).&lt;/p&gt;

&lt;p&gt;Oh, I remember the days before they introduced this feature. I was working for Oracle at the time and was in charge of building reusable modules for our cloud components. Let’s get this straight: you &lt;strong&gt;CANNOT&lt;/strong&gt; truly build reusable modules without dynamic blocks. It really is impossible, like saying humans can breathe underwater without any equipment.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How Dynamic Blocks work&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In a dynamic block, you can use the following parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;for_each&lt;/code&gt; (required) → iterates over the value you provide&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;content&lt;/code&gt; (required) → the block containing the body of each block that will be generated&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;iterator&lt;/code&gt; (optional) → a temporary variable used as the iterator&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;labels&lt;/code&gt; (optional) → a list of strings that define the block labels. Never used them, tbh.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can have nested dynamic blocks, or you can use dynamic blocks to avoid generating an optional block inside configurations.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the following example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "example" {
  name       = "my-redis-release"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"

  set {
    name  = "cluster.enabled"
    value = "true"
  }

  set {
    name  = "metrics.enabled"
    value = "true"
  }

  set {
    name  = "service.annotations.prometheus\\.io/port"
    value = "9127"
    type  = "string"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above block is just a simplified version taken from &lt;a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release" rel="noopener noreferrer"&gt;Terraform’s documentation&lt;/a&gt;, and as you can see, it repeats the &lt;code&gt;set&lt;/code&gt; block three times. For a Helm release, the &lt;code&gt;set&lt;/code&gt; block is used to add custom values that are merged with the YAML values file.&lt;/p&gt;

&lt;p&gt;Let’s see how we can rewrite this using dynamic blocks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  set = {
    "cluster.enabled" = {
      value = true
    }
    "metrics.enabled" = {
      value = true
    }
    "service.annotations.prometheus\\.io/port" = {
      value = "9127"
      type  = "string"
    }
  }
}

resource "helm_release" "example" {
  name       = "my-redis-release"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"

  values = []

  dynamic "set" {
    for_each = local.set
    content {
      name  = set.key
      value = set.value.value
      type  = lookup(set.value, "type", null)
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above code accomplishes the same thing as the version that repeated the block three times. Since we are using &lt;code&gt;for_each&lt;/code&gt; on the local variable, the &lt;code&gt;set&lt;/code&gt; block is generated three times.&lt;/p&gt;

&lt;p&gt;When you don’t define an iterator, the iterator name will be exactly the name of the block, which in our case is &lt;code&gt;set&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s use an iterator to make this clear. The dynamic block will change to:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dynamic "set" {
    for_each = local.set
    iterator = i
    content {
      name  = i.key
      value = i.value.value
      type  = lookup(i.value, "type", null)
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As mentioned before, you can use dynamic blocks to avoid generating a block altogether. To achieve this in our example, we can simply make the local variable empty:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  set = {}
}
resource "helm_release" "example" {
  name       = "my-redis-release"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"

  values = []

  dynamic "set" {
    for_each = local.set
    iterator = i
    content {
      name  = i.key
      value = i.value.value
      type  = lookup(i.value, "type", null)
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;The OCI Security List Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A security list in Oracle Cloud Infrastructure is pretty similar to a network access control list in AWS. When I started building the initial network module for OCI, security groups were not available, so the only way to restrict network-related rules was to use security lists. I was using Terraform 0.11, and that version didn’t offer many features (no dynamic blocks, no for_each), so it was pretty hard to build a reusable module.&lt;/p&gt;

&lt;p&gt;Because security list rules were embedded inside the security list resource as blocks, it was impossible to make something generic out of them.&lt;/p&gt;

&lt;p&gt;So basically, no real module was built before dynamic blocks existed, but after they were released, this bad boy was created:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "oci_core_security_list" "this" {
  for_each       = var.sl_params
  compartment_id = oci_core_virtual_network.this[each.value.vcn_name].compartment_id
  vcn_id         = oci_core_virtual_network.this[each.value.vcn_name].id
  display_name   = each.value.display_name

  dynamic "egress_security_rules" {
    iterator = egress_rules
    for_each = each.value.egress_rules
    content {
      stateless   = egress_rules.value.stateless
      protocol    = egress_rules.value.protocol
      destination = egress_rules.value.destination
    }
  }

  dynamic "ingress_security_rules" {
    iterator = ingress_rules
    for_each = each.value.ingress_rules
    content {
      stateless   = ingress_rules.value.stateless
      protocol    = ingress_rules.value.protocol
      source      = ingress_rules.value.source
      source_type = ingress_rules.value.source_type

      dynamic "tcp_options" {
        iterator = tcp_options
        for_each = (lookup(ingress_rules.value, "tcp_options", null) != null) ? ingress_rules.value.tcp_options : []
        content {
          max = tcp_options.value.max
          min = tcp_options.value.min
        }
      }
      dynamic "udp_options" {
        iterator = udp_options
        for_each = (lookup(ingress_rules.value, "udp_options", null) != null) ? ingress_rules.value.udp_options : []
        content {
          max = udp_options.value.max
          min = udp_options.value.min
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As you can see, in this one I’m also using nested dynamic blocks, and because I’m using lookups, you don’t even need to specify &lt;code&gt;tcp_options&lt;/code&gt; or &lt;code&gt;udp_options&lt;/code&gt; inside &lt;code&gt;ingress_rules&lt;/code&gt; for rules that don’t need them. This could’ve been done more elegantly using &lt;code&gt;optionals&lt;/code&gt;.&lt;/p&gt;
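
&lt;p&gt;As an illustrative sketch (not the original module’s variable definition), with modern Terraform you could mark the nested lists as optional with an empty default, which removes the need for the lookup-based conditionals:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "sl_params" {
  type = map(object({
    vcn_name     = string
    display_name = string
    egress_rules = list(object({
      stateless   = bool
      protocol    = string
      destination = string
    }))
    ingress_rules = list(object({
      stateless   = bool
      protocol    = string
      source      = string
      source_type = string
      # optional with an empty default, so for_each can use it directly
      tcp_options = optional(list(object({ min = number, max = number })), [])
      udp_options = optional(list(object({ min = number, max = number })), [])
    }))
  }))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With a type like this, the nested dynamic blocks could iterate over &lt;code&gt;ingress_rules.value.tcp_options&lt;/code&gt; directly.&lt;/p&gt;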

&lt;p&gt;Keep in mind this was developed more than 2.5 years ago, back in my Oracle days.&lt;/p&gt;

&lt;p&gt;You can also check the &lt;a href="https://github.com/oracle-quickstart/oci-adoption-framework-thunder" rel="noopener noreferrer"&gt;OCI Adoption framework Thunder&lt;/a&gt;, for more Oracle Cloud modules, but again, this is more than 2.5 years old so it will need an update.&lt;/p&gt;

&lt;p&gt;I’m also planning to revive it and update the modules to the latest features available, &lt;strong&gt;so send me a message&lt;/strong&gt; if you want to take part in this revamp.&lt;/p&gt;

&lt;p&gt;Dynamic blocks offer a powerful way to manage complex infrastructure resources in Terraform and have become a key feature of the tool since their introduction in version 0.12. By enabling you to generate blocks dynamically based on input data, dynamic blocks make it easier to automate infrastructure management and eliminate manual processes, making Terraform an even more powerful and flexible infrastructure-as-code tool.&lt;/p&gt;

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-13-dynamic-blocks-8fa381994f2e" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  14. Terraform Modules
&lt;/h1&gt;

&lt;p&gt;Terraform modules are one of the most important features that Terraform has to offer. They make your code reusable, can be easily versioned and shared with others, and act as blueprints for your infrastructure.&lt;/p&gt;

&lt;p&gt;In this post, I am just going to scratch the surface of Terraform modules; I want to go into more detail in the last two articles of this series.&lt;/p&gt;

&lt;p&gt;I like to think of Terraform modules the way you would think about a class in Object-Oriented Programming. In OOP, you define a class, and after that, you can create multiple objects out of it. The same goes for Terraform modules: you define the module once, and after that, you can reuse it as many times as you want.&lt;/p&gt;
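
&lt;p&gt;To make the analogy concrete, the same module source can be instantiated multiple times with different inputs, just like creating several objects from one class (the module name and inputs below are purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "network_dev" {
  source = "./modules/network"
  name   = "dev"
}

module "network_prod" {
  source = "./modules/network"
  name   = "prod"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;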

&lt;h2&gt;
  
  
  &lt;strong&gt;Why should you use modules?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There are several reasons to use Terraform modules in your IaC projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Reusability&lt;/strong&gt; : Modules help you avoid duplicating code by allowing you to reuse configurations across multiple environments or projects. This makes your infrastructure code more maintainable and easier to update.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Separation of Concerns&lt;/strong&gt; : Modules enable you to separate your infrastructure into smaller, more focused units. This results in cleaner, more organized code that is easier to understand and manage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Versioning&lt;/strong&gt; : I cannot stress the importance of versioning enough. Modules support versioning and can be shared across teams, making it easier to collaborate and maintain consistency across your organization’s infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Configuration&lt;/strong&gt; : By encapsulating complexity within modules, you can simplify your root configurations, making them easier to read and understand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Faster Development&lt;/strong&gt; : With modules, you can break down your infrastructure into smaller, reusable components. This modular approach accelerates development, as you can quickly build upon existing modules rather than starting from scratch for each new resource or environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; : Modules enable you to build scalable infrastructure by allowing you to replicate resources or environments easily. By reusing modules, you can ensure that your infrastructure remains consistent and manageable even as it grows in size and complexity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Minimal module structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A typical module should contain the following files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;main.tf&lt;/code&gt;: Contains the core resource declarations and configurations for the module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;variables.tf&lt;/code&gt;: Defines input variables that allow users to customize the module's behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;outputs.tf&lt;/code&gt;: Specifies output values that the module returns to the caller, providing information about the created resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;README.md&lt;/code&gt;: Offers documentation on how to use the module, including descriptions of input variables and outputs.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I also like to do when I’m building modules is create examples for them.&lt;/p&gt;

&lt;p&gt;So in each module, I typically create an examples folder containing at least a &lt;code&gt;main.tf&lt;/code&gt; that instantiates the module.&lt;/p&gt;

&lt;p&gt;For the README file, I usually leverage &lt;code&gt;terraform-docs&lt;/code&gt; to automate the documentation, but I also explain what the module does, how to use the examples, and why I made certain decisions in the code.&lt;/p&gt;
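
&lt;p&gt;Putting it all together, a minimal module layout with an examples folder might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-module/
├── main.tf
├── variables.tf
├── outputs.tf
├── README.md
└── examples/
    └── main.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;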

&lt;h2&gt;
  
  
  &lt;strong&gt;Module Example&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s create a simple module for generating config maps in Kubernetes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Module main code

resource "kubernetes_namespace" "this" {
  for_each = var.namespaces
  metadata {
    name        = each.key
    labels      = each.value.labels
    annotations = each.value.annotations
  }
}

resource "kubernetes_config_map" "this" {
  for_each = var.config_maps
  metadata {
    name        = each.key
    namespace   = each.value.use_existing_namespace ? each.value.namespace : kubernetes_namespace.this[each.value.namespace].metadata[0].name
    labels      = each.value.labels
    annotations = each.value.annotations
  }

  data        = each.value.data
  binary_data = each.value.binary_data
}


# Module Variable code

variable "namespaces" {
  description = "Namespaces parameters"
  type = map(object({
    labels      = optional(map(string), {})
    annotations = optional(map(string), {})
  }))
  default = {}
}

variable "config_maps" {
  description = "Config map parameters"
  type = map(object({
    namespace              = string
    labels                 = optional(map(string), {})
    annotations            = optional(map(string), {})
    use_existing_namespace = optional(bool, false)
    data                   = optional(map(string), {})
    binary_data            = optional(map(string), {})
  }))
}


# Module outputs code

output "config_maps" {
  description = "Config map outputs"
  value       = { for cfm in kubernetes_config_map.this : cfm.metadata[0].name =&amp;gt; { "namespace" : cfm.metadata[0].namespace, "data" : cfm.data } }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above module code will create as many namespaces and config maps as you want in your Kubernetes cluster. You can even create your config maps in existing namespaces, as you are not required to create new namespaces if you don’t want to.&lt;/p&gt;

&lt;p&gt;I’m taking advantage of optionals to avoid having to pass every parameter in all cases.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# example main.tf code

provider "kubernetes" {
  config_path = "~/.kube/config"
}

module "config_maps" {
  source = "../"
  namespaces = {
    ns1 = {
      labels = {
        color = "green"
      }
    }
  }

  config_maps = {
    cf1 = {
      namespace = "ns1"
      data = {
        api_host = "myhost:443"
        db_host  = "dbhost:5432"
      }
    }
  }
}

# example outputs.tf code
output "config_maps" {
  description = "Config map outputs"
  value       = module.config_maps.config_maps
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above example, I am creating one namespace and one config map inside that namespace, and I am also outputting the config maps.&lt;/p&gt;

&lt;p&gt;Example &lt;code&gt;terraform apply&lt;/code&gt; output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.config_maps.kubernetes_namespace.this["ns1"]: Creating...
module.config_maps.kubernetes_namespace.this["ns1"]: Creation complete after 0s [id=ns1]
module.config_maps.kubernetes_config_map.this["cf1"]: Creating...
module.config_maps.kubernetes_config_map.this["cf1"]: Creation complete after 0s [id=ns1/cf1]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

config_maps = {
  "cf1" = {
    "data" = tomap({
      "api_host" = "myhost:443"
      "db_host" = "dbhost:5432"
    })
    "namespace" = "ns1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can check out the repository, as well as the generated &lt;em&gt;README.md&lt;/em&gt; file, &lt;a href="https://github.com/flavius-dinu/terraform-kubernetes-cfmap" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Publishing Modules&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can publish your modules to a Terraform registry. If you want to share your module with the community, you can leverage &lt;a href="https://registry.terraform.io/" rel="noopener noreferrer"&gt;Terraform’s Public Registry&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, if you have an account with a sophisticated deployment tool such as &lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt;, you can use the private registry it offers and take advantage of the &lt;a href="https://docs.spacelift.io/vendors/terraform/module-registry.html#tests" rel="noopener noreferrer"&gt;built-in testing capabilities&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Minimal Best Practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you are just starting to build Terraform modules, take the following best practices into consideration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keep modules focused&lt;/strong&gt; : Each module should have a specific purpose and manage a single responsibility. Avoid creating overly complex modules that manage multiple unrelated resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use descriptive naming&lt;/strong&gt; : Choose clear and descriptive names for your modules, variables, and outputs. This makes it easier for others to understand and use your module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Document your modules&lt;/strong&gt; : Include a &lt;code&gt;README.md&lt;/code&gt; file in each module that provides clear instructions on how to use the module, input variables, and output values. In addition to this, use comments in your Terraform code to explain complex or non-obvious code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version your modules&lt;/strong&gt; : Use version tags to track changes to your modules and reference specific versions in your configurations. This ensures that you’re using a known and tested version of your module, and it makes it easier to roll back to a previous version if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test your modules&lt;/strong&gt; : Write and maintain tests for your modules to ensure they work as expected. The &lt;code&gt;terraform validate&lt;/code&gt; and &lt;code&gt;terraform plan&lt;/code&gt; commands can help you identify configuration errors, while tools like Spacelift’s built-in module testing or Terratest can help you write automated tests for your modules.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform modules are a powerful way to streamline your infrastructure management, making your IaC more reusable, maintainable, and shareable. By following best practices and leveraging the power of modules, you can improve your DevOps workflow and accelerate your infrastructure deployments.&lt;/p&gt;

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/terraform-from-0-to-hero-14-modules-3592debaab06" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  15. Best practices for modules I
&lt;/h1&gt;

&lt;p&gt;In order to build reusable Terraform modules that can be easily leveraged to achieve almost any architecture, I believe that at least the following best practices should be put in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each Terraform module should exist in its own repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use &lt;em&gt;for_each&lt;/em&gt; and &lt;em&gt;map variables&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use &lt;em&gt;dynamic blocks&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use &lt;em&gt;ternary operators&lt;/em&gt; and take advantage of Terraform’s built-in functions, especially &lt;em&gt;lookup, merge, try,&lt;/em&gt; and &lt;em&gt;can&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build &lt;em&gt;outputs&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optional:&lt;/strong&gt; Use &lt;em&gt;pre-commit&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Terraform module in its own repository&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You may wonder why I am suggesting that every Terraform module should live in its own repository.&lt;/p&gt;

&lt;p&gt;Well, doing so will most definitely help you when you are developing new features for that module, because you won’t need to make big changes in a lot of places.&lt;/p&gt;

&lt;p&gt;In addition, releasing new versions of the module happens really fast, and you still have the possibility of keeping the older versions in place for other automation that you are building.&lt;/p&gt;

&lt;p&gt;You can easily reference a git source/registry source when you want to use a module, but &lt;strong&gt;tagging&lt;/strong&gt; is essential.&lt;/p&gt;

&lt;p&gt;E.g., referencing a GitHub Terraform module:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "example" {  
  source = "git@github.com:organisation(user)/repo.git?ref=tag/branch"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;When it comes to tagging, my recommendation is to use the following tag format: &lt;em&gt;vMajor.Minor.Patch.&lt;/em&gt;&lt;/p&gt;
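
&lt;p&gt;For example, pinning a module to a hypothetical &lt;em&gt;v1.2.3&lt;/em&gt; tag would look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "example" {
  source = "git@github.com:organisation/repo.git?ref=v1.2.3"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;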

&lt;h2&gt;
  
  
  &lt;strong&gt;Use for_each and map variables&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I didn’t specify count, did I?&lt;/p&gt;

&lt;p&gt;Using count on your resources will cause more problems than it solves in the end. I’m not talking about checking whether a variable is set to true and, if it is, setting the count to 1, else to 0 (not a big fan of this approach, either); I’m talking about creating multiple resources based on a list variable.&lt;/p&gt;

&lt;p&gt;Let’s suppose you want to create three EC2 instances in AWS using Terraform, each of them with a different image, a different type, and a different AZ, similar to the code below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  vm_instances = [
    {
      ami           = "ami-1"
      az            = "eu-west-1a"
      instance_type = "t2.micro"
    },
    {
      ami           = "ami-2"
      az            = "eu-west-1b"
      instance_type = "t3.micro"
    },
    {
      ami           = "ami-3"
      az            = "eu-west-1c"
      instance_type = "t3.small"
    }
  ]
}

resource "aws_instance" "this" {
  count             = length(local.vm_instances)
  ami               = local.vm_instances[count.index].ami
  availability_zone = local.vm_instances[count.index].az
  instance_type     = local.vm_instances[count.index].instance_type
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The code works properly: all three of your instances are created, and everything looks smooth after you apply it.&lt;/p&gt;

&lt;p&gt;Let’s suppose, for some reason, you want to delete the second instance. What is going to happen to the third one? It will be recreated, because that’s how a list variable works, if you remember from the 7th article in this series: when you remove the second instance (index 1), the third instance (index 2) changes its index to 1.&lt;/p&gt;

&lt;p&gt;This is just a simple example, but how about a case in which you had 20 instances, and for whatever reason you needed to remove the first one in the list? All the other 19 instances would be recreated, and that downtime could really affect you.&lt;/p&gt;

&lt;p&gt;By leveraging &lt;em&gt;for_each&lt;/em&gt; you can avoid this pain, and because it exposes an &lt;em&gt;each.key&lt;/em&gt; and &lt;em&gt;each.value&lt;/em&gt; when you are using maps, it will greatly help you achieve almost any architecture you desire.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  vm_instances = {
    vm1 = {
      ami           = "ami-1"
      az            = "eu-west-1a"
      instance_type = "t2.micro"
    }
    vm2 = {
      ami           = "ami-2"
      az            = "eu-west-1b"
      instance_type = "t3.micro"
    }
    vm3 = {
      ami           = "ami-3"
      az            = "eu-west-1c"
      instance_type = "t3.small"
    }
  }
}

resource "aws_instance" "this" {
  for_each          = local.vm_instances
  ami               = each.value.ami
  availability_zone = each.value.az
  instance_type     = each.value.instance_type
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The code is pretty similar, but it now uses a map variable instead of a list, and removing an instance will not affect the others in any way. In the above example, the keys are &lt;em&gt;vm1, vm2, and vm3&lt;/em&gt;, and the values are everything after the equals sign.&lt;/p&gt;

&lt;p&gt;By using the &lt;em&gt;for_each&lt;/em&gt; approach, you have the possibility of creating the resources in any way you want, reaching a more generic state and accommodating many use cases.&lt;/p&gt;

&lt;p&gt;Keep in mind that both examples above are not an actual representation of modules; they are just used to point out the differences between &lt;em&gt;for_each&lt;/em&gt; and &lt;em&gt;count&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Use dynamic blocks&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I remember when dynamic blocks weren’t a thing in Terraform, and I had to build a module for security lists in Oracle Cloud.&lt;/p&gt;

&lt;p&gt;The problem was that security list rules were blocks inside of the security list resource itself, so no matter what we did, the module wasn’t very reusable in the end.&lt;/p&gt;

&lt;p&gt;One could argue that we could’ve prepared some Jinja2 templates and had a script in place to render them based on an input, but still, in my opinion, that isn’t a reusable Terraform module.&lt;/p&gt;

&lt;p&gt;Using dynamic blocks, you can easily repeat a block inside a resource as many times as you want based on an input variable, and that most certainly was a game changer at the time of its release.&lt;/p&gt;

&lt;p&gt;The good part is that you can have a dynamic block in place, even when you don’t want to create that block at all. In some architectures, you will need a feature enabled many times, but in others, you won’t need that feature at all. Why build two separate modules for that when you can have only one, right?&lt;/p&gt;
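
&lt;p&gt;A common way to achieve this is to drive the dynamic block with a conditional &lt;em&gt;for_each&lt;/em&gt;, so the block is generated only when a flag is set. A sketch based on the earlier Helm example (the &lt;code&gt;enable_metrics&lt;/code&gt; variable is purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "enable_metrics" {
  type    = bool
  default = false
}

resource "helm_release" "example" {
  name       = "my-redis-release"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "redis"

  dynamic "set" {
    # generate the block once when enabled, zero times otherwise
    for_each = var.enable_metrics ? { "metrics.enabled" = "true" } : {}
    content {
      name  = set.key
      value = set.value
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;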

&lt;h2&gt;
  
  
  &lt;strong&gt;Use Ternary operators and Terraform functions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Terraform functions complicate the code. That is most certainly a fact, and somebody without much Terraform experience will have a hard time reading the code.&lt;/p&gt;

&lt;p&gt;Nevertheless, when you are building reusable modules, you will most likely need some conditions inside your code. You will find that some arguments cannot coexist with others, so for your module to accommodate both use cases, you will most likely need to leverage a &lt;em&gt;ternary operator&lt;/em&gt; with a &lt;em&gt;lookup&lt;/em&gt; on the variable.&lt;/p&gt;
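
&lt;p&gt;As a small illustrative sketch (the rules and attribute names are hypothetical), combining a ternary with &lt;em&gt;lookup&lt;/em&gt; lets a module handle entries that may or may not define an argument:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  rules = {
    web  = { protocol = "tcp", port = "443" }
    ping = { protocol = "icmp" }
  }
}

output "ports" {
  # use the port only for tcp rules; lookup() falls back to null
  # when the key is absent, so the argument is effectively omitted
  value = {
    for name, rule in local.rules :
    name =&amp;gt; rule.protocol == "tcp" ? lookup(rule, "port", null) : null
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;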

&lt;h2&gt;
  
  
  &lt;strong&gt;Build Outputs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Inside a module, an output is literally what that module exposes for other modules to use.&lt;/p&gt;

&lt;p&gt;Make sure that whenever you are building a module, you expose the attributes that can be reused by other components (e.g. subnet IDs, because they may be used by VMs).&lt;/p&gt;

&lt;p&gt;When you use &lt;em&gt;for_each&lt;/em&gt;, exposing an output becomes a little harder, but don’t worry, you will get the hang of it pretty fast.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "kube_params" {
  value = { for kube in azurerm_kubernetes_cluster.this : kube.name =&amp;gt; { "id" : kube.id, "fqdn" : kube.fqdn } }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above example, I am exposing some &lt;em&gt;Azure Kubernetes cluster&lt;/em&gt; outputs that I consider relevant. Because my resource has a &lt;em&gt;for_each&lt;/em&gt; on it, I have to iterate over all of the resources of that kind to access their attributes. I am building a map keyed by the cluster &lt;em&gt;name&lt;/em&gt;, which underneath holds the id and fqdn as its values.&lt;/p&gt;

&lt;p&gt;Terraform’s documentation helps a lot in knowing what a resource can export, so you only need to visit the documentation page of that particular resource.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Using Pre-commit&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Pre-commit can facilitate your Terraform development and make sure that your code respects the standards imposed by your team before you push it to the repository.&lt;/p&gt;

&lt;p&gt;By using pre-commit, you can easily make sure that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;your terraform code is valid&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;linting was done properly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;documentation for all your parameters has been written&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are other things that can be done, so you should check the documentation for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://pre-commit.com/" rel="noopener noreferrer"&gt;pre-commit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/antonbabenko/pre-commit-terraform" rel="noopener noreferrer"&gt;pre-commit terraform&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To take advantage of pre-commit, define a file called &lt;em&gt;.pre-commit-config.yaml&lt;/em&gt; at the root of your repository with content similar to this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.72.2
    hooks:
      - id: terraform_fmt
      - id: terraform_tflint
      - id: terraform_docs
      - id: terraform_validate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Many other hooks can be added, but the example above uses just the essentials.&lt;/p&gt;

&lt;p&gt;You should have &lt;em&gt;pre-commit&lt;/em&gt; installed locally because, as the name states, it runs before you make a commit. When it runs, every problem in your code is reported, and some are even fixed automatically.&lt;/p&gt;

&lt;p&gt;Originally posted on Medium &lt;a href="https://techblog.flaviusdinu.com/building-reusable-terraform-modules-9e90aa4eef31" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  16. Best practices for modules II
&lt;/h1&gt;

&lt;p&gt;In this last post of the series, I am going to show you how I build reusable Terraform modules while respecting the best practices.&lt;/p&gt;

&lt;p&gt;What I’m going to build is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Kubernetes Module for Azure (AKS)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Helm module that can deploy Helm charts inside Kubernetes clusters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A repository that leverages the two modules to deploy a Helm chart inside AKS&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To follow this tutorial yourself, you are going to need some experience with Terraform, Kubernetes, Helm, pre-commit, and Git.&lt;/p&gt;

&lt;p&gt;I suggest installing all of them by following the instructions in each tool’s documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;Kubectl&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://pre-commit.com/#install" rel="noopener noreferrer"&gt;Pre-commit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;azure cli&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A text editor of your choice (I’m using Visual Studio Code, but feel free to use any editor you like)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should have an Azure account. To create a free one, you can go &lt;a href="https://azure.microsoft.com/en-us/free/search/?ef_id=CjwKCAjwx7GYBhB7EiwA0d8oe213ayU0lmwjwZX2xt7VlPwJe63DzId9Vh1Z3g4Giq26oNB1B_HRSxoCfswQAvD_BwE%3AG%3As&amp;amp;OCID=AIDcmmkvpj1ueg_SEM_CjwKCAjwx7GYBhB7EiwA0d8oe213ayU0lmwjwZX2xt7VlPwJe63DzId9Vh1Z3g4Giq26oNB1B_HRSxoCfswQAvD_BwE%3AG%3As&amp;amp;gclid=CjwKCAjwx7GYBhB7EiwA0d8oe213ayU0lmwjwZX2xt7VlPwJe63DzId9Vh1Z3g4Giq26oNB1B_HRSxoCfswQAvD_BwE" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;AKS Module&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I will first start by leveraging my &lt;a href="https://github.com/flavius-dinu/terraform-module-template" rel="noopener noreferrer"&gt;terraform-module-template&lt;/a&gt; and create a repository from it, by clicking on &lt;em&gt;Use this template.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F245f7b05-69a1-41c3-ad9b-26470e9eece1_1400x174.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t592gsugtfveaah3p25.png" width="800" height="99"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After this, I cloned the new repository, and I’m ready to start development.&lt;/p&gt;

&lt;p&gt;By using the above template, I have the following folder structure for the Terraform code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── README.md # Module documentation
├── example # One working example based on the module
│   ├── main.tf # Main code for the example
│   ├── outputs.tf # Example outputs
│   └── variables.tf # Example variables
├── main.tf # Main code for the module
├── outputs.tf # Outputs of the module
├── provider.tf # Required providers for the modules
└── variables.tf # Variables of the module
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The starting point of the development will be the root &lt;em&gt;main.tf&lt;/em&gt; file.&lt;/p&gt;

&lt;p&gt;Let’s first understand what we have to build. To do that, Terraform documentation is our best friend.&lt;/p&gt;

&lt;p&gt;I navigated to the &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster" rel="noopener noreferrer"&gt;resource documentation&lt;/a&gt; and started by looking at an example to understand the minimum parameters I need to build the module.&lt;/p&gt;

&lt;p&gt;As I just want to build a simple module for demo purposes, without too many fancy features, I will include only the absolutely necessary parameters. &lt;strong&gt;In the real world&lt;/strong&gt;, this won’t be enough, so make sure you read the Argument Reference on the resource page thoroughly.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_kubernetes_cluster" "this" {
  for_each            = var.kube_params
  name                = each.value.name
  location            = each.value.rg_location
  resource_group_name = each.value.rg_name
  dns_prefix          = each.value.dns_prefix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I started pretty small, adding just the essential parameters required to create a cluster.&lt;/p&gt;

&lt;p&gt;I defined a variable called &lt;em&gt;kube_params&lt;/em&gt; to hold all of our cluster-related parameters. All the parameters above will be mandatory in our module, and I reference them with &lt;em&gt;each.value.&lt;strong&gt;something&lt;/strong&gt;&lt;/em&gt;, where &lt;em&gt;&lt;strong&gt;something&lt;/strong&gt;&lt;/em&gt; can be any key name you want (just pick one that makes sense for that particular parameter).&lt;/p&gt;

&lt;p&gt;I’ve continued to add some mandatory blocks inside of this resource:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;default_node_pool {
    enable_auto_scaling = each.value.enable_auto_scaling
    max_count           = each.value.enable_auto_scaling == true ? each.value.max_count : null
    min_count           = each.value.enable_auto_scaling == true ? each.value.min_count : null
    node_count          = each.value.node_count
    vm_size             = each.value.vm_size
    name                = each.value.np_name
  }

  dynamic "service_principal" {
    for_each = each.value.service_principal
    content {
      client_id     = service_principal.value.client_id
      client_secret = service_principal.value.client_secret
    }
  }

  dynamic "identity" {
    for_each = each.value.identity
    content {
      type         = identity.value.type
      identity_ids = identity.value.identity_ids
    }
  }
  tags = merge(var.tags, each.value.tags)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above example showcases several of the best practices I mentioned in my previous post: &lt;em&gt;dynamic blocks, ternary operators, and functions.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For the dynamic blocks, no block is created if the corresponding parameter is not provided.&lt;/p&gt;

&lt;p&gt;For the tags, I’ve noticed that in many cases companies want to apply some global tags to all resources and, of course, individual tags to particular resources. To accommodate that, I’m simply merging two map variables: one that lives inside &lt;em&gt;kube_params&lt;/em&gt; and another one that lives in the &lt;em&gt;tags&lt;/em&gt; var.&lt;/p&gt;
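
&lt;p&gt;As a quick sketch of how the merging behaves (the values below are hypothetical), &lt;em&gt;merge&lt;/em&gt; gives precedence to the right-hand map when keys collide, so individual tags override the global ones:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  global_tags   = { env = "dev", owner = "platform" } # would come from var.tags
  resource_tags = { owner = "flavius" }               # would come from each.value.tags

  # merge() keeps the right-hand value on key collisions:
  # merged = { env = "dev", owner = "flavius" }
  merged = merge(local.global_tags, local.resource_tags)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;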

&lt;p&gt;As some may want to export the kubeconfig of the cluster after it is created, I’ve provided that option as well:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "local_file" "kube_config" {
  for_each = { for k, v in var.kube_params : k =&amp;gt; v if v.export_kube_config == true }
  filename = each.value.kubeconfig_path
  content  = azurerm_kubernetes_cluster.this[each.key].kube_config_raw
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To export the kubeconfig, you will simply need to have a parameter in &lt;em&gt;kube_params&lt;/em&gt; called &lt;em&gt;export_kube_config&lt;/em&gt; set to true.&lt;/p&gt;

&lt;p&gt;This is what my &lt;em&gt;main.tf&lt;/em&gt; file looks like in the end:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_kubernetes_cluster" "this" {
  for_each            = var.kube_params
  name                = each.value.name
  location            = each.value.rg_location
  resource_group_name = each.value.rg_name
  dns_prefix          = each.value.dns_prefix

  default_node_pool {
    enable_auto_scaling = each.value.enable_auto_scaling
    max_count           = each.value.enable_auto_scaling == true ? each.value.max_count : null
    min_count           = each.value.enable_auto_scaling == true ? each.value.min_count : null
    node_count          = each.value.node_count
    vm_size             = each.value.vm_size
    name                = each.value.np_name
  }

  dynamic "service_principal" {
    for_each = each.value.service_principal
    content {
      client_id     = service_principal.value.client_id
      client_secret = service_principal.value.client_secret
    }
  }

  dynamic "identity" {
    for_each = each.value.identity
    content {
      type         = identity.value.type
      identity_ids = identity.value.identity_ids
    }
  }
  tags = merge(var.tags, each.value.tags)
}

resource "local_file" "kube_config" {
  for_each = { for k, v in var.kube_params : k =&amp;gt; v if v.export_kube_config == true }
  filename = each.value.kubeconfig_path
  content  = azurerm_kubernetes_cluster.this[each.key].kube_config_raw
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For the &lt;em&gt;variables.tf&lt;/em&gt; file, I defined just the &lt;em&gt;kube_params&lt;/em&gt; and &lt;em&gt;tags&lt;/em&gt; variables, as they are the only ones used inside my module.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "kube_params" {
  type = map(object({
    name                = string
    rg_name             = string
    rg_location         = string
    dns_prefix          = string
    client_id           = optional(string, null)
    client_secret       = optional(string, null)
    vm_size             = optional(string, "Standard_DS2_v2")
    enable_auto_scaling = optional(bool, true)
    max_count           = optional(number, 1)
    min_count           = optional(number, 1)
    node_count          = optional(number, 1)
    np_name             = string
    service_principal = optional(list(object({
      client_id     = optional(string, null)
      client_secret = optional(string, null)
    })), [])
    identity = optional(list(object({
      type         = optional(string, "SystemAssigned")
      identity_ids = optional(list(string), [])
    })), [])
    kubeconfig_path    = optional(string, "~/.kube/config")
    export_kube_config = optional(bool, false)
  }))
  description = "AKS params"
}

variable "tags" {
  type        = map(string)
  description = "Global tags to apply to the resources"
  default     = {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the release of optional object type attributes, I no longer have to make every attribute required: some of the parameters can now be omitted by using the &lt;em&gt;optional&lt;/em&gt; function at the variable type level. More about that &lt;a href="https://techblog.flaviusdinu.com/terraform-optional-object-type-attributes-cbfdd336b662" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
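
&lt;p&gt;As a minimal sketch of the feature, &lt;em&gt;optional&lt;/em&gt; takes the attribute type and, optionally, a default value, so callers can omit the attribute entirely:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "example" {
  type = object({
    name    = string                              # mandatory
    vm_size = optional(string, "Standard_DS2_v2") # optional, with a default
    tags    = optional(map(string))               # optional, defaults to null
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;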

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "kube_params" {
  value = { for kube in azurerm_kubernetes_cluster.this : kube.name =&amp;gt; { "id" : kube.id, "fqdn" : kube.fqdn } }
}

output "kube_config" {
  value = { for kube in azurerm_kubernetes_cluster.this : kube.name =&amp;gt; nonsensitive(kube.kube_config) }
}

output "kube_config_path" {
  value = { for k, v in local_file.kube_config : k =&amp;gt; v.filename }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve defined three outputs: one for the most relevant parameters of the Kubernetes cluster and two for the kubeconfig. The kubeconfig-related ones give users the ability to log in to the cluster using either the kubeconfig file or the Kubernetes user/password/certificate combination.&lt;/p&gt;

&lt;p&gt;You can use functions inside outputs, and you are encouraged to do so to accommodate your use case.&lt;/p&gt;

&lt;p&gt;The module code is now ready, but you should include at least one example in the repository to show users how they can run the code.&lt;/p&gt;

&lt;p&gt;So in the examples folder, in the &lt;em&gt;main.tf&lt;/em&gt; file, I’ve added the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  features {}
}

module "aks" {
  source = "../"
  kube_params = {
    kube1 = {
      name                = "kube1"
      rg_name             = "rg1"
      rg_location         = "westeurope"
      dns_prefix          = "kube"
      identity            = [{}]
      enable_auto_scaling = false
      node_count          = 1
      np_name             = "kube1"
      export_kube_config  = true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve added the provider configuration and called the module with some of the essential parameters. You can turn &lt;em&gt;kube_params&lt;/em&gt; into a variable and provide values directly, in a &lt;em&gt;tfvars&lt;/em&gt; file, or even in a YAML file if you want.&lt;/p&gt;

&lt;p&gt;The above example creates only one Kubernetes cluster. If you want more, just copy &amp;amp; paste the &lt;em&gt;kube1&lt;/em&gt; block and change whatever parameters you want, similar to this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  features {}
}

module "aks" {
  source = "../"
  kube_params = {
    kube1 = {
      name                = "kube1"
      rg_name             = "rg1"
      rg_location         = "westeurope"
      dns_prefix          = "kube"
      identity            = [{}]
      enable_auto_scaling = false
      node_count          = 1
      np_name             = "kube1"
      export_kube_config  = true
    }
    kube2 = {
      name                = "kube2"
      rg_name             = "rg1"
      rg_location         = "westeurope"
      dns_prefix          = "kube"
      identity            = [{}]
      enable_auto_scaling = false
      node_count          = 4
      np_name             = "kube2"
      export_kube_config  = false
    }
    kube3 = {
      name                = "kube3"
      rg_name             = "rg1"
      rg_location         = "westeurope"
      dns_prefix          = "kuber"
      identity            = [{}]
      enable_auto_scaling = true
      max_count           = 4
      min_count           = 2
      node_count          = 2
      np_name             = "kube3"
      export_kube_config  = false
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve added three just for reference; you have full control over the number of clusters you create.&lt;/p&gt;
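
&lt;p&gt;If you prefer keeping these cluster definitions out of the HCL code, &lt;em&gt;kube_params&lt;/em&gt; could also be fed from a YAML file via the &lt;em&gt;yamldecode&lt;/em&gt; function (a sketch; &lt;em&gt;clusters.yaml&lt;/em&gt; is a hypothetical file mirroring the structure of the map above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# clusters.yaml (hypothetical) would contain top-level keys
# like kube1, kube2, ... with the same attributes as above.
module "aks" {
  source      = "../"
  kube_params = yamldecode(file("${path.module}/clusters.yaml"))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;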

&lt;p&gt;Ok, let’s get back to the initial example. Before running the code, you should first log in to Azure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You may think we are done, right? Not yet, because I like to run pre-commit with all the goodies on my code. I am using the same pre-commit file I mentioned in the previous post, so before committing my code, I’m going to run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pre-commit run --all-files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before running this command, I made sure that in my README.md I had the following lines:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK --&amp;gt;
&amp;lt;!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK --&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Between these lines, terraform-docs will populate the documentation based on your resources, variables, outputs, and providers.&lt;/p&gt;

&lt;p&gt;This is what the README looks like after running the pre-commit command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ac341a-4dc0-4a82-8ea2-f5f2d65998da_1066x2156.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwom8a3e2zoroavtzdpv.png" width="800" height="1618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now this module is done. I’ve pushed all my changes, and a new tag was created automatically in my repository (I will discuss the pipelines I use for that in another post).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Helm Release Module&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;All the steps are the same as for the other module: first we have to understand what to build by reading the documentation and understanding the parameters.&lt;/p&gt;

&lt;p&gt;I’ve created a new repository using the same template and started the development. As I don’t want to repeat all the steps from above, I will show you just the end result of the &lt;em&gt;main.tf&lt;/em&gt; file for this module.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "this" {
  for_each         = var.helm
  name             = each.value.name
  chart            = each.value.chart
  repository       = each.value.repository
  version          = each.value.version
  namespace        = each.value.namespace
  create_namespace = each.value.create_namespace

  values = [for yaml_file in each.value.values : file(yaml_file)]
  dynamic "set" {
    for_each = each.value.set
    content {
      name  = set.value.name
      value = set.value.value
      type  = set.value.type
    }
  }
  dynamic "set_sensitive" {
    for_each = each.value.set_sensitive
    content {
      name  = set_sensitive.value.name
      value = set_sensitive.value.value
      type  = set_sensitive.value.type
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
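
&lt;p&gt;The &lt;em&gt;helm&lt;/em&gt; variable backing this module follows the same map-of-objects pattern as &lt;em&gt;kube_params&lt;/em&gt;. As a sketch of what it might look like (the exact definition lives in the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "helm" {
  type = map(object({
    name             = string
    chart            = string
    repository       = optional(string, null)
    version          = optional(string, null)
    namespace        = optional(string, null)
    create_namespace = optional(bool, false)
    values           = optional(list(string), [])
    set = optional(list(object({
      name  = string
      value = string
      type  = optional(string, "string")
    })), [])
    set_sensitive = optional(list(object({
      name  = string
      value = string
      type  = optional(string, "string")
    })), [])
  }))
  description = "Helm release params"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;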

&lt;p&gt;This module is also pushed, tagged, and good to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Automation based on the Two Modules&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I’ve created a third repository, this time from scratch, in which I will use the two modules built throughout this post.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  features {}
}

module "aks" {
  source = "git@github.com:flavius-dinu/terraform-az-aks.git?ref=v1.0.3"
  kube_params = {
    kube1 = {
      name                = "kube1"
      rg_name             = "rg1"
      rg_location         = "westeurope"
      dns_prefix          = "kube"
      identity            = [{}]
      enable_auto_scaling = false
      node_count          = 1
      np_name             = "kube1"
      export_kube_config  = true
      kubeconfig_path     = "./config"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = module.aks.kube_config_path["kube1"]
  }
}

# Alternative way of declaring the provider

# provider "helm" {
#   kubernetes {
#     host                   = module.aks.kube_config["kube1"].0.host
#     username               = module.aks.kube_config["kube1"].0.username
#     password               = module.aks.kube_config["kube1"].0.password
#     client_certificate     = base64decode(module.aks.kube_config["kube1"].0.client_certificate)
#     client_key             = base64decode(module.aks.kube_config["kube1"].0.client_key)
#     cluster_ca_certificate = base64decode(module.aks.kube_config["kube1"].0.cluster_ca_certificate)
#   }
# }

module "helm" {
  source = "git@github.com:flavius-dinu/terraform-helm-release.git?ref=v1.0.0"
  helm = {
    argo = {
      name             = "argocd"
      repository       = "https://argoproj.github.io/argo-helm"
      chart            = "argo-cd"
      create_namespace = true
      namespace        = "argocd"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;So, in my repository, I am creating one Kubernetes cluster, in which I am deploying an ArgoCD Helm chart. Because the Helm provider in both cases (commented and uncommented code) depends on the AKS module’s outputs (&lt;em&gt;module.aks.something&lt;/em&gt;), Terraform will first wait for the cluster to be ready, and only after that will the Helm chart be deployed on it.&lt;/p&gt;

&lt;p&gt;I’ve kept the commented part because I wanted to show that you can connect to the cluster using two different outputs; both solutions work just fine.&lt;/p&gt;

&lt;p&gt;You should use a remote state if you are collaborating or using the code in a production environment. I haven’t done that for this example, because it is just a simple demo.&lt;/p&gt;
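
&lt;p&gt;If you do want a remote state, a backend configuration for this repository could look something like this (a sketch assuming an existing Azure storage account; all names are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"     # hypothetical
    storage_account_name = "tfstatestorage" # hypothetical
    container_name       = "tfstate"
    key                  = "aks-argocd.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;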

&lt;p&gt;Now the automation is done, so all that is left to do is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both components are created successfully, and if you log in to your AKS cluster, you are going to see the ArgoCD-related resources by running:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all -n argocd&lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Repository Links&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/flavius-dinu/terraform-az-aks" rel="noopener noreferrer"&gt;Azure AKS&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/flavius-dinu/terraform-helm-release" rel="noopener noreferrer"&gt;Helm Release&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/flavius-dinu/aks-argocd" rel="noopener noreferrer"&gt;AKS ArgoCD&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Useful Documentation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you need some help to better understand some of the concepts that I’ve used throughout this post, I’ve put together a list of useful articles and component documentation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Each&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/language/meta-arguments/for_each" rel="noopener noreferrer"&gt;https://www.terraform.io/language/meta-arguments/for_each&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dynamic blocks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://spacelift.io/blog/terraform-dynamic-blocks" rel="noopener noreferrer"&gt;https://spacelift.io/blog/terraform-dynamic-blocks&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.terraform.io/language/expressions/dynamic-blocks" rel="noopener noreferrer"&gt;https://www.terraform.io/language/expressions/dynamic-blocks&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Originally posted on &lt;a href="https://techblog.flaviusdinu.com/building-reusable-terraform-modules-part-2-c7cafaeeee59" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  17. Bonus1 — OpenTofu features
&lt;/h1&gt;

&lt;p&gt;In this part, we will cover some OpenTofu features that Terraform doesn’t have.&lt;/p&gt;

&lt;p&gt;So here are the standout features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State encryption&lt;/strong&gt; — built-in state encryption, ensuring enhanced security for sensitive infrastructure data. Unlike Terraform, which relies on third-party tools or external state backends for encryption, OpenTofu provides this feature natively, allowing users to secure their state files directly within the platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Support for&lt;/strong&gt;&lt;code&gt;.tofu&lt;/code&gt; &lt;strong&gt;Files&lt;/strong&gt; — use of &lt;code&gt;.tofu&lt;/code&gt; file extensions, which can override standard &lt;code&gt;.tf&lt;/code&gt; files. This feature gives users greater flexibility in managing configurations, enabling easy application of specific overrides or conditional configurations. This functionality isn’t available in Terraform, which relies solely on &lt;code&gt;.tf&lt;/code&gt; files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;for_each&lt;/code&gt; &lt;strong&gt;in Provider Configuration Blocks&lt;/strong&gt; — making it possible to dynamically create multiple instances of a provider configuration. This feature allows each resource instance to select a different provider configuration based on a list or map, ideal for infrastructure duplication across regions. Terraform does not support this feature in provider configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-exclude&lt;/code&gt; &lt;strong&gt;Planning Option&lt;/strong&gt; — allows users to specify objects to skip during planning and application, providing finer control over infrastructure management. Unlike Terraform’s &lt;code&gt;-target&lt;/code&gt; option, which only targets specific resources, the &lt;code&gt;-exclude&lt;/code&gt; option allows selective exclusion of specified objects and their dependencies. This capability is particularly useful for managing large, complex setups.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
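
&lt;p&gt;To give you an idea of the first feature, a state encryption configuration in OpenTofu looks roughly like this (a sketch based on the OpenTofu documentation; how you supply the passphrase is up to you):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  encryption {
    key_provider "pbkdf2" "mykey" {
      passphrase = var.state_passphrase # never hardcode this
    }
    method "aes_gcm" "secure" {
      keys = key_provider.pbkdf2.mykey
    }
    state {
      method = method.aes_gcm.secure
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;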

&lt;h1&gt;
  
  
  18. Bonus2 — Specialized Infrastructure Orchestration Platform
&lt;/h1&gt;

&lt;p&gt;Now that you know how to use Terraform and OpenTofu, there are many questions that will pass through your mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How can I manage it at scale?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do I enable collaboration?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I enforce my infrastructure process?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I ensure that I stay safe all the way?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I enable self-service?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I easily combine Terraform/OpenTofu with other tools that I’m using such as Ansible or Kubernetes?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I include security vulnerability scanning tools in my workflow?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I ensure that I still deliver fast?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I protect myself from infrastructure drift?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I detect and remediate drift?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How can I take advantage of a private registry?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may have other questions as well, and to all of them you can use the same answer — “by leveraging an infrastructure orchestration platform”.&lt;/p&gt;

&lt;p&gt;Spacelift is an infrastructure orchestration platform that helps you with all of the above, and more. To learn more about how you can leverage it, check out this &lt;a href="https://spacelift.io/blog/how-specialized-solution-can-improve-your-iac" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>programming</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Is this really the Minimum to become a DevOps Engineer?</title>
      <dc:creator>Flavius Dinu</dc:creator>
      <pubDate>Tue, 05 Nov 2024 12:36:39 +0000</pubDate>
      <link>https://dev.to/flaviuscdinu/is-this-really-the-minimum-to-become-a-devops-engineer-2eaf</link>
      <guid>https://dev.to/flaviuscdinu/is-this-really-the-minimum-to-become-a-devops-engineer-2eaf</guid>
      <description>&lt;p&gt;&lt;a href="https://medium.com/@flaviuscdinu93/is-this-really-the-minimum-to-become-a-devops-engineer-93f2143854fb?source=rss-bd2e21eea802------2" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1024%2F0%2A2C4iZp6yPm7_Q8Q3" width="1024" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You’ve asked so I kind of delivered, I guess.&lt;/p&gt;

&lt;p&gt;I’ve seen many social media posts saying that DevOps is not a job but a methodology in software development. DevOps started as a methodology and I 100% agree that it should be one, but the thing is that many companies are searching for DevOps engineers, and DevOps engineering roles are here to stay for a while.&lt;/p&gt;

&lt;p&gt;To tell you the truth, I didn’t become a DevOps engineer by accident. I saw the methodology, and I thought, ok this seems like it will take off soon. I took a bet, and I’ve started reading about Cloud providers, CI/CD pipelines, configuration management, and infrastructure as code, and I’ve started becoming better in scripting to try and land a job in this area. Luckily the market was in a way better shape than it is today, and the competition wasn’t that big in the beginning, so I managed to transition to a position like this, less than a year into working.&lt;/p&gt;

&lt;p&gt;One year ago, I wanted to write a series about what I consider the minimum it takes to become a DevOps engineer, but I realized that to cover everything I planned, I would have to write a book, which is something that I don’t really want to commit to — yet. So what I’ll do instead is try to share as much as possible in this post.&lt;/p&gt;

&lt;p&gt;I’ve been blessed (or cursed) to work with many clients that had very different architectures, maturity levels, and were using various tools to achieve their day-to-day DevOps engineering practices, so in this post, I will cover some of the common ground I’ve seen being used, show you some nice learning materials and give you some recommendations for next steps.&lt;/p&gt;

&lt;p&gt;Before jumping into that, I want to clarify something: you cannot really be a junior DevOps engineer if the company you work for isn’t patient enough to give you the time you need to shadow a senior. So, to the more experienced folks: please be the senior you needed when you were a junior (I know it sounds like a cliché, but it’s not), because that is how we make the industry a better place. Stop blaming, start sharing, and let’s be honest with ourselves: very few people in this industry were born extraordinary, and for the rest of us, this is a continuous grind.&lt;/p&gt;

&lt;p&gt;Here is the list of skills the majority of DevOps engineers should have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linux&lt;/li&gt;
&lt;li&gt;Scripting&lt;/li&gt;
&lt;li&gt;Version Control Systems (VCS)&lt;/li&gt;
&lt;li&gt;Cloud Technologies&lt;/li&gt;
&lt;li&gt;Infrastructure as Code (IaC)&lt;/li&gt;
&lt;li&gt;Configuration Management&lt;/li&gt;
&lt;li&gt;Continuous Integration &amp;amp; Continuous Delivery (CI/CD)&lt;/li&gt;
&lt;li&gt;Containerization &amp;amp; Orchestration&lt;/li&gt;
&lt;li&gt;Observability &amp;amp; Monitoring&lt;/li&gt;
&lt;li&gt;Security and Governance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Linux
&lt;/h2&gt;

&lt;p&gt;Is being a pro at Windows wrong? Of course not, but given that even on Microsoft Azure the majority of VMs run some Linux distribution, your time is better spent learning Linux than going all in on Windows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonfseezkia5c9vs7smya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonfseezkia5c9vs7smya.png" alt=" " width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I was thinking about writing the series a year ago, I wrote an article with my take on what you should learn when it comes to Linux, so check it out here.&lt;/p&gt;
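&lt;p&gt;To make the Linux point a bit more concrete, here is a minimal sketch of the kind of everyday tasks worth being fluent in: creating directories, writing and searching logs, setting permissions, and checking disk usage. The paths and file names below are purely illustrative.&lt;/p&gt;

```shell
# Create a working directory (the -p flag avoids errors if it already exists)
mkdir -p /tmp/devops-demo

# Write a line into a log file
echo "deploy finished" > /tmp/devops-demo/app.log

# Restrict permissions: owner read/write, group read, others nothing
chmod 640 /tmp/devops-demo/app.log

# Search the log for a pattern and count matching lines
grep -c "deploy" /tmp/devops-demo/app.log

# Check how much disk space the directory uses
du -sh /tmp/devops-demo
```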

&lt;p&gt;&lt;a href="https://techblog.flaviusdinu.com/is-this-really-the-minimum-to-become-a-devops-engineer-93f2143854fb?sk=257b6168c7105548424cb4bc9946f282" rel="noopener noreferrer"&gt;Continue reading on Medium&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>engineering</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
