<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Spacelift</title>
    <description>The latest articles on DEV Community by Spacelift (@spacelift).</description>
    <link>https://dev.to/spacelift</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3812%2Ff1d798c8-b04a-4b81-9dbf-efa8d8df7541.png</url>
      <title>DEV Community: Spacelift</title>
      <link>https://dev.to/spacelift</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/spacelift"/>
    <language>en</language>
    <item>
      <title>Why Terraform Feels Slow: Understanding Parallelism</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Mon, 19 Jan 2026 13:26:14 +0000</pubDate>
      <link>https://dev.to/spacelift/terraform-parallelism-how-it-works-tuning-best-practices-jl</link>
      <guid>https://dev.to/spacelift/terraform-parallelism-how-it-works-tuning-best-practices-jl</guid>
      <description>&lt;p&gt;Terraform can perform resource operations concurrently, which can dramatically reduce apply time — but parallelism is a tradeoff. Too low and runs drag; too high and you can hit provider API rate limits, timeouts, or flaky behavior that’s hard to debug.&lt;/p&gt;

&lt;p&gt;In the full guide, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Terraform “parallelism” actually controls (and what it doesn’t)&lt;/li&gt;
&lt;li&gt;How the default behavior works during plan/apply&lt;/li&gt;
&lt;li&gt;When tuning &lt;code&gt;-parallelism&lt;/code&gt; helps (and when it just increases failure rate)&lt;/li&gt;
&lt;li&gt;Practical strategies for reliability: rate-limit awareness, batching, and dependency shaping&lt;/li&gt;
&lt;li&gt;Tips for scaling large IaC codebases without turning applies into roulette&lt;/li&gt;
&lt;/ul&gt;
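&lt;p&gt;As a quick taste: Terraform caps concurrent operations at 10 by default, and you can adjust the cap per run with the &lt;code&gt;-parallelism&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lower the concurrency cap when a provider API starts rate-limiting you
terraform apply -parallelism=5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;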

&lt;p&gt;➡️ &lt;strong&gt;Read the full article on our blog:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/terraform-parallelism" rel="noopener noreferrer"&gt;https://spacelift.io/blog/terraform-parallelism&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Terraform IaC Security Scanners</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Mon, 15 Dec 2025 09:53:33 +0000</pubDate>
      <link>https://dev.to/spacelift/top-7-terraform-scanning-tools-you-should-know-4c70</link>
      <guid>https://dev.to/spacelift/top-7-terraform-scanning-tools-you-should-know-4c70</guid>
      <description>&lt;p&gt;Terraform scanning is one of the easiest “shift-left” wins: you catch risky defaults, misconfigurations, and policy violations &lt;em&gt;before&lt;/em&gt; they become real infrastructure. The tricky part isn’t whether to scan — it’s choosing a tool that fits your workflow (local dev + CI), your policy needs, and your reporting requirements.&lt;/p&gt;

&lt;p&gt;In the full article, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Terraform/IaC scanning is and what it’s best at catching&lt;/li&gt;
&lt;li&gt;7 widely used scanners and how they differ in coverage and focus&lt;/li&gt;
&lt;li&gt;How teams typically integrate scanners into CI/CD and pull request workflows&lt;/li&gt;
&lt;li&gt;What to look for when evaluating a scanner (policies, suppression, speed, output formats)&lt;/li&gt;
&lt;/ul&gt;
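&lt;p&gt;Most of these scanners are a single command away. For example, Checkov (one of the tools covered) can scan a Terraform directory like this, assuming it is installed locally or in your CI image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Scan all IaC files under ./terraform; a nonzero exit code fails the CI job
checkov -d ./terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;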

&lt;p&gt;➡️ &lt;strong&gt;Read the full article on our blog:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/terraform-scanning-tools" rel="noopener noreferrer"&gt;https://spacelift.io/blog/terraform-scanning-tools&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Terraform Self-Hosted</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Fri, 28 Nov 2025 08:37:15 +0000</pubDate>
      <link>https://dev.to/spacelift/how-to-run-terraform-self-hosted-3eoi</link>
      <guid>https://dev.to/spacelift/how-to-run-terraform-self-hosted-3eoi</guid>
      <description>&lt;p&gt;“Self-hosted Terraform” usually isn’t about preference — it’s about constraints: compliance, data residency, private networking, or simply needing execution to happen inside your own environment.&lt;/p&gt;

&lt;p&gt;The key decision isn’t &lt;em&gt;whether&lt;/em&gt; Terraform can run there (it can), but &lt;em&gt;how&lt;/em&gt; you want to operate it day-to-day: collaboration workflow, access control, state locking, secrets handling, auditability, and how you scale runs safely without turning CI into a fragile snowflake.&lt;/p&gt;

&lt;p&gt;In the full guide, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What “self-hosted Terraform” means in practice (and what it doesn’t)&lt;/li&gt;
&lt;li&gt;Four common models teams use, from local execution to fully managed workflows with self-hosted runners&lt;/li&gt;
&lt;li&gt;The tradeoffs to evaluate: control, security posture, scaling, and operational overhead&lt;/li&gt;
&lt;li&gt;What tends to break first (locking, concurrency, secrets, drift) and how to plan around it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➡️ &lt;strong&gt;Read the full article on our blog:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/terraform-self-hosted" rel="noopener noreferrer"&gt;https://spacelift.io/blog/terraform-self-hosted&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Ansible Register: How to Store and Reuse Task Output</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Mon, 06 Oct 2025 10:50:45 +0000</pubDate>
      <link>https://dev.to/spacelift/ansible-register-how-to-store-and-reuse-task-output-15c0</link>
      <guid>https://dev.to/spacelift/ansible-register-how-to-store-and-reuse-task-output-15c0</guid>
      <description>&lt;p&gt;Ansible automates repetitive infrastructure and application management tasks across multiple environments. It enables configuration management at scale without manual intervention. Ansible playbooks allow you to define infrastructure in a structured and repeatable manner. &lt;/p&gt;

&lt;p&gt;Ansible’s register variables make playbooks more dynamic by capturing task output for use in later steps. This enables conditional execution, smarter state handling, and structured debugging, allowing workflows to adapt intelligently.&lt;/p&gt;

&lt;p&gt;In this blog, we cover how register variables work and why they are essential in Ansible automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Ansible register?
&lt;/h2&gt;

&lt;p&gt;An Ansible register is a mechanism for capturing a task's output and storing it in a variable for use in later tasks. It allows dynamic decision-making based on command results, such as stdout, stderr, or return codes.&lt;/p&gt;

&lt;p&gt;When a task includes the &lt;code&gt;register&lt;/code&gt; keyword, Ansible saves the result in a variable, which is a dictionary containing multiple fields like &lt;code&gt;stdout&lt;/code&gt;, &lt;code&gt;stderr&lt;/code&gt;, &lt;code&gt;rc&lt;/code&gt;, and &lt;code&gt;changed&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Registered variables are essential for branching logic, error handling, and reporting within Ansible automation workflows.&lt;/p&gt;
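&lt;p&gt;As a minimal sketch, the tasks below register the result of a &lt;code&gt;hostname&lt;/code&gt; command and then reference individual fields of that result dictionary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Capture the target machine's hostname
  command: hostname
  register: host_info

- name: Reference fields of the registered result
  debug:
    msg: "rc={{ host_info.rc }}, output={{ host_info.stdout }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;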

&lt;h2&gt;
  
  
  Ansible register variables use cases
&lt;/h2&gt;

&lt;p&gt;Register variables allow you to adapt your workflow dynamically based on real-time system information, without gathering data manually. Use them whenever you want your automation logic to respond to live system feedback.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Debugging with register variables
&lt;/h3&gt;

&lt;p&gt;When you automate system checks, you often need to see what a command returns on remote servers. Register variables allow you to capture this command output and display it clearly within your playbook runs. You can view the output directly in the Ansible run and quickly verify, for example, whether a system has sufficient disk space.&lt;/p&gt;

&lt;p&gt;For example, we run the &lt;code&gt;df -h&lt;/code&gt; command to check disk usage on the target machine and register its output. We then display only the human-readable output to understand the current disk usage clearly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check available disk space on the system
  command: df -h
  register: disk_check

- name: Display the captured disk space information
  debug:
    var: disk_check.stdout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach enables us to immediately view the server's disk space status in our playbook output. We can easily determine if we need to clear space before proceeding with installations or updates.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Conditional execution using register variables
&lt;/h3&gt;

&lt;p&gt;When automating tasks, it's often important to control execution based on the system's current state. Register variables let you capture command results, which you can then use to decide whether the next task should run. This ensures that configurations are applied only when specific conditions are met in your playbook.&lt;/p&gt;

&lt;p&gt;In the example below, we check whether nginx is installed on the target system by storing the output of a command. We set &lt;code&gt;ignore_errors&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; so the playbook doesn't fail if nginx isn't present. Finally, we use the return code to decide whether or not to start the nginx service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check if nginx is installed
  command: rpm -q nginx
  register: nginx_check
  ignore_errors: true

- name: Start nginx if it is installed
  service:
    name: nginx
    state: started
  when: nginx_check.rc == 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach ensures that we only attempt to start &lt;code&gt;nginx&lt;/code&gt; if it is already installed on the system. It prevents errors during playbook execution while maintaining a clean and controlled workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Data manipulation with register variables
&lt;/h3&gt;

&lt;p&gt;Sometimes you need to pull structured data from remote systems to use in later steps. With registered variables, you can capture that data once and then reuse it in loops or templates, instead of running the same command multiple times. This makes your playbooks more efficient while still working with up-to-date system information.&lt;/p&gt;

&lt;p&gt;For example, you might capture the list of users from the &lt;code&gt;/etc/passwd&lt;/code&gt; file and split it into individual lines, making it easier to process or validate later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Capture list of system users
  command: cat /etc/passwd
  register: users_output

- name: Display list of users
  debug:
    var: users_output.stdout_lines
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach enables us to process system user data dynamically within our playbook. You can parse, filter, or use this captured data to trigger configurations or audits across managed systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  How register variables fit into Ansible workflows
&lt;/h3&gt;

&lt;p&gt;Register variables integrate naturally into idempotent workflows by using live system data to drive the next actions.&lt;/p&gt;

&lt;p&gt;When a task runs, the &lt;code&gt;register&lt;/code&gt; keyword stores its result (including &lt;code&gt;stdout&lt;/code&gt;, &lt;code&gt;stderr&lt;/code&gt;, &lt;code&gt;rc&lt;/code&gt;, and more) in a variable. This variable can then be referenced in &lt;a href="https://spacelift.io/blog/ansible-when-conditional" rel="noopener noreferrer"&gt;&lt;code&gt;when&lt;/code&gt; clauses&lt;/a&gt;, &lt;a href="https://spacelift.io/blog/ansible-debug" rel="noopener noreferrer"&gt;&lt;code&gt;debug&lt;/code&gt; statements&lt;/a&gt;, or &lt;a href="https://spacelift.io/blog/ansible-template" rel="noopener noreferrer"&gt;templated values&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;For example, you can run a shell command, register its output, and make subsequent decisions based on the command's success or failure. Registered variables persist only during the playbook run and are specific to the host where the task was executed.&lt;/p&gt;
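&lt;p&gt;A short sketch of that pattern (the config path here is purely illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Try to read an application config (illustrative path)
  command: cat /etc/myapp/config.yml
  register: config_read
  ignore_errors: true

- name: React when the read failed
  debug:
    msg: "Config file missing; a recovery task would run here"
  when: config_read is failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;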

&lt;h2&gt;
  
  
  Syntax and basic usage of Ansible register variables
&lt;/h2&gt;

&lt;p&gt;The syntax to register variables in Ansible uses the &lt;code&gt;register&lt;/code&gt; keyword within a task, storing the output of that task in a named variable. The basic format is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Run a command and register output
  command: uname -r
  register: kernel_version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, we capture the output of a system uptime check to evaluate later in our workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check system uptime
  command: uptime
  register: uptime_result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To execute this, wrap the tasks in a play (with a &lt;code&gt;hosts&lt;/code&gt; section), save it as &lt;code&gt;uptime_check.yml&lt;/code&gt;, and run it against your target machines using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook uptime_check.yml -i your_inventory_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The registered variable captures the output in a structured, JSON-like format. It includes attributes such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;stdout&lt;/code&gt;: The standard output of the command&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;stderr&lt;/code&gt;: Any error messages returned&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;rc&lt;/code&gt;: The numeric return code from the command&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;changed&lt;/code&gt;: Indicates whether the task made a change to the system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To view this structure clearly, we add a &lt;code&gt;debug&lt;/code&gt; task in the playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Display the captured uptime output structure
  debug:
    var: uptime_result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;You can utilize these attributes to evaluate task results with precision.&lt;/p&gt;

&lt;p&gt;When you run the playbook, you will see output similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ok: [localhost] =&amp;gt; {
    "uptime_result": {
        "changed": true,
        "cmd": "uptime",
        "delta": "0:00:00.003",
        "end": "2025-06-19 11:22:33.682865",
        "rc": 0,
        "start": "2025-06-19 11:22:33.679251",
        "stderr": "",
        "stdout": "11:22:33 up 2 days,  3:14,  1 user,  load average: 0.03, 0.01, 0.00",
        ...
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps you inspect and use the captured values precisely in later tasks, such as for conditional checks or notifications during your playbook runs.&lt;/p&gt;

&lt;p&gt;💡 You might also like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/ansible-variable-precedence" rel="noopener noreferrer"&gt;Ansible Variable Precedence Explained: Order &amp;amp; Use Cases&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/ansible-inventory" rel="noopener noreferrer"&gt;Working with Ansible Inventory – Basics and Examples&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/ansible-cheat-sheet" rel="noopener noreferrer"&gt;Ansible Cheat Sheet: CLI Commands and Basics&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Accessing register variable data
&lt;/h2&gt;

&lt;p&gt;Once you capture data with a register variable, you can access specific attributes of it to inform decisions in subsequent tasks. The &lt;code&gt;stdout&lt;/code&gt; attribute typically contains the textual output of a command, &lt;code&gt;stderr&lt;/code&gt; captures any error output, and &lt;code&gt;rc&lt;/code&gt; stores the numeric return code, letting you verify whether a task succeeded or failed without manual checks. &lt;/p&gt;

&lt;p&gt;You can also use the &lt;code&gt;changed&lt;/code&gt; attribute to determine whether the preceding task altered system state, ensuring you only trigger dependent configurations when changes occur.&lt;/p&gt;
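&lt;p&gt;For example, a short sketch that branches on these attributes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check the running kernel
  command: uname -r
  register: kernel_check

- name: Report only when the command succeeded
  debug:
    msg: "Kernel version: {{ kernel_check.stdout }}"
  when: kernel_check.rc == 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;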

&lt;p&gt;We frequently use the debug module to display the contents of register variables during development or troubleshooting. This helps us understand the structure of the captured output and identify which attributes we need to reference in our conditionals, loops, or templates. &lt;/p&gt;

&lt;p&gt;For example, displaying &lt;code&gt;uptime_result&lt;/code&gt; after capturing it helps you verify what the command returned across multiple hosts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Show uptime result
  debug:
    var: uptime_result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prints the complete registered result for each host, including all of its attributes. &lt;/p&gt;

&lt;p&gt;You can confirm the exact data captured from the managed node and use it effectively in subsequent tasks. This enables efficient handling of structured data while maintaining readable and modular playbooks for team workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tips for navigating complex nested output&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Use the &lt;code&gt;json_query&lt;/code&gt; filter to extract specific values from complex JSON outputs returned by modules like &lt;code&gt;uri&lt;/code&gt; or &lt;code&gt;setup&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  Always inspect the structure with a &lt;code&gt;debug&lt;/code&gt; task (&lt;code&gt;var: variable_name&lt;/code&gt;) before referencing nested paths.&lt;/li&gt;
&lt;li&gt;  Apply the &lt;code&gt;to_json&lt;/code&gt; filter temporarily to view the raw JSON structure during troubleshooting.&lt;/li&gt;
&lt;li&gt;  Document the nested structure you expect in your playbook comments for clarity during team reviews.&lt;/li&gt;
&lt;li&gt;  Use the &lt;code&gt;community.general.json_query&lt;/code&gt; filter plugin for advanced JMESPath queries on deeply nested outputs.&lt;/li&gt;
&lt;/ul&gt;
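&lt;p&gt;As an illustration (the endpoint URL and JMESPath expression here are hypothetical, and the &lt;code&gt;json_query&lt;/code&gt; filter requires the &lt;code&gt;jmespath&lt;/code&gt; Python library), you can pull one field out of a registered API response like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Query an internal status endpoint (hypothetical URL)
  uri:
    url: http://localhost:8080/status
    return_content: true
  register: api_response

- name: Extract just the service names from the JSON body
  debug:
    msg: "{{ api_response.json | community.general.json_query('services[*].name') }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;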

&lt;h2&gt;
  
  
  Practical examples of register variables
&lt;/h2&gt;

&lt;p&gt;The examples below show how register variables enhance your Ansible playbooks in real-world automation workflows. &lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1: Capturing command output
&lt;/h3&gt;

&lt;p&gt;We often need to check the system state on remote servers while automating configurations. Capturing the output of a shell command in Ansible enables you to view the data immediately, verify it, and utilize it for subsequent tasks within the same playbook. &lt;/p&gt;

&lt;p&gt;In this example, we will use the uptime command to capture the system's uptime and display it using the debug module to confirm the information collected during the run.&lt;/p&gt;

&lt;p&gt;Now, let's create a playbook named &lt;code&gt;capture_uptime.yml&lt;/code&gt; to capture and display the command output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Capture system uptime using Ansible
  hosts: all
  gather_facts: false

  tasks:
    - name: Run uptime command on the target host
      command: uptime
      register: uptime_result

    - name: Display captured uptime output
      debug:
        var: uptime_result.stdout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your terminal and navigate to the directory where your &lt;code&gt;capture_uptime.yml&lt;/code&gt; file is located. Run the playbook using the following command, replacing &lt;code&gt;inventory_file&lt;/code&gt; with your Ansible inventory file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook capture_uptime.yml -i inventory_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will execute the uptime command on all hosts listed in your inventory and capture the output. During execution, you will see output similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Display captured uptime output] *****************************************
ok: [server1] =&amp;gt; {
    "uptime_result.stdout": " 10:45:23 up 15 days,  2:11,  1 user,  load average: 0.02, 0.01, 0.00"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ansible captures the output of the &lt;code&gt;uptime&lt;/code&gt; command from the target machine and displays it during the playbook run.&lt;/p&gt;

&lt;p&gt;Using the register keyword, we store this output for later use, and with debug we make it visible in real time. This approach provides immediate insight into system conditions, ensuring we can verify factors like uptime or load before proceeding with sensitive tasks such as applying updates or restarting services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 2: Conditional execution
&lt;/h3&gt;

&lt;p&gt;When automating with Ansible, we often need to run tasks only if certain conditions are met on the target system. By using registered variables with the &lt;code&gt;when&lt;/code&gt; clause, we can capture command outputs and check them before moving on to dependent tasks. This way, actions are executed only when needed, helping us prevent unnecessary errors and keep our playbooks clean and efficient.&lt;/p&gt;

&lt;p&gt;Create and save the following playbook as &lt;code&gt;check_package_and_act.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check if a package is installed before taking action
  hosts: all
  gather_facts: false

  tasks:
    - name: Check if httpd package is installed
      command: rpm -q httpd
      register: httpd_check
      ignore_errors: true

    - name: Start httpd service if it is installed
      service:
        name: httpd
        state: started
      when: httpd_check.rc == 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open your terminal, navigate to your playbook directory, and execute the playbook using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook check_package_and_act.yml -i inventory_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;inventory_file&lt;/code&gt; with your Ansible inventory to target the correct hosts for execution. If the httpd package is installed on the target host, you will see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Start httpd service if it is installed] **********************************
changed: [server1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If httpd is not installed, the playbook will skip starting the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Start httpd service if it is installed] **********************************
skipping: [server1]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example demonstrates how to use register variables with conditional execution in Ansible. We capture the output and return code of the &lt;code&gt;rpm -q httpd&lt;/code&gt; command using a register variable named &lt;code&gt;httpd_check&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;We then evaluate the rc attribute (return code) with a when condition to determine if the package is installed (&lt;code&gt;rc == 0&lt;/code&gt;) before attempting to start the service. &lt;/p&gt;

&lt;p&gt;This approach prevents errors that can happen when trying to start a service that isn't installed. It helps keep our automation workflows both idempotent and reliable. It also allows us to build dynamic, context-aware playbooks that automatically adapt to the state of the systems we manage, without the need for manual intervention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 3: Looping with register variables
&lt;/h3&gt;

&lt;p&gt;Sometimes, during automation, we need to check the status of multiple services on a system. In Ansible, using register variables with loops lets us capture outputs as we iterate through a list. This makes it easier to run structured checks and produce clear reports in our playbooks. &lt;/p&gt;

&lt;p&gt;It's an efficient way to validate system health without writing repetitive tasks while still effectively collecting and reusing data.&lt;/p&gt;

&lt;p&gt;Now, we will create a playbook to check multiple services and display their status, saved as &lt;code&gt;check_multiple_services.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Check the status of multiple services
  hosts: all
  gather_facts: false

  tasks:
    - name: Check if services are active
      shell: systemctl is-active {{ item }}
      register: service_status
      loop:
        - sshd
        - crond
        - firewalld
      ignore_errors: true

    - name: Display the service status results
      debug:
        msg: "{{ item.item }} service status: {{ item.stdout }}"
      loop: "{{ service_status.results }}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can run the playbook using the following bash command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook check_multiple_services.yml -i inventory_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;inventory_file&lt;/code&gt; with your Ansible inventory file to target your managed hosts. The playbook will display outputs similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK [Display the service status results] **************************************
ok: [server1] =&amp;gt; (item={'item': 'sshd', 'stdout': 'active', ...}) =&amp;gt; {
    "msg": "sshd service status: active"
}
ok: [server1] =&amp;gt; (item={'item': 'crond', 'stdout': 'active', ...}) =&amp;gt; {
    "msg": "crond service status: active"
}
ok: [server1] =&amp;gt; (item={'item': 'firewalld', 'stdout': 'inactive', ...}) =&amp;gt; {
    "msg": "firewalld service status: inactive"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example shows how to use registered variables in loops to check and report the status of multiple services within a single, structured workflow. We loop through each listed service, running the &lt;code&gt;systemctl is-active&lt;/code&gt; command and capturing the output with the &lt;code&gt;register&lt;/code&gt; keyword into the variable &lt;code&gt;service_status&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, we iterate over &lt;code&gt;service_status.results&lt;/code&gt; and use the &lt;code&gt;debug&lt;/code&gt; module to display a clear message for each service's status. This method provides a systematic way to gather service information while keeping playbooks clean, reusable, and easy to maintain for operational checks across managed environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best practices for using register variables
&lt;/h2&gt;

&lt;p&gt;Using register variables makes our Ansible playbooks clearer and more flexible. But without a structured approach, they can quickly clutter workflows and cause confusion in team settings. &lt;/p&gt;

&lt;p&gt;By following best practices, we ensure our automation scripts stay clean and effective while still using register variables to dynamically process system data during infrastructure tasks.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Keep variable names descriptive and consistent.&lt;/strong&gt; Using clear, descriptive names for register variables makes it easier for you and your teammates to understand what data each variable holds when reading or maintaining playbooks. Naming a variable &lt;code&gt;disk_usage_output&lt;/code&gt; or &lt;code&gt;nginx_status&lt;/code&gt; is better than using a generic name such as &lt;code&gt;result&lt;/code&gt; or &lt;code&gt;output&lt;/code&gt;. &lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Avoid overuse to maintain playbook readability.&lt;/strong&gt; Only register outputs when the captured data is needed in later steps of the workflow. Registering unused outputs causes confusion and adds noise to task output during execution. A clean playbook is easier to debug, share, and maintain.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Use &lt;code&gt;debug&lt;/code&gt; for troubleshooting during development.&lt;/strong&gt; Displaying the contents of a registered variable shows you its structure and confirms you are referencing the correct attributes in your conditions or loops. Adding temporary &lt;code&gt;debug&lt;/code&gt; tasks during development helps you pinpoint issues and verify data flows without interrupting workflow logic.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Handle failures gracefully with &lt;code&gt;ignore_errors&lt;/code&gt; or conditionals.&lt;/strong&gt; Setting &lt;code&gt;ignore_errors: true&lt;/code&gt; lets the playbook continue even if a check fails, while still capturing the output for conditional handling in the next step. You can then branch on the &lt;code&gt;rc&lt;/code&gt; attribute to decide which actions to take, keeping your playbooks robust and adaptable across different systems.&lt;/li&gt;
&lt;/ol&gt;
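&lt;p&gt;A short sketch tying several of these practices together (a descriptive variable name, graceful failure, and branching on &lt;code&gt;rc&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Validate the nginx configuration without failing the play
  command: nginx -t
  register: nginx_config_check
  ignore_errors: true

- name: Report a broken configuration
  debug:
    msg: "nginx config test failed: {{ nginx_config_check.stderr }}"
  when: nginx_config_check.rc != 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;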

&lt;h2&gt;
  
  
  How Spacelift can help with your Ansible projects
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt;'s vibrant ecosystem and excellent GitOps flow are helpful for managing and orchestrating Ansible. By introducing Spacelift on top of Ansible, you can easily create custom workflows based on pull requests and apply any necessary compliance checks for your organization.&lt;/p&gt;

&lt;p&gt;Another advantage of using Spacelift is that you can manage infrastructure tools like Ansible, Terraform, Pulumi, AWS CloudFormation, and even Kubernetes from the same place and combine their &lt;a href="https://docs.spacelift.io/concepts/stack/" rel="noopener noreferrer"&gt;stacks&lt;/a&gt; to build workflows across tools.&lt;/p&gt;

&lt;p&gt;You can bring your own Docker image and &lt;a href="https://docs.spacelift.io/integrations/docker" rel="noopener noreferrer"&gt;use it as a runner&lt;/a&gt; to speed up deployments that leverage third-party tools. Spacelift's official runner image can be found &lt;a href="https://github.com/spacelift-io/runner-terraform" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our latest Ansible enhancements solve three of the biggest challenges engineers face when they are using Ansible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Having a centralized place in which you can run your playbooks&lt;/li&gt;
&lt;li&gt;  Combining IaC with configuration management to create a single workflow&lt;/li&gt;
&lt;li&gt;  Getting insights into what ran and where&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can provision, configure, govern, and orchestrate your containers with a single workflow, separating the elements into smaller chunks to identify issues more easily.&lt;/p&gt;

&lt;p&gt;If you want to learn more about using Spacelift with Ansible, check our &lt;a href="https://docs.spacelift.io/vendors/ansible/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, read our &lt;a href="https://spacelift.io/blog/introducing-spacelifts-latest-ansible-functionality" rel="noopener noreferrer"&gt;Ansible guide&lt;/a&gt;, or &lt;a href="https://spacelift.io/schedule-demo" rel="noopener noreferrer"&gt;book a demo with one of our engineers&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key points
&lt;/h2&gt;

&lt;p&gt;In this guide, we examined Ansible register variables and how they let us capture task outputs for use in later tasks. We covered how to write the syntax, access variable data effectively, and use register variables to capture command outputs, run conditional statements, and loop through commands in playbooks. This makes it easier to manage system states dynamically during automated workflows.&lt;/p&gt;

&lt;p&gt;Now it's time to implement register variables in our own Ansible projects. By updating playbooks to capture structured outputs and applying these variables for conditional task execution, we can make our automation smarter, more reliable, and easier to follow.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Written by Sumeet Ninawe&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ansible</category>
      <category>devops</category>
    </item>
    <item>
      <title>Docker Image Layers Explained</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Fri, 19 Sep 2025 10:19:08 +0000</pubDate>
      <link>https://dev.to/spacelift/docker-image-layers-what-they-are-how-they-work-2o22</link>
      <guid>https://dev.to/spacelift/docker-image-layers-what-they-are-how-they-work-2o22</guid>
      <description>&lt;p&gt;Docker images aren’t one big filesystem snapshot; they’re a stack of layers. Each layer represents a set of filesystem changes, which makes images efficient: layers can be cached, reused across images, and shared between multiple containers. The flip side is that a small Dockerfile change can invalidate cache and force rebuilding everything after that point.&lt;/p&gt;

&lt;p&gt;In the full guide, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What an image layer is and how layers combine into a final image&lt;/li&gt;
&lt;li&gt;How layer caching works during builds (and what breaks cache)&lt;/li&gt;
&lt;li&gt;Which Dockerfile instructions typically create new layers&lt;/li&gt;
&lt;li&gt;How to inspect an image’s layers and understand what’s taking space&lt;/li&gt;
&lt;li&gt;Practical ways to optimize layering for smaller images and faster builds&lt;/li&gt;
&lt;/ul&gt;
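
&lt;p&gt;To illustrate the caching behavior described above, here is a minimal Dockerfile sketch (the base image, file names, and commands are hypothetical). Copying only the dependency manifests before installing means source-code edits invalidate the cache late in the build, not at the expensive install step:&lt;/p&gt;

```dockerfile
# Hypothetical Node.js image; names are illustrative.
FROM node:20-slim

WORKDIR /app

# Copy only the dependency manifests first, so this layer and the
# install layer below stay cached until the manifests change.
COPY package.json package-lock.json ./
RUN npm ci

# Source changes invalidate the cache only from this point onward.
COPY . .

CMD ["node", "server.js"]
```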

&lt;p&gt;➡️ Read the full article on our blog:&lt;br&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/docker-image-layers" rel="noopener noreferrer"&gt;https://spacelift.io/blog/docker-image-layers&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>GitLab vs GitHub: Key Differences in 2025</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Mon, 15 Sep 2025 11:24:32 +0000</pubDate>
      <link>https://dev.to/spacelift/gitlab-vs-github-key-differences-in-2025-2cm</link>
      <guid>https://dev.to/spacelift/gitlab-vs-github-key-differences-in-2025-2cm</guid>
      <description>&lt;p&gt;GitLab and GitHub are two of the most popular Git-hosting platforms. They let you store Git repositories, collaborate on code, and automate your software delivery process using CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;Although the two platforms look similar initially, they each have unique features ideal for slightly different use cases. It’s important to select the right option for your team so you can efficiently build and scale your projects. The solution you choose will also affect your security and compliance posture.&lt;/p&gt;

&lt;p&gt;The main difference between GitLab and GitHub lies in their approach to DevOps and CI/CD integration. GitLab provides a built-in, fully integrated CI/CD system, making it a complete DevOps platform out of the box. GitHub, while popular for source code hosting and collaboration, relies more on external tools or its separate GitHub Actions for CI/CD functionality.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll examine GitLab’s and GitHub’s headline features. We’ll explain each platform’s similarities and differences and highlight its support for key DevOps workflows. Then, you’ll be ready to make an informed decision about which is the best fit for your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is GitLab?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://about.gitlab.com/" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt; is a Git-based version control system (VCS) that emerged in 2011. The hosted GitLab.com service started out as a beta in 2012. The platform lets you store Git repositories, access them through a web browser, and collaborate on changes using a merge-based workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F04%252Fwhat-is-gitlab.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F04%252Fwhat-is-gitlab.png%26w%3D3840%26q%3D75" alt="what is gitlab" width="1885" height="696"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitLab also strongly emphasizes CI/CD-driven automation. The platform includes a powerful built-in CI/CD implementation that's closely integrated with your repositories. Pipelines are defined in simple YAML files.&lt;/p&gt;

&lt;p&gt;GitLab's Ultimate tier offers many advanced security and compliance features that have helped the platform succeed in global enterprises. The system is now marketed as a complete DevSecOps solution capable of managing all aspects of modern software delivery. Over &lt;a href="https://about.gitlab.com/company" rel="noopener noreferrer"&gt;40 million users&lt;/a&gt; are registered on GitLab.com.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features of GitLab
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Built-in CI/CD - GitLab offers a native continuous integration and deployment system, allowing automated testing, builds, and releases directly within the platform.&lt;/li&gt;
&lt;li&gt;  Complete DevOps lifecycle support - It provides end-to-end tools for planning, coding, testing, deploying, and monitoring - removing the need for multiple separate tools.&lt;/li&gt;
&lt;li&gt;  Auto DevOps - This feature automatically detects project settings and configures pipelines, making it easier to deploy applications with minimal setup.&lt;/li&gt;
&lt;li&gt;  Integrated container registry - GitLab includes a secure, private Docker registry that integrates directly with your CI/CD pipelines, simplifying container management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is GitHub?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; is the largest and best-known Git hosting solution. It launched in 2008 and now has over &lt;a href="https://github.blog/news-insights/company-news/100-million-developers-and-counting" rel="noopener noreferrer"&gt;100 million users&lt;/a&gt;. The system's simplicity and ease of use mean it's a popular choice with developers, particularly for public open-source projects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F04%252Fwhat-is-github.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F04%252Fwhat-is-github.png%26w%3D3840%26q%3D75" alt="what is github" width="1894" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub's pull request workflow makes it easy to collaborate on changes, while the platform also has a strong social community side. GitHub Actions, a built-in CI/CD implementation, debuted in 2018 and has achieved widespread popularity for its modular, composable architecture.&lt;/p&gt;

&lt;p&gt;GitHub is a software development standard used by teams worldwide for all kinds of projects. However, newer alternatives like GitLab have eaten into some of GitHub's market share by prioritizing features for specific use cases, particularly around the DevOps lifecycle and compliance management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features of GitHub
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Collaborative code hosting - GitHub excels in version control and collaboration, with features like pull requests, code reviews, and branch protection for team workflows.&lt;/li&gt;
&lt;li&gt;  GitHub Actions (CI/CD) - &lt;a href="https://spacelift.io/blog/github-actions-tutorial" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; allows users to automate workflows, from testing to deployment, directly within the platform using YAML-based configuration.&lt;/li&gt;
&lt;li&gt;  Extensive integration ecosystem - It supports thousands of integrations and third-party apps, making it highly flexible for diverse development environments.&lt;/li&gt;
&lt;li&gt;  Large developer community and open source support - GitHub is the go-to platform for open source, hosting millions of public repositories and fostering collaboration at scale.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💡 You might also like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/software-development-tools" rel="noopener noreferrer"&gt;Top 27 Software Development Tools &amp;amp; Platforms&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/infrastructure-as-code-with-github-actions" rel="noopener noreferrer"&gt;Should you manage your IaC with GitHub Actions?&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/github-actions-vs-jenkins" rel="noopener noreferrer"&gt;GitHub Actions vs. Jenkins: Popular CI/CD Tools Comparison&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are the differences between GitLab and GitHub? Key features for Git repositories, CI/CD, and DevOps
&lt;/h2&gt;

&lt;p&gt;Both GitLab and GitHub allow you to centrally store your Git repositories and collaborate on them via a web-based interface. However, while they offer similar basic functionality, they also have several distinguishing factors. Let's look at how they approach ten key DevOps features and priorities.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Git repository management
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub both have excellent suites of core Git repository management features. You can browse and edit code within each platform's user-friendly web interface.&lt;/p&gt;

&lt;p&gt;Both tools make it easy to submit, review, and accept requests to merge code, including from repository forks. GitHub calls this process a pull request, whereas GitLab terms it a merge request, but the difference is purely one of naming.&lt;/p&gt;

&lt;p&gt;The two solutions each include an issue tracker that lets you manage upcoming features, improvements, and bug fixes. GitLab has a more powerful but complex implementation, including sub-tasks, epics, swimlanes, and iterations in its paid plans. GitHub's structure is comparatively simpler and less prescriptive, but issues can still have labels, milestones, iterations, and custom fields assigned. Both platforms support a Kanban-style issue board layout.&lt;/p&gt;

&lt;p&gt;Cloud development environments are now available in both GitLab and GitHub. GitLab refers to them as Workspaces, whereas GitHub calls them Codespaces. The experience is similar in each, providing a full web-based IDE that's based on the same basic technology as Visual Studio Code.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. CI/CD features
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub both offer built-in CI/CD features so you can automate your release process. GitLab has included CI/CD since its earliest versions, making it a core part of the platform's historical appeal. GitHub Actions didn't appear until 2018, but is now a mature and popular solution too.&lt;/p&gt;

&lt;p&gt;Arguably, GitHub has led more recent developments in the CI/CD space. GitHub Actions pipelines are quick and easy to write as a sequence of composable steps. You can easily reference and extend prebuilt actions published to the GitHub Marketplace. GitLab is pursuing similar features via its &lt;a href="https://gitlab.com/explore/catalog" rel="noopener noreferrer"&gt;CI/CD Catalog&lt;/a&gt;, but its selection of available components is still comparatively limited.&lt;/p&gt;

&lt;p&gt;Both solutions use YAML to define pipeline syntax. Jobs within a pipeline can execute sequentially or in parallel to optimize performance. Both platforms support self-hosted runners, letting you maintain your own CI/CD infrastructure to reduce build times further or improve security controls.&lt;/p&gt;
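
&lt;p&gt;For a feel of how the two YAML dialects compare, here are minimal pipeline sketches for each platform. Job names, commands, and the action version are illustrative, not taken from either vendor's defaults:&lt;/p&gt;

```yaml
# GitLab CI (.gitlab-ci.yml): jobs in the same stage run in
# parallel; stages run sequentially.
stages: [test, build]

unit-tests:
  stage: test
  script:
    - make test

package:
  stage: build
  script:
    - make build
---
# GitHub Actions (.github/workflows/ci.yml): jobs run in
# parallel unless ordered with `needs`.
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build
```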

&lt;h3&gt;
  
  
  3. Operations management
&lt;/h3&gt;

&lt;p&gt;GitLab includes a suite of features designed for app operators and infrastructure teams. It can directly integrate with your Kubernetes clusters via an in-cluster agent component and Flux CD. You can then view deployed Kubernetes resources on dashboards within GitLab. The platform can also store your Terraform state files, eliminating the need to set up a separate solution.&lt;/p&gt;

&lt;p&gt;These capabilities give GitLab a clear edge over GitHub, which lacks equivalent features. You'll need to use external solutions to manage infrastructure and observe your deployments. &lt;/p&gt;

&lt;p&gt;While this can make it harder to centralize your processes, it also means GitHub is a more focused solution designed to do a few things well. Many of GitLab's ops-oriented features are still relatively basic when compared with using a dedicated platform for each task.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Supported DevOps lifecycle stages
&lt;/h3&gt;

&lt;p&gt;GitHub is best known as a developer-facing tool targeting the build, test, and release stages of the DevOps lifecycle. Its built-in issue tracker can also accommodate planning workflows. However, the platform's lack of operations management features makes it less useful for post-delivery monitoring and analysis tasks.&lt;/p&gt;

&lt;p&gt;On the other hand, GitLab emphasizes compatibility with the entire DevOps lifecycle as one of its key selling points. The GitLab platform is designed to consolidate all &lt;a href="https://spacelift.io/blog/what-is-devsecops" rel="noopener noreferrer"&gt;DevSecOps&lt;/a&gt; work, giving every stakeholder a single shared destination to manage software projects. &lt;/p&gt;

&lt;p&gt;You can plan requirements, develop and deploy code, and run infrastructure workflows within the system, then use built-in observability and value stream analytics dashboards to drive continual improvements.&lt;/p&gt;

&lt;p&gt;As we've already touched on above, GitLab's expanded scope does mean there are significant variations in the &lt;em&gt;depth&lt;/em&gt; of its features. It's possible that using separate tools for individual DevOps stages could be the more versatile option in the long term, but GitLab gives you everything you need to get started in one place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2024%252F09%252Fnovibet-seeklogo-1.png%26w%3D384%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2024%252F09%252Fnovibet-seeklogo-1.png%26w%3D384%26q%3D75" alt="Novibet logo in white" width="164" height="38"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Novibet is in the cloud, and everything is provisioned through Terraform, which the team previously managed using GitHub Actions. However, as the organization scaled, managing Novibet's IaC through a generic CI/CD platform began to stretch the capabilities of both the tool and the DevOps team. The Spacelift platform has enabled the team to deploy faster and with greater control as they move toward a platform engineering mindset and enable autonomy with guardrails.&lt;/p&gt;

&lt;p&gt;Spacelift customer case study&lt;/p&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/customers/novibet" rel="noopener noreferrer"&gt;Read the full story&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Self-hosting and licensing/open-source options
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub have significantly different licensing models. Both operate generous free tiers, but only GitLab is open-source. Despite playing a key role in enabling the modern open-source era, GitHub's own code is not publicly available, and the platform's use is always subject to its commercial terms of service.&lt;/p&gt;

&lt;p&gt;Nonetheless, GitLab's form of open-source is actually based on an open-core model. GitLab Community Edition (CE) is built from open-source repositories and contains only free GitLab features, while GitLab Enterprise Edition (EE) includes the closed-source enterprise capabilities available in the platform's paid tiers.&lt;/p&gt;

&lt;p&gt;Any GitLab edition can be self-hosted on your own hardware, permitting absolute control over your data. As an option for large organizations, GitLab also offers dedicated cloud instances, essentially a private managed cloud installation of its self-hosted package.&lt;/p&gt;

&lt;p&gt;GitHub only supports self-hosting via its enterprise-oriented &lt;a href="https://docs.github.com/en/enterprise-server@3.15/admin/overview/about-github-enterprise-server" rel="noopener noreferrer"&gt;Enterprise Server&lt;/a&gt; plan. It's not possible to host your own GitHub instance for free.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Package registries and releases
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub both include built-in package registries that you can use to publish and distribute your software artifacts. GitLab supports the most popular package types, including Composer, Maven, npm, NuGet, PyPI, Ruby gems, Terraform, and more, whereas GitHub is limited to Gradle, Maven, npm, NuGet, and Ruby gems. Container images and other OCI assets can be pushed to either service.&lt;/p&gt;

&lt;p&gt;The platforms can also host your project's releases. GitLab and GitHub both allow you to publish release notes and upload assets such as compiled binaries. Users can then access the resources directly from your project's page. You can use each platform's CI/CD system to automate the release process. There are no significant differences between the services in this respect.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Scalability
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub both have proven scalability. Their public platforms are used for mission-critical software by millions of users around the world. Whichever one you pick, you shouldn't need to worry whether you'll be able to scale up your projects.&lt;/p&gt;

&lt;p&gt;Scalability is more complicated for self-hosted GitLab. The platform is relatively demanding, with at least 8 vCPU cores and 16 GB of RAM recommended for up to 1,000 users. However, smaller teams can typically deploy GitLab with fewer resources; there are even official packages and an &lt;a href="https://docs.gitlab.com/omnibus/settings/rpi" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt; for the Raspberry Pi.&lt;/p&gt;

&lt;p&gt;At the other extreme, GitLab also provides a &lt;a href="https://docs.gitlab.com/administration/reference_architectures/50k_users" rel="noopener noreferrer"&gt;reference architecture&lt;/a&gt; and documentation for scaling up to 50,000 users.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Security and compliance
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub both include features to secure your development process. With GitHub you get automated vulnerability scans, secret detection scans, and outdated dependency alerts. These allow you to find and resolve security issues in your repositories efficiently.&lt;/p&gt;

&lt;p&gt;GitLab offers similar capabilities (but notably lacks a direct equivalent to GitHub's Dependabot dependency update alerts), as well as built-in static and dynamic application security testing scans. The platform also has built-in components for API fuzz testing and operational container scanning within Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;GitHub Enterprise plans provide critical compliance controls such as branch protection rules, audit logs, and RBAC-based user permissions. &lt;/p&gt;

&lt;p&gt;These capabilities are also available in GitLab. You can also define custom project compliance frameworks that you can monitor and manage at the group level, ensuring continual adherence to your regulatory and internal standards. Combined with the ability to self-host GitLab, this means it's often the better choice for large organizations with precise compliance requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Community and support
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub are both mature platforms with active communities. Nonetheless, GitHub is the clear leader in adoption terms, with its 100 million users more than double GitLab's 40 million. Although open-source projects, including &lt;a href="https://about.gitlab.com/blog/2018/05/31/welcome-gnome-to-gitlab" rel="noopener noreferrer"&gt;GNOME&lt;/a&gt;, &lt;a href="https://gitlab.com/inkscape/inkscape" rel="noopener noreferrer"&gt;Inkscape&lt;/a&gt;, and &lt;a href="https://gitlab.com/fdroid" rel="noopener noreferrer"&gt;F-Droid&lt;/a&gt;, have adopted GitLab, GitHub is still the default destination for most developers to publish and collaborate on open-source work. It may be easier for you to attract users and contributors when using GitHub instead of GitLab.&lt;/p&gt;

&lt;p&gt;GitLab has achieved significant traction in the enterprise space. Organizations including Airbus, NVIDIA, and Siemens &lt;a href="https://about.gitlab.com/customers" rel="noopener noreferrer"&gt;use GitLab&lt;/a&gt;, with many also choosing to self-host their own environments. GitHub supports global companies, too, &lt;a href="https://github.com/customer-stories" rel="noopener noreferrer"&gt;claiming&lt;/a&gt; the likes of American Airlines, Spotify, and Stripe. &lt;/p&gt;

&lt;p&gt;So, your decision should be based on which platform offers the best implementations of the features you use most. GitHub and GitLab both offer premium support options on their enterprise plans.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. AI capabilities
&lt;/h3&gt;

&lt;p&gt;GitLab and GitHub have both been rapidly building AI features in recent years. GitLab calls its implementation GitLab Duo, while GitHub has GitHub Copilot.&lt;/p&gt;

&lt;p&gt;The two platforms have similar code generation capabilities. You can install GitLab Duo or GitHub Copilot as an IDE extension, and then generate new code snippets and test cases as you work. Chat features let you request explanations of unclear code or explore different solutions to a problem. &lt;/p&gt;

&lt;p&gt;The tools can also produce summaries of your merge requests (GitLab) or pull requests (GitHub), potentially making it easier to interpret complex changes.&lt;/p&gt;

&lt;p&gt;GitLab has also added AI to its enterprise security and compliance capabilities, going far beyond what GitHub offers. For instance, GitLab Duo can analyze detected vulnerabilities and explain their root cause or forecast the trajectory of your software development throughput. You can use your own self-hosted AI models with GitLab Duo if you require behavioral customizations or complete sovereignty over your AI data.&lt;/p&gt;

&lt;p&gt;Read more: &lt;a href="https://spacelift.io/blog/github-copilot-vs-chatgpt" rel="noopener noreferrer"&gt;GitHub Copilot vs. ChatGPT: Developer AI Tools Comparison&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab vs GitHub: Which should I use?
&lt;/h2&gt;

&lt;p&gt;GitHub is widely used for code collaboration and community-driven development, while GitLab offers a fully integrated DevOps platform with built-in CI/CD.&lt;/p&gt;

&lt;p&gt;GitLab has a larger scope than GitHub, including integrated infrastructure management, operations, and compliance capabilities. This makes it a good choice for enterprise scenarios demanding control, customization, and consolidation of processes.&lt;/p&gt;

&lt;p&gt;Conversely, GitHub's greatest strengths are its ease of use, convenience, and community. It's universally familiar and broadly supported by third-party DevOps solutions. These qualities mean it's a natural fit for open-source projects and smaller teams that don't need advanced project management or compliance features.&lt;/p&gt;

&lt;p&gt;If your goal is seamless code collaboration with customizable workflows, GitHub is likely the better fit. If you need an all-in-one platform that supports the full software development lifecycle, GitLab is a strong choice.&lt;/p&gt;

&lt;p&gt;Choose based on whether your priority is extensibility and community (GitHub) or built-in DevOps and automation (GitLab).&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I use both GitHub and GitLab?
&lt;/h3&gt;

&lt;p&gt;Yes, you can use both GitHub and GitLab in parallel, depending on your project needs. Many teams adopt a multi-platform approach to leverage specific features, for example, using GitHub for community collaboration and GitLab for CI/CD pipelines or internal repositories.&lt;/p&gt;

&lt;p&gt;Both platforms support Git-based version control, so it's feasible to sync or mirror repositories between them.&lt;/p&gt;
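
&lt;p&gt;As a sketch of that mirroring workflow, the standard Git CLI can copy all refs from one platform to the other. The repository URLs below are placeholders; both platforms also offer built-in mirroring features you may prefer for continuous sync:&lt;/p&gt;

```shell
# Clone a bare mirror of the GitHub repository (all branches and tags).
git clone --mirror https://github.com/example/project.git
cd project.git

# Add the GitLab copy as a second remote and push every ref to it.
git remote add gitlab https://gitlab.com/example/project.git
git push --mirror gitlab
```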

&lt;h3&gt;
  
  
  Why do people prefer GitLab?
&lt;/h3&gt;

&lt;p&gt;People prefer GitLab because it offers a fully integrated DevOps platform that combines source code management, CI/CD, and security in a single application. This reduces toolchain complexity and improves team collaboration. &lt;/p&gt;

&lt;p&gt;GitLab's automation features streamline the software development lifecycle, from planning to deployment. Its support for self-hosted and cloud deployments also gives teams flexibility and control. This all-in-one approach makes GitLab a strong choice for organizations seeking efficiency and scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Git vs. GitHub vs. GitLab
&lt;/h2&gt;

&lt;p&gt;Git is a version control system, while GitHub and GitLab are platforms that host Git repositories and add collaboration features. Git allows developers to track changes, branch, and merge code locally or across distributed teams. &lt;/p&gt;

&lt;p&gt;While Git is the foundation, GitHub and GitLab streamline team collaboration and automate parts of the development lifecycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Spacelift improve developer velocity?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt; is an infrastructure orchestration platform that &lt;a href="https://spacelift.io/blog/developer-velocity" rel="noopener noreferrer"&gt;improves developer velocity&lt;/a&gt; by offering a powerful policy engine based on OPA, self-service infrastructure, and the ability to build multi-tool workflows with dependencies and output sharing. Spacelift has its own Terraform/OpenTofu provider and its own Kubernetes operator, which makes it ideal to pair with an AI-powered coding assistant.&lt;/p&gt;

&lt;p&gt;You can easily have AI-powered coding assistants generate Spacelift Terraform/OpenTofu/Kubernetes code by showing them how you would like the code generated (for example, using &lt;code&gt;for_each&lt;/code&gt; on your resources and &lt;code&gt;map(object)&lt;/code&gt; variables when you are using Terraform/OpenTofu).&lt;/p&gt;
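
&lt;p&gt;For reference, the for_each-with-map-of-objects pattern mentioned above looks roughly like this. The variable shape and resource names are illustrative, not a prescribed Spacelift convention:&lt;/p&gt;

```hcl
# Hypothetical sketch: drive multiple instances from one map variable.
variable "instances" {
  type = map(object({
    ami           = string
    instance_type = string
  }))
}

resource "aws_instance" "this" {
  for_each = var.instances

  ami           = each.value.ami
  instance_type = each.value.instance_type

  tags = {
    Name = each.key
  }
}
```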

&lt;p&gt;With Spacelift, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://docs.spacelift.io/concepts/policy" rel="noopener noreferrer"&gt;Policies&lt;/a&gt; to control what kind of resources engineers can create, what parameters they can have, how many approvals you need for a run, what kind of task you execute, what happens when a pull request is open, and where to send your notifications&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.spacelift.io/concepts/stack/stack-dependencies.html" rel="noopener noreferrer"&gt;Stack dependencies&lt;/a&gt; to build multi-infrastructure automation workflows with dependencies, having the ability to build a workflow that, for example, generates your EC2 instances using Terraform and combines it with Ansible to configure them&lt;/li&gt;
&lt;li&gt;  Self-service infrastructure via &lt;a href="https://docs.spacelift.io/concepts/blueprint.html" rel="noopener noreferrer"&gt;Blueprints&lt;/a&gt;, enabling your developers to do what matters - developing application code while not sacrificing control&lt;/li&gt;
&lt;li&gt;  Creature comforts such as &lt;a href="https://docs.spacelift.io/concepts/configuration/context.html" rel="noopener noreferrer"&gt;contexts&lt;/a&gt; (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.spacelift.io/concepts/stack/drift-detection.html" rel="noopener noreferrer"&gt;Drift detection&lt;/a&gt; and optional remediation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to learn more about Spacelift, &lt;a href="https://spacelift.io/free-trial" rel="noopener noreferrer"&gt;create a free account today&lt;/a&gt; or &lt;a href="https://spacelift.io/schedule-demo" rel="noopener noreferrer"&gt;book a demo with one of our engineers&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key points
&lt;/h2&gt;

&lt;p&gt;To summarize, the main difference between GitLab and GitHub lies in their approach to DevOps integration. GitLab offers a built-in, end-to-end DevOps lifecycle, including CI/CD, security scanning, and infrastructure automation in a single application. &lt;/p&gt;

&lt;p&gt;GitHub, while widely adopted for code collaboration, relies more on third-party integrations for full DevOps workflows. GitLab is often preferred when teams want a tightly integrated toolchain out of the box. This distinction is useful when evaluating platforms for streamlined AI development and deployment pipelines.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Written by James Walker&lt;/em&gt;&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>github</category>
    </item>
    <item>
      <title>When Artifact Management Meets Infrastructure as Code: How to Use Cloudsmith and Spacelift</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Fri, 22 Aug 2025 05:22:48 +0000</pubDate>
      <link>https://dev.to/spacelift/when-artifact-management-meets-infrastructure-as-code-how-to-use-cloudsmith-and-spacelift-1e59</link>
      <guid>https://dev.to/spacelift/when-artifact-management-meets-infrastructure-as-code-how-to-use-cloudsmith-and-spacelift-1e59</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a guest author article written by Ian Duffy, Sr SRE at Cloudsmith, a Spacelift customer.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Managing package repositories at scale requires the right tools to prevent chaos. As providers of a cloud-native artifact management platform, we at Cloudsmith faced the same scaling challenges as our customers.&lt;/p&gt;

&lt;p&gt;This post shows how we tamed repository management complexity by treating our infrastructure as code, using Terraform (or OpenTofu) and Spacelift to automate our Cloudsmith workspace configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Cloudsmith?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cloudsmith.com/" rel="noopener noreferrer"&gt;Cloudsmith&lt;/a&gt; is a fully managed cloud-native artifact repository designed for controlling, securing, and distributing everything that flows through your software supply chain. &lt;/p&gt;

&lt;p&gt;Founded in 2016 and trusted by over 1,000 enterprise customers, including American Airlines, The Trade Desk, and Shopify, Cloudsmith protects the software supply chain by providing secure, scalable artifact repositories for software teams worldwide.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Spacelift?
&lt;/h2&gt;

&lt;p&gt;Spacelift is an IaC orchestration platform that automates infrastructure management. It runs tools including Terraform, Pulumi, and Ansible whenever your IaC files change. There's no need to build complex CI/CD pipelines manually or bring your own infrastructure state management solutions.&lt;/p&gt;

&lt;p&gt;Spacelift is designed to unify all infrastructure provisioning, configuration, and governance tasks. Policy-as-code support lets you enforce security and compliance requirements, while flexible multi-tenancy and self-service features empower your whole team to interact with infrastructure. You can centralize all your infrastructure processes to improve reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding artifact management
&lt;/h2&gt;

&lt;p&gt;Many people think of artifact management tools as simple storage spaces, like hangars or warehouses, where you park your code. But that's what basic repositories and registries do. A modern artifact management platform like Cloudsmith serves a much more critical role in your software development ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Artifact management analogy: Airports and Cloudsmith
&lt;/h3&gt;

&lt;p&gt;Think of software development as a bustling airport with countless moving parts and constant activity. Cloudsmith functions as both your air traffic control tower and your security checkpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Like airport security: Cloudsmith acts as a gatekeeper, enforcing strict access controls and screening for threats that could compromise software stability or security. It scans every package for vulnerabilities, validates authenticity, and ensures only authorized artifacts reach your applications.&lt;/li&gt;
&lt;li&gt;  Like air traffic control: Cloudsmith oversees the flow of all your code artifacts, ensuring assets arrive at their destinations safely and on time. It monitors the journey, coordinates movements between environments, and prevents collisions or bottlenecks that disrupt operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just as air traffic control and airport security ensure safe, efficient air travel, Cloudsmith safeguards the integrity and reliability of your software supply chain, providing smooth operations for your teams and safe software delivery to your customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the challenges with artifact management?
&lt;/h2&gt;

&lt;p&gt;We find three major challenges with traditional artifact management:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The legacy platform performance crisis&lt;/li&gt;
&lt;li&gt; The multi-format package complexity&lt;/li&gt;
&lt;li&gt; The software supply chain security risks&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  1. The legacy platform performance crisis
&lt;/h3&gt;

&lt;p&gt;Many companies start with basic package management tools that work fine for small teams. As they scale, these systems become unreliable: development teams experience frequent outages that cut off access to critical packages, disrupting CI/CD pipelines and causing build failures.&lt;/p&gt;

&lt;p&gt;When your deployment schedule depends on artifact availability, system downtime directly impacts your ability to deliver features and fixes to customers. Self-hosted and legacy solutions require significant maintenance overhead while providing poor reliability, meaning engineering teams spend more time maintaining infrastructure than building products.&lt;/p&gt;

&lt;p&gt;Cloudsmith's cloud-native platform eliminates the burden of infrastructure maintenance while providing global CDN distribution for optimal performance, allowing teams to focus on innovation instead of infrastructure management.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The multi-format package complexity
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252Fmulti-format-package-complexity.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252Fmulti-format-package-complexity.png%26w%3D3840%26q%3D75" width="2000" height="1125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Modern organizations typically use diverse technology stacks with teams working on Docker containers, npm packages, Python libraries, Maven artifacts, Helm charts, and more.&lt;/p&gt;

&lt;p&gt;Managing a separate registry for each format creates operational complexity:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Inconsistent access patterns across different registry systems&lt;/li&gt;
&lt;li&gt;  Fragmented security policies, because each format requires a different approach&lt;/li&gt;
&lt;li&gt;  Geographic performance bottlenecks for distributed teams&lt;/li&gt;
&lt;li&gt;  The integration overhead of maintaining connections to multiple external registries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cloudsmith's universal platform supports 30+ package formats with consistent APIs, unified access controls, and global distribution, giving engineering teams a single source of truth for all software assets regardless of format or location.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The software supply chain security risks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252Fsoftware-supply-chain-security-risks.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252Fsoftware-supply-chain-security-risks.png%26w%3D3840%26q%3D75" alt="software supply chain security risks" width="1876" height="1250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Companies need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Automatically scan artifacts for known security vulnerabilities before they reach production&lt;/li&gt;
&lt;li&gt;  Maintain a complete audit trail of where artifacts came from and who has accessed them&lt;/li&gt;
&lt;li&gt;  Implement fine-grained permissions so only authorized users can publish or consume critical packages&lt;/li&gt;
&lt;li&gt;  Prevent malicious packages from entering their software supply chain&lt;/li&gt;
&lt;li&gt;  Meet regulatory standards like SOC 2, GDPR, and industry-specific security frameworks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Many organizations struggle with fragmented security approaches across different package types, inconsistent vulnerability scanning, poor audit trails, and difficulty maintaining compliance across multiple registries and environments.&lt;/p&gt;

&lt;p&gt;Cloudsmith addresses these security challenges with built-in vulnerability scanning, comprehensive audit logging, unified access controls across all package formats, and compliance-ready features that help organizations meet security and regulatory requirements without additional tooling overhead.&lt;/p&gt;

&lt;p&gt;💡 You might also like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/compliance-cost-of-drift" rel="noopener noreferrer"&gt;The Compliance Cost of Drift: Why Auditors Don't Trust Your Terraform&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/infrastructure-problems-that-spacelift-solves" rel="noopener noreferrer"&gt;Common Infrastructure Challenges and How to Solve Them&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/devops-assessment" rel="noopener noreferrer"&gt;DevOps Assessment Guide: Measuring Automation &amp;amp; Maturity&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Scaling artifact management with IaC orchestration
&lt;/h2&gt;

&lt;p&gt;In this section, we review how to scale artifact management with Cloudsmith and Spacelift, using IaC orchestration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of using Cloudsmith and Spacelift together
&lt;/h3&gt;

&lt;p&gt;Cloudsmith manages your artifacts, while Spacelift manages your IaC. Together, they create a robust system for organizations that want to move beyond manual configuration. &lt;/p&gt;

&lt;p&gt;Benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Full change history: Every change to your Cloudsmith setup is tracked in Git, giving you complete visibility and traceability.&lt;/li&gt;
&lt;li&gt;  Consistent environments: Use code to create identical development and production environments, reducing bugs caused by configuration drift.&lt;/li&gt;
&lt;li&gt;  Better collaboration: Teams can collaborate more effectively through code reviews and pull requests before changes are applied.&lt;/li&gt;
&lt;li&gt;  Enhanced security: Enforce policies and guardrails through code, reducing the risk of unauthorized or risky changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tutorial: Configuring Cloudsmith with Spacelift
&lt;/h3&gt;

&lt;p&gt;In this tutorial, you will learn how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Configure Cloudsmith as code: Define repositories, teams, and access controls using Terraform&lt;/li&gt;
&lt;li&gt;  Implement secure authentication: Use OpenID Connect instead of storing API keys&lt;/li&gt;
&lt;li&gt;  Automate with Spacelift: Deploy changes automatically through Git workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows you to build a foundation that eliminates manual configuration, provides complete audit trails, and ensures consistency across environments.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Create your Terraform configuration
&lt;/h4&gt;

&lt;p&gt;First, let's set up the Cloudsmith Terraform provider. Create a new file called &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    cloudsmith = {
      source  = "cloudsmith-io/cloudsmith"
      version = "0.0.62"
    }
  }
}

variable "api_key" {
  type      = string
  sensitive = true
}

provider "cloudsmith" {
  api_key  = var.api_key
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 2: Define your repository
&lt;/h4&gt;

&lt;p&gt;Now let's create a repository. Add this to your &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Data source to get organization information
data "cloudsmith_organization" "example_org" {
  slug = "example-org" # Replace with your organization slug
}

# Create a Cloudsmith repository
resource "cloudsmith_repository" "example_repo" {
  name        = "Example Repository"
  namespace   = data.cloudsmith_organization.example_org.slug_perm
  description = "A repository created with Terraform"
  slug        = "example-repository" # Optional: URL-friendly identifier

  # Optional: Repository type (defaults to Private if not specified)
  # repository_type = "Public"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 3: Configure team access
&lt;/h4&gt;

&lt;p&gt;Let's add some repository permissions. Add the following to your &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get current user for self-admin privileges
data "cloudsmith_user_self" "this" {}

# Get specific org member details
data "cloudsmith_org_member_details" "member1" {
  organization = data.cloudsmith_organization.example_org.slug
  member       = "user1@example.com" # Replace with actual email
}

data "cloudsmith_org_member_details" "member2" {
  organization = data.cloudsmith_organization.example_org.slug
  member       = "user2@example.com" # Replace with actual email
}

# Create team
resource "cloudsmith_team" "developers" {
  organization = data.cloudsmith_organization.example_org.slug
  name         = "Developers"
  description  = "Development team"
}

# Add members to team using org member details
resource "cloudsmith_manage_team" "dev_team_members" {
  organization = data.cloudsmith_organization.example_org.slug
  team_name    = cloudsmith_team.developers.slug

  members {
    user = data.cloudsmith_org_member_details.member1.user_id
    role = "Manager"
  }

  members {
    user = data.cloudsmith_org_member_details.member2.user_id
    role = "Member"
  }
}

# Assign repository privileges
resource "cloudsmith_repository_privileges" "example_repo_privileges" {
  organization = data.cloudsmith_organization.example_org.slug
  repository   = cloudsmith_repository.example_repo.slug

  # Self-admin for Terraform service account
  service {
    privilege = "Admin"
    slug      = data.cloudsmith_user_self.this.slug
  }

  # Team access
  team {
    privilege = "Write"
    slug      = cloudsmith_team.developers.slug
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration creates a team, adds members, and grants them write access to your repository. The Terraform service account keeps admin privileges so it can manage everything.&lt;/p&gt;

&lt;p&gt;The contents of &lt;code&gt;main.tf&lt;/code&gt; should be stored in a GitHub repository. Spacelift will pull the Terraform code from this repository and run it when changes are made.&lt;/p&gt;
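&lt;p&gt;Before pushing, you can sanity-check the configuration locally. This is an optional sketch that assumes Terraform is installed and that you export a personal Cloudsmith API key (the value below is a placeholder), which Terraform maps to &lt;code&gt;var.api_key&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder key; Terraform reads TF_VAR_api_key as var.api_key
export TF_VAR_api_key="your-cloudsmith-api-key"

terraform init      # downloads the cloudsmith-io/cloudsmith provider
terraform validate  # checks the configuration for syntax errors
terraform plan      # previews the repository, team, and privilege changes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;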

&lt;h4&gt;
  
  
  Step 4: Set up Spacelift with OpenID Connect
&lt;/h4&gt;

&lt;p&gt;Instead of using long-lived API keys, we'll use OpenID Connect to let Spacelift authenticate with Cloudsmith through a trust relationship.&lt;/p&gt;

&lt;p&gt;First, set up OIDC in your Cloudsmith workspace:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Navigate to your workspace settings&lt;/li&gt;
&lt;li&gt; Go to "Authentication" → "OpenID Connect"&lt;/li&gt;
&lt;li&gt; Create a new OIDC provider for Spacelift&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your configuration should look similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252FCloudsmith-workspace-config.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252FCloudsmith-workspace-config.png%26w%3D3840%26q%3D75" alt="Cloudsmith workspace config" width="1082" height="1572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create the authentication script with a file called &lt;code&gt;get-cloudsmith-token.sh.tpl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -euo pipefail

export TF_VAR_api_key=$(curl -sS --fail -X POST \
  -H "Content-Type: application/json" \
  -d '{"oidc_token": "'"$SPACELIFT_OIDC_TOKEN"'", "service_slug": "${service_slug}"}' \
  "${api_host}/openid/${org_name}/" | jq -r ".token")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script takes the OIDC token Spacelift provides and exchanges it for a temporary Cloudsmith API token. No long-lived credentials needed!&lt;/p&gt;
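&lt;p&gt;The &lt;code&gt;jq -r ".token"&lt;/code&gt; at the end of the pipeline is what isolates the token. As a minimal illustration with a mocked response body (the real response shape is assumed from the script above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mocked token-exchange response; the real one comes from the API call
response='{"token": "temporary-api-token"}'

# -r prints the raw string, without surrounding JSON quotes
echo "$response" | jq -r ".token"
# temporary-api-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;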

&lt;p&gt;Now let's bring it all together by creating a Spacelift stack. This stack will run when changes are pushed to your repository containing &lt;code&gt;main.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "org_name" {
  description = "Organisation name for Cloudsmith"
  type        = string
  default     = "example-cloudsmith-org"  # Replace with your org name
}

variable "service_slug" {
  description = "Service slug for OIDC authentication"
  type        = string
  default     = "example-cloudsmith-service-slug"  # Replace with your service account slug
}

variable "api_host" {
  description = "Cloudsmith API host"
  type        = string
  default     = "https://api.cloudsmith.io"
}

terraform {
  required_providers {
    spacelift = {
      source  = "spacelift-io/spacelift"
      version = "1.26.0"
    }
  }
}

provider "spacelift" {}

# Create Spacelift stack
resource "spacelift_stack" "this" {
  name         = "cloudsmith"
  repository   = "cloudsmith"  # GitHub repository containing the Terraform code that defines your Cloudsmith workspace
  branch       = "master"
}

# Create authentication context
resource "spacelift_context" "cloudsmith_auth" {
  name        = "cloudsmith-authentication"
  description = "Context for Cloudsmith API authentication using OIDC tokens"

  # Run the authentication script before Terraform initializes
  before_init = [
    "source /mnt/workspace/get-cloudsmith-token.sh"
  ]
}

# Mount authentication script
resource "spacelift_mounted_file" "get_cloudsmith_token" {
  context_id    = spacelift_context.cloudsmith_auth.id
  relative_path = "get-cloudsmith-token.sh"
  content = base64encode(templatefile("${path.module}/get-cloudsmith-token.sh.tpl", {
    org_name     = var.org_name
    service_slug = var.service_slug
    api_host     = var.api_host
  }))
  write_only = false
}

# Attach context to stack
resource "spacelift_context_attachment" "cloudsmith_auth_attachment" {
  context_id = spacelift_context.cloudsmith_auth.id
  stack_id   = spacelift_stack.this.id
  priority   = 100
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 5: Publish your changes
&lt;/h4&gt;

&lt;p&gt;When you push changes to your repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spacelift detects a git push and starts a run.&lt;/li&gt;
&lt;li&gt;Before Terraform initializes, Spacelift runs your authentication script:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252FSpacelift-runs-your-authentication-script.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252FSpacelift-runs-your-authentication-script.png%26w%3D3840%26q%3D75" width="1604" height="708"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The script exchanges Spacelift's OIDC token for a Cloudsmith API token:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252FSpacelift-OIDC-token-exchanged-for-a-Cloudsmith-API-token.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252FSpacelift-OIDC-token-exchanged-for-a-Cloudsmith-API-token.png%26w%3D3840%26q%3D75" width="1694" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform uses this token to apply the changes to your Cloudsmith workspace:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252Fapply-the-changes-to-your-Cloudsmith-workspace.gif%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F08%252Fapply-the-changes-to-your-Cloudsmith-workspace.gif%26w%3D3840%26q%3D75" width="720" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key points
&lt;/h2&gt;

&lt;p&gt;Combining Cloudsmith's artifact management with Spacelift's IaC automation empowers engineering teams to scale reliably, improve security, and reduce manual overhead.&lt;/p&gt;

&lt;p&gt;Want to improve your package management? Cloudsmith, Terraform, and Spacelift together offer fast deployment, consistent environments, strong security, and comprehensive governance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloudsmith.com/book-a-demo" rel="noopener noreferrer"&gt;Get started with Cloudsmith&lt;/a&gt; today. If you want to learn more about Spacelift, create a &lt;a href="https://spacelift.io/free-trial" rel="noopener noreferrer"&gt;free account&lt;/a&gt; or &lt;a href="https://spacelift.io/schedule-demo" rel="noopener noreferrer"&gt;book a demo with one of our engineers&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>GitHub Actions Workflows: How to Create and Manage</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Mon, 28 Jul 2025 08:21:03 +0000</pubDate>
      <link>https://dev.to/spacelift/github-actions-workflows-how-to-create-and-manage-1oj2</link>
      <guid>https://dev.to/spacelift/github-actions-workflows-how-to-create-and-manage-1oj2</guid>
      <description>&lt;p&gt;GitHub Actions is the continuous integration and delivery (CI/CD) service included with GitHub projects. It allows you to automate the process of building, testing, and deploying your software using scripted pipelines called workflows. GitHub triggers your workflows when events such as commits, merges, and code reviews occur.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll discuss the key features available in GitHub Actions workflows. We’ll build a simple example workflow and then show how you can use GitHub Actions to manage IaC deployments with Spacelift.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are GitHub Actions workflows?
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;GitHub Actions workflows&lt;/em&gt; are automated processes defined in YAML files that run on specified events in a GitHub repository, such as code pushes, pull requests, or scheduled times. They let you automate tasks like testing, building, or deploying code directly from your repo using a customizable series of jobs and steps. Each workflow runs in a GitHub-hosted or self-hosted runner and supports complex logic, matrix builds, environment secrets, and reusable actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; is GitHub's CI/CD system. Within the platform, &lt;a href="https://docs.github.com/en/actions/writing-workflows/about-workflows" rel="noopener noreferrer"&gt;workflows&lt;/a&gt; are the top-level components that define your CI/CD configurations. A workflow in GitHub Actions is equivalent to a pipeline in other CI/CD systems.&lt;/p&gt;

&lt;p&gt;A GitHub Actions workflow groups a collection of jobs into a single automated process. When a workflow is triggered, GitHub runs all of the workflow's jobs in parallel. &lt;/p&gt;

&lt;p&gt;You can also configure jobs to run sequentially by specifying job dependencies. Jobs with dependencies won't start until those dependencies have been completed.&lt;/p&gt;
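&lt;p&gt;Dependencies are declared with the &lt;code&gt;needs&lt;/code&gt; keyword. In this sketch (job names and commands are illustrative), the &lt;code&gt;deploy&lt;/code&gt; job waits for &lt;code&gt;build&lt;/code&gt; to succeed before starting:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: "echo 'Building code...'"
  deploy:
    runs-on: ubuntu-latest
    needs: build  # runs only after the build job completes successfully
    steps:
      - run: "echo 'Deploying code...'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;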

&lt;p&gt;GitHub Actions workflows are designed for modularity. By creating composite actions and reusable workflows, you can&lt;a href="https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-composite-action" rel="noopener noreferrer"&gt; share sections&lt;/a&gt; of configuration &lt;a href="https://docs.github.com/en/actions/sharing-automations/reusing-workflows" rel="noopener noreferrer"&gt;across multiple workflows&lt;/a&gt;.&lt;/p&gt;
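&lt;p&gt;A reusable workflow is one that declares the &lt;code&gt;workflow_call&lt;/code&gt; trigger; another workflow can then invoke it from a job with &lt;code&gt;uses&lt;/code&gt;. The file name in this sketch is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Caller workflow: delegates the whole job to a reusable workflow file
jobs:
  call-shared-build:
    uses: ./.github/workflows/reusable-build.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;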

&lt;p&gt;The &lt;a href="https://github.com/marketplace?type=actions" rel="noopener noreferrer"&gt;GitHub Actions Marketplace&lt;/a&gt; also contains a huge selection of prebuilt actions you can use in your workflows.&lt;/p&gt;

&lt;p&gt;Workflows only run when a trigger event occurs. Over 30 &lt;a href="https://docs.github.com/en/actions/reference/events-that-trigger-workflows" rel="noopener noreferrer"&gt;basic trigger types&lt;/a&gt; are available, covering all key GitHub events. For instance, you can run workflows when code is pushed, after a pull request is merged, on a schedule, or even when new comments are added to a discussion in your repository. This flexibility means you can use GitHub Actions workflows to automate practically any process within your &lt;a href="https://spacelift.io/blog/devops-lifecycle" rel="noopener noreferrer"&gt;DevOps lifecycle&lt;/a&gt;.&lt;/p&gt;
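&lt;p&gt;A scheduled trigger, for example, uses standard cron syntax (the time below is illustrative; GitHub evaluates cron expressions in UTC):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  schedule:
    # Run every day at 06:00 UTC
    - cron: "0 6 * * *"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;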

&lt;h2&gt;
  
  
  Tutorial: Building a GitHub Actions workflow
&lt;/h2&gt;

&lt;p&gt;Let's examine how to build a basic GitHub Actions workflow. This simple tutorial will help you understand the concepts outlined above. &lt;/p&gt;

&lt;p&gt;To follow along, you should first create a new repository in your GitHub account. You can also find this article's sample code &lt;a href="https://github.com/jamesheronwalker/spacelift-github-actions-workflow-demo" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The steps to building your first GitHub Actions workflow are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a GitHub Actions workflow file&lt;/li&gt;
&lt;li&gt; Define workflow triggers&lt;/li&gt;
&lt;li&gt; Configure workflow jobs&lt;/li&gt;
&lt;li&gt; Test the complete example&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 1. Create a GitHub Actions workflow file
&lt;/h3&gt;

&lt;p&gt;GitHub Actions workflows are configured using YAML files within your project's &lt;code&gt;.github/workflows&lt;/code&gt; directory. Each workflow needs its own file in the directory. We'll save our sample workflow to &lt;code&gt;.github/workflows/demo-workflow.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once you've created your config file, you can begin adding YAML properties to define your workflow's behavior. Workflows depend on two key properties: &lt;code&gt;on&lt;/code&gt; and &lt;code&gt;jobs&lt;/code&gt;. These properties function as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;on&lt;/code&gt;: Defines the triggers that will run your workflow, such as commits and push events.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;jobs&lt;/code&gt;: Configures the jobs that your workflow will execute.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We'll set these options up in the next two sections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2. Define workflow triggers
&lt;/h3&gt;

&lt;p&gt;To get started with our sample workflow, first set up a trigger that will launch the workflow when an event occurs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches: [ "main" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple trigger runs the workflow when you push new commits to your repository's &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;You can add multiple trigger conditions if you need to. For instance, you may also want to run your workflow when there's a new pull request against your &lt;code&gt;main&lt;/code&gt; branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  pull_request:
    branches: [ "main" ]
  push:
    branches: [ "main" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this config, the workflow will run when either a pull request or push event occurs for the &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3. Configure workflow jobs
&lt;/h3&gt;

&lt;p&gt;GitHub Actions workflow jobs are configured by the top-level &lt;code&gt;jobs&lt;/code&gt; property in your YAML file. This is an object where each key defines the name of a new job. The job's settings must then be provided as a nested object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: "Check out repository"
        uses: actions/checkout@v3
      - name: "Build project"
        run: "echo 'Building code...'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example above defines a single job called &lt;code&gt;build&lt;/code&gt;. The &lt;code&gt;runs-on&lt;/code&gt; property configures the type of environment in which the job will run. &lt;/p&gt;

&lt;p&gt;We're using &lt;code&gt;ubuntu-latest&lt;/code&gt; for this job, but the &lt;a href="https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#choosing-github-hosted-runners" rel="noopener noreferrer"&gt;GitHub Actions documentation&lt;/a&gt; provides a complete list of available runner environments.&lt;/p&gt;

&lt;p&gt;Within the job, we're running two sequential steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The first step uses the GitHub Marketplace's &lt;a href="https://spacelift.io/blog/github-actions-checkout" rel="noopener noreferrer"&gt;&lt;code&gt;checkout&lt;/code&gt; action&lt;/a&gt; to check out your repository into the job's runner environment.&lt;/li&gt;
&lt;li&gt; The second step uses the &lt;code&gt;run&lt;/code&gt; facility to run a command within the runner's environment. For our example, the command just runs a simple &lt;code&gt;echo&lt;/code&gt; statement.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can add more functionality to your workflow by creating additional jobs and steps as needed. Complex workflows may include many different jobs, each with several distinct steps. By default, jobs run in parallel, while steps within each job execute sequentially.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4. Test the complete example
&lt;/h3&gt;

&lt;p&gt;Now we can combine the previous two sections to create a fully working GitHub Actions workflow. Copy the following code to your repository's &lt;code&gt;.github/workflows/demo-workflow.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  pull_request:
    branches: [ "main" ]
  push:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: "Check out repository"
        uses: actions/checkout@v3
      - name: "Build project"
        run: "echo 'Building code...'"
  verify:
    runs-on: ubuntu-latest
    steps:
      - run: "echo 'Verifying compliance...'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've added a second job, &lt;code&gt;verify&lt;/code&gt;, to help demonstrate how jobs within a workflow run in parallel.&lt;/p&gt;

&lt;p&gt;After saving the workflow config to your repository, try committing changes to your &lt;code&gt;main&lt;/code&gt; branch. If you then head to your repository's Actions tab, you should see a new run of your workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F07%252FGitHub-Actions-workflow-example-test.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F07%252FGitHub-Actions-workflow-example-test.png%26w%3D3840%26q%3D75" alt="GitHub Actions workflow example test" width="1278" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the run to view its details on a new screen. You'll see the two jobs within the run, &lt;code&gt;build&lt;/code&gt; and &lt;code&gt;verify&lt;/code&gt;. You can then click the jobs to view the logs from their steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F07%252Fgithub-actions-build-and-verify.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2025%252F07%252Fgithub-actions-build-and-verify.png%26w%3D3840%26q%3D75" alt="github actions build and verify" width="1278" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💡 You might also like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/infrastructure-as-code-with-github-actions" rel="noopener noreferrer"&gt;Should You Manage Your IaC with GitHub Actions?&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/github-actions-kubernetes" rel="noopener noreferrer"&gt;Kubernetes with GitHub Actions: CI/CD for Containers&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/circleci-vs-github-actions" rel="noopener noreferrer"&gt;CircleCI vs GitHub Actions: CI/CD Tools Comparison&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GitHub Actions workflows features and concepts
&lt;/h2&gt;

&lt;p&gt;The example above is just a high-level overview of how to get started with GitHub Actions workflows. The workflow concept is a powerful architecture for modelling processes within the DevOps lifecycle. By combining different triggers with a mix of parallel jobs and sequential steps, you can easily implement advanced automation for your DevOps requirements.&lt;/p&gt;

&lt;p&gt;GitHub Actions supports a huge selection of &lt;a href="https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions" rel="noopener noreferrer"&gt;options&lt;/a&gt; for workflows, jobs, and steps. We don't have space to cover them all, but we'll share some key features below. &lt;/p&gt;

&lt;p&gt;You can also find more examples and tips in our &lt;a href="https://spacelift.io/blog/github-actions-tutorial" rel="noopener noreferrer"&gt;GitHub Actions getting started tutorial&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. name
&lt;/h3&gt;

&lt;p&gt;The workflow-level &lt;code&gt;name&lt;/code&gt; option sets the workflow name that's displayed within the GitHub UI. It's an optional field that defaults to the path of your workflow config file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: "Demo workflow"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. on
&lt;/h3&gt;

&lt;p&gt;As shown above, the &lt;code&gt;on&lt;/code&gt; section of a workflow config file defines the &lt;a href="https://docs.github.com/en/actions/reference/events-that-trigger-workflows" rel="noopener noreferrer"&gt;trigger events&lt;/a&gt; that will run the workflow. Each workflow may specify multiple supported trigger events. &lt;/p&gt;

&lt;p&gt;Many trigger events have their own options, such as the following example that uses the &lt;code&gt;release&lt;/code&gt; trigger's &lt;code&gt;types&lt;/code&gt; option to run the workflow when a new release is published:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  release:
    types: [published]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. jobs
&lt;/h3&gt;

&lt;p&gt;A workflow's &lt;code&gt;jobs&lt;/code&gt; property specifies the jobs that it will run. We saw this in action in the example above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I am job 1"
  job2:
    runs-on: ubuntu-latest
    steps:
      - run: echo "I am job 2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each job must specify the steps it will run (&lt;code&gt;steps&lt;/code&gt;) and the type of runner it will execute on (&lt;code&gt;runs-on&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  4. uses and with
&lt;/h3&gt;

&lt;p&gt;Steps within a workflow job can reference actions available in the GitHub Marketplace, a Docker image, a public repository, or your project's repository. &lt;/p&gt;

&lt;p&gt;Actions are reusable sections of workflow configuration that you can customize with inputs. Inputs are specified as key-value pairs using the step's &lt;code&gt;with&lt;/code&gt; option.&lt;/p&gt;

&lt;p&gt;The following example shows how to use the &lt;code&gt;docker/login-action&lt;/code&gt; action to log in to Docker Hub within your workflow's environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  docker-login:
    runs-on: ubuntu-latest
    steps:
      - name: docker login
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the action's inputs reference values that must be configured in your project's &lt;a href="https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables" rel="noopener noreferrer"&gt;GitHub Actions variables&lt;/a&gt; and &lt;a href="https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions" rel="noopener noreferrer"&gt;secrets&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. env
&lt;/h3&gt;

&lt;p&gt;You can set environment variables for your workflow's jobs using the &lt;code&gt;env&lt;/code&gt; key. This setting is supported at the workflow level, as well as for individual jobs and steps. Workflow-level variables will be injected into every job's environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;env:
  GLOBAL_VAR: global

jobs:
  demo:
    runs-on: ubuntu-latest
    env:
      JOB_VAR: job
    steps:
      - run: 'echo "$GLOBAL_VAR $JOB_VAR $STEP_VAR"'
        env:
          STEP_VAR: step
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. needs
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;needs&lt;/code&gt; job option lets you specify a dependency on other jobs in your workflow. The job won't start running until those jobs have completed successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - run: "echo 'I am job 1'"
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - run: "echo 'I am job 2'"
  job3:
    runs-on: ubuntu-latest
    needs: [job1, job2]
    steps:
      - run: "echo 'I am job 3'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;job2&lt;/code&gt; will wait until &lt;code&gt;job1&lt;/code&gt; has finished, while &lt;code&gt;job3&lt;/code&gt; will wait for both &lt;code&gt;job1&lt;/code&gt; and &lt;code&gt;job2&lt;/code&gt;. The &lt;code&gt;needs&lt;/code&gt; keyword gives you precise control over when jobs run, enabling you to improve workflow performance and reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. matrix
&lt;/h3&gt;

&lt;p&gt;Software build processes often need to be repeated across many combinations of targets. To avoid repetitive workflow configuration, you can use the &lt;code&gt;matrix&lt;/code&gt; option to run a job multiple times with different variable inputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        node_version: [18, 20]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node_version }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The job shown above will run a total of four times, but with a different combination of operating system and Node version on each run:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;code&gt;ubuntu-latest&lt;/code&gt; on Node 18&lt;/li&gt;
&lt;li&gt; &lt;code&gt;ubuntu-latest&lt;/code&gt; on Node 20&lt;/li&gt;
&lt;li&gt; &lt;code&gt;windows-latest&lt;/code&gt; on Node 18&lt;/li&gt;
&lt;li&gt; &lt;code&gt;windows-latest&lt;/code&gt; on Node 20&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Using &lt;code&gt;matrix&lt;/code&gt; allows you to quickly implement complex job variations.&lt;/p&gt;
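
&lt;p&gt;You can also prune unwanted combinations with the matrix's &lt;code&gt;exclude&lt;/code&gt; option. The sketch below (illustrative values) drops the Windows/Node 18 pairing, so only three of the four combinations run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        node_version: [18, 20]
        exclude:
          - os: windows-latest
            node_version: 18
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node_version }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;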

&lt;h3&gt;
  
  
  8. concurrency
&lt;/h3&gt;

&lt;p&gt;Sometimes your GitHub Actions workflows will include jobs that mustn't run concurrently. For instance, if your job deploys software to production, then you may want to prevent out-of-order deployments so you can be sure only the latest code goes live. The &lt;code&gt;concurrency&lt;/code&gt; option allows you to control this behavior.&lt;/p&gt;

&lt;p&gt;When specified at the workflow or job level, the &lt;code&gt;concurrency&lt;/code&gt; keyword groups multiple jobs together across workflow runs. GitHub Actions then ensures there's only one running and one pending job within the concurrency group at any time. &lt;/p&gt;

&lt;p&gt;You can choose to cancel existing jobs when a new one appears, allowing the new job to start immediately.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  deploy:
    runs-on: ubuntu-latest
    concurrency:
      group: deployment
      cancel-in-progress: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the &lt;code&gt;deploy&lt;/code&gt; job belongs to the &lt;code&gt;deployment&lt;/code&gt; concurrency group. When a new job is created in the group, any existing jobs will be cancelled so the latest deployment can instantly begin.&lt;/p&gt;
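
&lt;p&gt;Concurrency groups can also be declared at the workflow level and computed from expressions. A common pattern, sketched here using the standard &lt;code&gt;github&lt;/code&gt; context, scopes the group to the branch that triggered the run, so pushes to different branches don't cancel each other:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;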

&lt;h2&gt;
  
  
  How to build a GitHub Actions workflow for Spacelift IaC deployments
&lt;/h2&gt;

&lt;p&gt;GitHub Actions is a versatile CI/CD system, but it's primarily designed for software development use cases. It can be&lt;a href="https://spacelift.io/blog/infrastructure-problems-that-spacelift-solves" rel="noopener noreferrer"&gt; difficult to use&lt;/a&gt; for IaC deployments where state management, governance, and efficiency are key.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://spacelift.io/" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt;, we solve this problem with our dedicated IaC orchestration platform. Spacelift provides fully automated CI/CD for IaC with self-service access, drift detection, and policy-as-code built in, making it a great fit for infrastructure workflows.&lt;/p&gt;

&lt;p&gt;When you use Spacelift, you don't need to write complex GitHub Actions workflows to provision your infrastructure. Spacelift uses a GitOps strategy to connect to your IaC repositories, monitor for changes, and then automatically run your IaC tools. &lt;/p&gt;

&lt;p&gt;Spacelift also directly integrates with your AWS, Azure, and GCP accounts to dynamically generate short-lived cloud credentials for your pipelines.&lt;/p&gt;

&lt;p&gt;We think Spacelift stacks are the best way to handle infrastructure automation, but most teams also use CI/CD solutions like GitHub Actions to build code, run tests, and automate other parts of the DevOps lifecycle. &lt;/p&gt;

&lt;p&gt;Because you may need to interact with your Spacelift stacks from your GitHub Actions workflows, we provide the &lt;a href="https://github.com/marketplace/actions/setup-spacectl" rel="noopener noreferrer"&gt;setup-spacectl action&lt;/a&gt; in the GitHub Marketplace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.spacelift.io/concepts/spacectl" rel="noopener noreferrer"&gt;Spacelift's CLI&lt;/a&gt; Spacectl allows easy programmatic access to the resources in your Spacelift account. The setup-spacectl action is an automated way to install and configure Spacectl within a GitHub Actions workflow. &lt;/p&gt;

&lt;p&gt;Adding a step that uses the action lets you run &lt;code&gt;spacectl&lt;/code&gt; commands in the steps that follow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches: [ "main" ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: "Setup Spacectl"
        uses: spacelift-io/setup-spacectl@v1
      - name: "Deploy Spacelift stack"
        env:
          SPACELIFT_API_KEY_ENDPOINT: https://${{ vars.SPACELIFT_ACCOUNT_NAME }}.app.spacelift.io
          SPACELIFT_API_KEY_ID: ${{ secrets.SPACELIFT_API_KEY_ID }}
          SPACELIFT_API_KEY_SECRET: ${{ secrets.SPACELIFT_API_KEY_SECRET }}
        run: spacectl stack deploy --id ${{ vars.SPACELIFT_STACK_ID }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow specifies a job that runs after you push to your repository's &lt;code&gt;main&lt;/code&gt; branch. It uses the &lt;code&gt;setup-spacectl&lt;/code&gt; action to install Spacectl within the job's environment and then runs the &lt;code&gt;spacectl stack deploy&lt;/code&gt; command to deploy the stack referenced by the &lt;code&gt;SPACELIFT_STACK_ID&lt;/code&gt; workflow variable. &lt;/p&gt;

&lt;p&gt;To use this workflow, you must first configure all the referenced &lt;a href="https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/store-information-in-variables" rel="noopener noreferrer"&gt;variables&lt;/a&gt; and &lt;a href="https://docs.github.com/en/actions/security-for-github-actions/security-guides/using-secrets-in-github-actions" rel="noopener noreferrer"&gt;secrets&lt;/a&gt; in your project's GitHub Actions settings.&lt;/p&gt;

&lt;p&gt;This workflow is a basic example designed to demonstrate the &lt;code&gt;setup-spacectl&lt;/code&gt; action. In most cases, it's easiest to use&lt;a href="https://docs.spacelift.io/concepts/policy/push-policy" rel="noopener noreferrer"&gt; Spacelift push policies&lt;/a&gt; to deploy your stacks automatically when you commit to your repository.&lt;/p&gt;

&lt;p&gt;However, using Spacectl within a GitHub Actions job lets you trigger stack deployments as a step in a broader workflow.&lt;/p&gt;

&lt;p&gt;While the &lt;code&gt;setup-spacectl&lt;/code&gt; action enables you to use any Spacectl command, there's also an even simpler option when you just need to deploy a stack. The &lt;a href="https://github.com/marketplace/actions/spacelift-stack-deploy" rel="noopener noreferrer"&gt;&lt;code&gt;spacelift-stack-deploy&lt;/code&gt; action&lt;/a&gt;, maintained by &lt;a href="https://cloudposse.com/" rel="noopener noreferrer"&gt;Cloud Posse&lt;/a&gt;, enables you to deploy a stack using a single GitHub Actions workflow step. Internally, it uses the &lt;code&gt;setup-spacectl&lt;/code&gt; action to implement the same steps shown above. Here's an example of how to use &lt;code&gt;spacelift-stack-deploy&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches: [ "main" ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: "Deploy Spacelift stack"
        id: spacelift
        uses: cloudposse/github-action-spacelift-stack-deploy@main
        with:
          stack: ${{ vars.SPACELIFT_STACK_ID }}
          organization: ${{ vars.SPACELIFT_ACCOUNT_NAME }}
          api_key_id: ${{ secrets.SPACELIFT_API_KEY_ID }}
          api_key_secret: ${{ secrets.SPACELIFT_API_KEY_SECRET }}
    outputs:
      outputs: ${{ steps.spacelift.outputs.outputs }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This workflow uses the same secrets and variables as the previous one, but this time they're supplied as inputs to the &lt;code&gt;spacelift-stack-deploy&lt;/code&gt; action. The action automates the process of using Spacectl to deploy a Spacelift stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key points
&lt;/h2&gt;

&lt;p&gt;GitHub Actions workflows are CI/CD pipelines that run in your GitHub repositories when trigger events occur. Each workflow file contains a collection of jobs configured to run in parallel or sequentially. GitHub Actions workflows are commonly used to build, test, and deploy software changes, but the platform's flexibility and huge community ecosystem allow you to automate any DevOps process.&lt;/p&gt;

&lt;p&gt;Because GitHub Actions workflows implement general-purpose CI/CD, specialist solutions like Spacelift can still be &lt;a href="https://spacelift.io/blog/infrastructure-as-code-with-github-actions" rel="noopener noreferrer"&gt;a better fit for&lt;/a&gt; specific use cases. Our platform solves key infrastructure automation challenges that can be complex to configure in GitHub Actions, such as managing infrastructure state, gaining visibility into deployed resources, and continuously enforcing governance requirements.&lt;/p&gt;

&lt;p&gt;With Spacelift, you can fully automate your IaC deployments through direct connections to your IaC repositories and cloud accounts. You can also use GitHub Marketplace actions like &lt;code&gt;setup-spacectl&lt;/code&gt; and &lt;code&gt;spacelift-stack-deploy&lt;/code&gt; to integrate Spacelift with GitHub Actions. This combination lets you build more capable DevOps systems by triggering Spacelift IaC deployments within your GitHub Actions workflows.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Written by James Walker&lt;/em&gt;&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>devops</category>
    </item>
    <item>
      <title>Kubernetes Control Plane: What It Is &amp; How It Works</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Fri, 25 Jul 2025 13:49:40 +0000</pubDate>
      <link>https://dev.to/spacelift/kubernetes-control-plane-what-it-is-how-it-works-44ch</link>
      <guid>https://dev.to/spacelift/kubernetes-control-plane-what-it-is-how-it-works-44ch</guid>
      <description>&lt;p&gt;Kubernetes is the leading container orchestration system for automating container operations at scale. It makes it easy to distribute container replicas across a cluster of compute Nodes. A centralized control plane governs the cluster and ensures deployments stay healthy.&lt;/p&gt;

&lt;p&gt;In this article, we’re going to take a deep dive into the Kubernetes control plane. We’ll explore all its components and their roles in the cluster. We’ll finish up by sharing some best practices that ensure the control plane functions reliably.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Kubernetes control plane?
&lt;/h2&gt;

&lt;p&gt;The Kubernetes control plane is the management layer inside a Kubernetes cluster. It's a collection of components that work together to manage the cluster's state, coordinate your Nodes, and provide the API server that lets you interact with the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the purpose of the control plane?
&lt;/h3&gt;

&lt;p&gt;The control plane's main responsibilities include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Scheduling new Pods onto available Nodes&lt;/li&gt;
&lt;li&gt;  Automatically starting new containers when failures occur&lt;/li&gt;
&lt;li&gt;  Moving Pods onto different Nodes when a Node becomes unavailable&lt;/li&gt;
&lt;li&gt;  Detecting and reconciling cluster config changes, such as by creating and deleting Pods after Deployments are resized&lt;/li&gt;
&lt;li&gt;  Serving the Kubernetes API&lt;/li&gt;
&lt;li&gt;  Storing all the data about objects in the cluster&lt;/li&gt;
&lt;li&gt;  Infrastructure management through integrations with cloud provider APIs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To summarize, the control plane runs your cluster by implementing the system-level functions that Kubernetes requires. It monitors for new events in the cluster and applies any necessary actions. For instance, when you create a Pod, the control plane selects a suitable Node to run the Pod's containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the difference between the master and control plane in Kubernetes?
&lt;/h3&gt;

&lt;p&gt;The terms "master" and "control plane" in Kubernetes are often used interchangeably, but there is a subtle distinction.&lt;/p&gt;

&lt;p&gt;Historically, "master node" referred to the physical or virtual machine where the control plane components resided. However, as Kubernetes evolved, the control plane could be distributed across multiple nodes for high availability and resilience. This led to the shift towards the term "control plane" to emphasize the functional role rather than a specific physical location.&lt;/p&gt;

&lt;p&gt;Read more: &lt;a href="https://spacelift.io/blog/kubernetes-architecture" rel="noopener noreferrer"&gt;What is Kubernetes Architecture? - Components Overview&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the components of the Kubernetes control plane?
&lt;/h2&gt;

&lt;p&gt;The Kubernetes control plane is composed of several different components. They work together to implement the control plane's features and successfully run your cluster.&lt;/p&gt;

&lt;p&gt;The control plane consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  API Server&lt;/li&gt;
&lt;li&gt;  Etcd&lt;/li&gt;
&lt;li&gt;  Controller Manager&lt;/li&gt;
&lt;li&gt;  Cloud Controller Manager&lt;/li&gt;
&lt;li&gt;  Scheduler&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2022%252F12%252Fcontrol-plane.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2022%252F12%252Fcontrol-plane.png%26w%3D3840%26q%3D75" alt="control plane" width="1100" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's a summary of the key components of the control plane and what they do.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. API Server
&lt;/h3&gt;

&lt;p&gt;The API Server (&lt;code&gt;kube-apiserver&lt;/code&gt;) is your cluster's core. It serves the HTTP API that enables cluster access.&lt;/p&gt;

&lt;p&gt;The API powers user-facing tools like Kubectl. Whenever you interact with your cluster, you're relying on the API Server being available. The API server also provides endpoints that Nodes use to fetch data from the cluster control plane.&lt;/p&gt;

&lt;p&gt;Because the API server governs all cluster management operations, it's vital that it stays healthy. It's recommended to run multiple replicas of the API server, spread across several Nodes, so you can still access the API if one of your Nodes fails.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Etcd
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://etcd.io/" rel="noopener noreferrer"&gt;etcd&lt;/a&gt; is a distributed key-value datastore. Kubernetes operates an etcd instance to store your cluster's data. This includes config values, CRDs, and the states of objects in your cluster.&lt;/p&gt;

&lt;p&gt;Kubernetes can use other datastores instead of etcd. Some popular distributions ship with alternatives that are better suited to their target use cases; K3s &lt;a href="https://docs.k3s.io/datastore" rel="noopener noreferrer"&gt;defaults to using SQLite&lt;/a&gt;, for example. Nonetheless, etcd is ideal for production clusters because it offers strong consistency guarantees. It reliably distributes data across control plane nodes and can quickly elect a new leader node when failures occur.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Controller Manager
&lt;/h3&gt;

&lt;p&gt;Many Kubernetes features are based on &lt;em&gt;controllers&lt;/em&gt;. A controller watches for changes in your cluster and applies new actions as needed. Examples of controllers include the Deployment controller, which creates new Pods based on a Deployment object's spec, and the CronJob controller, which enables periodic creation of new Jobs.&lt;/p&gt;

&lt;p&gt;Controllers run continuously in an automated control loop. They're governed by the Kubernetes Controller Manager (&lt;code&gt;kube-controller-manager&lt;/code&gt;). This process is responsible for starting and maintaining individual controllers. It ensures the controllers operate reliably so your cluster's state always matches its current configuration.&lt;/p&gt;
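
&lt;p&gt;You can observe this control loop with an ordinary Deployment. In the illustrative manifest below (example names and image), the Deployment controller continuously reconciles the cluster toward three running Pod replicas, recreating any that are deleted or fail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;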

&lt;h3&gt;
  
  
  4. Cloud Controller Manager
&lt;/h3&gt;

&lt;p&gt;Cloud Controller Manager (&lt;code&gt;cloud-controller-manager&lt;/code&gt;, or CCM) is the Kubernetes control plane component that interfaces with your cloud provider's API. Your cluster will only include this component when it's integrated with a cloud provider, such as when you're using a managed Kubernetes service.&lt;/p&gt;

&lt;p&gt;CCM enables your cluster to manage resources in your cloud account. It automates infrastructure operations, such as adding a new cloud Load Balancer resource when you create a &lt;a href="https://spacelift.io/blog/kubernetes-load-balancer" rel="noopener noreferrer"&gt;&lt;code&gt;LoadBalancer&lt;/code&gt; Kubernetes service&lt;/a&gt;. Cloud providers implement CCM support by building a plugin that sits between Kubernetes and their own API.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Scheduler
&lt;/h3&gt;

&lt;p&gt;The Kubernetes Scheduler (&lt;code&gt;kube-scheduler&lt;/code&gt;) assigns new Pods to Nodes. It watches the cluster for Pods without a Node and selects the most suitable Node to schedule them to.&lt;/p&gt;

&lt;p&gt;Scheduling decisions involve many different factors. The scheduler compares each Node's resource utilization to the Pod's requests. It also considers the Pod's affinity rules, node selectors, and taints and tolerations. Overall, the scheduler aims to distribute Pods evenly across the cluster's Nodes to ensure good performance and fault tolerance.&lt;/p&gt;
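
&lt;p&gt;Many of these scheduling inputs are set directly in the Pod spec. The manifest below (illustrative labels and values) uses &lt;code&gt;nodeSelector&lt;/code&gt; and a resource request, both of which constrain which Nodes the scheduler can choose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: 250m
          memory: 64Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;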

&lt;h2&gt;
  
  
  How do nodes work with the Kubernetes control plane?
&lt;/h2&gt;

&lt;p&gt;Kubernetes Nodes communicate with the cluster's control plane using Kubelet, a dedicated agent process. Each worker Node runs its own instance of&lt;a href="https://kubernetes.io/docs/concepts/architecture/#kubelet" rel="noopener noreferrer"&gt; Kubelet&lt;/a&gt;. When you're starting a cluster from scratch, you must manually install Kubelet on each of your worker Nodes to connect them to your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2024%252F07%252Fkubernetes-diagram.png%26w%3D3840%26q%3D75" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fspacelift.io%2F_next%2Fimage%3Furl%3Dhttps%253A%252F%252Fspaceliftio.wpcomstaging.com%252Fwp-content%252Fuploads%252F2024%252F07%252Fkubernetes-diagram.png%26w%3D3840%26q%3D75" alt="kubernetes diagram" width="1400" height="1000"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubelet registers the Node with the Kubernetes control plane, making it eligible to schedule Pods. The Scheduler can then allocate Pods to the Node, creating instructions that the API Server exposes as Pod specs. Kubelet regularly queries the API Server to learn which Pod specs it should be running.&lt;/p&gt;

&lt;p&gt;Once a Pod has been scheduled, Kubelet is responsible for starting its containers. Kubelet uses the container runtime installed on the host to create new containers and then monitors them to ensure they stay running. If a container fails or turns unhealthy, then Kubelet will replace it with a new one.&lt;/p&gt;
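
&lt;p&gt;Container health is typically defined with probes in the Pod spec. In the illustrative example below, Kubelet calls the container's HTTP endpoint periodically and restarts the container if the check fails:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: web
      image: nginx:1.27
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;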

&lt;p&gt;Kube-Proxy (&lt;code&gt;kube-proxy&lt;/code&gt;) is the second Node-level component. It runs on each Node to implement the Kubernetes networking layer. The proxy allows internal and external cluster traffic to reach the containers running on the Node.&lt;/p&gt;

&lt;p&gt;💡 You might also like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/kubernetes-scaling" rel="noopener noreferrer"&gt;Guide to Kubernetes Scaling: Horizontal, Vertical &amp;amp; Cluster&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/kubernetes-dashboard" rel="noopener noreferrer"&gt;Kubernetes Dashboard: Tutorial, Best Practices &amp;amp; Alternatives&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://spacelift.io/blog/kubernetes-tutorial" rel="noopener noreferrer"&gt;https://spacelift.io/blog/kubernetes-tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  Don't miss out! Get more posts like this: &lt;a href="https://spacelift.io/newsletter" rel="noopener noreferrer"&gt;subscribe to our newsletter&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to configure the Kubernetes control plane
&lt;/h2&gt;

&lt;p&gt;The Kubernetes control plane components support various config values for tuning your cluster. Some of the most commonly used options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates" rel="noopener noreferrer"&gt;Feature gates&lt;/a&gt;: A mechanism for opting into enabling alpha and beta features.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver" rel="noopener noreferrer"&gt;API server settings&lt;/a&gt;: You can customize the API server in several ways, such as by enabling audit logging, specifying SSL/TLS options, and choosing different authentication mechanisms.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#etcd-flags" rel="noopener noreferrer"&gt;Etcd settings&lt;/a&gt;: You can pass settings through to your cluster's etcd instance, letting you change how your cluster's state is persisted.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler" rel="noopener noreferrer"&gt;Kube-Scheduler settings&lt;/a&gt;: Settings can be used to change aspects of the scheduler's operation, including how leader election works.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet" rel="noopener noreferrer"&gt;Kubelet settings&lt;/a&gt;: Changing Kubelet settings lets you control how Nodes manage containers and interact with the control plane. For example, you can set the maximum size of container log files, manage eviction grace periods, and change how frequently Kubelet checks the API server for new data.&lt;/li&gt;
&lt;/ul&gt;
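
&lt;p&gt;As a sketch, the Kubelet options mentioned above correspond to fields in a KubeletConfiguration file. The values here are arbitrary examples, not recommendations:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi       # rotate container log files at 10 MiB
containerLogMaxFiles: 5         # keep five rotated files per container
evictionMaxPodGracePeriod: 60   # max grace period (seconds) for soft evictions
syncFrequency: 30s              # how often Kubelet syncs configured state
&lt;/code&gt;&lt;/pre&gt;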

&lt;p&gt;A complete reference of possible control plane options is available in the &lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags" rel="noopener noreferrer"&gt;Kubernetes documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Options must be set when you start your cluster. The way to change them depends on which Kubernetes distribution you're using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Managed cloud Kubernetes services such as &lt;a href="https://spacelift.io/blog/kubernetes-on-aws" rel="noopener noreferrer"&gt;Amazon EKS&lt;/a&gt; and Google GKE provision and manage the control plane for you. You can't usually modify the control plane's configuration yourself.&lt;/li&gt;
&lt;li&gt;  Self-hosted Kubernetes distributions like &lt;a href="https://spacelift.io/blog/k3s-vs-k8s" rel="noopener noreferrer"&gt;Minikube and K3s&lt;/a&gt; normally let you configure the control plane when you start your cluster. For example, Minikube's &lt;a href="https://minikube.sigs.k8s.io/docs/handbook/config" rel="noopener noreferrer"&gt;&lt;code&gt;--extra-config&lt;/code&gt; flag&lt;/a&gt; passes key-value pairs to Kubernetes components including the API server, Scheduler, and Kubelet.&lt;/li&gt;
&lt;li&gt;  Clusters created with Kubeadm are &lt;a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm" rel="noopener noreferrer"&gt;configured by a ConfigMap&lt;/a&gt; in your cluster. This is created automatically based on Kubeadm CLI flags when you start your cluster. You can change settings by modifying the ConfigMap and then using Kubeadm to upgrade your Nodes.&lt;/li&gt;
&lt;/ul&gt;
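
&lt;p&gt;To illustrate, Minikube's &lt;code&gt;--extra-config&lt;/code&gt; flag takes &lt;code&gt;component.key=value&lt;/code&gt; pairs. This hypothetical invocation raises the Kubelet's Pod limit and increases the Scheduler's log verbosity at startup:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# example values only; any component flag can be set this way
minikube start \
  --extra-config=kubelet.max-pods=150 \
  --extra-config=scheduler.v=2
&lt;/code&gt;&lt;/pre&gt;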

&lt;p&gt;In practice, the Kubernetes control plane shouldn't need reconfiguring very often. Major distributions usually ship with sensible defaults that are ready for production use. Nonetheless, it's useful to understand how the components can be customized so you can adapt your clusters to your environment. Adjusting config keys can also help you debug cluster problems.&lt;/p&gt;

&lt;h2&gt;
  Kubernetes control plane vs. data plane
&lt;/h2&gt;

&lt;p&gt;The control plane and data plane work together to provide a robust, scalable platform for running your applications. The control plane makes the decisions: scheduling Pods, reconciling state, and responding to events. The data plane, meaning the worker Nodes together with the Kubelet, Kube-Proxy, and container runtime on each one, carries out those decisions and supplies the compute, storage, and networking your workloads consume.&lt;/p&gt;

&lt;h2&gt;
  High availability and the Kubernetes control plane
&lt;/h2&gt;

&lt;p&gt;The Kubernetes control plane can be configured for high availability (HA) to make your cluster more fault-tolerant. An HA control plane distributes replicas of each component across multiple Nodes. It removes the single point of failure that exists if you run your control plane on a single Node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology" rel="noopener noreferrer"&gt;"Stacked" Nodes&lt;/a&gt; are the most common way to achieve control plane HA. This deploys an instance of each control plane component to every Node. Each Node runs a local etcd instance. This topology is used automatically when you use Kubeadm to add new Nodes to a self-managed cluster.&lt;/p&gt;

&lt;p&gt;An alternative strategy is to run a separate etcd cluster outside of Kubernetes. In this model, your control plane Nodes run every control plane component except etcd. This lets you scale etcd independently of your Nodes and provides increased redundancy during a failure: losing a Kubernetes Node no longer affects etcd, so there's less risk of consistency errors and no need to elect a new leader.&lt;/p&gt;

&lt;p&gt;A high availability control plane is essential for Kubernetes clusters operating in production environments. It reduces the risk of a control plane failure preventing cluster access or stopping workloads from being scheduled. Many managed Kubernetes services provide HA control plane options, sometimes for an increased cost.&lt;/p&gt;

&lt;h2&gt;
  Kubernetes control plane best practices
&lt;/h2&gt;

&lt;p&gt;The control plane runs your Kubernetes cluster, so it's crucial that it's correctly configured. Here are some best practices that help improve security and reliability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Keep your Kubernetes control plane updated - Tracking new Kubernetes releases ensures you're running the latest security patches and bug fixes, providing increased protection for your workloads.&lt;/li&gt;
&lt;li&gt;  Ensure RBAC is enabled - &lt;a href="https://spacelift.io/blog/kubernetes-rbac" rel="noopener noreferrer"&gt;Kubernetes RBAC rules&lt;/a&gt; let you control which actions and resources are available to individual users. Correct use of RBAC is essential to a strong Kubernetes security posture, but it's only available &lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac" rel="noopener noreferrer"&gt;when activated&lt;/a&gt; in the API server. Popular Kubernetes distributions turn RBAC on by default, but you should make sure it's enabled when you maintain custom environments with Kubeadm.&lt;/li&gt;
&lt;li&gt;  Avoid publicly exposing the Kubernetes API server - Whenever possible, access your cluster's API over a private network. This protects against zero-day vulnerabilities discovered in the API server's authentication layer.&lt;/li&gt;
&lt;li&gt;  Configure the control plane for high availability (HA) - A highly available control plane makes your cluster more fault tolerant. Many Kubernetes distributions default to running all the control plane components on a single Node, which means cluster operations stop if that Node goes down.&lt;/li&gt;
&lt;/ul&gt;
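
&lt;p&gt;To make the RBAC point concrete, here's a minimal Role and RoleBinding granting one user read-only access to Pods in a single namespace. The namespace and user names are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev            # illustrative namespace
  name: pod-reader
rules:
- apiGroups: [""]           # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane                # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;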

&lt;p&gt;You can find more tips for protecting your control plane in our &lt;a href="https://spacelift.io/blog/kubernetes-security" rel="noopener noreferrer"&gt;Kubernetes security guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  Managing Kubernetes with Spacelift
&lt;/h2&gt;

&lt;p&gt;If you need assistance managing your Kubernetes projects, take a look at &lt;a href="https://spacelift.io/product-overview" rel="noopener noreferrer"&gt;Spacelift&lt;/a&gt;. It adds a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they're planning to change.&lt;/p&gt;

&lt;p&gt;You can also use Spacelift to mix and match Terraform, Pulumi, AWS CloudFormation, and Kubernetes Stacks and have them talk to one another.&lt;/p&gt;

&lt;p&gt;To take this one step further, you could add &lt;a href="https://docs.spacelift.io/concepts/policy/" rel="noopener noreferrer"&gt;custom policies&lt;/a&gt; to reinforce the security and reliability of your configurations and deployments. Spacelift provides different types of policies and workflows that are easily customizable to fit every use case. For instance, you could add &lt;a href="https://docs.spacelift.io/concepts/policy/terraform-plan-policy" rel="noopener noreferrer"&gt;plan policies&lt;/a&gt; to restrict or warn about security or compliance violations or &lt;a href="https://docs.spacelift.io/concepts/policy/approval-policy" rel="noopener noreferrer"&gt;approval policies&lt;/a&gt; to add an approval step during deployments. &lt;/p&gt;

&lt;p&gt;You can try Spacelift for free by &lt;a href="https://spacelift.io/free-trial" rel="noopener noreferrer"&gt;creating a trial account&lt;/a&gt; or &lt;a href="https://spacelift.io/schedule-demo" rel="noopener noreferrer"&gt;booking a demo&lt;/a&gt; with one of our engineers.&lt;/p&gt;

&lt;h2&gt;
  Key points
&lt;/h2&gt;

&lt;p&gt;The Kubernetes control plane is the set of components that manage a Kubernetes cluster and store its state. These components include the Kubernetes API Server, Cloud Controller Manager, Scheduler, and etcd datastore, as well as the Kubelet process that runs on each Node.&lt;/p&gt;

&lt;p&gt;The control plane orchestrates cluster-level operations by watching for events and taking action in response. You won't often need to engage directly with control plane components, but they'll be working behind the scenes each time you interact with your cluster.&lt;/p&gt;

&lt;p&gt;Using a managed Kubernetes service removes the complexity of maintaining your cluster's control plane. Amazon EKS, Google GKE, Azure AKS, and other options configure the control plane for you so you can concentrate on your workloads. Cloud services also let you automate cluster provisioning so developers can access Kubernetes on-demand, using an IaC platform like Spacelift.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Written by James Walker&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>List to String in Terraform</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Tue, 15 Jul 2025 10:09:51 +0000</pubDate>
      <link>https://dev.to/spacelift/converting-list-to-string-in-terraform-4lf9</link>
      <guid>https://dev.to/spacelift/converting-list-to-string-in-terraform-4lf9</guid>
      <description>&lt;p&gt;Terraform’s type system is usually your friend — until you need to feed a list into something that expects a single string (a tag value, a template, a user-data snippet, an output, or a provider argument). Converting a list to a string is a common “glue” task, and the right approach depends on the shape of your data and the exact output format you need.&lt;/p&gt;

&lt;p&gt;In the full guide, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The most common pattern: &lt;code&gt;join()&lt;/code&gt; for delimiter-separated strings&lt;/li&gt;
&lt;li&gt;Formatting and shaping lists using expressions (&lt;code&gt;for&lt;/code&gt;), &lt;code&gt;format()&lt;/code&gt;, and &lt;code&gt;formatlist()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Handling edge cases like nested lists, null/empty values, and quoting&lt;/li&gt;
&lt;li&gt;Practical examples for outputs, templates, and provider arguments&lt;/li&gt;
&lt;/ul&gt;
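
&lt;p&gt;As a quick taste, &lt;code&gt;join()&lt;/code&gt; covers the most common case of producing a delimiter-separated string. The variable and values below are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;variable "availability_zones" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

output "zones_csv" {
  # "us-east-1a,us-east-1b,us-east-1c"
  value = join(",", var.availability_zones)
}
&lt;/code&gt;&lt;/pre&gt;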

&lt;p&gt;➡️ &lt;strong&gt;Read the full article on our blog:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/terraform-list-to-string" rel="noopener noreferrer"&gt;https://spacelift.io/blog/terraform-list-to-string&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>AzureRM Provider in Terraform</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Mon, 07 Jul 2025 10:56:03 +0000</pubDate>
      <link>https://dev.to/spacelift/how-to-use-terraform-azure-azurerm-provider-24e9</link>
      <guid>https://dev.to/spacelift/how-to-use-terraform-azure-azurerm-provider-24e9</guid>
      <description>&lt;p&gt;The AzureRM provider is Terraform’s main integration point for managing Azure infrastructure — but most “it doesn’t work” moments come down to provider configuration and authentication choices (and how those choices behave in CI/CD).&lt;/p&gt;

&lt;p&gt;In the full guide, we cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What the AzureRM provider is and what it can manage in Azure&lt;/li&gt;
&lt;li&gt;How to configure &lt;code&gt;azurerm&lt;/code&gt; correctly (including the required &lt;code&gt;features {}&lt;/code&gt; block)&lt;/li&gt;
&lt;li&gt;Authentication methods you can use (Azure CLI, service principal, managed identity, etc.) and when each makes sense&lt;/li&gt;
&lt;li&gt;Provider versioning and workflow tips to keep environments consistent&lt;/li&gt;
&lt;li&gt;A practical example that provisions real Azure resources end-to-end&lt;/li&gt;
&lt;/ul&gt;
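
&lt;p&gt;For reference, the minimal provider configuration looks like this. The empty &lt;code&gt;features {}&lt;/code&gt; block is required even if you set nothing inside it; the version constraint is just an example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~&gt; 4.0"   # example constraint; pin to a tested version
    }
  }
}

provider "azurerm" {
  features {}   # required, even when empty
}
&lt;/code&gt;&lt;/pre&gt;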

&lt;p&gt;➡️ &lt;strong&gt;Read the full article on our blog:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/terraform-azurerm-provider" rel="noopener noreferrer"&gt;https://spacelift.io/blog/terraform-azurerm-provider&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>azure</category>
    </item>
    <item>
      <title>Choosing a GitOps Tool in 2026</title>
      <dc:creator>Spacelift team</dc:creator>
      <pubDate>Mon, 02 Jun 2025 13:40:32 +0000</pubDate>
      <link>https://dev.to/spacelift/top-8-gitops-tools-you-should-know-19e8</link>
      <guid>https://dev.to/spacelift/top-8-gitops-tools-you-should-know-19e8</guid>
      <description>&lt;p&gt;GitOps isn’t a single product; it’s a way of operating: Git as the source of truth, automated reconciliation, and repeatable deployments. The tools vary a lot: some focus on Kubernetes app delivery, others on infrastructure provisioning, policy enforcement, or end-to-end platform workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools covered in the full guide:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Argo CD&lt;/li&gt;
&lt;li&gt;Flux CD&lt;/li&gt;
&lt;li&gt;Jenkins X&lt;/li&gt;
&lt;li&gt;Tekton&lt;/li&gt;
&lt;li&gt;Spinnaker&lt;/li&gt;
&lt;li&gt;Weave GitOps&lt;/li&gt;
&lt;li&gt;Crossplane&lt;/li&gt;
&lt;li&gt;Spacelift&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;➡️ Read the full article on our blog:&lt;br&gt;&lt;br&gt;
&lt;a href="https://spacelift.io/blog/gitops-tools" rel="noopener noreferrer"&gt;https://spacelift.io/blog/gitops-tools&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
