<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karl Schriek</title>
    <description>The latest articles on DEV Community by Karl Schriek (@karlschriek).</description>
    <link>https://dev.to/karlschriek</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3761645%2Fd85e3ad5-4809-4396-b6e2-f67f3a4b7068.png</url>
      <title>DEV Community: Karl Schriek</title>
      <link>https://dev.to/karlschriek</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karlschriek"/>
    <language>en</language>
    <item>
      <title>Introducing Snap CD: Why I Built a New Terraform Orchestrator</title>
      <dc:creator>Karl Schriek</dc:creator>
      <pubDate>Tue, 10 Feb 2026 11:45:35 +0000</pubDate>
      <link>https://dev.to/karlschriek/introducing-snap-cd-why-i-built-a-new-terraform-orchestrator-23ll</link>
      <guid>https://dev.to/karlschriek/introducing-snap-cd-why-i-built-a-new-terraform-orchestrator-23ll</guid>
      <description>&lt;p&gt;Anyone who has operated Terraform/OpenTofu at scale knows the pattern: &lt;/p&gt;

&lt;p&gt;You start with one state file. It works great. Then it grows. And grows. Maybe your company also grows, so that now multiple teams are deploying infrastructure. &lt;code&gt;terraform plan&lt;/code&gt; used to take seconds - now it takes many minutes. A single change triggers a refresh of hundreds of resources. One team's DNS change blocks another team's application deployment. You start sweating every time &lt;code&gt;terraform apply&lt;/code&gt; runs. All you want to do is add a tag to an Azure storage account, but the &lt;code&gt;plan&lt;/code&gt; won't run through because that one fringe resource whose credentials have gone stale is causing the refresh to fail! Or it &lt;em&gt;does&lt;/em&gt; run through, but now there are multiple resources you have no knowledge of, all of which will be modified by the &lt;code&gt;apply&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The answer everyone arrives at is the same: break it up. Split your monolith into smaller, focused state files. Networking in one. DNS in another. Application infrastructure in a third. Give different teams the responsibility to manage different pieces.&lt;/p&gt;

&lt;p&gt;But the moment you do that, you inherit a new problem: &lt;strong&gt;the dependencies between those pieces are no longer enforced, and no longer visible.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your application module needs the &lt;code&gt;vpc_id&lt;/code&gt; from your networking module. Your DNS module needs the &lt;code&gt;load_balancer_arn&lt;/code&gt; from your application module. Suddenly you're stitching together &lt;code&gt;terraform_remote_state&lt;/code&gt; data sources, writing wrapper scripts, building CI/CD pipelines with hard-coded dependency chains, and praying that someone doesn't deploy a networking change that deletes resources your application deployments depend on.&lt;/p&gt;
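&lt;p&gt;To make that glue concrete, here is a sketch of the kind of &lt;code&gt;terraform_remote_state&lt;/code&gt; wiring this forces on you (the backend details, bucket name and output names are illustrative, not from any specific setup):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# In the application module: manually reach into the networking state
data "terraform_remote_state" "networking" {
  backend = "s3"
  config = {
    bucket = "my-tf-states" # illustrative bucket name
    key    = "networking/terraform.tfstate"
    region = "eu-central-1"
  }
}

# Nothing enforces that "private_subnet_id" still exists, or that the
# networking state was applied before this one
resource "aws_instance" "app" {
  subnet_id = data.terraform_remote_state.networking.outputs.private_subnet_id
  # ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;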

&lt;p&gt;The dependency graph that Terraform/OpenTofu handles beautifully &lt;em&gt;within&lt;/em&gt; a single state becomes your manual responsibility &lt;em&gt;across&lt;/em&gt; states.&lt;/p&gt;
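&lt;p&gt;Within a single state, that graph comes for free from plain attribute references (illustrative resource names):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Terraform orders these automatically: the subnet depends on the VPC
# via the reference below, with no extra wiring needed.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id # implicit dependency edge
  cidr_block = "10.0.1.0/24"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;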

&lt;h1&gt;
  
  
  What I wanted
&lt;/h1&gt;

&lt;p&gt;I wanted a system where I could:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Break infrastructure into small, focused modules&lt;/strong&gt;, each with its own state file, its own lifecycle, its own blast radius. Outputs from any module automatically become available as inputs to other modules, creating a declarative dependency system across my entire infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Have changes propagate automatically.&lt;/strong&gt; When my "vpc" module produces a new &lt;code&gt;private_subnet_id&lt;/code&gt;, downstream modules that consume it should re-plan and re-apply without manual intervention. It should also be a true GitOps orchestrator, meaning new commits or updated configuration should automatically trigger deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keep my cloud credentials out of the control plane.&lt;/strong&gt; The orchestrator should coordinate work, not execute it. Execution should happen on runners I deploy in my own infrastructure. I decide where they run, what access they have, and which modules are allowed to use them. My state files I manage in whatever remote location I am most comfortable with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Control access granularly.&lt;/strong&gt; Infrastructure is organized into stacks (hard boundaries like "prod" and "dev"), then namespaces (logical groupings like "networking" or "storage"), then modules (individual deployments). I need role-based permissions assignable at every one of these levels, whether for service principals or users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stay non-invasive.&lt;/strong&gt; No proprietary runtimes, no lock-in at the execution layer. Runners should execute standard commands like &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt; in a normal shell. I should be able to SSH into a runner's working directory and run commands manually if I need to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manage everything as code.&lt;/strong&gt; A Terraform Provider for the orchestrator itself, so that stacks, namespaces, modules, runners, secrets, role assignments etc. are all defined in HCL.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;None of the existing tools delivered on all six of these, and eventually I realized that if I wanted a system like this, I would have to start building it myself.&lt;/p&gt;

&lt;h1&gt;
  
  
  How I solved it
&lt;/h1&gt;

&lt;p&gt;This probably warrants a dedicated article by itself, but suffice it to say that for a software engineer with the interests I have (building cohesive solutions that consist of various interlocking systems) this was a &lt;em&gt;wonderful&lt;/em&gt; project to work on. Few things I have done in my career have brought me as much satisfaction as this. It tapped into all the skills I had learnt over 20 years of software engineering and also demanded that I learn quite a few more!&lt;/p&gt;

&lt;p&gt;It took me about six months to lay down the bare bones, then about another 12 months of iteration, testing (with pretty serious production infrastructure) and feature expansion. During this time I completely rewrote some of the core systems multiple times until I was happy with them.&lt;/p&gt;

&lt;p&gt;With that being said, allow me to introduce &lt;a href="https://snapcd.io" rel="noopener noreferrer"&gt;Snap CD&lt;/a&gt; and explain how it ticks off the six requirements I mentioned above:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Modular deployments
&lt;/h2&gt;

&lt;p&gt;Snap CD organizes infrastructure into three levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;module&lt;/strong&gt; is a single Terraform/OpenTofu deployment. It points to code in a Git repo, has its own state file, and defines inputs and outputs.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;namespace&lt;/strong&gt; groups related modules. Think "networking", "storage", "applications". Typically only one team would be responsible for a single namespace.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;stack&lt;/strong&gt; is a hard boundary, such as "prod", "dev" or "staging". Namespaces are organized into stacks. Modules in different stacks don't influence each other.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is some very simple sample code using the &lt;a href="https://registry.terraform.io/providers/schrieksoft/snapcd/latest/docs" rel="noopener noreferrer"&gt;Snap CD Terraform Provider&lt;/a&gt; to deploy a new namespace into an existing stack. Into the namespace we deploy two modules, "vpc" and "cluster", where the latter requires an output from the former as one of its inputs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Stack&lt;/span&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_stack"&lt;/span&gt; &lt;span class="s2"&gt;"mystack"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-stack"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Namespace&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"mynamespace"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-namespace"&lt;/span&gt;
  &lt;span class="nx"&gt;stack_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;snapcd_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mystack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;## Module 1 (VPC)&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_module"&lt;/span&gt; &lt;span class="s2"&gt;"vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vpc"&lt;/span&gt;
  &lt;span class="nx"&gt;namespace_id&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snapcd_namespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mynamespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;source_revision&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt;
  &lt;span class="nx"&gt;source_url&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/snapcd-samples/mock-module-vpc.git"&lt;/span&gt;
  &lt;span class="nx"&gt;source_subdirectory&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nx"&gt;runner_id&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;snapcd_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;my_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;## Module 2 (Cluster)&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_module"&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"cluster"&lt;/span&gt;
  &lt;span class="nx"&gt;namespace_id&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snapcd_namespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mynamespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;source_revision&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt;
  &lt;span class="nx"&gt;source_url&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/snapcd-samples/mock-module-kubernetes-cluster.git"&lt;/span&gt;
  &lt;span class="nx"&gt;source_subdirectory&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nx"&gt;runner_id&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;snapcd_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;my_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_module_input_from_output"&lt;/span&gt; &lt;span class="s2"&gt;"private_subnet_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;input_kind&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Param"&lt;/span&gt;
  &lt;span class="nx"&gt;module_id&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snapcd_module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"deploy_to_subnet_id"&lt;/span&gt; &lt;span class="c1"&gt;// The "cluster" module expects a variable called "deploy_to_subnet_id"&lt;/span&gt;
  &lt;span class="nx"&gt;output_module_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snapcd_module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;output_name&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"private_subnet_id"&lt;/span&gt; &lt;span class="c1"&gt;// The "vpc" module produces an output called "private_subnet_id", which we map to "deploy_to_subnet_id"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE that these are "mock" deployments, meant for illustration only. You can find their code &lt;a href="https://github.com/snapcd-samples/mock-module-vpc.git" rel="noopener noreferrer"&gt;here&lt;/a&gt; and &lt;a href="https://github.com/snapcd-samples/mock-module-kubernetes-cluster.git" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The dependency graph is the core of Snap CD. Since the "cluster" module has a &lt;code&gt;snapcd_module_input_from_output&lt;/code&gt; that references an output from the "vpc" module, Snap CD knows that a dependency exists. No scripts. No CI/CD glue. The dependency graph is derived from the configuration itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqatpji1eicqu4qu1wefq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqatpji1eicqu4qu1wefq.gif" alt="dag" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Event-driven CD
&lt;/h2&gt;

&lt;p&gt;Modules can trigger automatically based on multiple events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source changes&lt;/strong&gt;: A new commit lands on a branch, or a new semantic version tag appears. Snap CD detects this (typically via polling jobs pushed to a runner, but manual notification webhooks are also supported) and triggers a deployment job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upstream output changes&lt;/strong&gt;: When a dependency's outputs change, downstream modules re-deploy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Definition changes&lt;/strong&gt;: When you modify a module's configuration (e.g. via the Terraform Provider, or manually via the Portal), it triggers a sync.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also require &lt;strong&gt;manual approval&lt;/strong&gt; before applies go through, with configurable approval thresholds. This lets you build workflows where plans run automatically but &lt;code&gt;apply&lt;/code&gt; waits for human sign-off.&lt;/p&gt;

&lt;p&gt;Let's consider again the code for the "cluster" module above. That module points to the "main" branch of the repo at "&lt;a href="https://github.com/snapcd-samples/mock-module-kubernetes-cluster.git" rel="noopener noreferrer"&gt;https://github.com/snapcd-samples/mock-module-kubernetes-cluster.git&lt;/a&gt;". Whenever new commits are pushed to this branch, Snap CD will automatically trigger a deployment.&lt;/p&gt;

&lt;p&gt;Similarly if the "vpc" module outputs a new value for &lt;code&gt;private_subnet_id&lt;/code&gt;, then the "cluster" module deployment will trigger. &lt;/p&gt;

&lt;p&gt;Lastly, a change to the definition as follows would automatically trigger a deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_module"&lt;/span&gt; &lt;span class="s2"&gt;"vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vpc"&lt;/span&gt;
  &lt;span class="nx"&gt;namespace_id&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snapcd_namespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mynamespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;source_revision&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt;
  &lt;span class="nx"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;source_url&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/snapcd-samples/mock-module-vpc.git"&lt;/span&gt;
  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;source_url&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/snapcd-samples/mock-module-another-vpc.git"&lt;/span&gt;
  &lt;span class="nx"&gt;source_subdirectory&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
  &lt;span class="nx"&gt;runner_id&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;snapcd_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;my_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are changing the &lt;code&gt;source_url&lt;/code&gt;, but any change to the &lt;code&gt;snapcd_module&lt;/code&gt; itself or to any child resource such as &lt;code&gt;snapcd_module_input...&lt;/code&gt;, &lt;code&gt;snapcd_extra_file&lt;/code&gt;, &lt;code&gt;snapcd_backend_config&lt;/code&gt; and so forth would also automatically trigger a deployment!&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Runner isolation
&lt;/h2&gt;

&lt;p&gt;Snap CD's architecture cleanly separates orchestration from execution. The &lt;strong&gt;Server&lt;/strong&gt; (&lt;a href="https://snapcd.io" rel="noopener noreferrer"&gt;snapcd.io&lt;/a&gt;) is the control plane - it handles configuration, dependency tracking, job management, and log/output storage. It never touches your cloud infrastructure directly. No AWS credentials, no Azure service principals, no GCP service accounts. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Runners&lt;/strong&gt; are self-hosted agents that you deploy in a manner and location of your choosing. They connect to the Server over an authenticated WebSocket, pick up jobs, execute standard &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt; etc., and report back with logs and outputs. &lt;/p&gt;

&lt;p&gt;The Runner is an open-source component, published at &lt;a href="https://github.com/schrieksoft/snapcd-runner" rel="noopener noreferrer"&gt;github.com/schrieksoft/snapcd-runner&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We provide sample code for deploying runners &lt;a href="https://github.com/schrieksoft/snapcd-runner-deployment-local" rel="noopener noreferrer"&gt;locally&lt;/a&gt;, with &lt;a href="https://github.com/schrieksoft/snapcd-runner-deployment-docker" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;, or on &lt;a href="https://github.com/schrieksoft/snapcd-runner-deployment-kubernetes" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;You configure your runners with whatever cloud credentials they need, and then you dictate which Snap CD modules are allowed to use them. For example, you may want to separate runners for "dev" and "prod", and/or for different cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltxb4anwrip5zmi3i00m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltxb4anwrip5zmi3i00m.png" alt="runners" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is an example of how you would register a runner and assign it for use by modules within the namespace we created above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_service_principal"&lt;/span&gt; &lt;span class="s2"&gt;"my_service_principal"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// fetch a pre-existing Service Principal (this must be created manually via the snapcd.io portal)&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"MyServicePrincipal"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_runner"&lt;/span&gt; &lt;span class="s2"&gt;"my_runner"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"myrunner"&lt;/span&gt;
  &lt;span class="nx"&gt;service_principal_id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;my_service_principal&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;is_assigned_to_all_modules&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_runner_namespace_assignment"&lt;/span&gt; &lt;span class="s2"&gt;"myrunner_mynamespace"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;runner_id&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;snapcd_runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;myrunner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;namespace_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snapcd_namespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mynamespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Runners can be assigned to a single &lt;a href="https://registry.terraform.io/providers/schrieksoft/snapcd/latest/docs/resources/runner_module_assignment" rel="noopener noreferrer"&gt;module&lt;/a&gt;, to an entire &lt;a href="https://registry.terraform.io/providers/schrieksoft/snapcd/latest/docs/resources/runner_namespace_assignment" rel="noopener noreferrer"&gt;namespace&lt;/a&gt;, an entire &lt;a href="https://registry.terraform.io/providers/schrieksoft/snapcd/latest/docs/resources/runner_stack_assignment" rel="noopener noreferrer"&gt;stack&lt;/a&gt; or (by setting the &lt;code&gt;is_assigned_to_all_modules&lt;/code&gt; flag to &lt;code&gt;true&lt;/code&gt;) to an entire organization.&lt;/p&gt;
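&lt;p&gt;By analogy with the namespace assignment shown above, scoping a runner to a single module would look roughly like the sketch below (the attribute names &lt;code&gt;runner_id&lt;/code&gt; and &lt;code&gt;module_id&lt;/code&gt; are assumed here; check the linked provider docs for the exact schema):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Sketch: restrict a runner to exactly one module
# (attribute names assumed by analogy with the namespace assignment)
resource "snapcd_runner_module_assignment" "myrunner_vpc" {
  runner_id = snapcd_runner.my_runner.id
  module_id = snapcd_module.vpc.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;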

&lt;h2&gt;
  
  
  4. Permission system
&lt;/h2&gt;

&lt;p&gt;Role-based access control is assignable at every level of the hierarchy: organization, stack, namespace, module, as well as to runners. &lt;/p&gt;

&lt;p&gt;Users, service principals, and groups can all be scoped precisely.&lt;/p&gt;

&lt;p&gt;In the example code below, we assign a user the &lt;code&gt;Contributor&lt;/code&gt; role on the namespace we created in the example above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;
&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_user"&lt;/span&gt; &lt;span class="s2"&gt;"myuser"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;user_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"myuser@somedomain.com"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"snapcd_namespace_role_assignment"&lt;/span&gt; &lt;span class="s2"&gt;"myuser_contributor"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;stack_id&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snapcd_stack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mynamespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;principal_id&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;snapcd_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;myuser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;principal_discriminator&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"User"&lt;/span&gt; &lt;span class="c1"&gt;// Can be one of "User", "ServicePrincipal" or "Group"&lt;/span&gt;
  &lt;span class="nx"&gt;role_name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Contributor"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Non-invasive orchestration
&lt;/h2&gt;

&lt;p&gt;Snap CD is not a Terraform/OpenTofu replacement. It doesn't parse HCL. It doesn't have its own resource model. Your modules are regular Terraform/OpenTofu modules. Your providers are regular Terraform/OpenTofu providers. If you stopped using Snap CD tomorrow, your infrastructure and state files would still be perfectly valid.&lt;/p&gt;

&lt;p&gt;Snap CD also doesn't force proprietary tooling into your deployment process. Runners execute standard &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt; in a normal shell. Snap CD provides the inputs - &lt;code&gt;.env&lt;/code&gt; files, &lt;code&gt;.tfvars&lt;/code&gt; files, scripts - and the runner executes them. If you needed to, you could navigate directly to a runner's working directory and run those commands manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Everything as code
&lt;/h2&gt;

&lt;p&gt;Almost everything in Snap CD is managed via its own &lt;a href="https://registry.terraform.io/providers/schrieksoft/snapcd/latest/docs" rel="noopener noreferrer"&gt;Terraform Provider&lt;/a&gt;. Stacks, namespaces, modules, runners, secrets, role assignments, etc. - all defined in HCL. &lt;/p&gt;

&lt;p&gt;For a more complete tutorial, see the &lt;a href="https://docs.snapcd.io/quickstart/" rel="noopener noreferrer"&gt;quickstart guide&lt;/a&gt; or go directly to the &lt;a href="https://github.com/snapcd-samples/sample-deployment" rel="noopener noreferrer"&gt;sample deployment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One of the interesting patterns the Terraform Provider makes possible is a module-within-module pattern. In other words, you could instruct Snap CD to deploy Snap CD modules, which in turn instruct Snap CD to deploy your actual resources!&lt;/p&gt;
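&lt;p&gt;A rough sketch of that pattern, reusing only the &lt;code&gt;snapcd_module&lt;/code&gt; attributes shown earlier (the repo URL is hypothetical): the outer module's source repo would itself contain &lt;code&gt;snapcd_module&lt;/code&gt; resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Sketch: an "outer" Snap CD module whose Terraform code (in the repo
# below) uses the Snap CD provider to declare further snapcd_module
# resources. The repo URL is hypothetical.
resource "snapcd_module" "platform" {
  name                = "platform"
  namespace_id        = snapcd_namespace.mynamespace.id
  source_revision     = "main"
  source_url          = "https://github.com/my-org/platform-snapcd-modules.git"
  source_subdirectory = ""
  runner_id           = data.snapcd_runner.my_runner.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;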

&lt;h1&gt;
  
  
  Getting started
&lt;/h1&gt;

&lt;p&gt;Snap CD is available as a hosted service at &lt;a href="https://snapcd.io" rel="noopener noreferrer"&gt;snapcd.io&lt;/a&gt; with a free community tier. The runner is open source and available on &lt;a href="https://github.com/schrieksoft/snapcd-runner" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. We provide deployment instructions for &lt;a href="https://github.com/schrieksoft/snapcd-runner-deployment-local" rel="noopener noreferrer"&gt;local use&lt;/a&gt;, with &lt;a href="https://github.com/schrieksoft/snapcd-runner-deployment-docker" rel="noopener noreferrer"&gt;Docker Compose&lt;/a&gt;, or on &lt;a href="https://github.com/schrieksoft/snapcd-runner-deployment-kubernetes" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you've ever stared at a sprawling Terraform monolith and thought "there has to be a better way to split this up" - that's exactly the problem Snap CD was built to solve. If you would like to try it out, here is a &lt;a href="https://docs.snapcd.io/quickstart/" rel="noopener noreferrer"&gt;quickstart&lt;/a&gt; guide.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>tooling</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
