<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nick Silverman</title>
    <description>The latest articles on DEV Community by Nick Silverman (@nckslvrmn).</description>
    <link>https://dev.to/nckslvrmn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F388964%2F465d834c-4beb-4205-bc76-aa7a37a86f83.jpg</url>
      <title>DEV Community: Nick Silverman</title>
      <link>https://dev.to/nckslvrmn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nckslvrmn"/>
    <language>en</language>
    <item>
      <title>Home Server GitOps-Lite on Nothing but GitHub and Docker</title>
      <dc:creator>Nick Silverman</dc:creator>
      <pubDate>Wed, 22 Apr 2026 01:46:47 +0000</pubDate>
      <link>https://dev.to/nckslvrmn/home-server-gitops-lite-on-nothing-but-github-and-docker-19lo</link>
      <guid>https://dev.to/nckslvrmn/home-server-gitops-lite-on-nothing-but-github-and-docker-19lo</guid>
      <description>&lt;p&gt;I run a decent little stack of services out of my house. Whisper (an end-to-end encrypted secret sharing app), bar_keep, archivist, a few other side projects, plus the base infra that ties them together: Traefik, shared networks, a handful of private stacks. For a while every deploy meant SSHing into the server, and I was pretty tired of it.&lt;/p&gt;

&lt;p&gt;What I actually wanted was something like the popular GitOps tools a lot of folks run on top of Kubernetes. Push code, get a deploy, done. No shell, no secrets on disk, no kubectl. But I'm not running Kubernetes at home and I wasn't about to install it just for this, so I took a different route.&lt;/p&gt;

&lt;p&gt;I ended up gluing together three things that basically give me the same experience those tools give you, just lighter and totally free on top of GitHub. It's all stuff I already run, the control plane is GitHub itself, and honestly I think it's a pretty fun pattern that other folks with home servers might like.&lt;/p&gt;

&lt;h2&gt;The three pieces&lt;/h2&gt;

&lt;p&gt;There are three parts to this, and two of them are tools I built:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/nckslvrmn/github-multi-runner" rel="noopener noreferrer"&gt;github-multi-runner&lt;/a&gt;&lt;/strong&gt; — a single container that runs a bunch of GitHub Actions self-hosted runners, configured via a JSON file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/nckslvrmn/docker-compose-deploy" rel="noopener noreferrer"&gt;docker-compose-deploy&lt;/a&gt;&lt;/strong&gt; — a composite GitHub Action that runs &lt;code&gt;docker compose up&lt;/code&gt; on a self-hosted runner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The pattern:&lt;/strong&gt; a private "base infra" repo that owns networks, volumes, and shared stacks, plus each service repo owning its own real &lt;code&gt;compose.yml&lt;/code&gt; that is both the deployment config and a working example for anyone reading the repo.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The runner container is the thing that turns "my home server" into "a GitHub Actions target." The action is the thing that actually deploys. The pattern is what ties it all together so I never have to touch a terminal to ship.&lt;/p&gt;

&lt;h2&gt;github-multi-runner: one container, many runners&lt;/h2&gt;

&lt;p&gt;The official &lt;code&gt;ghcr.io/actions/actions-runner&lt;/code&gt; image is designed around one runner per container. Which is fine, but I've got a bunch of repos (public and private) plus an org, and spinning up 10 containers just to attach to 10 scopes feels silly. I also wanted to be able to add and remove runners without bouncing anything else.&lt;/p&gt;

&lt;p&gt;So github-multi-runner is just a bash entrypoint and a JSON file mounted into the official image. No custom image to build and maintain. The JSON looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"runners"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-org"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"scope"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"org"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"target"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-github-org"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-repo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"scope"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"repo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"target"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"myuser/my-repo"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"whisper"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"scope"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"repo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"target"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"nckslvrmn/whisper"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entrypoint watches that file. If you add a runner, it registers and starts it. If you change one, it gracefully deregisters and re-registers just that one. If you remove a runner, it drains it. Unrelated runners are never touched. It also handles docker socket access automatically (detects the GID and adds the runner user to a matching group), so workflows that run &lt;code&gt;docker compose&lt;/code&gt; just work.&lt;/p&gt;
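&lt;p&gt;A minimal sketch of that socket-GID handling, assuming a &lt;code&gt;runner&lt;/code&gt; user inside the container (the function names here are illustrative, not the actual entrypoint's):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/usr/bin/env bash
# print the numeric group id that owns a file or socket
socket_gid() {
  stat -c '%g' "$1"
}

# make $2 a member of whatever group owns the docker socket at $1
grant_docker_access() {
  local sock="$1" user="$2" gid
  [ -e "$sock" ] || return 0   # no socket mounted: nothing to do
  gid="$(socket_gid "$sock")"
  # reuse an existing group with that GID, or create one, then join it
  getent group "$gid" &amp;gt;/dev/null || groupadd -g "$gid" docker-sock
  usermod -aG "$gid" "$user"
}

# in the entrypoint: grant_docker_access /var/run/docker.sock runner
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;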

&lt;p&gt;A few things I cared about while building it: graceful drains so nothing gets yanked mid-job, persistent registrations across container restarts so there's no deregister/re-register thrash, and per-runner log files so debugging is just a &lt;code&gt;tail -F&lt;/code&gt; away.&lt;/p&gt;

&lt;p&gt;The whole thing is a single bash script because bash is what ships in the runner image and I didn't want another dependency.&lt;/p&gt;

&lt;p&gt;The deploy on the host side is just a compose file with the official runner image, the entrypoint mounted in, the JSON config mounted in, the docker socket mounted in, and a &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; in the environment. That's it.&lt;/p&gt;
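&lt;p&gt;As a hedged sketch, that host-side compose file can look something like this (the image tag, file paths, and script name are my assumptions, not the repo's exact contents):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  runners:
    image: ghcr.io/actions/actions-runner:latest
    entrypoint: ["/entrypoint.sh"]
    environment:
      - GITHUB_TOKEN=${GITHUB_TOKEN}
    volumes:
      - ./entrypoint.sh:/entrypoint.sh:ro
      - ./runners.json:/runners.json:ro
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;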

&lt;h2&gt;docker-compose-deploy: the dumbest possible deploy action&lt;/h2&gt;

&lt;p&gt;This one is even simpler. It's a composite GitHub Action whose entire job is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-compose-action@v2&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker compose -f "${{ inputs.file }}" up -d --pull always --remove-orphans&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's really it. It takes a file path and optional args and runs &lt;code&gt;docker compose up&lt;/code&gt;. It's not magic. The magic is &lt;em&gt;where&lt;/em&gt; it runs, which is on a self-hosted runner on my server, which means &lt;code&gt;docker compose up&lt;/code&gt; happens on the actual deploy target.&lt;/p&gt;

&lt;p&gt;Because it's a normal GitHub Action, I get all the things that come with that for free:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secrets from GitHub's encrypted secret store get injected as env vars on the step, and &lt;code&gt;docker compose&lt;/code&gt; substitutes them into the compose file at runtime. Secrets never land in the compose file or on disk.&lt;/li&gt;
&lt;li&gt;The workflow log is the deploy log.&lt;/li&gt;
&lt;li&gt;If the compose file is bad, the workflow fails and I get an email.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;workflow_dispatch&lt;/code&gt; gives me a big green "deploy" button in the GitHub UI for manual deploys.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The pattern: real compose files in service repos&lt;/h2&gt;

&lt;p&gt;Here's the part I think is actually the coolest bit. It's not a tool, it's just a convention.&lt;/p&gt;

&lt;p&gt;Every service repo has a real &lt;code&gt;compose.yml&lt;/code&gt; in the root. Not an example, not a template. The literal file that gets used in production. For Whisper, it looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;whisper&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/nckslvrmn/whisper:${WHISPER_VERSION:-latest}&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;S3_BUCKET=${S3_BUCKET}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DYNAMO_TABLE=${DYNAMO_TABLE}&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/home/nsilverman/.aws:/root/.aws/:ro&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.enable=true"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.whisper.rule=Host(`whisper.slvr.io`)"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;traefik.http.routers.whisper.entrypoints=websecure"&lt;/span&gt;
    &lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik&lt;/span&gt;

&lt;span class="na"&gt;networks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;traefik&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Anyone cloning the repo to self-host Whisper gets a totally usable compose file to start from. Someone reading the repo to understand how the thing is deployed gets the literal answer. And I get to use the same file to deploy my own instance. One file, three audiences, no drift.&lt;/p&gt;

&lt;p&gt;The base infra that this compose file depends on (the &lt;code&gt;traefik&lt;/code&gt; network, the Traefik container itself, shared volumes, private stacks that don't belong in public repos) lives in a separate private repo. That repo has its own compose file and its own deploy workflow. Everything connects via external networks, which means each service repo and the base infra repo all deploy fully independently.&lt;/p&gt;
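&lt;p&gt;For flavor, a minimal sketch of what the Traefik piece of such a base infra repo can look like; the flags and version here are illustrative assumptions, and the real private repo surely carries more:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik

networks:
  traefik:
    name: traefik   # service repos attach to this as an external network
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;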

&lt;h2&gt;Putting it all together: Whisper as an example&lt;/h2&gt;

&lt;p&gt;Whisper's deploy workflow has two jobs. The first runs in GitHub Actions' default runner environment and builds the Docker image and pushes it to GHCR. The second runs on &lt;code&gt;[self-hosted]&lt;/code&gt; and does the actual deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;push-whisper-image&lt;/span&gt;
  &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;self-hosted&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v5&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nckslvrmn/docker-compose-deploy@main&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;compose.yml&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.S3_BUCKET }}&lt;/span&gt;
        &lt;span class="na"&gt;DYNAMO_TABLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DYNAMO_TABLE }}&lt;/span&gt;
        &lt;span class="na"&gt;WHISPER_VERSION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.ref_name }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;runs-on: [self-hosted]&lt;/code&gt; is the whole trick. It tells GitHub to route the job to one of my runners, which are running inside my multi-runner container on my home server. The action checks out the repo, runs &lt;code&gt;docker compose up&lt;/code&gt; with &lt;code&gt;compose.yml&lt;/code&gt; and the version tag from the release, and because compose does an image pull with &lt;code&gt;--pull always&lt;/code&gt;, the new version lands. Traefik picks up the label changes automatically. Old container goes down, new one comes up, no downtime.&lt;/p&gt;

&lt;p&gt;When I cut a new tagged release on GitHub, the whole chain runs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Release published.&lt;/li&gt;
&lt;li&gt;Image builds in GitHub Actions' default runner environment and gets pushed to GHCR.&lt;/li&gt;
&lt;li&gt;Deploy job runs on my home server, pulls the new image, runs compose.&lt;/li&gt;
&lt;li&gt;Profit.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There's also a &lt;code&gt;workflow_dispatch&lt;/code&gt; variant for manual deploys where I can pass a version string in the UI. Super handy for rolling back.&lt;/p&gt;
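&lt;p&gt;That manual trigger is standard workflow syntax; a hedged sketch, where the input name is an assumption:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;on:
  workflow_dispatch:
    inputs:
      version:
        description: "Image tag to deploy"
        required: true
        default: "latest"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;In that variant the deploy job reads the version from &lt;code&gt;${{ inputs.version }}&lt;/code&gt; instead of &lt;code&gt;github.ref_name&lt;/code&gt;.&lt;/p&gt;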

&lt;h2&gt;Why I like this&lt;/h2&gt;

&lt;p&gt;A few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No SSH, no kubectl, no shell.&lt;/strong&gt; The entire deployment surface is a GitHub Action workflow file. If I want to deploy something new, I write a compose file and a workflow. If I want to roll back, I just rerun the deploy action with the previous version tag.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No control plane to run.&lt;/strong&gt; The popular GitOps tools need a Kubernetes cluster, a bunch of CRDs, and a UI server to actually host them. This setup needs a container and a JSON file, and GitHub is the control plane.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secrets are already solved.&lt;/strong&gt; GitHub already has an encrypted secret store with fine-grained access controls. I don't need Vault for a home setup. I just put secrets in the repo's secret settings and reference them in the workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The compose file is the source of truth &lt;em&gt;and&lt;/em&gt; a working example.&lt;/strong&gt; Anyone reading the repo can see exactly how the thing is deployed. There's no "well the real config is in some private ops repo" split.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Each repo deploys on its own.&lt;/strong&gt; No monorepo, no shared deploy pipeline, no coordination. The only shared thing is the external Traefik network and the base infra repo that owns it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It's fully GitOps-ish.&lt;/strong&gt; The state of my server is described by the compose files in my repos. If I nuked the server and restored the base infra repo, then re-ran every service workflow, everything would come back up in a known state.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Tradeoffs, because there are always tradeoffs&lt;/h2&gt;

&lt;p&gt;I'm not trying to pretend this is a full-blown GitOps platform. A few things it doesn't do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-hosted runners on public repos are a real security concern.&lt;/strong&gt; Out of the box, anyone who can get a workflow to run can execute code on your host. You can still use this setup on public repos if you're careful about what triggers the self-hosted job: I gate the deploy workflow on releases being cut, which only people with push access can do, so random PRs never touch the host. For an extra layer, GitHub environment protection rules can require approval before a self-hosted job runs. That makes it reasonably safe, but it's still worth being deliberate about.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No drift detection.&lt;/strong&gt; If I SSH in and manually change something (which, with this setup, I basically never do anymore), nothing notices. A real continuously-reconciling controller would flag it. Here, the next deploy would just overwrite the drift.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No multi-node.&lt;/strong&gt; This is one home server. If I wanted HA across multiple machines I'd need something heavier. For a homelab this is fine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploys are push-triggered, not continuously reconciled.&lt;/strong&gt; Images are pulled from GHCR, but the deploy itself is kicked off by a workflow, not by a controller watching the registry. If GHCR has a new image and no workflow ran, nothing happens. You can bolt &lt;code&gt;on: registry_package&lt;/code&gt; onto the workflow to trigger on image publish, which I do in a couple spots.&lt;/li&gt;
&lt;/ul&gt;
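&lt;p&gt;The &lt;code&gt;registry_package&lt;/code&gt; trigger mentioned above is a small addition to the workflow's trigger block:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;on:
  registry_package:
    types: [published]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;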

&lt;p&gt;None of these have bitten me in practice for a home setup, but they're worth knowing if you want to copy this.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;If any of this sounds useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/nckslvrmn/github-multi-runner" rel="noopener noreferrer"&gt;github-multi-runner&lt;/a&gt;&lt;/strong&gt; — drop in the compose file, point at a JSON config, set a PAT, done.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/nckslvrmn/docker-compose-deploy" rel="noopener noreferrer"&gt;docker-compose-deploy&lt;/a&gt;&lt;/strong&gt; — add it as a step in any workflow that targets a self-hosted runner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/nckslvrmn/whisper" rel="noopener noreferrer"&gt;whisper&lt;/a&gt;&lt;/strong&gt; — has the full worked example of the release → build → deploy chain in &lt;code&gt;.github/workflows/docker.yml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The whole thing clicks together in an afternoon. If you've got a home server and you've been wishing for a lighter-weight GitOps story, I think this is a pretty good one.&lt;/p&gt;

</description>
      <category>github</category>
      <category>docker</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Overwriting Shared Libraries in AWS Lambda</title>
      <dc:creator>Nick Silverman</dc:creator>
      <pubDate>Mon, 18 May 2020 23:26:39 +0000</pubDate>
      <link>https://dev.to/nckslvrmn/overwriting-shared-libraries-in-aws-lambda-479h</link>
      <guid>https://dev.to/nckslvrmn/overwriting-shared-libraries-in-aws-lambda-479h</guid>
      <description>&lt;p&gt;The latest Ruby runtime for AWS Lambda runs Ruby 2.7. Though this version of ruby is only 6 months old, the version of OpenSSL that Lambdas instance of Ruby was compiled with is over 3 years old. You can verify that by running the function below and seeing what it returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'openssl'

def lambda_handler(event:, context:)
    return OpenSSL::OPENSSL_VERSION
end

# OpenSSL 1.0.2k  26 Jan 2017
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's old! It means Ruby's OpenSSL library is missing some key features like &lt;code&gt;SHA-3&lt;/code&gt;, &lt;code&gt;TLS 1.3&lt;/code&gt;, and the &lt;code&gt;scrypt&lt;/code&gt; KDF.&lt;/p&gt;

&lt;p&gt;I wanted to see if I could load a newer version of the OpenSSL shared library that Ruby loads, so I could leverage some of these shiny new features. It turns out &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html" rel="noopener noreferrer"&gt;AWS Lambda Layers&lt;/a&gt; were a big part of the answer. Per the documentation, a Lambda Layer's contents are made available to your Lambda code under the &lt;code&gt;/opt&lt;/code&gt; directory. Anyone who uses a lot of gem dependencies may have already come across this feature, as it's a great way to share gems across functions while keeping each function package fairly small.&lt;/p&gt;

&lt;p&gt;But interestingly enough, it's not just a place to load gems. Lambda also appends a path you can populate from a Layer (specifically &lt;code&gt;/opt/ruby/lib&lt;/code&gt;) to the &lt;code&gt;RUBYLIB&lt;/code&gt; environment variable, and that path ends up &lt;em&gt;prefixed&lt;/em&gt; to Ruby's &lt;code&gt;LOAD_PATH&lt;/code&gt;. This is where things get interesting.&lt;/p&gt;

&lt;p&gt;Now that we know a Lambda Layer can put a shared library on the auto-searched &lt;code&gt;LOAD_PATH&lt;/code&gt;, we can construct a Layer with the files needed to load our own version of OpenSSL. We need a newer &lt;code&gt;openssl.so&lt;/code&gt; (Ruby's compiled OpenSSL extension) built against the new OpenSSL, plus the &lt;code&gt;libssl.so.1.1&lt;/code&gt; and &lt;code&gt;libcrypto.so.1.1&lt;/code&gt; shared objects it links against.&lt;/p&gt;

&lt;p&gt;I was able to extract a copy of these files by installing the latest version of OpenSSL from my package manager (pacman) and installing Ruby 2.7 via RVM, so it was recompiled against that OpenSSL on my machine. In the end, I constructed a directory structure that looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── lib
│   ├── libcrypto.so -&amp;gt; libcrypto.so.1.1
│   ├── libcrypto.so.1.1
│   ├── libssl.so -&amp;gt; libssl.so.1.1
│   └── libssl.so.1.1
└── ruby
    └── lib
        └── openssl.so
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I then zipped that up and uploaded the zip as a new Lambda Layer attached to my function. Running the original version-printing function again, I now see &lt;code&gt;OpenSSL 1.1.1d  10 Sep 2019&lt;/code&gt;. Excellent! Now I can go generate all the &lt;code&gt;scrypt&lt;/code&gt; keys and initiate all the &lt;code&gt;TLS 1.3&lt;/code&gt; connections I want, right?&lt;/p&gt;

&lt;p&gt;Not exactly. It turns out Ruby has a fun little behavior when loading files. When calling &lt;code&gt;require&lt;/code&gt;, Ruby searches the &lt;code&gt;LOAD_PATH&lt;/code&gt; for the code you are trying to load, and it will happily load &lt;code&gt;.rb&lt;/code&gt; files &lt;strong&gt;and&lt;/strong&gt; shared libraries with the &lt;code&gt;.so&lt;/code&gt; extension. So when I tried to create a new &lt;code&gt;SHA-256&lt;/code&gt; digest, I was met with an unexpected error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'openssl'

def lambda_handler(event:, context:)
    return OpenSSL::Digest::SHA256.new
end

# uninitialized constant OpenSSL::Digest::SHA256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happened? Because my &lt;code&gt;openssl.so&lt;/code&gt; file now sits &lt;em&gt;ahead&lt;/em&gt; of Ruby's built-in &lt;code&gt;openssl.rb&lt;/code&gt; code, I was loading only the shared library, which defines some of the classes but not all the ones I expect. The workaround is quite simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'openssl.rb'

def lambda_handler(event:, context:)
    return OpenSSL::Digest::SHA256.new
end

# #&amp;lt;OpenSSL::Digest::SHA256: ...&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By specifying the &lt;code&gt;.rb&lt;/code&gt; extension, I am instructing Ruby to search its &lt;code&gt;LOAD_PATH&lt;/code&gt; until it finds the first file named &lt;code&gt;openssl.rb&lt;/code&gt;. That file ships with Ruby and is the code that loads all of the classes I expect to see, including an explicit call to load &lt;code&gt;openssl.so&lt;/code&gt;. This now allows me to use all of the shiny new features that OpenSSL 1.1.1(x) provides without having to use a &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html" rel="noopener noreferrer"&gt;Custom Runtime&lt;/a&gt;.&lt;/p&gt;
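&lt;p&gt;For example, the &lt;code&gt;scrypt&lt;/code&gt; KDF that is missing under OpenSSL 1.0.2 now works; the parameters here are illustrative only:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;require 'openssl.rb'

def lambda_handler(event:, context:)
    # derive a 32-byte key and return it hex-encoded
    key = OpenSSL::KDF.scrypt('my secret passphrase',
                              salt: OpenSSL::Random.random_bytes(16),
                              N: 2**14, r: 8, p: 1, length: 32)
    key.unpack1('H*')
end
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;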

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>ruby</category>
    </item>
  </channel>
</rss>
