<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ray Hao</title>
    <description>The latest articles on DEV Community by Ray Hao (@ray_hao).</description>
    <link>https://dev.to/ray_hao</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3844251%2Fb2545911-9480-4848-9f87-3d5b06b21b11.jpg</url>
      <title>DEV Community: Ray Hao</title>
      <link>https://dev.to/ray_hao</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ray_hao"/>
    <language>en</language>
    <item>
      <title>How I Cut Our GitHub Actions Pipeline Time by More Than 50%</title>
      <dc:creator>Ray Hao</dc:creator>
      <pubDate>Wed, 08 Apr 2026 09:54:31 +0000</pubDate>
      <link>https://dev.to/ray_hao/how-i-cut-our-github-actions-pipeline-time-by-more-than-50-4665</link>
      <guid>https://dev.to/ray_hao/how-i-cut-our-github-actions-pipeline-time-by-more-than-50-4665</guid>
      <description>&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;Our CI pipeline took 20 minutes on average to finish for every pull request. It wasn't reliable either: tool installation would occasionally fail due to network issues or rate limits. Our team needed a faster feedback loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TB
    A[PR opened / pushed] --&amp;gt; B[Install tools]
    B --&amp;gt; C[Kubernetes cluster setup]
    C --&amp;gt; D[Build services, Run tests, Lint]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The project is a medium-complexity Go codebase with several microservices. Every PR ran the same steps in each workflow: checkout, install a suite of binary tools, build, test, and so on. Build and unit tests were fast by themselves; most of the time went to installing those binary tools on every single run. And because these installations depended on external networks, they were a constant source of flaky failures unrelated to the code under review.&lt;br&gt;
We had been recording per-step timing via OpenTelemetry. The data showed that tool installation was repeated on every run despite never changing between PRs. That was the real target.&lt;/p&gt;
&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;The fix was straightforward in principle: build a Docker image with all tools pre-installed, host it on GitHub Container Registry, and run all workflows inside that image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart TB
    A[PR opened / pushed] --&amp;gt; B

    B[Compute content hash]
    B --&amp;gt; C{Image exists}

    D[Build &amp;amp; push]

    C -- no --&amp;gt; D
    D --&amp;gt; E
    C -- yes --&amp;gt; E

    E[All other workflows]

        style B fill:#fbbf24,stroke:#d97706,color:#000
        style C fill:#fbbf24,stroke:#d97706,color:#000
        style D fill:#fbbf24,stroke:#d97706,color:#000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Why Docker&lt;/strong&gt;&lt;br&gt;
I needed a way to install tools once and share them across every PR and every workflow. Docker is the obvious fit: every GitHub Actions runner already has it available, and GitHub Container Registry makes hosting the image seamless.&lt;/p&gt;
&lt;/blockquote&gt;
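As an illustration of what such a base image might contain, here is a sketch, not the project's actual Dockerfile; the specific tools and versions below are assumptions (the post only mentions Go, lint, and kind):

```dockerfile
# Hypothetical CI base image: Go toolchain plus the binary tools that
# previously had to be installed on every run.
FROM golang:1.22

# Pin tool versions explicitly; bumping them changes the Dockerfile,
# which changes the content hash and triggers a one-time rebuild.
RUN go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.59.1
RUN go install sigs.k8s.io/kind@v0.23.0

# Warm the module cache so jobs start with dependencies already present.
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
```

Anything that is slow or network-dependent to install belongs here; anything that changes per-PR does not.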

&lt;p&gt;In practice, I added two new workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;resolve-image&lt;/strong&gt; computes a hash of the files the base image Dockerfile depends on, checks if an image with that hash tag already exists in the registry, and outputs the result.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;build-image&lt;/strong&gt; builds and pushes the base image with that hash tag, triggered only when resolve-image reports the image doesn't exist yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The hash is computed from three files: the Dockerfile itself, &lt;code&gt;go.mod&lt;/code&gt;, and &lt;code&gt;go.sum&lt;/code&gt;. If none of these change, the tag is identical and the build is skipped entirely.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Compute content hash&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hash&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;HASH=$(cat \&lt;/span&gt;
      &lt;span class="s"&gt;Dockerfile \&lt;/span&gt;
      &lt;span class="s"&gt;go.mod \&lt;/span&gt;
      &lt;span class="s"&gt;go.sum \&lt;/span&gt;
      &lt;span class="s"&gt;| sha256sum | cut -d' ' -f1 | head -c 16)&lt;/span&gt;
    &lt;span class="s"&gt;echo "tag=sha-${HASH}" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
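Because the tag is a pure function of file contents, it can also be reproduced locally, e.g. to predict whether a PR will trigger a rebuild. A sketch (the sample files below stand in for the real ones; at a real repository root you would skip creating them):

```shell
# Recompute the image tag locally with the same pipeline as the
# "Compute content hash" step. Sample files make the snippet self-contained.
cd "$(mktemp -d)"
printf 'FROM golang:1.22\n' > Dockerfile
printf 'module example.com/acme\n' > go.mod
printf '\n' > go.sum

# Concatenate the inputs, hash them, keep the first 16 hex characters.
HASH=$(cat Dockerfile go.mod go.sum | sha256sum | cut -d' ' -f1 | head -c 16)
TAG="sha-${HASH}"
echo "$TAG"
```

Identical inputs always map to the same tag, which is exactly what makes the "does this image already exist" check reliable.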



&lt;p&gt;All other workflows first run resolve-image. If the image already exists, build-image is skipped and every subsequent job runs on the existing image immediately. If not, build-image runs first, then the rest follow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# In resolve-image: check whether the image already exists&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check if image exists in GHCR&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;check&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;if docker manifest inspect ghcr.io/acme/base:${{ steps.hash.outputs.tag }} \&lt;/span&gt;
      &lt;span class="s"&gt;&amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then&lt;/span&gt;
      &lt;span class="s"&gt;echo "exists=true" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"&lt;/span&gt;
    &lt;span class="s"&gt;else&lt;/span&gt;
      &lt;span class="s"&gt;echo "exists=false" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"&lt;/span&gt;
    &lt;span class="s"&gt;fi&lt;/span&gt;

&lt;span class="c1"&gt;# In build-image: skip the build if the image already exists&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push base image&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;needs.resolve-image.outputs.exists != 'true'&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v7&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/acme/base:${{ needs.resolve-image.outputs.tag }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
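Wired together at the job level, the pattern looks roughly like this. This is a sketch, not the project's actual workflow: the job names, the ghcr.io/acme/base path, and the final test job are illustrative.

```yaml
jobs:
  resolve-image:
    runs-on: ubuntu-latest
    outputs:
      tag: ${{ steps.hash.outputs.tag }}
      exists: ${{ steps.check.outputs.exists }}
    steps:
      - uses: actions/checkout@v4
      # ... the "Compute content hash" and "Check if image exists" steps above ...

  build-image:
    needs: resolve-image
    if: needs.resolve-image.outputs.exists != 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... the "Build and push base image" step shown above ...

  test:
    # Run whether build-image executed or was skipped; a real workflow
    # should additionally verify that resolve-image itself succeeded.
    needs: [resolve-image, build-image]
    if: ${{ !cancelled() }}
    runs-on: ubuntu-latest
    container:
      # For a private image, also set credentials: under container:.
      image: ghcr.io/acme/base:${{ needs.resolve-image.outputs.tag }}
    steps:
      - uses: actions/checkout@v4
      - run: go test ./...
```

The one subtlety is the test job's if: condition: without it, a skipped build-image would cause every job that needs it to be skipped as well.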



&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;

&lt;p&gt;Building the base image takes around 10 minutes, but in practice most PRs never trigger a rebuild. Thanks to &lt;a href="https://github.com/renovatebot/renovate" rel="noopener noreferrer"&gt;Renovatebot&lt;/a&gt; handling automatic dependency updates, base image builds happen almost exclusively on Renovatebot PRs (Go version upgrades, tool version bumps, etc.). PRs created by humans almost always skip the build entirely and run straight on the existing image.&lt;/p&gt;

&lt;p&gt;The flaky failures caused by tool installation have essentially disappeared.&lt;/p&gt;

&lt;p&gt;One unexpected benefit: since the base image already has all the tools pre-installed, it doubles as the foundation for our CodeSpaces development environment. Developers get the same tool versions locally as they do in CI, with no more "works on my machine" surprises. I'll cover that setup in the next post.&lt;/p&gt;
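As a teaser (the full setup is a later topic; the image path and tag here are placeholders), pointing a dev container at the same image is mostly a one-line change:

```jsonc
// .devcontainer/devcontainer.json (illustrative sketch)
{
  "name": "ci-base-dev",
  // Same image the CI workflows run in; the tag is a placeholder here.
  "image": "ghcr.io/acme/base:sha-0123456789abcdef"
}
```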

&lt;h2&gt;
  
  
  What Else We Tried
&lt;/h2&gt;

&lt;p&gt;Before landing on the base image approach, my first assumption was that the Kubernetes cluster setup was the bottleneck - we use &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt; to run dependencies like PostgreSQL and NATS. I replaced kind with &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;k3s&lt;/a&gt;. It saved 1–2 minutes, but nothing significant on its own.&lt;/p&gt;

&lt;p&gt;That change is still worth revisiting. Now that the base image exists and is also used in CodeSpaces, replacing kind with k3s across both environments might tell a different story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is part 1 of a 3-part series:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://blog.cloudy9101.com/posts/faster-ci-with-a-base-docker-image/" rel="noopener noreferrer"&gt;&lt;strong&gt;How I Cut Our GitHub Actions Pipeline Time by More Than 50%&lt;/strong&gt;&lt;/a&gt; ← you are here&lt;/li&gt;
&lt;li&gt;Reusing the CI base image in CodeSpaces&lt;/li&gt;
&lt;li&gt;Replacing kind with k3s in CI and CodeSpaces&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>githubactions</category>
      <category>docker</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
