DEV Community

우병수

Posted on • Originally published at techdigestor.com

GitLab CI/CD Is Great — Until You Hit the Free Tier Limits. Here's What I Run Instead.

TL;DR: A pipeline sitting in queue for 45 minutes while we're mid-sprint is what finally broke me. Not 5 minutes.


What's in this article

  1. The Moment I Started Looking for Alternatives
  2. The Current Stack I Landed On (Spoiler: It's Not One Tool)
  3. GitHub Actions — The Obvious One You Should Probably Just Use
  4. Woodpecker CI — The Self-Hosted Option That Doesn't Make You Hate Yourself
  5. Jenkins — I Know, I Know. But Hear Me Out.
  6. Forgejo Actions — If You're Already Self-Hosting Your Git
  7. Side-by-Side Comparison — What Actually Matters
  8. When to Pick What — Match the Tool to Your Situation

The Moment I Started Looking for Alternatives

A pipeline sitting in queue for 45 minutes while we're mid-sprint is what finally broke me. Not 5 minutes. Not 15. Forty-five minutes on a shared runner, waiting for a Docker build to start, while my team is asking if the staging deploy is done yet. That's not a CI system doing its job — that's a CI system actively slowing you down. GitLab.com's shared runners get hammered during peak hours, and with the free tier capped at 400 compute minutes per month, you're not just fighting queue times — you're watching a meter tick down with every build.

Four hundred minutes sounds like a lot until you do the math. A team of four, three to five active repos, Docker builds that take 8-12 minutes each — you can burn through that in a week of normal development. A single bad week with a flaky test suite that you're iterating on? Gone. The free tier used to be 2,000 minutes, then GitLab cut it, and teams that built workflows around that number got a rude surprise. I'm not criticizing the business decision, but I am saying it changed the calculus for small teams completely.
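To make that math concrete, here's a back-of-envelope sketch. The team size, push rate, and build time below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: how long a 400-minute monthly CI quota lasts.
# All inputs are illustrative assumptions.
devs = 4
pushes_per_dev_per_day = 2   # each push triggers a full pipeline
minutes_per_build = 10       # a Docker build in the 8-12 minute range

burn_per_day = devs * pushes_per_dev_per_day * minutes_per_build
days_until_empty = 400 / burn_per_day

print(burn_per_day)       # minutes of quota burned per working day
print(days_until_empty)   # days until the meter hits zero
```

With those numbers the quota burns at 80 minutes a day and is gone in five working days: one normal week, before a single flaky-test iteration loop.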

Self-hosting GitLab is the answer GitLab will give you, and technically they're right. But let's be honest about what that means. The minimum spec is 4GB RAM — and that's the "works but feels sluggish" minimum. In practice you want 8GB to not hate your life. You're also taking on Omnibus package upgrades, Postgres maintenance, backup scripts, and the occasional "Sidekiq is stuck again" incident at 11pm. I ran a self-hosted GitLab instance for about eight months. It worked, but it needed genuine babysitting. If you have a dedicated ops person, fine. If it's you and two other devs shipping product, that overhead is real.

What I actually needed was pretty simple in scope: reliable CI for three to five repos, Docker image builds pushed to a registry, and occasional Kubernetes deploys via kubectl or Helm. Not exotic. Not enterprise-scale. This is the workload that basically every small SaaS team has, and it turns out multiple tools handle it better — or at least more cheaply — than GitLab.com's free tier does in 2024. The requirements that mattered most to me: no artificial queue delays, Docker-in-Docker or equivalent support without hacks, and YAML config I don't need to memorize a proprietary DSL for.

The search wasn't about finding something perfect. It was about finding something that wouldn't interrupt a sprint. There's a real difference between a tool that has limitations you can plan around versus one that introduces unpredictable friction. A 45-minute queue is unpredictable friction. A 400-minute monthly cap you can hit mid-week is unpredictable friction. Once I framed it that way, the alternatives started looking a lot more attractive — even ones with their own trade-offs. For a broader look at tools that can simplify your whole workflow beyond just CI/CD, the Productivity Workflows guide covers a solid set of options worth pairing with whatever pipeline tool you land on.

The Current Stack I Landed On (Spoiler: It's Not One Tool)

After about three months of tool-hopping, I stopped looking for the one replacement and accepted reality: different pipelines have different requirements, and forcing everything through a single CI/CD tool creates more pain than it solves. The honest answer to "what replaced GitLab CI for us" is a three-tool setup, and each tool earned its slot by being genuinely better at one thing than everything else I tried.

GitHub Actions — The Default for Anything Touching GitHub

If your code is already on GitHub, there's almost no argument for routing it through an external CI. The tight integration is the feature. Your pipeline YAML lives in .github/workflows/, secrets are scoped to repos and environments, and the free tier gives you 2,000 minutes/month for private repos — enough for most small-to-mid teams who aren't running integration tests on every commit. For public repos, it's unlimited. The thing that caught me off guard was how much faster matrix builds feel compared to what I was used to, because GitHub spins up fresh runners in parallel without any queue drama at low concurrency.

# .github/workflows/ci.yml
name: CI

on:
  push:
    branches: [main, dev]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm test

That cache key on actions/setup-node is not optional — skip it and you'll burn minutes reinstalling node_modules every run. The gotcha nobody mentions upfront: GitHub Actions has a 6-hour job timeout and a 35-day artifact retention default that silently deletes things. Set retention-days explicitly if you're storing build artifacts you actually need later.
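For artifacts you actually need, the retention override goes on the upload step. A minimal sketch — the artifact name and path are placeholders:

```yaml
      - name: Upload build output
        uses: actions/upload-artifact@v4
        with:
          name: dist          # placeholder artifact name
          path: dist/         # placeholder path
          retention-days: 90  # explicit, instead of the silent repo default
```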

Woodpecker CI — Self-Hosted Without the Weight

I picked up Woodpecker for an internal tool that couldn't leave our infrastructure. Gitea hosts the code, so GitLab's auth model was irrelevant, and Jenkins felt like pulling in a freight train to deliver a pizza. Woodpecker is a fork of the old Drone CI codebase, runs as a single binary plus a separate agent, and the whole thing sits inside a docker-compose.yml that takes maybe 20 minutes to configure from scratch. The free tier is "you run it yourself," which means the only cost is server time.

# docker-compose.yml (minimal Woodpecker setup)
version: '3'

services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    ports:
      - "8000:8000"
    volumes:
      - woodpecker-server-data:/var/lib/woodpecker/
    environment:
      - WOODPECKER_OPEN=false
      - WOODPECKER_GITEA=true
      - WOODPECKER_GITEA_URL=https://gitea.yourdomain.com
      - WOODPECKER_GITEA_CLIENT=${WOODPECKER_GITEA_CLIENT}
      - WOODPECKER_GITEA_SECRET=${WOODPECKER_GITEA_SECRET}
      - WOODPECKER_AGENT_SECRET=${WOODPECKER_AGENT_SECRET}

  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    command: agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=${WOODPECKER_AGENT_SECRET}

volumes:
  woodpecker-server-data:

The trade-off: Woodpecker's plugin ecosystem is sparse compared to GitHub Actions' marketplace. You'll write more shell than you'd like, and the docs have gaps — specifically around conditional step execution and multi-pipeline dependencies, which you'll hit fast on anything beyond a basic build-test-deploy flow. I solved most of it with when clauses and a bit of cursing. Still worth it for the self-hosted use case; nothing else comes close at this weight class.
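For reference, the conditional execution I leaned on looks like this in .woodpecker.yml — a sketch in the same when style, with a placeholder image and command:

```yaml
steps:
  - name: deploy
    image: alpine:3.19            # placeholder image
    commands:
      - echo "deploying"          # placeholder command
    when:
      branch: [main, release/*]   # run only on these branches
      event: [push, tag]          # and only for these events
```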

Jenkins — Still Standing for One Specific Reason

I didn't want to keep Jenkins. I tried migrating a legacy Java monorepo off it twice, and both times I walked it back. The reason Jenkins survives in this stack is its incremental build support via the Incremental Build Plugin combined with Maven's --pl flag for multi-module projects. The repo has around 40 Maven modules and a full build takes 18 minutes. With Jenkins fingerprinting changed modules and skipping untouched ones, PRs that touch one service build in under 4 minutes. No other tool gave me that out of the box without significant custom scripting.

// Jenkinsfile (multi-module incremental build excerpt)
pipeline {
  agent any
  stages {
    stage('Detect Changes') {
      steps {
        script {
          // Top-level directory = Maven module. Join with commas,
          // because `mvn -pl` expects a comma-separated module list.
          def changedModules = sh(
            script: "git diff --name-only origin/main...HEAD | grep '/' | cut -d'/' -f1 | sort -u | paste -sd ',' -",
            returnStdout: true
          ).trim()
          env.CHANGED_MODULES = changedModules
        }
      }
    }
    stage('Build Changed') {
      when { expression { env.CHANGED_MODULES != '' } }
      steps {
        sh "mvn clean verify -pl ${env.CHANGED_MODULES} --also-make -T 4"
      }
    }
  }
}

Jenkins is free under MIT license. The operational overhead — JVM memory tuning, plugin compatibility roulette, the ancient UI — is real. I budget roughly an hour a month babysitting it, which is worth it for that one repo and nothing else. I wouldn't add a second Jenkins instance for any reason.

The Actual Cost Breakdown

  • GitHub Actions: $0 — public repos unlimited, private repos 2,000 free minutes/month on the Free plan. GitHub Team is $4/user/month and bumps that to 3,000 minutes. Self-hosted runners on GitHub Actions are free regardless of plan, so if you have spare compute, that 2,000 minute cap disappears entirely.
  • Woodpecker CI: $0 for the software. You pay for the server. A $6/month VPS handles a team of 5–8 engineers with comfortable headroom.
  • Jenkins: $0 for the software. Runs on an existing internal server we already paid for, so marginal cost is effectively zero.

The total extra spend over GitLab's free tier is around $6/month for the Woodpecker host, and I'd already argued that server budget for Gitea. If your team is under 5 people, all of this fits on GitHub's free tier plus a single cheap VPS and you're done. The complexity cost is real — three tools means three mental models — but the maintenance burden is lower than keeping GitLab runners healthy, and I say that having done both.

GitHub Actions — The Obvious One You Should Probably Just Use

If you're on GitHub already, switching to anything else for CI/CD is a choice you need to justify. I don't mean that in a dismissive way — I mean the friction cost of using an external system is real, and Actions eliminates it entirely. The free tier gives you 2,000 minutes/month for private repos and unlimited minutes for public repos. For most small-to-medium teams shipping 5-10 times a day, 2,000 minutes is plenty. The minute you start doing matrix builds across 4 Node versions × 3 OS combinations, you're burning 12× your normal pipeline time per push. I've seen teams blow through their quota in 10 days doing exactly that.

Here's a real deployment workflow that builds a Docker image and pushes it to GitHub Container Registry. This isn't a toy example — I've used this pattern in production:

name: Build and Deploy

on:
  push:
    branches: [main]

permissions:
  contents: read
  packages: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

Notice the permissions block at the top. This is the gotcha that cost my team half a day in 2023. GitHub quietly changed the default GITHUB_TOKEN permission from write to read. If you have old workflows without an explicit permissions block, your Docker push to GHCR will fail with a cryptic 403. The error message doesn't say "permission denied on packages" — it says something vague about authentication that sends you down a rabbit hole of rotating PATs and checking org settings. You need packages: write explicitly declared. Add it to every workflow that touches GHCR, releases, or Pages, and you'll never hit this.

The marketplace is where Actions genuinely earns its reputation. actions/setup-node@v4 and docker/build-push-action@v5 are maintained by GitHub and Docker respectively — they're not some random community action that'll break when Node 22 ships. I've stopped rolling my own setup steps for Go, Python, and Java because the official actions handle PATH manipulation, caching hooks, and version resolution better than my bash scripts did. That said, audit any community action before you use it. Check the source, check the star count, check the last commit date. Pinning to a commit SHA instead of a tag (uses: some-action@abc1234) is the right call for anything touching secrets.
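Pinning looks like this in practice — the SHA below is a placeholder, not a real commit:

```yaml
    steps:
      # A tag can be moved to new code after the fact; a full commit SHA cannot.
      - uses: some-org/some-action@1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b  # placeholder SHA; note the tag it maps to in a comment
```

Dependabot can keep SHA pins updated for you, so the maintenance cost of pinning is lower than it looks.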

Caching is not optional if you care about build speed, and it doesn't come configured by default. Here's the cache block that shaved about 40% off our Node.js pipeline — roughly 3 minutes on a 7-minute build:

      - name: Cache node modules
        uses: actions/cache@v3
        with:
          path: |
            ~/.npm
            node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

The hashFiles('**/package-lock.json') key means the cache busts only when your lockfile changes — not on every push. The restore-keys fallback means you still get a partial cache hit even after adding a new dependency. Without this, every run does a cold npm ci from scratch. Combined with the cache-from: type=gha flag in the Docker build step above, you're reusing both the npm layer and Docker build layers across runs. The Docker layer caching alone is worth implementing just for the sanity of not watching a 500MB base image download on every triggered workflow.

The honest trade-off with Actions versus something like Woodpecker or Forgejo: you're locked into GitHub's infrastructure and pricing model. If GitHub raises prices or kills the free tier, you're migrating. The YAML syntax is also genuinely verbose — expressing conditional logic across jobs requires more boilerplate than it should. But the debugging experience (live logs, re-run failed jobs, step-level timing) is solid, and the integration with PRs, environments, and branch protection rules is tight enough that switching away carries real hidden costs. For teams already on GitHub, the bar for "good enough reason to use something else" is high.

Woodpecker CI — The Self-Hosted Option That Doesn't Make You Hate Yourself

I'll be direct: most self-hosted CI tools feel like punishment. Jenkins needs a Java degree to configure. Concourse has a learning curve that'll consume a sprint. Woodpecker CI is the exception. It forked from Drone CI back when Drone went proprietary, kept everything that made Drone pleasant to work with, and the community has been actively shipping improvements ever since. MIT licensed, no enterprise gate on features you actually need, and you can have it running before your coffee gets cold.

Up in Under 10 Minutes — For Real

Here's the actual docker-compose.yml. This isn't a simplified illustration — this is what I run on a $6/month Hetzner VPS connected to Gitea:

version: '3'

services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    ports:
      - 8000:8000
      - 9000:9000
    volumes:
      - woodpecker-server-data:/var/lib/woodpecker/
    environment:
      - WOODPECKER_OPEN=true
      - WOODPECKER_HOST=https://ci.yourdomain.com
      - WOODPECKER_GITEA=true
      - WOODPECKER_GITEA_URL=https://git.yourdomain.com
      - WOODPECKER_GITEA_CLIENT=your_oauth_client_id
      - WOODPECKER_GITEA_SECRET=your_oauth_secret
      - WOODPECKER_AGENT_SECRET=a_random_secret_you_generate
    restart: unless-stopped

  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    command: agent
    depends_on:
      - woodpecker-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=a_random_secret_you_generate
    restart: unless-stopped

volumes:
  woodpecker-server-data:

Run docker compose up -d, set up an OAuth app in Gitea (or GitHub, or GitLab — your choice), paste the credentials, and you're done. The GRPC port 9000 is the agent-to-server channel. Don't expose it publicly; keep it internal. The thing that caught me off guard: WOODPECKER_OPEN=true lets any authenticated user from your Git host create pipelines. Lock that down with WOODPECKER_ORGS if you're not running this for just yourself.
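The lockdown is a one-line change on the server environment. The org names below are placeholders, and my understanding is the value is a comma-separated list:

```yaml
    environment:
      - WOODPECKER_OPEN=false            # closed registration entirely, or:
      - WOODPECKER_ORGS=your-org,infra   # placeholder org names; only their members get in
```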

The Pipeline Syntax Is Clean — Genuinely

If you've ever written a Drone pipeline, the .woodpecker.yml syntax will feel like home. If you haven't, it's still easier to read than GitHub Actions YAML on a bad day.

steps:
  - name: test
    image: golang:1.22
    commands:
      - go test ./...

  - name: build
    image: golang:1.22
    commands:
      - CGO_ENABLED=0 go build -o bin/app .
    when:
      branch: main

  - name: deploy
    image: appleboy/drone-ssh
    settings:
      host: prod.yourdomain.com
      username: deploy
      key:
        from_secret: SSH_PRIVATE_KEY
      script:
        - systemctl restart myapp
    when:
      branch: main
      event: push

Each step runs in its own container. Secrets come from the UI and get injected via from_secret. Workspace is shared between steps by default, so your build artifacts are available downstream without any artifact upload/download dance. The when block is expressive enough to handle most real-world branching logic without making your YAML look like Kubernetes config.

The Honest Resource Story

At idle, Woodpecker server + one agent sits around 150MB RAM combined. During a build, it spins up Docker containers on the agent — so your RAM usage scales with what your pipeline actually does, not with Woodpecker's overhead. A $6/month VPS with 2GB RAM can comfortably handle a small team's CI workload. I've run teams of 4-5 developers through this setup with zero complaints about queue times. If you hit capacity, you just spin up more agents — they connect back to the same server, no config changes required on the server side.

The Plugin Situation — Eyes Open

This is where I'll be straight with you: the plugin ecosystem is substantially smaller than GitHub Actions. The Drone plugin registry mostly works since Woodpecker is compatible with most Drone plugins, but you'll hit gaps. Need to publish to a niche registry or trigger a specific cloud provider's deployment API? There's a decent chance you'll end up writing a thin Docker image yourself that wraps a CLI tool and reads Woodpecker's PLUGIN_* environment variables. That's not the end of the world — a basic plugin is maybe 20 lines of shell and a two-line Dockerfile — but it's real work that GitHub Actions avoids with a 30-second marketplace search. Budget for it if your pipeline needs anything exotic.
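To show how thin "thin" is: Woodpecker hands each settings: key to the plugin container as an uppercased PLUGIN_* environment variable, so the plugin body can be a few lines of shell. The setting names and message format below are hypothetical:

```shell
# Hypothetical notify plugin: reads the PLUGIN_* vars Woodpecker injects
# from a step's `settings:` block; defaults are for running the sketch standalone.
PLUGIN_TARGET="${PLUGIN_TARGET:-#builds}"           # hypothetical 'target' setting
PLUGIN_MESSAGE="${PLUGIN_MESSAGE:-build finished}"  # hypothetical 'message' setting

# A real plugin would call curl or a CLI here; the sketch just prints.
echo "notifying ${PLUGIN_TARGET}: ${PLUGIN_MESSAGE}"
```

Package that with a two-line Dockerfile (FROM alpine, ENTRYPOINT on the script), push the image, and reference it from a pipeline step like any other plugin.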

When Woodpecker Wins Unambiguously

  • Air-gapped environments: No outbound call home, no telemetry, runs entirely inside your network. Ideal for defense contractors, financial institutions with strict network segmentation, or anyone with an auditor who asks uncomfortable questions about SaaS CI tools phoning home.
  • Unlimited private repo minutes: GitHub Actions gives you 2,000 free minutes/month on private repos, then charges per minute. Woodpecker doesn't meter minutes. Run 10,000 minutes of builds — it costs you the same VPS you were already paying for.
  • Data sovereignty: Your code, your secrets, your build logs — none of it touches someone else's server. If your team handles regulated data or you've made a contractual promise about where code processes, this removes an entire compliance conversation.
  • Gitea/Forgejo shops: If you're already self-hosting your Git, Woodpecker is the natural pairing. The integration is first-class; GitHub Actions has no equivalent for this setup.

The one scenario where I'd steer you elsewhere: if your team lives in GitHub and you need a rich ecosystem of pre-built actions for things like OIDC-based cloud auth, SLSA provenance, or complex Docker layer caching strategies, GitHub Actions is just going to be less friction. Woodpecker is the right answer when control and cost matter more than ecosystem breadth — which, for a surprising number of teams, is exactly the situation they're actually in.

Jenkins — I Know, I Know. But Hear Me Out.

I'm not going to tell you to start a new project on Jenkins in 2024. I'd actually talk you out of it. But last year I inherited a Java monorepo — 847 Maven modules, a build that took 40 minutes on a good day, and a Jenkinsfile that had been tuned over three years by people who no longer worked there. The thing worked. Module-level caching, parallelized test suites split by estimated duration, Slack notifications with actual failure diffs. Ripping that out to move to GitHub Actions would have been a month of archaeology with zero user-visible upside. So we kept it. And honestly? It was the right call.

The declarative pipeline syntax has gotten genuinely readable since the bad old days of Groovy scripting hell. Here's a real-world skeleton that covers the patterns you'll actually need:

pipeline {
  agent { label 'java-21' }

  options {
    buildDiscarder(logRotator(numToKeepStr: '20'))
    timeout(time: 60, unit: 'MINUTES')
    disableConcurrentBuilds()
  }

  environment {
    MAVEN_OPTS = '-Xmx4g -XX:+UseG1GC'
  }

  stages {
    stage('Compile') {
      steps {
        sh 'mvn -T 4 compile -q --no-transfer-progress'
      }
    }

    stage('Test') {
      parallel {
        stage('Unit') {
          steps {
            sh 'mvn -pl core,api,service test -q'
          }
        }
        stage('Integration') {
          agent { label 'docker-host' }
          steps {
            sh 'mvn -pl integration-tests verify -Pdocker'
          }
        }
      }
    }

    stage('Package') {
      when { branch 'main' }
      steps {
        sh 'mvn -DskipTests package -q'
        archiveArtifacts artifacts: 'target/*.jar'
      }
    }
  }

  post {
    failure {
      slackSend channel: '#builds', color: 'danger',
        message: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER} — ${env.BUILD_URL}"
    }
  }
}

That parallel block is the thing that earns its keep on a large monorepo. Jenkins has had native parallel stage support for years, and the agent-per-stage model lets you run Docker-dependent integration tests on a separate node without poisoning your compile environment. GitHub Actions can do this too, but you're rewriting matrix configs and figuring out artifact passing between jobs. In Jenkins, if the Jenkinsfile already does it, it already does it.

Now for the honest part. Jenkins has 1,800+ plugins, and the thing that caught me off guard the first time I maintained it seriously was discovering how many of them are effectively abandonware. The Git plugin, Blue Ocean, Pipeline — fine, actively maintained. But you'll find plugins with their last commit in 2019, a string of open CVEs, and a Jenkins compatibility matrix that breaks every minor release. My rule: before you install any plugin, check the GitHub repo directly. If the last commit is older than 18 months, assume you're adopting a maintenance burden. The plugin ecosystem is where Jenkins projects go to accumulate technical debt silently.

The ops cost is real and you should budget for it explicitly. Jenkins itself is free. The time is not. Expect to spend real hours — not "a quick 20 minutes" — on things like: JVM heap tuning when builds start dying with OutOfMemoryError after a team scales up, agent provisioning when your node count grows, and plugin upgrade cycles that break something downstream every couple of months. On the monorepo project I mentioned, we had a dedicated half-day every sprint just for Jenkins housekeeping. That's not a complaint — it was worth it given what we'd have spent rewriting — but don't walk in expecting zero maintenance overhead just because the license is free.

So here's the actual decision framework: if you're starting fresh, don't pick Jenkins. GitHub Actions, Woodpecker CI, or Forgejo Actions will give you 80% of the capability with 10% of the operational surface area. But if you're inheriting Jenkins and the pipelines are working, do not rewrite them on principle. "We should modernize our CI" is not a reason. "We can't hire anyone who knows Groovy and the build breaks monthly" — that's a reason. "The current system doesn't support the deployment patterns our architecture needs" — also a reason. Gut-feel aversion to an older tool is not. The best CI system is the one your team can actually operate and improve without it becoming a project in itself.

Forgejo Actions — If You're Already Self-Hosting Your Git

The pitch is straightforward: if you're already running a self-hosted Forgejo instance for Git, you're one binary away from having CI/CD that speaks fluent GitHub Actions syntax. Forgejo is the community fork of Gitea — it split off when Gitea's governance drama got messy — and Forgejo Actions was designed from the start to be GitHub Actions-compatible. Not "inspired by" or "similar to." Actual YAML compatibility, to the point where migrating a simple pipeline is mostly a find-and-replace on github.com/ references.

The runner itself is a Go binary called forgejo-runner. Single file, no dependencies, runs as a systemd service. You download it, register it against your Forgejo instance, and you're done. Registration looks like this:

forgejo-runner register \
  --no-interactive \
  --instance https://git.yourcompany.com \
  --token YOUR_RUNNER_TOKEN \
  --name build-server-01 \
  --labels ubuntu-latest:docker://ubuntu:22.04

The --labels flag is where it gets interesting — you're mapping GitHub Actions runner labels to actual Docker images. So when your pipeline says runs-on: ubuntu-latest, the runner knows to spin up an Ubuntu 22.04 container. That mapping is completely under your control, which means you can make ubuntu-latest point to a hardened internal image instead of pulling from Docker Hub. For compliance-heavy environments, that matters a lot.

A real pipeline YAML barely changes from GitHub Actions. Here's what a typical build job looks like:

name: Build and Test

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run tests
        run: |
          go test ./...
          go build -o ./bin/app

That runs on Forgejo Actions unchanged. The thing that caught me off guard was the uses: directive — it still pulls actions from GitHub by default, which means your air-gapped setup breaks the moment you reference actions/checkout. The fix is to either mirror those actions in your Forgejo instance and reference them as uses: your-org/checkout@v3, or use the FORGEJO_ACTIONS_DEFAULT_ACTIONS_URL environment variable to redirect all action fetches to your internal instance. This is not prominently documented. Check the issue tracker for the current state of this before you go down the air-gap path.

The honest maturity caveat: Forgejo Actions is younger than GitHub Actions, Jenkins, or Drone. The core functionality is solid, but edge cases surface. Container networking behavior in nested Docker scenarios has had bugs. Artifact upload/download between jobs has had quirks. I'd strongly recommend browsing the issue tracker on Codeberg before committing this to anything business-critical, specifically filter for open issues tagged area/actions. The community is responsive and things move fast — but "fast" and "stable enough for your payment processing pipeline" are different standards. For internal tools, developer experience repos, and anything where a broken build annoys rather than costs money, this stack is genuinely excellent. Fully self-hosted, zero external dependencies, no per-seat pricing, no SaaS vendor to trust with your source code.

Side-by-Side Comparison — What Actually Matters

Before you pick a tool, burn this into your brain: the biggest difference between these options isn't features — it's where your pain lives. GitHub Actions puts the pain in your billing dashboard after month three. Jenkins puts it in your ops team's lap every Tuesday. Woodpecker puts it in your initial setup weekend. Pick your poison consciously.

| Tool | Free Minutes/Month | Self-Host | Docker Support | YAML Complexity | Plugin/Marketplace | Min Server RAM | Dealbreaker |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GitHub Actions | 2,000 (private repos) | Self-hosted runners only | Yes | Medium | Excellent | N/A (hosted) | Minute limits on private repos |
| Woodpecker CI | Unlimited (self-host) | Yes | Yes | Simple | Growing | ~256MB | Thin plugin ecosystem |
| Jenkins | Unlimited (self-host) | Yes | Yes | Verbose | Huge but aging | ~512MB+ | Maintenance burden |
| Forgejo Actions | Unlimited (self-host) | Yes | Yes | GitHub-compatible | Small | ~512MB (full stack) | You're hosting the whole Git platform too |

GitHub Actions: The Minute Trap

GitHub Actions is genuinely the best developer experience at this table — the marketplace has actions for almost everything, the YAML is reasonable once you get past reusable workflows, and the integration with your repo is smooth. The thing that caught me off guard was how fast 2,000 minutes evaporates on a team of four running tests on every push to every branch. A single Node.js test suite with a cold npm install can eat 8–10 minutes per run. Do the math: that's 200–250 runs per month before you're paying. If all your repos are public, this is a non-issue — public repos get unlimited minutes. But most commercial work isn't public. The moment you start scaling, you're either setting up self-hosted runners (which defeats the "no infrastructure" value prop) or you're on the $4/user/month Team plan. The dealbreaker isn't the tool; it's the ceiling.

Jenkins: Power You'll Pay For in Time, Not Money

Jenkins can do literally anything. That's also its curse. A Jenkinsfile for a moderately complex pipeline looks like someone tried to write Java inside YAML inside Groovy, and they weren't having a good day:

pipeline {
  agent { docker { image 'node:18-alpine' } }
  stages {
    stage('Install') {
      steps {
        sh 'npm ci'
      }
    }
    stage('Test') {
      steps {
        sh 'npm test'
      }
      post {
        always {
          junit 'test-results/**/*.xml'
        }
      }
    }
  }
}

That's not terrible, but the moment you add parallel stages, shared libraries, and credential injection, you're maintaining a small software project just to run your CI. The plugin ecosystem has thousands of entries, but a depressing number of them haven't been updated since 2019. I've spent hours debugging a failing Docker Pipeline plugin only to find out the fix was "upgrade Jenkins core, but that breaks three other plugins." If you already have a Jenkins instance and a dedicated DevOps engineer, it's fine. For a small team starting fresh today, the maintenance tax is real and ongoing.

Woodpecker CI: The Quiet Underdog

Woodpecker is what you reach for when you want GitLab CI's simplicity but you're self-hosting on a budget machine. The pipeline config is genuinely clean — each step is a Docker container, full stop. No abstractions stacked on abstractions:

steps:
  - name: test
    image: node:18-alpine
    commands:
      - npm ci
      - npm test

  - name: build
    image: node:18-alpine
    commands:
      - npm run build
    when:
      branch: main

That readability matters. Junior devs can actually modify this without fear. The 256MB RAM floor is not a typo — I've run it on a $4/month VPS without drama. The honest limitation is the plugin list. If you need Slack notifications or Docker Hub pushes, you're using generic Docker images and writing the integration yourself, or you're waiting for the community to catch up. For greenfield projects with straightforward pipelines, this is often the right call. For teams migrating from GitLab with a pile of complex includes and dynamic child pipelines, you'll hit walls.

Forgejo Actions: The Full Self-Hosted Stack

Forgejo Actions deserves a specific callout because it's the closest drop-in replacement for the entire GitLab experience — you get the Git hosting, the issue tracker, and the CI pipeline all in one self-hosted package. The Actions syntax is intentionally GitHub-compatible, which means you can often copy a .github/workflows file, rename the directory to .forgejo/workflows, and have it work. The gotcha nobody mentions upfront: you're not just running a CI server, you're running a full Git forge. That 512MB number is for the complete stack — Forgejo itself plus a runner. On a shared server, plan for more. If your team is already paying for GitHub or GitLab hosting and just wants to replace CI, Forgejo is overkill. But if data sovereignty is a requirement — air-gapped environments, strict compliance requirements, or you simply don't want a third party holding your code — this is the most complete answer at zero licensing cost.

The Real Decision Framework

  • Your repos are public and you're on GitHub already: Actions is the obvious answer. The marketplace quality advantage is substantial and the minute limits don't apply.
  • You have a $5/month VPS and want something that just works: Woodpecker CI. Seriously. Get it running in an afternoon with Docker Compose and move on with your life.
  • You're inheriting an existing Jenkins setup: Don't migrate unless you have a specific pain point. The devil you know, etc.
  • You need full stack self-hosting for compliance or air-gap reasons: Forgejo. Budget an extra day for runner setup and network configuration.
  • You have a large team with complex pipelines and want Jenkins power without the Groovy: Reconsider GitLab CE, or look at Woodpecker for simpler pipelines and Jenkins only for the pipelines that actually need it.

When to Pick What — Match the Tool to Your Situation

Solo dev or small team already on GitHub? Stop reading and just use GitHub Actions.

I mean this sincerely — if your code is already on GitHub and you're spending time evaluating CI alternatives, you're over-engineering a solved problem. GitHub Actions is free for public repos and gives you 2,000 minutes/month on private repos. The YAML is annoying but the marketplace has an action for everything. The only reason to look elsewhere is if you've already burned through those minutes or you have a specific runner requirement. I've seen small teams waste a full sprint evaluating Drone CI and Woodpecker only to land back on GitHub Actions because the integration is just too tight to beat. Don't be that team.
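For reference, this is roughly all the workflow a typical small Node project needs — a generic sketch, not taken from any particular repo:

```yaml
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'   # built-in dependency caching
      - run: npm ci
      - run: npm test
```

If your entire CI need fits in that file, any further tool evaluation is a waste of a sprint.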

You have a VPS sitting around and need unlimited CI minutes: Woodpecker wins on setup speed.

Woodpecker CI is the fastest path from "I have a server" to "CI is running" — I've done it in under 20 minutes including DNS. It's a lightweight Go binary, a single docker-compose.yml, and you're done. The free tier is literally your own hardware. Here's the bare minimum to get running:

# docker-compose.yml for Woodpecker
version: '3'
services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    ports:
      - 8000:8000
    environment:
      - WOODPECKER_OPEN=true
      - WOODPECKER_HOST=https://ci.yourdomain.com
      - WOODPECKER_GITHUB=true
      - WOODPECKER_GITHUB_CLIENT=your_oauth_client_id
      - WOODPECKER_GITHUB_SECRET=your_oauth_secret
      - WOODPECKER_AGENT_SECRET=a_random_secret_here
    volumes:
      # persist the embedded SQLite database, or you lose repo/build state
      # every time the container is recreated
      - woodpecker-server-data:/var/lib/woodpecker/

  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=a_random_secret_here
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  woodpecker-server-data:

The thing that caught me off guard: Woodpecker's pipeline syntax is nearly identical to Drone CI's (it forked from Drone), so if you've ever used Drone, you'll feel at home immediately. The gotcha is that the plugin ecosystem is smaller — don't expect to find a polished plugin for every niche thing. For most standard build/test/deploy pipelines though, Docker steps cover everything you need. The docs are decent but thin on edge cases. Budget an extra hour if you're setting up matrix builds.
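For what it's worth, a basic matrix build looks like this — a sketch based on Woodpecker's matrix syntax, with the Node versions as example values:

```yaml
# .woodpecker/test.yml
matrix:
  NODE_VERSION:
    - 18
    - 20

steps:
  - name: test
    # the pipeline runs once per matrix value, with ${NODE_VERSION} substituted
    image: node:${NODE_VERSION}-alpine
    commands:
      - npm ci
      - npm test
```

Each matrix combination runs as its own pipeline, so two Node versions means two full runs on your hardware — plan agent capacity accordingly.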

Regulated industry, air-gapped or strict data residency requirements: Forgejo + Forgejo Actions on your own metal.

If you're in finance, healthcare, or any environment where a legal or compliance team has opinions about where your code lives, the answer is Forgejo — a community fork of Gitea — running on your own infrastructure. Forgejo Actions uses the same YAML syntax as GitHub Actions, which is huge. It means your developers already know how to write pipelines, and you can often copy a .github/workflows file directly into .forgejo/workflows with minimal changes. The runner is the forgejo-runner, a standalone binary you deploy wherever your jobs need to execute. Self-hosted means zero external network calls during CI — every dependency pull goes through your own Nexus or Artifactory mirror, every secret stays on your hardware. That's the actual compliance win, not a feature checkbox.
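To illustrate the compatibility claim, a file like this works in both forges — a minimal sketch, with the caveat that the `runs-on` label must match whatever labels you registered your forgejo-runner with (`docker` is a common default, not a guarantee):

```yaml
# .forgejo/workflows/ci.yml — same content would live in .github/workflows/ on GitHub
name: CI
on: [push]

jobs:
  test:
    runs-on: docker   # must match a label on your registered forgejo-runner
    steps:
      - uses: actions/checkout@v4
      - run: make test
```

The one structural difference is the directory name; the syntax inside is intentionally the same.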

You inherited Jenkins: do not migrate unless you can calculate a specific ROI number.

I've been in this situation twice. Both times the instinct was to rip it out and replace with something modern. Both times the correct answer was to leave it alone. Jenkins is ugly, the plugin ecosystem is a mess of half-maintained JARs, and the Groovy DSL for Jenkinsfiles can be genuinely painful. But here's the thing — it works. Your pipelines are already written. Your team knows where the logs are. Migration to anything else means rewriting every pipeline, re-training muscle memory, and debugging a new failure mode on a Friday afternoon. Unless Jenkins is actively blocking a business requirement (like you literally cannot get a plugin to work with a new tool you need), the migration cost doesn't pay back. If you do decide to migrate, document every single Jenkins job first. I've seen teams start a migration and discover 40 jobs nobody knew existed.

GitLab self-hosted running fine: this article genuinely isn't for you.

GitLab's self-managed tier gives you the full CI/CD feature set for free — runners, pipelines, schedules, environments, the lot. If you're running gitlab-runner on your own machines and things work, there's no reason to change tools. The only scenario where I'd revisit it is if your GitLab instance itself is becoming a maintenance burden — upgrades getting complex, Postgres eating memory, etc. In that case the conversation shifts to whether you want to run a lighter git host (like Forgejo) instead, and that's a different trade-off calculation.

Hitting GitLab.com compute limits specifically: GitHub migration is a realistic two-day job.

GitLab.com cut free CI minutes significantly, and that decision pushed a lot of teams to look around. If that's your situation and your codebase is a typical web app or library — not a massive monorepo with 200 pipelines — moving to GitHub + GitHub Actions is probably a two-day task, not a two-week project. The migration path is: export your GitLab repos with git push --mirror to GitHub, then manually rewrite your .gitlab-ci.yml files as GitHub Actions workflows. The syntax is different but the concepts map 1:1. The parts that take time are secrets (re-enter everything under Settings → Secrets), scheduled pipelines (re-create them as cron triggers in Actions), and any GitLab-specific features like needs: DAG syntax — which maps to needs: in GitHub Actions anyway, so that one's easy. Where teams lose time is usually on self-hosted runner setup if they have jobs that need specific hardware or internal network access.
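The scheduled-pipeline translation looks roughly like this. In GitLab the schedule itself lives in the UI (CI/CD → Schedules) and the job gates on the pipeline source; in Actions the cron expression lives in the workflow file:

```yaml
# GitLab: schedule configured in the UI, job gated on it
nightly-job:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - ./nightly.sh   # placeholder for whatever your nightly job runs

# GitHub Actions: the cron expression is in the workflow itself
on:
  schedule:
    - cron: '0 3 * * *'   # 03:00 UTC daily
```

Having the schedule in version control is one of the few places where the Actions way is strictly nicer.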

Migrating a .gitlab-ci.yml to GitHub Actions — The Parts That Trip You Up

The mental model shift that matters most: GitLab CI thinks in stages, GitHub Actions thinks in jobs with dependency graphs. In GitLab, everything in the same stage runs in parallel, and stages run sequentially. In GitHub Actions, there are no stages — you wire up execution order with needs:. This is actually more flexible, but it means you can't just lift your stage names and drop them in. If your GitLab pipeline has build → test → deploy, you replicate that ordering explicitly:

# GitLab
stages:
  - build
  - test
  - deploy

build-job:
  stage: build

test-job:
  stage: test

deploy-job:
  stage: deploy
# GitHub Actions equivalent
jobs:
  build-job:
    runs-on: ubuntu-latest
    steps:
      - run: echo "building"

  test-job:
    needs: build-job
    runs-on: ubuntu-latest
    steps:
      - run: echo "testing"

  deploy-job:
    needs: test-job
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploying"

Pass artifacts between jobs using actions/upload-artifact and actions/download-artifact. GitLab's artifact system is implicit within stages; GitHub Actions requires you to be explicit about what you're handing off. I actually prefer this — it forces you to think about what your jobs actually need. GitLab's artifacts: paths: maps directly, just with more boilerplate:

# GitLab
build-job:
  artifacts:
    paths:
      - dist/

# GitHub Actions
- uses: actions/upload-artifact@v4
  with:
    name: dist-files
    path: dist/

# Then in the next job:
- uses: actions/download-artifact@v4
  with:
    name: dist-files

The only/except vs on: Mismatch

GitLab's only and except keywords are job-level. GitHub's on: is workflow-level, which means the granularity is different and you cannot directly translate them 1:1. If you had a GitLab job that only runs on main, in GitHub Actions you'd gate the entire workflow — or use an if: condition on the job. The rough equivalent of only: [main] is:

# GitLab
deploy-job:
  only:
    - main

# GitHub Actions (workflow level)
on:
  push:
    branches:
      - main

# OR job level with if:
deploy-job:
  if: github.ref == 'refs/heads/main'

The job-level if: approach is what you want when you're translating per-job only/except rules. The workflow-level on: just controls whether the workflow runs at all — it won't suppress individual jobs inside it.

Fork PRs and Secret Handling — This One Actually Matters for Security

GitLab's masked variables are available in pipelines triggered from forks if you explicitly allow it in project settings — the masking just prevents them from printing in logs. GitHub takes a harder stance: secrets are not passed to workflows triggered by PRs from forks, period. This is the safer default. The thing that catches teams off guard is that this also applies to GITHUB_TOKEN — it gets a read-only token on fork PRs, not the full write token. If you have a CI step that posts a comment or updates a status check, it'll silently fail or throw a permission error on fork PRs. The pattern to handle this is the pull_request_target event, but be careful: that runs in the context of the base repo and can access secrets, so never check out untrusted fork code in that context without sandboxing it.
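One defensive pattern that works here — sketched below, with the caveat that the `safe-to-test` label name is my own convention, not a GitHub feature — is gating the `pull_request_target` job on a maintainer-applied label, so secret-bearing jobs only run after a human has read the fork's diff:

```yaml
on:
  pull_request_target:
    types: [labeled]

jobs:
  integration:
    # only runs once a maintainer has reviewed the diff and applied the label
    if: github.event.label.name == 'safe-to-test'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # explicitly checks out the fork's commit — this is the dangerous part,
          # which is exactly why it sits behind the label gate
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm ci && npm test
```

Without a gate like this, `pull_request_target` plus a checkout of the fork's head is a textbook secret-exfiltration vector.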

Docker-in-Docker: Surprisingly Close Between the Two

This one translated more cleanly than I expected. GitLab's services: block maps almost directly to GitHub Actions' services: block inside a container job. The syntax is different but the concept is identical — spin up a sidecar container that your job can reach over localhost:

# GitLab DinD
build:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker build -t myapp .

# GitHub Actions equivalent
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker:24
    services:
      docker:
        image: docker:24-dind
        options: --privileged
    env:
      DOCKER_HOST: tcp://docker:2376
      DOCKER_TLS_CERTDIR: /certs
    steps:
      - run: docker build -t myapp .

One difference: in GitLab, the service hostname is derived from the image name — the tag is dropped and slashes are replaced (GitLab registers aliases with both double underscores and dashes), so docker:24-dind is reachable as plain docker. In GitHub Actions, the service hostname is whatever key you use in the services: map — in this case, docker. Keep that key short and sane.

Monorepo Path Filtering: The Genuinely Unsolved Part

GitLab's rules: changes: is one of the best features in the whole platform and GitHub Actions just doesn't have a native equivalent. In GitLab you can say "only run this job if files under packages/api/ changed" — right in the YAML, no third-party anything. In GitHub Actions, you get path filtering at the workflow level with on: push: paths:, but that triggers or skips the entire workflow, not individual jobs. For per-job path filtering, the community workaround that actually works is dorny/paths-filter:

jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.filter.outputs.api }}
      frontend: ${{ steps.filter.outputs.frontend }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            api:
              - 'packages/api/**'
            frontend:
              - 'packages/frontend/**'

  build-api:
    needs: changes
    if: needs.changes.outputs.api == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "building API"

  build-frontend:
    needs: changes
    if: needs.changes.outputs.frontend == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "building frontend"

It works, but it's a two-job pattern with an extra network dependency on a third-party action that could disappear. I've been burned by popular Actions getting abandoned before. Pin dorny/paths-filter to a specific commit SHA, not a tag, if this is in a production pipeline. This is the one place where I'd genuinely say GitLab's design is better — it's a first-class feature that belongs in the core workflow spec and GitHub hasn't shipped it yet.
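Pinning looks like this — the SHA below is a placeholder, not the real commit; copy the actual one from the tag on the action's repo:

```yaml
# pin to an immutable commit, not a movable tag
- uses: dorny/paths-filter@<full-commit-sha>   # placeholder — use the commit behind the v3 tag
```

A tag can be force-moved by whoever controls the repo; a commit SHA cannot, which is the whole point of the pin.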

FAQ

Is GitHub Actions really free for open source?

Yes, and it's genuinely unlimited — not a trial, not a "free tier with caps." Public repositories get unlimited Actions minutes on GitHub-hosted runners. I've run projects with hundreds of contributors pushing dozens of times a day and never hit a wall. The catch people miss: your repository has to be public, not just your organization. If you have a public org but a private repo, you're eating into the 2,000 free minutes/month that comes with every free account. The other thing that tripped me up early on: storage is a separate quota — the free plan caps private-repo artifact and package storage at 500MB, and even on public repos artifacts pile up under the default 90-day retention. Large build artifacts, Docker layers, test reports — those add up fast. Use actions/upload-artifact with short retention periods or you'll hit the cap.
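Setting a short retention is one extra line on the upload step — `retention-days` is a standard input of actions/upload-artifact, and the 3-day value here is just an example:

```yaml
- uses: actions/upload-artifact@v4
  with:
    name: build-output
    path: dist/
    retention-days: 3   # default is 90 — short retention keeps storage usage flat
```

For artifacts that only exist to pass files to the next job in the same run, even one day is plenty.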

Can Woodpecker CI use GitHub as a remote but run its own runners?

This is actually one of Woodpecker's best use cases. Yes — you authenticate against GitHub (or Gitea, Forgejo, GitLab, Bitbucket) via OAuth, Woodpecker watches your repos for push events through webhooks, but every pipeline job runs on your own infrastructure. You're using GitHub purely as the source of truth and the trigger mechanism. Setup looks roughly like this:

# docker-compose.yml (simplified)
services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    environment:
      - WOODPECKER_GITHUB=true
      - WOODPECKER_GITHUB_CLIENT=your_oauth_app_client_id
      - WOODPECKER_GITHUB_SECRET=your_oauth_app_secret
      - WOODPECKER_AGENT_SECRET=a_shared_secret_for_agents
      - WOODPECKER_HOST=https://ci.yourdomain.com

  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=a_shared_secret_for_agents
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

The trade-off is infrastructure overhead — you're running a server and at least one agent. On a $6/month VPS you can handle light workloads fine. The thing that caught me off guard was that Woodpecker requires your server to be publicly reachable so GitHub can deliver webhooks. Behind a NAT without exposing a port? You need something like Cloudflare Tunnel or ngrok for local dev. Docs on that are thin; you mostly figure it out from GitHub issues.

What happened to Drone CI — is it dead?

Not dead, but the story is messy enough that I'd think twice before committing to it today. Harness acquired Drone in 2020, and since then the open source version has been starved of attention. The last meaningful community release was Drone 2.x. The repo still exists, PRs pile up unreviewed, and critical issues sit open for months. Harness has been pushing people toward their commercial platform instead of investing in the open source core. The community forked it — that fork became Woodpecker CI, which is where the actual active development is happening. If you're evaluating Drone right now, just use Woodpecker. The config format is nearly identical (Woodpecker started as a fork, so your .drone.yml pipelines are mostly portable), the community is active, and releases actually happen. The only reason to stick with Drone proper is if you're already deep in a Harness ecosystem deal.

How do I stop burning through GitHub Actions minutes on every pushed commit?

Three techniques that actually matter. First, use path filters so your backend CI doesn't run when someone edits a README:

on:
  push:
    branches: [main]
    paths:
      - 'src/**'
      - 'package.json'
      - '.github/workflows/ci.yml'

Second, cache aggressively. The default Node.js setup action has caching built in, but I see teams skip the cache parameter constantly:

- uses: actions/setup-node@v4
  with:
    node-version: '20'
    cache: 'npm'   # this one line saves 2-3 minutes per run

Third — and this is the one that actually moves the needle — stop running CI on every feature branch push and use concurrency to cancel outdated runs:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

That single block means if you push three commits in 30 seconds, only the last one runs. For teams doing rapid iteration on feature branches this cuts usage by 60-70% without changing anything about your actual pipeline logic. Combine it with a workflow_dispatch trigger so devs can manually kick off builds when they need them, and you've stopped the bleeding entirely.
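Combining the two looks like this — `workflow_dispatch` adds a manual "Run workflow" button in the Actions tab while automatic runs stay restricted to main:

```yaml
on:
  workflow_dispatch:     # manual trigger, on demand, from the Actions tab
  push:
    branches: [main]     # automatic runs only on main
```

Feature-branch pushes stop consuming minutes entirely; anyone who needs a build on a branch kicks one off by hand.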




Originally published on techdigestor.com. Follow for more developer-focused tooling reviews and productivity guides.
