GitLab CI/CD Architecture & Core Concepts: A Complete Hands-On Guide (With Examples)
GitLab CI/CD is powerful, flexible, and honestly — a bit overwhelming when you first look at everything it can do. Pipelines, stages, jobs, runners, executors, artifacts, rules, workflow logic, schedules… it’s a lot.
So let’s break it down.
This guide walks you through all the essential GitLab CI/CD concepts with practical examples, real pipeline configurations, and architecture insights. By the end, you’ll not only understand how GitLab works behind the scenes — you’ll be able to design clean, scalable pipelines yourself.
🧱 Core CI/CD Components
Before writing your first pipeline, you need to understand the four foundational building blocks:
| Component | Description |
|---|---|
| Pipeline | The full CI/CD process triggered by a push, MR, schedule, or manual action |
| Stage | A logical grouping of jobs (build, test, deploy) |
| Job | A single unit of work executed by a GitLab Runner |
| Script | The shell commands that actually run inside a job |
Here’s what this looks like in a minimal `.gitlab-ci.yml`:

```yaml
workflow:
  name: My Awesome App Pipeline
  rules:
    - if: $CI_COMMIT_BRANCH == 'main'

stages:
  - test
  - deploy

unit_test_job:
  stage: test
  script:
    - npm install
    - npm test

deploy_job:
  stage: deploy
  script:
    - echo "Deploying…"
```
🧪 Jobs: The Heart of the Pipeline
A job is the smallest building block in GitLab CI.
- It runs on a **runner**
- It belongs to a **stage**
- It executes your **script**
- It can have `before_script` and `after_script`
- It can set `artifacts`, `cache`, `rules`, and more
Example with a tagged runner:
```yaml
unit_test_job:
  stage: test
  tags:
    - saas-linux-small-amd64
  script:
    - npm install
    - npm test
```
🧰 Scripts, Before Scripts, After Scripts
GitLab runs your commands in order, just like a regular shell session.

```yaml
before_script:
  - echo "Setting up environment"
script:
  - npm install
  - npm test
after_script:
  - echo "Cleaning up"
```
This "setup → execute → cleanup" pattern keeps pipelines predictable.
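One detail worth knowing: `after_script` runs even when the main `script` fails, which makes it a safe place for cleanup. A small sketch (the script and directory names here are illustrative):

```yaml
flaky_job:
  script:
    - ./run-tests.sh        # if this exits non-zero, the job fails
  after_script:
    - rm -rf tmp/           # but this cleanup still runs
```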
📦 Passing Files Between Jobs Using Artifacts
Jobs don’t share file systems. If one job generates output required by another (like a build artifact), you must explicitly upload it.
```yaml
build_job:
  stage: build
  script:
    - echo "dragon" > dragon.txt
  artifacts:
    paths:
      - dragon.txt
    expire_in: 3 days
```
Then download it in the next job:
```yaml
test_job:
  stage: test
  needs: ["build_job"]
  script:
    - grep -i "dragon" dragon.txt
```
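Artifacts can also feed GitLab’s UI. For example, a JUnit XML report uploaded via `artifacts:reports:junit` shows test results directly on the merge request page (this sketch assumes your test runner is configured to write `junit.xml`):

```yaml
test_job:
  stage: test
  script:
    - npm test              # assumed to produce junit.xml via a JUnit reporter
  artifacts:
    reports:
      junit: junit.xml
```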
🔗 needs: Creating a DAG Pipeline
Here’s the thing — stages alone can be limiting. If you want fine-grained ordering, the needs: keyword gives you DAG-style pipelines.
```yaml
docker_build:
  stage: docker
  needs:
    - test_file

docker_testing:
  stage: docker
  needs:
    - docker_build

docker_push:
  stage: docker
  needs:
    - docker_testing
```
Now the flow becomes:
build → test → docker_build → docker_testing → docker_push
This is how you make pipelines faster and smarter.
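A related trick: a job with an empty `needs: []` list starts as soon as the pipeline does, without waiting for earlier stages to finish:

```yaml
lint_job:
  stage: test
  needs: []          # no dependencies, so it starts immediately
  script:
    - npm run lint
```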
🔐 Variables: Global vs Job-Level
Stop hardcoding usernames, registry URLs, and tags everywhere. Use variables instead.
Global Variables
```yaml
variables:
  USERNAME: dockerUser
  REGISTRY: docker.io/$USERNAME
  IMAGE: sample-app
  VERSION: $CI_PIPELINE_ID
```
Job-level Variable Override
```yaml
docker_push:
  variables:
    PASSWORD: $DOCKER_PASSWORD
```
Mask sensitive variables
1. Store secrets in GitLab under **Settings → CI/CD → Variables**.
2. Enable **Masked** so the value is hidden in job logs.
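With the secret stored as a masked variable, a push job can authenticate without the password ever appearing in the log. A sketch reusing the variable names from the examples above:

```yaml
docker_push:
  stage: docker
  script:
    - echo "$DOCKER_PASSWORD" | docker login -u "$USERNAME" --password-stdin
    - docker push $REGISTRY/$IMAGE:$VERSION
```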
🐋 Specify Job Images (Avoid "command not found")
By default, GitLab’s shared runners use a Ruby-based image, so a command like `node -v` fails with “command not found”.
Use the `image:` keyword:

```yaml
deploy-job:
  image: node:20-alpine3.18
  script:
    - node -v
    - npm -v
```
Cleaner. Faster. Reliable.
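You can also set one image for every job via the top-level `default:` keyword and override it only where needed (job names here are illustrative):

```yaml
default:
  image: node:20-alpine3.18

lint_job:
  script:
    - node -v                 # runs in the default Node image

python_job:
  image: python:3.12-slim     # overrides the default for this job only
  script:
    - python --version
```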
⚡ Parallel Jobs & Matrix Pipelines
Simple sharding
```yaml
test:
  stage: test
  image: ruby:3.2
  script:
    - rspec
  parallel: 5
```
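Each of the five shards receives `CI_NODE_INDEX` (1-based) and `CI_NODE_TOTAL`, which you can use to split the suite. A sketch with a hypothetical splitting script:

```yaml
test:
  stage: test
  parallel: 5
  script:
    - echo "Shard $CI_NODE_INDEX of $CI_NODE_TOTAL"
    - ./run-shard.sh "$CI_NODE_INDEX" "$CI_NODE_TOTAL"   # hypothetical script that runs this shard's slice
```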
Parallel Matrix
```yaml
deploy-job:
  stage: deploy
  parallel:
    matrix:
      - RUNNER_MACHINE:
          - saas-linux-small-amd64
          - saas-linux-medium-amd64
        NODE_VERSION:
          - '18-alpine3.18'
          - '20-alpine3.18'
          - '21-alpine3.18'
  tags:
    - $RUNNER_MACHINE
  image: node:$NODE_VERSION
  script:
    - echo "Runner: $RUNNER_MACHINE, Node: $NODE_VERSION"
```
This creates 6 parallel jobs.
🕒 Job Timeouts
Prevent pipelines from hanging forever:
```yaml
deploy-job:
  timeout: 10m
  script:
    - sleep 3600
```
After 10 minutes, GitLab stops the job.
🔄 Pipeline Schedules
Run CI daily/weekly/monthly — no pushes needed.
GitLab UI → CI/CD → Schedules
Or trigger with variables:
```yaml
deploy-job:
  script:
    - echo "Deploy to $DEPLOY_ENV"
```
Set DEPLOY_ENV in schedule variables.
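To make a job run only from a schedule, and not on every push, gate it on the pipeline source:

```yaml
nightly_deploy:
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - echo "Nightly run, deploying to $DEPLOY_ENV"
```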
🔀 Merge Requests + Rules
Run jobs only for MR pipelines:
```yaml
merge_request_job:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "MR detected"
```
And check MR fields:
```shell
echo $CI_MERGE_REQUEST_TITLE
echo $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME
```
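These variables also work inside `rules:`. For example, a common pattern is skipping an expensive job while the MR is still marked as a draft (the job name here is illustrative):

```yaml
e2e_tests:
  rules:
    - if: '$CI_MERGE_REQUEST_TITLE =~ /^Draft:/'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - echo "Running end-to-end tests"
```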
🧠 Workflow Rules (Gate the Entire Pipeline)
Control when a pipeline should even exist.
```yaml
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      variables:
        DEPLOY_ENV: "PRODUCTION"
    - if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature\//'
      variables:
        DEPLOY_ENV: "TESTING"
    - when: never
```
If no rule matches → pipeline doesn’t run at all.
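A widely used `workflow:` block avoids duplicate pipelines when a branch already has an open MR: run MR pipelines, and run branch pipelines only when no MR exists:

```yaml
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never               # skip the duplicate branch pipeline
    - if: $CI_COMMIT_BRANCH
```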
🚦 Skip CI for Documentation Updates
Add this to your commit message:
`[ci skip]`
GitLab won’t start a pipeline.
🔒 Resource Groups (One-At-A-Time Deployments)
Avoid two pipelines deploying at once:
```yaml
deploy_prod:
  stage: deploy
  resource_group: production
  script:
    - echo "Deploying…"
```
GitLab queues these jobs — only one can run at a time.
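Resource groups pair naturally with the `environment:` keyword, which also gives you a deployment history in the GitLab UI (the URL here is illustrative):

```yaml
deploy_prod:
  stage: deploy
  resource_group: production
  environment:
    name: production
    url: https://example.com
  script:
    - echo "Deploying…"
```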
🖥️ Runners & Executors (How GitLab Actually Executes Jobs)
Runners are the workers. Executors decide how jobs run:
| Executor | Best For | Isolation |
|---|---|---|
| Shell | quick local scripts | low |
| Docker | reproducible builds | high |
| Kubernetes | scalable workloads | high |
| VM | full OS isolation | very high |
| SSH | legacy hardware | moderate |
SaaS runner example:

```yaml
linux_job:
  tags:
    - saas-linux-medium-amd64
```
🎉 Wrapping Up
GitLab CI/CD is one of those tools that feels huge until you break it down. Once you understand pipelines, jobs, runners, rules, and artifacts, everything clicks.
Start small, keep YAML clean, avoid repetition, and use variables and matrices when life gets messy.