A pragmatic guide to container-based deployments that won't break the bank
Overview
This guide walks you through setting up a complete CI/CD pipeline using GitLab's free tier and a DigitalOcean Kubernetes cluster. The approach is straightforward: build Docker images, push them to GitLab's Container Registry, and roll them out to your Kubernetes cluster. No fancy GitOps operators, no complex Helm charts, just good old-fashioned image tags and kubectl commands.
Assumptions
ℹ You're comfortable with containerization concepts and have written a Dockerfile or two
ℹ You understand basic Kubernetes primitives (Deployments, Services, Secrets)
ℹ You have Kubernetes CRDs already defined in your cluster (or a separate repository)
ℹ Your deployment strategy is image-tag-based: new code = new image = new deployment
ℹ You're okay with an imperative deployment approach (we'll discuss the trade-offs)
Prerequisites
Accounts & Infrastructure:
✅ GitLab account (free tier is sufficient)
✅ DigitalOcean account with an existing Kubernetes cluster (even the cheapest $12/mo cluster works perfectly)
Knowledge:
🧠 Basic understanding of Docker and container registries
🧠 Familiarity with Kubernetes concepts: Secrets, Deployments, and rollouts
🧠 Git basics (you're already here, so you're good)
Project Setup:
➡️ A docker-compose.ci.yml file that defines your build configuration
➡️ A Dockerfile that accepts a BUILD_HASH argument:
```dockerfile
...
# Build arguments for version information aligned with the Docker image tag and git commit hash
ARG BUILD_HASH=NOT_SET
ENV BUILD_HASH=${BUILD_HASH}
```
Note: The docker-compose.ci.yml file expects the CI_COMMIT_SHORT_SHA variable, which gets passed to your Dockerfile as the BUILD_HASH argument.
```yaml
...
build:
  context: ./your-context-dir
  dockerfile: Dockerfile
  args:
    BUILD_HASH: "${CI_COMMIT_SHORT_SHA}"
```
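Before wiring this into CI, you can sanity-check the build locally by emulating GitLab's CI_COMMIT_SHORT_SHA. This is a sketch under the guide's assumptions (the compose file from above); the all-zero fallback SHA is just a placeholder for when you're outside a git repository:

```shell
# Emulate GitLab's CI_COMMIT_SHORT_SHA: the first 8 characters of the full commit SHA
full_sha=$(git rev-parse HEAD 2>/dev/null || echo "0000000000000000000000000000000000000000")
CI_COMMIT_SHORT_SHA=$(printf '%s' "$full_sha" | cut -c1-8)
export CI_COMMIT_SHORT_SHA
echo "Building with BUILD_HASH=$CI_COMMIT_SHORT_SHA"
# Run the actual build only if Docker and the compose file are present on this machine
if command -v docker >/dev/null 2>&1 && [ -f docker-compose.ci.yml ]; then
  docker compose --file docker-compose.ci.yml build
fi
```

This mirrors exactly what the pipeline does later, so a broken Dockerfile fails on your machine rather than in CI.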
Step 1: Create GitLab Access Token
Since GitLab's free tier doesn't support group-level access tokens, we'll create one at the project level:
- Navigate to your project in GitLab
- Go to Settings → Access Tokens
- Click Add new token
- Configure the token:
- Name: Something descriptive like "k8s-pull-token"
- Scopes: Select at minimum read_repository, read_registry, and write_registry
- Click Create project access token
- Important: Copy the token immediately—you won't see it again!
Step 2: Create DigitalOcean Access Token
Your CI/CD pipeline needs to authenticate with DigitalOcean to deploy to your Kubernetes cluster:
- Go to DigitalOcean API Tokens
- Click Generate New Token
- Configure with minimal required scopes:
- Read: image, kubernetes, load_balancer
- Update: kubernetes, load_balancer
- Other: access_cluster (kubernetes)
- Save the token securely
Security Note: The CI/CD runner will create short-lived tokens upon authorization, so this persistent token is only used for initial authentication.
Step 3: Create Kubernetes Docker Registry Secret
Your Kubernetes cluster needs credentials to pull images from GitLab's Container Registry. Here's how to create the secret:
```shell
kubectl create secret docker-registry regcred \
  --docker-server=registry.gitlab.com/<your-group>/<your-project> \
  --docker-username=<gitlab-username> \
  --docker-password=<token-from-step-1> \
  --docker-email=<your-email> \
  --dry-run=client -o yaml > regcred-secret.yml
```
Your generated regcred-secret.yml should look like this:
```yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: regcred
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
data:
  .dockerconfigjson: <base64-encoded-secret>
```
Pro tip: Notice the reflector annotations? If you're using Emberstack's Kubernetes Reflector, this automatically syncs the secret across namespaces. One secret to rule them all. If you're not using Reflector, you'll need to create this secret in each namespace that needs to pull images.
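If you want to double-check what actually ends up inside the secret, you can round-trip the payload yourself. A small sketch: the JSON below is only the shape of a .dockerconfigjson, with hypothetical credentials (the "auth" field is base64 of "user:token"):

```shell
# A .dockerconfigjson is base64-encoded JSON of this shape
payload='{"auths":{"registry.gitlab.com":{"username":"user","password":"token","auth":"dXNlcjp0b2tlbg=="}}}'
encoded=$(printf '%s' "$payload" | base64 | tr -d '\n')
# Decoding recovers the original JSON, which is what kubelet reads when pulling images
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```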
Apply the secret to your cluster:
```shell
kubectl apply -f regcred-secret.yml
```
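Creating the secret alone isn't enough: the Deployment that pulls from the registry must reference it via imagePullSecrets. A minimal sketch with placeholder names (your real Deployment lives with your other version-controlled manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-deployment-name
  namespace: your-deployment-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      imagePullSecrets:
        - name: regcred   # must match the secret name created above
      containers:
        - name: container-name
          image: registry.gitlab.com/<your-group>/<your-project>/image:latest
```

Without this reference, pods in that namespace will fail with ImagePullBackOff when pulling from the private registry.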
Step 4: Configure GitLab CI/CD Variables
Add your DigitalOcean token to GitLab's CI/CD variables:
- In your GitLab project, go to Settings → CI/CD
- Expand the Variables section
- Click Add variable
- Configure:
- Key: DO_ACCESS_TOKEN
- Value: Your DigitalOcean token from Step 2
- Type: Variable
- Flags: Check "Mask variable" (for security)
- Leave other flags unchecked unless you need environment-specific protection
Step 5: Understanding the Pipeline
Now for the main event: the .gitlab-ci.yml file. This pipeline has three stages that mirror a typical deployment workflow:
Pipeline Stages Overview
```yaml
stages:
  - test
  - build
  - deploy
```
Simple, sequential, predictable. Let's break down each stage:
Stage 1: docker-test
```yaml
docker-test:
  stage: test
  image: docker:dind
  services:
    - "docker:dind"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH
  script:
    - echo Placeholder for running tests
```
What it does: Runs your test suite in a Docker-in-Docker environment.
When it runs:
- On all merge requests
- On all branch commits
Variables used:
- CI_PIPELINE_SOURCE: GitLab's built-in variable indicating how the pipeline was triggered
- CI_COMMIT_BRANCH: The branch being built
Currently, this is a placeholder. Replace the echo with actual test commands like docker compose -f docker-compose.ci.yml run tests or similar.
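One possible shape for that replacement, assuming (hypothetically) your docker-compose.ci.yml defines a tests service that runs your suite and exits non-zero on failure:

```yaml
docker-test:
  stage: test
  image: docker:dind
  services:
    - "docker:dind"
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH
  script:
    - docker compose --file docker-compose.ci.yml build tests
    - docker compose --file docker-compose.ci.yml run --rm tests
```

Because the job's exit code is the test run's exit code, a failing suite fails the pipeline before anything gets built or deployed.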
Stage 2: docker-build
```yaml
docker-build:
  stage: build
  image: docker:dind
  services:
    - "docker:dind"
  needs: [docker-test]
  rules:
    # Build without push on merge requests to main
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"
      variables:
        SHOULD_PUSH: "false"
    # Build and push on commits to main branch
    - if: $CI_COMMIT_BRANCH == "main"
      variables:
        SHOULD_PUSH: "true"
  script:
    - echo "Commit SHA is $CI_COMMIT_SHORT_SHA"
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin
    - docker compose --file docker-compose.ci.yml build
    - |
      if [ "$SHOULD_PUSH" = "true" ]; then
        echo "Pushing image to registry..."
        docker push registry.gitlab.com/group/project/image:$CI_COMMIT_SHORT_SHA
      else
        echo "Skipping push (merge request build)"
      fi
```
What it does: Builds your Docker image and conditionally pushes it to GitLab's Container Registry.
When it runs:
- On merge requests targeting main (builds but doesn't push)
- On commits to the main branch (builds and pushes)
Variables used:
- CI_COMMIT_SHORT_SHA: Short Git commit SHA used as image tag
- CI_REGISTRY: GitLab's container registry URL (automatically provided)
- CI_REGISTRY_USER: Registry username (automatically provided)
- CI_REGISTRY_PASSWORD: Registry password (automatically provided)
- CI_PIPELINE_SOURCE: Pipeline trigger source
- CI_MERGE_REQUEST_TARGET_BRANCH_NAME: Target branch for merge requests
- CI_COMMIT_BRANCH: Current branch name
- SHOULD_PUSH: Custom variable controlling whether to push the image
The smart bit: Merge requests get full build validation without polluting your registry. Only successful merges to main result in pushed images. This keeps your registry clean and your deployments intentional.
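If you also want a moving latest tag alongside the immutable commit-SHA tag, the push step can be extended. A sketch reusing the placeholder image path from above (the "dev" fallback is just for running the snippet outside CI):

```shell
IMAGE="registry.gitlab.com/group/project/image"
TAG="${CI_COMMIT_SHORT_SHA:-dev}"   # falls back to "dev" outside CI
if [ "${SHOULD_PUSH:-false}" = "true" ]; then
  # Re-tag the freshly built image and push both tags
  docker tag "$IMAGE:$TAG" "$IMAGE:latest"
  docker push "$IMAGE:$TAG"
  docker push "$IMAGE:latest"
else
  echo "Skipping push (merge request build)"
fi
```

Keep deploying by SHA, though: latest is handy for humans but too ambiguous for Kubernetes to know when anything changed.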
Stage 3: k8s-deploy
```yaml
k8s-deploy:
  stage: deploy
  image: debian:bookworm-slim
  needs: [docker-build]
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  variables:
    K8S_CLUSTER: "66250a2e-6ag4-48e6-a857-a578c754fa3b"
    K8S_NAMESPACE: "your-deployment-namespace"
    K8S_DEPLOYMENT: "your-deployment-name"
  before_script:
    - apt-get update && apt-get install -y curl ca-certificates
    # Install doctl
    - cd /tmp
    - curl -sL https://github.com/digitalocean/doctl/releases/download/v1.104.0/doctl-1.104.0-linux-amd64.tar.gz | tar -xzv
    - mv doctl /usr/local/bin/
    # Install kubectl
    - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    - chmod +x kubectl && mv kubectl /usr/local/bin/
    # Authenticate with DigitalOcean and configure kubectl
    - doctl auth init --access-token $DO_ACCESS_TOKEN
    - doctl kubernetes cluster kubeconfig save $K8S_CLUSTER
  script:
    - echo "Deploying image with tag $CI_COMMIT_SHORT_SHA to Kubernetes..."
    - kubectl -n $K8S_NAMESPACE set image deployment.apps/$K8S_DEPLOYMENT container-name=registry.gitlab.com/group/project/image:$CI_COMMIT_SHORT_SHA
    - echo "Waiting for rollout to complete..."
    - kubectl -n $K8S_NAMESPACE rollout status deployment.apps/$K8S_DEPLOYMENT --timeout=5m
    - echo "Deployment successful!"
```
What it does: Deploys your new image to the Kubernetes cluster.
When it runs:
- Only on commits to the main branch
Variables used:
- K8S_CLUSTER: Your DigitalOcean Kubernetes cluster ID
- K8S_NAMESPACE: Target Kubernetes namespace
- K8S_DEPLOYMENT: Name of the Deployment to update
- DO_ACCESS_TOKEN: DigitalOcean API token (from Step 4)
- CI_COMMIT_SHORT_SHA: Image tag to deploy
The deployment dance:
- Setup tooling: Installs doctl (DigitalOcean CLI) and kubectl in a minimal Debian container
- Authenticate: Uses your DO token to authenticate and fetch cluster credentials
- Update deployment: Uses kubectl set image to update the container image tag
- Wait and verify: Monitors the rollout status with a 5-minute timeout
Important: The kubectl rollout status command exits with status 1 if the deployment fails. This means your pipeline will fail if pods don't come up healthy, which is exactly what you want: fast feedback on broken deployments.
The Imperative Approach: Trade-offs and Considerations
Let's address the elephant in the room: this is an imperative deployment strategy. We're directly telling Kubernetes what to do with kubectl set image, not declaring desired state in Git (GitOps) or using sophisticated deployment tools.
Drawbacks:
- No Git-based audit trail of what's deployed (only CI/CD logs)
- Rollbacks require re-running old pipelines or manual intervention
- State drift if someone manually changes the deployment
- Doesn't scale well to complex, multi-service deployments
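The rollback drawback is softer than it sounds: Kubernetes keeps ReplicaSet history, so a manual rollback is a one-liner. A sketch using the placeholder names from the deploy stage (guarded so it's a harmless no-op on a machine without kubectl or a cluster):

```shell
NS="your-deployment-namespace"
DEPLOY="your-deployment-name"
if command -v kubectl >/dev/null 2>&1; then
  # Roll back to the previous ReplicaSet revision...
  kubectl -n "$NS" rollout undo "deployment.apps/$DEPLOY" \
    || echo "rollback failed (no cluster reachable?)"
  # ...or pin an explicit known-good tag instead:
  # kubectl -n "$NS" set image "deployment.apps/$DEPLOY" \
  #   container-name=registry.gitlab.com/group/project/image:<previous-sha>
else
  echo "kubectl not available; nothing to do"
fi
```

Re-running an old pipeline achieves the same thing with a better audit trail, so reserve the manual route for emergencies.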
Why it works here:
- You're only updating the image tag—the simplest possible change
- Your Kubernetes CRDs are version-controlled elsewhere
- The pipeline is the single deployment pathway (no manual kubectl cowboys)
- It's dead simple to understand and debug
- Perfect for small teams and straightforward applications
As long as you're disciplined about only using this pipeline for deployments and keeping your CRDs under version control, this approach is pragmatic and effective.
Testing Your Pipeline
- Create a feature branch and make a small change
- Push and watch the docker-test and docker-build stages run
- Open a merge request to main; the build should run but skip the push
- Merge to main and watch the full pipeline execute
- Check your Kubernetes cluster: kubectl -n <your-namespace> get pods -w
Conclusion
You now have a working CI/CD pipeline that costs roughly $12/month (just the Kubernetes cluster—GitLab and the CI runners are free). Not bad for automated deployments that go from git push to production in minutes.
The beauty of this approach is its simplicity. No vendor lock-in, no complex tooling, just Docker, Kubernetes, and a dash of CI/CD glue. Sure, it's not the most sophisticated setup you'll ever see, but it works, it's maintainable, and most importantly—you actually understand what's happening.
Now go forth and deploy with confidence. When your friends ask how you did it, you can show them!