Deploying my version-controlled Kubernetes infrastructure using GitOps, with automated application via a dedicated pipeline and public exposure of my portfolio.
After setting up the build and push of my Docker image, it’s time to take it further: versioning and automating the deployment of my Kubernetes infrastructure.
Separation of concerns: code vs infra
To maintain a clean architecture and GitOps logic, I split the infrastructure from the application code into two separate Git repositories:
- **Wooulf/forkfolio**: contains the website's source code (Next.js), the `Dockerfile`, and a CI pipeline for building and pushing the Docker image.
- **Wooulf/infra-k8s-terraform**: contains all Kubernetes configuration files (`deployment.yaml`, `service.yaml`, etc.) and a separate CI/CD pipeline for applying them to the cluster.
🎯 This structure decouples application code from infrastructure, making it easier to maintain, review, and evolve both parts independently.
Deploying the application on MicroK8s
To run my app on the cluster and expose it cleanly to the public, I’ve configured the following components:
- A Deployment: handles pod lifecycle (updates, resilience, etc.)
- A ClusterIP Service: stabilizes internal communication to the pod
- An Ingress: routes incoming HTTP traffic to the correct app
- A NodePort Service: exposes the Ingress Controller to the outside world
One request, one path
Together, these components route an incoming HTTP request through the cluster as follows:
🌐 Client → VPS (port 80) → NodePort → Ingress → ClusterIP → Pod
Kubernetes manifest files
These are the versioned files in the infra repo:
🧱 Deployment
Handles container deployment, updates, and redundancy.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portfolio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: portfolio
  template:
    metadata:
      labels:
        app: portfolio
    spec:
      containers:
        - name: portfolio
          image: woulf/portfolio:latest
          ports:
            - containerPort: 3000
```
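One caveat with the `:latest` tag: Kubernetes defaults `imagePullPolicy` to `Always` for it, but `kubectl apply` on an unchanged manifest won't restart existing pods, so a freshly pushed image isn't picked up on its own. A possible refinement (my sketch, not part of the manifest above) is to state the pull policy explicitly and restart the rollout after each push:

```yaml
# Sketch: explicit pull policy for the :latest image (assumed addition).
# Pairs with `kubectl rollout restart deployment/portfolio` after a new push,
# since an unchanged manifest alone won't recreate the pods.
spec:
  template:
    spec:
      containers:
        - name: portfolio
          image: woulf/portfolio:latest
          imagePullPolicy: Always
```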
🌐 ClusterIP Service
Exposes the pod within the cluster via a stable internal IP.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: portfolio
spec:
  selector:
    app: portfolio
  ports:
    - port: 80
      targetPort: 3000
```
🌍 Ingress
Links the domain name (`woulf.fr`) to the correct internal service.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: woulf-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: woulf.fr
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portfolio
                port:
                  number: 80  # the Service's port (80), which forwards to the pod's 3000
```
🛣️ NodePort Service to expose the Ingress Controller
This service exposes the NGINX Ingress Controller pod to the public by opening a port on the VPS.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-microk8s-controller
  namespace: ingress
spec:
  type: NodePort
  externalIPs:
    - 185.216.27.229
  selector:
    name: nginx-ingress-microk8s
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32180
```
💡 `externalIPs` allows manual exposure of a service on a VPS's public IP. It's a functional approach for self-hosted setups, but in a cloud context we'd use a LoadBalancer service, which integrates with the provider's networking. On bare-metal clusters, MetalLB can simulate this behavior.
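For comparison, the cloud-native equivalent would look like the sketch below (hypothetical, not part of my setup): the provider, or MetalLB on bare metal, provisions the external IP instead of it being hard-coded.

```yaml
# Sketch of the LoadBalancer alternative (assumed names, not deployed here).
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  type: LoadBalancer   # the cloud provider (or MetalLB) assigns the external IP
  selector:
    name: nginx-ingress-microk8s
  ports:
    - port: 80
      targetPort: 80
```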
GitHub Actions pipeline in the infra repo
Once versioned, the files are applied to my cluster automatically using a second CI/CD pipeline in the `infra-k8s-terraform` repository. It triggers on changes to the `k8s/` folder:
```yaml
on:
  push:
    paths:
      - 'k8s/**'

jobs:
  apply_k8s_configs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/k8s-set-context@v3
        with:
          kubeconfig: ${{ secrets.KUBECONFIG }}
      - run: kubectl apply -f k8s/
      - run: kubectl get all
```
✅ Result: with every config commit, the cluster is automatically synced.
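A small refinement I could add (an assumption on my part, not in the current pipeline) is a rollout check after the apply, so the job fails loudly if the new pods never become ready instead of reporting success on a broken deploy:

```yaml
# Sketch: extra steps after `kubectl apply` (assumed addition to the workflow above).
      - run: kubectl apply -f k8s/
      # Blocks until the deployment is rolled out, or fails the job after 120s.
      - run: kubectl rollout status deployment/portfolio --timeout=120s
```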
What’s next?
I could go further with a fully-fledged GitOps tool like ArgoCD or Flux. These tools continuously monitor the Git repository and update the cluster without relying on a manual pipeline.
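As a sketch of what that would look like with ArgoCD (hypothetical values for the repo URL, path, and namespaces), an `Application` resource points the controller at the infra repo and lets it sync the cluster continuously, replacing the push-based pipeline:

```yaml
# Hypothetical ArgoCD Application for this setup (not deployed; values assumed).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: portfolio
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/Wooulf/infra-k8s-terraform
    targetRevision: main
    path: k8s                     # assumed folder, matching the pipeline trigger
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      selfHeal: true   # revert manual drift on the cluster
      prune: true      # delete resources removed from Git
```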
⚠️ This setup is minimalistic: it doesn’t include high availability or dynamic traffic management. Still, it’s more than enough for a self-hosted portfolio in MVP mode.
🔄 This setup gives me fast feedback between commits and production, while keeping my infra code versioned and clean. It’s not quite GitOps-as-a-Service, but it’s close.
💡 In the next article, I’ll talk about secret management, the upcoming HTTPS setup, some monitoring ideas, and possible future improvements to my infrastructure.