A friend sent me an article about GitHub potentially getting blocked in Russia and asked me to spin up GitLab. I suggested Gitea — I'd used it at a college hackathon, knew it was lightweight and wouldn't eat half the server. He agreed.
While the deploy was running, I asked him how he syncs Obsidian. He said — plain WebDAV, nothing fancy. Well, the server's already open anyway, so I threw in Nextcloud too. An hour and a half later I had both a git host and cloud storage.
Why not GitLab
My friend originally wanted GitLab. I opened the docs, looked at the requirements — 4 GB RAM just to start — and said no. We don't have a dedicated git server, there's already a project running on it. GitLab would've eaten everything.
Gitea idles at ~150 MB. Actions are compatible with GitHub Actions syntax, so existing workflows move over without rewriting. I'd already used it at a hackathon, knew it worked fine. Suggested it, got the green light.
On Helm
First time I touched Kubernetes was when I had to deploy a college project for a grade. Wrote manifests by hand — Deployment, Service, Ingress, PVC, repeat. I knew Helm existed but never had a reason to dig into it.
Turns out it's like pacman, but for Kubernetes. One values.yaml instead of five hundred lines of YAML, one command — everything's up. Would've lost my mind doing it the old way.
First, though, Helm couldn't see the cluster at all:
Kubernetes cluster unreachable: Get "http://localhost:8080/version"
k3s puts the kubeconfig somewhere Helm doesn't look by default. Fix:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
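If you'd rather not hardcode the export, a tiny wrapper can pick the kubeconfig with a fallback — a minimal sketch, assuming the stock k3s path; the fallback is kubectl's standard location:

```shell
# Pick the k3s kubeconfig if it's readable, otherwise fall back to kubectl's default.
KUBECONFIG="${KUBECONFIG:-/etc/rancher/k3s/k3s.yaml}"
if [ ! -r "$KUBECONFIG" ]; then
    KUBECONFIG="$HOME/.kube/config"
fi
export KUBECONFIG
echo "Using kubeconfig: $KUBECONFIG"
```

Both helm and kubectl honor KUBECONFIG, so sourcing this once covers them both.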
Gitea
The server already had ingress-nginx and cert-manager with a letsencrypt-prod ClusterIssuer. Set the DNS beforehand — A record git.logiflowadvanced.online → 2.27.42.100.
gitea-values.yaml:
ingress:
  enabled: true
  ingressClassName: nginx
  hosts:
    - host: git.logiflowadvanced.online
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: gitea-tls
      hosts:
        - git.logiflowadvanced.online
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
gitea:
  admin:
    username: admin
    password: yourpassword
    email: your@email.com
persistence:
  size: 10Gi
postgresql-ha:
  enabled: false
postgresql:
  enabled: true
helm repo add gitea-charts https://dl.gitea.com/charts/
helm repo update
helm install gitea gitea-charts/gitea \
--namespace gitea --create-namespace \
-f gitea-values.yaml
Pods came up. Opened the browser — invalid certificate warning. Checked the Ingress:
NAME    CLASS    HOSTS                         ADDRESS   PORTS
gitea   <none>   git.logiflowadvanced.online             80, 443
CLASS: <none> — the nginx controller just ignored this Ingress entirely. cert-manager didn't issue anything either, so nginx was serving its default self-signed cert.
kubectl patch ingress gitea -n gitea \
--type=json \
-p='[{"op":"add","path":"/spec/ingressClassName","value":"nginx"}]'
After the patch, cert-manager issued the certificate and the site opened fine.
SSH
gitea-ssh is created as a headless ClusterIP by default — not reachable from outside. Port 22 is taken by the system SSH, so I needed a NodePort. You can't patch a headless service into NodePort — have to delete and recreate:
kubectl delete svc gitea-ssh -n gitea
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: gitea-ssh
  namespace: gitea
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: gitea
  ports:
    - port: 22
      targetPort: 2222
      nodePort: 30022
EOF
Remote looks like this:
git remote add gitea ssh://git@git.logiflowadvanced.online:30022/username/repo.git
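Typing the port on every clone gets old; an SSH host alias hides it. A sketch of the client-side config (the alias name is arbitrary):

```
# ~/.ssh/config
Host gitea
    HostName git.logiflowadvanced.online
    Port 30022
    User git
```

After that the remote shortens to `git remote add gitea gitea:username/repo.git`.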
Nextcloud
Set up the DNS for cloud.logiflowadvanced.online while Gitea was still deploying.
nextcloud-values.yaml:
ingress:
  enabled: true
  ingressClassName: nginx
  hosts:
    - host: cloud.logiflowadvanced.online
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: nextcloud-tls
      hosts:
        - cloud.logiflowadvanced.online
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
nextcloud:
  host: cloud.logiflowadvanced.online
  username: admin
  password: yourpassword
mariadb:
  enabled: true
  auth:
    password: yourdbpassword
    database: nextcloud
    username: nextcloud
postgresql:
  enabled: false
persistence:
  enabled: true
  size: 10Gi
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm install nextcloud nextcloud/nextcloud \
--namespace nextcloud --create-namespace \
-f nextcloud-values.yaml
Same ingressClassName issue — same patch, same result.
After the first login it kept redirecting to /login?clear=1 in a loop. Nextcloud didn't know its own external address:
kubectl exec -n nextcloud deploy/nextcloud -- \
php occ config:system:set overwriteprotocol --value="https"
kubectl exec -n nextcloud deploy/nextcloud -- \
php occ config:system:set overwrite.cli.url \
--value="https://cloud.logiflowadvanced.online"
Obsidian
Created separate users for me and my friend. WebDAV URL per user:
https://cloud.logiflowadvanced.online/remote.php/dav/files/username/
In the RemotelySave plugin: type — WebDAV, URL, login, password. Works.
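The URL differs per user only in the last path segment, so it's trivial to generate as you add accounts — a throwaway sketch (the user names are made up):

```shell
# Print the per-user WebDAV endpoint for each Nextcloud account.
NC_HOST="cloud.logiflowadvanced.online"
for user in admin friend; do
    echo "https://${NC_HOST}/remote.php/dav/files/${user}/"
done
```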
The thing that ate most of my time wasn't Gitea or Nextcloud — it was ingressClassName. Neither chart sets it automatically, and without it nginx just ignores the Ingress completely. cert-manager doesn't issue anything. Browser shows self-signed, you stare at the logs, pods are all Running, no errors anywhere.
Run kubectl get ingress -n <namespace> right after deploy. If CLASS says <none> — that's your problem.
The other non-obvious one: gitea-ssh is headless and you can't patch it into a NodePort — you have to delete and recreate it. Spent a few minutes trying to patch it before actually reading the error.
With Nextcloud and MariaDB — if MariaDB didn't come up on the first deploy, or you uninstalled and reinstalled, the old credentials survive on the PVC and the new release can't authenticate against them. Just helm uninstall + kubectl delete pvc --all -n nextcloud and start fresh; it's faster than untangling the creds.