In the beginning, I was hosting my main portfolio on GitHub Pages, and all of my side projects were being hosted through either Railway or Vercel. It worked. I never really had a reason to complain. Push your code, let the platform handle the rest, and move on with your life.
But something about it never sat right with me.
I didn't own any of it. My projects were living on someone else's infrastructure, running on someone else's terms. If GitHub decided to change something tomorrow, or Railway tweaked their free tier again, I'd be scrambling. I was building things I cared about on top of ground I didn't control.
Enough was enough. I decided it was time to set up my own server.
The Hardware
I had an older HP desktop sitting around, a Windows machine that wasn't being used anymore, just collecting dust under my desk. Perfect candidate.
I headed to ubuntu.com and downloaded the Ubuntu Server ISO. No desktop environment, no GUI, just a clean terminal. That's all I needed.
I grabbed a spare USB drive, flashed the ISO onto it, plugged it into the HP, and booted from USB. The Ubuntu Server installer walked me through the basics — setting up a user, picking a drive, configuring the network. Nothing crazy. Within maybe 20 minutes I had a fresh Linux server sitting on my desk.
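If you're following along on Linux or macOS, the flashing step can be done with plain `dd` (balenaEtcher or Rufus work too if you prefer a GUI). The ISO filename and the device path below are placeholders; double-check the device with `lsblk` first, because `dd` will happily overwrite the wrong disk:

```bash
# Identify the USB stick -- /dev/sdX below is a placeholder, NOT a real device
lsblk

# Write the ISO raw to the stick (this destroys everything on it)
sudo dd if=ubuntu-server.iso of=/dev/sdX bs=4M status=progress conv=fsync
```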
The Stack
Once I was in, the first thing I did was update everything:
```bash
sudo apt update && sudo apt upgrade -y
```
Then I installed Docker. I knew from the start I didn't want to manage a bunch of services by hand — different ports, different dependencies, things stepping on each other. Docker lets me keep everything isolated and reproducible. One compose file to rule them all.
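If you're setting this up yourself, Docker's official convenience script is the quickest way to get Engine and the compose plugin onto a fresh Ubuntu Server (read the script first if piping curl into a shell makes you nervous):

```bash
# Installs Docker Engine + the docker compose plugin
curl -fsSL https://get.docker.com | sudo sh

# Let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER
```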
The architecture is pretty straightforward. I have a single docker-compose.yml that defines every service I run. At the top sits Caddy, a reverse proxy that routes incoming requests to the right container based on the subdomain. Each project gets its own container, its own subdomain, its own resource limits. Nothing bleeds into anything else.
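A stripped-down sketch of that layout, to give you the shape of it (the container names `portfolio` and `zahki-ghost` match the Caddyfile entries; everything else here is simplified, not my exact file):

```yaml
# docker-compose.yml -- simplified sketch; the real file has more services,
# volumes, and environment variables
services:
  caddy:
    image: caddy:2-alpine
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    restart: unless-stopped

  portfolio:
    build: ./portfolio        # Nginx Alpine serving static files
    restart: unless-stopped

  zahki-ghost:
    build: ./zahki-ghost      # app listening on 3000 inside the network
    restart: unless-stopped
```

Notice there's not a single `ports:` entry. Nothing publishes a port to the host; Caddy reaches the containers over Docker's internal network, and as you'll see below, the outside world never touches the host directly either.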
Here's what the stack looks like:
Internet → Cloudflare → cloudflared tunnel → Caddy → containers
And here's what's running on it:
alkhabaz.dev — my portfolio (Nginx Alpine, static files)
ghost.alkhabaz.dev — Zahki Ghost (encrypted messaging)
vault.alkhabaz.dev — Zahki Vault (secret sharing)
radar.alkhabaz.dev — Zahki Radar (uptime monitoring)
relay.alkhabaz.dev — Zahki Relay (HTTP request inspector)
lang.alkhabaz.dev — Azkal Lang (programming language playground)
ledger.alkhabaz.dev — Azkal Ledger (invoicing tool)
pulse.alkhabaz.dev — Azkal Pulse (site audit docs)
cli.alkhabaz.dev — Azkal CLI (scaffolding tool docs)
galaxy.alkhabaz.dev — MyGalaxy (3D solar system)
blocks.alkhabaz.dev — BlockBlaster (arcade game)
viper.alkhabaz.dev — Viper (multiplayer snake)
status.alkhabaz.dev — Uptime Kuma (monitoring dashboard)
Fifteen containers. One machine. One compose file.
The Caddyfile that ties it all together is dead simple. Each subdomain is just a few lines — listen on this hostname, proxy to that container:
```
http://alkhabaz.dev:80 {
    reverse_proxy portfolio:80
}

http://ghost.alkhabaz.dev:80 {
    reverse_proxy zahki-ghost:3000
}
```
That's it. No complex routing logic, no rewrites, no middleware chains. Caddy handles it all on port 80 internally, and I'll get to why HTTPS isn't Caddy's job in a second.
Making It Public (Without Opening a Single Port)
This is the part where most self-hosting guides tell you to log into your router, forward ports 80 and 443, set up dynamic DNS, and pray your ISP doesn't block inbound traffic. I didn't do any of that.
I used Cloudflare Tunnels instead.
The idea is simple: instead of opening ports on your network and letting the internet reach in, you run a small agent called cloudflared on your server that reaches out to Cloudflare. It creates a persistent encrypted tunnel. Cloudflare then routes traffic from your domain through that tunnel to your server. From the outside, it looks like your site is hosted on Cloudflare's edge network. From the inside, your server has zero open ports. Nothing is exposed.
In my compose file, it's just another container:
```yaml
cloudflared:
  image: cloudflare/cloudflared:latest
  command: tunnel --no-autoupdate run --token ${CF_TUNNEL_TOKEN}
```
I pointed my domain's DNS to Cloudflare, configured the tunnel in their dashboard to route each subdomain to Caddy, and that was it. No port forwarding. No dynamic DNS. No Certbot. Cloudflare terminates TLS at their edge, which is why Caddy only listens on port 80 — the encrypted connection happens between the user's browser and Cloudflare, and the tunnel handles the rest.
I pulled up my phone, disconnected from WiFi, typed in my domain, and my site loaded. Served from the HP desktop under my desk. That was the moment it clicked.
Deploying Changes
I wrote a small deploy script that keeps things simple:
```bash
./scripts/deploy.sh            # deploy everything
./scripts/deploy.sh portfolio  # deploy just one service
```
For a full deploy, it pulls the latest code from every project's repo, rebuilds all the containers, and brings them back up. For a single service, it just rebuilds and restarts that one container. No CI/CD pipeline, no webhook triggers — I push my code, SSH in, run the script, and it's live.
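The logic is short enough to sketch here. This isn't my exact script (the repo layout, the `projects/` path, and the `DRY_RUN` switch are illustrative additions), but it captures what the two modes do:

```shell
#!/bin/sh
# Sketch of scripts/deploy.sh -- paths and service names are illustrative.
# Usage: deploy.sh [service]   Set DRY_RUN=1 to print commands instead of running them.
set -e

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

deploy() {
  if [ -n "$1" ]; then
    # Single service: rebuild and restart just that container
    run docker compose build "$1"
    run docker compose up -d "$1"
  else
    # Full deploy: pull every project repo, then rebuild the whole stack
    for repo in projects/*/; do
      run git -C "$repo" pull --ff-only
    done
    run docker compose build
    run docker compose up -d
  fi
}

# the actual script ends with:  deploy "$@"
```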
Backups
Some of my projects use SQLite databases — Ghost, Vault, Radar, and Uptime Kuma all store data locally. I wrote a backup script that runs every night and keeps 30 days of history:
```bash
sqlite3 "$src" ".backup '${BACKUP_DIR}/${db}.db'"
```
Anything older than 30 days gets cleaned up automatically. It's not fancy, but it means if something breaks, I can roll back to any day in the last month.
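Expanded a little, the shape of that backup job looks like this. The directory layout and `BACKUP_ROOT` path are assumptions, but the two pieces are real: `sqlite3`'s `.backup` command takes a consistent snapshot even while the app has the database open (which is why it's safer than a plain `cp`), and `find -mtime +30` handles retention:

```shell
#!/bin/sh
# Sketch of the nightly backup job -- BACKUP_ROOT and DB paths are illustrative.
set -e
BACKUP_ROOT="${BACKUP_ROOT:-$HOME/backups}"

backup_db() {
  # Consistent online snapshot of one SQLite database into today's folder
  src="$1"
  db="$(basename "$src" .db)"
  dest_dir="$BACKUP_ROOT/$(date +%F)"
  mkdir -p "$dest_dir"
  sqlite3 "$src" ".backup '${dest_dir}/${db}.db'"
}

prune_old() {
  # Anything older than 30 days gets cleaned up automatically
  find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
}
```

A cron entry (or a systemd timer) runs it each night: loop `backup_db` over each database, then call `prune_old`.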
Resource Limits
Since everything shares one machine, I set memory and CPU limits on every container. The heavier apps like Azkal Lang and the Zahki tools get 256–512MB and a quarter of a CPU. The static sites and doc pages get 32MB and barely any CPU — they're just Nginx serving HTML. Viper gets a bit more since it runs a real-time multiplayer server. Nothing is allowed to go rogue and starve everything else.
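In compose terms, a limit looks roughly like this per service. The service names and exact numbers below are illustrative (pulled from the ranges above), and depending on your compose version you may use the older `mem_limit`/`cpus` keys instead of `deploy.resources`:

```yaml
# Per-service limits -- a sketch, not my exact values
services:
  azkal-lang:
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "0.25"

  pulse-docs:
    deploy:
      resources:
        limits:
          memory: 32M
          cpus: "0.05"
```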
Was It Worth It?
For most people, the answer is probably no. GitHub Pages and Vercel exist for a reason. They're fast, they're free, and they just work.
But for me, it was never about convenience. It was about ownership. I wanted to know that when someone visits my site, it's being served from a machine I set up, running software I configured, on a network I control. There's no middleman. No platform that could pull the rug.
The total cost of this setup is zero dollars. The HP was already mine. Docker is free. Caddy is free. Cloudflare's tunnel is free. The only recurring cost is the domain, which I was paying for anyway.
Fifteen containers. Thirteen subdomains. Zero open ports. All of it managed with a compose file, a deploy script, and a Caddyfile shorter than most README files.
If you've got an old machine lying around and you've been curious about self-hosting, go for it. The barrier to entry is way lower than you think. An afternoon, a USB stick, and some terminal commands — that's really all it takes.
If you have questions about any part of the process, feel free to reach out. I'm always down to talk or help.