
yep

Posted on • Originally published at yepchaos.com

From Fly.io to On-Premise Kubernetes

Everything works on localhost. Exposing it to the internet is a different problem. I went through Fly.io, Linode managed Kubernetes, and eventually landed on an on-premise cluster. Each step had tradeoffs in both cost and operational complexity.

Containers and Kubernetes

Before getting into the details, here is the short explanation.

A container is a lightweight, isolated unit that packages an application with its runtime and dependencies. Unlike virtual machines, containers share the host OS kernel, which makes them efficient in terms of startup time and resource usage. Docker is the standard tooling: define a Dockerfile, build an image, and run it across environments with minimal variation.
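To make "define a Dockerfile, build an image" concrete, here is a minimal sketch for a hypothetical Node.js backend — the base image, port, and entrypoint are placeholders, not the actual service from this post:

```dockerfile
# Hypothetical service; adjust base image, port, and entrypoint.
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and declare how it runs.
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Build it once with `docker build -t my-service .` and the same image runs the same way on a laptop, on Fly.io, or inside a Kubernetes pod.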

The problem: once you have multiple containers across multiple machines, managing them manually is chaos. Which machine runs what? What happens when a container crashes? How do you roll out updates without downtime?

Kubernetes solves this. It's an orchestration platform — you describe what you want (3 replicas of this service, always keep them running, expose them on this port) and Kubernetes figures out how to make it happen. The key building blocks:

  • Pod — the smallest unit, one or more containers running together
  • Deployment — describes how many pods to run and how to update them
  • Service — a stable network endpoint that routes traffic to the right pods
  • Ingress — routes external HTTP traffic into the cluster

The big win: if a pod crashes, Kubernetes restarts it. If a node goes down, it reschedules pods elsewhere. You stop thinking about individual machines and start thinking about desired state.
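As a sketch of what "describing desired state" looks like in practice, here is a hypothetical Deployment running 3 replicas of an image, plus a Service in front of it. Names, image, and ports are illustrative:

```yaml
# Illustrative manifest; image name and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                 # desired state: always 3 pods
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api                  # routes to any pod carrying this label
  ports:
    - port: 80
      targetPort: 3000
```

Apply it with `kubectl apply -f api.yaml`. If a pod dies, the Deployment controller notices the replica count dropped below 3 and starts a replacement; the Service keeps routing to whichever pods are healthy.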

Phase 1: Fly.io + Vercel

For the backend I started with Fly.io. Easy to deploy, cheap during development — I stayed under their $5 threshold. For the frontend, Vercel. Push to GitLab, it deploys automatically. Vercel still handles the frontend today, no complaints, and Fly.io manages containers just fine.

Fly.io was fine for stateless services. The problem was stateful ones — ScyllaDB and NATS.

Running stateful services on Kubernetes properly requires operators — controllers that understand the specific lifecycle of a piece of software. ScyllaDB has its own operator that handles cluster bootstrapping, repairs, scaling, backup, and topology changes. NATS has one too. On platforms like Fly.io, running Kubernetes operators isn't possible because you don't have access to a Kubernetes control plane, so all of that lifecycle management falls back on you: manual bootstrapping, manual repairs, manual scaling. I spent more time managing the platform than building the product.
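To show what an operator buys you, here is the general pattern: you install the operator's controller, then declare the cluster you want as a custom resource. The resource below is a made-up stand-in — the API group, kind, and fields are hypothetical, not the real ScyllaDB or NATS schema (check each operator's docs for the actual CRDs):

```yaml
# Hypothetical custom resource illustrating the operator pattern.
# Real operators (ScyllaDB, NATS) define their own CRDs with different fields.
apiVersion: databases.example.com/v1
kind: DatabaseCluster
metadata:
  name: scylla
spec:
  members: 3              # the operator bootstraps each member in order
  version: "5.4"
  repairs:
    schedule: "0 3 * * *" # the operator runs periodic repairs, not you
```

You declare the cluster you want; the operator's controller watches the resource and performs the software-specific steps — bootstrap order, repairs, topology changes — that plain Deployments know nothing about.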

Phase 2: Linode Managed Kubernetes

I needed real Kubernetes. Linode offered $100 credit on signup, which was enough to experiment properly.

The setup: 3 worker nodes (1 CPU, 2GB RAM, 50GB storage each) plus a load balancer. Linode's managed Kubernetes is free for the control plane — we only pay for nodes and networking.

  • 3 nodes × $12/month = $36
  • Load balancer = $10/month
  • Total: $46/month

I used Terraform with Linode's provider to provision the cluster — infrastructure as code, version controlled, easy to redeploy. Once the cluster was up, I could run operators properly. ScyllaDB and NATS behaved the way they were supposed to.
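The Terraform side is small. A hedged sketch using the Linode provider's `linode_lke_cluster` resource — the label, Kubernetes version, region, and node type below are examples matching the setup above, not guaranteed-current values (check the provider docs):

```hcl
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }
}

# API token supplied via the LINODE_TOKEN environment variable.
provider "linode" {}

resource "linode_lke_cluster" "main" {
  label       = "test-cluster"
  k8s_version = "1.28"           # example version
  region      = "eu-central"     # example region

  pool {
    type  = "g6-standard-1"      # 1 CPU / 2GB / 50GB, as described above
    count = 3
  }
}

# The cluster's kubeconfig comes back base64-encoded.
output "kubeconfig" {
  value     = linode_lke_cluster.main.kubeconfig
  sensitive = true
}
```

`terraform apply` brings the whole cluster up; `terraform destroy` tears it down when the credit math stops working.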

When the $100 credit ran out, $46/month was hard to justify for a project still in testing. So I started thinking about on-premise.

Phase 3: On-Premise Kubernetes

The same teacher from my IOI days gave me access to three VMs on his company's infrastructure, free. Each machine had 8 cores, 8GB RAM, and 50GB storage.

I set up my own Kubernetes cluster on these using k3s — a lightweight Kubernetes distribution that works well for on-premise and resource-constrained environments. I'll write a dedicated post on the k3s setup, but the short version: it's full Kubernetes without the overhead, and it runs fine on these VMs.
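For reference, a multi-node k3s setup boils down to a handful of commands. These use the official install script and its documented defaults (token path, port 6443); verify against the k3s docs for your version:

```shell
# On the first VM: install k3s as the server (control plane + worker in one).
curl -sfL https://get.k3s.io | sh -

# Read the join token the server generated.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each remaining VM: join as an agent, pointing at the server.
curl -sfL https://get.k3s.io | \
    K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -

# Back on the server: confirm all three nodes registered.
sudo k3s kubectl get nodes
```

That's the whole bootstrap — the dedicated post will cover networking and storage on top of this.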

Full control over the environment: I can run any operator, configure networking however I need, with no platform restrictions. The tradeoff is that there's no managed control plane — if something breaks at the infrastructure level, I fix it myself. That's acceptable. It's free, and it only costs some of my time.

I deployed everything: ScyllaDB cluster, NATS, PostgreSQL, Redis, the backend services. All running on three VMs, costing nothing.

Where Things Stand

  • Frontend: Vercel, still works great
  • Backend + all services: On-premise k3s cluster

On-premise infrastructure requires more operational effort but provides full control and effectively zero cost when hardware is available. Next I'll write about the actual k3s setup — how I configured the cluster, networking, storage, and got everything running.
