Hi Joes, thanks for this good initiative.
I know that Kubernetes has been designed with clustering/scalability in mind.
How do you manage to add more hosts to handle more load on your applications?
It seems trivial, but I'd be happy to hear some real-life feedback about this.
Thanks!
On my phone, so don't expect much more than this:
K8s can be installed with the Cluster Autoscaler (github.com/kubernetes/autoscaler). In real life, people running on cloud providers set up their autoscaler there; most of the autoscaling is driven by metrics and by how the pods are configured. Look up the Kubernetes HPA and VPA (horizontal/vertical pod autoscaling).
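To make the HPA part concrete, here's a minimal sketch of a HorizontalPodAutoscaler manifest that scales a deployment on CPU utilization. The deployment name `web` and the 70% target are just placeholder values for illustration:

```yaml
# Sketch: scale the "web" deployment between 3 and 50 replicas,
# targeting ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical deployment name
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note the HPA only works if the pods declare CPU requests, since utilization is computed relative to the request.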
Example: each node can hold only 40 pods. Your deployment kicks in and requests an additional 300 pods. Now the autoscaler kicks in and creates worker nodes accordingly.
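On a cloud provider, the node side of that example is handled by the Cluster Autoscaler watching for pods that can't be scheduled. A rough sketch of how it's commonly run (the node-group name and bounds here are made-up examples):

```yaml
# Sketch: Cluster Autoscaler container args on AWS.
# "k8s-worker-asg" is a hypothetical Auto Scaling Group name;
# 1:20 is the min:max node count the autoscaler may scale between.
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --nodes=1:20:k8s-worker-asg
      - --expander=least-waste   # pick the node group that wastes fewest resources
```

When the 300 extra pods sit in Pending because no node has room, the autoscaler grows the node group; when they're gone, it scales the nodes back down.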
The same goes for metrics like CPU/RAM; that's why it's important to set boundaries on namespace resources.
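Those namespace boundaries are usually set with a ResourceQuota. A minimal sketch (the namespace name and the limits are arbitrary example values):

```yaml
# Sketch: cap total resource usage in the "team-a" namespace
# so one team's deployments can't trigger unbounded node scaling.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a      # hypothetical namespace
spec:
  hard:
    pods: "100"
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
```

Pods created beyond these totals are rejected at admission time, which keeps a runaway deployment from scaling the cluster (and the bill) indefinitely.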
Hope that answers the question; if not, I'll give some additional info when I get on my laptop.
So it seems that Kubernetes is optimized for public clouds, then?