I am DevOps Engineer Working With K8s A Lot, Ask Me Anything!

Joe Hobot ・1 min read

I will try to answer most of your questions around DevOps or Kubernetes. I've been working with K8s for almost 2 years now and have nothing but love for it.



Hi Joe, thanks for this great initiative.
I know that Kubernetes was designed with clustering and scalability in mind.

How do you go about adding more hosts to handle more load on your applications?

It seems trivial, but I would be happy to hear some real-life feedback about this.
Thanks!


On my phone, so don't expect much more than this:

K8s can be installed with the cluster autoscaler (github.com/kubernetes/autoscaler). In real life, people on cloud providers set up the autoscaler, and most autoscaling is driven by metrics and how pods are configured. Look up the Kubernetes HPA and VPA (horizontal/vertical pod autoscaling).

Example: each node can hold only 40 pods. Your deployment kicks in and asks for an additional 300 pods. Now the autoscaler kicks in and creates worker nodes accordingly.

The same goes for metrics like CPU/RAM, etc. That's why it's important to set resource boundaries on namespaces.
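As a rough sketch, an HPA that scales on CPU might look like the manifest below (names and thresholds are made up for illustration; depending on your cluster version the API group may be `autoscaling/v2beta2` instead of `autoscaling/v2`):

```yaml
# Hypothetical HPA: keep average CPU around 70%,
# scaling the "web" deployment between 3 and 300 replicas.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 300
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When the HPA scales pods beyond what the current nodes can schedule, the pending pods are what triggers the cluster autoscaler to add worker nodes.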

Hope that answers the question; if not, I'll give some additional info when I get on my laptop.


So it seems that Kubernetes is optimized for public clouds, then?


How do you develop containers on your local machine? I think Kubernetes is too big a platform for writing application code while developing containers.


Not OP, but there are several tools for running local K8s clusters on your workstation. I'm inclined to agree, though: K8s is too heavy and has too much going on for local use. It uses the same container format and runtime as Docker, so I've found Docker and docker-compose are usually preferable unless you're integrating with K8s directly.


Are you looking to develop a plain container, or a container that would work with K8s? Say you integrate it with Helm.

If it's just working with containers, then plain Docker is sufficient on your local machine. But as Gareth mentions below, minikube is a good way to start locally if you want a feel for how things would work on your K8s infrastructure. There are also lightweight K8s distributions; when I get off my phone I'll send you a link.

How did I develop?
Well, early on when I started I used minikube, because I didn't want to mess around with the extras of a lab K8s cluster. But now that it's all pretty much templated and we have multi-cluster environments, I just use K8s directly.

Update: Lightweight K8s k3s.io/


Hi, I'm a sysadmin working with traditional VM clusters. In your opinion, what would be the biggest selling point of moving towards containers and later on to K8s? And what would be the biggest drawback?




It really all depends on your business and the applications you are working on. For sysadmins, for example: instead of having some VM with a full-blown OS like Ubuntu 18.04 that only does minimal jobs like cron jobs or parsing data, such things can be converted into containers, which are easier to manage than patching, upgrading, and installing different tools on Ubuntu, plus maintaining the applications tied to that VM. For that alone I think it's a good selling point, because you take away the OS layer and do only what your app should be doing. And then there are things like Lambda functions, where it gets even better: you run just code instead of a whole container.
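For example, a VM cron entry could become a Kubernetes CronJob. A minimal hypothetical sketch (the image name, command, and schedule are placeholders, and older clusters use `batch/v1beta1`):

```yaml
# Hypothetical CronJob replacing a VM cron entry:
# runs a data-parsing job every night at 2am.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: parse-data
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: parser
              image: registry.example.com/parser:1.0   # placeholder image
              command: ["python", "parse.py"]          # placeholder command
```

The OS layer, patching, and tool installation all move into the container image build instead of living on a long-running VM.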

The biggest drawback, I think, is stateful applications and databases. Such things are OK to have on a K8s cluster and can even be containerized, but the container and orchestration infrastructure for them is not fully there yet, and where it is, it can be unstable and not worth it. So if you are looking for full-fledged speed and latency minimization, a VM cluster or on-prem system is better, because you minimize the network hops.

Another one is troubleshooting. When things go down, if you are not experienced you could be looking into 10-15 different things that might have gone wrong. On a VM cluster you look at only 4-5 failure points: OS, storage, network, application.


Thanks Joe! I was exploring the container/Docker world in the last few weeks, and I'm glad to see that your opinion sort of resonates with the one I was forming.

I'm already looking at specific cases where containers would be more efficient than the usual VM approach, like small Python apps and such. But, and that's a big but, the majority of what I have to deal with are database clusters, and the ephemeral nature of containers had me thinking it would be really hard to containerise DB clusters, especially ones with sharding and replicas (take this with a grain of salt, because I'm a Docker noob).

Databases in containers are still young. Maybe in a few years the stability and speed will come up to par, but nothing beats the latency and transactions of a self-hosted or cloud-hosted server. While I do run some apps with containerized databases like MongoDB, etc., I've been burned a few times; the rule is simply "don't touch it" lol.

Yeah, if you can offload some hosts/apps with containers, that's a great start.



I am learning Kubernetes in my lab currently, and I would be interested in hearing what you think about my "Getting Started with Kubernetes (at home)" series of posts and my related documentation on GitLab.

What is your stance on Helm? I find it a lot easier to manage Helm charts than raw Kubernetes manifests, and I tend to gravitate towards a Helm-based deployment for most things if possible. I feel like it is the future of software deployment on Kubernetes.



I'll take a look at the link you posted, but yes, I work 100% with Helm, all tied into CI/CD. It's much easier to manage hundreds of apps and deployments that way. Some people also choose Kustomize.
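Part of the appeal is that per-environment differences collapse into small values files that CI/CD applies on deploy. A hypothetical `values-prod.yaml` override for a chart might look like this (all names and numbers invented for illustration):

```yaml
# Hypothetical per-environment overrides for a chart;
# CI/CD would run something like:
#   helm upgrade --install myapp ./chart -f values-prod.yaml
replicaCount: 4
image:
  repository: registry.example.com/myapp
  tag: "1.4.2"
resources:
  requests:
    cpu: 250m
    memory: 256Mi
ingress:
  enabled: true
  host: myapp.example.com
```

The chart's templates stay identical across environments; only the values files differ, which keeps hundreds of deployments manageable.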


How long does it take to learn the nuts and bolts of K8s and get cozy with it?


It depends on how much time you devote to it and what types of things you want to do. I can tell you that the deeper you go into it, the more complex it gets.

I felt comfortable within about 5-6 months, but even now, close to 2 years of working with it, there are still things you discover that turn out to be more useful than what you are accustomed to.

In general I would say 2-3 hrs a day for 3-4 months gets you to the point where you can create a K8s cluster with no problem, install containers, etc. But to get to the point where you work with multi-cluster setups, horizontal/vertical scaling, network policies, and so on, that just takes more time.


There are tons of resources out there. Lately I just go to kubernetes.io to get specs and resources, but for someone just starting out, that site is a bit overwhelming.

But if you want all of the above, this Google Sheet contains everything you need to know:

kubernetes resources google sheet


What do you think about the release pace of K8s in relation to cloud vendors' support, specifically Azure? Is it kind of fast-paced? Should we be worried that our current version of K8s will be phased out too soon?


Yes and no. I'm assuming you are talking about K8s managed by cloud providers vs hosting and managing it yourself?

I think the release pace is actually good, because you know that things are being fixed and new features are being added. That said, in my personal experience, being on release 1.9.x vs 1.13.x is not good, because there is a large overlap in upgrades, and most important is security. As far as cloud vendors go, being one or two releases behind is somewhat painful, but at least you know the releases they put out are stable and work with their infrastructure. But say you have self-managed K8s: testing everything against your app and infrastructure could take you a month or two, especially if you are working with many layers of networking and storage infrastructure.

Hope that helps. Feel free to ask more if this wasn't explained well enough, as I can go into greater detail too :)


I'm a fullstack dev who has always enjoyed working with servers, and I just started learning DevOps. As a DevOps engineer yourself, what path would you recommend, and what are some general best practices that can help in the long run?

Thanks ❤️


Can you help me clear up a doubt of mine?
Suppose I have multiple containers, including MySQL. On a single node I can easily create multiple MySQL containers using the same volume. But what if I want the same thing to work across a multi-node cluster? What is the best way to maintain the database container volume across all the nodes?

What do we do in such a scenario? Thanks!


Which cloud provider do you use? If you use shared storage, the volume should be accessible by all nodes, so it does not matter where the container itself lands, unless you pin it with labels.

Say, in my example with the ELK stack, an EBS volume is created and attached to the container as a volume. If the Elasticsearch container goes down and comes up on another host, the volume is still available, since it's a shared EBS volume.
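In K8s terms this is usually a PersistentVolumeClaim backed by a StorageClass, so the claim (and the cloud volume behind it) follows the pod when it is rescheduled. A minimal sketch, with illustrative names and sizes:

```yaml
# Hypothetical claim for dynamically provisioned block storage (e.g. EBS).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data
spec:
  accessModes: ["ReadWriteOnce"]   # EBS-style volumes attach to one node at a time
  storageClassName: gp2            # provider-specific StorageClass name
  resources:
    requests:
      storage: 50Gi
```

Note that EBS-style volumes are ReadWriteOnce, so "shared" here means the volume can be re-attached wherever the pod lands, not mounted by many nodes at once. For a database with multiple replicas, the usual pattern is a StatefulSet that gives each replica its own claim.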


I've been struggling with the decision to move to K8s over the past few months. We have a full product team of about 6 people, with basically me and two other developers doing most of the operations.

My primary reason to move would be to easily cordon, upgrade, and remove servers. However, I find the whole management of K8s just takes too much mental energy to "get right": authentication, authorization, namespaces, 100 YAML files, secrets, ConfigMaps, etc. I'm all a bit overwhelmed.

Maybe my question would be: is K8s reasonable for such a small team, and if so, how do you suggest we organize our cluster without needing to hire a dedicated sysadmin?


Well, my team started with about 4-5 people; now we are at 8-9, and we manage different teams and onboard them daily from other platforms onto K8s. I can tell you it is not an easy start, that's for sure. At the very beginning getting it going seems easy, but once you start with automation and various hocus-pocus things, it gets heavy. Our team is very diverse in K8s knowledge; each of us is strongest in some part of K8s, but I would say nobody is a K8s guru, because let's face it, it's tough to be "that" person.

So I think, with the right mindset, start by sketching out infrastructure for a dev or lab environment. Say 1 master and a few small worker nodes. Then hook it up to CI/CD and see if you can get some app to run via Helm or Kustomize. Once you get the hang of it, dig deeper into autoscaling vertically/horizontally, security tightening with something like Calico, canary deploys, etc., which can be done right with Istio.

I mean, I could go on and on, but everyone seems to do things differently, because K8s is flexible enough for any team. As long as it's not just you and 8 devs who want you to finish it by Friday.


Hey Joe,

I have an application deployed on K8s, and it needs some type of fast persistent storage to function fully.

To me it made sense to also deploy the storage on K8s; however, I heard from some of my friends/contacts that there can be issues with persistent volumes on K8s: when pods fail, the volumes do not get released quickly enough, so there can be significant downtime before the storage is fully recovered.

I have seen a scattering of other posts talking about this, but I do not know what the truth is, or whether it is a good idea or not.

Any help would be appreciated.


Hey Joe, thank you for the DevOps roadmap guide. I'm a beginner in DevOps.
I wanted to ask, where/how do I get Linux assignments to get hands-on Linux-admin skills every day? How'd one become an expert at Linux?


Not a question but this is the first time I have seen Kubernetes abbreviated to "K8". I actually thought you were talking about the AMD K8 processor and that got me all sorts of curious. 🤷‍♂️


Lol. Yeah, lots of people switched from saying Kubernetes to K8s.


On Windows, why is enabling Hyper-V required to run Docker? Isn't Hyper-V something that powers virtual machines? Theoretically, Docker is not a VM thing, so it should not need Hyper-V, right?


How has the kubernetes ecosystem evolved in the past two years?


I think because it has been hyped up a lot and many companies jumped onto the "new shiny thing", you now have many more resources to work from.

The support online has been amazing, and I am talking about issues that have not even been out for a few hours. There are 2-3 weekly K8s meetings; I usually go to one a week or watch it on the K8s YouTube channel. Then there is the Slack channel, /r/kubernetes, and a bunch more sites.

People hear that K8s fixes a lot of things, but in reality anything can fix things if you put work into it; it's just that K8s has its own ways of doing things.

The more time you spend in K8s doing fine-tuning and wanting "special" features, the more complex it gets over time, especially if you want it all 100% automated.

In other words it's a beast that needs lots of food.


What is the cost of running kubernetes?


That depends on what you want to run. My smallest K8s cluster, 1 node and 1 master, was $7.


I have a confession to make : My first thought when reading the title was "Why would anyone use an Athlon64 in a production server in 2019" ?


At what point would you say Kubernetes is worth the investment for an organization?


That's up to the organization and what services they want to provide. K8s can be expensive if it's not done right, especially if you don't have people who are already knowledgeable about K8s.

K8s is easy to install and work with at first, but once you dig deeper it's much more complex than you can imagine, though it works really well. I think most of the time is spent automating things, because if you do one-offs and don't document them, there is really no point in having K8s, given that it's such a large orchestration tool.

Now, say you have 40-50 Docker containers currently running somewhere; I think even with 10-15 containers, a small K8s cluster would be worth the investment. For example, say you are running 2 WordPress sites. Sure, all of that can be containerized, but do you really need K8s to do it? No! But what if you are a company that wants to expand beyond 2 WordPress sites and provide WordPress hosting?

That is where K8s comes into play nicely, because you can duplicate things very easily and do canary deploys with zero downtime.
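The zero-downtime part typically comes from the Deployment's rolling-update strategy combined with readiness probes. A hypothetical sketch (all names, replica counts, and the image tag are illustrative):

```yaml
# Hypothetical strategy: never take an old pod down before a new one is ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep full capacity during the rollout
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:5.2   # placeholder tag
          readinessProbe:
            httpGet:
              path: /
              port: 80
```

With `maxUnavailable: 0`, traffic only shifts to a new pod once its readiness probe passes, which is what makes the canary-style rollout safe.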

All in all, if you don't have many applications to containerize, then by the time you learn K8s and work with it in full-on mode, it might not be worth it. As I said above, I've seen people run a K8s cluster for 1 WordPress site. Why? I don't know! Because it's cool? OK...


Actually, I'm using Docker. I have 1 Node.js container, 1 MongoDB container, and 1 Elasticsearch container. Should I use K8s? And why?


For you to learn K8s and work with it a bit, yeah, sure, why not.

But if this is something that would go into production with, say, minimal utilization, why bother with K8s?

What you are looking at is at least 3 servers, and you will have to manage those pretty much constantly to keep them up to date.

OR you could use something like AWS Elastic Beanstalk and have no servers to manage :) plus it's much cheaper.