Joe Hobot

I am DevOps Engineer Working With K8s A Lot, Ask Me Anything!

I will try to answer most of your questions around DevOps or Kubernetes. I've been working with K8s for almost 2 years now and have nothing but love for it.

Latest comments (40)

Oswin Frans

Hey Joe,

I have an application deployed on k8s, and it needs some kind of fast persistent storage to function fully.

To me it made sense to also deploy the storage on k8s. However, I've heard from some friends/contacts that there can be issues with persistent volumes on k8s: when pods fail, the volumes may not get released quickly enough, so there can be significant downtime before the storage is fully recovered.

I have seen a scattering of other posts talking about this, but I don't know what the truth is or whether it is a good idea.

Any help would be appreciated.

Arun Kumar

On Windows, why is enabling Hyper-V required to run Docker? Isn't Hyper-V something that powers virtual machines? In theory, Docker is not a VM kind of thing, so it shouldn't need Hyper-V, right?

KomalBawdekar

Hey Joe, thank you for the DevOps roadmap guide. I'm a beginner in DevOps.
I wanted to ask, where/how do I get Linux assignments to build hands-on Linux admin skills every day? How does one become an expert at Linux?

Timothée Clain

Hi Joe, thanks for this good initiative.
I know that Kubernetes has been designed with clustering/scalability in mind.

How do you manage adding more hosts to handle more load on your applications?

It seems trivial, but I would be happy to hear some real-life feedback about this.
Thanks!

Joe Hobot • Edited

On my phone, so don't expect much more than this:

K8s can be installed with the cluster autoscaler (github.com/kubernetes/autoscaler). In real life, people on cloud providers set up their autoscaler, and most of the autoscaling is driven by metrics and by how the pods are configured. Look up the Kubernetes HPA/VPA (horizontal/vertical pod autoscaling).

Example: each node can hold only 40 pods. Your deployment kicks in and asks for an additional 300 pods. Now the cluster autoscaler kicks in and creates worker nodes accordingly.
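The pod side of that is the HPA. Just as a rough sketch (the names and numbers here are placeholders, not from our setup), one that scales a deployment on CPU looks something like this:

```yaml
# Hypothetical HPA: scales a deployment named "my-app" between 3 and 300
# replicas based on average CPU utilization. Names and numbers are placeholders.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 300
  targetCPUUtilizationPercentage: 70
```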

The same goes for metrics like CPU/RAM, etc. That's why it's important to set boundaries on namespace resources.
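Those boundaries are basically just a ResourceQuota per namespace, roughly like this (the namespace name and numbers are made up for illustration):

```yaml
# Hypothetical ResourceQuota capping total CPU/RAM and pod count for one
# team's namespace; the namespace and the numbers are made up.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    pods: "200"
```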

Hope that answers the question; if not, I'll give some additional info when I get on my laptop.

Timothée Clain

So it seems that Kubernetes is optimized for public clouds, then?

James Turner

Not a question but this is the first time I have seen Kubernetes abbreviated to "K8". I actually thought you were talking about the AMD K8 processor and that got me all sorts of curious. 🤷‍♂️

Joe Hobot

Lol. Yeah lots of ppl switched from saying Kubernetes to k8s

Michiel Sikkes

I've been struggling with deciding to move to k8s for the past few months. We have a full product team of about 6 people, with basically me and two other developers doing most of the operations.

My primary reason to move would be to easily cordon, upgrade, and sunset/remove servers. However, I find that the whole management of K8s just takes too much mental energy to "get right": authentication, authorization, namespaces, 100 YAML files, secrets, ConfigMaps, etc. I'm all a bit overwhelmed.

Maybe my question would be: is K8s reasonable for such a small team, and if so, how do you suggest we organize our cluster without needing to hire a dedicated sysadmin?

Joe Hobot

Well, my team started with about 4-5 people; now we are at 8-9, and we all manage different teams and onboard them daily from different platforms onto K8s. I can tell you it is not an easy start, that's for sure. At the very beginning, getting it going seems easy, but once you start with automation and the different hocus pocus it gets heavy. Our team is very diverse in K8s knowledge; each of us is strongest in certain parts of K8s, but nobody, I would say, is a K8s guru, because let's face it, it is tough to be "that" person.

So I think, with the right mindset, start by sketching out infrastructure for a dev or lab environment. Say 1 master and a few small worker nodes. Then hook it up to CI/CD and see if you can get some app to run via helm or kustomize. Once you get the hang of it, dig deeper into autoscaling vertically/horizontally. Security can be tightened with something like kali, and canary deploys etc. can be done right with istio.
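If it helps as a starting point, a bare-bones kustomize setup for a dev environment is just a kustomization.yaml pointing at your manifests, roughly like this (the file names, namespace, and prefix are only examples):

```yaml
# Hypothetical kustomization.yaml for a dev overlay; the manifest file
# names, namespace, and prefix are only illustrative.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
namePrefix: dev-
commonLabels:
  env: dev
resources:
  - deployment.yaml
  - service.yaml
```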

I mean, I could go on and on, but everyone seems to do stuff differently because K8s is flexible for any team. As long as it's not just you and 8 devs who want you to finish it by Friday.

Jax Gauthier

Hi,

I am learning Kubernetes in my lab currently, and I would be interested in hearing what you think about my "Getting Started with Kubernetes (at home)" series of posts and my related documentation on GitLab.

What is your stance on Helm? I find it a lot easier to manage Helm charts than Kubernetes manifests directly, and I tend to gravitate towards a Helm-based deployment for most things if possible. I feel like it is the future of software deployment on Kubernetes.

Thanks!

Joe Hobot • Edited

I'll take a look at the link you posted, but yeah, I work 100% with helm, all tied into CI/CD. It's much easier to manage hundreds of apps and deployments that way. Some also choose kustomize.
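For a feel of what the pipeline actually overrides per app/environment, it usually boils down to a small values file like this (the keys are entirely chart-specific, so treat these as placeholders):

```yaml
# Hypothetical per-environment values file passed to helm from CI/CD;
# the keys depend on the chart, so everything here is a placeholder.
replicaCount: 3
image:
  repository: registry.example.com/my-app
  tag: "1.4.2"   # typically set by the pipeline per build
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```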

taragurung

Can you help me clear up this doubt of mine?
Suppose I have multiple containers, MySQL among them. When I am on a single node, I can easily create multiple MySQL containers using the same volume. But we want the same thing to work across a multi-node cluster. What is the best way to maintain the database volume across all the nodes?

What do we do in such a scenario? Thanks!

Joe Hobot

Which cloud provider do you use? Because if you keep the container itself stateless, the volume should be accessible by all nodes, so it does not matter where the container runs, unless you pin it with labels.

Say, in my example for the ELK stack, the EBS volume is created and attached to the container as a volume; if the ES container goes down and comes up on another host, the volume is still available, as it's a shared EBS volume.
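As a rough sketch of the claim side of that, a PersistentVolumeClaim that dynamically provisions an EBS-backed volume might look like this (the storage class name and size are assumptions that depend on your cluster):

```yaml
# Hypothetical PVC for an Elasticsearch data volume; the storage class
# ("gp2") and size are assumptions and depend on the cluster setup.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-data
spec:
  accessModes:
    - ReadWriteOnce      # an EBS volume attaches to one node at a time
  storageClassName: gp2  # AWS EBS-backed class; name varies per cluster
  resources:
    requests:
      storage: 20Gi
```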

Joe Hobot

For you to learn k8s and work with it a bit, yeah sure why not.

But if this is something that would go into production with, say, minimal utilization, why bother with k8s?

What you are looking at is at least 3 servers, and you will have to manage those pretty much constantly to keep them up to date.

OR you could use something like AWS Elastic Beanstalk and have no servers to manage :) plus it's much cheaper.

Björn Ali Göransson

What do you think about the release pace of k8s in relation to cloud vendors' support, specifically Azure? Is it kinda fast-paced? Should we be worried that our current version of k8s will be phased out too soon?

Joe Hobot • Edited

Yes and no. I'm assuming you are talking about managed K8s from cloud providers vs. hosting and managing it yourself?

I think the release pace is actually good, because you know that things are being fixed and new features are being added. That said, in my personal experience, being on release 1.9.x vs 1.13.x is not good, because there is a big gap to cover in upgrades and, most importantly, in security. As far as cloud vendors go, being one or two releases back is somewhat painful, but at least you know that the releases they put out are stable and work with their infrastructure. But say you have self-managed K8s: testing everything against your app and infrastructure could take you a month or two, especially if you are working with many layers of networking and storage infrastructure.

Hope that helps. Feel free to ask more if this was not explained enough, as I can go into greater detail too :)