Sid

Kubernetes - Scheduling containers

One-shot containers

There are two types of one-shot containers, distinguished by how they handle recovery on error. The difference between the two approaches comes from a CLI modifier called "--restart".

  • Setting this modifier to "OnFailure" ensures the resource created by kubectl run is restarted if it does not exit cleanly (checked using the exit return code).
  • Setting this modifier to "Never" does nothing, regardless of how the resource / pod exits. Both behaviours are sketched below.
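Here's a minimal sketch of both behaviours (the pod name one-shot and the failing command are placeholders I picked for illustration):

# Restarted on a non-zero exit code
kubectl run one-shot --restart=OnFailure --image=alpine -- sh -c "exit 1"
# Never restarted, regardless of exit code
kubectl run one-shot --restart=Never --image=alpine -- sh -c "exit 1"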

Scheduling containers

A modifier called "--schedule" on the kubectl create cronjob command allows us to supply a cron expression, and pods will be scheduled accordingly. One-shot container modifiers can also be used along with the scheduling modifier.
This is the command for scheduling a container:

# Sample command
kubectl create cronjob <job-name> --schedule="<schedule>" --restart=OnFailure --image=<image-name> -- <command-to-container>
# E.g. Command
kubectl create cronjob test-cronjob --schedule="*/3 * * * *" --restart=OnFailure --image=alpine -- sleep 10

An example of a container with restart-on-failure and a schedule is a batch job that needs to work on a batch of data at some frequency.
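To verify the schedule is actually firing, we can watch the Jobs that the CronJob spawns (using the test-cronjob from the example above):

# Sample commands
kubectl get cronjob test-cronjob
# Watch a new Job appear every 3 minutes
kubectl get jobs --watch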

Command help params

I like commands that use intuitive, multi-level switches to provide help on each topic. Thankfully, kubectl does follow that practice: it'll not only auto-complete keywords, at every level you can just provide the "-h" switch to display help on all available CLI options, their meaning, and some good examples. I love it when developers put thought into making their CLIs really good and helpful, particularly when someone like myself (with the least possible main memory) is using it! ;)
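For instance, the help switch works at every level of the command tree:

# Help at every level
kubectl -h
kubectl create -h
kubectl create cronjob -h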

Shortcomings of Logs

  • It seems like the logs command that kubectl supports limits the number of pods it can pull data from to just 8! It does make sense, since it's internally making round trips to the API service, and anything over 8 (magic number ;) ) seems to be harmful to the API layer that's the centerpiece of the whole Kubernetes architecture.
  • As I mentioned previously, the logs command, when run without a filter, seems to latch onto a single pod (not even round-robin), so it feels like the logs command in kubectl is good for, say, local development while work is in progress, but not a great choice to run in production (a filtered example is sketched after this list).
  • My application of Kubernetes is for the cloud, specifically on AWS via their offering, EKS (AWS Managed Kubernetes Cluster), so it seems like I'll have to figure out this "logging" part really well while using EKS. (I still haven't looked at EKS myself, but deep down every fibre of my body is telling me that AWS would support an option to route all logs to CloudWatch Logs out of the box.)
  • One option to manage logs that I learned about from my Kubernetes training / course is called Stern.
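As a sketch, the filter is just a label selector (app=my-app is a hypothetical label here):

# Follow logs from all pods matching a label selector
kubectl logs -l app=my-app -f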

Stern

Stern seems like a good tool. I gave it a try; it has all the options the normal logs command has, plus some more. I would encourage everyone to try it out for local usage.
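A quick sketch of how it's invoked (my-app is a hypothetical pod-name pattern; Stern matches it as a regex across pods):

# Tail logs from every pod whose name matches the pattern
stern my-app
# Limit to a namespace and the last 50 lines per container
stern my-app --namespace staging --tail 50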

Deleting Stuff

# Multiple resources can be deleted together
kubectl delete "<resource-type>/<resource-name>" "<resource-type>/<resource-name>"
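# E.g. Command (pod/test-pod is a hypothetical name; test-cronjob is from the example above)
kubectl delete pod/test-pod cronjob/test-cronjob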

Delete does not mean delete right away! It'll still follow a grace period while pods move to the "Terminating" state and are then finally killed off.
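If you really can't wait, kubectl does support skipping the grace period (a sketch; use with care, since the pod is removed without waiting for a clean shutdown):

# Force an immediate delete, skipping the grace period
kubectl delete pod <pod-name> --grace-period=0 --force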

Phoof, this was a longer post than I had anticipated...

Oldest comments (4)

Tim Downey

It seems like the logs command that kubectl supports limits the number of pods it can pull data from to just 8!

Are you sure about this? My kubectl defaults to 5, but you can set it higher with the --max-log-requests flag. I'm using kubectl version v1.18.4, so it might be a newer option, though. 🤷‍♂️

--max-log-requests=5: Specify maximum number of concurrent logs to follow when using by a selector. Defaults to 5.

Sid

You're right! I can increase it using the max-log-requests option. I did not look through everything -h gave me, so I could not test it. I'm not sure to what extent we can use this option in production; I'm guessing if we try to pull logs from too many log streams it's not going to go down well w.r.t. the Kubernetes API. I'm still very new to the Kubernetes scene, but this is something I'm planning to look into when I reach a stage where I can put an app into production and scale it well.

Tim Downey • Edited

Yeah, for production you may consider looking into things like fluentd or fluentbit for forwarding logs to some external aggregator and then viewing them through that.

But for dev clusters or just ad hoc access, kubectl logs is super handy!

Sid

Nice! We already use Filebeat + ES (since it's native to the AWS offering right now), so I'm guessing we'll continue doing it the same way!