DEV Community

Discussion on: Kubernetes - Scheduling containers

Tim Downey

It seems like the logs command that kubectl supports limits the number of pods it can pull data from to just 8!

Are you sure about this? My kubectl defaults to 5, but you can set it higher with the --max-log-requests flag. I'm using kubectl version v1.18.4 so it might be a newer option, though. 🤷‍♂️

--max-log-requests=5: Specify maximum number of concurrent logs to follow when using by a selector. Defaults to 5.
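
For example (app=my-app here is just a stand-in for whatever label your pods actually carry):

```bash
# Follow logs from every pod matching the label selector, allowing up to 10
# concurrent log streams instead of the default 5.
kubectl logs -f -l app=my-app --max-log-requests=10
```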
Sid

You're right! I can increase it using the --max-log-requests option; I hadn't looked through everything -h gave me, so I couldn't test it. I'm not sure to what extent we can use this option in production. I'm guessing that if we try to pull logs from too many log streams at once, it's not going to go down well with the Kubernetes API. I'm still very new to the Kubernetes scene, but this is something I plan to look into once I reach the stage where I can put an app into production and scale it well.

Tim Downey • Edited

Yeah, for production you may want to look into things like fluentd or fluentbit for forwarding logs to some external aggregator and then viewing them through that.
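
For instance, here's a rough sketch of getting Fluent Bit running with its community Helm chart (the repo URL and chart name come from the upstream Fluent Bit project, and the logging namespace is just an example; you'd still configure its output to point at your own aggregator):

```bash
# Run Fluent Bit as a DaemonSet that tails container logs on every node.
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit --namespace logging --create-namespace
# Override the chart values (e.g. with -f my-values.yaml) to point its
# Elasticsearch/forward output at your log aggregator.
```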

But for dev clusters or just ad hoc access, kubectl logs is super handy!

Sid

Nice! We already use filebeat + ES (since it's native to the AWS offering right now), so I'm guessing we'll continue doing it the same way!