Jace

TIL: The path to CKA Week 5.

After finishing the Kubernetes installation the hard way,
I moved on to the CKA with Practice Tests course that I bought on Udemy.

This course contains lots of online hands-on labs, which improve your Kubernetes troubleshooting skills.


ETCD

etcd is a key-value store that holds all of the data about the state of the whole k8s cluster.

You can deploy etcd by downloading a released binary and executing it.
Or, if you install the Kubernetes cluster with kubeadm, etcd will run as a container. For example, you can get all the keys of your cluster with
kubectl exec etcd-master -n kube-system -- etcdctl get / --prefix --keys-only
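If your etcd requires TLS (the kubeadm default), a fuller sketch looks like the command below. I am assuming the standard kubeadm certificate paths under /etc/kubernetes/pki/etcd and that your etcd pod is named etcd-master (on newer labs it may be etcd-controlplane); on older etcdctl versions you may also need to prefix the command with ETCDCTL_API=3.

kubectl exec etcd-master -n kube-system -- etcdctl get / --prefix --keys-only \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key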


kube-apiserver

In Kubernetes, several operations/components are related to kube-apiserver:

  1. Authenticate User
  2. Validate Request
  3. Retrieve data
  4. Interact with etcd
  5. Scheduler
  6. Kubelet

You can see your kube-apiserver configuration by checking cat /etc/systemd/system/kube-apiserver.service, or
cat /etc/kubernetes/manifests/kube-apiserver.yaml if your cluster was installed with kubeadm.
Or you can search for the running kube-apiserver process with
ps aux | grep kube-apiserver, and see all of its configuration flags.
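For example, to check a single flag on a kubeadm cluster (assuming the default static pod manifest path), you can grep the manifest:

grep etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml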


kube-controller-manager

When kube-controller-manager is installed, several different controllers are installed with it, e.g. the deployment controller, the replication controller, etc.

And if your cluster was installed with kubeadm, like the other components, kube-controller-manager will run as a pod in the kube-system namespace on the master node.
For other non-kubeadm setups, the configuration can be checked in /etc/systemd/system/kube-controller-manager.service
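As a quick sketch, on a kubeadm cluster you can inspect the flags of kube-controller-manager the same way as the other components (assuming the default manifest path):

cat /etc/kubernetes/manifests/kube-controller-manager.yaml
ps aux | grep kube-controller-manager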


kube-scheduler

In k8s, kube-scheduler decides, and only decides, which pod goes to which worker node,
by considering the pod's resource requirements, taints, node selectors, affinity, etc.

Again, for a cluster set up by kubeadm, the kube-scheduler configuration will be in /etc/kubernetes/manifests/kube-scheduler.yaml,
and it runs as a pod in the kube-system namespace.
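For example, to confirm the scheduler pod is running and to see its flags (assuming a kubeadm setup):

kubectl get pods -n kube-system | grep kube-scheduler
ps aux | grep kube-scheduler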


kubelet

kubelet is where the dirty work happens in a Kubernetes cluster.
It receives instructions from the kube-apiserver (carrying the scheduler's decisions) and interacts with the container runtime to start or delete containers.
If you use kubeadm to install the k8s cluster, it WILL NOT install kubelet.
You must download and install it on your own.

You can find your kubelet process with
ps aux | grep kubelet
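On a kubeadm-provisioned node, kubelet typically runs as a systemd service, so a couple of useful checks (assuming the standard kubeadm config path /var/lib/kubelet/config.yaml) are:

systemctl status kubelet
cat /var/lib/kubelet/config.yaml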


kube-proxy

kube-proxy sets up the iptables rules for the Services in Kubernetes.
Since the service IP and the pod IP are in different subnets,
kube-proxy's job is to watch the Services and the pod IPs, and maintain the proper rules to forward the traffic.
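On a kubeadm cluster kube-proxy runs as a DaemonSet, so you can check it and peek at the NAT rules it maintains (the KUBE-SERVICES chain is what kube-proxy creates in its default iptables mode):

kubectl get daemonset kube-proxy -n kube-system
sudo iptables -t nat -L KUBE-SERVICES -n | head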

Notes.

  1. Create a pod with the kubectl run command.
    kubectl run nginx --image=nginx --restart=Never

  2. Create a deployment with kubectl run. (See the note after this list.)
    kubectl run nginx --image=nginx --restart=Always

  3. Create a job with kubectl run.
    kubectl run nginx --image=nginx --restart=OnFailure
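
A quick caveat: the --restart based generators were deprecated and removed in newer kubectl releases (kubectl run now only creates a pod), so on a recent cluster the deployment and job versions would be:

    kubectl create deployment nginx --image=nginx
    kubectl create job nginx --image=nginx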
