You may already know monolithic architecture; when we migrate to a microservice architecture, each module runs in its own container. But containers on their own are not scalable and have no built-in load balancing. Think about it: if the OS hosting all your containers crashes, the whole system stops working. Likewise, if user traffic crosses a threshold, you have no automatic way to create another container to handle the load.
To overcome these limitations and to manage and scale your containers, Kubernetes comes into the picture.
Kubernetes is a container-orchestration solution for application deployment and scaling. It was designed by Google and is now maintained by the Cloud Native Computing Foundation.
Many popular cloud providers, like Google and Amazon, offer Kubernetes-based platform-as-a-service or infrastructure-as-a-service products, on which Kubernetes is provided as a managed platform.
Kubernetes provides infrastructure that deploys and scales applications based on CPU usage, user traffic, and other specified rules. Kubernetes follows a master/worker architecture.
The Kubernetes master is responsible for controlling the cluster, managing workloads, and directing communication across the system. It contains several components: the API Server, Controller Manager, Scheduler, and etcd.
etcd is a lightweight key-value data store that holds the configuration details of the cluster and represents the overall state of the cluster at any time. The Kubernetes API Server uses etcd’s watch API to monitor the cluster and apply configuration changes according to the specified rules. Let’s say the deployer has specified that five pod instances must be running; this desired state is stored in etcd. If only four instances are found running, the difference is detected by comparison with the etcd data, and Kubernetes creates an additional pod instance.
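That desired state is typically declared in a Deployment manifest. A minimal sketch (the names `web-app` and the `nginx` image are illustrative, not from the original post):

```yaml
# Hypothetical Deployment: the desired state (5 replicas) is what gets
# stored in etcd; Kubernetes keeps the actual pod count matching it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # illustrative name
spec:
  replicas: 5              # desired number of pod instances
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
```

If one of the five pods dies, the comparison against this declared state is what triggers the creation of a replacement.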
The API Server serves the Kubernetes API using JSON over HTTP. It processes and validates REST requests and updates the state of API objects in etcd, which lets clients configure workloads and containers across the worker nodes.
The Scheduler selects unscheduled pods and assigns each one to a node. It tracks resource availability on every node, so it can place a pod only on a node that satisfies the pod’s resource requirements.
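Those resource requirements are declared in the pod spec. A minimal sketch, with illustrative names and values:

```yaml
# Hypothetical pod spec: the Scheduler will only place this pod on a
# node that can provide at least 250m CPU and 256Mi of memory.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo      # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:          # minimum guaranteed resources (used for scheduling)
          cpu: "250m"
          memory: "256Mi"
        limits:            # hard ceiling the container may not exceed
          cpu: "500m"
          memory: "512Mi"
```

Requests drive the scheduling decision; limits are enforced on the node after placement.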
The Controller Manager drives the actual cluster state toward the desired cluster state. It communicates with the API Server to create, update, and delete the resources it manages, and it is also responsible for replacing the pods of any node that fails.
A node is a machine where containers are deployed, i.e., where the Docker images/services actually run. Each node contains a Kubelet, a Kube-Proxy, and the containers themselves.
The Kubelet is responsible for starting, stopping, and maintaining containers on the node and ensuring the containers (pods) on the node are healthy. The Kubelet also monitors the state of each pod; if a pod is not in its desired state, it is redeployed on the same node.
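One common way the Kubelet checks container health is a liveness probe. A minimal sketch, assuming a hypothetical app image that exposes a `/healthz` endpoint on port 8080:

```yaml
# Hypothetical liveness probe: the Kubelet calls GET /healthz every 10s
# and restarts the container if the probe keeps failing.
apiVersion: v1
kind: Pod
metadata:
  name: health-demo        # illustrative name
spec:
  containers:
    - name: app
      image: my-app:1.0    # illustrative image (assumption)
      livenessProbe:
        httpGet:
          path: /healthz   # assumed health endpoint
          port: 8080
        initialDelaySeconds: 5   # give the app time to start
        periodSeconds: 10        # probe interval
```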
Node status is reported every few seconds. If a node failure is detected, the replication controller observes this state change and launches the affected pods on other healthy nodes.
Kube-Proxy is an implementation of a network proxy and a load balancer. It is responsible for routing traffic to the appropriate containers based on the IP address and port number of incoming requests.
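In practice, that routing is driven by Service objects. A minimal sketch (the name and labels are illustrative and assume a set of pods labeled `app: web-app`):

```yaml
# Hypothetical Service: kube-proxy programs routing rules so that traffic
# hitting port 80 of this Service is load-balanced across matching pods.
apiVersion: v1
kind: Service
metadata:
  name: web-service        # illustrative name
spec:
  selector:
    app: web-app           # pods with this label receive the traffic
  ports:
    - port: 80             # port the Service listens on
      targetPort: 8080     # container port traffic is forwarded to
```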
A container is the core unit of a microservice: it packages the application along with its libraries and dependencies. It can be exposed through an external IP address. Kubernetes supports running Docker containers inside pods.
A pod is the basic scheduling unit in Kubernetes. It contains one or more containers, which are guaranteed to be co-located on the same node.
Each pod in Kubernetes is assigned a unique IP address within the cluster, which allows applications to use ports without the risk of conflict.
Inside a pod, containers can reach each other via localhost. But a container in one pod has no way to directly connect to a container in another pod; for that, it has to use the other pod’s IP address.
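The localhost sharing can be sketched with a two-container pod; both container names and images below are hypothetical:

```yaml
# Hypothetical two-container pod: both containers share one network
# namespace, so the sidecar can reach the app at localhost:8080.
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo       # illustrative name
spec:
  containers:
    - name: app
      image: my-app:1.0    # assumed to serve on port 8080
    - name: log-agent
      image: my-agent:1.0  # would reach the app at http://localhost:8080
```

A container in a different pod, by contrast, would have to address this pod by its cluster-internal pod IP.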
Thanks for reading this blog!