DEV Community

Nitin Bansal

Kubernetes Handnotes

These are some of the handnotes I've prepared over a few years of working with Kubernetes. Many of them are not widely known unless a person has dug deep into the official Kubernetes docs.

They are in no specific order, and are meant to serve as quick, yet comparatively detailed, reference notes for Kubernetes.

The target audience for these handnotes is twofold: beginners who have familiarized themselves with the core concepts of Kubernetes and now wish to dig deeper for better understanding, and professionals who want a quick, thorough reference to the most important aspects of Kubernetes without consulting the official docs again and again.

These are in no way a replacement for the official Kubernetes docs, which remain, for all purposes, the best available resource for learning Kubernetes. These handnotes simply try to provide a better ratio of knowledge gained to time spent, compared to the official docs.

Resources covered:

  1. Pods
  2. ReplicaSets
  3. Controller
  4. Master Components
  5. Node components
  6. Objects
  7. UIDs
  8. Namespaces
  9. Services
  10. Labels
  11. Label Selectors
  12. Field Selectors
  13. Annotations
  14. Object Management
  15. Init Containers
  16. Secret generator
  17. Persistent volumes(pv) and Persistent volumes claims(pvc)
  18. Ingress
  19. Ingress Controller
  20. Service discovery
  21. Endpoint
  22. Kube proxy
  23. Kube dns
  24. Etcd


Kubernetes Pods

  1. Each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node.
  2. Every container in a Pod shares the network namespace, including the IP address and network ports.
  3. By default, docker uses host-private networking, so containers can talk to other containers only if they are on the same machine.
  4. Containers inside a Pod can communicate with one another using localhost, and all pods in a cluster can see each other without NAT.
  5. All containers in the Pod can access the shared volumes, allowing those containers to share data.
  6. Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic.
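Points 2, 4 and 5 above can be sketched with a minimal two-container Pod sharing an emptyDir volume; the names (shared-demo, writer, reader) are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo              # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}               # scratch volume shared by both containers
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      # can read /data/msg, and reach the writer via localhost since
      # both containers share the Pod's network namespace
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

Both containers share one IP address and the shared-data volume.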


Kubernetes ReplicaSets

  1. Scaling is accomplished by changing the number of replicas in a Deployment.
  2. A ReplicaSet dynamically drives the cluster back to the desired state by creating new Pods to keep your application running.
  3. Kubernetes also supports autoscaling of Pods, but it is outside the scope of this article. Scaling to zero is also possible; it terminates all Pods of the specified Deployment.
  4. States in a kubernetes replicaset
    • 1. DESIRED : Configured number of replicas
    • 2. CURRENT : Shows how many replicas are running now
    • 3. UP-TO-DATE : Number of replicas that were updated to match the desired (configured) state
    • 4. AVAILABLE : Shows how many replicas are actually AVAILABLE to the users
  5. Updates in kubernetes are versioned, and any deployment update can be reverted to a previous (stable) version.
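A minimal Deployment sketch showing the replicas field that drives the DESIRED state (the name web and the nginx image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name
spec:
  replicas: 3               # DESIRED replica count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
```

Scaling and reverting then look like, e.g., kubectl scale deployment web --replicas=5 and kubectl rollout undo deployment web.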


Kubernetes Controllers

  1. A controller handles all aspects of pod management including, but not limited to, creation, scheduling, replication, and healing of pods.

Kubernetes Master Components

  1. kube-apiserver : Exposes the Kubernetes API. It is designed to scale horizontally
  2. etcd : Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data. Always have a backup plan for etcd’s data for your Kubernetes cluster.
  3. kube-scheduler : Schedules newly created pods to run on nodes based on individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference and deadlines.
  4. kube-controller-manager : Runs controllers. Each controller includes:
    • 1. Node controller: Responsible for noticing and responding when nodes go down.
    • 2. Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
    • 3. Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).
    • 4. Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
  5. Cloud controller manager : runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6. The following controllers have cloud provider dependencies:
    • 1. Node Controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
    • 2. Route Controller: For setting up routes in the underlying cloud infrastructure
    • 3. Service Controller: For creating, updating and deleting cloud provider load balancers
    • 4. Volume Controller: For creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes

Kubernetes Node components

  1. kubelet : An agent that runs on each node in the cluster. It makes sure that containers are running in a pod. It doesn’t manage containers which were not created by Kubernetes.
  2. Container runtime
  3. Addons : Provides cluster features
  4. DNS : DNS server. Serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches.
  5. Web UI
  6. Container Resource Monitoring
  7. Cluster-level logging

Kubernetes Objects

  1. They are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster.
  2. By creating an object, you’re effectively telling the Kubernetes system what you want your cluster’s workload to look like; this is your cluster’s desired state.
  3. Every Kubernetes object includes two nested object fields that govern the object’s configuration: the object spec and the object status.
    • 1. Spec : It describes your desired state for the object
    • 2. Status : It describes the actual state of the object, and is supplied and updated by the Kubernetes system.
  4. Required fields in an object spec file:
    • 1. apiVersion
    • 2. kind
    • 3. metadata
      • 1. name: mandatory
      • 2. uid: mandatory (system-provided)
      • 3. namespace: optional
    • 4. spec
  5. All objects in the Kubernetes REST API are unambiguously identified by a Name and a UID.
  6. Name is a client-provided string that refers to an object in a resource URL, such as /api/v1/pods/some-name. Since names are often used as DNS names, they generally must be valid DNS subdomain names.
  7. Only one object of a given kind can have a given name at a time. However, if you delete the object, you can make a new object with the same name.

Kubernetes UIDs

  1. They are Kubernetes system-generated strings that uniquely identify objects.
  2. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID. It is intended to distinguish between historical occurrences of similar entities.


Kubernetes Namespaces

  1. Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
  2. Namespaces are intended for use in environments with many users spread across multiple teams, or projects.
  3. Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.
  4. Namespaces can not be nested inside one another.
  5. Default namespaces:
    • 1. default
    • 2. kube-system: For objects created by the Kubernetes system
    • 3. kube-public: Readable by all users (including those not authenticated).
  6. If you want to reach a service across namespaces, you need to use the fully qualified domain name (FQDN).
  7. The default behavior of Kubernetes is to look up a service in the local namespace. A service DNS entry is of the form: <service-name>.<namespace-name>.svc.cluster.local.
  8. Namespace resources are not themselves in a namespace.
  9. Low-level resources, such as nodes and persistentVolumes, are not in any namespace.
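Creating a namespace is a one-line manifest; team-a is a hypothetical name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a    # hypothetical
```

A service my-svc in it would then be reachable across namespaces as my-svc.team-a.svc.cluster.local.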


Kubernetes Services

  1. A Service in Kubernetes is an abstraction which defines a logical set of Pods, and a policy by which to access them. Services enable a loose coupling between dependent Pods.
  2. Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic.
  3. A service is assigned a unique IP address (also called clusterIP).
  4. This address is tied to the lifespan of the Service, and will not change while the Service is alive
  5. A Service is backed by a group of Pods, and these pods are exposed through endpoints.
  6. The Service’s selector will be evaluated continuously and the results will be POSTed to an Endpoints object
  7. When a Pod dies, it is automatically removed from the endpoints, and new Pods matching the Service’s selector will automatically get added to the endpoints.
  8. A service IP is completely virtual, it never hits the wire.
  9. Kubernetes services provides a stable, virtual IP (VIP) address.
  10. Virtual IP(VIP) address means it is not attached to any network interface
  11. VIP's purpose is to forward traffic to pods
  12. Keeping the mapping between the VIP and the pods up-to-date is the job of kube-proxy, a process that runs on every node, which queries the API server to learn about new services in the cluster.
  13. The target of a Service need not be a Pod. It can be an external component, a component in another namespace, or a non-Kubernetes component. Just define your Service without the selector attribute.
  14. With no selector attribute, no endpoints object is created.
  15. For multi-port services (services that expose more than one port), you must give all of your ports names, so that endpoints can be disambiguated
  16. Services can be exposed in different ways by specifying a type in the ServiceSpec:
    • 1. ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
    • 2. NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. This acts as superset of ClusterIP.
    • 3. LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. This acts as superset of NodePort.
    • 4. ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns.
    • 5. To summarize: LoadBalancer is a superset of NodePort, which is a superset of ClusterIP. ExternalName is a special case that returns a CNAME record instead of proxying.
  17. Difference b/w port, targetPort and nodePort:
    • 1. Port: Port on which the Service is exposed to other services running within the same cluster
    • 2. TargetPort: Port on the Pod (container) to which the Service forwards traffic
    • 3. NodePort: Port opened on each Node through which external users can access the Service
  18. A Service routes traffic across a set of Pods.
  19. A few notes about NodePort type service:
    • 1. It is not designed for production environments. Use a LoadBalancer or an Ingress Controller/Resource instead
    • 2. Need to specify an extra nodePort attribute in the service definition
    • 3. Opens the specified (or automatically chosen, if not specified) port on every node
    • 4. You can only have one service per port
    • 5. You can only use ports 30000–32767
    • 6. If your Node/VM IP addresses change, you need to deal with that
  20. A few notes about LoadBalancer type service:
    • 1. Best method to expose a service to outside world, if your cloud provider supports it
    • 2. There is no filtering, no routing, etc. This means you can send almost any kind of traffic to it, like HTTP, TCP, UDP, Websockets, gRPC, or whatever.
    • 3. Each service exposed with LoadBalancer will get its own IP address, and you have to pay for a LoadBalancer per exposed service, which can get expensive.
  21. Use externalIPs in the service spec to set an IP address as the target of the service. This IP can be outside the cluster.
  22. Services have an integrated load-balancer that will distribute network traffic to all Pods of an exposed Deployment. Services will monitor continuously the running Pods using endpoints, to ensure the traffic is sent only to available Pods.
  23. If you want to reach a service across namespaces, you need to use the fully qualified domain name (FQDN).
  24. Default behavior of kubernetes is to lookup a service in local namespace. Service DNS entry is of the form: <service-name>.<namespace-name>.svc.cluster.local.
  25. Kubernetes offers a DNS cluster addon service that automatically assigns dns names to other services.
  26. A few notes about Headless services:
    • 1. When you don’t need or want load-balancing and a single service IP.
    • 2. Specify None as the clusterIP value.
    • 3. Allows developers to reduce coupling to the Kubernetes system by allowing them freedom to do discovery their own way.
    • 4. Cluster IP is not allocated.
    • 5. kube-proxy does not handle these services.
    • 6. There is no load balancing or proxying done by the platform for them.
    • 7. If selectors are defined, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return A records (addresses) that point directly to the Pods backing the Service.
    • 8. If selectors are not defined, no endpoints objects are created. But, CNAME records for ExternalName type services are created and A records for any Endpoints that share a name with the service, for all other types.
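As a sketch of points 15, 16, 17 and 26 above, here are a NodePort Service and a headless Service; all names and port numbers are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - name: http              # names are required for multi-port services
      port: 80                # cluster-internal port other services use
      targetPort: 8080        # port the Pod's container listens on
      nodePort: 30080         # must fall in 30000-32767
---
apiVersion: v1
kind: Service
metadata:
  name: web-headless          # hypothetical
spec:
  clusterIP: None             # headless: no VIP, not handled by kube-proxy
  selector:
    app: web
  ports:
    - port: 80
```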


Kubernetes Labels

  1. Labels can be attached to objects at creation time or later on; they can be modified at any time.
  2. Labels are used to specify identifying attributes of objects.
  3. Label keys have two segments: an optional prefix and a name, separated by a slash (/). The name segment is required and must be 63 characters or less.
  4. The prefix, if specified, must be a DNS subdomain: a series of DNS labels separated by dots (.), not longer than 253 characters in total, followed by a slash (/).
  5. Labels are used to specify identifying attributes of objects. Non-identifying information should be recorded using annotations.
  6. If the prefix is omitted, the label key is presumed to be private to the user.
  7. The kubernetes.io/ and k8s.io/ prefixes are reserved for Kubernetes core components.
  8. Valid label values must be 63 characters or less. They may also be empty.
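A sketch of unprefixed and prefixed label keys (example.com is a stand-in prefix, not a reserved one):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod                 # hypothetical
  labels:
    environment: production         # unprefixed key: presumed private to the user
    example.com/release: "1.4"      # prefixed key: DNS subdomain + slash
spec:
  containers:
    - name: app
      image: nginx
```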

Label Selectors

  1. Via a label selector, a client or user can identify a set of objects. The label selector is the core grouping primitive in Kubernetes.
  2. Following types of label selectors are supported: equality-based and set-based.
    • 1. Equality based label selectors:
    • 2. Set based label selectors:
  3. Equality based label selectors:

    • 1. Operators allowed: =, ==, !=
    • 2. These can be specified as:
      • 1. Newline separated:
      environment = production
      tier != frontend
      • 2. Comma separated:
      environment=production,tier!=frontend
  4. Set based label selectors:

    • 1. Operators allowed: in, notin and exists (only the key identifier)
    • 2. Examples:
      environment in (production, qa)
      tier notin (frontend, backend)
      partition,environment notin (qa)
      >>> The last selector matches resources with a partition key (any value) and with environment not equal to qa
  5. Set-based label selectors can be mixed with equality-based selectors. Example: partition in (customerA, customerB),environment!=qa

  6. Label selectors can be used with API calls as query params, and with kubectl commands, to filter the returned results. Examples:

    kubectl get pods -l environment=production,tier=frontend
    kubectl get pods -l 'environment in (production),tier in (frontend)'

  7. Service and ReplicationController do NOT support set-based label selectors.

  8. Newer resources, such as Job, Deployment, ReplicaSet, and DaemonSet, support set-based label selectors.
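In object specs, the two selector styles appear as matchLabels (equality-based) and matchExpressions (set-based). This Deployment fragment is an illustrative sketch; all names are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selector-demo               # hypothetical
spec:
  replicas: 2
  selector:
    matchLabels:                    # equality-based
      app: web
    matchExpressions:               # set-based
      - {key: environment, operator: In, values: [production, qa]}
      - {key: tier, operator: NotIn, values: [frontend]}
  template:
    metadata:
      labels:                       # must satisfy the selector above
        app: web
        environment: production
        tier: backend
    spec:
      containers:
        - name: web
          image: nginx
```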

Field Selectors

  1. They let you select Kubernetes resources based on the value of one or more resource fields. Examples:

    kubectl get pods --field-selector status.phase=Running
  2. Supported field selectors vary by Kubernetes resource type.

  3. All resource types support the metadata.name and metadata.namespace fields.

  4. Using unsupported field selectors produces an error.

  5. Multiple resources can be filtered in one go. Also, multiple field selectors can be given using comma as separator. Example:

kubectl get statefulsets,services --field-selector=status.phase!=Running,spec.restartPolicy=Always


Kubernetes Annotations

  1. They store non-identifying object data. They cannot be used to target/select objects based on their value(s).
  2. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.
  3. Annotations have the same syntax as labels.

Kubernetes Object Management

Following are the 3 ways to interact with kubernetes objects:

  1. Using Imperative commands: They operate on Live objects
  2. Using Imperative object configuration: They operate on Individual files
  3. Using Declarative object configuration: They operate on Directories of files

Imperative commands

  1. Used with kubectl with resource name in the command

Imperative object configuration

  1. Used with kubectl with operation(create, replace, etc.) and single object config file.
  2. The object config file specified must contain a full definition of the object in YAML or JSON format.
  3. Multiple files can also be specified. Example:
kubectl delete -f nginx.yaml -f redis.yaml

Declarative object configuration

  1. Operates on object configuration files stored locally.
  2. The user does not define the operations to be taken on the files.
  3. Create, update, and delete operations are automatically detected per-object by kubectl.
  4. Uses patch operation to preserve changes made by other writers, while applying new changes (diffs only).
  5. To see what changes are going to be made, use:

    kubectl diff -f configs/
  6. To apply the changes:

    kubectl apply -f configs/
  7. Use command line flag -R to process directories recursively.

  8. These files are known as resource configs.

Init Containers

  1. They are specialized containers that run before app containers.
  2. They can contain utilities or setup scripts not present in the app image.
  3. They always run to completion.
  4. Each one must complete successfully before the next one is started.
  5. Custom containers can be specified as initContainers using initContainers field of PodSpec.
  6. Almost exactly the same as regular containers in all aspects of the spec object.
  7. Does not support readiness probes as they must run to completion before the pod can be ready.
  8. They are started after network and volumes are initialized.
  9. Changes to the init container spec are limited to the container image field.
  10. The activeDeadlineSeconds is applicable on both types of containers.
  11. Changing the app container image only restarts the app container, not the init containers. To restart those, the init container image needs to be changed.
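A minimal Pod sketch with one init container that blocks until a (hypothetical) db-svc service name resolves:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                   # hypothetical
spec:
  initContainers:
    - name: wait-for-db
      image: busybox
      # runs to completion before the app container starts
      command: ["sh", "-c", "until nslookup db-svc; do sleep 2; done"]
  containers:
    - name: app                     # starts only after the init container exits successfully
      image: nginx
```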

Secret generator: kustomization.yaml

  1. A Secret is an object that stores a piece of sensitive data like a password or key.
  2. Since 1.14, kubectl supports the management of kubernetes objects using a kustomization file.
  3. You can create a secret by generators in kustomization.yaml.
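A minimal kustomization.yaml sketch with a secretGenerator; the secret name and literals are placeholders:

```yaml
# kustomization.yaml
secretGenerator:
  - name: db-creds                  # hypothetical secret name
    literals:
      - username=admin
      - password=s3cr3t
```

Apply it with kubectl apply -k <directory>.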

Persistent volumes(pv) and Persistent volumes claims(pvc)

  1. Use ReadWriteMany, not ReadWriteOnce, when using a shared volume.
  2. Access control: gid(group id) can be assigned to the created volume to restrict access to specific pods with the same gid.
  3. A PersistentVolume (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by kubernetes using a StorageClass.
  4. A PersistentVolumeClaim (PVC) is a request for storage by a user that can be fulfilled by a Persistent Volume.
  5. PersistentVolumes and PersistentVolumeClaims are independent from Pod lifecycles and preserve data through restarting, rescheduling, and even deleting Pods.
  6. Many cluster environments have a default StorageClass installed. When a StorageClass is not specified in the PersistentVolumeClaim, the cluster’s default StorageClass is used instead.
  7. In local clusters with the default storage class (hostPath), data is saved in the node's /tmp directory, and hence could be lost on reboot.
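A minimal PersistentVolumeClaim sketch; with storageClassName omitted, the cluster's default StorageClass is used. The name and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                    # hypothetical
spec:
  accessModes:
    - ReadWriteMany                 # shared volume: usable by multiple Pods
  resources:
    requests:
      storage: 1Gi
  # storageClassName omitted: the cluster's default StorageClass applies
```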

Kubernetes Secret object type

  1. Stores a piece of sensitive data like a password or key.


DNS cluster addon

  1. Kubernetes offers a DNS cluster addon service that automatically assigns DNS names to other services.
  2. To check whether the same is running on your cluster or not, use following:
kubectl get services kube-dns --namespace=kube-system


Kubernetes Ingress

  1. An API object that manages external access to the services in a cluster, typically HTTP.
  2. Ingress can provide load balancing, SSL termination and name-based virtual hosting.
  3. Kind of like a reverse proxy
  4. An Ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name based virtual hosting.
  5. An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
  6. You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
  7. Note that NOT all ingress controllers support the full spec. Be careful while choosing.
  8. It supports TLS.
  9. Supports load balancing. A few of the common algorithms are supported. For others, service loadbalancer can be used.
  10. Health checks not exposed by default. But, readiness probes can be used.
  11. Cross-availability-zone deployments can be done, but this depends on cloud provider support. Refer to the federation documentation for details on deploying Ingress in a federated cluster.
  12. Types of Ingress:
    • 1. Single Service Ingress: Expose a single service. No host and path mapping
    • 2. Simple fanout: Exposes multiple services using path-only mapping
    • 3. Name based virtual hosting: Uses domain and path mappings
  13. At least on GKE, an Ingress spins up an L7 HTTP load balancer, and hence is not protocol-agnostic.
  14. Lots of ingress controllers are available: Google Cloud Load Balancer, Nginx, Contour, Istio, and more.
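A name-based virtual hosting sketch, using the networking.k8s.io/v1 API; the hosts and service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vhost-demo                  # hypothetical
spec:
  rules:
    - host: foo.example.com         # routed to foo-svc
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo-svc
                port:
                  number: 80
    - host: bar.example.com         # routed to bar-svc
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bar-svc
                port:
                  number: 80
```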

Ingress Controller

  1. An Ingress Controller listens to the Kubernetes API for Ingress resources and then handles requests that match them.
  2. Can technically be any system capable of reverse proxying, but the most common is Nginx.
  3. Nginx controller needs a backend. Other controllers might not need one.
  4. An Ingress with no rules sends all traffic to a single default backend.
  5. The default backend is typically a configuration option of the Ingress controller and is not specified in your Ingress resources

Service discovery

Following are the 2 ways in which service discovery can be provisioned:

  1. Using environment variables
  2. DNS (recommended)

Environment variables - for service discovery

  1. kubelet exposes environment variables of the form {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT. For example, for a redis-master service with a cluster IP of 10.0.0.11:

    REDIS_MASTER_SERVICE_HOST=10.0.0.11
    REDIS_MASTER_SERVICE_PORT=6379

DNS - for service discovery

  1. It's a cluster add-on.
  2. The DNS server watches the Kubernetes API for new services, and creates a set of DNS records for each.
  3. For a service my-service in namespace my-ns, a DNS record for my-service.my-ns is created.
  4. There is no need to specify the namespace if the Pods are in the same namespace as the service.
  5. The Kubernetes DNS server is the only way to access services of type ExternalName.


Kubernetes Endpoints

  1. An Endpoint is an object-oriented representation of a REST API endpoint that is populated on the Kubernetes API server. Thus, an endpoint in Kubernetes terms is the way to access its resource (e.g. a Pod) - the resource behind the 'endpoint'.
  2. Contains EndpointSubset array.
  3. EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the cartesian product of Addresses x Ports.
  4. An EndpointAddress of an EndpointSubset may NOT be loopback (127.0.0.0/8), link-local (169.254.0.0/16), or link-local multicast (224.0.0.0/24).
  5. IPv6 is also accepted, but not fully supported on all platforms.
  6. The Service’s selector is continuously evaluated, and the results are POSTed to an endpoints object
  7. When a Pod dies, it is automatically removed from the endpoints, and new pods matching the service’s selector are automatically added to the endpoints.
  8. Endpoints track the IP addresses of the objects the Service sends traffic to.
  9. An Endpoints object can be loosely coupled with a Service by giving the Service and the Endpoints object the same name.
  10. With no selector attribute mentioned in a Service, no Endpoints object is created.
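A sketch of points 9 and 10 above: a selector-less Service coupled to a manually created Endpoints object by name. The names are hypothetical and the IP is a documentation address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db                 # hypothetical
spec:
  ports:                            # no selector: no Endpoints object auto-created
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db                 # same name couples it to the Service
subsets:
  - addresses:
      - ip: 192.0.2.10              # must not be loopback or link-local
    ports:
      - port: 5432
```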

Kube proxy

  1. It's a special daemon (application) running on every worker node.
  2. Can run in two modes, configurable with --proxy-mode command line switch:
    • 1. userspace
    • 2. iptables
  3. For higher throughput and better latency, use iptables proxy mode.
  4. Not IPv6 ready.
  5. Maintains network rules and performs connection forwarding.
  6. This is useful for:
    • 1. Debugging your services, or connecting to them directly from your laptop for some reason
    • 2. Allowing internal traffic, displaying internal dashboards, etc.

Kube dns

  1. This allows accessing K8s services using their names directly, rather than VIP:PORT combination.
  2. When you use kube-dns, K8s injects certain nameservice lookup configuration into new pods that allows you to query the DNS records in the cluster.
  3. kube-dns creates an internal cluster DNS zone which is used for DNS and service discovery. This means that we can access the services from inside the pods via the service names directly. Example:

    curl -I nginx-svc:8080
  4. You can use following to see node dns config as setup by kube-dns:

    cat /etc/resolv.conf
    > search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.kube-blog.internal
    options ndots:5
  5. If the service is created in default namespace, it can be accessed using the cluster internal DNS name, too:

    curl -I nginx-svc.default.svc.cluster.local:8080


Etcd

  1. This is a consistent and highly-available key-value store, used as Kubernetes’ backing store for all cluster data.
  2. Make sure to always have a backup plan for etcd’s data for your kubernetes cluster.
  3. All data is saved in etcd as registry entries under the /registry prefix.

  4. Following command can be used to query data in etcd:

    etcdctl --ca-file=/etc/etcd/ca.pem get /registry/services/endpoints/default/kubernetes

That's all, folks ¯\_(ツ)_/¯

Drop me a mail or DM me on Twitter if you have any suggestions or need any help with software development.

Also, do visit my blog if you like this. I have much more useful content planned to be added.
