How does Container Network Interface (CNI) help Kubernetes?

If you've ever worked with Kubernetes and created resources like Pods and Services for whatever purpose, you might have noticed a few things about the networking aspects of these resources, such as:

  1. IPs are assigned to Pods and Services, i.e. IP Address Management (IPAM).

  2. Pods/containers are added to certain network namespaces when they are created and removed from those namespaces when the resources are deleted.

  3. Your Pods are able to communicate with other Pods.

  4. External traffic is routed to the right Pod, with port forwarding taken care of along the way.

And so on.

These are a few of the requirements of the Kubernetes Networking Model.
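You can see the first of these for yourself with plain kubectl. The commands below are just an illustration; the addresses you get back will depend entirely on your cluster:

kubectl get pods -o wide   # shows the IP assigned to each Pod and the node it runs on
kubectl get services       # shows the ClusterIP assigned to each Service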

"CNI is a common interface between container runtimes and network" - Bryan Boreham, KubeCon North America 2019

What is a Container Runtime?

A container runtime is the software that actually runs containers on a node; containerd is a common example in Kubernetes clusters.

Container runtimes rely on what are called Container Network Interface (CNI) plugins to perform network actions like adding a container to or deleting it from a network.
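To make that concrete, here is a rough sketch of what a runtime does when it adds a container to a network. A CNI plugin is just an executable: the operation and container details are passed as environment variables, the network configuration JSON goes in on stdin, and the result comes out on stdout. All the paths and names below are hypothetical, and I'm assuming the reference bridge plugin is installed:

CNI_COMMAND=ADD \
CNI_CONTAINERID=example-container-id \
CNI_NETNS=/var/run/netns/example-ns \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < /etc/cni/net.d/10-example.conf
# prints a JSON result (interfaces, assigned IPs, routes) on success
# CNI_COMMAND=DEL with the same inputs removes the container from the network

The spec also defines CHECK and VERSION operations, but ADD and DEL are the ones exercised on every pod create and delete.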

There are many CNI plugins available to choose from. A few popular ones are:

  • Calico
  • Flannel
  • Weave
  • Cilium

CNI is not exclusive to Kubernetes. It is used independently and by other projects such as the rkt container engine, Apache Mesos, OpenShift, Cloud Foundry and more.

The Container Network Interface (CNI) project resides in this git repo:

https://github.com/containernetworking/cni

This CNI repo has two main things:

  1. CNI specification: If CNI is a common interface that can be implemented as a plugin by any vendor offering network management software, then standards must be defined so that any container runtime can use any implementation (i.e. plugin) in the same way. These standards / specifications can be found in the above repo. The latest version of the CNI spec at the time this article was written is v1.0.0.

  2. Libraries and examples: The repo also contains libraries that can be used for writing plugins, a few simple plugins, and good examples of how plugins should be written.
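To give a feel for what the specification covers, here is a minimal, hypothetical network configuration that conforms to the v1.0.0 format, using the reference bridge and host-local IPAM plugins from the containernetworking project (the name, bridge and subnet are made up for illustration):

{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }
  ]
}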

How would Kubernetes know which CNI plugin to use?

You may know about the Kubelet process that runs on both master and worker nodes in a Kubernetes cluster.

If you look at Kubelet configuration, you will see one or more CNI-related parameters. Here's what I have in my Kubernetes cluster.


root@ip-172-31-30-83:/home/ubuntu# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf|grep -v ^#
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
root@ip-172-31-30-83:/home/ubuntu#
root@ip-172-31-30-83:/home/ubuntu# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--hostname-override=master --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"

The network-plugin parameter is set to "cni" here. Apart from that, two other parameters, cni-bin-dir and cni-conf-dir, can be specified; they point to the directories that contain the CNI plugin binaries and the CNI configuration files respectively.

Since they were not set explicitly, the following default values for these parameters will be applied.

cni-bin-dir=/opt/cni/bin
cni-conf-dir=/etc/cni/net.d
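If you want to check this on your own node, listing the two directories shows the plugin executables and the configuration files; the exact contents depend on which CNI plugin you installed:

ls /opt/cni/bin      # plugin executables, e.g. calico, calico-ipam, portmap, ...
ls /etc/cni/net.d    # configuration files, e.g. 10-calico.conflist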

Here's a quick look at what these directories contain on my node.

(Image: CNI default directories)

Since I installed the Calico CNI plugin in my cluster, the picture above shows Calico's configuration file. The contents will vary on other Kubernetes clusters depending on the plugin choice.


root:/home/ubuntu# cat /etc/cni/net.d/10-calico.conflist
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "master",
      "mtu": 1440,
      "ipam": {
          "type": "calico-ipam"
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    }
  ]
}

The above JSON complies with the CNI specification that we talked about earlier. It carries the plugin details: the cniVersion (0.3.1, an older spec version), which executable handles IP address management (calico-ipam), and so on.

The kubelet is responsible for invoking a container runtime (containerd in my cluster) to manage pod operations. In the same way, it invokes the CNI plugin named in the CNI configuration file to manage the network operations required for Pods/containers.
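One way to see the plugin's handiwork on my Calico cluster (a rough sketch; the interface naming convention differs from plugin to plugin) is to check a Pod's IP and the host-side interfaces after a Pod lands on a node:

kubectl get pod <pod-name> -o wide   # the Pod IP was handed out by calico-ipam
ip link | grep cali                  # on the node: Calico creates a cali* veth interface per Pod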

Okay! This is where I stop exploring CNI further.

To summarize

  • We learned that the Container Network Interface is an independent, vendor-agnostic standard consisting of the CNI specification plus libraries and examples that anyone can use to develop CNI plugins.
  • We also learned that Kubernetes and other container runtimes do not manage the container network themselves; instead, they call CNI plugins to do it.
  • Finally, we looked at how one CNI plugin, Calico, is configured in a Kubernetes cluster and where the CNI plugin executables and configuration files live on a node.

Hope that was interesting. Cheers.
