Unni P

Posted on • Originally published at iamunnip.hashnode.dev

kind - Setting up CNI using Calico - Part 7

In this article, we will look at how to configure our cluster to use Calico as its CNI

Introduction

  • kind ships with a simple networking implementation called kindnetd

  • kindnetd is based on standard CNI plugins (ptp, host-local) and simple netlink routes

  • It also handles IP masquerade

  • We can disable the default CNI in kind and install Calico in its place, as shown below
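If you want to see the default CNI first, it runs as a DaemonSet on a stock kind cluster (one created without disableDefaultCNI); the kindnet name below matches current kind releases, so verify it if your version differs
$ kubectl -n kube-system get daemonset kindnet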

Usage

  • Create a cluster using the configuration file below, which disables the default CNI
$ cat kind.yml 
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
name: dev
networking:
  disableDefaultCNI: true
nodes:
- role: control-plane
- role: worker
- role: worker
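Optionally, the same networking block can also pin the pod CIDR; kind's documentation suggests setting it to Calico's default IP pool range when pairing the two, but the kind default works as well, so treat this as an optional variant
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16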
$ kind create cluster --config kind.yml 
Creating cluster "dev" ...
 ✓ Ensuring node image (kindest/node:v1.26.3) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-dev"
You can now use your cluster with:

kubectl cluster-info --context kind-dev
  • Since we don’t have any CNI installed in the cluster, the nodes are in the NotReady state and the CoreDNS pods are stuck in Pending
$ kubectl get nodes
NAME                STATUS     ROLES           AGE   VERSION
dev-control-plane   NotReady   control-plane   65s   v1.26.3
dev-worker          NotReady   <none>          48s   v1.26.3
dev-worker2         NotReady   <none>          35s   v1.26.3
$ kubectl -n kube-system get pods
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-787d4945fb-c5xfr                    0/1     Pending   0          115s
coredns-787d4945fb-mlphp                    0/1     Pending   0          115s
etcd-dev-control-plane                      1/1     Running   0          2m8s
kube-apiserver-dev-control-plane            1/1     Running   0          2m8s
kube-controller-manager-dev-control-plane   1/1     Running   0          2m7s
kube-proxy-74mrj                            1/1     Running   0          114s
kube-proxy-txj8v                            1/1     Running   0          101s
kube-proxy-xrqnn                            1/1     Running   0          115s
kube-scheduler-dev-control-plane            1/1     Running   0          2m8s
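To confirm the missing CNI is the cause, inspect a node's conditions; the exact wording varies by runtime version, but the Ready condition typically reports that the CNI plugin is not initialized
$ kubectl describe node dev-control-plane | grep -i networkready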
  • Install Calico CNI using the manifest file available from the Calico documentation
$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
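Calico runs calico-node as a DaemonSet on every node; instead of polling the pod list, you can block until the rollout completes
$ kubectl -n kube-system rollout status daemonset/calico-node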
  • Verify the status of the nodes and pods
$ kubectl get nodes
NAME                STATUS   ROLES           AGE     VERSION
dev-control-plane   Ready    control-plane   6m12s   v1.26.3
dev-worker          Ready    <none>          5m55s   v1.26.3
dev-worker2         Ready    <none>          5m42s   v1.26.3
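If you are scripting this, kubectl can also wait for the Ready condition directly instead of re-running get nodes
$ kubectl wait --for=condition=Ready nodes --all --timeout=180s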
$ kubectl -n kube-system get pods
NAME                                        READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5857bf8d58-4kcbd    1/1     Running   0          4m14s
calico-node-8qrmb                           1/1     Running   0          4m14s
calico-node-v9mrj                           1/1     Running   0          4m14s
calico-node-vml2c                           1/1     Running   0          4m14s
coredns-787d4945fb-c5xfr                    1/1     Running   0          7m58s
coredns-787d4945fb-mlphp                    1/1     Running   0          7m58s
etcd-dev-control-plane                      1/1     Running   0          8m11s
kube-apiserver-dev-control-plane            1/1     Running   0          8m11s
kube-controller-manager-dev-control-plane   1/1     Running   0          8m10s
kube-proxy-74mrj                            1/1     Running   0          7m57s
kube-proxy-txj8v                            1/1     Running   0          7m44s
kube-proxy-xrqnn                            1/1     Running   0          7m58s
kube-scheduler-dev-control-plane            1/1     Running   0          8m11s
  • Deploy our Nginx application by creating a pod and exposing it as a ClusterIP service
$ kubectl run nginx --image=nginx --port=80 --expose
service/nginx created
pod/nginx created
$ kubectl get pods nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          32s
$ kubectl get svc nginx
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.96.106.155   <none>        80/TCP    34s
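You can also check that the pod's address now comes from Calico's IP pool rather than kindnetd; with the stock manifest the pool typically defaults to 192.168.0.0/16, though your range may differ
$ kubectl get pod nginx -o wide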
  • Verify our Nginx application
$ kubectl run busybox --image=busybox --restart=Never --rm -it -- wget -O- http://nginx
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/busybox, falling back to streaming logs: Internal error occurred: error attaching to container: failed to load task: no running task found: task 9268947ec3741ac1bad25fab9454c9c56e51131e7d65098993a87a96ed7ea7d7 not found: not found
Connecting to nginx (10.96.106.155:80)
writing to stdout
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100% |********************************|   615  0:00:00 ETA
written to stdout
pod "busybox" deleted
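The "couldn't attach" warning above is harmless: wget finished before kubectl could attach, so kubectl fell back to streaming the pod logs, and the request still succeeded. To additionally confirm that CoreDNS resolves the service over the new CNI, a quick lookup works (busybox ships a minimal nslookup applet)
$ kubectl run busybox --image=busybox --restart=Never --rm -it -- nslookup nginx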

Cleanup

  • Delete our cluster after use
$ kind delete cluster --name dev
Deleting cluster "dev" ...
Deleted nodes: ["dev-control-plane" "dev-worker" "dev-worker2"]
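To confirm nothing is left behind, list any remaining kind clusters; the command reports "No kind clusters found." once the deletion has succeeded
$ kind get clusters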

References

https://kind.sigs.k8s.io/

https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises
