Before implementing a custom controller in Go, let's first understand what a Kubernetes Controller and a Custom Resource Definition (CRD) are.
Kubernetes Controller
A Kubernetes Controller is a component of the control plane that continuously monitors the state of the cluster and takes action to ensure that the actual state matches the desired state. It makes changes that move the current state closer to the desired state.
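In other words, every controller repeats the same observe-compare-act loop. Here is a minimal conceptual sketch in Go; the State and Cluster types are hypothetical, purely to illustrate the idea, and are not part of the controller we build below:
// Hypothetical types, for illustration only.
type State string

type Cluster interface {
    ObservedState() State              // what the cluster actually looks like
    DesiredState() State               // what the user declared
    Reconcile(observed, desired State) // take action to converge the two
}

// runControlLoop is the essence of every controller: observe, compare, act.
func runControlLoop(c Cluster) {
    for {
        observed, desired := c.ObservedState(), c.DesiredState()
        if observed != desired {
            c.Reconcile(observed, desired)
        }
    }
}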
Custom Resource Definition (CRD)
A Custom Resource Definition (CRD) is a way to extend the Kubernetes API with our own custom resources. These custom resources can represent any kind of object we want to manage within our Kubernetes cluster.
Creating our own Custom Resource Definition (CRD)
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: my-crds.com.shivam.kumar
spec:
  group: com.shivam.kumar
  names:
    kind: my-crd
    plural: my-crds
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              type: object
              properties:
                description:
                  type: string
Apply this file with kubectl; when we then list the available CRDs in our cluster, we can see the CRD we just created.
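For example, assuming the manifest above was saved as my-crd.yaml (the filename is just an assumption):
kubectl apply -f my-crd.yaml
kubectl get crds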
Creating a Custom Resource (CR)
apiVersion: com.shivam.kumar/v1
kind: my-crd
metadata:
  name: my-custom-resource
spec:
  description: "My CRD instance"
Apply this file using the kubectl command as well.
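Assuming it was saved as my-custom-resource.yaml (again, an assumed filename), we can apply it and then list instances of our custom resource by its plural name:
kubectl apply -f my-custom-resource.yaml
kubectl get my-crds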
Now let's move on to creating our own custom controller.
Creating a custom Kubernetes controller
package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Locate the kubeconfig file; fall back to in-cluster config if it is unusable.
    var kubeconfig string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = filepath.Join(home, ".kube", "config")
    }
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        fmt.Println("Falling back to in-cluster config")
        config, err = rest.InClusterConfig()
        if err != nil {
            panic(err.Error())
        }
    }

    // The dynamic client lets us work with resources that have no generated typed client.
    dynClient, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    // GroupVersionResource identifying the custom resource defined by our CRD.
    gvr := schema.GroupVersionResource{Group: "com.shivam.kumar", Version: "v1", Resource: "my-crds"}

    // A shared informer lists and watches our custom resource across all namespaces
    // and keeps a local cache in sync with the cluster.
    informer := cache.NewSharedIndexInformer(
        &cache.ListWatch{
            ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
                return dynClient.Resource(gvr).Namespace("").List(context.TODO(), options)
            },
            WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
                return dynClient.Resource(gvr).Namespace("").Watch(context.TODO(), options)
            },
        },
        &unstructured.Unstructured{},
        0, // resync period; 0 disables periodic resyncs
        cache.Indexers{},
    )

    // React to add, update, and delete events for our custom resource.
    informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            fmt.Println("Add event detected:", obj)
        },
        UpdateFunc: func(oldObj, newObj interface{}) {
            fmt.Println("Update event detected:", newObj)
        },
        DeleteFunc: func(obj interface{}) {
            fmt.Println("Delete event detected:", obj)
        },
    })

    stop := make(chan struct{})
    defer close(stop)
    go informer.Run(stop)

    // Wait until the informer's local cache has completed its initial sync.
    if !cache.WaitForCacheSync(stop, informer.HasSynced) {
        panic("Timeout waiting for cache sync")
    }
    fmt.Println("Custom Resource Controller started successfully")

    // Block forever; the informer keeps watching in the background.
    <-stop
}
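The handlers above print the whole unstructured object. If we want to pull out individual fields, such as spec.description from our CR, the unstructured package provides helpers for that. Here is a sketch of a hypothetical helper (not part of the controller above) that could be wired in as AddFunc: printDescription; it assumes the same imports as the program above:
// printDescription is a hypothetical helper that extracts spec.description
// from an event object using the unstructured field helpers.
func printDescription(obj interface{}) {
    u, ok := obj.(*unstructured.Unstructured)
    if !ok {
        return // not the type we expect; ignore
    }
    desc, found, err := unstructured.NestedString(u.Object, "spec", "description")
    if err != nil || !found {
        return // field missing or not a string
    }
    fmt.Println("Resource", u.GetName(), "has description:", desc)
}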
Now we build this Go program and run it:
go build -o k8s-controller .
./k8s-controller
Now, whenever we add, update, or delete the custom resource created above, the corresponding events are logged in our terminal. This means our controller is watching our custom resource.
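To see the events fire, we can, for example, delete and re-apply the custom resource from another terminal (using the same assumed filename as before):
kubectl delete -f my-custom-resource.yaml
kubectl apply -f my-custom-resource.yaml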