Patrick Domnick

Building a Kubernetes Operator with the Operator Framework

Overview

Kubernetes Operators simplify the management of complex applications on Kubernetes. In this guide, we'll walk through creating a simple Kubernetes Operator using the Operator Framework. We'll also cover setting up a local Kubernetes cluster with KIND (Kubernetes in Docker) and deploying the Operator to the KIND cluster.

Note: This guide assumes you know what Kubernetes and Docker are and that you have a Mac or Linux/WSL machine.

Prerequisites

You'll need the following tools on your machine:

  • Docker: https://docs.docker.com/get-docker/
  • Go: brew install go
  • kubectl: brew install kubectl
  • kubectx: brew install kubectx
  • KIND: brew install kind
  • operator-sdk: brew install operator-sdk

Here are some extra tools which might be useful in the future:

  • k9s: brew install k9s
  • kustomize: brew install kustomize
  • helm: brew install helm
  • helmify: brew install arttor/tap/helmify

Creating a Kubernetes Cluster with KIND

Before we can deploy our Operator, we need a Kubernetes cluster to deploy it to. We'll use KIND to create a local Kubernetes cluster.

  1. Create a KIND cluster:

    kind create cluster
    
  2. Configure kubectl to use the KIND cluster:

    kind export kubeconfig
    
  3. Switch to the correct cluster context:

    kubectx kind-kind
    
  4. Verify the cluster is running:

    kubectl cluster-info
    

We now have a minimal local Kubernetes cluster running on our machine. You can use kubectl or k9s to interact with it just like any other Kubernetes cluster.

Initializing the Operator Project

We can now build our Operator using the Operator Framework. We'll use the Operator SDK to scaffold a new Operator project and then generate Custom Resource APIs. It is recommended to create a new Git repository for your Operator project and to choose a meaningful name for your Operator. You can commit your changes after each CLI command to better understand what each command generates.

  1. Create a new Operator project:

    operator-sdk init --plugins go/v3 --domain stammkneipe.dev --repo github.com/my-group/my-operator
    
  2. Create a new Custom Resource Definition (CRD):

    operator-sdk create api --group=example --version=v1alpha1 --kind=MyApp
    

This should leave you with a ready-to-use Operator project scaffolded by the Operator SDK. The three most important directories are:

  • api: Containing the Go types for your Custom Resource Definitions
  • controllers: Containing the logic for your Operator
  • config/samples: Containing sample Custom Resource instances

Implementing the Operator

We'll now implement the Operator logic. The Operator Framework provides a high-level API for writing Operators in Go. This operator will watch for instances of the Custom Resource and create a Config Map for each instance. This is just a simple example to help you get started.

Defining the Custom Resource

Open the file api/v1alpha1/myapp_types.go and add the following code to the MyAppSpec struct:

// MyAppSpec defines the desired state of MyApp
type MyAppSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // Name is the name of the config to be created of MyApp.
    Name string `json:"name,omitempty"`
}

This will add a Name field to the Custom Resource Spec. We'll use this field to set the name of the Config Map.

We can also add a status field to the Custom Resource, which will be used to store its observed state. To do so, add the following code to the MyAppStatus struct:

// MyAppStatus defines the observed state of MyApp
type MyAppStatus struct {
    // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
    // Important: Run "make" to regenerate code after modifying this file
    Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"`
}
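
The meta.SetStatusCondition helper we'll use in the controller below upserts a condition keyed by its Type rather than appending blindly, so the Conditions slice never accumulates duplicates. A simplified, stdlib-only sketch of that behavior (the real helper also maintains fields like LastTransitionTime):

```go
package main

import "fmt"

// Condition is a trimmed-down stand-in for metav1.Condition.
type Condition struct {
	Type, Status, Reason, Message string
}

// setStatusCondition mimics meta.SetStatusCondition: it replaces the
// condition with a matching Type in place, or appends a new one.
func setStatusCondition(conditions *[]Condition, newCond Condition) {
	for i, c := range *conditions {
		if c.Type == newCond.Type {
			(*conditions)[i] = newCond
			return
		}
	}
	*conditions = append(*conditions, newCond)
}

func main() {
	var conds []Condition
	setStatusCondition(&conds, Condition{Type: "App", Reason: "Initializing"})
	setStatusCondition(&conds, Condition{Type: "App", Reason: "Available"})
	fmt.Println(len(conds), conds[0].Reason) // 1 Available
}
```
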

To give visual feedback to users of tools like OpenLens, we can add +kubebuilder:printcolumn annotations to the MyApp struct. To do so, add the following code above the MyApp struct:

// MyApp is the Schema for the myapps API
// +kubebuilder:printcolumn:name="Name",type="string",JSONPath=".spec.name",description="The name of the config map to be created"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.conditions[?(@.type==\"App\")].reason",description="The status of this resource"
type MyApp struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MyAppSpec   `json:"spec,omitempty"`
    Status MyAppStatus `json:"status,omitempty"`
}

Implementing the Controller

Before we can implement the Controller, we need a few additional imports. Open the file controllers/myapp_controller.go and update the import block as follows:

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/api/meta"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/types"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"

    examplev1alpha1 "github.com/my-group/my-operator/api/v1alpha1"
)

In the same file, replace the placeholder in the Reconcile function (// TODO(user): your logic here) with the following code:

// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.14.1/pkg/reconcile
func (r *MyAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)

    // Get CRD
    myApp := &examplev1alpha1.MyApp{}
    if err := r.Get(ctx, req.NamespacedName, myApp); err != nil {
        if apierrors.IsNotFound(err) {
            log.Log.Info("MyApp not found. Ignoring since object must be deleted.")
            return ctrl.Result{}, nil
        }
        log.Log.Error(err, "Failed to get MyApp.")
        return ctrl.Result{}, err
    }

    // Start the Reconciliation
    conditions := &myApp.Status.Conditions
    if len(*conditions) == 0 {
        meta.SetStatusCondition(conditions, metav1.Condition{
            Type:    "App",
            Status:  metav1.ConditionUnknown,
            Reason:  "Initializing",
            Message: "Starting reconciliation",
        })
        log.Log.Info("Condition", "Length", len(myApp.Status.Conditions))
        if err := r.Status().Update(ctx, myApp); err != nil {
            log.Log.Error(err, "Failed to update MyApp status")
            return ctrl.Result{}, err
        }
        // Start the next cycle
        return ctrl.Result{}, nil
    }

    // Act depending on the Condition. This is just a rough example.
    currentCondition := (*conditions)[0].Reason
    switch currentCondition {
    case "Initializing":
        // Create a ConfigMap
        cm := &corev1.ConfigMap{}
        err := r.Get(ctx, types.NamespacedName{Name: myApp.Name, Namespace: myApp.Namespace}, cm)
        if err != nil {
            if apierrors.IsNotFound(err) {
                // No ConfigMap exists and we create one
                cmInstance := &corev1.ConfigMap{
                    ObjectMeta: metav1.ObjectMeta{
                        Name:      myApp.Name,
                        Namespace: myApp.Namespace,
                    },
                }
                if err := r.Create(ctx, cmInstance); err != nil {
                    log.Log.Error(err, "Failed to create a new ConfigMap", "Namespace", myApp.Namespace, "Name", myApp.Name)
                    return ctrl.Result{}, err
                }
            } else {
                // Some unknown error occurred
                log.Log.Error(err, "Failed to get ConfigMap", "Namespace", myApp.Namespace, "Name", myApp.Name)
                return ctrl.Result{}, err
            }
        }
        // Update the status
        log.Log.Info("Config Map created")
        meta.SetStatusCondition(&myApp.Status.Conditions, metav1.Condition{
            Type:    "App",
            Status:  metav1.ConditionTrue,
            Reason:  "Available",
            Message: "Config Map created",
        })
        if err := r.Status().Update(ctx, myApp); err != nil {
            log.Log.Error(err, "Failed to update status")
            return ctrl.Result{}, err
        }
    case "Unavailable":
        // Retry depending on the error
    case "Available":
        // Everything is fine
    default:
        // Set State if State was unknown
        meta.SetStatusCondition(conditions, metav1.Condition{
            Type:    "App",
            Status:  metav1.ConditionUnknown,
            Reason:  "Initializing",
            Message: "Starting reconciliation",
        })
        if err := r.Status().Update(ctx, myApp); err != nil {
            log.Log.Error(err, "Failed to update myApp status")
            return ctrl.Result{}, err
        }
    }

    return ctrl.Result{}, nil
}

The Operator watches the Custom Resources and reacts to changes: the Reconcile function is called for each change, checks the status of the Custom Resource, and acts accordingly. In this example, it updates the status of the Custom Resource and creates a Config Map if one doesn't exist.
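
Stripped of all Kubernetes API calls, the condition handling above is a small state machine over the Reason field. The following stdlib-only sketch captures those transitions (the nextReason helper is purely illustrative and not part of the generated project):

```go
package main

import "fmt"

// nextReason models the transitions of the Reconcile switch above,
// without any Kubernetes API calls. The empty string stands for a
// freshly created resource with no conditions yet.
func nextReason(current string) string {
	switch current {
	case "":
		// No conditions yet: initialize and trigger the next cycle.
		return "Initializing"
	case "Initializing":
		// The real controller creates the ConfigMap here if it is missing.
		return "Available"
	case "Available", "Unavailable":
		// Nothing to do (or retry logic, in the Unavailable case).
		return current
	default:
		// Unknown reason: reset to a known state.
		return "Initializing"
	}
}

func main() {
	state := ""
	for i := 0; i < 3; i++ {
		state = nextReason(state)
		fmt.Println(state)
	}
	// Prints: Initializing, Available, Available
}
```

Each transition in the real controller is persisted with r.Status().Update, so the next reconciliation picks up where the previous one left off.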

Creating a sample Custom Resource Instance

To test our implementation, we'll create a sample Custom Resource instance. Open the file config/samples/example_v1alpha1_myapp.yaml and add the following code:

apiVersion: example.stammkneipe.dev/v1alpha1
kind: MyApp
metadata:
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/instance: myapp-sample
    app.kubernetes.io/part-of: my-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: my-operator
  name: myapp-sample
spec:
  name: myapp-config-map

Testing the Operator locally

  1. Deploy the CRDs to the cluster:

    make install
    
  2. Deploy the sample Custom Resource instance to the cluster:

    kubectl apply -f config/samples
    
  3. Start the Operator:

    make run
    

You should now see the Operator's logs in your terminal. The final message should be Config Map created. You can now stop the Operator and check your cluster for the Config Map:

kubectl get configmap myapp-sample -o yaml

Conclusion

Congratulations! You've successfully built a Kubernetes Operator using the Operator Framework on a local KIND cluster. You can extend this example by adding more features to your Operator and exploring advanced Operator Framework capabilities. I encourage you to play around with the Operator Framework and explore the possibilities of Operators on Kubernetes.

As you might have noticed, this tutorial does not cover testing, packaging, or deploying the Operator to a real Kubernetes cluster. I'll cover these topics in future guides, so stay tuned!
