Steven Sklar
Kubebuilder Tips and Tricks

Recently, I've been spending a lot of time writing a Kubernetes operator using the Go operator-sdk, which is built on top of the Kubebuilder framework. This is a list of a few tips and tricks that I've compiled over the past few months of working with these frameworks.

Log Formatting

Kubebuilder, like much of the k8s ecosystem, uses zap for logging. Out of the box, the Kubebuilder zap configuration outputs a timestamp for each log line, formatted in scientific notation. This makes it difficult to read the time of an event just by glancing at it. Personally, I prefer ISO 8601, so let's change it!

In your scaffolding's main.go, you can configure your current logger format by modifying the zap.Options struct and calling ctrl.SetLogger.

opts := zap.Options{
    Development: true,
    TimeEncoder: zapcore.ISO8601TimeEncoder,
}

ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

In this case, I added the zapcore.ISO8601TimeEncoder, which encodes timestamps as human-readable ISO 8601-formatted strings. It took some digging, along with help from the Kubernetes Slack org, to figure this one out. But it's been a huge quality-of-life improvement when debugging complex reconcile loops, especially in a multithreaded environment.

MaxConcurrentReconciles

Speaking of multithreaded environments: by default, an operator runs only a single reconcile loop per controller. In practice, though, especially when running a globally-scoped controller, it's useful to run multiple concurrent reconcile loops so that many resource changes can be handled at once. Luckily, the Operator SDK makes this incredibly easy with the MaxConcurrentReconciles setting. We can set this up in a new controller's SetupWithManager func:

func (r *CustomReconciler) SetupWithManager(mgr ctrl.Manager) error {

    return ctrl.NewControllerManagedBy(mgr).
        WithOptions(controller.Options{MaxConcurrentReconciles: 10}).
        ...
        Complete(r)
}

I've created a command-line arg in my main.go file that allows the user to set this to any integer value, since it will likely be tweaked over time depending on how the controller performs in a production cluster.
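As a minimal sketch of such a flag, using Go's standard flag package (the flag name and default here are my own illustration, not from the original post):

```go
package main

import (
	"flag"
	"fmt"
)

// parseMaxConcurrent parses a hypothetical -max-concurrent-reconciles flag
// from the given arguments. The flag name and its default of 1 (matching
// the operator default) are illustrative choices.
func parseMaxConcurrent(args []string) int {
	fs := flag.NewFlagSet("manager", flag.ExitOnError)
	n := fs.Int("max-concurrent-reconciles", 1,
		"maximum number of concurrent reconcile loops per controller")
	fs.Parse(args)
	return *n
}

func main() {
	n := parseMaxConcurrent([]string{"-max-concurrent-reconciles", "10"})
	// The parsed value would then feed controller.Options{MaxConcurrentReconciles: n}.
	fmt.Println(n) // prints 10
}
```

The parsed value can then be passed straight into the controller.Options struct shown above.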

Parent-Child Relationships

One of the basic functions of a controller is to act as a parent to other Kubernetes resources. The controller "owns" these objects, so that when the parent resource is deleted, all of its child objects are automatically garbage collected by the Kubernetes runtime.

I like this small function, which can be called on any client.Object to set the custom resource you're reconciling as its controlling owner.

func (r *CustomReconciler) ownObject(ctx context.Context, cr *myapiv1alpha1.CustomResource, obj client.Object) error {

    err := ctrl.SetControllerReference(cr, obj, r.Scheme)
    if err != nil {
        return err
    }
    return r.Update(ctx, obj)
}

You can then add Owns watches for these resources in your SetupWithManager func. These will instruct your controller to listen for changes in child resources of the specified types, triggering a reconcile loop on each change.

func (r *CustomReconciler) SetupWithManager(mgr ctrl.Manager) error {

    return ctrl.NewControllerManagedBy(mgr).
        Owns(&v1apps.Deployment{}).
        Owns(&v1core.ConfigMap{}).
        Owns(&v1core.Service{}).
        Complete(r)
}


Watches

Your controller can also watch resources that it doesn't own. This is useful when you need to watch for changes in globally-scoped resources like PersistentVolumes or Nodes. Here's an example of how you would register such a watch in your SetupWithManager func:

func (r *CustomReconciler) SetupWithManager(mgr ctrl.Manager) error {

    return ctrl.NewControllerManagedBy(mgr).
        Watches(
            &source.Kind{Type: &v1core.Node{}},
            handler.EnqueueRequestsFromMapFunc(myNodeFilterFunc),
            builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
        ).
        Complete(r)
}

In this case, you need to implement myNodeFilterFunc to accept an obj client.Object and return []reconcile.Request. Using the ResourceVersionChangedPredicate triggers the filter function for every change on that resource type, so it's important to make your filter function as efficient as possible, since it could be called quite often, especially if your controller is globally-scoped.
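As a sketch of what myNodeFilterFunc might look like (the label key and the CustomResourceList lookup are my own assumptions, not from the original post; defined as a method, it would be passed as r.myNodeFilterFunc):

```go
// myNodeFilterFunc maps a changed Node to the set of custom resources that
// should be reconciled in response. The label filter and the List call are
// illustrative; adapt them to however your CRs relate to Nodes.
func (r *CustomReconciler) myNodeFilterFunc(obj client.Object) []reconcile.Request {
	node, ok := obj.(*v1core.Node)
	if !ok {
		return nil
	}

	// Cheap check first: skip nodes this operator doesn't care about.
	// (Hypothetical label key.)
	if _, managed := node.Labels["myapp.example.com/managed"]; !managed {
		return nil
	}

	// Enqueue a reconcile request for every matching custom resource.
	var crs myapiv1alpha1.CustomResourceList
	if err := r.List(context.Background(), &crs); err != nil {
		return nil
	}
	reqs := make([]reconcile.Request, 0, len(crs.Items))
	for _, cr := range crs.Items {
		reqs = append(reqs, reconcile.Request{
			NamespacedName: types.NamespacedName{
				Namespace: cr.Namespace,
				Name:      cr.Name,
			},
		})
	}
	return reqs
}
```

Doing the cheap label check before the List call keeps the common no-op case fast, which matters given how often this function can fire.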

Field Indexers

One gotcha I encountered was querying for the list of Pods running on a particular Node. This query uses a FieldSelector filter, as seen here:

// Get a list of all pods on the node
err := c.List(ctx, &pods, &client.ListOptions{
    Namespace:     "",
    FieldSelector: fields.ParseSelectorOrDie(fmt.Sprintf("spec.nodeName=%s", node.Name)),
})

This codepath led to the following error: Index with name field:spec.nodeName does not exist. After some googling, I found a GitHub issue that referenced a Kubebuilder docs page containing the answer.

Controllers created using operator-sdk and Kubebuilder use a built-in caching mechanism to store the results of API requests, both to avoid spamming the K8s API and to improve reconciliation performance.

When performing resource lookups using FieldSelectors, you first need to add your desired search field to an index that the cache can use for lookups. Here's an example that builds this index for a Pod's nodeName:

if err := mgr.GetFieldIndexer().IndexField(context.TODO(), &v1core.Pod{}, "spec.nodeName", func(rawObj client.Object) []string {
    pod := rawObj.(*v1core.Pod)
    return []string{pod.Spec.NodeName}
}); err != nil {
    return err
}

Now, we can run the List function from above with the FieldSelector with no issues.

Retries on Conflicts

If you've ever written controllers, you're probably very familiar with this error: Operation cannot be fulfilled on ...: the object has been modified; please apply your changes to the latest version and try again.

This occurs when the version of the resource you're currently reconciling in your controller is out of date with the latest version of the K8s cluster state. If you retry your reconciliation loop on errors, your controller will eventually reconcile the resource, but the noise can really pollute your logs and make it difficult to spot more important errors.

After reading through the k8s source, I found the solution: RetryOnConflict. It's a utility function in the client-go package (k8s.io/client-go/util/retry) that runs a function and automatically retries on conflict errors, up to a retry limit.

Now you can just wrap your logic inside this function argument and never worry about this issue again! An added benefit is that you get to return err instead of return ctrl.Result{}, err, which makes the code that much easier to read.
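Here's a sketch of what this can look like for a status update on the custom resource (the helper name and the NodeReady status field are illustrative, not from the original post):

```go
// updateNodeReady wraps a read-modify-write cycle in RetryOnConflict
// (from k8s.io/client-go/util/retry). On a conflict error, the closure
// is re-run with client-go's default backoff, re-fetching the latest
// version of the object before each attempt.
func (r *CustomReconciler) updateNodeReady(ctx context.Context, name types.NamespacedName, ready bool) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		var cr myapiv1alpha1.CustomResource
		// Always fetch the latest version before modifying it.
		if err := r.Get(ctx, name, &cr); err != nil {
			return err
		}
		cr.Status.NodeReady = ready // hypothetical status field
		return r.Status().Update(ctx, &cr)
	})
}
```

Note that RetryOnConflict only retries when the returned error is a conflict; any other error is passed straight through to the caller.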

Useful Kubebuilder Markers

Here are some useful code markers that I've found while developing my operator.

To add custom columns to your custom resource's description (shown when running kubectl get), you can add markers like these to your API object:

//+kubebuilder:printcolumn:name="NodeReady",type="boolean",JSONPath=".status.nodeReady"
//+kubebuilder:printcolumn:name="NodeIp",type="string",JSONPath=".status.nodeIp"

To add a short name to your custom resource (like pvc for PersistentVolumeClaim, for example), you can add this marker:

//+kubebuilder:resource:shortName=mycr;mycrs

More docs on kubebuilder markers can be found here:

https://book.kubebuilder.io/reference/markers/crd.html

Originally published on my blog: https://sklar.rocks/kubebuilder-tips/
