I spent more than 8 hours wrestling with Kubernetes image credential provider plugins before finally stumbling upon the real solution. If you think this is as simple as dropping a config into Kind or Minikube, think again. It doesn’t work that way, and I’d rather save you the wasted time I went through.
This guide comes from someone who considers a debugger “bloat.” If you share that mindset, you’ll feel right at home.
In this blog, I will walk you step by step through setting up a Kubernetes image credential provider plugin in a Kind cluster. We’ll manually configure the Kubelet to use your custom credential provider plugin: no shortcuts, no bloat, just the essentials.
Step 1
🔹Create your Cluster.
I am going for a basic three-node cluster to work with. Use the following configuration to spin it up. This setup includes one control-plane node and two worker nodes. Feel free to add more configs if you want.
# kind-config.yaml: a minimal three-node cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# the control plane node config
- role: control-plane
# the workers
- role: worker
- role: worker
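Save it as kind-config.yaml (the filename is just my choice) and spin up the cluster:
kind create cluster --config kind-config.yaml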
Step 2
🔹Copy the Credential Provider Plugin Binary and Config to all the nodes.
You can also use a DaemonSet to do this in production, but since we are still in the development stage of the credential-provider-plugin, I have gone with a simple docker cp. I have my credential plugin and config in a directory:
❯ ls -la credential-provider-plugin
total 22524
drwxr-xr-x 2 bupd bupd 4096 Aug 30 06:21 .
drwxr-xr-x 22 bupd bupd 4096 Aug 30 06:23 ..
-rw-r--r-- 1 bupd bupd 495 Aug 30 06:21 config.yml # config
-rwxr-xr-x 1 bupd bupd 12034738 Aug 30 06:18 credential-provider-echo-token # binary
-rwxr-xr-x 1 bupd bupd 10988078 Aug 28 17:25 credential-provider-echo-token-old
-rw-r--r-- 1 bupd bupd 993 Aug 28 17:24 go.mod
-rw-r--r-- 1 bupd bupd 8341 Aug 28 17:24 go.sum
-rw-r--r-- 1 bupd bupd 3088 Aug 30 06:18 main.go
-rw-r--r-- 1 bupd bupd 1734 Aug 28 17:25 README.md
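For reference, config.yml is a kubelet CredentialProviderConfig. Here is a minimal sketch of one; the matchImages pattern and cache duration are placeholders, so point them at your own registry:
# a minimal config.yml sketch; matchImages and defaultCacheDuration are placeholders
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
    # name must match the plugin binary inside the bin dir
  - name: credential-provider-echo-token
    matchImages:
      - "*.example.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1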
# copy the entire folder into k8s nodes
docker cp ./credential-provider-plugin kind-control-plane:/etc/kubernetes/credential-providers
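That command only covers the control plane; every node’s kubelet needs the plugin, so repeat it for the workers (node names below assume the default cluster name kind):
# copy the plugin folder to the worker nodes as well
for node in kind-worker kind-worker2; do
  docker cp ./credential-provider-plugin "$node":/etc/kubernetes/credential-providers
done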
Step 3
🔹Modify the Kubelet Configuration
The real fun begins now. The Kubelet's configuration on a kind cluster is managed by a file called kubeadm-flags.env. We need to pull this file from the control-plane node, modify it, and push it back.
You will find it at /var/lib/kubelet/kubeadm-flags.env
Now copy the file from the k8s node to the host:
docker cp kind-control-plane:/var/lib/kubelet/kubeadm-flags.env ./kubeadm-flags-kind.env
Edit the file to include the credential provider flags:
- --image-credential-provider-bin-dir: the directory where our plugin binary resides.
- --image-credential-provider-config: the path to the configuration file for the plugin.
Note: Don't forget to quote each value separately. Seriously, I've seen two hours vanish because of this.
Before:
KUBELET_KUBEADM_ARGS="--node-ip=172.18.0.3 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.10.1 --provider-id=kind://docker/kind/kind-control-plane"
After (remember to add the single quotes around the values):
KUBELET_KUBEADM_ARGS="--node-ip=172.18.0.3 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.10.1 --provider-id=kind://docker/kind/kind-control-plane --image-credential-provider-bin-dir='/etc/kubernetes/credential-providers' --image-credential-provider-config='/etc/kubernetes/credential-providers/config.yml'"
Copy the modified file back to the control plane
docker cp ./kubeadm-flags-kind.env kind-control-plane:/var/lib/kubelet/kubeadm-flags.env
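One caveat: each node’s kubeadm-flags.env carries its own --node-ip, so don’t push the control plane’s file onto the workers. Here is a sketch that instead appends the two flags in place on every node; the sed expression just rewrites the closing quote of KUBELET_KUBEADM_ARGS:
# appends the flags in place on each node
# note: running this twice appends the flags twice
for node in kind-control-plane kind-worker kind-worker2; do
  docker exec "$node" sed -i \
    "s|\"\$| --image-credential-provider-bin-dir='/etc/kubernetes/credential-providers' --image-credential-provider-config='/etc/kubernetes/credential-providers/config.yml'\"|" \
    /var/lib/kubelet/kubeadm-flags.env
done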
Step 4
🔹Restart the kubelet
Get into the control-plane node
docker exec -it kind-control-plane sh
Once you are inside the node, reload systemd and restart the kubelet:
systemctl daemon-reload && systemctl restart kubelet
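If you would rather stay on the host, the same thing works through docker exec (repeat for each worker node); either way, check that the kubelet came back up:
docker exec kind-control-plane systemctl daemon-reload
docker exec kind-control-plane systemctl restart kubelet
# should print "active" once the kubelet restarts cleanly
docker exec kind-control-plane systemctl is-active kubelet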
Step 5
🔹Verify the Plugin
The most reliable way to verify that your plugin is working is to embed a webhook into it. This lets you send all inputs and outputs to a webhook endpoint, giving you a straightforward way to monitor the plugin’s execution. It may feel a bit hacky, but I love it. Implementation here.
Another way to test the plugin is by pulling an image that requires the default static credentials configured in your plugin. If the plugin is working correctly, the pod should start successfully with the image, confirming that the pull operation uses the expected credentials.
You can try this by applying the example pod.yaml file:
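Here is a minimal sketch of what that pod.yaml might look like; the registry host and image path are placeholders, so use an image that your plugin’s matchImages pattern actually covers:
# pod.yaml: a minimal sketch; registry.example.com is a placeholder
apiVersion: v1
kind: Pod
metadata:
  name: credential-provider-test
spec:
  containers:
    - name: test
      image: registry.example.com/private/busybox:latest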
kubectl apply -f pod.yaml
Once the pod starts, you’ll be able to confirm that the hardcoded credentials from the credential-provider-plugin are being used as intended.
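To watch the pull happen (using the pod name from the sketch above):
kubectl get pod credential-provider-test
# the Events section shows whether the image pull succeeded or failed
kubectl describe pod credential-provider-test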
If you have other ideas or approaches for monitoring plugin execution, feel free to share them in the comments!
📚 References
This is a finicky process, and things can go wrong. Here are some resources that I found invaluable while navigating this.
- Official Docs
  - https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
  - https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
  - https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/
  - https://kubernetes.io/docs/reference/config-api/kubelet-credentialprovider.v1/#credentialprovider-kubelet-k8s-io-v1-CredentialProviderResponse
- Example Implementations
- Config Tips
- For people trying to use this in proxy environments
  - https://github.com/kubernetes-sigs/kind/issues/2009 (if you are using a proxy, which is another problem on its own)
  - https://medium.com/@schottz/how-to-skip-tls-verify-fot-internal-registry-on-containerd-e039887bcb83 (this might help behind a proxy, though it did not work for me)