Kahiro Okina

Getting Started with the Cluster Inventory API on OCM (Part 1: ClusterProfile)

This is Part 1 of a two-part series.
Part 2 (coming soon): Connecting to spoke clusters from a controller using multicluster-runtime, driven by ClusterProfile.

What this article is about

The Cluster Inventory API (multicluster.x-k8s.io) is driven by SIG-Multicluster and centered on the ClusterProfile resource. It only delivers value when something produces those ClusterProfiles. That something is a cluster manager. Today, the production-ready open-source option is Open Cluster Management (OCM), whose registration controller can act as a ClusterProfile cluster manager behind a feature gate.

This article shows how to set that up end-to-end on a local kind environment:

  • An OCM hub-spoke setup with three kind clusters.
  • The ClusterProfile feature gate enabled on the hub.
  • cluster-proxy wired in so ClusterProfile.status.accessProviders carries real, usable connection info.
  • ClusterProperty from the spokes flowing into ClusterProfile.status.properties on the hub.

By the end, you'll have a working multicluster.x-k8s.io/v1alpha1 ClusterProfile inventory that any Cluster Inventory API consumer can read. That is exactly what Part 2 will plug into via multicluster-runtime.

Overview

The setup looks like this:

  • Three kind clusters: hub, cluster1, cluster2.
  • The OCM hub manages cluster1 and cluster2 as managed clusters.
  • The managed clusters belong to a ManagedClusterSet named sandbox-fleet.
  • The cluster-proxy and managed-serviceaccount addons are installed on the spokes.
  • ClusterProfile resources are created in the cluster-inventory namespace.
  • ClusterProperty values from the spokes flow into ClusterProfile.status.properties on the hub.

The synchronization path for ClusterProperty is shown below. The boxes name the actual OCM components for reference, but the only thing you need to take away is the direction: a property set on a spoke ends up in ClusterProfile.status.properties on the hub.

flowchart LR
  subgraph spoke["cluster1 / cluster2"]
    property["ClusterProperty<br/>about.k8s.io/v1alpha1"]
    agent["klusterlet-registration-agent"]
  end

  subgraph hub["hub"]
    managedCluster["ManagedCluster<br/>status.clusterClaims"]
    profileController["cluster-manager-registration-controller<br/>ClusterProfileStatusController"]
    clusterProfile["cluster-inventory/ClusterProfile<br/>status.properties"]
  end

  property --> agent
  agent --> managedCluster
  managedCluster --> profileController
  profileController --> clusterProfile

In plain terms: spoke ClusterProperty flows to hub ManagedCluster.status.clusterClaims, then to hub ClusterProfile.status.properties. That last hop is what makes OCM a Cluster Inventory API cluster manager. Properties you set on a spoke become inventory data any consumer can read on the hub through a vendor-neutral API.
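
To make the flow concrete, here is the kind of ClusterProperty you would apply on a spoke such as cluster1. The property name and value are made up for illustration; they are not part of the setup steps in this article:

```yaml
# Hypothetical example. ClusterProperty is cluster-scoped, so no namespace.
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: environment   # illustrative property name
spec:
  value: sandbox      # illustrative value
```

Once the ClusterProfile feature gate covered later in this article is enabled, a property like this surfaces on the hub as an entry in ClusterProfile.status.properties.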

The commands in this article were verified with:

kind v0.31.0
clusteradm v1.2.1
helm v3.20.1
kubectl v1.35.3

Prerequisites

The following commands must be available locally.

If clusteradm is not installed:

curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash

Verify the installed tools:

kind version
helm version
kubectl version --client
clusteradm version

clusteradm version also tries to connect to the current kubeconfig context to fetch the server version. At this point, just verify that the client version prints.

This article uses the following Kubernetes context names:

export HUB_CTX=kind-hub
export C1_CTX=kind-cluster1
export C2_CTX=kind-cluster2

Clone the OCM repository

The OCM repository contains a script for creating a local development environment. This article uses solutions/setup-dev-environment/local-up.sh.

git clone https://github.com/open-cluster-management-io/ocm.git
cd ocm

Create the hub / cluster1 / cluster2 kind clusters

local-up.sh runs the following setup steps:

  • Creates the hub, cluster1, and cluster2 kind clusters.
  • Initializes the hub with clusteradm init.
  • Registers the spoke clusters with clusteradm join.
  • Accepts the join requests with clusteradm accept.

./solutions/setup-dev-environment/local-up.sh

After the script completes, check the managed clusters from the hub:

kubectl config use-context kind-hub
kubectl get managedclusters --context "$HUB_CTX"

In this verified environment:

NAME       HUB ACCEPTED   MANAGED CLUSTER URLS                  JOINED   AVAILABLE   AGE
cluster1   true           https://cluster1-control-plane:6443   True     True        2m
cluster2   true           https://cluster2-control-plane:6443   True     True        2m

JOINED and AVAILABLE are True for both spokes. This is the starting point for managing them from the hub.

Install the cluster-proxy addon

We install cluster-proxy for two reasons. The first is that the hub needs to reach spoke APIs. The second, which matters more for this article, is that ClusterProfile.status.accessProviders needs a real connection endpoint that downstream Cluster Inventory API consumers can actually use.

Add the OCM Helm repository:

helm repo add ocm https://open-cluster-management.io/helm-charts/
helm repo update

We create a development fleet named sandbox-fleet as a ManagedClusterSet, then add cluster1 and cluster2 to it. The same fleet is used as the target for addon distribution and ClusterProfile creation.

OCM uses two resources for cluster-set handling:

  • ManagedClusterSet decides which ManagedCluster belongs to the set.
  • ManagedClusterSetBinding makes the set usable from a specific namespace.

sandbox-fleet uses ExclusiveClusterSetLabel. When a ManagedCluster has the label cluster.open-cluster-management.io/clusterset=sandbox-fleet, that cluster belongs to sandbox-fleet.

Note on existing cluster sets. Some environments already have ManagedClusterSet resources named default or global, managed by the OCM DefaultClusterSet feature gate. global in particular uses an empty label selector and selects all managed clusters, so it overlaps with sandbox-fleet. This article consistently targets sandbox-fleet for addons and ClusterProfile to keep things unambiguous.

printf '%s\n' \
  'apiVersion: cluster.open-cluster-management.io/v1beta2' \
  'kind: ManagedClusterSet' \
  'metadata:' \
  '  name: sandbox-fleet' \
  'spec:' \
  '  clusterSelector:' \
  '    selectorType: ExclusiveClusterSetLabel' | \
  kubectl apply --context "$HUB_CTX" -f -

kubectl label managedcluster cluster1 \
  cluster.open-cluster-management.io/clusterset=sandbox-fleet \
  --overwrite \
  --context "$HUB_CTX"

kubectl label managedcluster cluster2 \
  cluster.open-cluster-management.io/clusterset=sandbox-fleet \
  --overwrite \
  --context "$HUB_CTX"

cluster-proxy v0.10.0 introduced support for configuring ClusterProfile access providers. That is the feature we need here, which is why we pin the chart to v0.10.0: Part 2 depends on ClusterProfile for dynamic access.

Why v0.10.0 needs a workaround

The latest ocm/cluster-proxy chart in the Helm repository is v0.10.0, but that chart has a schema mismatch: it renders spec.proxyAgent.additionalValues in ManagedProxyConfiguration, while the v0.10.0 CRD schema does not declare that field. Running helm install as-is fails with:

failed to create typed patch object (... ManagedProxyConfiguration): .spec.proxyAgent.additionalValues: field not declared in schema

The cluster-proxy main branch removed this output in "Fix chart error. (#272)". Once the next chart release ships, this workaround can be dropped.

Helm v3 can pass an executable path to --post-renderer, but Helm v4 changed post-renderers to plugins and no longer accepts a raw path directly. To stay compatible with both, this article uses helm template | kubectl apply and strips the offending field with perl.
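
To see what the perl one-liner does before using it against real chart output, here is a self-contained demo on a hypothetical ManagedProxyConfiguration fragment (the image and replicas values are made up; only the additionalValues lines match what the v0.10.0 chart actually renders):

```shell
# Hypothetical fragment of what the v0.10.0 chart renders (values made up).
rendered='spec:
  proxyAgent:
    image: quay.io/example/proxy-agent
    additionalValues:
      enableImpersonation: "false"
    replicas: 3'

# Strip the undeclared additionalValues block with the same one-liner used
# in the install pipeline below. -0 makes perl slurp the whole input so the
# regex can match across the two lines.
cleaned=$(printf '%s' "$rendered" | \
  perl -0pe 's/\n    additionalValues:\n      enableImpersonation: "[^"]+"//g')

printf '%s\n' "$cleaned"
```

The output is the same fragment with the two offending lines removed, which is exactly the shape the v0.10.0 CRD schema accepts.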

Install cluster-proxy on the hub. Enabling ClusterProfileAccessProvider and userServer.enabled makes cluster-proxy connection information appear in ClusterProfile.status.accessProviders. That is the bridge between the inventory and real spoke API access:

kubectl create namespace open-cluster-management-addon \
  --context "$HUB_CTX" \
  --dry-run=client \
  -o yaml | kubectl apply --context "$HUB_CTX" -f -

printf '%s\n' \
  'apiVersion: cluster.open-cluster-management.io/v1beta2' \
  'kind: ManagedClusterSetBinding' \
  'metadata:' \
  '  name: sandbox-fleet' \
  '  namespace: open-cluster-management-addon' \
  'spec:' \
  '  clusterSet: sandbox-fleet' | \
  kubectl apply --context "$HUB_CTX" -f -

printf '%s\n' \
  'apiVersion: cluster.open-cluster-management.io/v1beta1' \
  'kind: Placement' \
  'metadata:' \
  '  name: cluster-proxy-placement' \
  '  namespace: open-cluster-management-addon' \
  'spec:' \
  '  clusterSets:' \
  '    - sandbox-fleet' | \
  kubectl apply --context "$HUB_CTX" -f -

ManagedProxyConfiguration is a CRD provided by the cluster-proxy chart. Apply the CRDs first and wait for them to become Established. If the CRD and CR are sent through the same kubectl apply stream, Kubernetes discovery may not see the new CRD in time and can return no matches for kind "ManagedProxyConfiguration".

helm show crds ocm/cluster-proxy --version 0.10.0 | \
  kubectl apply --context "$HUB_CTX" -f -

kubectl wait --for=condition=Established \
  crd/managedproxyconfigurations.proxy.open-cluster-management.io \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=condition=Established \
  crd/managedproxyserviceresolvers.proxy.open-cluster-management.io \
  --context "$HUB_CTX" \
  --timeout=120s

helm template cluster-proxy ocm/cluster-proxy \
  -n open-cluster-management-addon \
  --version 0.10.0 \
  --set installByPlacement.placementName=cluster-proxy-placement \
  --set installByPlacement.placementNamespace=open-cluster-management-addon \
  --set featureGates.clusterProfileAccessProvider=true \
  --set userServer.enabled=true | \
  perl -0pe 's/\n    additionalValues:\n      enableImpersonation: "[^"]+"//g' | \
  kubectl apply --context "$HUB_CTX" -f -

Wait for ManagedProxyConfiguration/cluster-proxy and the certificate Secrets used by clusteradm proxy:

kubectl wait --for=create \
  managedproxyconfiguration/cluster-proxy \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=create \
  secret/proxy-client \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=create \
  secret/proxy-server \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=create \
  secret/agent-server \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=120s

Check the addon status:

kubectl rollout status deployment/cluster-proxy-addon-manager \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl rollout status deployment/cluster-proxy \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=180s

clusteradm get addon cluster-proxy --context "$HUB_CTX"
kubectl get managedclusteraddon -A --context "$HUB_CTX" | grep cluster-proxy

kubectl wait --for=condition=Available \
  managedclusteraddon/cluster-proxy \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=condition=Available \
  managedclusteraddon/cluster-proxy \
  -n cluster2 \
  --context "$HUB_CTX" \
  --timeout=180s

Install the managed-serviceaccount addon

Install the managed-serviceaccount addon so clusteradm proxy kubectl can access the spoke clusters with a managed service account.

The managed-serviceaccount chart also defaults to the global cluster set and does not expose a Helm value to change it. We set agentInstallAll=false to disable automatic distribution, then explicitly create ManagedClusterAddOn resources for the target clusters:

helm install \
  --kube-context "$HUB_CTX" \
  -n open-cluster-management-managed-serviceaccount \
  --create-namespace \
  managed-serviceaccount \
  ocm/managed-serviceaccount \
  --set agentInstallAll=false

printf '%s\n' \
  'apiVersion: addon.open-cluster-management.io/v1alpha1' \
  'kind: ManagedClusterAddOn' \
  'metadata:' \
  '  name: managed-serviceaccount' \
  '  namespace: cluster1' \
  'spec:' \
  '  installNamespace: open-cluster-management-managed-serviceaccount' | \
  kubectl apply --context "$HUB_CTX" -f -

printf '%s\n' \
  'apiVersion: addon.open-cluster-management.io/v1alpha1' \
  'kind: ManagedClusterAddOn' \
  'metadata:' \
  '  name: managed-serviceaccount' \
  '  namespace: cluster2' \
  'spec:' \
  '  installNamespace: open-cluster-management-managed-serviceaccount' | \
  kubectl apply --context "$HUB_CTX" -f -

This flow creates ManagedClusterAddOn resources directly. Check the clusters in sandbox-fleet and verify the addon targets line up with the fleet:

kubectl get managedclusters \
  --context "$HUB_CTX" \
  -L cluster.open-cluster-management.io/clusterset

Check the addon status:

kubectl rollout status deployment/managed-serviceaccount-addon-manager \
  -n open-cluster-management-managed-serviceaccount \
  --context "$HUB_CTX" \
  --timeout=180s

clusteradm get addon managed-serviceaccount --context "$HUB_CTX"
kubectl get managedclusteraddon -A --context "$HUB_CTX" | grep managed-serviceaccount

kubectl wait --for=condition=Available \
  managedclusteraddon/managed-serviceaccount \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=condition=Available \
  managedclusteraddon/managed-serviceaccount \
  -n cluster2 \
  --context "$HUB_CTX" \
  --timeout=180s

Check the proxy path health:

clusteradm proxy health --context "$HUB_CTX"

Create a ManagedServiceAccount

The next three sections (Create a ManagedServiceAccount, Distribute RBAC to the spoke, Access the spoke API with clusteradm proxy) are a side quest, not Cluster Inventory API itself. They exist to verify that the connection endpoint that will appear in ClusterProfile.status.accessProviders is actually reachable. If you only care about the inventory data and plan to drive access from a controller later, you can skim these and pick up at Enable the ClusterProfile feature gate.

Create a ManagedServiceAccount named test for cluster1:

printf '%s\n' \
  'apiVersion: authentication.open-cluster-management.io/v1beta1' \
  'kind: ManagedServiceAccount' \
  'metadata:' \
  '  name: test' \
  '  namespace: cluster1' \
  'spec:' \
  '  rotation: {}' | \
  kubectl apply --context "$HUB_CTX" -f -

Check that it has been created:

kubectl get managedserviceaccount -n cluster1 --context "$HUB_CTX"

Wait for the hub-side Secret for the managed service account:

kubectl wait --for=create \
  secret/test \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=120s

Check the status conditions and verify that the token Secret has been reported:

kubectl get managedserviceaccount test \
  -n cluster1 \
  --context "$HUB_CTX" \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

Distribute RBAC to the spoke

On the spoke cluster, the ManagedServiceAccount is realized as a regular Kubernetes ServiceAccount in the namespace specified by ManagedClusterAddOn.spec.installNamespace, which is open-cluster-management-managed-serviceaccount:

kubectl get serviceaccount test \
  -n open-cluster-management-managed-serviceaccount \
  --context "$C1_CTX"

This verification grants cluster-admin. In production, grant only the Role or ClusterRole required for the target APIs.
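
As an illustration of what a narrower grant could look like, here is a hypothetical read-only ClusterRole (the name and rules are made up; the walkthrough below sticks with cluster-admin for simplicity):

```yaml
# Illustrative least-privilege alternative: read-only access to nodes.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: managed-sa-test-nodes-reader   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
```

You would then reference this ClusterRole in the ClusterRoleBinding below instead of cluster-admin.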

printf '%s\n' \
  'apiVersion: rbac.authorization.k8s.io/v1' \
  'kind: ClusterRoleBinding' \
  'metadata:' \
  '  name: managed-sa-test' \
  'roleRef:' \
  '  apiGroup: rbac.authorization.k8s.io' \
  '  kind: ClusterRole' \
  '  name: cluster-admin' \
  'subjects:' \
  '  - kind: ServiceAccount' \
  '    name: test' \
  '    namespace: open-cluster-management-managed-serviceaccount' \
  > /tmp/clusterrolebinding-managed-sa-test.yaml

Use clusteradm create work to apply the RBAC to cluster1:

clusteradm create work managed-sa-test-rbac \
  -f /tmp/clusterrolebinding-managed-sa-test.yaml \
  --clusters cluster1 \
  --context "$HUB_CTX"

Check the ManifestWork and the RBAC on the spoke cluster:

kubectl get manifestwork -n cluster1 --context "$HUB_CTX"
kubectl wait --for=condition=Applied \
  manifestwork/managed-sa-test-rbac \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=60s
kubectl get clusterrolebinding managed-sa-test --context "$C1_CTX"

Access the spoke API with clusteradm proxy

At this point, use clusteradm proxy kubectl from the hub side to access the cluster1 API:

clusteradm proxy kubectl \
  --context "$HUB_CTX" \
  --cluster=cluster1 \
  --sa=test \
  --args="get nodes"

If the nodes in cluster1 are returned, hub-to-spoke API access through cluster-proxy and managed-serviceaccount is working.

Create the same ManagedServiceAccount and RBAC for cluster2:

printf '%s\n' \
  'apiVersion: authentication.open-cluster-management.io/v1beta1' \
  'kind: ManagedServiceAccount' \
  'metadata:' \
  '  name: test' \
  '  namespace: cluster2' \
  'spec:' \
  '  rotation: {}' | \
  kubectl apply --context "$HUB_CTX" -f -

clusteradm create work managed-sa-test-rbac \
  -f /tmp/clusterrolebinding-managed-sa-test.yaml \
  --clusters cluster2 \
  --context "$HUB_CTX"

kubectl wait --for=condition=Applied \
  manifestwork/managed-sa-test-rbac \
  -n cluster2 \
  --context "$HUB_CTX" \
  --timeout=60s

clusteradm proxy kubectl \
  --context "$HUB_CTX" \
  --cluster=cluster2 \
  --sa=test \
  --args="get nodes"

Enable the ClusterProfile feature gate

Now we get to the headline feature: turning on the Cluster Inventory API.

ClusterProfile is handled by the hub-side registration controller. In the OCM repository, the feature gate name is ClusterProfile. We configure the registration feature gate on ClusterManager.

First, check the existing feature gates:

kubectl get clustermanager cluster-manager \
  --context "$HUB_CTX" \
  -o jsonpath='{range .spec.registrationConfiguration.featureGates[*]}{.feature}={.mode}{"\n"}{end}'

Now enable the gate, keeping any entries the previous command reported:

kubectl patch clustermanager.operator.open-cluster-management.io cluster-manager \
  --context "$HUB_CTX" \
  --type=merge \
  -p '{"spec":{"registrationConfiguration":{"featureGates":[{"feature":"ResourceCleanup","mode":"Enable"},{"feature":"ClusterProfile","mode":"Enable"}]}}}'

In the local-up.sh verification environment, ResourceCleanup was already configured, so the patch keeps it and adds ClusterProfile. Note that the featureGates array is replaced wholesale by this patch. If your environment explicitly configures additional feature gates, keep those entries in the same array.
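
If you prefer not to hand-maintain that array, you can derive the patch from the live object instead. A sketch, assuming jq is installed; the demo below runs on a local JSON snippet so no cluster is needed:

```shell
# Feature gates as reported by the verification environment (ResourceCleanup only).
current='{"spec":{"registrationConfiguration":{"featureGates":[{"feature":"ResourceCleanup","mode":"Enable"}]}}}'

# Append ClusterProfile while preserving whatever is already configured.
# Against a real hub you would pipe in
#   kubectl get clustermanager cluster-manager -o json
# instead of the sample string.
patch=$(printf '%s' "$current" | \
  jq -c '.spec.registrationConfiguration.featureGates += [{"feature":"ClusterProfile","mode":"Enable"}]')

printf '%s\n' "$patch"
```

The resulting JSON can be passed to kubectl patch --type=merge as the -p argument.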

With local-up.sh, the registration controller is the cluster-manager-registration-controller Deployment in the open-cluster-management-hub namespace. Check the name and wait for rollout:

kubectl get deployment \
  -n open-cluster-management-hub \
  --context "$HUB_CTX"

kubectl rollout status deployment/cluster-manager-registration-controller \
  -n open-cluster-management-hub \
  --context "$HUB_CTX" \
  --timeout=180s

Check the ClusterManager configuration:

kubectl get clustermanager cluster-manager \
  --context "$HUB_CTX" \
  -o jsonpath='{range .spec.registrationConfiguration.featureGates[*]}{.feature}={.mode}{"\n"}{end}'

ClusterProfile=Enable should now appear in the output alongside any other configured gates.
