Kahiro Okina
Getting Started with the Cluster Inventory API on OCM (Part 1: ClusterProfile)

This is Part 1 of a two-part series.
Part 2 (coming soon): Connecting to spoke clusters from a controller using multicluster-runtime, driven by ClusterProfile.

What this article is about

The Cluster Inventory API (multicluster.x-k8s.io) is driven by SIG-Multicluster and centered on the ClusterProfile resource. It only delivers value when something produces those ClusterProfiles. That something is a cluster manager. Today, the production-ready open-source option is Open Cluster Management (OCM), whose registration controller can act as a ClusterProfile cluster manager behind a feature gate.

This article shows how to set that up end-to-end on a local kind environment:

  • An OCM hub-spoke setup with three kind clusters.
  • The ClusterProfile feature gate enabled on the hub.
  • cluster-proxy wired in so ClusterProfile.status.accessProviders carries real, usable connection info.
  • ClusterProperty from the spokes flowing into ClusterProfile.status.properties on the hub.

By the end, you'll have a working multicluster.x-k8s.io/v1alpha1 ClusterProfile inventory that any Cluster Inventory API consumer can read. That is exactly what Part 2 will plug into via multicluster-runtime.

Overview

The setup looks like this:

  • Three kind clusters: hub, cluster1, cluster2.
  • The OCM hub manages cluster1 and cluster2 as managed clusters.
  • The managed clusters belong to a ManagedClusterSet named sandbox-fleet.
  • The cluster-proxy and managed-serviceaccount addons are installed on the spokes.
  • ClusterProfile resources are created in the cluster-inventory namespace.
  • ClusterProperty values from the spokes flow into ClusterProfile.status.properties on the hub.

The synchronization path for ClusterProperty is shown below. The boxes name the actual OCM components for reference, but the only thing you need to take away is the direction: a property set on a spoke ends up in ClusterProfile.status.properties on the hub.


flowchart LR
  subgraph spoke["cluster1 / cluster2"]
    property["ClusterProperty<br/>about.k8s.io/v1alpha1"]
    agent["klusterlet-registration-agent"]
  end

  subgraph hub["hub"]
    managedCluster["ManagedCluster<br/>status.clusterClaims"]
    profileController["cluster-manager-registration-controller<br/>ClusterProfileStatusController"]
    clusterProfile["cluster-inventory/ClusterProfile<br/>status.properties"]
  end

  property --> agent
  agent --> managedCluster
  managedCluster --> profileController
  profileController --> clusterProfile

In plain terms: spoke ClusterProperty flows to hub ManagedCluster.status.clusterClaims, then to hub ClusterProfile.status.properties. That last hop is what makes OCM a Cluster Inventory API cluster manager. Properties you set on a spoke become inventory data any consumer can read on the hub through a vendor-neutral API.
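To make the shape of each hop concrete, here is one illustrative property (the same dummy AWS region used later in this article) as it appears at each stage; the field layouts match what the verification commands later in the article read:

```yaml
# 1. Spoke: a cluster-scoped ClusterProperty (about.k8s.io/v1alpha1)
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  name: aws.region.example.com
spec:
  value: ap-northeast-1

# 2. Hub: surfaced on the ManagedCluster
#    status:
#      clusterClaims:
#      - name: aws.region.example.com
#        value: ap-northeast-1

# 3. Hub: mirrored into the inventory entry
#    ClusterProfile status:
#      properties:
#      - name: aws.region.example.com
#        value: ap-northeast-1
```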

The commands in this article were verified with:

kind v0.31.0
clusteradm v1.2.1
helm v3.20.1
kubectl v1.35.3

Prerequisites

The following commands must be available locally.

If clusteradm is not installed:

curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash

Verify the installed tools:

kind version
helm version
kubectl version --client
clusteradm version

clusteradm version also tries to connect to the current kubeconfig context to fetch the server version. At this point, just verify that the client version prints.

This article uses the following Kubernetes context names:

export HUB_CTX=kind-hub
export C1_CTX=kind-cluster1
export C2_CTX=kind-cluster2

Clone the OCM repository

The OCM repository contains a script for creating a local development environment. This article uses solutions/setup-dev-environment/local-up.sh.

git clone https://github.com/open-cluster-management-io/ocm.git
cd ocm

Create the hub / cluster1 / cluster2 kind clusters

local-up.sh runs the following setup steps:

  • Creates the hub, cluster1, and cluster2 kind clusters.
  • Initializes the hub with clusteradm init.
  • Registers the spoke clusters with clusteradm join.
  • Accepts the join requests with clusteradm accept.
./solutions/setup-dev-environment/local-up.sh

After the script completes, check the managed clusters from the hub:

kubectl config use-context kind-hub
kubectl get managedclusters --context "$HUB_CTX"

In this verified environment:

NAME       HUB ACCEPTED   MANAGED CLUSTER URLS                  JOINED   AVAILABLE   AGE
cluster1   true           https://cluster1-control-plane:6443   True     True        2m
cluster2   true           https://cluster2-control-plane:6443   True     True        2m

JOINED and AVAILABLE are True for both spokes. This is the starting point for managing them from the hub.

Install the cluster-proxy addon

We install cluster-proxy for two reasons. The first is that the hub needs to reach spoke APIs. The second, which matters more for this article, is that ClusterProfile.status.accessProviders needs a real connection endpoint that downstream Cluster Inventory API consumers can actually use.

Add the OCM Helm repository:

helm repo add ocm https://open-cluster-management.io/helm-charts/
helm repo update

We create a development fleet named sandbox-fleet as a ManagedClusterSet, then add cluster1 and cluster2 to it. The same fleet is used as the target for addon distribution and ClusterProfile creation.

OCM uses two resources for cluster-set handling:

  • ManagedClusterSet decides which ManagedCluster belongs to the set.
  • ManagedClusterSetBinding makes the set usable from a specific namespace.

sandbox-fleet uses ExclusiveClusterSetLabel. When a ManagedCluster has the label cluster.open-cluster-management.io/clusterset=sandbox-fleet, that cluster belongs to sandbox-fleet.

Note on existing cluster sets. Some environments already have ManagedClusterSet resources named default or global, managed by the OCM DefaultClusterSet feature gate. global in particular uses an empty label selector and selects all managed clusters, so it overlaps with sandbox-fleet. This article consistently targets sandbox-fleet for addons and ClusterProfile to keep things unambiguous.

printf '%s\n' \
  'apiVersion: cluster.open-cluster-management.io/v1beta2' \
  'kind: ManagedClusterSet' \
  'metadata:' \
  '  name: sandbox-fleet' \
  'spec:' \
  '  clusterSelector:' \
  '    selectorType: ExclusiveClusterSetLabel' | \
  kubectl apply --context "$HUB_CTX" -f -

kubectl label managedcluster cluster1 \
  cluster.open-cluster-management.io/clusterset=sandbox-fleet \
  --overwrite \
  --context "$HUB_CTX"

kubectl label managedcluster cluster2 \
  cluster.open-cluster-management.io/clusterset=sandbox-fleet \
  --overwrite \
  --context "$HUB_CTX"

cluster-proxy v0.10.0 introduced support for configuring ClusterProfile access providers. That is the feature we need here, and it is why we pin the chart to v0.10.0: Part 2 relies on ClusterProfile for dynamic access.

Why v0.10.0 needs a workaround

The latest ocm/cluster-proxy chart in the Helm repository is v0.10.0, but that chart has a schema mismatch: it renders spec.proxyAgent.additionalValues in ManagedProxyConfiguration, while the v0.10.0 CRD schema does not declare that field. Running helm install as-is fails with:

failed to create typed patch object (... ManagedProxyConfiguration): .spec.proxyAgent.additionalValues: field not declared in schema

The cluster-proxy main branch removed this field from the chart output in "Fix chart error" (#272). Once the next chart release ships, this workaround can be dropped.

Helm v3 can pass an executable path to --post-renderer, but Helm v4 changed post-renderers to plugins and no longer accepts a raw path directly. To stay compatible with both, this article uses helm template | kubectl apply and strips the offending field with perl.
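To see what the filter actually strips, you can run it against a minimal stand-in for the rendered ManagedProxyConfiguration (the snippet below is illustrative, not the full chart output):

```shell
# Feed a minimal stand-in manifest through the same perl filter used below;
# the additionalValues block (and only that block) is removed.
cat <<'EOF' | perl -0pe 's/\n    additionalValues:\n      enableImpersonation: "[^"]+"//g'
spec:
  proxyAgent:
    image: example/proxy-agent
    additionalValues:
      enableImpersonation: "true"
EOF
```

The output keeps the spec.proxyAgent.image lines and drops the field the v0.10.0 CRD schema does not declare.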

Install cluster-proxy on the hub. Enabling ClusterProfileAccessProvider and userServer.enabled makes cluster-proxy connection information appear in ClusterProfile.status.accessProviders. That is the bridge between the inventory and real spoke API access:

kubectl create namespace open-cluster-management-addon \
  --context "$HUB_CTX" \
  --dry-run=client \
  -o yaml | kubectl apply --context "$HUB_CTX" -f -

printf '%s\n' \
  'apiVersion: cluster.open-cluster-management.io/v1beta2' \
  'kind: ManagedClusterSetBinding' \
  'metadata:' \
  '  name: sandbox-fleet' \
  '  namespace: open-cluster-management-addon' \
  'spec:' \
  '  clusterSet: sandbox-fleet' | \
  kubectl apply --context "$HUB_CTX" -f -

printf '%s\n' \
  'apiVersion: cluster.open-cluster-management.io/v1beta1' \
  'kind: Placement' \
  'metadata:' \
  '  name: cluster-proxy-placement' \
  '  namespace: open-cluster-management-addon' \
  'spec:' \
  '  clusterSets:' \
  '    - sandbox-fleet' | \
  kubectl apply --context "$HUB_CTX" -f -

ManagedProxyConfiguration is a CRD provided by the cluster-proxy chart. Apply the CRDs first and wait for them to become Established. If the CRD and CR are sent through the same kubectl apply stream, Kubernetes discovery may not see the new CRD in time and can return no matches for kind "ManagedProxyConfiguration".

helm show crds ocm/cluster-proxy --version 0.10.0 | \
  kubectl apply --context "$HUB_CTX" -f -

kubectl wait --for=condition=Established \
  crd/managedproxyconfigurations.proxy.open-cluster-management.io \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=condition=Established \
  crd/managedproxyserviceresolvers.proxy.open-cluster-management.io \
  --context "$HUB_CTX" \
  --timeout=120s

helm template cluster-proxy ocm/cluster-proxy \
  -n open-cluster-management-addon \
  --version 0.10.0 \
  --set installByPlacement.placementName=cluster-proxy-placement \
  --set installByPlacement.placementNamespace=open-cluster-management-addon \
  --set featureGates.clusterProfileAccessProvider=true \
  --set userServer.enabled=true | \
  perl -0pe 's/\n    additionalValues:\n      enableImpersonation: "[^"]+"//g' | \
  kubectl apply --context "$HUB_CTX" -f -

Wait for ManagedProxyConfiguration/cluster-proxy and the certificate Secrets used by clusteradm proxy:

kubectl wait --for=create \
  managedproxyconfiguration/cluster-proxy \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=create \
  secret/proxy-client \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=create \
  secret/proxy-server \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=create \
  secret/agent-server \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=120s

Check the addon status:

kubectl rollout status deployment/cluster-proxy-addon-manager \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl rollout status deployment/cluster-proxy \
  -n open-cluster-management-addon \
  --context "$HUB_CTX" \
  --timeout=180s

clusteradm get addon cluster-proxy --context "$HUB_CTX"
kubectl get managedclusteraddon -A --context "$HUB_CTX" | grep cluster-proxy

kubectl wait --for=condition=Available \
  managedclusteraddon/cluster-proxy \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=condition=Available \
  managedclusteraddon/cluster-proxy \
  -n cluster2 \
  --context "$HUB_CTX" \
  --timeout=180s

Install the managed-serviceaccount addon

Install the managed-serviceaccount addon so clusteradm proxy kubectl can access the spoke clusters with a managed service account.

The managed-serviceaccount chart also defaults to the global cluster set and does not expose a Helm value to change it. We set agentInstallAll=false to disable automatic distribution, then explicitly create ManagedClusterAddOn resources for the target clusters:

helm install \
  --kube-context "$HUB_CTX" \
  -n open-cluster-management-managed-serviceaccount \
  --create-namespace \
  managed-serviceaccount \
  ocm/managed-serviceaccount \
  --set agentInstallAll=false

printf '%s\n' \
  'apiVersion: addon.open-cluster-management.io/v1alpha1' \
  'kind: ManagedClusterAddOn' \
  'metadata:' \
  '  name: managed-serviceaccount' \
  '  namespace: cluster1' \
  'spec:' \
  '  installNamespace: open-cluster-management-managed-serviceaccount' | \
  kubectl apply --context "$HUB_CTX" -f -

printf '%s\n' \
  'apiVersion: addon.open-cluster-management.io/v1alpha1' \
  'kind: ManagedClusterAddOn' \
  'metadata:' \
  '  name: managed-serviceaccount' \
  '  namespace: cluster2' \
  'spec:' \
  '  installNamespace: open-cluster-management-managed-serviceaccount' | \
  kubectl apply --context "$HUB_CTX" -f -

This flow creates ManagedClusterAddOn resources directly. Check the clusters in sandbox-fleet and verify the addon targets line up with the fleet:

kubectl get managedclusters \
  --context "$HUB_CTX" \
  -L cluster.open-cluster-management.io/clusterset

Check the addon status:

kubectl rollout status deployment/managed-serviceaccount-addon-manager \
  -n open-cluster-management-managed-serviceaccount \
  --context "$HUB_CTX" \
  --timeout=180s

clusteradm get addon managed-serviceaccount --context "$HUB_CTX"
kubectl get managedclusteraddon -A --context "$HUB_CTX" | grep managed-serviceaccount

kubectl wait --for=condition=Available \
  managedclusteraddon/managed-serviceaccount \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=condition=Available \
  managedclusteraddon/managed-serviceaccount \
  -n cluster2 \
  --context "$HUB_CTX" \
  --timeout=180s

Check the proxy path health:

clusteradm proxy health --context "$HUB_CTX"

Create a ManagedServiceAccount

The next three sections (Create a ManagedServiceAccount, Distribute RBAC to the spoke, Access the spoke API with clusteradm proxy) are a side quest, not Cluster Inventory API itself. They exist to verify that the connection endpoint that will appear in ClusterProfile.status.accessProviders is actually reachable. If you only care about the inventory data and plan to drive access from a controller later, you can skim these and pick up at Enable the ClusterProfile feature gate.

Create a ManagedServiceAccount named test for cluster1:

printf '%s\n' \
  'apiVersion: authentication.open-cluster-management.io/v1beta1' \
  'kind: ManagedServiceAccount' \
  'metadata:' \
  '  name: test' \
  '  namespace: cluster1' \
  'spec:' \
  '  rotation: {}' | \
  kubectl apply --context "$HUB_CTX" -f -

Check that it has been created:

kubectl get managedserviceaccount -n cluster1 --context "$HUB_CTX"

Wait for the hub-side Secret for the managed service account:

kubectl wait --for=create \
  secret/test \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=120s

Check the status conditions and verify that the token Secret has been reported:

kubectl get managedserviceaccount test \
  -n cluster1 \
  --context "$HUB_CTX" \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

Distribute RBAC to the spoke

On the spoke cluster, the ManagedServiceAccount is realized as a regular Kubernetes ServiceAccount in the namespace specified by ManagedClusterAddOn.spec.installNamespace, which is open-cluster-management-managed-serviceaccount:

kubectl get serviceaccount test \
  -n open-cluster-management-managed-serviceaccount \
  --context "$C1_CTX"

This verification grants cluster-admin. In production, grant only the Role or ClusterRole required for the target APIs.

printf '%s\n' \
  'apiVersion: rbac.authorization.k8s.io/v1' \
  'kind: ClusterRoleBinding' \
  'metadata:' \
  '  name: managed-sa-test' \
  'roleRef:' \
  '  apiGroup: rbac.authorization.k8s.io' \
  '  kind: ClusterRole' \
  '  name: cluster-admin' \
  'subjects:' \
  '  - kind: ServiceAccount' \
  '    name: test' \
  '    namespace: open-cluster-management-managed-serviceaccount' \
  > /tmp/clusterrolebinding-managed-sa-test.yaml
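For reference, a tighter alternative to cluster-admin could look like the following. The role name and rules here are illustrative; scope them to whatever your workload actually touches, and reference the role from the ClusterRoleBinding instead of cluster-admin:

```shell
# Illustrative least-privilege ClusterRole: read-only access to nodes.
# "managed-sa-test-read-nodes" is a made-up name for this sketch.
printf '%s\n' \
  'apiVersion: rbac.authorization.k8s.io/v1' \
  'kind: ClusterRole' \
  'metadata:' \
  '  name: managed-sa-test-read-nodes' \
  'rules:' \
  '  - apiGroups: [""]' \
  '    resources: ["nodes"]' \
  '    verbs: ["get", "list", "watch"]'
```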

Use clusteradm create work to apply the RBAC to cluster1:

clusteradm create work managed-sa-test-rbac \
  -f /tmp/clusterrolebinding-managed-sa-test.yaml \
  --clusters cluster1 \
  --context "$HUB_CTX"

Check the ManifestWork and the RBAC on the spoke cluster:

kubectl get manifestwork -n cluster1 --context "$HUB_CTX"
kubectl wait --for=condition=Applied \
  manifestwork/managed-sa-test-rbac \
  -n cluster1 \
  --context "$HUB_CTX" \
  --timeout=60s
kubectl get clusterrolebinding managed-sa-test --context "$C1_CTX"

Access the spoke API with clusteradm proxy

At this point, use clusteradm proxy kubectl from the hub side to access the cluster1 API:

clusteradm proxy kubectl \
  --context "$HUB_CTX" \
  --cluster=cluster1 \
  --sa=test \
  --args="get nodes"

If the nodes in cluster1 are returned, hub-to-spoke API access through cluster-proxy and managed-serviceaccount is working.

Create the same ManagedServiceAccount and RBAC for cluster2:

printf '%s\n' \
  'apiVersion: authentication.open-cluster-management.io/v1beta1' \
  'kind: ManagedServiceAccount' \
  'metadata:' \
  '  name: test' \
  '  namespace: cluster2' \
  'spec:' \
  '  rotation: {}' | \
  kubectl apply --context "$HUB_CTX" -f -

clusteradm create work managed-sa-test-rbac \
  -f /tmp/clusterrolebinding-managed-sa-test.yaml \
  --clusters cluster2 \
  --context "$HUB_CTX"

kubectl wait --for=condition=Applied \
  manifestwork/managed-sa-test-rbac \
  -n cluster2 \
  --context "$HUB_CTX" \
  --timeout=60s

clusteradm proxy kubectl \
  --context "$HUB_CTX" \
  --cluster=cluster2 \
  --sa=test \
  --args="get nodes"

Enable the ClusterProfile feature gate

Now we get to the headline feature: turning on the Cluster Inventory API.

ClusterProfile is handled by the hub-side registration controller. In the OCM repository, the feature gate name is ClusterProfile. We configure the registration feature gate on ClusterManager.

First, check the existing feature gates:

kubectl get clustermanager cluster-manager \
  --context "$HUB_CTX" \
  -o jsonpath='{range .spec.registrationConfiguration.featureGates[*]}{.feature}={.mode}{"\n"}{end}'
Then patch in the ClusterProfile feature gate:
kubectl patch clustermanager.operator.open-cluster-management.io cluster-manager \
  --context "$HUB_CTX" \
  --type=merge \
  -p '{"spec":{"registrationConfiguration":{"featureGates":[{"feature":"ResourceCleanup","mode":"Enable"},{"feature":"ClusterProfile","mode":"Enable"}]}}}'

In the local-up.sh verification environment, ResourceCleanup was already configured, so the patch keeps it and adds ClusterProfile. Note that the featureGates array is replaced wholesale by this patch. If your environment explicitly configures additional feature gates, keep those entries in the same array.

With local-up.sh, the registration controller is the cluster-manager-registration-controller Deployment in the open-cluster-management-hub namespace. Check the name and wait for rollout:

kubectl get deployment \
  -n open-cluster-management-hub \
  --context "$HUB_CTX"

kubectl rollout status deployment/cluster-manager-registration-controller \
  -n open-cluster-management-hub \
  --context "$HUB_CTX" \
  --timeout=180s

Check the ClusterManager configuration:

kubectl get clustermanager cluster-manager \
  --context "$HUB_CTX" \
  -o yaml

Check that the ClusterProfile CRD exists:

sleep 10

kubectl wait --for=condition=Established \
  crd/clusterprofiles.multicluster.x-k8s.io \
  --context "$HUB_CTX" \
  --timeout=120s

Check the API resource:

kubectl api-resources --context "$HUB_CTX" | grep -i clusterprofile

The API group is multicluster.x-k8s.io, the SIG-Multicluster Cluster Inventory API itself. From here on, anything we put into ClusterProfile is consumable by tools that target the upstream API, not just OCM-specific ones.

Create the ManagedClusterSet binding

ClusterProfile resources are created in namespaces that have a ManagedClusterSetBinding. We use the previously created sandbox-fleet cluster set and bind it to a cluster-inventory namespace.

The label cluster.open-cluster-management.io/clusterset=sandbox-fleet puts a cluster into sandbox-fleet. The ManagedClusterSetBinding we create here makes sandbox-fleet referenceable from the cluster-inventory namespace. The ClusterProfile controller watches the binding in that namespace and creates ClusterProfile resources for the target clusters.

First, confirm that the sandbox-fleet cluster set exists and that cluster1 and cluster2 belong to it:

kubectl get managedclusterset sandbox-fleet --context "$HUB_CTX"
kubectl get managedclusters \
  --context "$HUB_CTX" \
  -L cluster.open-cluster-management.io/clusterset

Create the namespace where ClusterProfile resources will be created:

kubectl create namespace cluster-inventory \
  --dry-run=client \
  -o yaml | kubectl apply --context "$HUB_CTX" -f -

Bind the sandbox-fleet cluster set to the cluster-inventory namespace:

printf '%s\n' \
  'apiVersion: cluster.open-cluster-management.io/v1beta2' \
  'kind: ManagedClusterSetBinding' \
  'metadata:' \
  '  name: sandbox-fleet' \
  '  namespace: cluster-inventory' \
  'spec:' \
  '  clusterSet: sandbox-fleet' | \
  kubectl apply --context "$HUB_CTX" -f -

Wait until the binding is Bound:

kubectl wait --for=condition=Bound \
  managedclustersetbinding/sandbox-fleet \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=60s

kubectl get managedclustersetbinding sandbox-fleet \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o yaml

The conditions list should include Bound=True.

Check ClusterProfile

ClusterProfile is a namespaced resource in multicluster.x-k8s.io/v1alpha1. With the previous steps, it is created in the cluster-inventory namespace:

kubectl wait --for=create \
  clusterprofile/cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl wait --for=create \
  clusterprofile/cluster2 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=120s
kubectl get clusterprofiles \
  -n cluster-inventory \
  --context "$HUB_CTX"

ClusterProfile resources for cluster1 and cluster2 have been created:

NAME       AGE
cluster1   1m
cluster2   1m

In this implementation, the created ClusterProfile has spec.clusterManager.name set to open-cluster-management and spec.displayName set to the managed cluster name. Extract those fields with jsonpath:

kubectl get clusterprofile cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o jsonpath='{.spec.clusterManager.name}{"\n"}{.spec.displayName}{"\n"}'

The status synchronizes the Kubernetes version from ManagedCluster, properties derived from cluster claims, and conditions:

kubectl get clusterprofile cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o jsonpath='{.status.version.kubernetes}{"\n"}'

kubectl get clusterprofile cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o jsonpath='{range .status.properties[*]}{.name}={.value}{"\n"}{end}'

kubectl get clusterprofile cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

Create ClusterProperty and reflect it in ClusterProfile

ClusterProfile.status.properties also reflects ClusterProperty resources from spoke clusters. ClusterProperty is a cluster-scoped resource in about.k8s.io/v1alpha1. It is another SIG-Multicluster API, designed for spoke clusters to declare facts about themselves (region, account ID, environment, anything you'd otherwise jam into labels).

This is the punchline of the article: a property you set on the spoke flows all the way to the inventory entry on the hub, with no custom glue code.

First, enable the ClusterProperty feature gate on the spoke-side Klusterlets. Keep the ClusterClaim and AddonManagement feature gates that local-up.sh already enabled, and add ClusterProperty:

kubectl patch klusterlet klusterlet \
  --context "$C1_CTX" \
  --type=merge \
  -p '{"spec":{"registrationConfiguration":{"featureGates":[{"feature":"ClusterClaim","mode":"Enable"},{"feature":"ClusterProperty","mode":"Enable"},{"feature":"AddonManagement","mode":"Enable"}]}}}'

kubectl patch klusterlet klusterlet \
  --context "$C2_CTX" \
  --type=merge \
  -p '{"spec":{"registrationConfiguration":{"featureGates":[{"feature":"ClusterClaim","mode":"Enable"},{"feature":"ClusterProperty","mode":"Enable"},{"feature":"AddonManagement","mode":"Enable"}]}}}'

Wait for the ClusterProperty CRD and for the registration agent rollout:

kubectl wait --for=condition=Established \
  crd/clusterproperties.about.k8s.io \
  --context "$C1_CTX" \
  --timeout=120s

kubectl wait --for=condition=Established \
  crd/clusterproperties.about.k8s.io \
  --context "$C2_CTX" \
  --timeout=120s

kubectl rollout status deployment/klusterlet-registration-agent \
  -n open-cluster-management-agent \
  --context "$C1_CTX" \
  --timeout=180s

kubectl rollout status deployment/klusterlet-registration-agent \
  -n open-cluster-management-agent \
  --context "$C2_CTX" \
  --timeout=180s

Create AWS region and AWS account ID examples as ClusterProperty resources on the spoke clusters. The values are dummy values for verification:

printf '%s\n' \
  'apiVersion: about.k8s.io/v1alpha1' \
  'kind: ClusterProperty' \
  'metadata:' \
  '  name: aws.region.example.com' \
  'spec:' \
  '  value: ap-northeast-1' \
  '---' \
  'apiVersion: about.k8s.io/v1alpha1' \
  'kind: ClusterProperty' \
  'metadata:' \
  '  name: aws.account-id.example.com' \
  'spec:' \
  '  value: "111122223333"' | \
  kubectl apply --context "$C1_CTX" -f -

printf '%s\n' \
  'apiVersion: about.k8s.io/v1alpha1' \
  'kind: ClusterProperty' \
  'metadata:' \
  '  name: aws.region.example.com' \
  'spec:' \
  '  value: us-west-2' \
  '---' \
  'apiVersion: about.k8s.io/v1alpha1' \
  'kind: ClusterProperty' \
  'metadata:' \
  '  name: aws.account-id.example.com' \
  'spec:' \
  '  value: "444455556666"' | \
  kubectl apply --context "$C2_CTX" -f -

Check the resources on the spoke clusters:

kubectl get clusterproperties --context "$C1_CTX"
kubectl get clusterproperties --context "$C2_CTX"

The registration agent synchronizes the spoke-side ClusterProperty resources into ManagedCluster.status.clusterClaims on the hub:

kubectl wait --for=jsonpath='{.status.clusterClaims[?(@.name=="aws.region.example.com")].value}'=ap-northeast-1 \
  managedcluster/cluster1 \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=jsonpath='{.status.clusterClaims[?(@.name=="aws.account-id.example.com")].value}'=111122223333 \
  managedcluster/cluster1 \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=jsonpath='{.status.clusterClaims[?(@.name=="aws.region.example.com")].value}'=us-west-2 \
  managedcluster/cluster2 \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=jsonpath='{.status.clusterClaims[?(@.name=="aws.account-id.example.com")].value}'=444455556666 \
  managedcluster/cluster2 \
  --context "$HUB_CTX" \
  --timeout=180s

Then ManagedCluster.status.clusterClaims is synchronized into ClusterProfile.status.properties:

kubectl wait --for=jsonpath='{.status.properties[?(@.name=="aws.region.example.com")].value}'=ap-northeast-1 \
  clusterprofile/cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=jsonpath='{.status.properties[?(@.name=="aws.account-id.example.com")].value}'=111122223333 \
  clusterprofile/cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=jsonpath='{.status.properties[?(@.name=="aws.region.example.com")].value}'=us-west-2 \
  clusterprofile/cluster2 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=180s

kubectl wait --for=jsonpath='{.status.properties[?(@.name=="aws.account-id.example.com")].value}'=444455556666 \
  clusterprofile/cluster2 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=180s

Print only the AWS-related properties from ClusterProfile:

printf 'cluster1\n'
kubectl get clusterprofile cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o jsonpath='{range .status.properties[*]}{.name}={.value}{"\n"}{end}' | grep '^aws\.'

printf 'cluster2\n'
kubectl get clusterprofile cluster2 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o jsonpath='{range .status.properties[*]}{.name}={.value}{"\n"}{end}' | grep '^aws\.'

In this environment:

cluster1
aws.account-id.example.com=111122223333
aws.region.example.com=ap-northeast-1
cluster2
aws.account-id.example.com=444455556666
aws.region.example.com=us-west-2

Because ClusterProfileAccessProvider is enabled on cluster-proxy v0.10.0, status.accessProviders also contains the cluster-proxy user-server connection information:

kubectl wait --for=jsonpath='{.status.accessProviders[0].name}'=open-cluster-management \
  clusterprofile/cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  --timeout=120s

kubectl get clusterprofile cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o jsonpath='{range .status.accessProviders[*]}{.name}{"\n"}{.cluster.server}{"\n"}{end}'

In this environment:

open-cluster-management
https://cluster-proxy-addon-user.open-cluster-management-addon:9092/cluster1

Check the full ClusterProfile for cluster1:

kubectl get clusterprofile cluster1 \
  -n cluster-inventory \
  --context "$HUB_CTX" \
  -o yaml \
  --show-managed-fields=false

In this environment, the YAML is shown below. The command above prints the full value; here, status.accessProviders[].cluster.certificate-authority-data is replaced with "<base64 CA bundle>" for brevity:

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ClusterProfile
metadata:
  creationTimestamp: "2026-05-05T07:09:27Z"
  generation: 1
  labels:
    multicluster.x-k8s.io/clusterset: sandbox-fleet
    open-cluster-management.io/cluster-name: cluster1
    x-k8s.io/cluster-manager: open-cluster-management
  name: cluster1
  namespace: cluster-inventory
  resourceVersion: "4889"
  uid: 29ec181b-7a42-4642-b957-d7057d756575
spec:
  clusterManager:
    name: open-cluster-management
  displayName: cluster1
status:
  accessProviders:
  - cluster:
      certificate-authority-data: "<base64 CA bundle>"
      extensions:
      - extension:
          clusterName: cluster1
        name: client.authentication.k8s.io/exec
      server: https://cluster-proxy-addon-user.open-cluster-management-addon:9092/cluster1
    name: open-cluster-management
  conditions:
  - lastTransitionTime: "2026-05-05T07:09:27Z"
    message: Managed cluster is available
    reason: ManagedClusterAvailable
    status: "True"
    type: ControlPlaneHealthy
  - lastTransitionTime: "2026-05-05T07:09:27Z"
    message: Managed cluster joined
    reason: ManagedClusterJoined
    status: "True"
    type: Joined
  properties:
  - name: aws.account-id.example.com
    value: "111122223333"
  - name: aws.region.example.com
    value: ap-northeast-1
  version:
    kubernetes: v1.35.0

This YAML tells us several things:

  • metadata.labels["multicluster.x-k8s.io/clusterset"] shows that this ClusterProfile was created from sandbox-fleet.
  • spec.clusterManager.name is open-cluster-management, meaning this ClusterProfile is managed by OCM.
  • status.properties contains the values from the spoke-side ClusterProperty resources.
  • status.accessProviders[0].cluster.server is the cluster-specific endpoint of the cluster-proxy user-server.
  • status.accessProviders[0].cluster.extensions[0].extension.clusterName is the target spoke cluster name.
  • status.conditions reflects the original ManagedCluster Available and Joined conditions.

Everything a Cluster Inventory API consumer needs is in this single resource: identity, properties, kube version, conditions, and a reachable endpoint with its CA bundle. That is the promise of ClusterProfile, and OCM is delivering it end-to-end.
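As a minimal offline sketch of consuming it (the heredoc below is a trimmed copy of the sample dump above; against a live hub you would pipe `kubectl get clusterprofile cluster1 -n cluster-inventory -o yaml` instead), a consumer can pull the endpoint out with nothing but awk:

```shell
# Extract the access-provider server URL from a saved ClusterProfile dump.
# The heredoc is a trimmed copy of the sample YAML above, not live output.
cat <<'EOF' | awk '$1 == "server:" {print $2}'
status:
  accessProviders:
  - cluster:
      server: https://cluster-proxy-addon-user.open-cluster-management-addon:9092/cluster1
    name: open-cluster-management
EOF
```

This prints the cluster-proxy user-server URL for cluster1; Part 2 does the same thing programmatically via multicluster-runtime.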

Compare clusteradm proxy with ClusterProfile

Finally, compare the ClusterProfile inventory view with actual spoke API access:

kubectl get clusterprofiles -n cluster-inventory --context "$HUB_CTX"

clusteradm proxy kubectl \
  --context "$HUB_CTX" \
  --cluster=cluster1 \
  --sa=test \
  --args="get ns"

ClusterProfile is the hub-side inventory view. clusteradm proxy kubectl verifies the spoke API access path that ClusterProfile.status.accessProviders is pointing at.

What's next

We now have a working Cluster Inventory API setup with OCM as the cluster manager:

  • ClusterProfile resources auto-created per managed cluster.
  • Spoke ClusterProperty reflected into status.properties on the hub.
  • Real, reachable spoke endpoints in status.accessProviders.

In Part 2, we'll point multicluster-runtime at this inventory and write a controller that reconciles across cluster1 and cluster2 using nothing but ClusterProfile to discover and reach them. That is where the vendor-neutral inventory really pays off: the controller code will not contain a single OCM-specific import.

Cleanup

Delete the kind clusters:

kind delete cluster --name hub
kind delete cluster --name cluster1
kind delete cluster --name cluster2
