If you work with Kubernetes in production, you probably don't have just one cluster. You might have regional replicas, a staging environment, a dev cluster, maybe a sandbox for experiments. Your kubeconfig has a dozen contexts and growing.
And every time you need to check something across clusters, you do this dance:
kubectl config use-context prod-us-east-1
kubectl get pods -n backend
kubectl config use-context prod-eu-west-1
kubectl get pods -n backend
kubectl config use-context staging-us-east-1
kubectl get pods -n backend
# ... you get the idea
Or maybe you've written a shell loop:
for ctx in prod-us-east-1 prod-eu-west-1; do
echo "=== $ctx ==="
kubectl --context "$ctx" get pods -n backend
done
This works until it doesn't: hardcoded cluster names, no parallelism, no error handling, no timeout for that one cluster with flaky networking. Every team ends up with its own version of this script.
A better way
I built kubectl-xctx to solve this properly. It's a kubectl plugin that takes a regex pattern and runs any kubectl command across all matching contexts:
kubectl xctx "prod" get pods -n backend
Output is grouped per context with clear headers:
### Context: prod-us-east-1
NAME READY STATUS RESTARTS AGE
api-server-abc123 1/1 Running 0 3d
worker-def456 1/1 Running 0 3d
### Context: prod-eu-west-1
NAME READY STATUS RESTARTS AGE
api-server-xyz789 1/1 Running 0 3d
worker-uvw012 1/1 Running 0 3d
What it can do
Preview matching contexts before running anything:
kubectl xctx --list "prod"
# prod-us-east-1
# prod-eu-west-1
Run commands in parallel when you don't want to wait for sequential execution:
kubectl xctx --parallel "staging|dev" get nodes
Set a per-context timeout to skip unreachable clusters instead of hanging forever:
kubectl xctx --timeout 10s "." get pods -n kube-system
Fail fast on the first error (useful for apply/delete operations):
kubectl xctx --fail-fast "prod" apply -f deployment.yaml
Suppress or customize headers for scripting:
# Pipe JSON output cleanly
kubectl xctx --header "" "prod" get pods -o json | jq .
# Custom header format
kubectl xctx --header "=== {context} ===" "prod" get pods
Before and after
Check pods across 4 prod clusters
- Before: 4 commands, switching context each time
- After:
kubectl xctx "prod" get pods
Get nodes from staging + dev
- Before: shell loop with hardcoded context names
- After:
kubectl xctx "staging|dev" get nodes
Quick check across all clusters
- Before: custom script, hope it works
- After:
kubectl xctx --parallel --timeout 10s "." get pods
See which contexts match a pattern
- Before:
kubectl config get-contexts | grep prod
- After:
kubectl xctx --list "prod"
Install
Via krew (Kubernetes plugin manager):
kubectl krew index add be0x74a https://github.com/be0x74a/krew-index
kubectl krew install be0x74a/xctx
Or directly from the manifest:
kubectl krew install --manifest-url=https://raw.githubusercontent.com/be0x74a/krew-index/main/plugins/xctx.yaml
Via Homebrew:
brew install be0x74a/tap/kubectl-xctx
From source:
git clone https://github.com/be0x74a/kubectl-xctx
cd kubectl-xctx
go build -o kubectl-xctx .
It's a single Go binary with no dependencies beyond kubectl itself.
How it works
The implementation is straightforward:
- Reads your kubeconfig contexts via kubectl config get-contexts -o name
- Filters them against the regex you provide
- Runs kubectl --context <name> <your args> for each match
- Groups output with labeled headers
In parallel mode, all contexts execute concurrently via goroutines, with results printed in order after all complete.
Try it out
GitHub: be0x74a/kubectl-xctx
If you manage multiple clusters, give it a try. I'd love to hear what workflows it helps with, or what's missing.