If you work with multiple Kubernetes clusters daily, you know the pain. Production, staging, development, that random test cluster someone spun up last month — they all live in your kubeconfig file, and switching between them feels like navigating a maze blindfolded.
I got tired of it. So I built kubecfg.
What is kubecfg?
kubecfg is a CLI tool that simplifies kubeconfig management. It lets you:
- Add new cluster configs with custom names
- List all your contexts at a glance
- Switch contexts with optional namespace selection
- Manage namespaces interactively
- Rename contexts to something memorable
- Remove contexts you no longer need
- Merge multiple kubeconfig files
The killer feature? Interactive pickers with fuzzy selection. No more copy-pasting long context names.
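Under the hood, fuzzy selection boils down to a matcher function that promptui-style pickers accept as a search callback. Here's a minimal sketch of such a matcher — a case-insensitive subsequence check. Note this is an illustration of the technique, not kubecfg's actual matcher, which may differ:

```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// fuzzyMatch reports whether every rune of input appears in item,
// in order, ignoring case -- e.g. "peks" matches "production-eks".
// This is the shape of callback an interactive picker like promptui
// can use to filter items as you type.
func fuzzyMatch(input, item string) bool {
	input = strings.ToLower(input)
	item = strings.ToLower(item)
	for _, r := range input {
		i := strings.IndexRune(item, r)
		if i < 0 {
			return false
		}
		item = item[i+utf8.RuneLen(r):]
	}
	return true
}

func main() {
	contexts := []string{"minikube", "production-eks", "staging-gke", "dev-local"}
	for _, c := range contexts {
		if fuzzyMatch("peks", c) {
			fmt.Println(c) // only production-eks matches
		}
	}
}
```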
Quick Demo
Switching Context with Namespace
kubecfg use -n
That's it. One command. It shows you an interactive context picker, then asks which namespace you want. Both selections happen in one flow.
Select context
minikube
▸ production-eks
staging-gke
dev-local
✓ production-eks
Select namespace
default
kube-system
▸ backend-services
frontend
monitoring
✓ backend-services
Switched to context 'production-eks' with namespace 'backend-services'
Or if you just want to switch context without the namespace prompt:
kubecfg use production-eks
Need a specific namespace directly?
kubecfg use production-eks -n backend-services
The -n flag is smart. Use it without a value for interactive selection, or pass a namespace directly.
Installation
Homebrew (macOS)
brew tap kadirbelkuyu/tap
brew install kadirbelkuyu/tap/kubecfg
Go Install
go install github.com/kadirbelkuyu/kubecfg@latest
From Source
git clone https://github.com/kadirbelkuyu/kubecfg.git
cd kubecfg
go build -o kubecfg .
sudo mv kubecfg /usr/local/bin/
Commands in Detail
Adding a New Cluster
When you download a kubeconfig from AWS EKS, GCP GKE, or Azure AKS, the context names are usually terrible. Something like arn:aws:eks:us-east-1:123456789:cluster/my-cluster.
With kubecfg, you can import it with a sensible name:
kubecfg add ~/Downloads/eks-config.yaml --name production-eks
The original file stays untouched. The context gets added to your main kubeconfig with the name you chose.
Listing All Contexts
kubecfg list
Output:
CURRENT  NAME            CLUSTER         SERVER                           NAMESPACE
*        production-eks  production-eks  https://xxx.eks.amazonaws.com    backend
         staging-gke     staging-gke     https://xxx.gke.googleapis.com   -
         dev-local       minikube        https://192.168.49.2:8443        default
Clean, tabular output. The asterisk shows your current context. Namespace shows what's configured, or - if using default.
Interactive Context Switching
kubecfg use
No arguments needed. An interactive picker appears. Use arrow keys to navigate, Enter to select.
Select context
minikube
▸ production-eks
staging-gke
✓ production-eks
Switched to context 'production-eks'
Namespace Management
Switch namespace for your current context:
kubecfg ns
Interactive picker pulls namespaces directly from the cluster:
Select namespace
default
kube-system
▸ backend-services
monitoring
✓ backend-services
Namespace set to 'backend-services'
Or set it directly:
kubecfg ns kube-system
Check current namespace:
kubecfg ns current
# Output: backend-services
Renaming Contexts
That auto-generated context name bugging you?
kubecfg rename arn:aws:eks:us-east-1:123456789:cluster/prod production
Clean and simple.
Removing Contexts
kubecfg remove old-cluster
You'll get a confirmation prompt:
Remove context 'old-cluster'? [y/N]:
Skip the prompt with --force:
kubecfg remove old-cluster --force
Merging Configs
Got kubeconfig files scattered everywhere?
kubecfg merge config1.yaml config2.yaml config3.yaml -o merged.yaml
All contexts, clusters, and users get combined into one file. Duplicates are handled gracefully.
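The core of a merge like this is deduplication by entry name across the input files. Here's a minimal sketch using a generic entry type; "first occurrence wins" is my assumption about how kubecfg resolves duplicates — it may instead warn or rename:

```go
package main

import "fmt"

// Entry stands in for any named kubeconfig item (context, cluster, user).
type Entry struct {
	Name   string
	Source string
}

// mergeEntries combines entry lists from several configs, keeping the
// first occurrence of each name. "First wins" is an assumption here;
// the real tool might prompt or rename on conflict instead.
func mergeEntries(lists ...[]Entry) []Entry {
	seen := make(map[string]bool)
	var out []Entry
	for _, list := range lists {
		for _, e := range list {
			if seen[e.Name] {
				continue // duplicate name from a later file: skip
			}
			seen[e.Name] = true
			out = append(out, e)
		}
	}
	return out
}

func main() {
	a := []Entry{{"prod", "config1.yaml"}, {"dev", "config1.yaml"}}
	b := []Entry{{"prod", "config2.yaml"}, {"staging", "config2.yaml"}}
	for _, e := range mergeEntries(a, b) {
		fmt.Println(e.Name) // prod, dev, staging
	}
}
```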
Architecture Deep Dive
I built kubecfg with Clean Architecture principles. Not because it's trendy, but because I wanted the codebase to be maintainable and testable.
kubecfg/
├── cmd/ # CLI commands (Cobra)
│ ├── root.go
│ ├── add.go
│ ├── list.go
│ ├── use.go
│ ├── ns.go
│ ├── remove.go
│ ├── rename.go
│ └── merge.go
├── internal/
│ ├── domain/ # Core business entities
│ │ ├── kubeconfig.go
│ │ ├── repository.go
│ │ └── errors.go
│ ├── application/ # Business logic
│ │ └── service.go
│ └── infrastructure/ # External implementations
│ ├── file_repository.go
│ └── kubernetes_client.go
└── main.go
Domain Layer
Pure Go structs representing kubeconfig structure:
type KubeConfig struct {
	APIVersion     string         `yaml:"apiVersion"`
	Kind           string         `yaml:"kind"`
	CurrentContext string         `yaml:"current-context"`
	Clusters       []ClusterEntry `yaml:"clusters"`
	Contexts       []ContextEntry `yaml:"contexts"`
	Users          []UserEntry    `yaml:"users"`
}
No dependencies on external packages. Just the core data model.
Application Layer
Business logic lives here. The Service struct handles all operations:
type Service struct {
repo domain.Repository
}
func (s *Service) UseContext(path, contextName, namespace string) error {
config, err := s.repo.Load(path)
if err != nil {
return err
}
_, idx := config.FindContext(contextName)
if idx < 0 {
return domain.ErrContextNotFound
}
config.CurrentContext = contextName
if namespace != "" {
config.Contexts[idx].Context.Namespace = namespace
}
return s.repo.Save(path, config)
}
The service doesn't know how configs are stored or loaded. It works with the repository interface.
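The post doesn't show the `Repository` interface itself, but its shape can be inferred from the two call sites above (`Load` and `Save`). Here's that inferred interface plus a throwaway in-memory implementation — the kind of fake that makes the service testable without touching `~/.kube/config`:

```go
package main

import "fmt"

type KubeConfig struct {
	CurrentContext string
}

// Repository is inferred from the service code's call sites; the post
// only shows Load and Save being used, so exact signatures are assumed.
type Repository interface {
	Load(path string) (*KubeConfig, error)
	Save(path string, config *KubeConfig) error
}

// memoryRepository is an in-memory fake for unit tests. Load returns a
// clone so callers can't mutate stored state except through Save.
type memoryRepository struct {
	configs map[string]*KubeConfig
}

func (m *memoryRepository) Load(path string) (*KubeConfig, error) {
	c, ok := m.configs[path]
	if !ok {
		return nil, fmt.Errorf("kubeconfig file not found: %s", path)
	}
	clone := *c
	return &clone, nil
}

func (m *memoryRepository) Save(path string, config *KubeConfig) error {
	m.configs[path] = config
	return nil
}

func main() {
	var repo Repository = &memoryRepository{configs: map[string]*KubeConfig{
		"~/.kube/config": {CurrentContext: "minikube"},
	}}
	c, _ := repo.Load("~/.kube/config")
	c.CurrentContext = "production-eks"
	repo.Save("~/.kube/config", c)
	c2, _ := repo.Load("~/.kube/config")
	fmt.Println(c2.CurrentContext) // production-eks
}
```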
Infrastructure Layer
Concrete implementations for file I/O and Kubernetes API calls:
type FileRepository struct{}
func (r *FileRepository) Load(path string) (*domain.KubeConfig, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, err
}
var config domain.KubeConfig
if err := yaml.Unmarshal(data, &config); err != nil {
return nil, domain.ErrInvalidConfig
}
return &config, nil
}
The Kubernetes client fetches namespaces directly from the cluster for the interactive picker:
type KubernetesClient struct {
kubeconfigPath string
}
func (k *KubernetesClient) ListNamespaces() ([]string, error) {
config, err := clientcmd.BuildConfigFromFlags("", k.kubeconfigPath)
if err != nil {
return nil, err
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
namespaceList, err := clientset.CoreV1().Namespaces().List(
context.Background(),
metav1.ListOptions{},
)
if err != nil {
return nil, err
}
namespaces := make([]string, len(namespaceList.Items))
for i, ns := range namespaceList.Items {
namespaces[i] = ns.Name
}
return namespaces, nil
}
The Smart Namespace Flag
One feature I'm particularly proud of is the -n flag behavior on the use command.
The challenge: How do you make a single flag that can:
- Do nothing (just switch context)
- Trigger interactive selection
- Accept a direct value
Cobra's NoOptDefVal made this possible:
func init() {
useCmd.Flags().StringVarP(&namespaceFlag, "namespace", "n", "",
"set namespace (use -n without value for interactive selection)")
useCmd.Flags().Lookup("namespace").NoOptDefVal = "__interactive__"
}
Now the flag works three ways:
| Command | Behavior |
|---|---|
| `kubecfg use prod` | Switch context only |
| `kubecfg use prod -n` | Switch context + interactive namespace |
| `kubecfg use prod -n backend` | Switch context + set namespace |
The resolution logic is clean:
func resolveNamespace(cmd *cobra.Command, contextName string) string {
if !cmd.Flags().Changed("namespace") {
return ""
}
if namespaceFlag == "__interactive__" {
return selectNamespaceForContext(contextName)
}
return namespaceFlag
}
Error Handling
I defined domain-specific errors that are meaningful to users:
var (
ErrConfigNotFound = errors.New("kubeconfig file not found")
ErrContextNotFound = errors.New("context not found")
ErrContextExists = errors.New("context already exists")
ErrInvalidConfig = errors.New("invalid kubeconfig format")
ErrNoCurrentContext = errors.New("no current context set")
)
When something goes wrong, you get a clear message instead of a stack trace.
Backup System
Before any modification, kubecfg creates a timestamped backup:
~/.kube/config.backup.20241208-143022
Accidentally removed the wrong context? Your backup is right there.
Tech Stack
- Go 1.24 - Fast compilation, single binary output
- Cobra - Industry standard CLI framework
- promptui - Beautiful interactive prompts
- client-go - Official Kubernetes client library
- yaml.v3 - YAML parsing with proper type handling
Why I Built This
I've been working with Kubernetes for years. The tooling around kubeconfig management always felt like an afterthought. You either use kubectl's verbose commands or install multiple tools for context and namespace management.
I wanted:
- One tool that handles both context and namespace
- Interactive pickers that actually pull from the cluster
- Clean output that's easy to read
- Safe operations with backups and confirmations
- Extensible codebase that I can maintain
kubecfg is the result. It's the tool I wish I had when I started working with multiple clusters.
What's Next
A few things on my roadmap:
- Shell completions - Bash, Zsh, Fish
- Config validation - Detect broken references
- Context groups - Organize contexts by environment
- Export command - Extract a single context to a file
Try It Out
go install github.com/kadirbelkuyu/kubecfg@latest
kubecfg use -n
The code is open source. Star the repo if you find it useful, and feel free to open issues or PRs.
GitHub: github.com/kadirbelkuyu/kubecfg
Final Thoughts
Building developer tools is satisfying in a way that application development sometimes isn’t. You’re solving real problems you face daily. Every feature comes from actual pain points.
If you manage multiple Kubernetes clusters, give kubecfg a try. And if you've built similar tools to scratch your own itch, I'd love to hear about them.
Follow me for more Kubernetes and Go content. If you found this useful, a clap or share helps others discover the tool.