Introduction
Kubernetes clusters are powerful, and that power comes with risk. A single misconfigured service account can give an attacker access to every secret in your cluster. A developer with overly broad permissions can accidentally delete production workloads. In multi-tenant environments, one team's deployment can interfere with another's.
Role-Based Access Control (RBAC) is how you prevent all of this. RBAC in Kubernetes lets you define exactly who can do what, in which namespaces, down to individual API verbs on specific resource types.
Despite its importance, RBAC is one of the most commonly misconfigured parts of Kubernetes. I regularly see clusters where every service account has cluster-admin privileges, where workloads in different namespaces share a single service account, or where RBAC policies were copied from a blog post without understanding what they grant.
This guide walks through RBAC from the ground up, with real manifests you can apply to your clusters today.
Understanding the RBAC Model
Kubernetes RBAC has four main objects: Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings. The relationship is straightforward:
- A Role defines a set of permissions within a single namespace.
- A ClusterRole defines permissions cluster-wide or across all namespaces.
- A RoleBinding grants a Role to a user, group, or service account within a namespace.
- A ClusterRoleBinding grants a ClusterRole across the entire cluster.
The key mental model: Roles define what can be done. Bindings define who can do it. Namespace scoping determines where it applies.
```yaml
# A Role that allows reading pods and their logs in the "backend" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: backend
  name: pod-reader
rules:
- apiGroups: [""] # core API group
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
```

```yaml
# Bind the pod-reader role to a specific developer
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-backend
  namespace: backend
subjects:
- kind: User
  name: alice@company.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
The `verbs` field controls which operations are allowed. The standard verbs are `get`, `list`, `watch`, `create`, `update`, `patch`, `delete`, and `deletecollection`. Grant only the verbs your users actually need.
ClusterRoles for Cross-Namespace Access
ClusterRoles serve two purposes. First, they define permissions on cluster-scoped resources like nodes, persistent volumes, and namespaces themselves. Second, they can be reused across multiple namespaces via RoleBindings, which avoids duplicating the same Role in every namespace.
```yaml
# ClusterRole for viewing deployments across any namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-viewer
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch"]
```

```yaml
# Grant this ClusterRole only in the "staging" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: staging-deployment-viewer
  namespace: staging
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: deployment-viewer
  apiGroup: rbac.authorization.k8s.io
```
Notice the binding is a RoleBinding (namespace-scoped), not a ClusterRoleBinding. This means the developers group can view deployments only in the staging namespace, even though the ClusterRole itself is cluster-wide. This pattern of ClusterRole plus namespace-scoped RoleBinding is extremely useful for building reusable permission templates.
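For contrast, here is what the cluster-wide variant looks like. A ClusterRoleBinding grants the same ClusterRole in every namespace at once; the `sre-team` group name below is a hypothetical example:

```yaml
# A ClusterRoleBinding grants the ClusterRole everywhere -
# use only when cluster-wide visibility is genuinely required
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: all-ns-deployment-viewer
subjects:
- kind: Group
  name: sre-team # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: deployment-viewer
  apiGroup: rbac.authorization.k8s.io
```

Note that ClusterRoleBindings have no `namespace` field in their metadata; the scope is always the whole cluster.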
Kubernetes ships with several default ClusterRoles. The important ones to know:
| ClusterRole | Permissions | When to Use |
|---|---|---|
| `view` | Read-only access to most resources (excludes Secrets) | Developers who need visibility |
| `edit` | Read/write on most resources, but no RBAC changes | Developers who deploy to their namespace |
| `admin` | Full control within a namespace, including RBAC | Team leads managing their namespace |
| `cluster-admin` | Full control over everything | Platform engineers only |
ServiceAccount Security and Least Privilege
Every pod in Kubernetes runs under a service account. If you do not specify one, it uses the default service account in the namespace. This is a security problem because if any pod in the namespace is compromised, the attacker gets whatever permissions the default service account has.
Create dedicated service accounts for each workload:
```yaml
# Dedicated service account for the payment processor
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-processor
  namespace: backend
automountServiceAccountToken: false # Don't mount unless needed
---
# Role with minimum required permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payment-processor-role
  namespace: backend
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["stripe-api-key", "payment-db-credentials"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["payment-config"]
  verbs: ["get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-processor-binding
  namespace: backend
subjects:
- kind: ServiceAccount
  name: payment-processor
  namespace: backend
roleRef:
  kind: Role
  name: payment-processor-role
  apiGroup: rbac.authorization.k8s.io
---
# Pod spec referencing the service account
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-processor
  namespace: backend
spec:
  selector:
    matchLabels:
      app: payment-processor
  template:
    metadata:
      labels:
        app: payment-processor
    spec:
      serviceAccountName: payment-processor
      automountServiceAccountToken: true # This workload needs API access
      containers:
      - name: processor
        image: company/payment-processor:v2.3.1
```
Notice the `resourceNames` field in the Role. This restricts the service account to reading only the named secrets, not every secret in the namespace, which is critical for sensitive workloads. Be aware that `resourceNames` only constrains verbs that target a single object (`get`, `update`, `patch`, `delete`); `create` and `deletecollection` requests cannot be restricted by name.
For pods that do not need to interact with the Kubernetes API at all (which is most of them), disable the service account token mount:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-frontend
  namespace: frontend
automountServiceAccountToken: false
```
Namespace Isolation Patterns
Namespaces are the primary boundary for RBAC in Kubernetes. A well-designed namespace strategy makes RBAC simpler and more secure.
The pattern I recommend for most organizations:
```
production/
  backend     - Backend services (API, workers)
  frontend    - Frontend applications
  data        - Databases, caches, message queues
  monitoring  - Prometheus, Grafana, alerting
  ingress     - Ingress controllers, load balancers
staging/
  backend
  frontend
  data
team-namespaces/
  team-alpha  - Team Alpha's development sandbox
  team-beta   - Team Beta's development sandbox
```
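If you manage namespaces declaratively, a labeled Namespace manifest gives NetworkPolicies and ResourceQuotas something to select on. The labels below are illustrative, not required by Kubernetes:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha
  labels:
    team: alpha # illustrative label for policy/quota selectors
    environment: development
```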
For each team namespace, apply a standard set of RBAC bindings:
```yaml
# Template: Full access for the team, read-only for everyone else
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-admin
  namespace: team-alpha
subjects:
- kind: Group
  name: team-alpha
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: all-devs-view
  namespace: team-alpha
subjects:
- kind: Group
  name: all-developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```
Combine RBAC with NetworkPolicies and ResourceQuotas for true namespace isolation:
```yaml
# Prevent pods from talking to other namespaces by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: team-alpha
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {} # Allow within namespace
  egress:
  - to:
    - podSelector: {} # Allow within namespace
  - to:
    - namespaceSelector:
        matchLabels:
          # Automatic namespace label (Kubernetes 1.21+); allows DNS resolution
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
---
# Limit resource consumption per namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-alpha-quota
  namespace: team-alpha
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
    services: "10"
```
Auditing and Troubleshooting RBAC
When RBAC is not working as expected, Kubernetes provides tools to debug it.
Use `kubectl auth can-i` to check whether a specific user or service account has a permission:
```shell
# Check if alice can create deployments in the backend namespace
kubectl auth can-i create deployments --namespace=backend --as=alice@company.com

# Check what the payment-processor service account can do
kubectl auth can-i --list --namespace=backend \
  --as=system:serviceaccount:backend:payment-processor

# Check if a group has access (--as is required alongside --as-group;
# any placeholder username works)
kubectl auth can-i delete pods --namespace=production \
  --as-group=developers --as=test
```
To find all bindings for a specific subject, there is no single command, but you can search:
```shell
# Find all RoleBindings referencing a user
kubectl get rolebindings --all-namespaces -o json | \
  jq '.items[] | select(any(.subjects[]?; .name == "alice@company.com")) |
    {namespace: .metadata.namespace, name: .metadata.name, role: .roleRef.name}'

# Find all ClusterRoleBindings referencing a group
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(any(.subjects[]?; .name == "developers")) |
    {name: .metadata.name, role: .roleRef.name}'
```

Using `any()` in the jq filter avoids printing the same binding twice when a subject appears more than once.
Enable audit logging in your API server to track who is doing what. This is essential for security compliance:
```yaml
# Audit policy - log all write operations; log secret access at Metadata
# level only, so secret payloads never end up in the audit log
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
- level: Request
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: ""
    resources: ["*"]
  - group: "apps"
    resources: ["*"]
- level: Metadata
  resources:
  - group: ""
    resources: ["pods", "services"]
```

Rules are evaluated in order, so the Metadata-level rule for secrets must come first; at `RequestResponse` or `Request` level, secret values would be written into the audit log in plaintext.
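To actually enable auditing, point the API server at the policy file. The paths and rotation settings below are examples, not required values:

```
# kube-apiserver flags to enable audit logging
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log
--audit-log-maxage=30     # days to retain old log files
--audit-log-maxbackup=10  # number of rotated files to keep
--audit-log-maxsize=100   # megabytes before rotation
```

On managed platforms (EKS, GKE, AKS), audit logging is configured through the provider's control-plane logging options rather than raw flags.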
Integrating RBAC with Identity Providers
For production clusters, you should not be managing individual user certificates. Integrate Kubernetes RBAC with your existing identity provider using OIDC.
With OIDC, your users authenticate through your IdP (Okta, Azure AD, Google Workspace), and Kubernetes maps their identity and group memberships to RBAC subjects.
```
# kube-apiserver flags for OIDC integration
# (typically set in the cluster config, e.g., EKS, GKE, or kubeadm)
--oidc-issuer-url=https://accounts.google.com
--oidc-client-id=kubernetes-cluster-prod
--oidc-username-claim=email
--oidc-groups-claim=groups
```
On EKS, you map IAM principals to Kubernetes groups with either the aws-auth ConfigMap or EKS access entries (the newer, preferred mechanism). Access entries are AWS API objects rather than in-cluster manifests, so you create them with the AWS CLI, CloudFormation, or Terraform:

```shell
# EKS access entry (preferred over the aws-auth ConfigMap)
aws eks create-access-entry \
  --cluster-name prod-cluster \
  --principal-arn arn:aws:iam::123456789012:role/DevTeamRole \
  --kubernetes-groups developers \
  --type STANDARD
```
Map your IdP groups to Kubernetes groups, then create RoleBindings against those groups. This way, when someone joins or leaves a team in your IdP, their Kubernetes access updates automatically.
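As a concrete sketch of that mapping chain, the binding below grants a hypothetical IdP group full cluster access; the group name must match the value your IdP emits in the groups claim:

```yaml
# Hypothetical IdP group "platform-team" granted cluster-admin;
# membership changes in the IdP take effect without touching Kubernetes
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: platform-team-cluster-admin
subjects:
- kind: Group
  name: platform-team # must match the groups claim from your IdP
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```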
Need Help with Your DevOps?
Kubernetes RBAC is foundational to cluster security, but it is only one piece of the puzzle. At InstaDevOps, we help startups and SMBs design and implement complete Kubernetes security postures, from RBAC and network policies to pod security standards and secret management.
We offer fractional DevOps engineering starting at $2,999/month with no long-term contracts. Book a free 15-minute call to discuss your Kubernetes security needs: https://calendly.com/instadevops/15min