Kubernetes makes it ridiculously easy to deploy workloads. The problem starts when you deploy many of them — and especially when multiple teams share the same cluster.
Without guardrails, a single pod can request unlimited CPU, developers can forget memory limits, one namespace can consume the entire cluster, and suddenly production nodes are choking.
This is exactly why Kubernetes provides two critical tools for resource governance:
LimitRange (Per-Pod Resource Guardrail)
LimitRange exists to control how much CPU and memory each pod or container can request and consume. It prevents oversized and undersized containers.
It also injects defaults when developers forget to set resource requests, which happens more often than anyone admits.
Think of it as:
“Every pod in this namespace must stay within these min, max, and default resource rules.”
ResourceQuota (Namespace Resource Ceiling)
ResourceQuota is different. Instead of focusing on individual pods, it controls the total resources and even object counts inside a namespace.
For example:
- how many pods can exist
- how much total CPU the namespace can consume
- how much memory the namespace can consume
- how many PersistentVolumeClaims (PVCs) can be created
- how many LoadBalancers or GPUs a team can request (see the sketch just below)
This is extremely useful in multi-team and multi-tenant clusters.
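As a rough sketch of that last point, object-count and extended-resource caps can even be created imperatively. Everything here is illustrative: the quota name, the team-a namespace, and the nvidia.com/gpu resource (which assumes the NVIDIA device plugin is installed).

```bash
# Illustrative only: cap a team at 2 LoadBalancer Services and 4 GPUs
kubectl create quota team-object-quota \
  --hard=services.loadbalancers=2,requests.nvidia.com/gpu=4 \
  -n team-a
```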
Analogy That Makes It Click
If the cluster is a building:
ResourceQuota = total electricity allowed for each floor
LimitRange = max electricity allowed for each apartment
LimitRange prevents abuse per pod; ResourceQuota prevents abuse per team.
Both together = a stable shared cluster.
Namespace Example (Practical Project)
Let’s create a namespace called dev.
Step 1: Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```
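Assuming the manifest above is saved as namespace.yaml (the filename is just an example), apply it, or create the namespace imperatively:

```bash
kubectl apply -f namespace.yaml
# equivalent one-liner:
kubectl create namespace dev
```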
Step 2: Apply ResourceQuota
This caps the total resources (and object counts) for the whole namespace:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "4Gi"
    limits.cpu: "4"
    limits.memory: "8Gi"
    pods: "20"
    persistentvolumeclaims: "5"
```
Meaning:
- max 20 pods
- max 4 CPUs in total requests (and 4 CPUs in total limits)
- max 4Gi memory in total requests, 8Gi in total limits
- max 5 PVCs
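To apply and sanity-check it (resourcequota.yaml is an assumed filename), something like:

```bash
kubectl apply -f resourcequota.yaml
# describe lists every hard limit next to current usage (Used vs Hard)
kubectl describe resourcequota dev-quota -n dev
```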
Step 3: Apply LimitRange
This enforces pod-level boundaries:
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
    - type: Container
      default:
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:
        cpu: "200m"
        memory: "256Mi"
      min:
        cpu: "100m"
        memory: "128Mi"
      max:
        cpu: "1"
        memory: "1Gi"
```
Now:
- developers can’t create extreme pod sizes
- defaults apply automatically if they forget
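Same drill here (limitrange.yaml is an assumed filename); describe shows the min, max, and default values that will be enforced at admission time:

```bash
kubectl apply -f limitrange.yaml
kubectl describe limitrange dev-limits -n dev
```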
Step 4: Test with a Deployment
Deploy without specifying resources:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```
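Assuming the manifest is saved as deployment.yaml (again, an example filename):

```bash
kubectl apply -f deployment.yaml
```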
After applying:
```bash
kubectl describe pod -n dev
```
You’ll notice resource requests & limits are injected automatically. That’s LimitRange doing its job.
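If you want just the injected values rather than the full describe output, a jsonpath query along these lines (a sketch; the label selector matches the Deployment above) prints the resources stanza, which should contain the LimitRange defaults: requests of 200m CPU / 256Mi memory and limits of 500m / 512Mi:

```bash
# Print the resources block of the first nginx pod in the dev namespace
kubectl get pod -n dev -l app=nginx \
  -o jsonpath='{.items[0].spec.containers[0].resources}'
```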
Failure Scenario (Quota Saves the Cluster)
If you try to deploy 30 pods:
```
Error: exceeded quota: dev-quota (pods limit reached)
```
Without quotas, one team could take down the whole cluster. With quotas, Kubernetes defends itself.
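One way to reproduce this yourself, sketched against the nginx Deployment above: scale well past the quota and check the ReplicaSet events. With a Deployment, the rejection shows up as FailedCreate events on the ReplicaSet rather than as an error at apply time:

```bash
kubectl scale deployment nginx -n dev --replicas=30
# the Events section shows the "exceeded quota: dev-quota" rejections
kubectl describe rs -n dev -l app=nginx
```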
Where to Use This in Real Companies
Enterprises apply this in:
✔ Platform Engineering
✔ DevSecOps
✔ SRE governance
✔ Multi-tenant SaaS
✔ Shared clusters
✔ Chargeback models
ResourceQuota ensures one tenant can’t starve others.
LimitRange ensures every pod behaves reasonably.
If You Ignore These…
You get:
❌ node pressure
❌ OOM kills
❌ cluster starvation
❌ noisy neighbor issues
❌ random latency spikes
❌ SRE nightmares at 3AM
Using them avoids operational chaos.