Karpenter is an open-source Kubernetes node autoscaler, originally built by AWS, that provisions right-sized compute in seconds. Unlike Cluster Autoscaler, Karpenter doesn't need node groups: it launches EC2 instances directly, sized to the requirements of pending pods.
It's free, open source, and now a CNCF project; the core is cloud-agnostic, with providers for AWS and Azure among others.
## Why Use Karpenter?
- No node groups — provisions exactly the right instance type for pending pods
- Sub-minute scaling — typically launches nodes in well under a minute, much faster than Cluster Autoscaler's node-group approach
- Cost optimization — automatically uses Spot instances, ARM, and right-sized nodes
- Consolidation — removes underutilized nodes and repacks workloads
- Drift detection — automatically replaces nodes whose configuration has drifted from the desired spec (for example, after an AMI update)
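Consolidation is at heart a bin-packing problem: if the pods on several underutilized nodes would fit on fewer nodes, Karpenter drains the extras and removes them. A toy first-fit-decreasing sketch of that idea (simplified; the real controller also simulates scheduling constraints, instance prices, and disruption budgets):

```python
def consolidate(pod_cpus, node_capacity):
    """First-fit-decreasing bin packing: how few nodes can hold these pods?

    A toy model of the consolidation decision, not Karpenter's actual
    algorithm.
    """
    nodes = []  # free CPU remaining on each simulated node
    for cpu in sorted(pod_cpus, reverse=True):
        for i, free in enumerate(nodes):
            if free >= cpu:
                nodes[i] -= cpu  # pod fits on an existing node
                break
        else:
            nodes.append(node_capacity - cpu)  # open a new node
    return len(nodes)

# Six 1-vCPU pods spread across three 4-vCPU nodes repack onto two nodes
print(consolidate([1, 1, 1, 1, 1, 1], 4))  # → 2
```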
## Quick Setup

### 1. Install Karpenter
```bash
# The chart moved from the deprecated charts.karpenter.sh repo to an OCI registry
helm install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter --create-namespace \
  --set settings.clusterName=my-cluster \
  --set "settings.clusterEndpoint=$(aws eks describe-cluster --name my-cluster --query 'cluster.endpoint' --output text)"
```

This assumes the IAM roles and other prerequisites from the Karpenter getting-started guide are already in place.
### 2. Create a NodePool
```bash
kubectl apply -f - <<EOF
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: [amd64, arm64]
        - key: karpenter.sh/capacity-type
          operator: In
          values: [spot, on-demand]
        - key: node.kubernetes.io/instance-type
          operator: In
          values: [m5.large, m5.xlarge, m6g.large, c5.large]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"
    memory: 200Gi
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 30s
EOF
```
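Each requirement in the NodePool above is a predicate over well-known node labels. A minimal sketch of how the `In` operator narrows the instance offerings Karpenter may launch (the `matches` helper is hypothetical, not Karpenter's actual scheduler):

```python
def matches(offering, requirements):
    """Return True if an instance offering satisfies every 'In' requirement."""
    for req in requirements:
        if req["operator"] == "In" and offering.get(req["key"]) not in req["values"]:
            return False
    return True

# The same requirements as the NodePool above
requirements = [
    {"key": "kubernetes.io/arch", "operator": "In", "values": ["amd64", "arm64"]},
    {"key": "karpenter.sh/capacity-type", "operator": "In", "values": ["spot", "on-demand"]},
    {"key": "node.kubernetes.io/instance-type", "operator": "In",
     "values": ["m5.large", "m5.xlarge", "m6g.large", "c5.large"]},
]

spot_m6g = {"kubernetes.io/arch": "arm64",
            "karpenter.sh/capacity-type": "spot",
            "node.kubernetes.io/instance-type": "m6g.large"}
print(matches(spot_m6g, requirements))  # → True

t3 = dict(spot_m6g, **{"node.kubernetes.io/instance-type": "t3.large"})
print(matches(t3, requirements))  # → False (t3.large isn't in the allowed list)
```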
### 3. Check NodePool Status
```bash
# Via kubectl
kubectl get nodepools
kubectl get nodepool default -o json | jq '{cpu: .status.resources.cpu, memory: .status.resources.memory, nodes: .status.resources["nodes"]}'

# Get provisioned nodes (instance type and zone live in well-known labels)
kubectl get nodeclaims
kubectl get nodeclaims -o json | jq '.items[] | {name: .metadata.name, type: .metadata.labels["node.kubernetes.io/instance-type"], zone: .metadata.labels["topology.kubernetes.io/zone"], capacity: .status.capacity}'
```
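To summarize a fleet rather than list it, the same JSON can be aggregated by instance type and capacity type. A sketch over a made-up `kubectl get nodeclaims -o json` payload (trimmed to the fields used; real objects carry many more):

```python
import json
from collections import Counter

# Hypothetical, trimmed NodeClaim list for illustration
raw = json.dumps({"items": [
    {"metadata": {"name": "default-abc12", "labels": {
        "node.kubernetes.io/instance-type": "m5.large",
        "karpenter.sh/capacity-type": "spot"}}},
    {"metadata": {"name": "default-def34", "labels": {
        "node.kubernetes.io/instance-type": "m5.large",
        "karpenter.sh/capacity-type": "spot"}}},
    {"metadata": {"name": "default-ghi56", "labels": {
        "node.kubernetes.io/instance-type": "c5.large",
        "karpenter.sh/capacity-type": "on-demand"}}},
]})

# Count nodes per (instance type, capacity type) pair
counts = Counter(
    (nc["metadata"]["labels"]["node.kubernetes.io/instance-type"],
     nc["metadata"]["labels"]["karpenter.sh/capacity-type"])
    for nc in json.loads(raw)["items"])
for (itype, ctype), n in counts.items():
    print(f"{itype} ({ctype}): {n}")
```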
### 4. Trigger Scaling
```bash
# Deploy a workload — Karpenter auto-provisions nodes
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 50
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
EOF
```
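A rough sense of what this triggers: 50 replicas each requesting 1 vCPU means about 50 vCPU of pending capacity. A back-of-the-envelope sketch of the node counts per allowed instance type (ignores daemonsets and kube-reserved overhead, so real counts run slightly higher):

```python
import math

replicas, cpu_per_pod = 50, 1.0
pending_cpu = replicas * cpu_per_pod  # 50 vCPU of pending pods

# vCPU counts for the instance types allowed by the NodePool above
vcpus = {"m5.large": 2, "m5.xlarge": 4, "m6g.large": 2, "c5.large": 2}
for itype, cpus in vcpus.items():
    print(f"{itype}: ~{math.ceil(pending_cpu / cpus)} nodes")
```

In practice Karpenter mixes types to minimize cost, so the actual fleet is usually a blend rather than a single instance type.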
```bash
# Watch Karpenter provision nodes
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter -f | grep 'launched'
```
## Python Example
```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# List NodePools
nodepools = api.list_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodepools")
for np in nodepools["items"]:
    name = np["metadata"]["name"]
    limits = np["spec"].get("limits", {})
    print(f"NodePool: {name} | CPU limit: {limits.get('cpu', 'unlimited')} | Memory: {limits.get('memory', 'unlimited')}")

# List NodeClaims (one per provisioned node); instance type and zone
# are recorded in well-known labels on the NodeClaim
nodeclaims = api.list_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodeclaims")
for nc in nodeclaims["items"]:
    labels = nc["metadata"].get("labels", {})
    print(f"Node: {nc['metadata']['name']} | "
          f"Type: {labels.get('node.kubernetes.io/instance-type', 'pending')} | "
          f"Zone: {labels.get('topology.kubernetes.io/zone', 'pending')}")
```
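The same `CustomObjectsApi` can also update a NodePool, for example to raise its resource limits. A sketch that only builds the patch body (the `nodepool_limits_patch` helper is illustrative; the commented-out call needs a live cluster):

```python
def nodepool_limits_patch(cpu, memory):
    """Build a merge-patch body raising a NodePool's spec.limits."""
    return {"spec": {"limits": {"cpu": str(cpu), "memory": memory}}}

patch = nodepool_limits_patch(200, "400Gi")
print(patch)

# To apply against a live cluster (api is a client.CustomObjectsApi):
# api.patch_cluster_custom_object(
#     group="karpenter.sh", version="v1", plural="nodepools",
#     name="default", body=patch)
```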
## Key Resources
| Resource | Description |
|---|---|
| NodePool | Defines constraints for node provisioning |
| EC2NodeClass | AWS-specific config (AMI, subnets, security groups) |
| NodeClaim | Represents a provisioned node |