
daniel jeong

Posted on • Originally published at manoit.co.kr


Kubernetes 1.33 Octarine: Key Features Explained — From In-Place Pod Resize to User Namespaces

We analyze the 5 most impactful features among Kubernetes 1.33 Octarine's 64 new features for production environments. From In-Place Pod Resize's Beta graduation to User Namespaces' security enhancements, we've compiled the changes every operations engineer must know.

1. Kubernetes 1.33 Octarine — Release Overview

Released in April 2025, Kubernetes 1.33 brings one of the most feature-rich updates to date under the codename "Octarine: The Color of Magic," a nod to Terry Pratchett's Discworld. Looking at the release statistics:

  • 64 new features total: One of the largest counts in a single K8s release
  • 18 graduated to Stable: Ready for immediate production deployment
  • 20 entering Beta: Expected to reach GA within 1-2 releases
  • 24 new Alpha features: Core components of the long-term roadmap
  • 3 APIs removed: End of legacy support

This indicates accelerating development velocity in the Kubernetes community: version 1.32 shipped around 44 enhancements, so 1.33 exceeds it by nearly 50%.

2. In-Place Pod Resize — Finally Beta Graduation

In-Place Pod Resize, one of the most awaited features, has graduated from Alpha to Beta and is now enabled by default. Pods can now dynamically adjust CPU and memory resources without restarting.

2.1 Core Functionality

# Change pod resources without restart (Kubernetes 1.33+)
kubectl patch pod my-app --subresource resize --type merge -p '{
  "spec": {
    "containers": [{
      "name": "app",
      "resources": {
        "requests": { "cpu": "500m", "memory": "512Mi" },
        "limits": { "cpu": "1", "memory": "1Gi" }
      }
    }]
  }
}'

# Verify the changes
kubectl get pod my-app -o jsonpath='{.spec.containers[0].resources}'
{"requests":{"cpu":"500m","memory":"512Mi"},"limits":{"cpu":"1","memory":"1Gi"}}
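Containers can also declare, per resource, how a resize should be applied via the resizePolicy field introduced by the same KEP-1287 work. The pod name and image below are placeholders; a minimal sketch:

```yaml
# Sketch: per-resource resize behavior (Kubernetes 1.33+, KEP-1287).
# NotRequired applies the change in place; RestartContainer restarts the
# container to apply it (useful for apps like the JVM that size their
# heap at startup and cannot absorb a memory change while running).
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
    - name: app
      image: myapp:latest   # placeholder image
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired      # resize CPU without a restart
        - resourceName: memory
          restartPolicy: RestartContainer # restart the container on memory change
      resources:
        requests: { cpu: "250m", memory: "256Mi" }
        limits:   { cpu: "500m", memory: "512Mi" }
```

With this policy, a CPU-only patch via the resize subresource takes effect live, while a memory patch triggers a container (not pod) restart.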

2.2 Why It Matters — Three Real-World Scenarios

Scenario 1: Zero-Downtime Scaling for Java Applications

Java applications have long JVM startup times, making restart costs very high. When cold starts exceed 30 seconds, you can now increase memory resources directly instead of restarting pods when facing OOMKilled situations. In-Place Resize allows immediate memory expansion without downtime.

Scenario 2: Zero-Downtime VPA (Vertical Pod Autoscaler) Operations

Previously, when VPA calculated optimal resources, it deleted pods and recreated them. With In-Place Resize integration, VPA can perform automatic vertical scaling without downtime. This is especially effective for services with high traffic volatility.
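As a sketch of what this integration looks like: the update mode name "InPlaceOrRecreate" below is an assumption based on recent autoscaler releases — the VPA side of this integration is still maturing, so verify the mode name and feature gates against your VPA version before relying on it.

```yaml
# Sketch: a VPA that prefers in-place resizes, recreating pods only when
# an in-place update cannot be applied. Target names are placeholders.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "InPlaceOrRecreate"  # assumed mode name; check your VPA release notes
  resourcePolicy:
    containerPolicies:
      - containerName: app
        minAllowed: { cpu: "100m", memory: "128Mi" }
        maxAllowed: { cpu: "2",    memory: "4Gi" }
```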

Scenario 3: Flexible Resource Management for AI Inference Workloads

LLM inference workloads see CPU and memory demand vary dramatically with token length and batch size. In-Place Resize lets you adjust those resources dynamically to match traffic patterns — though note that GPU resources themselves cannot yet be resized (see the limitations below).

2.3 Limitations and Caveats

⚠️ Important Limitations: In-Place Resize does not support all resource types. Currently only CPU and memory can be resized; GPU, ephemeral storage, and network bandwidth are not yet supported. If a resize cannot be applied, the result is surfaced in the Pod's status (the status.resize field and, in 1.33, the PodResizePending/PodResizeInProgress conditions), so monitoring is essential. The container runtime must also support in-place resource updates — recent containerd (1.6+) and CRI-O releases do, but verify your runtime version before relying on the feature.

3. User Namespaces — Container Security Enhanced

User Namespaces support is enabled by default in Kubernetes 1.33; pods opt in by setting hostUsers: false. The feature maps the root user (UID 0) inside a container to an unprivileged user on the host, significantly reducing the blast radius of container escape attacks.

3.1 How It Works

Previously, root (UID 0) inside a container could also run as root on the host, making the entire host vulnerable through kernel exploits leading to container escape. With User Namespaces:

# Enable User Namespaces in Pod spec (Kubernetes 1.33+)
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  hostUsers: false  # Enable User Namespaces
  containers:
    - name: app
      image: myapp:latest
      securityContext:
        runAsUser: 0  # Appears as root inside container

# Result:
# Inside container: UID 0 (appears to have root privileges)
# On host: UID 100000+ (mapped to unprivileged user)
# → Even if container escapes, host is only vulnerable as unprivileged user

3.2 Security Benefits

| Attack Vector | Without User Namespaces | With User Namespaces |
| --- | --- | --- |
| Container Escape | Host root privileges obtained | Runs only as an unprivileged UID (100000+) |
| Device File Access | Direct manipulation of /dev possible | Blocked (no permissions) |
| Other Pods' Volume Access | Accessible with root privileges | Inaccessible (permission mismatch) |
| Host Process Control | Can send signals (as root) | No permissions |

3.3 Migration Considerations

⚠️ Caution: User Namespaces can cause permission problems with host-mounted volumes. With hostUsers: false, UID 1000 inside the container maps to a high UID on the host (for example 101000), so the process can no longer access files owned by regular users on the host. If you use hostPath volumes, you must reconfigure file permissions accordingly.
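To see what mapping your cluster actually assigned, you can read /proc/self/uid_map from inside a user-namespaced pod. A quick diagnostic sketch (pod name and image are placeholders):

```yaml
# Sketch: print the container-to-host UID mapping assigned by the kubelet.
# /proc/self/uid_map lines read: <container-UID> <host-UID> <range>,
# e.g. "0 100000 65536" means container UID 0 maps to host UID 100000.
apiVersion: v1
kind: Pod
metadata:
  name: uidmap-check
spec:
  hostUsers: false
  restartPolicy: Never
  containers:
    - name: check
      image: busybox:1.36
      command: ["cat", "/proc/self/uid_map"]
```

Running kubectl logs uidmap-check then shows the mapping, and you can plan hostPath permissions around the host-side UIDs it reports.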

4. Job Success Policy GA — Flexible Batch Workload Completion

Job Success Policy has graduated to GA (General Availability). This feature allows flexible definition of "success" for batch workloads.

4.1 Problems with Previous Approach

Previously, a Job completed only when all of its pods succeeded. For example, in distributed training with 4 parallel pods:

  • 1 reader pod: must complete (loads data)
  • 3 worker pods: must complete (train the model)
  • Result: the Job succeeds only after all 4 finish

In reality, though, the whole task is done as soon as the reader pod succeeds. Job Success Policy lets you specify exactly that.

4.2 Practical Usage Example

# Job Success Policy example: distributed training (reader success only needed)
apiVersion: batch/v1
kind: Job
metadata:
  name: distributed-training-job
spec:
  completionMode: Indexed  # Assign index to each pod
  completions: 4           # Total 4 pods
  parallelism: 4           # Run 4 concurrently

  successPolicy:
    rules:
      - succeededIndexes: "0"  # Success condition: index 0 (reader)
        succeededCount: 1      # 1 pod success required

  template:
    spec:
      containers:
        - name: training
          image: training:latest
          env:
            - name: POD_INDEX
              valueFrom:
                fieldRef:
                  fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
      restartPolicy: Never

# Result:
# - Pod 0 (reader): complete → entire Job completes immediately
# - Pods 1, 2, 3 (workers): can continue running—doesn't matter

5. Pod Generation Tracking

Kubernetes 1.33 adds generation tracking for Pods (alpha): the metadata.generation value now increments whenever mutable fields in the Pod spec are updated, and status.observedGeneration reports which generation the kubelet has acted on.

# Monitor Pod generation
kubectl get pod my-app -o jsonpath='{.metadata.generation}'
1

# After resource change
kubectl patch pod my-app --subresource resize --type merge -p '{...}'

# Generation increases
kubectl get pod my-app -o jsonpath='{.metadata.generation}'
2

# Check status
kubectl get pod my-app -o jsonpath='{.status.observedGeneration}'
2

# When observedGeneration equals metadata.generation, changes are applied

6. Operational Implementation Guide: Pre-Upgrade Checklist

| Item | What to Check | Impact | Mitigation Strategy |
| --- | --- | --- | --- |
| In-Place Resize | Conflicts with existing VPA configuration | Medium | Use a recent VPA release; configure priorities |
| User Namespaces | Volume permissions when hostUsers: false | High | Test in staging first; adjust permissions |
| Job Success Policy | Completion conditions of existing batch jobs | Low | Keep existing logic; add policies as needed |
| Pod Generation | Add generation to monitoring dashboards | Low | Add Prometheus metrics |
| Deprecated APIs | Removal of flowcontrol.apiserver.k8s.io/v1beta3 | High | Convert manifests with kubectl convert |

6.1 Step-by-Step Upgrade Checklist

# Step 1: Test in staging environment
kubectl --kubeconfig=staging apply -f deployment.yaml
kubectl --kubeconfig=staging wait --for=condition=available --timeout=300s deployment/test-app

# Step 2: Check for deprecated APIs
kubectl api-versions | grep flowcontrol   # v1beta3 should no longer be listed
kubectl get flowschemas                   # resources should be served via flowcontrol.apiserver.k8s.io/v1

# Step 3: Convert manifests that use deprecated API versions (requires the kubectl-convert plugin)
kubectl convert -f old-cronjob.yaml --output-version batch/v1

# Step 4: Test In-Place Resize
kubectl patch pod test-pod --subresource resize --type merge -p '{"spec":{"containers":[{"name":"app","resources":{"requests":{"memory":"1Gi"}}}]}}'
kubectl get pod test-pod -o jsonpath='{.metadata.generation}'

# Step 5: Test User Namespaces
kubectl apply -f secure-pod.yaml

7. Performance and Security Impact Analysis

| Feature | Performance Impact | Security Improvement | Operational Complexity |
| --- | --- | --- | --- |
| In-Place Resize | + Eliminates restart downtime | Neutral | Increased (VPA configuration tuning) |
| User Namespaces | - ~5% overhead (UID mapping cost) | +++ Greatly enhanced (escape mitigation) | Increased (complex permission setup) |
| Job Success Policy | Neutral | Neutral | Reduced (flexible policies) |

💡 Pro Tip: Run version 1.33 in staging for at least 3 weeks before upgrading. In-Place Resize and User Namespaces in particular can change how existing applications behave, so thorough testing is critical. kubectl convert can automatically rewrite manifests that use deprecated APIs, so prepare your migration tooling in advance.

8. Conclusion

Kubernetes 1.33 Octarine raises Kubernetes maturity to a new level through 64 new features. In-Place Pod Resize eliminates downtime for long-running workloads, while User Namespaces fundamentally reduces container escape attack risks. Adopt these features progressively according to your organization's readiness, but prioritize enabling User Namespaces in security-critical environments.

This article was written with AI assistance. For more cloud-native engineering insights, visit the ManoIT Tech Blog.
