
Octarine finally made Kubernetes feel sane. Now here’s what I want in 2.0

How Kubernetes 1.22 made my cluster survivable, and what I’m hoping for in 2.0

The Octarine release (Kubernetes 1.22) dropped, and for once it didn’t feel like playing Russian roulette with my cluster. Instead, it felt like Gandalf walked in, waved a staff, and said: “your CRDs shall not break.”

If you’ve ever patched Kubernetes mid-project, you know the vibe: it’s like modding Skyrim with 200 plugins. You pray it won’t crash, but deep down you know it will. Octarine? It actually worked. Clean upgrade. Less chaos. My YAML-induced PTSD eased up a little.

TLDR: Octarine isn’t just another minor bump. It’s a legit shift that made Kubernetes feel less like a Dark Souls boss fight and more like a game with guardrails. In this piece I’ll break down:

  • What Octarine actually fixed (with receipts)
  • Why I think it’s a big deal for stability, security, and sanity
  • The stuff that’s still broken and rage-inducing
  • What Kubernetes 2.0 should look like if it ever lands

Think of this as one dev’s take, over coffee or on Discord, about what’s magical in 1.22 and what I’m desperately hoping for in 2.0.

Octarine in plain English

Let’s cut the hype: what did Kubernetes 1.22 “Octarine” actually do? It wasn’t just another version bump with a cute codename. This one quietly fixed some of the pain points that have haunted every poor soul who’s ever tried to kubectl apply at 3 a.m.

1. CRDs finally grew up
Custom Resource Definitions (CRDs) went from “experimental toy” to “stable adult feature.” If you’ve built operators or custom controllers, you know CRDs used to feel like duct-taping YAML to hope. One typo, and suddenly your cluster had opinions about existence. In 1.22 the story finally settled: the old v1beta1 CRD API was removed for good, leaving only the stable, GA (general availability) v1 API. Translation: fewer nightmares, more reliable APIs. Official receipts → Kubernetes 1.22 release notes.
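
If it’s been a while since you wrote one, here’s roughly what a CRD looks like on the stable v1 API. This is just a sketch; the group and kind names are invented for illustration.

# a minimal CRD on the stable apiextensions.k8s.io/v1 API
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # must be <plural>.<group>
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:             # v1 requires a structural schema, no more schema-less YAML soup
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer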

2. Server-side apply
This one is underrated. Instead of your kubectl client mashing config files together like a chaotic merge, the cluster itself now keeps track of fields and owners. End result: no more accidental overwrites when three people apply changes to the same resource. It’s like Git blame for YAML but with fewer arguments in Slack.
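
You can actually see that ownership tracking on the object itself. A quick sketch (the deployment and file names are placeholders):

# apply with server-side apply and a named field manager
kubectl apply --server-side --field-manager=ci-pipeline -f deploy.yaml
# see who owns which fields (managedFields are hidden by default)
kubectl get deployment web -o yaml --show-managed-fields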

3. Security tightening
Admission webhooks, RBAC refinements, and a bunch of deprecations (goodbye PodSecurityPolicy) moved the ecosystem toward saner defaults. If you’ve ever spent hours trying to figure out why your pod can’t pull secrets, 1.22 at least made that less of a black box.
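
More often than not, the “why can’t my pod read secrets” fix ends up being a properly scoped Role plus RoleBinding rather than yet another wildcard ClusterRole. A rough sketch, with placeholder names throughout:

# allow reading secrets in one namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: my-app
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
# bind it to the service account the pod actually runs as
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reads-secrets
  namespace: my-app
subjects:
  - kind: ServiceAccount
    name: my-app-sa
    namespace: my-app
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io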

When I first read the GitHub changelog, my reaction wasn’t “wow, shiny new features!” It was more: “oh thank god, maybe I won’t break prod by touching YAML again.”

For context, here’s what I mean:

# old way, pre-server-side apply
# two people apply configs, last one wins
spec:
  replicas: 3

# new way, server-side apply (1.22+)
# cluster tracks field ownership, no silent stomping
spec:
  replicas: 3

Tiny change? Sure. But if you’ve ever lost state in production because a teammate’s kubectl apply nuked your fields, you know why this matters.

So yeah, Octarine isn’t magic in the flashy sense. It’s magic in the thank-you-for-not-wrecking-my-weekend sense.


The 3S framework: stability, security, sanity

When people ask me why Kubernetes 1.22 “Octarine” felt different, I boil it down to three S’s. It’s not that the release gave us flashy new toys; it gave us fewer reasons to swear at our terminals.

Stability

Before 1.22, CRDs were like that experimental mod in your Minecraft server: fun until it corrupts the world. Now they’re stable. That means when you extend Kubernetes with custom APIs, they don’t collapse under load like a Jenga tower.

In practice? Upgrades don’t feel like Russian roulette anymore. The API guarantees actually mean something, so your operators don’t suddenly fall apart because someone decided to tweak an alpha field. Proof: Kubernetes 1.22 notes → CRDs are GA.

Security

PodSecurityPolicy is finally on its way out (deprecated since 1.21, with removal slated for 1.25), which at first freaked me out until I realized it was the right call. PSPs were like that roommate who “helps out” by breaking more things than they fix. The new PodSecurity admission controller (and RBAC refinements) actually make sense. Less mystery meat in the permission system, more predictable enforcement.

Not perfect, but at least now I can onboard junior devs without explaining a giant “don’t touch this” YAML file.
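
The new model is refreshingly boring: slap a label on the namespace and the PodSecurity admission enforces one of the built-in profiles. The namespace name below is a placeholder:

# enforce the "restricted" Pod Security Standard, warn on anything looser than baseline
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: baseline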

Sanity

This one’s personal: server-side apply. It’s the difference between rage-quitting and getting to bed at a normal hour.

Before 1.22:
Two devs edit the same resource, both kubectl apply, last one overwrites silently. Congrats, you’ve just deleted your teammate’s config.

After 1.22:
The cluster itself tracks field ownership. If someone tries to stomp over your changes, you actually see a conflict instead of silently losing work. It’s Git for YAML, except it actually works.

Example receipt:

# pre-1.22 apply chaos
kubectl apply -f replicas.yaml
# teammate overwrites, no warnings
# server-side apply
kubectl apply --server-side -f replicas.yaml
# conflict detected → tells you who owns the field

First time I used this? The conflict popped, and instead of swearing, I smiled. It was the first time in years an upgrade felt like Kubernetes was on my side.
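
And when the overwrite is actually what you want, you say so explicitly instead of doing it by accident:

# take ownership of the conflicting fields on purpose
kubectl apply --server-side --force-conflicts -f replicas.yaml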

What still sucks in 2025

Okay, let’s not pretend Octarine solved everything. Kubernetes is still the most powerful tool in the room and also the one most likely to kick you in the shins when you’re just trying to deploy a web app.

Ingress hell

Every time someone says “Ingress is easy,” I want to hand them my logs from the last time I tried to wire NGINX, cert-manager, and DNS together. It’s like a Dark Souls boss fight: you go in confident, get wrecked in three moves, and then spend the next week watching tutorials on YouTube.

Octarine didn’t fix this. Ingress is still a maze of annotations and YAML where one misplaced dash can break SSL.
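
For the record, here’s roughly what the maze looks like once it finally works. This sketch assumes an nginx ingress controller plus a cert-manager ClusterIssuer you’ve named letsencrypt-prod; the host and service names are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes cert-manager is installed
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls                  # cert-manager creates this secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80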

YAML: the lifestyle tax

YAML isn’t just markup; it’s a lifestyle. You don’t use YAML, you commit your soul to it. One space too many and Kubernetes looks at you like you just spoke Klingon.

Even with CRDs being more stable, we’re still managing massive YAML forests. My configs/ directory looks like someone rage-saved a dozen JSON files in Notepad, then converted them at gunpoint.

Debugging pods is still cursed

Logs are better than they were five years ago, but debugging a failing pod still feels like getting tech support from an NPC in an RPG.

  • kubectl logs → blank.
  • kubectl describe → cryptic wall of text.
  • Google the error → half the answers are from 2018 and deprecated.
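
My usual CrashLoopBackOff ritual, for whatever it’s worth (the pod name is a placeholder):

# logs from the container that just died, not the fresh restart
kubectl logs web-7d4b9c4f8-abcde --previous
# the Events section at the bottom is usually where the real clue hides
kubectl describe pod web-7d4b9c4f8-abcde
# recent events for the namespace, newest last
kubectl get events --sort-by=.metadata.creationTimestamp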

I watched a friend try to debug a CrashLoopBackOff last month. Thirty minutes in, he just closed his laptop and said, “I’m going back to Docker Compose.” Honestly? Hard to blame him.

Community receipts

I’m not alone here; check this Reddit thread or the HN discussion. The vibe is the same: Kubernetes is indispensable, but also exhausting.

So yeah, Octarine gave us stability, security, and sanity. But Kubernetes still has a way of making new devs feel like they’re installing Arch Linux blindfolded.


The LNP framework: less YAML, native observability, plug-and-play scaling

If Octarine gave us stability, security, and sanity, then Kubernetes 2.0 needs to go after the big three that are still breaking our spirits: less YAML, native observability, and plug-and-play scaling. Call it the LNP framework.

Less YAML

We joke that YAML is a lifestyle tax, but it’s also a serious barrier to entry. Declarative configs are great in theory, but in practice we’re managing hundreds of lines of indentation-sensitive text that can silently break prod if you fat-finger a space.

Imagine if Kubernetes moved toward a schema-first approach with stricter validation and saner defaults. Something like:

# today’s YAML
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

vs.

# a saner, schema-first future
replicas: 3
updateStrategy: rolling

Kubernetes 2.0 doesn’t have to kill YAML, but it needs to stop making us feel like indent monks.

Native observability

Right now, you don’t just get Kubernetes. You get Kubernetes plus:

  • Prometheus for metrics
  • Grafana for dashboards
  • Loki for logs
  • Jaeger for tracing

It’s like buying a console and then being told: “oh, the controller is sold separately, and also you need three adapters.”

Why can’t the control plane come with batteries included? Even a baseline metrics + tracing package would save thousands of hours of duct-taping observability together.
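
Today, “batteries included” means you wire them in yourself. This is the usual starting point, assuming you’re using Helm and the prometheus-community charts:

# the DIY observability starter kit
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace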

Plug-and-play scaling

Autoscaling in Kubernetes is technically “built-in.” Realistically? You need metrics-server, Prometheus, custom configs, and a blood sacrifice under a full moon.

Scaling should be as simple as:

kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=60

… and have it actually work out of the box with sane thresholds. No 3-hour detour into tuning configs.
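
For contrast, here’s what “built-in” autoscaling actually takes today: metrics-server has to be running already, and anything beyond the one-liner means writing an autoscaling/v2 object like this (the deployment name is a placeholder):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # does nothing unless metrics-server is installed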

Decision table candy

Here’s where we are vs. where we could be:

  • Config: today it’s hundreds of lines of indentation-sensitive YAML; in 2.0, schema-first configs with strict validation and sane defaults.
  • Observability: today you bolt on Prometheus, Grafana, Loki, and Jaeger; in 2.0, baseline metrics and tracing shipped with the control plane.
  • Scaling: today it’s metrics-server plus custom configs plus luck; in 2.0, one command with sensible thresholds that just works.

This is the stuff that will make or break Kubernetes 2.0. Because honestly? If it still feels like an endless YAML puzzle, people are going to drift toward simpler tools.

My wishlist as a dev

Kubernetes 2.0 wishlists are floating all over Reddit and Hacker News, but here’s mine: the stuff that would actually make me excited instead of just exhausted.

1. Native secrets management that doesn’t feel cursed

Right now, storing secrets in Kubernetes feels like leaving your house key under the doormat and calling it “encryption.” Sure, you can glue in HashiCorp Vault or SOPS, but that means running another stack inside your stack.

If 2.0 gave us a built-in vault with sane defaults, I’d sleep better. Secrets are literally the last thing I want to duct-tape.
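
For context on why it feels cursed: out of the box a Secret is just base64-encoded data sitting in etcd, unless you’ve separately set up encryption at rest or an external store. Placeholder names below:

# create a secret the built-in way
kubectl create secret generic db-creds --from-literal=password='hunter2'
# ...and read it straight back, because base64 is not encryption
kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d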

2. Autoscaling that just works

Horizontal Pod Autoscaler (HPA) is like that unreliable party member in an RPG: sometimes clutch, sometimes AFK. If you don’t wire up metrics correctly, or if you try to scale on something beyond CPU, it feels like black magic.

I want scaling that’s as dead simple as:

kubectl scale web --auto

…and it just knows what to do. No Prometheus, no extra CRDs, no blog post rabbit hole at 1 a.m.
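
For comparison, scaling on something beyond CPU today means swapping the metrics block of the HPA sketch from earlier for something like this, and it only resolves if you’ve also bolted on a custom-metrics adapter such as prometheus-adapter (the metric name here is made up):

# requires a custom-metrics adapter; the metric name is hypothetical
metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"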

3. Local dev that doesn’t feel like speedrunning Elden Ring

Running Kubernetes locally is still painful. Minikube, kind, Docker Desktop: they all kinda work, but never without yak-shaving. One time I lost half a day because my local cluster wouldn’t pull an image that was literally sitting in Docker on my machine.

Why can’t we have a batteries-included local story? Something that makes k8s up as easy as docker-compose up.
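
The image-pull facepalm usually comes down to the local cluster keeping its own image store, separate from your Docker daemon. The workarounds exist, they’re just one more thing to know (the image name is a placeholder):

# build locally, then copy the image into the kind cluster’s containerd
docker build -t myapp:dev .
kind load docker-image myapp:dev
# minikube’s equivalent
minikube image load myapp:dev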

Receipts: not just me

Mat Duggan nailed this in his “What Would Kubernetes 2.0 Look Like?” post: we don’t need new toys, we need boring stability. Better defaults, better developer experience, less yak shaving.

That’s it. I don’t want Kubernetes to feel like a lifestyle choice. I just want to ship code without sacrificing a goat to the YAML gods.

Community expectations vs. reality

It’s not just me dreaming of a Kubernetes 2.0. The community has been chewing on this for a while, and the vibe is… complicated.

On Reddit, you’ll find two camps:

  • Camp 1: “Kubernetes doesn’t need a 2.0, it just needs better defaults.”
  • Camp 2: “Burn it down and start fresh. Kubernetes is too complex to fix.”

The Hacker News thread is even more blunt. Half the comments are people saying “Kubernetes is amazing but I wouldn’t recommend it to a new team,” and the other half are basically: “Why are we all YAML engineers now?”

And then you have essays like Mat Duggan’s piece, where he argues that we don’t really need “Kubernetes 2.0” at all; we just need Kubernetes to become less punishing. Fewer bolt-ons. More boring reliability.

Here’s the tension: Kubernetes 1.22 Octarine proved the platform can evolve without breaking everything. But the dev community is already daydreaming about what a “revolutionary” 2.0 would look like, while also dreading the pain of another migration.

It’s giving big Half-Life 3 energy: rumored forever, speculated about endlessly, but maybe never shipping. And if it does? Expectations are so high it’ll either be legendary or instantly disappointing.

The truth is probably less dramatic: Kubernetes will keep iterating. We’ll keep complaining. And somewhere in between, someone will finally ship a “Kubernetes, but actually fun” alternative and steal hearts the way Docker once did.

Conclusion

Kubernetes Octarine didn’t give us fireworks; it gave us something better: a little peace of mind. Stable CRDs, server-side apply, and saner security defaults aren’t flashy features, but they’re the kind of fixes that stop clusters from wrecking weekends. For once, an upgrade felt like Kubernetes was helping instead of trolling us.

But here’s the thing: if Kubernetes is going to survive another decade, it can’t just keep sanding down rough edges. The platform needs to start lowering the barrier to entry. YAML can’t be the hill we die on. Observability shouldn’t require three PhDs and five sidecars. Autoscaling should work without voodoo.

Octarine proved Kubernetes isn’t dead weight; it’s still the endgame boss of infrastructure. But Kubernetes 2.0 will decide if it stays a boss fight everyone trains for, or middleware we forget exists because it “just works.”

That’s the crossroads we’re at. Personally? I’m still rooting for Kubernetes. But if the next big leap doesn’t put developers first, I won’t be shocked if the community drifts to simpler, opinionated platforms that trade power for sanity.

So I’ll end with a question to you: what would make you actually excited for Kubernetes 2.0? Because the answer might decide whether we keep fighting this boss, or uninstall and find a new game.
