Kubernetes (k8s) is awesome. I am a fan. I like it so much I have a k8s-themed license plate on my car, but I cringe every time I see a blog post or tweet pitching it as a solution for a nascent company.
Like microservices (which, strong opinion incoming… you also probably shouldn’t be using, especially for something new. Seriously, stop.), k8s solves a specific set of problems (mostly coordination/abstraction of infrastructure components and deployments, plus a lot of general scaling/self-healing) and comes with significant, usually overlooked costs.
From a sysadmin perspective, k8s is borderline magic. Compared to all the bespoke automation one might have had to build in the past, k8s provides a general purpose infrastructure-as-code path that *just works*. A metaphor of Lego-like bricks that snap together is apt… for the most part.
K8s abstracts away a huge amount of complexity that normally consumes the lives of sysadmins, but the complexity is still there. It’s just gracefully hidden 95% of the time, and the way it bubbles up is binary: problems in k8s are either incredibly easy to solve or incredibly difficult, with not much in between. You’re either building with Lego or troubleshooting polymers at the molecular level.
Deploying a completely new service, configured from ingress to database in an easy-to-read YAML file? – Super simple.
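For flavor, the "easy" end of that spectrum is roughly a sketch like this, assuming an imaginary app called `webapp` (the name, image, and ports are all invented for illustration):

```yaml
# Hypothetical minimal service: a Deployment plus a Service in one file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: example/webapp:1.0.0   # invented image tag
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 8080
```

One `kubectl apply -f` and you have a replicated, load-balanced service, which is exactly the Lego-brick experience described above.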
Understanding the interplay of infra-service network optimizations and failure modes? – Even with tools like service meshes and advanced monitoring/introspection, it’s really difficult.
Cluster security, networking controls, third-party plugins? Now you’re in deep, specific-domain-knowledge land.
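As a taste of that domain-specific territory: even a modest networking control means learning a new API object and its selector semantics. A hedged sketch (all names invented) that restricts database ingress to the API pods only:

```yaml
# Hypothetical NetworkPolicy: only pods labeled app=api may reach
# pods labeled app=db on the Postgres port. Getting selectors,
# policyTypes, and default-deny behavior right is where the
# specific-domain-knowledge starts.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-only-from-api
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 5432
```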
Hosted k8s (EKS, AKS, GKE, etc.) does not solve these problems for you. **Caveat: I know there are some fully-managed k8s providers popping up, but the best of those are basically Platform-as-a-Service (PaaS).** It solves a lot of other problems related to the care and feeding of the k8s control plane, but you’re still left with the complexity that’s inherent to running services within a cluster. Even if you’re a ninth-level Linux witch, there are problems that arise when running clustered infrastructure at scale that are simply *hard*, in a similar (but admittedly less complex) way that neuroscience is hard.
There is a point at which the challenge of this hidden complexity begins to be outweighed by the benefits of k8s, but it’s pretty far down the road – we’re talking many-millions-of-requests-per-day-with-several-tiers/services-and-possibly-geographies territory. Or you’re in a very specific niche that requires complex auto-scaling machine learning fleets, or something similar.
This is not intended as fear mongering. Again, I use k8s every day and think it is awesome, but you need to go into it with eyes wide open, and only after you’ve leaned hard into the constraints of PaaS or more traditional, boring tech that you fully grok. I started using k8s with this perspective (at least I think I did), and there were still surprises along the way. It’s not a panacea. It’s highly unlikely that using k8s is going to save your company. Like most technologies, it will cause as many problems as it solves; you just need a solid understanding and rationale around which set of problems you want, and are capable of, dealing with.
If you’re building a new company or product, troubleshooting k8s is likely not one of the problem sets you should be taking on. Use Heroku or Elastic Beanstalk or whatever else takes care of the undifferentiated heavy lifting for you. You can circle back to k8s when things really start cooking and you’ve got the people and resources to keep things on track.
None of this is to say you shouldn’t learn k8s or play around with minikube in development. Just keep in mind the huge difference between mastering k8s on your local machine and operationalizing it in production.
You could replace “k8s” with pretty much any technology and I think this advice would still apply. If you’re building something new, focus on the things that really move the needle and don’t try to solve architectural problems that you don’t have.
Photo by Frank Eiffert on Unsplash
Top comments (29)
Absolutely disagree, it's really not that hard and it provides a consistent way of doing everything.
Starting with something "simpler" then causes an interruption when you have to switch to something completely different.
The choice should more involve the skills of initial team than the size of company/budget/project.
Also, starting with microservices is much easier than splitting a monolith later.
Now that I've used K8s heavily, I wish I'd gotten into it sooner, but everyone kept incorrectly saying it's overly complex when it really isn't!
"Also, starting with microservices is much easier than splitting a monolith later."
It's all about picking the tradeoffs you want to deal with. Personally, I'll take "splitting up the monolith" over "re-architecting a bunch of incorrectly scoped microservices" any day of the week.
I agree Craig. K8s makes it so much easier for small projects because you don't have to worry about infrastructure as much. It makes CI much easier.
Well, I disagree with you: the infrastructure behind k8s is complicated to deploy and administer, much more so on-premise than in the cloud.
Where is the difference? Where is it easier? When a dev needs to deploy an app or microservice, they don't have to ask the infrastructure team for servers, storage, or anything else... they just create a deployment on the Kubernetes cluster and that's it.
But let me tell you: the infrastructure is still there, with a higher level of complexity.
This is especially true when you over-engineer K8s.
E.g. using it for the DB, persistent disks everywhere, etc. -- all things that could be offloaded to third-party cloud services.
👏 Don't 👏 run 👏 stateful 👏 services 👏 in 👏 k8s. 👏
Well, deploying stateless microservices will not make the infrastructure required to run k8s disappear.
The discussion is about whether k8s requires complex infrastructure...
👏 Don't 👏 run 👏 stateful 👏 services 👏 in 👏 k8s. 👏
I'm new to them, but I feel like I'm about to do that exact thing. What is your solution to it?
Run them outside of k8s in traditional VMs or services like AWS RDS.
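One common way to wire that up, sketched here with an invented hostname, is an `ExternalName` Service (a standard Service type) so in-cluster apps can resolve a plain name that points at the managed database outside the cluster:

```yaml
# Hypothetical: in-cluster apps connect to "postgres:5432", which
# resolves via DNS CNAME to a managed RDS instance outside k8s.
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: mydb.abc123.us-east-1.rds.amazonaws.com  # invented endpoint
```

The state lives in RDS; the cluster only holds a pointer to it, so a cluster rebuild loses nothing.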
That applies to standard tools like databases, not to components specific to your use case, especially if you are building a lot of workers and doing CPU-heavy computation tasks -- i.e., not every app is CRUD-like.
I disagree with this, but not because kubernetes isn't hard.
I built a K8s system for a startup and it was definitely challenging, but once it was done we had minimal downtime; when stuff did go down it self-healed, and (most importantly) I was able to quickly teach our juniors how it worked, what they needed to care about, and what was going on in the background that they didn't need to worry about.
A lot of this was simplified by automating away deploys etc.
Using K8s was a small investment that paid off very quickly, and if I were to go to a startup again, I would definitely use it again.
Agreed. K8s - for all its amazingness - can introduce unnecessary complexity early on. Assuming the eventual need, I'd go for something like docker-compose|swarm early, which provides a clean migration path if the time comes - but easy-peasy till then.
Docker-compose yes (for small scale stuff). Docker-swarm definitely not. We ran into some show-stopper bugs with it and docker weren't interested in fixing them. We're using k8s now. Complex, yes, but rock solid.
Very nice.
I appreciate your perspective. From the developer side k8s seems really great, but from the sysadmin side it scares me. There seems to be a great deal written about how to use k8s, but I have been unable to find much about the administration side. Any good places to look for that information? Most things I find tell you how to spin up the cluster and then move on to just deploying things.
When I can't find documentation for something, I usually take that as a sign that whatever it is, I shouldn't be doing it. I'd recommend avoiding running your own clusters entirely and using hosted k8s. It's good to have the theoretical knowledge of how the control plane works and such - all things you can learn from github.com/kelseyhightower/kuberne... - but the circumstances where running your own clusters makes sense are very niche.
For day-to-day admin, the k8s docs are probably your best resource. Past that, it's a lot of delving into Github repos looking for references to things in random Readme files and source code. I have occasionally found some helpful things on kubedex.com/
Thanks for these pointers. My conclusion at this point is the same. However the benefits look really nice on the developer side so I’d really like to understand the sys admin side also. My current employer likes to keep internal tools on internal networks. Maybe in the future that will change.
The startup I work for uses k8s AND we use it for stateful services (neo4j and ElasticSearch).
To provide a bit of detail, we're on GCP, and use GKE. We use helm to roll out new deployments as part of our CI (via circleCI). All of our data is in google cloud storage, so we could recover in the event of a persistent disk failure (it would be awfully inconvenient though). Our system is a bit different than your standard database where you're reading and writing all the time. We're usually read-only, but we periodically recompute new databases, and atomically switch our services to point to those. This is quite easy with k8s.
There is certainly a learning curve when using k8s, and I wouldn't recommend it for everyone. However, once you've learned the basics I find it makes it a lot easier to experiment and try out new things.
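One plausible way to do the kind of atomic switch described above -- a sketch, not necessarily this team's actual setup, with invented names and a Bolt-style port -- is to keep a stable Service and flip its label selector between the old and new database deployments:

```yaml
# Hypothetical: clients always talk to "search-db". The Service
# currently selects the "blue" deployment; flipping color to
# "green" repoints all traffic to the freshly computed database.
apiVersion: v1
kind: Service
metadata:
  name: search-db
spec:
  selector:
    app: search-db
    color: blue        # change to "green" to switch atomically
  ports:
    - port: 7687
```

The flip can be done by re-applying the manifest or with something like `kubectl patch`, and since a Service selector change takes effect as a single update, readers never see a half-switched state.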
A great read!
Question: do you think Docker Compose suffers from similar disadvantages? Is this a problem with container orchestration in general?
Also, I bet K8s might have left you with some scars, any battle stories for us?
💻
I’ve only used docker-compose for local development. It works fine for that and is an order of magnitude simpler than k8s. I don’t really think of it as being container-orchestration and more as “portable dev environment” but I may be ignorant of its capabilities.
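For what it's worth, that "portable dev environment" use is typically a small sketch like this (service names, image tags, and credentials are all invented):

```yaml
# Hypothetical docker-compose.yml for local development:
# one app container built from the local Dockerfile, one database.
services:
  web:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: postgres
```

`docker compose up` brings the whole thing up locally, with none of the cluster-level machinery k8s entails.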
Any type of orchestration adds complexity. I’d avoid it in general unless there was a clear, painful need.
No real k8s battle stories, but this was an interesting problem: dev.to/liquid_chickens/kubernetes-...
100% agreed with the author.
We're kinda comparing different levels of abstraction here. K8s has its complexities, but so does managing VMware vSphere, IBM WebSphere, or Pivotal Cloud Foundry for apps of any complexity -- those are better comparisons in my opinion.
The disruption has always been that vanilla K8s lifted some traditionally silo'd infrastructure concerns up to the intersection of Dev and Ops.
Yes, the old way of "here's a box, an IP address, and an SSH key: install your app" works to a point, but someone has always had to manage the patching, firewall rules, storage & backup, monitoring, load balancing, etc. None of those concerns go away with public cloud, and enough people are still on-prem that silo'd Ops groups are normal. So Ops groups hack away with Chef and Puppet to make their lives easier and automate, but they have to do it without standard APIs or even sane CLI tooling to manage the infra (yes, it's awful).
K8s is nothing more than a set of open-source, vendor-agnostic APIs and constructs from which to build platforms. Most startups do not need to build bespoke platforms. Platforms built atop OSS tooling and standards will make a lot of sense for a lot of folks.
THANK YOU! Way too often a tech is touted as "this will fix all your problems" when in reality tech is a series of trade-offs. Your article explains those trade-offs nicely.