All code and configuration is available on GitHub
As a disclaimer, I'm not claiming this is a perfect fit for everyone. Different applications have different technical requirements, and different uptime or availability standards.
But I aim to outline the basics of an inexpensive GKE cluster with Node microservices in mind. Asserted uses a configuration similar to this to run all of its microservices.
Cluster Features
- preemptible nodes to reduce cost (optional)
- automatic SSL management with Google managed certificates
- ingress websocket stickiness
Why a cluster at all? Why not just a VM?
If price is your only consideration, at the cost of everything else, then it's probably cheaper to just use a VM. However, deploying into a cluster offers a number of advantages for not much more money.
A GKE cluster gives you tons of stuff for free that you would otherwise have to do without or engineer yourself.
- Dockerized applications ensure portable and reproducible builds
- Deployments are automatically health-checked as they roll out and stop if something is broken
- Failing instances are automatically taken off the load balancer and restarted
- Ingress controllers can automatically provision and update your SSL certs
- Resource management becomes much easier as individual applications can be limited by CPU or memory, and distributed optimally over machines
- New applications can be deployed with minimal complexity
- High availability becomes a matter of how much you want to pay rather than an engineering problem
In my mind, the only real argument against any of this is the cost of a cluster. But properly configured, a simple cluster can be deployed for minimal cost.
High (ish) Availability
In this scenario I need my cluster to be able to perform deployments and node updates with no downtime as those two events are likely to be relatively frequent.
That said, I don't need and can't afford 100% uptime. I don't need multi-zone redundancy, and definitely not multi-cloud failover. I can tolerate the risk of up to a minute or so of unexpected downtime once a month if it reduces my costs significantly.
If you design all of your services to be stateless and make use of Cloud PubSub to queue work instead of directly calling other services over HTTP, it's possible to have an entire microservice's worth of pods become unavailable for a minute or two without any lasting (or maybe even noticeable) impact.
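As a rough sketch of that pattern (the topic name and payload shape here are just for illustration, not from the original post), a service might publish work to a topic instead of calling the consuming service directly:

```js
// Minimal sketch: queue work via Cloud PubSub instead of a direct HTTP call.
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();

async function enqueueWork(data) {
  // If the consuming service's pods are down for a minute, the message just
  // waits in the subscription instead of failing like an HTTP request would.
  await pubsub
    .topic('work-items')
    .publish(Buffer.from(JSON.stringify(data)));
}
```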
Preemptible Nodes
This is an optional step, but one where a lot of the cost savings comes from. A preemptible e2-small costs about 30% of the price of a standard VM, but comes with some caveats:
- Preemptible nodes can be killed at any time, even within minutes of starting (though that's rare in my experience).
- Google claims preempted instances are always restarted within 24 hours, though I've found this not to always be the case
- Preemptible nodes may not always be available. This seems to be more of an issue for larger VMs; I've never seen it myself with smaller ones.
If your services are stateless, this shouldn't be much of an issue. The only real problem arises if the lifetimes of the nodes are synchronized and Google decides to kill all of them at the same time. This risk can be minimized by running something like preemptible-killer, but I haven't found it necessary yet.
Creating the Cluster
Cluster Details
The cluster is created with a single gcloud command. If the cluster already exists, you can create a new node pool with similar arguments.
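The exact command lives in the repo; a minimal sketch of it, assuming a small preemptible pool (cluster name, zone, and node count here are placeholders):

```sh
gcloud container clusters create test-cluster \
  --zone us-central1-a \
  --machine-type e2-small \
  --preemptible \
  --num-nodes 3 \
  --enable-autoupgrade \
  --enable-autorepair
```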
Once this command is run, it will take a few minutes to complete.
API Implementation
The example API is only a few lines, but has a fair bit going on to demonstrate the various cluster features.
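The full source is in the repo linked at the top; as a rough sketch of what such an API might look like (the port, endpoint path, and env var names are assumptions):

```js
// server.js - minimal sketch of the example API
const express = require('express');
const http = require('http');
const os = require('os');
const { Server } = require('socket.io');
const Redis = require('ioredis');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

// Treat Redis as an ephemeral nice-to-have cache: log errors, don't crash.
const redis = new Redis({ host: process.env.REDIS_HOST || 'redis' });
redis.on('error', (err) => console.error('redis unavailable:', err.message));

// Health endpoint for the deployment's liveness/readiness probes.
app.get('/health', (req, res) => res.status(200).send('ok'));

// Tell each client which pod it landed on, to demonstrate session affinity.
io.on('connection', (socket) => {
  socket.emit('hostname', os.hostname());
});

server.listen(process.env.PORT || 3000);
```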
Namespace
Create the namespace first.
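The manifest is about as simple as they come (the namespace name is an assumption):

```yaml
# cluster/namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: test
```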
kubectl apply -f cluster/namespace.yml
Deploy Redis
Redis is only included as an in-cluster deployment for the purposes of this example. In a production environment, if Redis is required, you likely wouldn't want it on a preemptible instance.
A better choice is to use a node selector or node affinity to deploy it onto a non-preemptible VM, or even substitute Redis Memorystore if the budget allows. A minimal Redis Memorystore instance is a bit costly, but worth it in my opinion.
That said, if you design your microservices to treat Redis as an ephemeral, nice-to-have global cache and have connections fail gracefully when it's gone, you could run it in the cluster on preemptible nodes. Again, it depends on your application, cost sensitivity, and uptime requirements.
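If you take the node-affinity route suggested above, an excerpt like this in the Redis deployment's pod spec keeps it off preemptible nodes (the cloud.google.com/gke-preemptible label is only present on preemptible GKE nodes):

```yaml
# excerpt from a Redis deployment spec: only schedule onto
# nodes that do NOT carry the preemptible label
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloud.google.com/gke-preemptible
              operator: DoesNotExist
```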
kubectl apply -f cluster/redis
Create the API IP Address
Create a public external API IP to bind to the ingress.
gcloud compute addresses create test-api-ip --global
Configure your DNS provider to point to the IP.
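To look up the reserved address for the DNS record:

gcloud compute addresses describe test-api-ip --global --format='value(address)'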
ConfigMap and API Deployment
The configMap and deployment are mostly pretty standard, but I’ll highlight the important details.
The deploy.yml specifies pod anti-affinity to spread the API pods as widely as possible across the nodes. The topologyKey tells the scheduler which node label to use when deciding whether two pods are co-located on the same resource (here, the same node).
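The relevant excerpt looks roughly like this (the app label is an assumption):

```yaml
# excerpt from cluster/api/deploy.yml: prefer spreading API pods
# across different nodes, keyed on each node's hostname label
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: test-api
          topologyKey: kubernetes.io/hostname
```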
Apply the configMap and the API deployment and wait until they are up.
kubectl apply -f cluster/api/configMap.yml
kubectl apply -f cluster/api/deploy.yml
BackendConfig
The BackendConfig is a less widely documented configuration option in GKE, but it’s essential to making websockets load-balance correctly across multiple nodes.
The BackendConfig itself looks something like this (the names and exact timeout here are illustrative):
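```yaml
# cluster/api/backend.yml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-backend-config
  namespace: test
spec:
  sessionAffinity:
    affinityType: "CLIENT_IP"
  connectionDraining:
    drainingTimeoutSec: 60
```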
This configures the load balancer to have session stickiness based on client IP, so that connections are not constantly round-robined to every API pod. Without that, socket.io won't be able to maintain a connection while polling.
The connectionDraining option just increases the amount of time allowed to drain connections as old API pods are replaced with new ones. The default is 0, which can cause connections to be severed early.
kubectl apply -f cluster/api/backend.yml
This BackendConfig is then referenced by both the service.yml and the ingress.yml.
API Service
The service connects the load balancer to each API pod.
The important extra details in this case are the annotations and sessionAffinity in the spec.
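An excerpt showing those details (the names and ports are assumptions):

```yaml
# excerpt from cluster/api/service.yml
apiVersion: v1
kind: Service
metadata:
  name: test-api
  namespace: test
  annotations:
    # attach the BackendConfig above to this service
    cloud.google.com/backend-config: '{"default": "api-backend-config"}'
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: test-api
  ports:
    - port: 80
      targetPort: 3000
```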
kubectl apply -f cluster/api/service.yml
ManagedCertificate and Ingress
The ingress terminates SSL and connects the service and the load balancer to the fixed external IP.
The important extra details here are, again, the annotations. They link the ingress to the correct cert, IP, and backend, and they enable websocket load-balancing; without them, websocket connections will not work.
The managed certificate attempts to provision an SSL cert for the domain specified in its config. Everything before this point must be deployed and working before the managed certificate will switch to Active.
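Sketches of the two manifests (the domain, names, and port are placeholders):

```yaml
# cluster/api/managedCert.yml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: test-api-cert
  namespace: test
spec:
  domains:
    - api.example.com
```

```yaml
# cluster/api/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-api-ingress
  namespace: test
  annotations:
    # bind to the reserved static IP and the managed certificate
    kubernetes.io/ingress.global-static-ip-name: test-api-ip
    networking.gke.io/managed-certificates: test-api-cert
spec:
  defaultBackend:
    service:
      name: test-api
      port:
        number: 80
```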
Create the cert and the ingress.
kubectl apply -f cluster/api/managedCert.yml
kubectl apply -f cluster/api/ingress.yml
It’ll take up to 20 minutes to create the managed certificate. You can monitor the cert creation and the ingress creation by running the following separately:
watch kubectl describe managedcertificate
watch kubectl get ingress
Success!
Once everything is up, you should be able to navigate to the URL you bound to the external IP and see the demo page, which reports the hostname of the pod serving your connection. As you refresh, that hostname should not change, which indicates that socket.io and the session affinity are working.
You now have all the basic configuration you need for a Kubernetes cluster with automatic SSL and websocket/socket.io support!