Niko Kosonen for Verkkokauppa.com


How to: Kubernetes for Cheap on Google Cloud

[TL;DR: Run Kubernetes on two micro instances on GKE without external load balancers. Cluster setup from scratch. github.com/nkoson/gke-tutorial]

My excitement about running kubernetes on Google Cloud Platform was quickly curbed by the realization that, although Google's virtual machines start at affordable price points, their network ingress is another story. Let's say you want to set up a simple cluster for your own personal projects or a small business. At the time of writing, a couple of micro nodes running in Iowa will set you back $7.77/mo, but the only (officially marketed, AFAIK) method of getting traffic in is a load balancer - and those start at a whopping $18.26/mo for the first 5 forwarding rules. That is a deal breaker for me, since there are plenty of other cloud providers with better offerings for smaller players.

That's when I stumbled upon a great article about running a GKE cluster without load balancers. With this newfound motivation, I set out to create my own GKE cluster - with the requirement that it be as cheap as possible while still enjoying a key benefit of the cloud: freedom from manual maintenance.

I have composed this article as a step-by-step tutorial. Based on my own experience of setting up a cluster on a fresh GCP account, I try to cover every topic from configuring the infrastructure to serving HTTP(S) requests from inside the cluster. Please note that I did this mainly to educate myself on the subject, so critique and corrections are wholeheartedly welcome.

We'll be using Terraform.io to manage our cloud infrastructure, so go ahead and register an account if you haven't already. You'll obviously need access to a Google Cloud Platform account as well.

Let’s get going by creating a new project on the GCP console:

Project selector (top bar) -> New Project -> Enter name -> Create

This will create a nice empty project for us, which differs from the default starter project in that the newly created blank project doesn't come with any predefined APIs or service accounts.
We'll start digging our rabbit hole by enabling the Compute Engine API, which we need in order to communicate with GCP through Terraform. We'll also enable the Service Usage API so that Terraform can enable further services for us as we go.

APIs & Services -> API Library -> Compute Engine API -> Enable

APIs & Services -> API Library -> Service Usage API -> Enable

Once the APIs have been initialized, we should find that GCP has generated a new service account for us. The aptly named Compute Engine default service account grants us remote access to the resources of our project.
Next, we’ll need to create a key for Terraform to authenticate with GCP:

IAM & Admin -> Service accounts -> Compute Engine default service account -> Create key -> Create as JSON

The key we just downloaded can be used either as an environment variable in our terraform.io console or directly from local disk when running Terraform CLI commands. The former requires stripping the newlines out of the JSON file and adding its contents as GOOGLE_CLOUD_KEYFILE_JSON in our terraform.io workspace:

Workspaces -> (select a workspace) -> Variables -> Environment Variables

Make sure you set the value as "sensitive / write only" if you decide to store the key in your terraform.io workspace.
As stated above, it's also possible to read the key from your local disk by adding the following to the Terraform provider block:

provider "google" {
  version = "3.4.0"
  credentials = file("<filename>.json")
}

In this tutorial, we'll be using the latter of the two methods, i.e. reading the key from local disk.

While we're here, it's worth noting that the Compute Engine default service account doesn't have permission to create new roles or assign IAM policies in the project. We will need both later as part of our terraforming process, so let's get it over with:

IAM & admin -> edit Compute Engine default service account (pen icon) -> Add another role -> select "Role Administrator" -> Save

Add another role -> select "Project IAM Admin" -> Save

Add another role -> select "Service Account Admin" -> Save

We’re now ready to initialize Terraform and apply our configuration to the cloud.

terraform init

This will set up your local Terraform workspace and download the Google provider plugin, which is used to configure GCP resources.

We can proceed to apply the configuration to our GCP project.

terraform apply

This will feed the configuration to the terraform.io cloud, check its syntax, check the state of our GCP project and, finally, ask for confirmation to apply our changes. Enter ‘yes’ and sit back. This is going to take a while.

module.cluster.google_project_iam_custom_role.kluster: Creating...
module.cluster.google_service_account.kluster: Creating...
module.cluster.google_compute_network.gke-network: Creating...
module.cluster.google_compute_address.static-ingress: Creating...
module.cluster.google_service_account.kubeip: Creating...
module.cluster.google_container_node_pool.custom_nodepool["ingress-pool"]: Creating...

module.cluster.google_container_node_pool.custom_nodepool["ingress-pool"]: Creation complete after 1m8s

Once the dust has settled, it’s time to check the damage. We set out to configure a minimal cloud infrastructure for running a kubernetes cluster, so let’s see how we’ve managed so far.

Compute Engine -> VM Instances

This page reveals that we now have two virtual machines running. These machines belong to the node pools ingress-pool and web-pool. A node pool is a piece of configuration that tells Google Kubernetes Engine (GKE) how and when to scale the machines in our cluster up or down. You can find the node pool definitions in cluster.tf and node_pool.tf.

If you squint, you can see that the machines have internal IP addresses assigned to them. These addresses are part of our subnetwork range. There are a few other address ranges defined in our cluster, which we'll go over now:

subnet_cidr_range = "10.0.0.0/16"
# 10.0.0.0 -> 10.0.255.255

Defined in google_compute_subnetwork, this is the address range of the subnetwork, in which our GKE cluster will run.

master_ipv4_cidr_block = "10.1.0.0/28"
# 10.1.0.0 -> 10.1.0.15

The master node of our kubernetes cluster will be running under this block, used by google_container_cluster.

cluster_range_cidr = "10.2.0.0/16"
# 10.2.0.0 -> 10.2.255.255

Our kubernetes pods will get their addresses from this range, which is defined as a secondary range of our subnet.

services_range_cidr = "10.3.0.0/16"
# 10.3.0.0 -> 10.3.255.255

Also a secondary range of our subnet, the services range contains our kubernetes services, more on which a bit later.

Now that we understand the basic building blocks of our network, there are a couple more details to grasp for this to make sense as a whole. The nodes in our cluster can communicate with each other on the subnet we just discussed, but what about traffic to and from the outside world? After all, we need not only to accept incoming connections, but also to download container images from the web. Enter Cloud NAT:

Networking -> Network Services -> Cloud NAT

Part of our router configuration, Cloud NAT grants our VM instances Internet connectivity without external IP addresses. This makes provisioning our kubernetes nodes more secure, as we can download container images through NAT without exposing the machines to the public Internet.
In our definition, we set the router to use automatically allocated addresses and to operate only on the subnetwork we set up earlier.

OK, our NAT gives us outbound connectivity, but we'll need an inbound address for our cheap-o load balancer / ingress / certificate manager all-in-one contraption, traefik. We'll talk about the application itself in a while, but let's first make sure that our external static IP addresses are in order:

Networking -> VPC network -> External IP addresses

There should be two addresses on the list: an automatically generated one in use by our NAT, plus another, currently unused address named static-ingress. The latter is crucial for accepting connections without an external load balancer, since it lets us route traffic straight to our ingress node via a static IP.
We'll be running an application called kubeip in our cluster to take care of assigning the static address to our ingress node; more on that in a short while.

This is a good opportunity to take a look at our firewall settings:

Networking -> VPC network -> Firewall rules

We have added a single custom rule, which lets inbound traffic through to our ingress node. Notice how we specify a target for the rule so that it matches only instances carrying the ingress-pool tag. After all, we only need HTTP(S) traffic to land on our internal load balancer (traefik). The custom firewall rule is defined here.

Lest we forget, one more thing: we'll be using the CLI tool gcloud to get our kubernetes credentials up and running in the next step. Of course, gcloud needs a configuration of its own as well, so let's get it over with:

gcloud init

Answer truthfully to the questions and you shall be rewarded with a good gcloud config.

Kubernetes

Our cloud infrastructure setup is now done and we're ready to run some applications in the cluster. In this tutorial, we'll be using kubectl to manage our kubernetes cluster. To access the cluster on GCP, kubectl needs a valid config, which we can quickly fetch by running:

gcloud container clusters get-credentials <cluster> --region <region>

Aggressive optimizations

Disclaimer: I don't recommend doing any of the things I've done in this section. Feel free to crank up the node pool machine types to something beefier (such as g1-small) in favor of keeping logging and metrics alive. At the time of writing this tutorial, I had to make some rather aggressive optimizations on the cluster to run everything on two micro instances. We did mention being cheap, didn't we?

Fully aware that it's probably not a good idea, we have already disabled logging on GCP. Now that we're up to speed, why don't we go ahead and turn off kubernetes metrics as well:

kubectl scale --replicas=0 deployment/metrics-server-v0.3.1 --namespace=kube-system

That's over 100MB of memory saved on our nodes, at the expense of no longer knowing the total memory and CPU consumption. Sounds like a fair deal to me!
We'll scale the kube-dns deployments down as well, since running multiple DNS services in our tiny cluster seems like overkill:

kubectl scale --replicas=0 deployment/kube-dns-autoscaler --namespace=kube-system
kubectl scale --replicas=1 deployment/kube-dns --namespace=kube-system

The kubernetes default backend can go too; we'll be using nginx for that purpose:

kubectl scale --replicas=0 deployment/l7-default-backend --namespace=kube-system

At this point I realized that the instance spun up from web-pool was stuck at "ContainerCreating", with all the kubernetes deployments I had just scaled down still running, so I deleted the instance to give it a fresh start:

gcloud compute instances list
gcloud compute instances delete <name of the web-pool instance>

After a few minutes, GCP had spun up a new instance from the web-pool node pool, this time without the metrics server and default backend, and with only one DNS service.

Deployments

The cluster we're about to launch has three deployments: nginx for serving web content, kubeip for keeping our ingress node reachable, and traefik, which serves the dual purpose of routing incoming connections to nginx and handling SSL. We'll discuss each deployment next.

nginx-web

Incoming HTTP(S) traffic in our cluster is redirected to the nginx server, which we use as our web backend. Put simply, in kubernetes terms we're going to deploy a container image within a namespace and send traffic to it through a service. We'll create the namespace first. Navigate to k8s/nginx-web/ and run:

kubectl create --save-config -f namespace.yaml

Pretty straightforward so far. The namespace we just created is defined here. Next up is the deployment:

kubectl create --save-config -f deployment.yaml

As you can see from the definition, we want our deployment to run under the namespace nginx-web. We also need the container to run on a virtual machine spun up from the node pool web-pool, hence the nodeSelector parameter. We're doing this because we want everything except the load balancer to run on a preemptible VM to cut down costs, while the load balancer itself stays on a regular node to maximize uptime.

Moving on, the container section defines the Docker image we want to run from our private Google Container Registry (GCR) repository. Below that, we open ports 80 and 443 for traffic and set up a health check (liveness probe) for our container. The cluster will periodically GET the container at the endpoint /health and force a restart if it doesn't receive a 200 OK response within the given time. The readiness probe is basically the same, but tells the cluster when the container is ready to start accepting connections after initialization.
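For reference, the relevant parts of such a deployment could look roughly like the sketch below. Treat it as an illustration only: the label names, probe settings and nodeSelector key are assumptions on my part, so check deployment.yaml in the tutorial repo for the exact values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web
  namespace: nginx-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      # schedule the pod only on nodes created from web-pool
      nodeSelector:
        cloud.google.com/gke-nodepool: web-pool
      containers:
        - name: nginx-web
          image: eu.gcr.io/<project>/nginx-web:latest
          ports:
            - containerPort: 80
            - containerPort: 443
          # restart the container if /health stops returning 200 OK
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          # only route traffic to the container once /health answers
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10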

We won't dive too deep into Docker in this tutorial, but a basic nginx:alpine container with placeholder web content is included in the repo. We'll need to upload the container image to GCR so that kubernetes can use it, as per the deployment we just created. Navigate to docker/nginx-alpine and run:

docker build -t eu.gcr.io/<project>/nginx-web .

This builds the image and tags it appropriately for use in our cluster. We need docker to authenticate with GCP, so let's register gcloud as docker's credential helper by running:

gcloud auth configure-docker

To push the image into our registry, run:

docker push eu.gcr.io/<project>/nginx-web

We can check that everything went fine with the deployment by running:

kubectl get event --namespace nginx-web
LAST SEEN  TYPE      REASON              KIND   MESSAGE
1m         Normal    Pulling             Pod    pulling image "eu.gcr.io/gke-tutorial-xxxxxx/nginx-web:latest"
1m         Normal    Pulled              Pod    Successfully pulled image "eu.gcr.io/gke-tutorial-xxxxxx/nginx-web:latest"
1m         Normal    Created             Pod    Created container
1m         Normal    Started             Pod    Started container

We now have an nginx container running in the right place, but we still need to route traffic to it within the cluster. This is done by creating a service:

kubectl create --save-config -f service.yaml

Our service definition is minimal: we simply route incoming traffic to pods that match the selector nginx-web. In other words, traffic sent to this service on ports 80 and 443 gets directed to the pods running our web backend.
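As a rough sketch (the selector labels here are an assumption; see service.yaml in the repo for the exact definition), the service looks something like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  namespace: nginx-web
spec:
  # send traffic to pods labelled app: nginx-web
  selector:
    app: nginx-web
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443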

kubeip

Working in a cloud environment, we cannot trust our virtual machines to stay up indefinitely. On the contrary, we actually embrace this by running our web server on a preemptible node. Preemptible nodes are cheaper to run, as long as we accept the fact that they go down for a period of time at least once every 24 hours.
We could easily ensure higher availability in our cluster by simply scaling up the number of nodes, but for the sake of simplicity, we'll stick to one of each type, defined by our node pools ingress-pool and web-pool.

A node pool is a set of instructions on how many and what type of instances we should have running in our cluster at any given time. We'll be running traefik on a node created from ingress-pool, and the rest of our applications on nodes created from web-pool.

Even though the nodes from ingress-pool are not preemptible, they may still be restarted at some point. Because our cheap-o cluster doesn't use an external load balancer (which is expen$ive), we need another way to make sure that our ingress node is always reachable at the same IP address.
We solve this by creating a static IP address and using kubeip to bind that address to our ingress node whenever necessary.

Let's create the deployment for kubeip by navigating to k8s/kubeip and running:

kubectl create --save-config -f deployment.yaml

We define kube-system as the target namespace for kubeip, since we want it to communicate directly with the kubernetes master and find out when a newly created node needs a static address. Using a nodeSelector, we force kubeip to deploy on a web-pool node, just like we did with nginx earlier.

Next in the config, we define a bunch of environment variables, which we bind to values in a ConfigMap. We also instruct our deployment to fetch GCP service account credentials from a kubernetes secret. Through this service account, kubeip gets the access rights it needs to make changes (assign IP addresses) in GCP.
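In practice, that part of the pod spec looks something like the excerpt below. Treat it as a sketch: the image name, ConfigMap key, secret name and mount path are assumptions of mine, so refer to k8s/kubeip/deployment.yaml for the exact spec.

spec:
  serviceAccountName: kubeip
  # run kubeip alongside nginx on the preemptible web-pool node
  nodeSelector:
    cloud.google.com/gke-nodepool: web-pool
  containers:
    - name: kubeip
      image: doitintl/kubeip:latest
      env:
        # configuration values are bound from the kubeip ConfigMap
        - name: KUBEIP_NODEPOOL
          valueFrom:
            configMapKeyRef:
              name: kubeip-config
              key: KUBEIP_NODEPOOL
        # point the GCP client libraries at the mounted key file
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/kubeip/key/kubeip-key.json
      volumeMounts:
        - name: kubeip-key
          mountPath: /etc/kubeip/key
          readOnly: true
  volumes:
    # the kubernetes secret we create from keys/kubeip-key.json below
    - name: kubeip-key
      secret:
        secretName: kubeip-key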

We created a GCP service account for kubeip as part of our Terraform process. Now we need to extract its credentials, just like we did with our main service account at the beginning of this tutorial. For added variety, let's use the command line this time. From the root of our project, run:

gcloud iam service-accounts list
gcloud iam service-accounts keys create keys/kubeip-key.json --iam-account <kubeip service-account id>

Now that we have saved the key, we'll store it in the cluster as a kubernetes secret:

kubectl create secret generic kubeip-key --from-file=keys/kubeip-key.json -n kube-system

We have created a GCP service account for kubeip and configured kubeip to access it via the kubernetes secret. We still need a kubernetes service account so that kubeip can read information about the nodes in the cluster. Let's create that now:

kubectl create --save-config -f serviceaccount.yaml

We define a (kubernetes) ServiceAccount and below it the ClusterRole and ClusterRoleBinding resources, which define what our service account is allowed to do and where.
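The shape of serviceaccount.yaml is roughly the following. This is a sketch: the exact names and the full rule list live in the repo and may differ, but it illustrates how the three resources fit together.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeip
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeip
rules:
  # kubeip needs to watch nodes and pods to notice when a new
  # ingress-pool instance comes up
  - apiGroups: [""]
    resources: ["nodes", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeip
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeip
subjects:
  - kind: ServiceAccount
    name: kubeip
    namespace: kube-system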

Next, we need to create the ConfigMap for the deployment of kubeip:

kubectl create --save-config -f configmap.yaml

In the config, we set kubeip to run in web-pool and watch for instances spun up from ingress-pool. When kubeip detects such an instance, it checks whether there is an unassigned IP address with the label kubeip and value static-ingress in reserve, and gives that address to the instance. We have restricted ingress-pool to a single node, so we only need a single static IP address in reserve.
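As a rough sketch, the ConfigMap could look like this. The key names follow kubeip's configuration conventions and the values follow this tutorial's labels, but they are assumptions here, so double-check them against k8s/kubeip/configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeip-config
  namespace: kube-system
data:
  # reserved addresses must carry the label kubeip=static-ingress
  KUBEIP_LABELKEY: "kubeip"
  KUBEIP_LABELVALUE: "static-ingress"
  # watch nodes created from this pool and assign addresses to them
  KUBEIP_NODEPOOL: "ingress-pool"
  KUBEIP_FORCEASSIGNMENT: "true"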

traefik

External load balancers are very useful in keeping your web service responsive under high load. They are also prohibitively expensive for routing traffic to that single pod in your personal cluster, so we're going to make do without one.

In our tutorial cluster, we dedicate a single node to hosting traefik, which we configure to route traffic to our web backend (the nginx server). Traefik can also fetch SSL certificates from resolvers such as Let's Encrypt to protect our HTTPS traffic. We're not going to cover procuring a domain name and setting up DNS in this tutorial, but, for reference, I have left everything required for setting up a DNS challenge commented out in the code.

Let's create a namespace and a service account for traefik. Navigate to k8s/traefik and run:

kubectl create --save-config -f namespace.yaml
kubectl create --save-config -f serviceaccount.yaml

Next, we'll create the deployment and take a look at what we've done so far:

kubectl create --save-config -f deployment.yaml

Using a nodeSelector once again, we specify that we want traefik to run on a machine that belongs to ingress-pool, which means that in our cluster, traefik will sit on a different machine than kubeip and nginx. The thinking behind this is that our two machines are unlikely to go down simultaneously. When web-pool goes down and is restarted, no problem: traefik will find it in the cluster and resume routing connections normally.
If our ingress-pool node went down, the situation would be more severe, since we need our external IP bound to that machine. How else would our clients reach our web backend? Remember, we don't have an external load balancer...

Luckily, we have kubeip which will detect the recently rebooted ingress-pool machine and assign our external IP back to it in no time. Crisis averted!

There are a couple of key things in our traefik deployment that set it apart from our other deployments. The first is hostNetwork, which we need so that traefik can listen on the network interfaces of its host machine.
The second is a toleration, which we need because we have tainted the ingress node pool. Since our traefik deployment is the only one with this toleration, we can rest assured that no other application gets scheduled on ingress-pool.
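The relevant part of the pod spec looks roughly like this (a sketch; the taint key, value and effect are assumptions, so compare against the taint in node_pool.tf and the toleration in k8s/traefik/deployment.yaml):

spec:
  # bind directly to the host's network interfaces (ports 80/443)
  hostNetwork: true
  # run only on the non-preemptible ingress node
  nodeSelector:
    cloud.google.com/gke-nodepool: ingress-pool
  # tolerate the taint we placed on ingress-pool;
  # no other deployment carries this toleration
  tolerations:
    - key: "ingress-pool"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"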

Finally, we give traefik some arguments: entry points for HTTP, HTTPS and the health check (ping in traefik lingo). We also enable the kubernetes provider, which lets us configure routing with custom resources. Let's create them now:

kubectl create --save-config -f resource.yaml

Now we can add routes to traefik using our new custom resources:

kubectl create --save-config -f route.yaml

The two routes connect the "web" and "websecure" entrypoints (which we set up as arguments for traefik) to our nginx-web service. We should now see the HTML content served by nginx when we connect to our static IP address.
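For reference, an IngressRoute for the plain HTTP entry point could look roughly like this (a sketch; the match rule, resource names and CRD version are assumptions, so compare with k8s/traefik/route.yaml):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nginx-web
  namespace: nginx-web
spec:
  # "web" matches the HTTP entry point given to traefik as an argument
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/`)
      kind: Rule
      services:
        # forward to the nginx-web service we created earlier
        - name: nginx-web
          port: 80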

Please enjoy your cluster-on-a-budget responsively!

Comments (2)

James Holcomb

Nice article Niko 👍...I have been using this approach for some time in a CI/CD cluster with GKE 1.14, the free-tier micro VM + one preemptible node. Even had Stackdriver logging/monitoring enabled. I used a trick to disable scheduling of fluentd on the ingress node to reduce memory by removing the fluentd-ds-ready label.

But after upgrading to 1.15 that hack appears to be gone. I tried scaling down the metrics server and further reducing fluentd memory using a scaling policy, but could not get it to fit alongside the nginx-ingress controller.

Since the fluentd DaemonSet does not respect taints, I gave up and switched to an e2-micro node ($6/month) for the ingress since Stackdriver logging is a req for me.

It's unfortunate that the free tier micro VM is pretty much useless given the overhead of GKE.

Danny Verpoort

Thank you so much, this is awesomely detailed!