<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Niko Kosonen</title>
    <description>The latest articles on DEV Community by Niko Kosonen (@nkoson).</description>
    <link>https://dev.to/nkoson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F344595%2F8fb7d506-9878-4211-82ef-9630a036dc12.jpeg</url>
      <title>DEV Community: Niko Kosonen</title>
      <link>https://dev.to/nkoson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nkoson"/>
    <language>en</language>
    <item>
      <title>How to: Kubernetes for Cheap on Google Cloud</title>
      <dc:creator>Niko Kosonen</dc:creator>
      <pubDate>Tue, 03 Mar 2020 07:08:10 +0000</pubDate>
      <link>https://dev.to/verkkokauppacom/how-to-kubernetes-for-cheap-on-google-cloud-1aei</link>
      <guid>https://dev.to/verkkokauppacom/how-to-kubernetes-for-cheap-on-google-cloud-1aei</guid>
      <description>&lt;p&gt;[TL;DR: Run Kubernetes on two micro instances on GKE without external load balancers. Cluster setup from scratch. &lt;a href="https://github.com/nkoson/gke-tutorial"&gt;github.com/nkoson/gke-tutorial&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;My excitement about running &lt;em&gt;kubernetes&lt;/em&gt; on Google Cloud Platform was quickly curbed by the realization that, although Google's virtual machines start at affordable price points, network ingress is another story. Say you want to set up a simple cluster for your personal projects or a small business. At the time of writing, a couple of micro nodes running in Iowa will set you back $7.77/mo, but the only (officially marketed, AFAIK) way of getting traffic in is a load balancer - and those start at a whopping $18.26/mo for the first 5 forwarding rules. That's a deal breaker for me, since plenty of other cloud providers have better offerings for smaller players. &lt;/p&gt;

&lt;p&gt;That's when I stumbled upon a &lt;a href="https://charlieegan3.com/posts/2018-08-15-cheap-gke-cluster-zero-loadbalancers/"&gt;great article&lt;/a&gt; about running a GKE cluster without load balancers. Thus motivated, I set out to create my own GKE cluster - with the requirement that it be as cheap as possible while preserving a key benefit of the cloud: freedom from manual maintenance. &lt;/p&gt;

&lt;p&gt;I have composed this article as a step-by-step tutorial. Based on my own experience of setting up a cluster on a fresh GCP account, I try to cover everything from configuring the infrastructure to serving HTTP(S) requests from inside the cluster. Please note that I did this mainly to educate myself on the subject, so critique and corrections are wholeheartedly welcome.&lt;/p&gt;

&lt;p&gt;We'll be using Terraform.io to manage our cloud infrastructure, so go ahead and register an account if you haven't already. You'll obviously need access to a Google Cloud Platform account as well.&lt;/p&gt;

&lt;p&gt;Let’s get going by creating a new project on the GCP console:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Project selector (top bar) -&amp;gt; New Project -&amp;gt; Enter name -&amp;gt; Create&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This will create a nice empty project for us, which differs from the default starter project in that the newly created blank project doesn't come with any predefined APIs or service accounts.&lt;br&gt;
We'll start digging our rabbit hole by enabling the Compute Engine API, which we need in order to communicate with GCP from Terraform. We'll also enable the Service Usage API so that Terraform can enable further services for us as we go.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;APIs &amp;amp; Services -&amp;gt; API Library -&amp;gt; Compute Engine API -&amp;gt; Enable&lt;/p&gt;

&lt;p&gt;APIs &amp;amp; Services -&amp;gt; API Library -&amp;gt; Service Usage API -&amp;gt; Enable&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once the APIs have been initialized, we should find that GCP has generated a new service account for us. The aptly named Compute Engine default service account grants us remote access to the resources of our project.&lt;br&gt;
Next, we’ll need to create a key for Terraform to authenticate with GCP:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;IAM &amp;amp; Admin -&amp;gt; Service accounts -&amp;gt; Compute Engine default service account -&amp;gt; Create key -&amp;gt; Create as JSON&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The key that we just downloaded can be used in our terraform.io console as an environment variable, or read directly from the local disk when running Terraform CLI commands. The former requires the newlines to be edited out of the JSON file and the contents added as &lt;code&gt;GOOGLE_CLOUD_KEYFILE_JSON&lt;/code&gt; in our terraform.io workspace:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Workspaces -&amp;gt; (select a workspace) -&amp;gt; Variables -&amp;gt; Environment Variables&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Make sure you set the value as “sensitive / write only”, if you decide to store the key in your terraform.io workspace.&lt;br&gt;
As stated above, it’s also possible to read the key from your local drive by adding the following in the Terraform provider resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"google"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"3.4.0"&lt;/span&gt;
  &lt;span class="nx"&gt;credentials&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;filename&amp;gt;.json"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this tutorial, we’ll be using the latter of the two methods.&lt;/p&gt;

&lt;p&gt;While we’re here, it’s worth noting that the Compute Engine default service account doesn’t have the permissions to create new roles and assign IAM policies in the project. This is something that we will need later as part of our terraforming process, so let’s get it over with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;IAM &amp;amp; admin -&amp;gt; edit Compute Engine default service account (pen icon) -&amp;gt; Add another role -&amp;gt; select "Role Administrator" -&amp;gt; Save&lt;/p&gt;

&lt;p&gt;Add another role -&amp;gt; select "Project IAM Admin" -&amp;gt; Save&lt;/p&gt;

&lt;p&gt;Add another role -&amp;gt; select "Service Account Admin" -&amp;gt; Save&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We’re now ready to initialize Terraform and apply our configuration to the cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will set up your local Terraform workspace and download the Google provider plugin, which is used to configure GCP resources.&lt;/p&gt;

&lt;p&gt;We can proceed to apply the configuration to our GCP project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This will feed the configuration to the terraform.io cloud, check its syntax, check the state of our GCP project and, finally, ask for confirmation to apply our changes. Enter ‘yes’ and sit back. This is going to take a while.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;module.cluster.google_project_iam_custom_role.kluster: Creating...
module.cluster.google_service_account.kluster: Creating...
module.cluster.google_compute_network.gke-network: Creating...
module.cluster.google_compute_address.static-ingress: Creating...
module.cluster.google_service_account.kubeip: Creating...
module.cluster.google_container_node_pool.custom_nodepool[&lt;span class="s2"&gt;"ingress-pool"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: Creating...

module.cluster.google_container_node_pool.custom_nodepool[&lt;span class="s2"&gt;"ingress-pool"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;: Creation &lt;span class="nb"&gt;complete &lt;/span&gt;after 1m8s
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once the dust has settled, it’s time to check the damage. We set out to configure a minimal cloud infrastructure for running a &lt;em&gt;kubernetes&lt;/em&gt; cluster, so let’s see how we’ve managed so far.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Compute Engine -&amp;gt; VM Instances&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This page reveals that we now have two virtual machines running. These machines belong to the node pools ingress-pool and web-pool. A node pool is a piece of configuration that tells Google Kubernetes Engine (GKE) how and when to scale the machines in our cluster up or down. You can find the node pool definitions in &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L32"&gt;cluster.tf&lt;/a&gt; and &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/node_pool.tf#L1"&gt;node_pool.tf&lt;/a&gt;.&lt;/p&gt;
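
&lt;p&gt;For reference, a minimal node pool definition in Terraform looks roughly like this. This is an illustrative sketch, not a copy of the repository's configuration; resource names and field values are assumptions:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;# Illustrative sketch; see cluster.tf / node_pool.tf for the real definitions
resource "google_container_node_pool" "ingress_pool" {
  name       = "ingress-pool"
  cluster    = google_container_cluster.cluster.name
  node_count = 1

  node_config {
    machine_type = "f1-micro"   # cheapest tier; the tutorial runs on micro instances
    preemptible  = false        # the ingress node should stay up
    tags         = ["ingress-pool"]
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;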

&lt;p&gt;If you squint, you can see that the machines have internal IP addresses assigned to them. These addresses are part of our &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/network.tf#L9"&gt;subnetwork&lt;/a&gt; range. There are a few other address ranges defined for our cluster, which we'll glance over now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;subnet_cidr_range&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
&lt;span class="c1"&gt;# 10.0.0.0 -&amp;gt; 10.0.255.255&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Defined in &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/network.tf#L9"&gt;google_compute_subnetwork&lt;/a&gt;, this is the address range of the subnetwork, in which our GKE cluster will run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;master_ipv4_cidr_block&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.1.0.0/28"&lt;/span&gt;
&lt;span class="c1"&gt;# 10.1.0.0 -&amp;gt; 10.1.0.15&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The master node of our &lt;em&gt;kubernetes&lt;/em&gt; cluster will be running under this block, used by &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/main.tf#L36"&gt;google_container_cluster&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;cluster_range_cidr&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.2.0.0/16"&lt;/span&gt;
&lt;span class="c1"&gt;# 10.2.0.0 -&amp;gt; 10.2.255.255&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Rest of our &lt;em&gt;kubernetes&lt;/em&gt; nodes will be running under this range, defined as a &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/network.tf#L21"&gt;secondary range&lt;/a&gt; as part of our subnet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="nx"&gt;services_range_cidr&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.3.0.0/16"&lt;/span&gt;
&lt;span class="c1"&gt;# 10.3.0.0 -&amp;gt; 10.3.255.255&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Also a secondary range in our subnet, the services range contains our &lt;em&gt;kubernetes&lt;/em&gt; services, more on which a bit later.&lt;/p&gt;
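
&lt;p&gt;Putting the ranges together, the subnetwork and its secondary ranges could be sketched in Terraform like this (resource and range names here are illustrative, not taken from the repository):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "google_compute_subnetwork" "gke" {
  name          = "gke-subnet"
  network       = google_compute_network.gke-network.name
  ip_cidr_range = "10.0.0.0/16"          # node addresses

  secondary_ip_range {
    range_name    = "cluster-range"
    ip_cidr_range = "10.2.0.0/16"        # cluster (pod) addresses
  }

  secondary_ip_range {
    range_name    = "services-range"
    ip_cidr_range = "10.3.0.0/16"        # service addresses
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;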

&lt;p&gt;Now that we understand the basic building blocks of our network, there are a couple more details to grasp before the whole makes sense. The nodes in our cluster can communicate with each other on the subnet we just discussed, but what about traffic to and from the outside world? After all, we need not only to accept incoming connections, but also to download container images from the web. Enter Cloud NAT:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Networking -&amp;gt; Network Services -&amp;gt; Cloud NAT&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Part of our router configuration, Cloud NAT grants our VM instances Internet connectivity without external IP addresses. This allows for a secure way of provisioning our &lt;em&gt;kubernetes&lt;/em&gt; nodes, as we can download container images through NAT without exposing the machines to the public Internet.&lt;br&gt;
In our &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/network.tf#L40"&gt;definition&lt;/a&gt;, we set the router to use automatically allocated addresses and to operate only on the subnetwork we set up earlier.&lt;/p&gt;
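
&lt;p&gt;As a rough sketch, a Cloud NAT setup of this kind looks something like the following in Terraform. The names are illustrative assumptions; the actual definition lives in network.tf:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "google_compute_router" "router" {
  name    = "gke-router"
  network = google_compute_network.gke-network.name
}

resource "google_compute_router_nat" "nat" {
  name                               = "gke-nat"
  router                             = google_compute_router.router.name
  nat_ip_allocate_option             = "AUTO_ONLY"              # automatically allocated NAT addresses
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS"    # NAT only our own subnetwork

  subnetwork {
    name                    = google_compute_subnetwork.gke.id
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;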

&lt;p&gt;OK, our NAT gives us outbound connectivity, but we'll need an inbound address for our cheap-o load balancer / ingress / certificate manager all-in-one contraption, &lt;strong&gt;traefik&lt;/strong&gt;. We'll talk about the application in a while, but first let's make sure that our external static IP addresses are in order:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Networking -&amp;gt; VPC network -&amp;gt; External IP addresses&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There should be two addresses on the list: an automatically generated one in use by our NAT, plus another, currently unused address named static-ingress. The latter is crucial for our cluster to accept connections without an external load balancer, since it lets us route traffic to our ingress node via a static IP.&lt;br&gt;
We'll be running an application called kubeip in our cluster to take care of assigning the static address to our ingress node; we'll discuss it in a short while.&lt;/p&gt;
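
&lt;p&gt;The reserved address itself is a plain Terraform resource; a sketch might look like this. Note that this is an assumption rather than the repository's exact code, and that labels on addresses (which kubeip matches on) may require the google-beta provider depending on provider version:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "google_compute_address" "static_ingress" {
  name         = "static-ingress"
  address_type = "EXTERNAL"
  region       = "us-central1"   # assumed region

  labels = {
    kubeip = "static-ingress"    # kubeip assigns addresses carrying this label
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;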

&lt;p&gt;This is a good opportunity to take a look at our firewall settings:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Networking -&amp;gt; VPC network -&amp;gt; Firewall rules&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We have added a single custom rule, which lets inbound traffic through to our ingress node. Notice how we specify a target so that the rule matches only instances carrying the ingress-pool tag. After all, we only need HTTP(S) traffic to land on our internal load balancer (&lt;em&gt;traefik&lt;/em&gt;). The custom firewall rule is defined &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/network.tf#L76"&gt;here&lt;/a&gt;.&lt;/p&gt;
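
&lt;p&gt;For illustration, a firewall rule of this shape can be sketched in Terraform as follows (names here are made up; the real rule is in network.tf):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight terraform"&gt;&lt;code&gt;resource "google_compute_firewall" "ingress" {
  name    = "allow-http-https-ingress"
  network = google_compute_network.gke-network.name

  allow {
    protocol = "tcp"
    ports    = ["80", "443"]
  }

  source_ranges = ["0.0.0.0/0"]      # accept traffic from anywhere...
  target_tags   = ["ingress-pool"]   # ...but only on instances tagged ingress-pool
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;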

&lt;p&gt;Lest we forget, one more thing: we'll be using the CLI tool &lt;strong&gt;gcloud&lt;/strong&gt; to get our &lt;em&gt;kubernetes&lt;/em&gt; credentials up and running in the next step. Of course, &lt;em&gt;gcloud&lt;/em&gt; needs a configuration of its own, so let's get that over with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Answer the questions truthfully and you shall be rewarded with a good gcloud config.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes
&lt;/h2&gt;

&lt;p&gt;Our cloud infrastructure setup is now done and we're ready to run some applications in the cluster. In this tutorial, we'll be using &lt;strong&gt;kubectl&lt;/strong&gt; to manage our &lt;em&gt;kubernetes&lt;/em&gt; cluster. To access the cluster on GCP, kubectl needs a valid config, which we can quickly fetch by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud container clusters get-credentials &amp;lt;cluster&amp;gt; &lt;span class="nt"&gt;--region&lt;/span&gt; &amp;lt;region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Aggressive optimizations
&lt;/h3&gt;

&lt;p&gt;Disclaimer: I don't recommend doing any of the things I've done in this section. Feel free to crank up the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L32"&gt;node pool machine types&lt;/a&gt; to something beefier (such as g1-small) in favor of keeping logging and metrics alive. At the time of writing, I had to make some rather aggressive optimizations to the cluster to run everything on two micro instances. We did mention being cheap, didn't we?&lt;/p&gt;

&lt;p&gt;Realizing that it's probably not a good idea to disable logging, we have &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L29"&gt;disabled logging&lt;/a&gt; on GCP. Now that we're up to speed, why don't we go ahead and turn off &lt;em&gt;kubernetes&lt;/em&gt; metrics as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 deployment/metrics-server-v0.3.1 &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That's over 100MB of memory saved on our nodes, at the expense of no longer knowing the total memory and CPU consumption. Sounds like a fair deal to me!&lt;br&gt;
We'll scale the kube-dns deployments down as well, since running multiple DNS replicas in our tiny cluster seems like overkill:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 deployment/kube-dns-autoscaler &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-system
kubectl scale &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 deployment/kube-dns &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;kubernetes&lt;/em&gt; default backend can go too, since we'll be using &lt;strong&gt;nginx&lt;/strong&gt; for that purpose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl scale &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 deployment/l7-default-backend &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;At this point I realized that the instance spun up from &lt;strong&gt;web-pool&lt;/strong&gt; was stuck at "ContainerCreating", with all the &lt;em&gt;kubernetes&lt;/em&gt; deployments I had just disabled still running, so I deleted the instance to give it a fresh start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud compute instances list
gcloud compute instances delete &amp;lt;name of the web-pool instance&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;After a few minutes, GCP had spun up a new instance from the &lt;em&gt;web-pool&lt;/em&gt; node pool, this time without the metrics server or default backend, and with only one DNS deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployments
&lt;/h3&gt;

&lt;p&gt;The cluster we're about to launch has three deployments: &lt;em&gt;nginx&lt;/em&gt; for serving web content, &lt;strong&gt;kubeip&lt;/strong&gt; for keeping our ingress node reachable, and &lt;em&gt;traefik&lt;/em&gt;, which serves a dual purpose: routing incoming connections to &lt;em&gt;nginx&lt;/em&gt; and handling SSL. We'll discuss each deployment next.&lt;/p&gt;

&lt;h3&gt;
  
  
  nginx-web
&lt;/h3&gt;

&lt;p&gt;Incoming HTTP(S) traffic in our cluster is redirected to the &lt;em&gt;nginx&lt;/em&gt; server, which we use as our web backend. Put simply, in &lt;em&gt;kubernetes&lt;/em&gt; terms we're going to &lt;strong&gt;deploy&lt;/strong&gt; a container image within a &lt;strong&gt;namespace&lt;/strong&gt; and send traffic to it through a &lt;strong&gt;service&lt;/strong&gt;. We'll create the namespace first. Navigate to &lt;code&gt;k8s/nginx-web/&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; namespace.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Pretty straightforward so far. The namespace we just created is defined &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/nginx-web/namespace.yaml#L1"&gt;here&lt;/a&gt;. Next up is the deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; deployment.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see from the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/nginx-web/deployment.yaml#L5"&gt;definition&lt;/a&gt;, we want our deployment to run in the namespace &lt;code&gt;nginx-web&lt;/code&gt;. We need the container to run on a virtual machine spun up from the node pool &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L46"&gt;web-pool&lt;/a&gt;, hence the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L46"&gt;nodeSelector&lt;/a&gt; parameter. We're doing this because we want to run everything &lt;em&gt;except&lt;/em&gt; the load balancer on a preemptible VM, cutting costs while keeping the ingress node itself as available as possible.&lt;/p&gt;

&lt;p&gt;Moving on, the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/nginx-web/deployment.yaml#L23"&gt;container section&lt;/a&gt; defines the Docker image we want to run from our private Google Container Registry (GCR) repository. Below that, we open ports 80 and 443 for traffic and set up a health check (liveness probe) for our container. The cluster will periodically GET the container at the endpoint &lt;em&gt;/health&lt;/em&gt; and force a restart if it doesn't receive a 200 OK response within the given time. The readiness probe is much the same, but tells the cluster when the container is ready to start accepting connections after initialization.&lt;/p&gt;

&lt;p&gt;We won't dive too deep into Docker in this tutorial, but we have included a basic nginx:alpine container with placeholder web content. We'll need to upload the container image to GCR for &lt;em&gt;kubernetes&lt;/em&gt; to use it, as per the deployment we just created. Navigate to &lt;code&gt;docker/nginx-alpine&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; eu.gcr.io/&amp;lt;project&amp;gt;/nginx-web &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This builds the image and tags it appropriately for use in our cluster. We need Docker to authenticate with GCP, so let's register gcloud as Docker's credential helper by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud auth configure-docker
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To push the image into our registry, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push eu.gcr.io/&amp;lt;project&amp;gt;/nginx-web
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We can check that everything went fine with the deployment by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get event &lt;span class="nt"&gt;--namespace&lt;/span&gt; nginx-web
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;LAST SEEN  TYPE      REASON              KIND   MESSAGE
1m         Normal    Pulling             Pod    pulling image &lt;span class="s2"&gt;"eu.gcr.io/gke-tutorial-xxxxxx/nginx-web:latest"&lt;/span&gt;
1m         Normal    Pulled              Pod    Successfully pulled image &lt;span class="s2"&gt;"eu.gcr.io/gke-tutorial-xxxxxx/nginx-web:latest"&lt;/span&gt;
1m         Normal    Created             Pod    Created container
1m         Normal    Started             Pod    Started container
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We now have an &lt;em&gt;nginx&lt;/em&gt; container running in the right place, but we still need to route traffic to it within the cluster. This is done by creating a &lt;em&gt;service&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; service.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/nginx-web/service.yaml#L2"&gt;service definition&lt;/a&gt; is minimal: We simply route incoming traffic to applications that match the &lt;em&gt;selector&lt;/em&gt; &lt;code&gt;nginx-web&lt;/code&gt;. In other words, traffic that gets sent to this service on ports 80 and 443 will get directed to pods running our web backend.&lt;/p&gt;

&lt;h3&gt;
  
  
  kubeIP
&lt;/h3&gt;

&lt;p&gt;Working in a cloud environment, we cannot trust our virtual machines to stay up indefinitely. On the contrary, we embrace this by running our web server on a &lt;em&gt;preemptible&lt;/em&gt; node. Preemptible nodes are cheaper to run, as long as we accept that they go down for a period of time at least once a day.&lt;br&gt;
We could easily ensure higher availability in our cluster by simply scaling up the number of nodes, but for the sake of simplicity we'll stick to one of each type, as defined by our node pools &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L33"&gt;ingress-pool&lt;/a&gt; and &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L46"&gt;web-pool&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A node pool is a set of instructions on how many and what type of instances we should have running in our cluster at any given time. We'll be running &lt;em&gt;traefik&lt;/em&gt; on a node created from &lt;em&gt;ingress-pool&lt;/em&gt; and the rest of our applications run on nodes created from &lt;em&gt;web-pool&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Even though the nodes from &lt;em&gt;ingress-pool&lt;/em&gt; are not preemptible, they might still restart at some point. Because our cheap-o cluster doesn't use an external load balancer (which is expen$ive), we need another way to make sure that our ingress node always has the same IP for connectivity.&lt;br&gt;
We solve this by creating a &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/network.tf#L63"&gt;static IP address&lt;/a&gt; and using &lt;em&gt;kubeip&lt;/em&gt; to bind that address to our ingress node whenever necessary.&lt;/p&gt;

&lt;p&gt;Let's create the deployment for &lt;em&gt;kubeip&lt;/em&gt; by navigating to &lt;code&gt;k8s/kubeip&lt;/code&gt; and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; deployment.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We define &lt;code&gt;kube-system&lt;/code&gt; as the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/kubeip/deployment.yaml#L9"&gt;target namespace&lt;/a&gt; for &lt;em&gt;kubeip&lt;/em&gt;, since we want it to communicate directly with the &lt;em&gt;kubernetes&lt;/em&gt; master and find out when a newly created node needs a static address. Using a &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/kubeip/deployment.yaml#L21"&gt;nodeSelector&lt;/a&gt;, we force &lt;em&gt;kubeip&lt;/em&gt; to deploy on a &lt;em&gt;web-pool&lt;/em&gt; node, just like we did with &lt;em&gt;nginx&lt;/em&gt; earlier.&lt;/p&gt;

&lt;p&gt;Next in the config we define a bunch of environment variables, which we bind to values in a &lt;em&gt;ConfigMap&lt;/em&gt;. We instruct our deployment to fetch GCP service account credentials from a &lt;em&gt;kubernetes&lt;/em&gt; &lt;em&gt;secret&lt;/em&gt;. Through the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/kubeip/deployment.yaml#L70"&gt;service account&lt;/a&gt;, &lt;em&gt;kubeip&lt;/em&gt; can have the required access rights to make changes (assign IPs) in GCP.&lt;/p&gt;

&lt;p&gt;We created a GCP &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/iam.tf#L42"&gt;service account for kubeip&lt;/a&gt; as part of our Terraform process. Now we need to extract its credentials, like we did with our main service account at the beginning of this tutorial. For added variety, let's use the command line this time. From the root of the project, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud iam service-accounts list
gcloud iam service-accounts keys create keys/kubeip-key.json &lt;span class="nt"&gt;--iam-account&lt;/span&gt; &amp;lt;kubeip service-account &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now that we have saved the key, we'll store it in the cluster as a &lt;em&gt;kubernetes&lt;/em&gt; secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create secret generic kubeip-key &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;keys/kubeip-key.json &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We have created a GCP service account for &lt;em&gt;kubeip&lt;/em&gt; and configured &lt;em&gt;kubeip&lt;/em&gt; to access it via the &lt;em&gt;kubernetes&lt;/em&gt; secret. We will still need a &lt;em&gt;kubernetes service account&lt;/em&gt; to access information about the nodes in the cluster. Let's do that now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; serviceaccount.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We define a (&lt;em&gt;kubernetes&lt;/em&gt;) &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/kubeip/serviceaccount.yaml#L3"&gt;ServiceAccount&lt;/a&gt; and below it the &lt;em&gt;ClusterRole&lt;/em&gt; and &lt;em&gt;ClusterRoleBinding&lt;/em&gt; resources, which define what our service account is allowed to do and where.&lt;/p&gt;

&lt;p&gt;Next, we need to create the &lt;em&gt;ConfigMap&lt;/em&gt; for the deployment of &lt;em&gt;kubeip&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; configmap.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/kubeip/configmap.yaml#L2"&gt;config&lt;/a&gt;, we set &lt;em&gt;kubeip&lt;/em&gt; to run in &lt;code&gt;web-pool&lt;/code&gt; and watch for instances spun up from &lt;code&gt;ingress-pool&lt;/code&gt;. When &lt;em&gt;kubeip&lt;/em&gt; detects such an instance, it checks whether our reserve has an unassigned IP address with the &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/gke/network.tf#L70"&gt;label&lt;/a&gt; &lt;em&gt;kubeip&lt;/em&gt; and the value &lt;em&gt;static-ingress&lt;/em&gt;, and assigns that address to the instance. We have restricted &lt;code&gt;ingress-pool&lt;/code&gt; to a single node, so we only need a single static IP address in reserve.&lt;/p&gt;

&lt;h3&gt;
  
  
  traefik
&lt;/h3&gt;

&lt;p&gt;External load balancers are very useful in keeping your web service responsive under high load. They are also prohibitively expensive for routing traffic to that single pod in your personal cluster, so we're going to make do without one.&lt;/p&gt;

&lt;p&gt;In our tutorial cluster, we dedicate a single node to hosting &lt;em&gt;traefik&lt;/em&gt;, which we configure to route traffic to our web backend (an &lt;em&gt;nginx&lt;/em&gt; server). &lt;em&gt;Traefik&lt;/em&gt; can also fetch SSL certificates from resolvers such as &lt;code&gt;letsencrypt&lt;/code&gt; to secure our HTTPS traffic. We're not going to cover procuring a domain name and setting up DNS in this tutorial, but, for reference, I have left everything required for setting up a DNS challenge commented out in the code.&lt;/p&gt;

&lt;p&gt;Let's create a namespace and a service account for &lt;em&gt;traefik&lt;/em&gt;. Navigate to &lt;code&gt;k8s/traefik&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; namespace.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;





&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; serviceaccount.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, we'll create the deployment and take a look at what we've done so far:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; deployment.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Using a &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/traefik/deployment.yaml#L23"&gt;nodeSelector&lt;/a&gt; once again, we specify that we want &lt;em&gt;traefik&lt;/em&gt; to run on a machine that belongs to &lt;code&gt;ingress-pool&lt;/code&gt;, which means that in our cluster, &lt;em&gt;traefik&lt;/em&gt; will sit on a different machine than &lt;em&gt;kubeip&lt;/em&gt; and &lt;em&gt;nginx&lt;/em&gt;. The reasoning is that our two machines are unlikely to go down simultaneously. If &lt;code&gt;web-pool&lt;/code&gt; goes down and is restarted, that's no problem: &lt;em&gt;traefik&lt;/em&gt; will find it in the cluster and resume routing connections normally.&lt;br&gt;
If &lt;code&gt;ingress-pool&lt;/code&gt; went down, the situation would be more severe, since we need our external IP bound to that machine. How else would our clients land on our web backend? Remember, we don't have an external load balancer...&lt;/p&gt;

&lt;p&gt;Luckily, we have &lt;em&gt;kubeip&lt;/em&gt; which will detect the recently rebooted &lt;code&gt;ingress-pool&lt;/code&gt; machine and assign our external IP back to it in no time. Crisis averted!&lt;/p&gt;

&lt;p&gt;There are a couple of key things in our &lt;em&gt;traefik&lt;/em&gt; deployment that set it apart from our other deployments. The first is &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/traefik/deployment.yaml#L22"&gt;hostNetwork&lt;/a&gt;, which we need so that &lt;em&gt;traefik&lt;/em&gt; can listen on the network interfaces of its host machine.&lt;br&gt;
Secondly, we define a &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/traefik/deployment.yaml#L25"&gt;toleration&lt;/a&gt;, because we have &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/cluster.tf#L63"&gt;tainted&lt;/a&gt; the host node pool. Since our &lt;em&gt;traefik&lt;/em&gt; deployment is the only one with this toleration, we can rest assured that no other application gets scheduled on &lt;code&gt;ingress-pool&lt;/code&gt;.&lt;/p&gt;
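&lt;p&gt;Put together, the scheduling-related part of the pod spec looks roughly like this (a sketch; the label and taint keys here are illustrative, the real ones are in the linked deployment and Terraform files):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;spec:
  hostNetwork: true          # listen directly on the host's interfaces
  nodeSelector:
    node-pool: ingress-pool  # schedule only onto the ingress machine
  tolerations:
    - key: node-pool         # tolerate the taint on ingress-pool...
      operator: Equal
      value: ingress-pool
      effect: NoSchedule     # ...which keeps every other pod off that node
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;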

&lt;p&gt;Finally, we give &lt;em&gt;traefik&lt;/em&gt; some &lt;a href="https://github.com/nkoson/gke-tutorial/blob/master/k8s/traefik/deployment.yaml#L40"&gt;arguments&lt;/a&gt;: entry points for HTTP, HTTPS and health checks (ping in &lt;em&gt;traefik&lt;/em&gt; lingo). We also enable the &lt;em&gt;kubernetes&lt;/em&gt; provider, which lets us use &lt;em&gt;custom resources&lt;/em&gt;. Let's create them now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; resource.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can add &lt;em&gt;routes&lt;/em&gt; to &lt;em&gt;traefik&lt;/em&gt; using our new custom resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;--save-config&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; route.yaml
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The two routes now connect the "web" and "websecure" entry points (which we set up as arguments for &lt;em&gt;traefik&lt;/em&gt;) to our &lt;code&gt;nginx-web&lt;/code&gt; service. We should now see HTML content served by &lt;em&gt;nginx&lt;/em&gt; when we connect to our static IP address.&lt;/p&gt;
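&lt;p&gt;For reference, one such route looks roughly like the following, using &lt;em&gt;traefik&lt;/em&gt;'s &lt;code&gt;IngressRoute&lt;/code&gt; custom resource (a sketch; the match rule, namespace and port are illustrative, the real routes are in &lt;code&gt;route.yaml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nginx-web
  namespace: traefik
spec:
  entryPoints:
    - web                     # the HTTP entry point from traefik's arguments
  routes:
    - match: PathPrefix(`/`)  # send every request to the backend
      kind: Rule
      services:
        - name: nginx-web     # our nginx service
          port: 80
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;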

&lt;p&gt;Please enjoy your cluster-on-a-budget responsibly!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
